Modeling Common Ground for Multimodal Communication - 2018
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore a comprehensive lecture on modeling common ground for multimodal communication presented by James Pustejovsky from Brandeis University. Delve into the evolving landscape of human-computer interactions, focusing on cooperation in shared workspaces to achieve common goals. Examine a prototype system where people and avatars collaborate to build block world structures through language, gesture, vision, and action. Learn about the VoxML modeling language, which encodes objects with rich semantic typing and action affordances, enabling contextually salient inferences in a 3D simulation environment. Discover how this platform facilitates the study of computational issues in multimodal communication and the establishment of common ground in discourse. Gain insights from a walk-through of multimodal communication in a shared task, illustrating the practical applications of this research.
Syllabus
Modeling Common Ground for Multimodal Communication -- James Pustejovsky (Brandeis U) - 2018
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Artificial Intelligence for Robotics - Stanford University via Udacity
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent