
Modeling Common Ground for Multimodal Communication - 2018

Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube

Tags

Computational Linguistics Courses
Artificial Intelligence Courses
Computer Vision Courses
Gesture Recognition Courses

Course Description

Overview

Explore a comprehensive lecture on modeling common ground for multimodal communication presented by James Pustejovsky from Brandeis University. Delve into the evolving landscape of human-computer interactions, focusing on cooperation in shared workspaces to achieve common goals. Examine a prototype system where people and avatars collaborate to build block world structures through language, gesture, vision, and action. Learn about the VoxML modeling language, which encodes objects with rich semantic typing and action affordances, enabling contextually salient inferences in a 3D simulation environment. Discover how this platform facilitates the study of computational issues in multimodal communication and the establishment of common ground in discourse. Gain insights from a walk-through of multimodal communication in a shared task, illustrating the practical applications of this research.
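To make the notions of semantic typing and action affordances more concrete, the following is a loose, hypothetical Python sketch of how a VoxML-style entry for a block might be represented. The field names (lexical type, geometric head, habitats, affordances, embodiment) follow the spirit of VoxML's published structure, but the data layout and the helper function possible_actions are illustrative assumptions, not the actual VoxML markup or API.

# Illustrative sketch only: a VoxML-style "voxeme" for a block, written as a
# plain Python data structure. Not the actual VoxML markup.
block_voxeme = {
    "lex": {"pred": "block", "type": "physobj"},
    "type": {
        "head": "rectangular_prism",          # geometric head of the object
        "rotational_symmetry": ["X", "Y", "Z"],
    },
    "habitat": {
        # Configurations in which the object affords particular actions,
        # e.g. resting on a flat face with its top surface exposed.
        "intrinsic": ["top_surface_up"],
    },
    "affordances": [
        # Each affordance pairs a habitat condition with an action the
        # object supports when that habitat holds.
        {"habitat": "top_surface_up", "action": "put(x, on(this))"},
        {"habitat": "any", "action": "grasp(x, this)"},
    ],
    "embodiment": {"scale": "smaller_than_agent", "movable": True},
}

# An avatar or simulation agent could consult the affordance list to decide
# which actions are contextually possible in the current configuration.
def possible_actions(voxeme, current_habitat):
    return [a["action"] for a in voxeme["affordances"]
            if a["habitat"] in (current_habitat, "any")]

print(possible_actions(block_voxeme, "top_surface_up"))

In this sketch, the agent's available actions change with the object's habitat, which is the kind of contextually salient inference the lecture describes the 3D simulation platform supporting.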

Syllabus

Modeling Common Ground for Multimodal Communication -- James Pustejovsky (Brandeis U) - 2018


Taught by

Center for Language & Speech Processing (CLSP), JHU

Related Courses

Miracles of Human Language: An Introduction to Linguistics
Leiden University via Coursera
Language and Mind
Indian Institute of Technology Madras via Swayam
Text Analytics with Python
University of Canterbury via edX
Playing With Language
TED-Ed via YouTube
Computational Language: A New Kind of Science
World Science U