
Modeling Common Ground for Multimodal Communication - 2018

Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube

Tags

Computational Linguistics Courses
Artificial Intelligence Courses
Computer Vision Courses
Gesture Recognition Courses

Course Description

Overview

Explore a comprehensive lecture on modeling common ground for multimodal communication presented by James Pustejovsky from Brandeis University. Delve into the evolving landscape of human-computer interactions, focusing on cooperation in shared workspaces to achieve common goals. Examine a prototype system where people and avatars collaborate to build block world structures through language, gesture, vision, and action. Learn about the VoxML modeling language, which encodes objects with rich semantic typing and action affordances, enabling contextually salient inferences in a 3D simulation environment. Discover how this platform facilitates the study of computational issues in multimodal communication and the establishment of common ground in discourse. Gain insights from a walk-through of multimodal communication in a shared task, illustrating the practical applications of this research.
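To make the idea of objects carrying semantic types and action affordances more concrete, here is a minimal, hypothetical Python sketch of what a VoxML-style object entry might look like. The class and field names (VoxObject, semantic_type, habitats, affordances) are illustrative assumptions based only on the description above, not the actual VoxML schema presented in the lecture.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a VoxML-style object entry: rich semantic typing
# plus action affordances, as described in the lecture overview.
# Field names are illustrative, not the real VoxML attribute structure.

@dataclass
class VoxObject:
    lex: str                                              # lexical predicate, e.g. "block"
    semantic_type: str                                    # semantic/geometric head type
    habitats: List[str] = field(default_factory=list)     # placements the object supports
    affordances: List[str] = field(default_factory=list)  # actions the object affords in context

# A toy "block" entry for the shared block-world building task described in the talk.
block = VoxObject(
    lex="block",
    semantic_type="cube",
    habitats=["resting on a flat surface", "stacked on another block"],
    affordances=["grasp(agent, block)", "put(block, on(location))"],
)

if __name__ == "__main__":
    # An avatar consulting such an entry could draw a contextually salient
    # inference, e.g. that a graspable block can be placed on another block.
    print(f"{block.lex}: affords {', '.join(block.affordances)}")
```

The sketch only illustrates the general pattern of encoding objects with typed properties and affordances; the actual VoxML language and simulation platform are covered in the lecture itself.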

Syllabus

Modeling Common Ground for Multimodal Communication -- James Pustejovsky (Brandeis U) - 2018


Taught by

Center for Language & Speech Processing (CLSP), JHU

Related Courses

Como criar aplicativos com múltiplas telas para iPhone e iPad (How to Create Multi-Screen Apps for iPhone and iPad)
Universidade Estadual de Campinas via Coursera
AR Development Techniques 02: Lighting and Physics
LinkedIn Learning
Introduction to Sprite Kit with Swift 3
Udemy
Computer Vision Projects
YouTube
OpenCV and Python Tutorial
YouTube