YoVDO

Teaching Language to Deaf Infants with a Robot and a Virtual Human

Offered By: Association for Computing Machinery (ACM) via YouTube

Tags

ACM SIGCHI Courses
Robotics Courses
Interaction Design Courses

Course Description

Overview

Explore an innovative approach to teaching language to deaf infants using a multi-agent system that combines a robot and a virtual human. Delve into the challenges of providing sufficient language exposure during critical developmental periods, especially for deaf infants born to hearing parents. Examine the design and implementation of an integrated system engineered to augment language exposure for 6- to 12-month-old infants. Discover how the team addressed the complexities of human-machine design for infants, considering the limitations of screen-based media and robots in language learning. Learn about the system's ability to provide visual language and facilitate socially contingent, human-like conversational exchange. Analyze case studies demonstrating successful engagement with the technology by both deaf and hearing infants. Gain insights into the interdisciplinary team's combined goals, system design, robot and virtual human components, and evaluation process. Understand the design lessons learned and their potential implications for future research in accessible and inclusive education for infants with hearing impairments.

Syllabus

Intro
Minimal Language Input In Deaf Infants
Design Challenge
Interdisciplinary Team
Combined Goals
System Design
Robot Design
Virtual Human Design
System Evaluation
Albert
Perception System
Interaction Design
Bella
Celia
Design Lessons
Conclusions

Taught by

ACM SIGCHI

Related Courses

Human Computer Interaction
Independent
Human-Computer Interaction Design
University of California, San Diego via Coursera
Interaction Techniques
University of California, San Diego via Coursera
From Point of View to Prototype
University of California, San Diego via Coursera
Prototyping Interaction
Amsterdam University of Applied Sciences via iversity