Teaching Language to Deaf Infants with a Robot and a Virtual Human
Offered By: Association for Computing Machinery (ACM) via YouTube
Course Description
Overview
Explore an innovative approach to teaching language to deaf infants using a multi-agent system that combines a robot and a virtual human. Delve into the challenges of providing sufficient language exposure during critical developmental periods, especially for deaf infants born to hearing parents. Examine the design and implementation of an integrated system engineered to augment language exposure for 6- to 12-month-old infants. Discover how the team addressed the complexities of human-machine design for infants, considering the limitations of screen-based media and robots in language learning. Learn about the system's ability to provide visual language and facilitate socially contingent, human-like conversational exchange. Analyze case studies demonstrating successful engagement with the technology by both deaf and hearing infants. Gain insights into the interdisciplinary team's combined goals, system design, robot and virtual human components, and evaluation process. Understand the design lessons learned and their potential implications for future research in accessible and inclusive education for infants with hearing impairments.
Syllabus
Intro
Minimal Language Input In Deaf Infants
Design Challenge
Interdisciplinary Team
Combined Goals
System Design
Robot Design
Virtual Human Design
System Evaluation
Albert
Perception System
Interaction Design
Bella
Celia
Design Lessons
Conclusions
Taught by
ACM SIGCHI
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Artificial Intelligence for Robotics - Stanford University via Udacity
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Control of Mobile Robots - Georgia Institute of Technology via Coursera
Artificial Intelligence Planning - University of Edinburgh via Coursera