Bridging the Data Gap Between LLMs and Children - Understanding Higher-Level Intelligence
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore the "data gap" between Large Language Models (LLMs) and human children in this 48-minute talk by Michael Frank from Stanford University. Delve into the reasons behind LLMs requiring 3-5 orders of magnitude more training data than children, examining perspectives such as innate knowledge, active and social learning, multimodal information, and evaluation differences. Gain insights into new data on multimodal input richness and the consequences of evaluation disparities. Investigate how the cognitive science concept of competence/performance distinctions applies to LLMs, enhancing your understanding of higher-level intelligence from AI, psychology, and neuroscience perspectives.
Syllabus
Bridging the data gap between LLMs and children
Taught by
Simons Institute
Related Courses
Introduction to Artificial Intelligence (Stanford University via Udacity)
Probabilistic Graphical Models 1: Representation (Stanford University via Coursera)
Artificial Intelligence for Robotics (Stanford University via Udacity)
Computer Vision: The Fundamentals (University of California, Berkeley via Coursera)
Learning from Data (Introductory Machine Learning course) (California Institute of Technology via Independent)