A Bayesian View of Inductive Learning in Humans and Machines - 2004
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore a comprehensive lecture on the Bayesian approach to inductive learning in humans and machines, delivered by Josh Tenenbaum of MIT in 2004 at the Center for Language & Speech Processing (CLSP), JHU. Delve into the world of human cognition and machine learning as Tenenbaum explains how people, even young children, can make successful generalizations from limited evidence. Discover the role of domain-general rational Bayesian inferences, constrained by implicit theories, in task domains such as biological property generalization and word meaning acquisition. Examine how domain theories interact with everyday inductive leaps, and learn how these theories generate hypothesis spaces for Bayesian generalization. Investigate the potential for acquiring these theories through higher-order statistical inferences. Finally, uncover how this approach to modeling human learning inspires new machine learning techniques for semi-supervised learning, enabling generalization from minimal labeled examples with the aid of large unlabeled datasets. This 1-hour, 26-minute talk offers valuable insights for researchers, students, and professionals interested in cognitive science, artificial intelligence, and machine learning.
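To make the idea of "hypothesis spaces for Bayesian generalization" concrete, the sketch below illustrates the general recipe: score each hypothesis by prior times likelihood, then generalize to a new item by summing the posterior mass of hypotheses that contain it. The toy number-concept domain, hypothesis set, uniform prior, and the size-principle likelihood (1/|h|^n for n consistent examples) are illustrative assumptions in the spirit of this line of work, not the specific models presented in the talk.

```python
# Illustrative sketch (assumed toy domain, not the talk's exact models):
# Bayesian concept generalization over a small hypothesis space of number
# concepts, using the size-principle likelihood P(X | h) = 1 / |h|^n.

def bayesian_generalization(examples, hypotheses, prior):
    """Return P(y in concept | examples) for each candidate y in 1..100."""
    n = len(examples)
    posterior = {}
    for name, h in hypotheses.items():
        if all(x in h for x in examples):                   # hypothesis consistent with the data
            posterior[name] = prior[name] / (len(h) ** n)   # size-principle likelihood x prior
        else:
            posterior[name] = 0.0
    z = sum(posterior.values())
    posterior = {name: p / z for name, p in posterior.items()}

    # Generalization: sum the posterior mass of all hypotheses containing each candidate.
    return {y: sum(p for name, p in posterior.items() if y in hypotheses[name])
            for y in range(1, 101)}

# Hypothetical hypothesis space over the numbers 1..100.
hypotheses = {
    "even": set(range(2, 101, 2)),
    "multiples_of_10": set(range(10, 101, 10)),
    "powers_of_2": {2, 4, 8, 16, 32, 64},
    "all": set(range(1, 101)),
}
prior = {name: 1.0 / len(hypotheses) for name in hypotheses}

gen = bayesian_generalization([16, 8, 2, 64], hypotheses, prior)
print(round(gen[32], 3), round(gen[20], 3))  # 32 scores far higher than 20
```

With the examples 16, 8, 2, and 64, the small "powers of 2" hypothesis dominates the posterior, so 32 receives nearly all the generalization probability while 20 receives almost none; this is the sharpening of generalization with additional consistent examples that the size principle produces.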
Syllabus
A Bayesian view of inductive learning in humans and machines – Josh Tenenbaum (MIT) - 2004
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Natural Language Processing - Columbia University via Coursera
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent