Building Machines that Discover Generalizable, Interpretable Knowledge
Offered By: Paul G. Allen School via YouTube
Course Description
Overview
Explore a cutting-edge lecture on program induction and its potential to revolutionize artificial intelligence. Delve into Kevin Ellis's presentation on "Building Machines that Discover Generalizable, Interpretable Knowledge," which examines how program induction systems can represent knowledge as programs and learn by synthesizing code. Discover case studies in vision, natural language, and learning-to-learn that demonstrate machines capable of acquiring new knowledge from modest experience, strongly generalizing that knowledge, representing it interpretably, and applying it to diverse problems. Learn about a novel neuro-symbolic algorithm for Bayesian program synthesis that integrates program synthesis technologies with symbolic, probabilistic, and neural AI traditions. Gain insights from Ellis, a final-year MIT graduate student, on the future of AI and its potential to mimic human-like learning and problem-solving abilities across various domains.
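To make the idea of representing knowledge as programs and learning by synthesizing code more concrete, below is a minimal, hypothetical sketch of Bayesian program induction over a toy DSL. It is not the neuro-symbolic algorithm presented in the talk; the primitive names, the description-length prior, and the all-or-nothing likelihood are all illustrative assumptions.

```python
# Toy Bayesian program induction: NOT the lecture's algorithm, just a sketch.
# Knowledge is a program in a tiny DSL; "learning" is searching for the program
# that best explains input-output examples, scored by prior * likelihood.
from itertools import product

# Hypothetical DSL: each primitive maps an integer to an integer.
PRIMITIVES = {
    "inc":    lambda x: x + 1,
    "dec":    lambda x: x - 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def run(program, x):
    """Apply a sequence of primitives (a 'program') left to right."""
    for op in program:
        x = PRIMITIVES[op](x)
    return x

def log_prior(program):
    """Description-length prior: longer programs are less probable."""
    return -len(program)

def log_likelihood(program, examples):
    """All-or-nothing likelihood: the program must reproduce every example."""
    return 0.0 if all(run(program, x) == y for x, y in examples) else float("-inf")

def synthesize(examples, max_depth=3):
    """Enumerate programs up to max_depth; return the highest-scoring (MAP) one."""
    best, best_score = None, float("-inf")
    for depth in range(1, max_depth + 1):
        for program in product(PRIMITIVES, repeat=depth):
            score = log_prior(program) + log_likelihood(program, examples)
            if score > best_score:
                best, best_score = program, score
    return best

if __name__ == "__main__":
    # Induce f(x) = (x + 1) * 2 from a few examples; the shortest consistent
    # program is preferred and generalizes to unseen inputs.
    examples = [(1, 4), (2, 6), (5, 12)]
    print(synthesize(examples))  # -> ('inc', 'double')
```

The sketch illustrates why short programs generalize: among all candidate programs consistent with the data, the prior favors the most compact one, which here recovers the underlying rule from only three examples.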
Syllabus
Allen School Colloquium: Kevin Ellis (MIT)
Taught by
Paul G. Allen School
Related Courses
Introduction to Artificial Intelligence (Stanford University via Udacity)
Probabilistic Graphical Models 1: Representation (Stanford University via Coursera)
Artificial Intelligence for Robotics (Stanford University via Udacity)
Computer Vision: The Fundamentals (University of California, Berkeley via Coursera)
Learning from Data (Introductory Machine Learning course) (California Institute of Technology via Independent)