Could a Purely Self-Supervised Foundation Model Achieve Grounded Language Understanding?
Offered By: Santa Fe Institute via YouTube
Course Description
Overview
Explore a thought-provoking lecture by Stanford University Professor Chris Potts examining whether purely self-supervised foundation models can achieve grounded language understanding. Delve into topics including classical AI approaches, brain-mimicking systems, conceptions of semantics, and the challenges of behavioral testing for foundation models. Analyze the metaphysics and epistemology of understanding, and discover findings on causal abstraction in large networks. Gain insights into cutting-edge research and its implications for language comprehension and the development of artificial intelligence.
Syllabus
Intro
Could a purely self-supervised Foundation Model achieve grounded language understanding?
Could a Machine Think? Classical AI is unlikely to yield conscious machines; systems that mimic the brain might
A quick summary of "Could a machine think?"
Foundation Models (FMs)
Self-supervision
Two paths to world-class AI chess?
Conceptions of semantics
Bender & Koller 2020: Symbol streams lack crucial information
Multi-modal streams
Metaphysics and epistemology of understanding
Behavioral testing: Tricky with Foundation Models
Internalism at work: Causal abstraction analysis
Findings of causal abstraction in large networks
Taught by
Santa Fe Institute
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Artificial Intelligence for Robotics - Stanford University via Udacity
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent