Could a Purely Self-Supervised Foundation Model Achieve Grounded Language Understanding?
Offered By: Santa Fe Institute via YouTube
Course Description
Overview
Explore a thought-provoking lecture by Stanford University Professor Chris Potts examining whether purely self-supervised foundation models can achieve grounded language understanding. Delve into topics including classical AI approaches, brain-mimicking systems, conceptions of semantics, and the challenges of behavioral testing for foundation models. Analyze the metaphysics and epistemology of understanding, and review findings on causal abstraction in large networks. Gain insights into cutting-edge AI research and its implications for language comprehension and the development of artificial intelligence.
Syllabus
Intro
Could a purely self-supervised Foundation Model achieve grounded language understanding?
Could a Machine Think? Classical AI is unlikely to yield conscious machines; systems that mimic the brain might
A quick summary of "Could a machine think?"
Foundation Models (FMs)
Self-supervision
Two paths to world-class AI chess?
Conceptions of semantics
Bender & Koller 2020: Symbol streams lack crucial information
Multi-modal streams
Metaphysics and epistemology of understanding
Behavioral testing: Tricky with Foundation Models
Internalism at work: Causal abstraction analysis
Findings of causal abstraction in large networks
Taught by
Santa Fe Institute
Related Courses
中级汉语语法 | Intermediate Chinese Grammar - Peking University via edX
Miracles of Human Language: An Introduction to Linguistics - Leiden University via Coursera
Introduction to Natural Language Processing - University of Michigan via Coursera
Linguaggio, identità di genere e lingua italiana (Language, Gender Identity, and the Italian Language) - Ca' Foscari University of Venice via EduOpen
Natural Language Processing - Indian Institute of Technology, Kharagpur via Swayam