Could a Purely Self-Supervised Foundation Model Achieve Grounded Language Understanding?
Offered By: Santa Fe Institute via YouTube
Course Description
Overview
Explore a thought-provoking lecture by Stanford University Professor Chris Potts examining whether purely self-supervised foundation models can achieve grounded language understanding. Delve into topics including classical AI approaches, brain-mimicking systems, conceptions of semantics, and the challenges of behavioral testing for foundation models. Analyze the metaphysics and epistemology of understanding, and discover findings on causal abstraction in large networks. Gain insights into cutting-edge AI research and its implications for language comprehension.
Syllabus
Intro
Could a purely self-supervised Foundation Model achieve grounded language understanding?
Could a Machine Think? Classical AI is unlikely to yield conscious machines; systems that mimic the brain might
A quick summary of "Could a machine think?"
Foundation Models (FMs)
Self-supervision
Two paths to world-class AI chess?
Conceptions of semantics
Bender & Koller 2020: Symbol streams lack crucial information
Multi-modal streams
Metaphysics and epistemology of understanding
Behavioral testing: Tricky with Foundation Models
Internalism at work: Causal abstraction analysis
Findings of causal abstraction in large networks
Taught by
Santa Fe Institute
Related Courses
Introduction to Philosophy
University of Edinburgh via Coursera
活用希臘哲學 (Understanding Greek Philosophy)
National Taiwan University via Coursera
哲学导论（中文版） (Introduction to Philosophy, Chinese Version)
University of Edinburgh via Coursera
Power and Responsibility: Doing Philosophy with Superheroes
Harvard University via edX
Introducción a Filosofía III (Introduction to Philosophy III)
Elbio Fernández Institute via Miríadax