MuZero - Mastering Atari, Go, Chess, and Shogi by Planning with a Learned Model
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore the groundbreaking MuZero algorithm in this 19-minute video lecture. Delve into how MuZero combines tree-based search with a learned model to achieve superhuman performance in complex, visually rich domains without prior knowledge of their underlying dynamics. Learn about its innovative approach of planning over learned latent states that capture only task-relevant information, rather than reconstructing future observations. Discover how MuZero achieves a new state of the art across 57 Atari games and matches AlphaZero's superhuman performance in Go, chess, and shogi without any knowledge of the game rules. Gain insights into the algorithm's potential for advancing planning-based reinforcement learning in new domains where accurate environment models are unavailable.
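To make the overview concrete, here is a minimal sketch of the three learned functions the lecture discusses (a representation, a dynamics, and a prediction function) and of how planning can roll forward entirely in latent space without consulting game rules. All names, dimensions, and the toy linear "networks" below are illustrative stand-ins, not MuZero's actual implementation or the video's code.

```python
# Toy illustration of MuZero's model decomposition; the linear maps are
# placeholders for trained neural networks, and greedy action selection
# stands in for the full Monte Carlo tree search.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, LATENT_DIM, NUM_ACTIONS = 8, 4, 3

# Representation h: observation -> initial latent state s_0
W_h = rng.normal(size=(LATENT_DIM, OBS_DIM))
def represent(observation):
    return np.tanh(W_h @ observation)

# Dynamics g: (latent state, action) -> (predicted reward, next latent state)
W_g = rng.normal(size=(LATENT_DIM, LATENT_DIM + NUM_ACTIONS))
w_r = rng.normal(size=LATENT_DIM + NUM_ACTIONS)
def dynamics(state, action):
    x = np.concatenate([state, np.eye(NUM_ACTIONS)[action]])
    return float(w_r @ x), np.tanh(W_g @ x)

# Prediction f: latent state -> (policy, value)
W_p = rng.normal(size=(NUM_ACTIONS, LATENT_DIM))
w_v = rng.normal(size=LATENT_DIM)
def predict(state):
    logits = W_p @ state
    policy = np.exp(logits) / np.exp(logits).sum()
    return policy, float(w_v @ state)

# Plan by rolling the learned model forward in latent space only;
# no environment simulator or rule set is queried during the rollout.
state = represent(rng.normal(size=OBS_DIM))
for step in range(3):
    policy, value = predict(state)
    action = int(np.argmax(policy))
    reward, state = dynamics(state, action)
    print(f"step {step}: action={action} reward={reward:+.3f} value={value:+.3f}")
```

The point of the decomposition is that the latent state is never forced to reconstruct pixels; it only needs to support accurate reward, policy, and value predictions along imagined rollouts.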
Syllabus
MuZero: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
Taught by
Yannic Kilcher
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Artificial Intelligence for Robotics - Stanford University via Udacity
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent