DeepMind's AlphaGo Zero and AlphaZero - RL Paper Explained
Offered By: Aleksa Gordić - The AI Epiphany via YouTube
Course Description
Overview
Dive into a comprehensive video lecture exploring DeepMind's groundbreaking AI agents, AlphaGo Zero and AlphaZero. Learn how these algorithms mastered Go, and later chess and shogi, through pure self-play, with no human knowledge beyond the rules of each game. Explore the inner workings of these systems, including their architecture, training process, and the knowledge they acquired. Understand key concepts like Monte Carlo Tree Search (MCTS), self-play training, and the impact of architectural choices. Discover how these agents surpassed human expertise, even uncovering new strategies in ancient games. Compare AlphaGo Zero with its predecessors and examine the innovations introduced in AlphaZero. Gain insights into the future of AI and its potential applications beyond game playing.
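For a concrete feel for the PUCT rule that guides move selection inside AlphaGo Zero's MCTS (and which the lecture recaps), here is a minimal, illustrative Python sketch. The names (Node, select_child, c_puct) and the toy numbers are assumptions for illustration, not code from the video; it only shows the published selection formula Q(s,a) + c_puct * P(s,a) * sqrt(N(s)) / (1 + N(s,a)).

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    prior: float                    # P(s, a) from the policy network
    visit_count: int = 0            # N(s, a)
    value_sum: float = 0.0          # sum of backed-up values
    children: dict = field(default_factory=dict)  # action -> Node

    def q_value(self) -> float:
        # Mean action value Q(s, a); treat unvisited nodes as 0.
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(node: Node, c_puct: float = 1.5):
    """Pick the child maximizing Q(s,a) + c_puct * P(s,a) * sqrt(N(s)) / (1 + N(s,a))."""
    total_visits = sum(child.visit_count for child in node.children.values())
    best_action, best_child, best_score = None, None, -math.inf
    for action, child in node.children.items():
        u = c_puct * child.prior * math.sqrt(total_visits) / (1 + child.visit_count)
        score = child.q_value() + u
        if score > best_score:
            best_action, best_child, best_score = action, child, score
    return best_action, best_child

# Toy example: a root with three candidate moves and made-up statistics.
root = Node(prior=1.0, children={
    "a": Node(prior=0.5, visit_count=10, value_sum=6.0),
    "b": Node(prior=0.3, visit_count=2, value_sum=1.5),
    "c": Node(prior=0.2),
})
print(select_child(root)[0])  # balances exploitation (Q) against the prior-weighted exploration bonus
```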
Syllabus
- AlphaGo lineage of agents
- Comparing AlphaGo Zero with AlphaGo
- High-level explanation of AlphaGo Zero inner workings
- MCTS recap
- Training details and curves
- Architecture impact
- Knowledge acquired
- Results
- Discovering joseki
- Human domain knowledge in AlphaGo Zero
- Pipeline overview
- Self-play thread explained
- Further details: PUCT recap, etc.
- AlphaZero: what's new?
Taught by
Aleksa Gordić - The AI Epiphany
Related Courses
- Computational Neuroscience - University of Washington via Coursera
- Reinforcement Learning - Brown University via Udacity
- Reinforcement Learning - Indian Institute of Technology Madras via Swayam
- FA17: Machine Learning - Georgia Institute of Technology via edX
- Introduction to Reinforcement Learning - Higher School of Economics via Coursera