Adversarial Search
Offered By: Udacity
Course Description
Overview
Learn how to search in multi-agent environments (including decision making in competitive environments) using the minimax theorem from game theory. Then build an agent that can play games better than any human.
Syllabus
- Introduction to Adversarial Search
- Extend classical search to adversarial domains, to build agents that make good decisions without any human intervention—such as the DeepMind AlphaGo agent.
- Search in Multiagent Domains
- Search in multi-agent domains, using the minimax theorem to solve adversarial problems and build agents that make better decisions than humans (a minimal minimax sketch follows the syllabus).
- Optimizing Minimax Search
- Examine the limitations of minimax search and introduce optimizations and changes that make it practical in more complex domains (the first sketch after the syllabus illustrates one common optimization, alpha-beta pruning).
- Build an Adversarial Game Playing Agent
- Build agents that make good decisions without any human intervention—such as the DeepMind AlphaGo agent.
- Extending Minimax Search
- Extend minimax search to support more than two players and non-deterministic domains (the second sketch after the syllabus shows expectimax, one common way to handle chance nodes).
- Additional Adversarial Search Topics
- Introduce Monte Carlo Tree Search, a highly successful search technique in game domains, along with a reading list for other advanced adversarial search topics (a minimal MCTS sketch follows the syllabus).
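The minimax and optimization modules above center on depth-limited search with pruning. The following is a minimal sketch of minimax with alpha-beta pruning, written against a toy take-away game (remove 1 to 3 stones; whoever takes the last stone wins) invented here for illustration; it is not the course's project domain or starter code.

```python
import math

# Minimal sketch: depth-limited minimax with alpha-beta pruning on a toy
# take-away game (remove 1-3 stones; whoever takes the last stone wins).
# The toy game and its rules are illustrative assumptions, not course code.

def moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

def alphabeta(stones, maximizing, depth, alpha=-math.inf, beta=math.inf):
    """Value of the position from the maximizing player's point of view."""
    if stones == 0:
        # The player who just moved took the last stone and won.
        return -1 if maximizing else 1
    if depth == 0:
        return 0                     # heuristic cutoff: treat as a draw
    if maximizing:
        value = -math.inf
        for n in moves(stones):
            value = max(value, alphabeta(stones - n, False, depth - 1, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:        # beta cutoff: the minimizer avoids this branch
                break
        return value
    else:
        value = math.inf
        for n in moves(stones):
            value = min(value, alphabeta(stones - n, True, depth - 1, alpha, beta))
            beta = min(beta, value)
            if beta <= alpha:        # alpha cutoff: the maximizer avoids this branch
                break
        return value

def best_move(stones, depth=10):
    return max(moves(stones),
               key=lambda n: alphabeta(stones - n, False, depth - 1))

if __name__ == "__main__":
    print(best_move(10))   # with 10 stones, taking 2 leaves the opponent a losing position
```

The cutoffs fire whenever one player already has a guaranteed better alternative elsewhere in the tree, which is what makes deeper searches practical than plain minimax allows.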
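For the non-deterministic domains mentioned in the "Extending Minimax Search" module, one common extension (assumed here, since the syllabus does not name a specific technique) is expectimax: chance nodes average over their outcomes instead of maximizing or minimizing. The tiny hand-built tree below is purely illustrative.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hedged sketch: expectimax evaluation on a tiny hand-built game tree.
# The node classes are illustrative assumptions, not a course API.

@dataclass
class Leaf:
    value: float

@dataclass
class Max:
    children: List[object]

@dataclass
class Min:
    children: List[object]

@dataclass
class Chance:
    outcomes: List[Tuple[float, object]]   # (probability, child) pairs

def expectimax(node):
    if isinstance(node, Leaf):
        return node.value
    if isinstance(node, Max):
        return max(expectimax(c) for c in node.children)
    if isinstance(node, Min):
        return min(expectimax(c) for c in node.children)
    # Chance node: probability-weighted average of successor values.
    return sum(p * expectimax(c) for p, c in node.outcomes)

if __name__ == "__main__":
    # MAX chooses between a safe leaf worth 3 and a 50/50 gamble worth 0 or 10.
    tree = Max([Leaf(3.0), Chance([(0.5, Leaf(0.0)), (0.5, Leaf(10.0))])])
    print(expectimax(tree))   # 5.0: the gamble's expected value beats 3
```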
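Monte Carlo Tree Search replaces exhaustive look-ahead with repeated random playouts guided by a selection rule such as UCB1. The sketch below runs UCT on the same toy take-away game as the first sketch; the game, the exploration constant, and the iteration count are assumptions for illustration, not the course's formulation.

```python
import math
import random

# Hedged sketch of Monte Carlo Tree Search (UCT) on a toy take-away game
# (remove 1-3 stones; taking the last stone wins).

def moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones, self.parent, self.move = stones, parent, move
        self.children = []
        self.untried = moves(stones)   # moves not yet expanded
        self.visits = 0
        self.wins = 0.0                # from the perspective of the player who made `move`

    def ucb_child(self, c=1.4):
        # UCB1: exploit high win rates, explore rarely visited children.
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(stones, iterations=3000):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes by UCB1.
        while not node.untried and node.children:
            node = node.ucb_child()
        # 2. Expansion: add one unexplored child, if any.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout; result is scored for the player
        #    who moved into `node`.
        if node.stones == 0:
            result = 1.0               # that move took the last stone
        else:
            s, to_move = node.stones, 0    # 0 = player to move at `node`
            while s:
                s -= random.choice(moves(s))
                if s:
                    to_move ^= 1
            result = 1.0 if to_move == 1 else 0.0
        # 4. Backpropagation: flip the perspective at each level.
        while node:
            node.visits += 1
            node.wins += result
            result = 1.0 - result
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

if __name__ == "__main__":
    print(mcts(10))   # tends to converge on taking 2, the optimal reply
```

Unlike minimax, this loop needs no evaluation function, only the ability to play random games to completion, which is one reason it scaled to Go.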
Taught by
Thad Starner
Related Courses
- GGP Course Videos (Stanford University via YouTube)
- AlphaGo - Mastering the Game of Go with Deep Neural Networks and Tree Search - RL Paper Explained (Aleksa Gordić - The AI Epiphany via YouTube)
- How Slot Machines Are Advancing the State of the Art in Computer Go AI (Churchill CompSci Talks via YouTube)
- Neural Nets for NLP 2019 - Advanced Search Algorithms (Graham Neubig via YouTube)
- CMU Neural Nets for NLP 2017 - Advanced Search Algorithms (Graham Neubig via YouTube)