Adversarial Search
Offered By: Udacity
Course Description
Overview
Learn how to search in multi-agent environments (including decision making in competitive environments) using the minimax theorem from game theory. Then build an agent that can play games better than any human.
Syllabus
- Introduction to Adversarial Search
- Extend classical search to adversarial domains, to build agents that make good decisions without any human intervention—such as the DeepMind AlphaGo agent.
- Search in Multiagent Domains
- Search in multi-agent domains, using the Minimax theorem to solve adversarial problems and build agents that make better decisions than humans.
- Optimizing Minimax Search
- Examine the limitations of minimax search and introduce optimizations and changes that make it practical in more complex domains.
- Build an Adversarial Game Playing Agent
- Build agents that make good decisions without any human intervention—such as the DeepMind AlphaGo agent.
- Extending Minimax Search
- Extend minimax search to support more than two players and non-deterministic domains.
- Additional Adversarial Search Topics
- Introduce Monte Carlo Tree Search, a highly successful search technique in game domains, along with a reading list for other advanced adversarial search topics.
Taught by
Thad Starner
Related Courses
- Business Considerations for 5G with Edge, IoT, and AI (Linux Foundation via edX)
- FinTech for Finance and Business Leaders (ACCA via edX)
- AI-900: Microsoft Certified Azure AI Fundamentals (A Cloud Guru)
- AWS Certified Machine Learning - Specialty (LA) (A Cloud Guru)
- Azure AI Components and Services (A Cloud Guru)