Divide-and-Conquer Monte Carlo Tree Search for Goal-Directed Planning - Paper Explained

Offered By: Yannic Kilcher via YouTube

Tags

Reinforcement Learning Courses
Artificial Intelligence Courses
Sequential Decision Making Courses

Course Description

Overview

Explore a divide-and-conquer approach to AI planning in this 26-minute video explanation of the paper "Divide-and-Conquer Monte Carlo Tree Search for Goal-Directed Planning." Delve into a generalization of Monte Carlo Tree Search (MCTS) that solves problems by recursively splitting a complex task into manageable sub-problems. Learn how this method departs from traditional step-by-step planning and instead searches for promising intermediate goals. Discover how the algorithm can improve an imperfect goal-directed policy by sequencing sub-goals for it. Examine the concept of Divide-and-Conquer MCTS (DC-MCTS) and its application in both grid-world navigation and challenging continuous control environments. Gain insight into the flexibility of planning order and its potential to outperform purely sequential planning approaches.
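
To make the divide-and-conquer idea above concrete, here is a minimal Python sketch of the recursive splitting step. It is not the paper's full DC-MCTS (there is no search tree, no visit statistics, and no learned networks); the helpers connect_value and propose_subgoals are hypothetical placeholders standing in for a low-level goal-directed policy's success estimate and a sub-goal proposal mechanism.

```python
# Minimal sketch of the divide-and-conquer planning idea, under assumptions:
#   connect_value(s, g)    -> estimated probability that the low-level
#                             goal-directed policy reaches g directly from s
#   propose_subgoals(s, g) -> candidate intermediate states to split on
# Both helpers are hypothetical placeholders, not part of the paper's code.
from typing import Callable, Hashable, List, Tuple

State = Hashable
Plan = List[State]  # ordered intermediate sub-goals between start and goal


def dc_plan(
    s: State,
    g: State,
    connect_value: Callable[[State, State], float],
    propose_subgoals: Callable[[State, State], List[State]],
    depth: int = 3,
) -> Tuple[float, Plan]:
    """Return (estimated success probability, list of intermediate sub-goals)."""
    # Option 1: hand the whole segment s -> g to the low-level policy directly.
    best_value, best_plan = connect_value(s, g), []

    if depth == 0:
        return best_value, best_plan

    # Option 2: split on an intermediate goal m and solve both halves recursively.
    for m in propose_subgoals(s, g):
        left_value, left_plan = dc_plan(s, m, connect_value, propose_subgoals, depth - 1)
        right_value, right_plan = dc_plan(m, g, connect_value, propose_subgoals, depth - 1)
        # Both halves must succeed, so their values combine multiplicatively.
        value = left_value * right_value
        if value > best_value:
            best_value, best_plan = value, left_plan + [m] + right_plan

    return best_value, best_plan
```

In DC-MCTS proper, the choice of split point is guided by an MCTS-style search over sub-goals, informed by learned value and proposal functions, rather than the exhaustive recursion shown here.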

Syllabus

Intro
What is planning
The algorithm
Finding the next action
Building your search tree
Search over subproblems
Subdivide
The Catch
Deep Learning
Training


Taught by

Yannic Kilcher

Related Courses

Toward Generalizable Embodied AI for Machine Autonomy
Bolei Zhou via YouTube
What Are the Statistical Limits of Offline Reinforcement Learning With Function Approximation?
Simons Institute via YouTube
Better Learning from the Past - Counterfactual - Batch RL
Simons Institute via YouTube
Off-Policy Policy Optimization
Simons Institute via YouTube
Provably Efficient Reinforcement Learning with Linear Function Approximation - Chi Jin
Institute for Advanced Study via YouTube