MDPs - Markov Decision Processes - Decision Making Under Uncertainty Using POMDPs.jl
Offered By: The Julia Programming Language via YouTube
Course Description
Overview
Explore Markov Decision Processes (MDPs) and decision-making under uncertainty in this comprehensive 49-minute video tutorial. Dive into the fundamentals of MDPs, including the state space, action space, transition function, reward function, and discount factor. Learn about QuickPOMDPs and a range of MDP solvers, including reinforcement learning approaches. Follow along with a Pluto notebook that implements a Grid World environment, defining its actions, transitions, rewards, and termination conditions. Cover offline solution methods such as value iteration with policy visualization, reinforcement learning methods such as TD learning, Q-learning, and SARSA, and online planning with Monte Carlo Tree Search (MCTS). Gain practical insights through simulations and visualizations, and use the additional resources and references to further your understanding of decision-making under uncertainty using POMDPs.jl in the Julia programming language.
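The notebook in the video defines a 2-D Grid World with QuickPOMDPs and solves it with the solvers listed in the syllabus. As a rough sketch of that workflow (not the notebook's actual code), the snippet below defines a much simpler 1-D MDP with QuickMDP and solves it offline with value iteration; the states, rewards, and solver settings are illustrative assumptions.

# Minimal sketch in Julia, assuming QuickPOMDPs, POMDPTools, and DiscreteValueIteration are installed.
# A 1-D "line world": step left or right on cells 1..10, pay -1 per step, and earn +10 for
# stepping into the terminal cell 10. This mirrors the Grid World workflow in miniature.
using POMDPs
using QuickPOMDPs: QuickMDP
using POMDPTools: Deterministic, Uniform
using DiscreteValueIteration: ValueIterationSolver

mdp = QuickMDP(
    states = 1:10,                       # state space
    actions = [-1, 1],                   # action space: step left or right
    discount = 0.95,                     # discount factor
    transition = (s, a) -> Deterministic(clamp(s + a, 1, 10)),  # deterministic moves, clipped to the grid
    reward = (s, a) -> (s == 9 && a == 1) ? 10.0 : -1.0,        # goal bonus vs. step cost
    initialstate = Uniform(1:10),
    isterminal = s -> s == 10,           # episode ends at the right edge
)

policy = solve(ValueIterationSolver(max_iterations = 100), mdp)  # offline solution
@show action(policy, 5)                  # query the resulting policy at state 5

The other solvers covered in the video plug into the same solve/action interface, for example Q-learning and SARSA solvers from TabularTDLearning.jl and the online MCTSSolver from MCTS.jl.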
Syllabus
Intro.
MDP definition.
Grid World.
State space.
Action space.
Transition function.
Reward function.
Discount factor.
QuickPOMDPs.
MDP solvers.
RL solvers.
Pluto notebook.
Grid World environment.
Grid World actions.
Grid World transitions.
Grid World rewards.
Grid World discount.
Grid World termination.
Grid World MDP.
Solutions (offline).
Value iteration.
Transition probability distribution.
Using the policy.
Visualizations.
Reinforcement learning.
TD learning.
Q-learning.
SARSA.
Solutions (online).
MCTS.
MCTS visualization.
Simulations.
Extras.
References.
Taught by
The Julia Programming Language
Related Courses
Computational Neuroscience - University of Washington via Coursera
Reinforcement Learning - Brown University via Udacity
Reinforcement Learning - Indian Institute of Technology Madras via Swayam
FA17: Machine Learning - Georgia Institute of Technology via edX
Introduction to Reinforcement Learning - Higher School of Economics via Coursera