Online Learning in Markov Decision Processes - Part 1
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore the fundamentals of online learning in Markov Decision Processes (MDPs) through this comprehensive lecture by Ambuj Tewari from the University of Michigan. Delve into key concepts such as online learning theory, the E³ and R-Max algorithms, and the general optimism-in-the-face-of-uncertainty (OFU) principle. Gain insights into algorithm design, notation, and MDPs. Understand optimal MDPs, the Bellman equation, and Bellman's theorem. Analyze the optimal approach to online learning in MDPs. This talk, part of the Theory of Reinforcement Learning Boot Camp at the Simons Institute, provides a thorough introduction to the subject and addresses important questions in the field.
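For orientation (a standard statement of the result, not quoted from the lecture itself), the Bellman optimality equation referenced in the syllabus characterizes the optimal value function V* of a discounted MDP with reward function r, transition kernel P, and discount factor γ:

V^*(s) = \max_{a} \left[ r(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, V^*(s') \right]

Bellman's theorem states that V* is the unique solution of this fixed-point equation and that any policy acting greedily with respect to V* is optimal.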
Syllabus
Introduction
Online Learning
Theory
E³ (E-Cube)
R-Max
General OFU Principle
Algorithm Design
Notation
MDPs
Optimal MDP
Questions
Bellman Equation
Bellman Theorem
Analysis
Optimal
Taught by
Simons Institute
Related Courses
Deep Reinforcement Learning - Nvidia Deep Learning Institute via Udacity
Reinforcement Learning - Edureka
Fundamentals of Deep Reinforcement Learning - Learn Ventures via edX
A Friendly Introduction to Deep Reinforcement Learning, Q-Networks and Policy Gradients - Serrano.Academy via YouTube
Deep Robust Reinforcement Learning and Regularization - Simons Institute via YouTube