
Optimality and Approximation with Policy Gradient Methods in Markov Decision Processes

Offered By: Simons Institute via YouTube

Tags

Reinforcement Learning Courses
Deep Learning Courses
Markov Decision Processes Courses
Approximation Courses
Policy Gradient Methods Courses

Course Description

Overview

Explore the intricacies of policy gradient methods in Markov Decision Processes through this 55-minute lecture by Alekh Agarwal from Microsoft Research Redmond. Delve into optimality and approximation concepts as part of the "Emerging Challenges in Deep Learning" series at the Simons Institute. Examine MDP preliminaries, policy parameterizations, and the policy gradient algorithm, with a focus on softmax parameterization and entropy regularization. Analyze the convergence of entropy-regularized policy gradient, a natural solution, and the proof ideas behind it. Investigate restricted parameterizations, natural policy gradient (NPG) updates, assumptions on policies, and extensions to finite samples. Gain valuable insights into this crucial area of deep learning and reinforcement learning research.
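
As context for the overview and syllabus, here is a minimal sketch of the standard objects these topics refer to (standard textbook forms, written in LaTeX notation; the lecture's own notation and choice of regularization may differ):

\[ \pi_\theta(a \mid s) = \frac{\exp(\theta_{s,a})}{\sum_{a'} \exp(\theta_{s,a'})} \qquad \text{(softmax parameterization)} \]

\[ \nabla_\theta V^{\pi_\theta}(\mu) = \frac{1}{1-\gamma}\, \mathbb{E}_{s \sim d^{\pi_\theta}_\mu,\ a \sim \pi_\theta(\cdot \mid s)} \left[ \nabla_\theta \log \pi_\theta(a \mid s)\, A^{\pi_\theta}(s,a) \right] \qquad \text{(policy gradient)} \]

\[ \theta^{(t+1)} = \theta^{(t)} + \eta\, F(\theta^{(t)})^{\dagger}\, \nabla_\theta V^{\pi_{\theta^{(t)}}}(\mu), \quad F(\theta) = \mathbb{E}\left[ \nabla_\theta \log \pi_\theta(a \mid s)\, \nabla_\theta \log \pi_\theta(a \mid s)^{\top} \right] \qquad \text{(NPG update)} \]

Under the softmax parameterization, the NPG update above is known to reduce to a multiplicative-weights form, \( \pi^{(t+1)}(a \mid s) \propto \pi^{(t)}(a \mid s)\, \exp\!\big(\eta\, A^{(t)}(s,a)/(1-\gamma)\big) \), which is the kind of structure the convergence analyses in this line of work exploit.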

Syllabus

Intro
Questions of interest
Main challenges
MDP Preliminaries
Policy parameterizations
Policy gradient algorithm
Policy gradient example: Softmax parameterization
Entropy regularization
Convergence of Entropy regularized PG
A natural solution
Proof ideas
Restricted parameterizations
A closer look at Natural Policy Gradient: the NPG update
Assumptions on policies
Extension to finite samples
Looking ahead


Taught by

Simons Institute

Related Courses

A Complete Reinforcement Learning System (Capstone)
University of Alberta via Coursera
Fundamentals of Deep Reinforcement Learning
Learn Ventures via edX
Data Science Decisions in Time: Using Data Effectively
Johns Hopkins University via Coursera
Reinforcement Learning with Gymnasium in Python
DataCamp
Decision-Making for Autonomous Systems
Chalmers University of Technology via edX