Reinforcement Learning via an Optimization Lens
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore reinforcement learning through an optimization lens in this 47-minute lecture by Lihong Li from Google Brain. Delve into the fundamentals of reinforcement learning, including Markov Decision Processes, Bellman equations, and the challenges of online versus offline learning. Examine the intersection of Bellman and Gauss in approximate dynamic programming, and investigate a long-standing open problem in the field. Discover how linear programming reformulation and Legendre-Fenchel transformation address difficulties in solving fixed-point problems. Learn about a new loss function for solving Bellman equations and its eigenfunction interpretation. Conclude with practical applications using neural networks in a Puddle World scenario.
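To make the fixed-point view of the Bellman equation concrete, here is a minimal sketch (not from the lecture itself) of value iteration on an invented two-state, two-action MDP: repeatedly applying the Bellman optimality operator drives the value function to the fixed point of V = max_a [R + γ P V]. All transition probabilities and rewards below are made-up illustrative numbers.

```python
import numpy as np

gamma = 0.9

# P[a, s, s']: transition probabilities for 2 actions over 2 states (assumed toy values)
P = np.array([
    [[1.0, 0.0], [0.0, 1.0]],   # action 0: stay in the current state
    [[0.0, 1.0], [1.0, 0.0]],   # action 1: switch to the other state
])

# R[a, s]: immediate reward for taking action a in state s (assumed toy values)
R = np.array([
    [0.0, 1.0],   # action 0
    [0.5, 0.0],   # action 1
])

def bellman_operator(V):
    # (T V)(s) = max_a [ R(s, a) + gamma * sum_s' P(s' | s, a) V(s') ]
    return np.max(R + gamma * (P @ V), axis=0)

# Value iteration: iterate the gamma-contraction T until its fixed point
V = np.zeros(2)
for _ in range(2000):
    V_new = bellman_operator(V)
    if np.max(np.abs(V_new - V)) < 1e-12:
        break
    V = V_new
```

Because the operator is a γ-contraction in the max norm, the iteration converges geometrically regardless of the starting point; this is the classical guarantee that breaks down once function approximation enters, as the syllabus items on approximate DP and the Tsitsiklis & Van Roy divergence example discuss.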
Syllabus
Intro
Reinforcement learning: Learning to make decisions
Online vs. Offline (Batch) RL: A Basic View
Outline
Markov Decision Process (MDP)
MDP Example: Deterministic Shortest Path
More General Case: Bellman Equation
Bellman Operator
When Bellman Meets Gauss: Approximate DP
Divergence Example of Tsitsiklis & Van Roy (96)
Does It Matter in Practice?
A Long-standing Open Problem
Linear Programming Reformulation
Why Solving for Fixed Point Directly is Hard?
Addressing Difficulty #2: Legendre-Fenchel Transformation
Reformulation of Bellman Equation
Primal-dual Problems are Hard to Solve
A New Loss for Solving Bellman Equation
Eigenfunction Interpretation
Puddle World with Neural Networks
Conclusions
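The "Deterministic Shortest Path" syllabus item is the simplest instance of a Bellman equation: with deterministic transitions and edge costs, it reduces to V(s) = min over edges (s, s') of [cost(s, s') + V(s')]. A small hedged sketch on an invented graph (the states, edges, and costs are assumptions, not from the talk):

```python
import math

# edges[s] = list of (next_state, cost); 'G' is the goal state with V(G) = 0
edges = {
    'A': [('B', 1.0), ('C', 4.0)],
    'B': [('C', 2.0), ('G', 6.0)],
    'C': [('G', 1.0)],
    'G': [],
}

# Initialize V to infinity everywhere except the goal
V = {s: (0.0 if s == 'G' else math.inf) for s in edges}

# Sweeps of the Bellman operator; |S| sweeps suffice to propagate
# shortest-path values through this graph
for _ in range(len(edges)):
    for s, outs in edges.items():
        if outs:
            V[s] = min(cost + V[t] for t, cost in outs)
```

Here the fixed point is exactly the shortest-path distance to the goal from each state, which is why the lecture uses this example as a stepping stone to the general (stochastic, discounted) Bellman equation.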
Taught by
Simons Institute
Related Courses
TensorFlow Developer Certificate Exam Prep
A Cloud Guru
Post Graduate Certificate in Advanced Machine Learning & AI
Indian Institute of Technology Roorkee via Coursera
Advanced AI Techniques for the Supply Chain
LearnQuest via Coursera
Advanced Learning Algorithms
DeepLearning.AI via Coursera
IBM AI Engineering
IBM via Coursera