Reinforcement Learning via an Optimization Lens
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore reinforcement learning through an optimization lens in this 47-minute lecture by Lihong Li from Google Brain. Delve into the fundamentals of reinforcement learning, including Markov Decision Processes, Bellman equations, and the challenges of online versus offline learning. Examine the intersection of Bellman and Gauss in approximate dynamic programming, and investigate a long-standing open problem in the field. Discover how a linear programming reformulation and the Legendre-Fenchel transformation address the difficulties of solving fixed-point problems. Learn about a new loss function for solving Bellman equations and its eigenfunction interpretation. Conclude with practical applications using neural networks in a Puddle World scenario.
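To make the Bellman-equation material concrete, below is a minimal sketch of the Bellman operator and its fixed point on a toy tabular MDP, in the spirit of the background the lecture covers. The MDP itself (transition tensor `P`, reward matrix `R`, discount `gamma`) is an illustrative assumption, not taken from the talk.

```python
import numpy as np

# Toy MDP (illustrative assumption, not from the lecture):
# 3 states, 2 actions. P[a, s, s'] is a transition probability,
# R[s, a] is the expected one-step reward.
P = np.array([
    [[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.9]],  # action 0
    [[0.2, 0.8, 0.0], [0.0, 0.2, 0.8], [0.5, 0.0, 0.5]],  # action 1
])
R = np.array([[0.0, 1.0], [0.5, 0.0], [1.0, 0.2]])
gamma = 0.9

def bellman_operator(V):
    """(TV)(s) = max_a [ R(s,a) + gamma * sum_{s'} P(s'|s,a) V(s') ]."""
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    return Q.max(axis=1)

# Value iteration: T is a gamma-contraction in the sup norm, so
# iterating it converges to the unique fixed point V* = T V*.
V = np.zeros(3)
for _ in range(1000):
    V_new = bellman_operator(V)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
print("V* ≈", V)
```

In the exact tabular setting this iteration converges geometrically to the unique fixed point; the divergence phenomena the lecture discusses (e.g., the Tsitsiklis & Van Roy example) arise only once function approximation replaces the exact table.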
Syllabus
Intro
Reinforcement learning: Learning to make decisions
Online vs. Offline (Batch) RL: A Basic View
Outline
Markov Decision Process (MDP)
MDP Example: Deterministic Shortest Path
More General Case: Bellman Equation
Bellman Operator
When Bellman Meets Gauss: Approximate DP
Divergence Example of Tsitsiklis & Van Roy (96)
Does It Matter in Practice?
A Long-standing Open Problem
Linear Programming Reformulation (see the sketch after this syllabus)
Why Is Solving for the Fixed Point Directly Hard?
Addressing Difficulty #2: Legendre-Fenchel Transformation
Reformulation of the Bellman Equation
Primal-dual Problems are Hard to Solve
A New Loss for Solving the Bellman Equation
Eigenfunction Interpretation
Puddle World with Neural Networks
Conclusions
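For the syllabus items on the linear programming reformulation and the Legendre-Fenchel transformation, the standard textbook identities they build on look as follows. This is a sketch of the usual formulations, not a transcription of the talk's slides; the state weighting μ and the regularization weight λ are illustrative symbols.

```latex
% Standard LP reformulation: V* is the pointwise-smallest V that
% dominates every one-step Bellman backup (mu is any positive weighting).
\min_{V}\ \sum_{s} \mu(s)\, V(s)
\quad \text{s.t.} \quad
V(s) \,\ge\, r(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, V(s')
\quad \forall\, s, a.

% Legendre-Fenchel / conjugate-duality view: the hard max over actions
% is the support function of the probability simplex; adding an entropy
% term (the conjugate pair of log-sum-exp) smooths the nonlinearity:
\max_{a} q(a)
  \;=\; \max_{\pi \in \Delta(\mathcal{A})} \sum_{a} \pi(a)\, q(a),
\qquad
\max_{\pi \in \Delta(\mathcal{A})} \Big[ \sum_{a} \pi(a)\, q(a)
  + \lambda H(\pi) \Big]
  \;=\; \lambda \log \sum_{a} e^{q(a)/\lambda}.
```

Replacing the max in the Bellman operator with the smoothed log-sum-exp form turns the fixed-point condition into a differentiable objective, which is broadly the route the syllabus outlines: a primal-dual formulation first, then a new loss for solving the Bellman equation.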
Taught by
Simons Institute
Related Courses
Neural Networks for Machine Learning (University of Toronto via Coursera)
Good Brain, Bad Brain: Basics (University of Birmingham via FutureLearn)
Statistical Learning with R (Stanford University via edX)
Machine Learning 1—Supervised Learning (Brown University via Udacity)
Fundamentals of Neuroscience, Part 2: Neurons and Networks (Harvard University via edX)