Reinforcement Learning via an Optimization Lens
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore reinforcement learning through an optimization lens in this 47-minute lecture by Lihong Li from Google Brain. Delve into the fundamentals of reinforcement learning, including Markov Decision Processes, Bellman equations, and the challenges of online versus offline learning. Examine the intersection of Bellman and Gauss in approximate dynamic programming, and investigate a long-standing open problem in the field. Discover how linear programming reformulation and Legendre-Fenchel transformation address difficulties in solving fixed-point problems. Learn about a new loss function for solving Bellman equations and its eigenfunction interpretation. Conclude with practical applications using neural networks in a Puddle World scenario.
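For readers new to the terminology, the Bellman equation at the center of the lecture characterizes the optimal value function as the fixed point of the Bellman operator. A minimal sketch in standard textbook notation (the symbols below are common conventions, not taken from the talk's slides): with reward function R, transition kernel P, and discount factor \gamma \in [0, 1),

% Bellman operator T and its fixed point, the optimal value function V^*
(T V)(s) = \max_a \Big[ R(s, a) + \gamma \sum_{s'} P(s' \mid s, a) \, V(s') \Big], \qquad V^* = T V^*.

The linear programming reformulation mentioned above is the classical one: the fixed-point condition is relaxed into linear constraints,

% LP whose optimal solution recovers V^*; \mu is assumed here to be any state distribution with full support
\min_V \; \sum_s \mu(s) \, V(s) \quad \text{s.t.} \quad V(s) \ge R(s, a) + \gamma \sum_{s'} P(s' \mid s, a) \, V(s') \;\; \forall (s, a),

whose optimal solution is again V^*. The Legendre-Fenchel transformation discussed in the lecture is one route from such formulations to the saddle-point (primal-dual) problems covered in the syllabus.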
Syllabus
Intro
Reinforcement Learning: Learning to Make Decisions
Online vs. Offline (Batch) RL: A Basic View
Outline
Markov Decision Process (MDP)
MDP Example: Deterministic Shortest Path
More General Case: Bellman Equation
Bellman Operator
When Bellman Meets Gauss: Approximate DP
Divergence Example of Tsitsiklis & Van Roy (96)
Does It Matter in Practice?
A Long-standing Open Problem
Linear Programming Reformulation
Why Is Solving for the Fixed Point Directly Hard?
Addressing Difficulty #2: Legendre-Fenchel Transformation
Reformulation of Bellman Equation
Primal-dual Problems are Hard to Solve
A New Loss for Solving Bellman Equation
Eigenfunction Interpretation
Puddle World with Neural Networks
Conclusions
Taught by
Simons Institute
Related Courses
Introduction to Artificial Intelligence
Stanford University via Udacity
Decision-Making for Autonomous Systems
Chalmers University of Technology via edX
Fundamentals of Reinforcement Learning
University of Alberta via Coursera
A Complete Reinforcement Learning System (Capstone)
University of Alberta via Coursera
An Introduction to Artificial Intelligence
Indian Institute of Technology Delhi via Swayam