What Are the Statistical Limits of Offline Reinforcement Learning With Function Approximation?
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore the statistical boundaries of offline reinforcement learning with function approximation in this 55-minute lecture by Sham Kakade from the University of Washington and Microsoft Research. Delve into key concepts including realizability, sequential decision making, coverage limits, and policy evaluation. Examine upper and lower bounds, practical considerations, and experimental results. Gain insights into the mathematics of online decision making and the interplay between models and features in reinforcement learning.
Syllabus
Intro
What is offline reinforcement learning?
Intuition
Realizability
Sequential Decision Making
Standard Approach
Coverage
Limits
Policy Evaluation
Setting
Feature Mapping
Upper Limits
Lower Limits
Observations
Upper Bounds
Inequality
Simulation
Summary
Sufficient Conditions
Possible Results
Intuition and Construction
Practical Considerations
Follow Up
Experiments
Other Experiments
Model vs Feature
Taught by
Simons Institute
Related Courses
Can Wikipedia Help Offline Reinforcement Learning? - Author Interview — Yannic Kilcher via YouTube
Can Wikipedia Help Offline Reinforcement Learning? - Paper Explained — Yannic Kilcher via YouTube
CAP6412 - Final Project Presentations - Lecture 27 — University of Central Florida via YouTube
Offline Reinforcement Learning and Model-Based Optimization — Simons Institute via YouTube
Reinforcement Learning — Simons Institute via YouTube