Reinforcement Learning in Feature Space: Complexity and Regret
Offered By: Simons Institute via YouTube
Course Description
Overview
Syllabus
Intro
Markov decision process
What does a sample mean?
Complexity and Regret for Tabular MDP
Rethinking Bellman equation
State Feature Map
Representing value function using linear combination of features
Reducing Bellman equation using features
Sample complexity of RL with features
Learning to Control On-The-Fly
Episodic Reinforcement Learning
Hilbert space embedding of transition kernel
The MatrixRL Algorithm
Regret Analysis
From feature to kernel
MatrixRL has an equivalent kernelization
Pros and cons of using features for RL
What could be good state features?
Finding Metastable State Clusters
Example: stochastic diffusion process
Unsupervised state aggregation learning
Soft state aggregation for NYC taxi data
Example: State Trajectories of Demon Attack
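Note on the feature-based Bellman reduction covered in the syllabus items "Representing value function using linear combination of features" and "Reducing Bellman equation using features": a minimal sketch of the standard construction, using generic notation (a feature map \phi, weight vector w, dimension d, and discount \gamma) that is illustrative and not necessarily the notation used in the talk. The value function is approximated as a linear combination of features,
V(s) \;\approx\; \sum_{k=1}^{d} w_k\,\phi_k(s) \;=\; \phi(s)^\top w,
and substituting this representation into the Bellman equation gives a fixed-point condition in the weights,
\phi(s)^\top w \;=\; \max_a \Big[\, r(s,a) \;+\; \gamma \sum_{s'} P(s' \mid s,a)\,\phi(s')^\top w \,\Big],
so the unknown shrinks from a table of |S| values to a d-dimensional vector w (assuming the feature class is expressive enough to represent the Bellman backup), which is why the sample complexity can depend on d rather than on |S|.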
Taught by
Simons Institute
Related Courses
Beyond Worst-Case Analysis - Panel Discussion
Simons Institute via YouTube
Reinforcement Learning - Part I
Simons Institute via YouTube
Exploration with Limited Memory - Streaming Algorithms for Coin Tossing, Noisy Comparisons, and Multi-Armed Bandits
Association for Computing Machinery (ACM) via YouTube
Optimal Transport for Machine Learning - Gabriel Peyre, Ecole Normale Superieure
Alan Turing Institute via YouTube
Learning Quantum with Generative Models
APS Physics via YouTube