Deep Reinforcement Learning in the Real World - Sergey Levine
Offered By: Institute for Advanced Study via YouTube
Course Description
Overview
Explore real-world applications of deep reinforcement learning in this lecture by Sergey Levine of UC Berkeley. Delve into the challenges and solutions of off-policy reinforcement learning with large datasets, covering both model-free and model-based approaches. Learn about QT-Opt, an off-policy Q-learning algorithm that scales to large datasets, and its application to robotic grasping tasks. Discover how to address common pitfalls in reinforcement learning, such as training on stale or irrelevant data, and see how temporal difference models and Q-functions can learn implicit models. Gain insights into optimizing over valid states and applying model-based reinforcement learning to dexterous manipulation tasks.
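To make the QT-Opt idea concrete, here is a minimal sketch of off-policy Q-learning with continuous actions, where the usual argmax over actions is approximated with the cross-entropy method rather than a separate actor network, as QT-Opt does. The function names, the [-1, 1] action bounds, and all hyperparameters below are illustrative assumptions, not the settings from the lecture or the paper; `q_fn` stands in for any learned Q-function.

```python
import numpy as np

def cem_maximize(q_fn, state, action_dim, iters=3, pop=64, elite_frac=0.1):
    """Approximate argmax_a Q(s, a) with the cross-entropy method:
    sample actions from a Gaussian, keep the top-scoring elites, and
    refit the Gaussian to them. Hyperparameters here are illustrative."""
    mean = np.zeros(action_dim)
    std = np.ones(action_dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = np.clip(mean + std * np.random.randn(pop, action_dim), -1.0, 1.0)
        scores = np.array([q_fn(state, a) for a in samples])
        elites = samples[np.argsort(scores)[-n_elite:]]  # highest Q-values
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean

def td_target(q_fn, reward, next_state, done, action_dim, gamma=0.99):
    """One-step Bellman backup target for off-policy Q-learning:
    r + gamma * max_a' Q(s', a'), with the max approximated by CEM."""
    if done:
        return reward
    best_action = cem_maximize(q_fn, next_state, action_dim)
    return reward + gamma * q_fn(next_state, best_action)
```

Because the target only needs (state, action, reward, next state) tuples, updates of this form can be computed from any previously logged data, which is what lets QT-Opt train on large off-policy datasets of robot experience.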
Syllabus
Intro
Deep learning helps us handle unstructured environments
Reinforcement learning provides a formalism for behavior
RL has a big problem
Off-policy RL with large datasets
Off-policy model-free learning
How to solve for the Q-function?
QT-Opt: off-policy Q-learning at scale
Grasping with QT-Opt
Emergent grasping strategies
So what's the problem?
How to stop training on garbage?
How well does it work?
Off-policy model-based reinforcement learning
High-level algorithm outline
Model-based RL for dexterous manipulation
Q-Functions (can) learn models
Temporal difference models
Optimizing over valid states
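As a companion to the "Q-Functions (can) learn models" and "Temporal difference models" items above, here is a minimal sketch of the temporal difference model target: a goal- and horizon-conditioned Q-function whose reward at horizon zero is the negative distance to the goal. The signature `q_fn(state, action, goal, horizon)` and the `greedy_action` helper are hypothetical stand-ins, not APIs from the talk.

```python
import numpy as np

def greedy_action(q_fn, state, goal, horizon, action_dim=2, n_samples=128):
    """Approximate argmax_a Q(s, a, g, tau) by random shooting;
    a simple stand-in for the CEM optimizer sketched earlier."""
    candidates = np.random.uniform(-1.0, 1.0, size=(n_samples, action_dim))
    scores = np.array([q_fn(state, a, goal, horizon) for a in candidates])
    return candidates[np.argmax(scores)]

def tdm_target(q_fn, next_state, goal, horizon):
    """Bellman target for a temporal difference model Q(s, a, g, tau):
    at tau = 0 the target is the negative distance to the goal;
    otherwise we bootstrap with the horizon decremented by one."""
    if horizon == 0:
        return -np.linalg.norm(next_state - goal)
    a = greedy_action(q_fn, next_state, goal, horizon - 1)
    return q_fn(next_state, a, goal, horizon - 1)
```

Since Q(s, a, g, 0) regresses toward negative distance-to-goal, the trained network doubles as an implicit model of which states are reachable within a given horizon, which is what makes planning by optimizing over valid states possible.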
Taught by
Institute for Advanced Study
Related Courses
Reinforcement Learning - RWTH Aachen University via edX
Hierarchical Imitation Learning with Vector Quantized Models - Finnish Center for Artificial Intelligence FCAI via YouTube
Introduction to Reinforcement Learning - Open Data Science via YouTube
Model-Based RL - Pascal Poupart via YouTube
Partially Observable RL - Pascal Poupart via YouTube