What Matters in On-Policy Reinforcement Learning? A Large-Scale Empirical Study
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore a comprehensive analysis of on-policy reinforcement learning in this 38-minute video. Delve into the impact of various design choices on agent performance across five continuous control environments. Learn about parameterized agents, unified online RL frameworks, policy loss, network architectures, and initial policy considerations. Examine the effects of normalization, clipping, advantage estimation, and training setup on RL outcomes. Investigate timestep handling, optimizer selection, and regularization techniques. Gain valuable insights and practical recommendations for implementing effective on-policy RL agents based on extensive empirical research involving over 250,000 trained agents.
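Two of the design choices discussed, advantage estimation and the clipped policy loss, can be illustrated with a minimal sketch. The code below is not from the video or the underlying paper; it is a standard NumPy implementation of Generalized Advantage Estimation (GAE) and the PPO-style clipped surrogate loss, with hyperparameter defaults (`gamma`, `lam`, `eps`) chosen as common illustrative values.

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation over one episode.

    `values` must have len(rewards) + 1 entries: the value of each
    visited state plus a bootstrap value for the final state.
    """
    T = len(rewards)
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        # One-step TD error at time t.
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        # Exponentially weighted sum of future TD errors.
        running = delta + gamma * lam * running
        adv[t] = running
    return adv

def ppo_clip_loss(ratio, adv, eps=0.2):
    """PPO clipped surrogate objective, returned as a loss to minimize.

    `ratio` is pi_new(a|s) / pi_old(a|s) per sample; clipping limits
    how far a single update can move the policy.
    """
    unclipped = ratio * adv
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * adv
    return -np.mean(np.minimum(unclipped, clipped))
```

With `gamma=1.0, lam=1.0`, GAE reduces to the plain Monte Carlo advantage, which makes the function easy to sanity-check on a toy episode.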
Syllabus
- Intro & Overview
- Parameterized Agents
- Unified Online RL and Parameter Choices
- Policy Loss
- Network Architecture
- Initial Policy
- Normalization & Clipping
- Advantage Estimation
- Training Setup
- Timestep Handling
- Optimizers
- Regularization
- Conclusion & Comments
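The "Normalization & Clipping" item refers to tricks such as normalizing observations with running statistics and clipping the result, which the study finds matters considerably. As a hedged illustration (not code from the video), here is a minimal running normalizer using Welford's online algorithm; the class name `RunningNorm` and the `clip=10.0` default are illustrative choices.

```python
class RunningNorm:
    """Normalize scalar observations with running mean/std, then clip.

    Uses Welford's online algorithm so statistics update one sample
    at a time without storing the history.
    """

    def __init__(self, eps=1e-8, clip=10.0):
        self.mean = 0.0
        self.m2 = 0.0      # sum of squared deviations from the mean
        self.count = 0
        self.eps = eps     # avoids division by zero early on
        self.clip = clip   # bound on the normalized value

    def update(self, x):
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)

    def normalize(self, x):
        var = self.m2 / max(self.count - 1, 1)
        z = (x - self.mean) / ((var + self.eps) ** 0.5)
        return max(-self.clip, min(self.clip, z))
```

In practice the same idea is applied per-dimension to vector observations, and often to rewards as well.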
Taught by
Yannic Kilcher
Related Courses
- Introduction to Artificial Intelligence (Stanford University via Udacity)
- Natural Language Processing (Columbia University via Coursera)
- Probabilistic Graphical Models 1: Representation (Stanford University via Coursera)
- Computer Vision: The Fundamentals (University of California, Berkeley via Coursera)
- Learning from Data (Introductory Machine Learning course) (California Institute of Technology via Independent)