What Matters in On-Policy Reinforcement Learning? A Large-Scale Empirical Study
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore a comprehensive analysis of on-policy reinforcement learning in this 38-minute video. Delve into the impact of a wide range of design choices on agent performance across five continuous control environments. Learn about parameterized agents, a unified online RL framework, policy losses, network architectures, and initial policy choices. Examine the effects of normalization, clipping, advantage estimation, and training setup on RL outcomes. Investigate timestep handling, optimizer selection, and regularization techniques. Gain valuable insights and practical recommendations for implementing effective on-policy RL agents, based on extensive empirical research involving over 250,000 trained agents.
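Two of the design choices named above are the policy loss (e.g., PPO-style clipping) and advantage estimation (e.g., GAE), both of which also appear in the syllabus below. As a rough illustration only, here is a minimal NumPy sketch of those two pieces plus advantage normalization; the function names, hyperparameter values, and synthetic rollout are illustrative assumptions, not settings recommended in the video or the underlying paper.

```python
import numpy as np

def gae_advantages(rewards, values, dones, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation (GAE), one common advantage estimator.
    `values` has one extra entry for the bootstrap value of the final state."""
    T = len(rewards)
    advantages = np.zeros(T)
    last_adv = 0.0
    for t in reversed(range(T)):
        nonterminal = 1.0 - dones[t]
        # TD residual for step t, zeroing the bootstrap at episode ends.
        delta = rewards[t] + gamma * values[t + 1] * nonterminal - values[t]
        last_adv = delta + gamma * lam * nonterminal * last_adv
        advantages[t] = last_adv
    return advantages

def clipped_policy_loss(ratio, advantages, clip_range=0.2):
    """PPO-style clipped surrogate objective, one possible policy loss.
    `ratio` is pi_new(a|s) / pi_old(a|s) for each sampled action."""
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_range, 1.0 + clip_range) * advantages
    return -np.mean(np.minimum(unclipped, clipped))

# Tiny synthetic rollout, purely to exercise the functions above.
rng = np.random.default_rng(0)
T = 8
rewards = rng.normal(size=T)
values = rng.normal(size=T + 1)
dones = np.zeros(T)
ratio = np.exp(rng.normal(scale=0.1, size=T))

adv = gae_advantages(rewards, values, dones)
adv = (adv - adv.mean()) / (adv.std() + 1e-8)  # advantage normalization, another studied choice
print("clipped policy loss:", clipped_policy_loss(ratio, adv))
```

The point of the video is that seemingly minor choices of this kind (loss variant, clipping range, normalization, and so on) can have a large effect on final performance.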
Syllabus
- Intro & Overview
- Parameterized Agents
- Unified Online RL and Parameter Choices
- Policy Loss
- Network Architecture
- Initial Policy
- Normalization & Clipping
- Advantage Estimation
- Training Setup
- Timestep Handling
- Optimizers
- Regularization
- Conclusion & Comments
Taught by
Yannic Kilcher
Related Courses
- Computational Neuroscience (University of Washington via Coursera)
- Reinforcement Learning (Brown University via Udacity)
- Reinforcement Learning (Indian Institute of Technology Madras via Swayam)
- FA17: Machine Learning (Georgia Institute of Technology via edX)
- Introduction to Reinforcement Learning (Higher School of Economics via Coursera)