Regularization and Robustness in Reinforcement Learning
Offered By: GERAD Research Center via YouTube
Course Description
Overview
Explore the intersection of regularization and robustness in reinforcement learning through this seminar presented by Esther Derman of Mila, Canada. Delve into the challenge of handling changing or only partially known system dynamics in robust Markov decision processes (MDPs) and discover how regularization techniques can be leveraged to address it. Learn about the limitations of traditional robust optimization methods in terms of computational complexity and scalability, and understand how regularized MDPs offer improved stability in policy learning without worsening time complexity. Gain insight into the approach of using proper regularization to reduce planning and learning in robust MDPs to planning and learning in regularized MDPs.
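The reduction described above can be illustrated in its simplest form: when only the rewards of a finite MDP are uncertain within an interval of radius delta, the robust Bellman backup (adversary picks the worst reward in the ball) coincides with a regularized Bellman backup (subtract delta from the nominal reward), so robustness costs no extra time complexity. The sketch below is a toy illustration under these assumptions; the MDP sizes, the discount factor, and the discretized adversary are all illustrative choices, not details from the seminar.

```python
import numpy as np

# Toy finite MDP (sizes and values are illustrative assumptions).
n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # nominal rewards
delta = 0.1  # radius of the reward-uncertainty interval per (s, a)

def robust_bellman(V):
    # Inner adversary: pick the worst reward perturbation in [-delta, delta]
    # (discretized here for explicitness; the endpoints are included).
    perturbs = np.linspace(-delta, delta, 21)
    Qs = np.stack([(R + u) + gamma * P @ V for u in perturbs])  # (21, S, A)
    return Qs.min(axis=0).max(axis=1)

def regularized_bellman(V):
    # Equivalent regularized backup: penalize the nominal reward by delta.
    Q = (R - delta) + gamma * P @ V
    return Q.max(axis=1)

# Iterate both contraction operators to a (near) fixed point.
V_rob = V_reg = np.zeros(n_states)
for _ in range(500):
    V_rob, V_reg = robust_bellman(V_rob), regularized_bellman(V_reg)

print(np.allclose(V_rob, V_reg))  # the two value functions agree
```

Both operators are gamma-contractions with the same fixed point, so value iteration under either backup converges to the same robust value function; the regularized form simply avoids the inner minimization.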
Syllabus
Regularization and Robustness in Reinforcement Learning, Esther Derman
Taught by
GERAD Research Center
Related Courses
Computational Neuroscience - University of Washington via Coursera
Reinforcement Learning - Brown University via Udacity
Reinforcement Learning - Indian Institute of Technology Madras via Swayam
FA17: Machine Learning - Georgia Institute of Technology via edX
Introduction to Reinforcement Learning - Higher School of Economics via Coursera