Regularizing Trajectory Optimization with Denoising Autoencoders - Paper Explained
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore a comprehensive analysis of the paper "Regularizing Trajectory Optimization with Denoising Autoencoders" in this informative video. Delve into the challenges of planning with learned world models in reinforcement learning and discover a novel solution that regularizes trajectory optimization using denoising autoencoders. Learn how this approach improves planning accuracy with both gradient-based and gradient-free optimizers, leading to rapid initial learning in popular motor control tasks. Gain insights into the paper's methodology, experiments, and implications for enhancing sample efficiency in model-based reinforcement learning.
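To make the core idea concrete, here is a minimal sketch of a DAE-regularized planning cost. The names (`model`, `reward`, `dae`, `regularized_plan_cost`) are assumptions for illustration, not the paper's code; it captures the general idea of penalizing imagined trajectories that the denoising autoencoder reconstructs poorly, i.e. trajectories that drift away from the data the world model was trained on, rather than the paper's exact formulation.

```python
import torch

# Hypothetical pretrained components (names are illustrative assumptions):
#   model(state, action) -> next_state      learned dynamics (world) model
#   reward(state, action) -> scalar reward  task reward function
#   dae(x) -> denoised x                    denoising autoencoder trained on (state, action) pairs

def regularized_plan_cost(actions, init_state, model, reward, dae, lam=1.0):
    """Negative return of an action sequence rolled out in the learned model,
    plus a DAE-based penalty that keeps the imagined trajectory close to the
    training distribution (a sketch of the regularization idea)."""
    state = init_state
    total_reward = 0.0
    penalty = 0.0
    for a in actions:
        x = torch.cat([state, a], dim=-1)
        # An optimally trained DAE approximately maps x to x + sigma^2 * grad log p(x),
        # so a large reconstruction residual signals a low-density (unfamiliar) region.
        penalty = penalty + ((dae(x) - x) ** 2).sum()
        total_reward = total_reward + reward(state, a)
        state = model(state, a)
    return -total_reward + lam * penalty
```

A gradient-based optimizer can backpropagate this cost with respect to `actions`, while a gradient-free optimizer such as CEM can use it directly as a fitness score when ranking candidate action sequences.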
Syllabus
Introduction
What is Reinforcement Learning
Exploiting Inaccurate Models
Proposed Approach
Regularization
Denoising Autoencoders
Optimal Denoising Function
Gradient Descent
Experiments
Taught by
Yannic Kilcher
Related Courses
Computational Neuroscience - University of Washington via Coursera
Reinforcement Learning - Brown University via Udacity
Reinforcement Learning - Indian Institute of Technology Madras via Swayam
FA17: Machine Learning - Georgia Institute of Technology via edX
Introduction to Reinforcement Learning - Higher School of Economics via Coursera