SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning
Offered By: Steve Brunton via YouTube
Course Description
Overview
Explore the SINDy-RL framework in this 21-minute video lecture by Steve Brunton. Delve into interpretable and efficient model-based reinforcement learning, which combines sparse identification of nonlinear dynamics (SINDy) with deep reinforcement learning (DRL). Learn how this approach produces efficient, interpretable, and trustworthy surrogate representations of the dynamics model, the reward function, and the control policy. Discover the advantages of SINDy-RL over conventional DRL methods, including substantially reduced data requirements and smaller, more interpretable control policies. Follow along as the lecture covers reinforcement learning basics and its drawbacks, dictionary learning, and the components of SINDy-RL: environment modeling, reward function approximation, agent design, and uncertainty quantification. Gain insights into how the method applies to benchmark control environments and challenging fluids problems, with potential impact on control strategies for complex systems such as tokamak fusion reactors and fluid flows.
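For readers unfamiliar with the dictionary-learning step mentioned above, the following is a minimal, self-contained sketch of the SINDy idea: regress numerically estimated derivatives onto a library of candidate functions and iteratively threshold away small coefficients. The damped-oscillator system, library terms, and threshold values here are illustrative assumptions, not the lecture's actual code or benchmarks.

```python
import numpy as np

# Toy data: damped harmonic oscillator x' = y, y' = -x - 0.1*y,
# simulated with forward Euler (an assumed example system).
dt = 0.01
t = np.arange(0, 10, dt)
X = np.zeros((len(t), 2))
X[0] = [2.0, 0.0]
for k in range(len(t) - 1):
    x, y = X[k]
    X[k + 1] = X[k] + dt * np.array([y, -x - 0.1 * y])

Xdot = np.gradient(X, dt, axis=0)  # numerical derivative estimates

def library(X):
    """Candidate dictionary: [1, x, y, x^2, xy, y^2]."""
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

Theta = library(X)

def stlsq(Theta, dXdt, threshold=0.05, n_iter=10):
    """Sequentially thresholded least squares for sparse coefficients."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold  # prune small library terms
        Xi[small] = 0.0
        for j in range(dXdt.shape[1]):  # refit the surviving terms
            big = ~small[:, j]
            if big.any():
                Xi[big, j] = np.linalg.lstsq(
                    Theta[:, big], dXdt[:, j], rcond=None
                )[0]
    return Xi

Xi = stlsq(Theta, Xdot)
terms = ["1", "x", "y", "x^2", "xy", "y^2"]
for j, name in enumerate(["x'", "y'"]):
    eq = " + ".join(f"{c:.3f}*{s}" for c, s in zip(Xi[:, j], terms) if c != 0)
    print(name, "=", eq)
```

Run as-is, this should recover approximately x' = y and y' = -x - 0.1y, with the remaining library terms pruned to zero, which is what makes the learned model interpretable.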
Syllabus
Intro
What is Reinforcement Learning?
Reinforcement Learning Drawbacks
Dictionary Learning and SINDy
SINDy-RL: Environment
SINDy-RL: Reward
SINDy-RL: Agent
SINDy-RL: Uncertainty Quantification
Recap and Outro
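As a rough illustration of how the environment, reward, and agent pieces in the syllabus fit together, the sketch below fits a sparse surrogate of a toy plant from a small batch of real transitions and then tunes a simple policy entirely against that cheap surrogate (a Dyna-style loop). The plant, library, cost function, and grid-search policy tuning are all hypothetical stand-ins for the benchmarks and DRL agents discussed in the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_step(s, a):
    """Hypothetical 1-D plant, unknown to the agent: s' = s + dt*(-s^3 + a)."""
    return s + 0.1 * (-s**3 + a)

# 1) Collect a small batch of real transitions with random actions.
S, A, S_next = [], [], []
s = 1.5
for _ in range(200):
    a = rng.uniform(-1, 1)
    s2 = true_step(s, a)
    S.append(s); A.append(a); S_next.append(s2)
    s = s2 if abs(s2) < 3 else rng.uniform(-2, 2)  # reset if it drifts away
S, A, S_next = map(np.array, (S, A, S_next))

# 2) Fit a sparse surrogate ds = Theta(s, a) @ xi by thresholded least squares.
Theta = np.column_stack([np.ones_like(S), S, A, S**2, S**3, S * A])
ds = S_next - S
xi = np.linalg.lstsq(Theta, ds, rcond=None)[0]
xi[np.abs(xi) < 1e-3] = 0.0  # sparsify: keep only significant terms

def surrogate_step(s, a):
    feats = np.array([1.0, s, a, s**2, s**3, s * a])
    return s + feats @ xi

# 3) Tune a linear policy a = -k*s on the surrogate alone, so the
#    expensive real environment is only used for the initial batch.
def rollout_cost(k, steps=50):
    s, cost = 1.5, 0.0
    for _ in range(steps):
        a = np.clip(-k * s, -1, 1)
        s = surrogate_step(s, a)
        cost += s**2 + 0.1 * a**2  # assumed quadratic regulation cost
    return cost

gains = np.linspace(0, 5, 51)
best_k = gains[np.argmin([rollout_cost(k) for k in gains])]
print("learned gain:", best_k)
```

The design point this sketch is meant to convey: because the surrogate is a small sparse model rather than a deep network, rollouts against it are cheap and its learned terms can be inspected directly, which is the source of the data-efficiency and interpretability claims in the overview.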
Taught by
Steve Brunton
Related Courses
Reinforcement Learning (Steve Brunton via YouTube)
Stanford CS330: Deep Multi-Task and Meta Learning (Stanford University via YouTube)
Mastering Atari with Discrete World Models - Machine Learning Research Paper Explained (Yannic Kilcher via YouTube)
Generalizable Autonomy for Robot Manipulation (Alexander Amini via YouTube)
RL Foundation Models Are Coming! (Edan Meyer via YouTube)