SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning
Offered By: Steve Brunton via YouTube
Course Description
Overview
Explore the SINDy-RL framework in this 21-minute video lecture by Steve Brunton. Delve into interpretable and efficient model-based reinforcement learning, which combines sparse identification of nonlinear dynamics (SINDy) with deep reinforcement learning (DRL). Learn how this approach builds efficient, interpretable, and trustworthy representations of dynamics models, reward functions, and control policies. Discover the advantages of SINDy-RL over traditional DRL methods, including reduced data requirements and smaller, more interpretable control policies. Follow along as the lecture covers reinforcement learning basics and its drawbacks, dictionary learning, and the components of SINDy-RL: environment modeling, reward function approximation, agent design, and uncertainty quantification. Gain insight into how the method applies to benchmark control environments and challenging fluid dynamics problems, with potential to improve control strategies in complex systems such as tokamak fusion reactors.
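The dictionary-learning idea at the heart of SINDy can be sketched in a few lines: regress observed time derivatives onto a library of candidate functions of the state, then prune small coefficients so that only a few interpretable terms survive. The sketch below is illustrative only (it is not the SINDy-RL code from the lecture); the library terms, threshold value, and the sequentially thresholded least squares (STLSQ) routine are assumptions for a toy two-state linear system.

```python
import numpy as np

def stlsq(theta, dxdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: a simple sparse regression
    scheme often used with SINDy. Solves dxdt ~ theta @ xi, repeatedly
    zeroing coefficients below `threshold` and refitting the rest."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        for k in range(dxdt.shape[1]):  # refit each state equation on surviving terms
            big = ~small[:, k]
            if big.any():
                xi[big, k] = np.linalg.lstsq(theta[:, big], dxdt[:, k], rcond=None)[0]
    return xi

# Toy dynamics (assumed for illustration): x' = -2x, y' = 3y.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(200, 2))
dxdt = np.column_stack([-2.0 * x[:, 0], 3.0 * x[:, 1]])

# Dictionary of candidate terms: [1, x, y, x^2, x*y, y^2]
theta = np.column_stack([
    np.ones(len(x)), x[:, 0], x[:, 1],
    x[:, 0] ** 2, x[:, 0] * x[:, 1], x[:, 1] ** 2,
])

xi = stlsq(theta, dxdt)
```

On this noiseless toy data the regression recovers a sparse coefficient matrix with only the `x` and `y` terms active, which is what makes the learned model human-readable, in contrast to a dense neural network dynamics model.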
Syllabus
Intro
What is Reinforcement Learning?
Reinforcement Learning Drawbacks
Dictionary Learning and SINDy
SINDy-RL: Environment
SINDy-RL: Reward
SINDy-RL: Agent
SINDy-RL: Uncertainty Quantification
Recap and Outro
Taught by
Steve Brunton
Related Courses
6.S094: Deep Learning for Self-Driving Cars (Massachusetts Institute of Technology via Independent)
Natural Language Processing (NLP) (Microsoft via edX)
Deep Reinforcement Learning (Nvidia Deep Learning Institute via Udacity)
Advanced AI: Deep Reinforcement Learning in Python (Udemy)
Self-driving go-kart with Unity-ML (Udemy)