SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning
Offered By: Steve Brunton via YouTube
Course Description
Overview
Explore the SINDy-RL framework in this 21-minute video lecture by Steve Brunton. Delve into interpretable and efficient model-based reinforcement learning, which combines sparse identification of nonlinear dynamics (SINDy) with deep reinforcement learning (DRL). Learn how this approach creates efficient, interpretable, and trustworthy representations of dynamics models, reward functions, and control policies, and discover its advantages over traditional DRL methods, including reduced data requirements and smaller, more interpretable control policies.

Follow along as the lecture covers reinforcement learning basics, its drawbacks, dictionary learning, and the components of SINDy-RL: environment modeling, reward function approximation, agent design, and uncertainty quantification. Gain insight into how the method can be applied to benchmark control environments and challenging fluids problems, with potential impact on control strategies in complex systems such as tokamak fusion reactors and fluid flows.
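To make the dictionary-learning idea concrete, here is a minimal, self-contained sketch of the SINDy regression step that the lecture builds on: fit sparse coefficients over a library of candidate terms so that the learned model matches observed derivatives. This is an illustrative toy (a known 2D linear system, a polynomial dictionary, and sequentially thresholded least squares), not code from the lecture or the SINDy-RL paper.

```python
import numpy as np

# Ground-truth system (assumed for illustration): dx/dt = -2x + 3y, dy/dt = -3x - 2y
A = np.array([[-2.0, 3.0], [-3.0, -2.0]])

# Simulate a trajectory with forward-Euler integration
dt = 0.001
t = np.arange(0.0, 5.0, dt)
X = np.zeros((len(t), 2))
X[0] = [2.0, 0.0]
for k in range(len(t) - 1):
    X[k + 1] = X[k] + dt * (A @ X[k])

# Estimate derivatives from data by finite differences
Xdot = np.gradient(X, dt, axis=0)

# Dictionary of candidate terms: [1, x, y, x^2, x*y, y^2]
x, y = X[:, 0], X[:, 1]
Theta = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

# Sequentially thresholded least squares (STLSQ): alternate a
# least-squares fit with zeroing out small coefficients
Xi = np.linalg.lstsq(Theta, Xdot, rcond=None)[0]
for _ in range(10):
    small = np.abs(Xi) < 0.5
    Xi[small] = 0.0
    for j in range(Xdot.shape[1]):
        keep = ~small[:, j]
        Xi[keep, j] = np.linalg.lstsq(Theta[:, keep], Xdot[:, j], rcond=None)[0]

# Xi should be sparse, with nonzero entries only on the linear terms,
# approximately recovering the rows of A
print(np.round(Xi.T, 2))
```

In SINDy-RL this kind of sparse surrogate replaces the true environment (and, analogously, the reward function), so the DRL agent can train on cheap, interpretable model rollouts instead of expensive full-environment samples.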
Syllabus
Intro
What is Reinforcement Learning?
Reinforcement Learning Drawbacks
Dictionary Learning and SINDy
SINDy-RL: Environment
SINDy-RL: Reward
SINDy-RL: Agent
SINDy-RL: Uncertainty Quantification
Recap and Outro
Taught by
Steve Brunton
Related Courses
Explainable AI (XAI) - Duke University via Coursera
Interpretable Machine Learning Applications: Part 1 - Coursera Project Network via Coursera
Interpretable Machine Learning Applications: Part 2 - Coursera Project Network via Coursera
Interpretable Machine Learning Applications: Part 3 - Coursera Project Network via Coursera
Interpretable Machine Learning Applications: Part 4 - Coursera Project Network via Coursera