ADSI Summer Workshop - Algorithmic Foundations of Learning and Control

Offered By: Paul G. Allen School via YouTube

Tags

Machine Learning Courses
Function Approximation Courses
Policy Gradient Courses

Course Description

Overview

Explore model-based control of physical systems in this 49-minute lecture from the 2019 ADSI Summer Workshop on Algorithmic Foundations of Learning and Control. Emo Todorov's presentation covers how models can do more than supply sample data, why model-based control works on real systems despite modeling errors, and the use of inverse dynamics for control. Learn about Acceleration-based Direct Optimization (ADO) through a 2D hopper example, and see how trajectory optimization can be combined with function approximation. The lecture then examines analytical policy gradients and draws a distinction between optimization and discovery, noting that discovery is usually done by humans and highlighting automated discovery with Contact Invariant Optimization (CIO).

Syllabus

Intro
Models can do more than sample data
Model-based control works on real systems, despite modeling errors
Inverse dynamics for control
Acceleration-based Direct Optimization (ADO)
Example: 2D hopper
Combining trajectory optimization and function approximation
Analytical policy gradient
Optimization vs. discovery
Discovery is usually done by humans
Automated discovery with Contact Invariant Optimization (CIO)


Taught by

Paul G. Allen School

Related Courses

Artificial Intelligence 2.0: AI, Python, DRL + ChatGPT Prize
Udemy
Reinforcement Learning Course
YouTube
Neural Networks
Serrano.Academy via YouTube
Stanford CS234: Reinforcement Learning - Winter 2019
Stanford University via YouTube
TF-Agents - A Flexible Reinforcement Learning Library for TensorFlow
TensorFlow via YouTube