
Decision Transformer - Reinforcement Learning via Sequence Modeling

Offered By: Yannic Kilcher via YouTube

Tags

Offline Reinforcement Learning Courses, Reinforcement Learning Courses, Sequence Modeling Courses, Transformer Architecture Courses

Course Description

Overview

Explore a comprehensive video explanation of the research paper "Decision Transformer: Reinforcement Learning via Sequence Modeling." Delve into the innovative approach of framing offline reinforcement learning as a sequence modeling problem, leveraging the power of Transformer architectures. Learn about the Decision Transformer model, which generates optimal actions by conditioning on desired returns, past states, and actions. Discover how this method compares to traditional value function and policy gradient approaches in reinforcement learning. Examine key concepts such as offline reinforcement learning, temporal difference learning, reward-to-go, and the context length problem. Analyze experimental results on various benchmarks and gain insights into the potential implications of this research for the field of reinforcement learning.
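To make the core idea concrete, here is a minimal PyTorch sketch (not the paper's or the video's official code) of how a Decision Transformer frames control as sequence modeling: returns-to-go, states, and actions are interleaved into one token stream, a causally masked Transformer processes it, and actions are predicted from the state tokens. The class name, dimensions, and hyperparameters below are illustrative assumptions.

```python
# Minimal sketch of the Decision Transformer idea (illustrative, not the official implementation):
# predict actions autoregressively from interleaved (return-to-go, state, action) tokens
# using a causally masked Transformer.
import torch
import torch.nn as nn

class DecisionTransformerSketch(nn.Module):
    def __init__(self, state_dim, act_dim, d_model=128, n_heads=4, n_layers=3, max_len=20):
        super().__init__()
        # Separate linear embeddings for returns-to-go, states, and actions,
        # plus a learned timestep embedding shared across the three token types.
        self.embed_rtg = nn.Linear(1, d_model)
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(act_dim, d_model)
        self.embed_timestep = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions, timesteps):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim), timesteps: (B, T)
        B, T = states.shape[:2]
        t_emb = self.embed_timestep(timesteps)
        # Interleave tokens per timestep as (R_1, s_1, a_1, R_2, s_2, a_2, ...).
        tokens = torch.stack(
            [self.embed_rtg(rtg) + t_emb,
             self.embed_state(states) + t_emb,
             self.embed_action(actions) + t_emb],
            dim=2,
        ).reshape(B, 3 * T, -1)
        # Causal mask so each token only attends to the past.
        mask = nn.Transformer.generate_square_subsequent_mask(3 * T)
        h = self.transformer(tokens, mask=mask)
        h = h.reshape(B, T, 3, -1)
        # Predict a_t from the hidden state at the state token.
        return self.predict_action(h[:, :, 1])

def returns_to_go(rewards):
    # Reward-to-go at step t is the sum of rewards from t to the end of the trajectory.
    return torch.flip(torch.cumsum(torch.flip(rewards, [0]), 0), [0])
```

At evaluation time, the model is prompted with a desired target return; as the episode unfolds, the return-to-go is decremented by the rewards actually received, which is the conditioning mechanism the video discusses in the "Sequence Modeling and Reward-to-go" segment.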

Syllabus

- Intro & Overview
- Offline Reinforcement Learning
- Transformers in RL
- Value Functions and Temporal Difference Learning
- Sequence Modeling and Reward-to-go
- Why this is ideal for offline RL
- The context length problem
- Toy example: Shortest path from random walks
- Discount factors
- Experimental Results
- Do you need to know the best possible reward?
- Key-to-door toy experiment
- Comments & Conclusion


Taught by

Yannic Kilcher

Related Courses

Artificial Intelligence Foundations: Neural Networks
LinkedIn Learning
Transformers: Text Classification for NLP Using BERT
LinkedIn Learning
TensorFlow: Working with NLP
LinkedIn Learning
Learn Natural Language Processing with BERT! - NLP Techniques Leading from Attention and Transformer to BERT
Udemy
Complete Natural Language Processing Tutorial in Python
Keith Galli via YouTube