Can Wikipedia Help Offline Reinforcement Learning? - Author Interview
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore an in-depth interview with authors Machel Reid and Yutaro Yamada discussing their research on leveraging pre-trained language models for offline reinforcement learning. Delve into the experimental results, challenges, and insights gained from applying Wikipedia-trained models to control and game environments. Learn about the potential of transferring knowledge between generative modeling tasks across different domains, the impact on convergence speed and performance, and the implications for future research in reinforcement learning and sequence modeling. Gain valuable perspectives on model architectures, attention patterns, and computational requirements, along with practical advice for getting started in this emerging field.
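To make the core idea concrete, here is a minimal sketch (not the authors' actual code) of the approach discussed in the interview: a Decision-Transformer-style policy whose backbone is initialized from a language model pre-trained on text, then fine-tuned on offline RL trajectories. The class name and dimensions (`state_dim`, `act_dim`) are illustrative assumptions, and `gpt2` stands in for whichever pre-trained checkpoint is used.

```python
# Illustrative sketch: reuse a text-pre-trained LM as the sequence
# backbone for offline RL, in the Decision Transformer input format.
import torch
import torch.nn as nn
from transformers import GPT2Model

class PretrainedDecisionTransformer(nn.Module):
    def __init__(self, state_dim, act_dim, hidden_size=768):
        super().__init__()
        # Load pre-trained weights instead of training from scratch;
        # the paper's finding is that this speeds up convergence.
        self.backbone = GPT2Model.from_pretrained("gpt2")
        # Project returns-to-go, states, and actions into the LM's
        # embedding space.
        self.embed_return = nn.Linear(1, hidden_size)
        self.embed_state = nn.Linear(state_dim, hidden_size)
        self.embed_action = nn.Linear(act_dim, hidden_size)
        self.predict_action = nn.Linear(hidden_size, act_dim)

    def forward(self, returns_to_go, states, actions):
        # Interleave (return, state, action) tokens along the sequence:
        # the trajectory becomes "text" for the language model.
        B, T, _ = states.shape
        tokens = torch.stack(
            (self.embed_return(returns_to_go),
             self.embed_state(states),
             self.embed_action(actions)), dim=2
        ).reshape(B, 3 * T, -1)
        hidden = self.backbone(inputs_embeds=tokens).last_hidden_state
        # Predict each next action from the hidden state at the
        # corresponding state token (positions 1, 4, 7, ...).
        return self.predict_action(hidden[:, 1::3])
```

Fine-tuning then minimizes a standard behavior-cloning loss (e.g. MSE between predicted and logged actions) on the offline dataset, exactly as with a randomly initialized Decision Transformer.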
Syllabus
- Intro
- Brief paper, setup & idea recap
- Main experimental results & high standard deviations
- Why is there no clear winner?
- Why are bigger models not a lot better?
- What’s behind the name ChibiT?
- Why is iGPT underperforming?
- How are tokens distributed in Reinforcement Learning?
- What other domains could have good properties to transfer?
- A deeper dive into the models' attention patterns
- Codebase, model sizes, and compute requirements
- Scaling behavior of pre-trained models
- What did not work out in this project?
- How can people get started and where to go next?
Taught by
Yannic Kilcher
Related Courses
- Batch Offline Reinforcement Learning - Part 1 (Simons Institute via YouTube)
- Can Wikipedia Help Offline Reinforcement Learning? - Paper Explained (Yannic Kilcher via YouTube)
- CAP6412 - Final Project Presentations - Lecture 27 (University of Central Florida via YouTube)
- Bayesian RL (Pascal Poupart via YouTube)
- Datasets for Data-Driven Reinforcement Learning (Yannic Kilcher via YouTube)