Can Wikipedia Help Offline Reinforcement Learning - Author Interview
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore an in-depth interview with authors Machel Reid and Yutaro Yamada discussing their research on leveraging pre-trained language models for offline reinforcement learning. Delve into the experimental results, challenges, and insights gained from applying Wikipedia-trained models to control and game environments. Learn about the potential of transferring knowledge between generative modeling tasks across different domains, the impact on convergence speed and performance, and the implications for future research in reinforcement learning and sequence modeling. Gain valuable perspectives on model architectures, attention patterns, computational requirements, and practical advice for getting started in this emerging field.
Syllabus
- Intro
- Brief paper, setup & idea recap
- Main experimental results & high standard deviations
- Why is there no clear winner?
- Why are bigger models not a lot better?
- What’s behind the name ChibiT?
- Why is iGPT underperforming?
- How are tokens distributed in Reinforcement Learning?
- What other domains could have good properties to transfer?
- A deeper dive into the models' attention patterns
- Codebase, model sizes, and compute requirements
- Scaling behavior of pre-trained models
- What did not work out in this project?
- How can people get started and where to go next?
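The setup recapped in the interview treats offline RL as sequence modeling in the Decision Transformer style: trajectories are serialized into interleaved (return-to-go, state, action) triples that a language-model backbone is fine-tuned on. The sketch below is an illustrative reconstruction of that token layout, not the authors' code; the function names and the tuple representation are assumptions made for the example.

```python
# Sketch of Decision-Transformer-style trajectory serialization for
# offline RL. Illustrative only: helper names and the (tag, value)
# token representation are assumptions, not the paper's actual code.

def returns_to_go(rewards):
    """Suffix sums of rewards: R_t = sum of r_{t'} for t' >= t."""
    rtg, running = [], 0.0
    for r in reversed(rewards):
        running += r
        rtg.append(running)
    return list(reversed(rtg))

def serialize_trajectory(states, actions, rewards):
    """Interleave (return-to-go, state, action) triples into one flat
    sequence -- the token stream a pre-trained language model would be
    fine-tuned on in this setup."""
    rtg = returns_to_go(rewards)
    tokens = []
    for R, s, a in zip(rtg, states, actions):
        tokens.extend([("rtg", R), ("state", s), ("action", a)])
    return tokens

# Example: a 3-step trajectory with rewards 1, 0, 2.
seq = serialize_trajectory(states=["s0", "s1", "s2"],
                           actions=["a0", "a1", "a2"],
                           rewards=[1.0, 0.0, 2.0])
# The returns-to-go are [3.0, 2.0, 2.0], and the sequence has
# 3 tokens per timestep.
```

This also illustrates the syllabus question on how tokens are distributed in RL: unlike text, every third token comes from a distinct modality (return, state, action), which is part of why transfer from language pre-training is a non-obvious result.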
Taught by
Yannic Kilcher
Related Courses
- Perform Real-Time Object Detection with YOLOv3 (Coursera Project Network via Coursera)
- Intel® Edge AI Fundamentals with OpenVINO™ (Intel via Udacity)
- Building Deep Learning Applications with Keras 2.0 (LinkedIn Learning)
- Expediting Deep Learning with Transfer Learning: PyTorch Playbook (Pluralsight)
- 2024 Introduction to Spacy for Natural Language Processing (Udemy)