Can Wikipedia Help Offline Reinforcement Learning? - Paper Explained

Offered By: Yannic Kilcher via YouTube

Tags

Reinforcement Learning Courses, Language Models Courses, Sequence Modeling Courses, Offline Reinforcement Learning Courses

Course Description

Overview

Explore a comprehensive analysis of a research paper examining the potential of Wikipedia to enhance offline reinforcement learning. Delve into the innovative approach of treating reinforcement learning as sequence modeling, leveraging pre-trained language models to improve performance in control and game tasks. Discover how this method accelerates training by 3-6 times and achieves state-of-the-art results across various environments. Gain insights into the experimental findings, attention pattern analysis, and scaling properties of this novel technique. Understand the implications for bridging the gap between language modeling and reinforcement learning, opening new avenues for knowledge transfer between seemingly disparate domains.
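
To make the "RL as sequence modeling" idea concrete, the sketch below shows one way a pretrained GPT-2 backbone (loaded via the Hugging Face transformers library) could be wired into a Decision-Transformer-style model, with linear layers projecting returns, states, and actions into the language model's embedding space. This is an illustrative approximation, not the paper's exact implementation: the class name DecisionSequenceModel and the projection layers are hypothetical, and details such as the paper's input-embedding alignment objective and timestep embeddings are omitted.

```python
import torch
import torch.nn as nn
from transformers import GPT2Model  # pip install transformers


class DecisionSequenceModel(nn.Module):
    """Illustrative Decision-Transformer-style model on a pretrained GPT-2 backbone."""

    def __init__(self, state_dim: int, act_dim: int, hidden_size: int = 768):
        super().__init__()
        # Pretrained GPT-2 weights supply the language-model prior (the "Wikipedia" knowledge).
        # hidden_size must match the backbone (768 for the base "gpt2" checkpoint).
        self.backbone = GPT2Model.from_pretrained("gpt2")
        # Linear projections map continuous RL quantities into the LM embedding space.
        self.embed_return = nn.Linear(1, hidden_size)
        self.embed_state = nn.Linear(state_dim, hidden_size)
        self.embed_action = nn.Linear(act_dim, hidden_size)
        self.predict_action = nn.Linear(hidden_size, act_dim)

    def forward(self, returns_to_go, states, actions):
        # returns_to_go: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        B, T = states.shape[0], states.shape[1]
        r = self.embed_return(returns_to_go)
        s = self.embed_state(states)
        a = self.embed_action(actions)
        # Interleave tokens per timestep as (return_t, state_t, action_t).
        tokens = torch.stack((r, s, a), dim=2).reshape(B, 3 * T, -1)
        hidden = self.backbone(inputs_embeds=tokens).last_hidden_state
        # Predict the next action from each state-token position.
        return self.predict_action(hidden[:, 1::3])


# Hypothetical usage for a MuJoCo-style task (17-dim states, 6-dim actions):
model = DecisionSequenceModel(state_dim=17, act_dim=6)
pred = model(torch.zeros(1, 20, 1), torch.zeros(1, 20, 17), torch.zeros(1, 20, 6))
print(pred.shape)  # torch.Size([1, 20, 6])
```

In a setup like this, the model would be trained on offline trajectories with a standard behavior-cloning loss on the predicted actions, while the pretrained backbone provides the initialization that the paper credits for faster convergence.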

Syllabus

- Intro
- Paper Overview
- Offline Reinforcement Learning as Sequence Modelling
- Input Embedding Alignment & other additions
- Main experimental results
- Analysis of the attention patterns across models
- More experimental results: scaling properties, ablations, etc.
- Final thoughts


Taught by

Yannic Kilcher

Related Courses

- Computational Neuroscience (University of Washington via Coursera)
- Reinforcement Learning (Brown University via Udacity)
- Reinforcement Learning (Indian Institute of Technology Madras via Swayam)
- FA17: Machine Learning (Georgia Institute of Technology via edX)
- Introduction to Reinforcement Learning (Higher School of Economics via Coursera)