Improving Intrinsic Exploration with Language Abstractions - Machine Learning Paper Explained

Offered By: Yannic Kilcher via YouTube

Tags

- Natural Language Processing (NLP) Courses
- Artificial Intelligence Courses
- Reinforcement Learning Courses
- Algorithm Analysis Courses

Course Description

Overview

Explore the concept of using language abstractions to improve intrinsic exploration in reinforcement learning through this in-depth video explanation. Dive into the challenges of sparse reward environments and how language descriptions of encountered states can be used to assess novelty. Learn about the MiniGrid and MiniHack environments, and understand how states are annotated with language. Examine baseline algorithms like AMIGo and NovelD, and discover how language is integrated into these methods. Analyze experimental results and consider the implications of using language-based variants for intrinsic exploration in challenging tasks. Gain insights into the potential of natural language as a medium for highlighting relevant abstractions in reinforcement learning environments.
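The core idea described above — measuring novelty over language descriptions of states rather than raw observations — can be sketched with a simple count-based bonus. This is an illustrative toy, not the paper's exact L-AMIGo or L-NovelD algorithm; the class name, the inverse-square-root decay, and the example messages are assumptions chosen for clarity.

```python
import math
from collections import defaultdict

class LanguageNoveltyBonus:
    """Toy count-based intrinsic reward over language annotations.

    Because many raw states map to the same description (e.g. "you see
    a key"), counting descriptions groups semantically equivalent states
    together, which is the abstraction the paper exploits.
    """

    def __init__(self):
        # Visit counts keyed by the language description of a state.
        self.counts = defaultdict(int)

    def reward(self, message: str) -> float:
        """Return an intrinsic bonus that decays as a description repeats."""
        self.counts[message] += 1
        return 1.0 / math.sqrt(self.counts[message])

bonus = LanguageNoveltyBonus()
r1 = bonus.reward("you see a key")   # novel description -> full bonus 1.0
r2 = bonus.reward("you see a key")   # repeated -> decayed to 1/sqrt(2)
r3 = bonus.reward("the door opens")  # different description -> full bonus 1.0
```

In the actual methods discussed in the video, this notion of language novelty is combined with learned components (a goal-proposing teacher in AMIGo, prediction-error novelty in NovelD), but the counting sketch captures why coarse language abstractions make exploration bonuses less sensitive to irrelevant pixel-level variation.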

Syllabus

- Intro
- Paper Overview: Language for exploration
- The MiniGrid & MiniHack environments
- Annotating states with language
- Baseline algorithm: AMIGo
- Adding language to AMIGo
- Baseline algorithms: NovelD and Random Network Distillation
- Adding language to NovelD
- Aren't we just using extra data?
- Investigating the experimental results
- Final comments


Taught by

Yannic Kilcher

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Artificial Intelligence for Robotics
Stanford University via Udacity
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent