Representation-driven Option Discovery in Reinforcement Learning
Offered By: GERAD Research Center via YouTube
Course Description
Overview
Explore the concept of representation-driven option discovery in reinforcement learning through this 56-minute coffee talk presented by Marlos C. Machado from the University of Alberta. Delve into a fundamental aspect of intelligence: the ability to reason at multiple levels of temporal abstraction. Examine how options, as temporally extended courses of action, model this attribute in reinforcement learning. Investigate why options are rarely included as explicit components in traditional solutions within the field, and understand their crucial role in continual learning. Learn about a general framework for option discovery that uses the agent's representation to uncover useful options. Discover how leveraging these options generates a rich stream of experience that lets agents improve their representations and learn more effectively. Gain insights into the virtuous cycle of refinement created by this approach, which continuously improves both the representation and the options. Understand why this method is particularly effective for problems that require agents to operate at different levels of abstraction to succeed.
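To make the options framework referenced above concrete, here is a minimal, illustrative Python sketch of an option as an (initiation set, intra-option policy, termination condition) triple, followed by a loop capturing the representation-to-options-to-experience cycle described in the overview. The helper callables learn_representation, discover_options, and collect_experience are hypothetical placeholders; this sketch does not reproduce the specific method presented in the talk.

import random


class Option:
    """A temporally extended course of action in the classic
    (initiation set, intra-option policy, termination condition) form."""

    def __init__(self, initiation_set, policy, termination_prob):
        self.initiation_set = initiation_set       # states where the option can be invoked
        self.policy = policy                       # state -> primitive action
        self.termination_prob = termination_prob   # state -> probability of stopping

    def can_start(self, state):
        return state in self.initiation_set

    def act(self, state):
        return self.policy(state)

    def terminates(self, state, rng=random):
        return rng.random() < self.termination_prob(state)


def representation_option_cycle(env, n_rounds,
                                learn_representation,
                                discover_options,
                                collect_experience):
    """Illustrative virtuous cycle: experience gathered with the current options
    improves the representation, which is then used to discover new options that
    in turn generate a richer stream of experience."""
    options = []                                   # begin with primitive actions only
    experience = collect_experience(env, options)
    representation = None
    for _ in range(n_rounds):
        representation = learn_representation(experience)
        options = discover_options(representation)
        experience = collect_experience(env, options)
    return representation, options

The three placeholder callables would be supplied by a concrete agent; the point of the sketch is only the ordering of the loop, in which representation learning and option discovery feed each other.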
Syllabus
Representation-driven Option Discovery in Reinforcement Learning, Marlos C. Machado
Taught by
GERAD Research Center
Related Courses
Introduction to Artificial Intelligence
Stanford University via Udacity
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Artificial Intelligence for Robotics
Stanford University via Udacity
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent