Representation-driven Option Discovery in Reinforcement Learning
Offered By: GERAD Research Center via YouTube
Course Description
Overview
Explore the concept of representation-driven option discovery in reinforcement learning through this 56-minute coffee talk presented by Marlos C. Machado from the University of Alberta. Delve into the fundamental aspects of intelligence, focusing on the ability to reason at multiple levels of temporal abstraction. Examine how options, as temporally extended courses of action, model this attribute in reinforcement learning. Investigate why options are rarely included as explicit components in traditional solutions within the field, and understand their crucial role in continual learning. Learn about a general framework for option discovery that uses the agent's representation to uncover useful options. Discover how leveraging these options generates a rich stream of experience, enabling agents to improve their representations and learn more effectively. Gain insights into the virtuous cycle of refinement created by this approach, which continuously enhances both the representation and the options. Understand why this method is particularly effective for problems that require agents to operate at different levels of abstraction to succeed.
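For viewers unfamiliar with the terminology, the talk builds on the standard options framework (Sutton, Precup & Singh, 1999), in which an option is a triple consisting of an initiation set, an intra-option policy, and a termination condition. The sketch below is a minimal, hypothetical illustration of that triple in Python; the environment interface and all names are assumptions for illustration, not code from the talk.

```python
# Minimal sketch of an option as a triple (initiation set, intra-option
# policy, termination condition), as in the standard options framework.
# Names and the `env.step` interface are illustrative assumptions.
from dataclasses import dataclass
from typing import Any, Callable

State = Any
Action = Any


@dataclass
class Option:
    initiation_set: Callable[[State], bool]  # I: states where the option may start
    policy: Callable[[State], Action]        # pi: action chosen while the option runs
    termination: Callable[[State], float]    # beta: probability of stopping in a state


def run_option(env, state: State, option: Option, rng) -> State:
    """Execute an option until its termination condition fires.

    `env.step(action)` is assumed to return the next state; this is a
    generic sketch, not a specific library's API.
    """
    assert option.initiation_set(state), "option not available in this state"
    while True:
        action = option.policy(state)
        state = env.step(action)
        if rng.random() < option.termination(state):
            return state
```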
Syllabus
Representation-driven Option Discovery in Reinforcement Learning, Marlos C. Machado
Taught by
GERAD Research Center
Related Courses
Computational Neuroscience (University of Washington via Coursera)
Reinforcement Learning (Brown University via Udacity)
Reinforcement Learning (Indian Institute of Technology Madras via Swayam)
FA17: Machine Learning (Georgia Institute of Technology via edX)
Introduction to Reinforcement Learning (Higher School of Economics via Coursera)