A Neurally Plausible Model Learns Successor Representations in Partially Observable Environments
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore a neurally plausible model that learns successor representations in partially observable environments through this in-depth video analysis. Delve into the intersection of model-based and model-free reinforcement learning, focusing on how animals devise strategies to maximize returns in noisy settings with incomplete information. Examine the concept of distributional successor features and their role in efficient value function computation. Discover how this model supports reinforcement learning in challenging environments where direct policy learning is impractical. Investigate the neural response features consistent with the successor representation framework and their implications for understanding animal behavior and decision-making.
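As background for the video, the successor representation idea can be sketched minimally: under a fixed policy, the SR matrix M holds expected discounted future state occupancies, and the value function factors as V = M r. The toy transition matrix, reward vector, and discount below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy 3-state Markov chain under a fixed policy (illustrative values only).
P = np.array([[0.9, 0.1, 0.0],   # state-to-state transition probabilities
              [0.0, 0.8, 0.2],
              [0.1, 0.0, 0.9]])
r = np.array([0.0, 0.0, 1.0])    # reward received in each state
gamma = 0.95                     # discount factor

# Successor representation: M = (I - gamma * P)^-1, so M[i, j] is the
# expected discounted number of future visits to state j starting from i.
M = np.linalg.inv(np.eye(3) - gamma * P)

# The value function factors through the SR: V = M @ r, which lets new
# reward functions be evaluated without relearning the dynamics.
V = M @ r

# Sanity check: V satisfies the Bellman equation V = r + gamma * P @ V.
assert np.allclose(V, r + gamma * P @ V)
```

Because M depends only on the dynamics and policy, swapping in a different reward vector r gives a new value function with a single matrix-vector product, which is the efficiency the description alludes to.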
Syllabus
Introduction
Reinforcement learning
Successor representations
Value functions
Continuous space
Distributional coding
Wake and sleep
Mu
Taught by
Yannic Kilcher
Related Courses
Computational Neuroscience - University of Washington via Coursera
Reinforcement Learning - Brown University via Udacity
Reinforcement Learning - Indian Institute of Technology Madras via Swayam
FA17: Machine Learning - Georgia Institute of Technology via edX
Introduction to Reinforcement Learning - Higher School of Economics via Coursera