Computational Principles Underlying the Learning of Sensorimotor Repertoires
Offered By: Massachusetts Institute of Technology via YouTube
Course Description
Overview
Explore the computational principles underlying sensorimotor learning in this MIT lecture by Daniel Wolpert. Delve into how continuous sensorimotor experience is segmented into separate memories and how a growing motor repertoire adapts. Examine the role of context in activating motor memories and how statistical learning leads to multimodal object representations. Discover a principled theory of motor learning based on contextual inference that challenges dominant single-context learning theories. Learn how this model explains key features of motor learning and predicts novel phenomena confirmed through experiments. Investigate topics such as follow-through, tool use, control points, spontaneous recovery, memory updating, single-trial learning, savings, and explicit-implicit learning. Gain insights from Wolpert's extensive research background in neuroscience, engineering, and physiology, and understand the implications of contextual inference as a fundamental principle of motor behavior.
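To give a flavor of the contextual-inference idea the lecture builds on, here is a minimal toy sketch (not the lecture's actual model): the learner keeps one motor memory per context, infers from a noisy sensory cue which context is currently active, and updates every memory in proportion to its posterior responsibility. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

# Toy sketch of contextual inference in motor learning (illustrative only):
# two contexts impose opposite perturbations; the learner infers the active
# context from a noisy cue and assigns credit accordingly.

rng = np.random.default_rng(0)

n_contexts = 2
memories = np.zeros(n_contexts)        # learned compensation per context
prior = np.full(n_contexts, 0.5)       # prior belief over contexts
cue_means = np.array([-1.0, 1.0])      # expected cue under each context
cue_sd = 0.5                           # cue noise (assumed)
lr = 0.2                               # learning rate (assumed)

true_perturbations = np.array([-10.0, 10.0])  # e.g. opposing force fields

for trial in range(200):
    context = trial % 2                       # contexts alternate across trials
    cue = rng.normal(cue_means[context], cue_sd)

    # Posterior responsibility of each context given the cue (Bayes' rule
    # with Gaussian cue likelihoods)
    lik = np.exp(-0.5 * ((cue - cue_means) / cue_sd) ** 2)
    resp = prior * lik
    resp /= resp.sum()

    # Motor output is the responsibility-weighted mixture of memories
    output = resp @ memories
    error = true_perturbations[context] - output

    # Credit assignment: each memory learns in proportion to its responsibility
    memories += lr * resp * error

print(np.round(memories, 1))  # each memory settles near its own perturbation
```

Because the cues are well separated, the responsibilities are usually close to one-hot, so each memory ends up compensating for its own context's perturbation rather than both memories averaging the two, which is the failure mode of a single-context learner.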
Syllabus
Intro
Overview
Question
Rules for motor memories
Follow-through
Follow-through replication
Tool use
Learning tools
Control points
Generative model
Model details
Toy problem
Spontaneous recovery
Memory updating
Single-trial learning
Savings
Explicit-implicit learning
Conclusion
Taught by
MIT Embodied Intelligence
Tags
Related Courses
Computational Neuroscience - University of Washington via Coursera
Neuronal Dynamics - École Polytechnique Fédérale de Lausanne via edX
Computational Neuroscience: Neuronal Dynamics of Cognition - École Polytechnique Fédérale de Lausanne via edX
The Multi-scale brain - École Polytechnique Fédérale de Lausanne via edX