Stanford Seminar - Can the Brain Do Back-Propagation?

Offered By: Stanford University via YouTube

Tags

Machine Learning Courses, Unsupervised Learning Courses, Neuroscience Courses

Course Description

Overview

Explore a Stanford seminar examining the brain's capacity for back-propagation in neural networks. Delve into online stochastic gradient descent, challenges preventing the brain from performing backprop, and alternative sources of supervision. Investigate the wake-sleep algorithm, unsupervised learning methods, and the brain's ability to communicate real values. Analyze the relationship between statistics and neuroscience, comparing big data to big models. Examine dropout as a form of model averaging and various types of noise in hidden activities. Learn about the transmission of derivatives, temporal derivatives as error representations, and the combination of STDP with reverse STDP. Discover potential neuroscientific observations, the purpose of top-down passes, and methods for encoding top-level error derivatives. Investigate feedback alignment and its effectiveness in neural networks.
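As an orientation to the first topic mentioned above, here is a minimal sketch of online stochastic gradient descent on a toy linear model. The data, learning rate, and variable names are illustrative assumptions, not material taken from the seminar.

```python
# Minimal sketch of online stochastic gradient descent (SGD):
# the weights are updated after every single example rather than
# after a full pass over the data. Toy data, illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: y = X @ w_true + noise
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)

w = np.zeros(5)   # model weights
lr = 0.05         # learning rate

for epoch in range(10):
    for x_i, y_i in zip(X, y):
        err = x_i @ w - y_i    # prediction error for this one example
        grad = err * x_i       # gradient of 0.5 * err**2 w.r.t. w
        w -= lr * grad         # immediate, per-example weight update

print("recovered weights:", np.round(w, 3))
print("true weights:     ", np.round(w_true, 3))
```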

Syllabus

Introduction.
Online stochastic gradient descent.
Four reasons why the brain cannot do backprop.
Sources of supervision that allow backprop learning without a separate supervision signal.
The wake-sleep algorithm (Hinton et al., 1995).
New methods for unsupervised learning.
Conclusion about supervision signals.
Can neurons communicate real values?
Statistics and the brain.
Big data versus big models.
Dropout as a form of model averaging.
Different kinds of noise in the hidden activities.
How are the derivatives sent backwards?
A fundamental representational decision: temporal derivatives represent error derivatives.
An early use of the idea that temporal derivatives encode error derivatives (Hinton & McClelland, 1988).
Combining STDP with reverse STDP.
If this is what is happening, what should neuroscientists see?
What the two top-down passes achieve.
A way to encode the top-level error derivatives.
A consequence of using temporal derivatives to code error derivatives.
The next problem.
Now a miracle occurs.
Why does feedback alignment work?
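
The syllabus closes on feedback alignment: the observation that error signals can be carried backwards through fixed random weights rather than the exact transpose of the forward weights, with the forward weights gradually coming into alignment with the feedback path. Below is a toy sketch of that mechanism on a two-layer network; the architecture, constants, and data are illustrative assumptions, not material from the talk.

```python
# Toy sketch of feedback alignment: the error is projected back to the
# hidden layer through a fixed random matrix B instead of W2.T
# (which exact backpropagation would require). Illustrative only.
import numpy as np

rng = np.random.default_rng(1)

# Toy regression problem
X = rng.normal(size=(500, 10))
w_target = rng.normal(size=(10, 1))
Y = np.tanh(X @ w_target)

# Two-layer network: input -> hidden (tanh) -> linear output
W1 = 0.1 * rng.normal(size=(10, 20))
W2 = 0.1 * rng.normal(size=(20, 1))
B = 0.1 * rng.normal(size=(1, 20))   # fixed random feedback weights
lr = 0.01

for step in range(2000):
    # Forward pass
    h = np.tanh(X @ W1)
    y_hat = h @ W2
    e = y_hat - Y                      # output error

    # Backward pass: use B, not W2.T, to send the error backwards
    delta_h = (e @ B) * (1.0 - h**2)   # tanh derivative

    # Weight updates, averaged over the batch
    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ delta_h / len(X)

print("final mean squared error:", float(np.mean(e**2)))
```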


Taught by

Stanford Online

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent