Stanford Seminar - Can the Brain Do Back-Propagation?

Offered By: Stanford University via YouTube

Tags

Machine Learning Courses, Unsupervised Learning Courses, Neuroscience Courses

Course Description

Overview

Explore a Stanford seminar examining the brain's capacity for back-propagation in neural networks. Delve into online stochastic gradient descent, challenges preventing the brain from performing backprop, and alternative sources of supervision. Investigate the wake-sleep algorithm, unsupervised learning methods, and the brain's ability to communicate real values. Analyze the relationship between statistics and neuroscience, comparing big data to big models. Examine dropout as a form of model averaging and various types of noise in hidden activities. Learn about the transmission of derivatives, temporal derivatives as error representations, and the combination of STDP with reverse STDP. Discover potential neuroscientific observations, the purpose of top-down passes, and methods for encoding top-level error derivatives. Investigate feedback alignment and its effectiveness in neural networks.
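For orientation, the sketch below (an illustration under simple assumptions, not code from the seminar) shows online stochastic gradient descent, the learning rule the talk takes as its starting point: a single linear neuron updates its weights after every individual example rather than after a full batch. The teacher weights, noise level, and learning rate are hypothetical.

    # Minimal sketch of online stochastic gradient descent (illustration only).
    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -3.0])            # hypothetical "teacher" weights
    w = np.zeros(2)                           # learner's weights
    lr = 0.01                                 # learning rate

    for step in range(10_000):
        x = rng.normal(size=2)                # one example at a time
        y = true_w @ x + rng.normal(0, 0.1)   # noisy supervised target
        y_hat = w @ x                         # prediction
        grad = (y_hat - y) * x                # gradient of 0.5 * (y_hat - y)**2
        w -= lr * grad                        # immediate online update

    print(w)                                  # approaches true_w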

Syllabus

Introduction.
Online stochastic gradient descent.
Four reasons why the brain cannot do backprop.
Sources of supervision that allow backprop learning without a separate supervision signal.
The wake-sleep algorithm (Hinton et al., 1995).
New methods for unsupervised learning.
Conclusion about supervision signals.
Can neurons communicate real values?
Statistics and the brain.
Big data versus big models.
Dropout as a form of model averaging.
Different kinds of noise in the hidden activities.
How are the derivatives sent backwards?
A fundamental representational decision: temporal derivatives represent error derivatives.
An early use of the idea that temporal derivatives encode error derivatives (Hinton & McClelland, 1988).
Combining STDP with reverse STDP.
If this is what is happening, what should neuroscientists see?
What the two top-down passes achieve.
A way to encode the top-level error derivatives.
A consequence of using temporal derivatives to code error derivatives.
The next problem.
Now a miracle occurs.
Why does feedback alignment work? (see the sketch below)
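The final syllabus item asks why feedback alignment works. Below is a minimal sketch (my illustration, not the seminar's code) of the idea: the backward pass sends the output error through a fixed random matrix B instead of the transpose of the forward weights W2, avoiding the weight transport that makes exact backprop biologically implausible, yet the hidden weights still learn. The network sizes, target mapping, and learning rate are hypothetical.

    # Minimal sketch of feedback alignment in a two-layer network (illustration only).
    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_hid, n_out = 8, 16, 4
    W1 = rng.normal(0, 0.1, (n_hid, n_in))    # forward weights, layer 1
    W2 = rng.normal(0, 0.1, (n_out, n_hid))   # forward weights, layer 2
    B  = rng.normal(0, 0.1, (n_hid, n_out))   # fixed random feedback weights
    T  = rng.normal(0, 0.5, (n_out, n_in))    # hypothetical target mapping
    lr = 0.01                                 # learning rate

    for step in range(20_000):
        x = rng.normal(size=n_in)
        y = T @ x                             # supervised target
        h = np.tanh(W1 @ x)                   # hidden activity
        y_hat = W2 @ h                        # linear output
        e = y_hat - y                         # output error
        # Backprop would send the error through W2.T; feedback alignment
        # sends it through the fixed random matrix B instead.
        dh = (B @ e) * (1 - h ** 2)           # error signal reaching the hidden layer
        W2 -= lr * np.outer(e, h)
        W1 -= lr * np.outer(dh, x)

    print(np.mean(e ** 2))                    # error typically shrinks without weight transport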


Taught by

Stanford Online

Related Courses

Machine Learning: Unsupervised Learning
Brown University via Udacity
Practical Predictive Analytics: Models and Methods
University of Washington via Coursera
Finding Structure in Data (Поиск структуры в данных)
Moscow Institute of Physics and Technology via Coursera
Statistical Machine Learning
Carnegie Mellon University via Independent
FA17: Machine Learning
Georgia Institute of Technology via edX