
Gaussian Pre-Activations in Neural Networks: Myth or Reality?

Offered By: Finnish Center for Artificial Intelligence FCAI via YouTube

Tags

Neural Networks Courses
Activation Functions Courses

Course Description

Overview

Explore the intricacies of Gaussian pre-activations in neural networks in this 45-minute conference talk by Pierre Wolinski at the Finnish Center for Artificial Intelligence. Delve into the construction of activation functions and initialization distributions that ensure Gaussian pre-activations throughout the network's depth, even in narrow neural networks. Examine a critical review of claims from the Edge of Chaos literature and a unified view of pre-activation propagation. Gain insights into information propagation in deep, narrow neural networks, comparing the ReLU and tanh activation functions under Kaiming and Xavier initializations. Learn about the speaker's background in neural network pruning and Bayesian neural networks, and his current research on information propagation during initialization and training.
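The comparison described above (tanh with Xavier initialization versus ReLU with Kaiming initialization in deep, narrow networks) can be probed empirically in a few lines of NumPy. The following is a minimal sketch, not code from the talk: the width, depth, sample count, and the use of SciPy's D'Agostino-Pearson normality test are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

def last_preactivations(width=8, depth=50, n_samples=5000, act="tanh", seed=0):
    """Push Gaussian inputs through a deep, narrow MLP and return the
    pre-activations of the final layer. All settings are illustrative."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, width))  # i.i.d. Gaussian inputs
    for _ in range(depth):
        if act == "tanh":
            # Xavier/Glorot initialization: Var(W_ij) = 1 / fan_in
            W = rng.standard_normal((width, width)) / np.sqrt(width)
        else:
            # Kaiming/He initialization: Var(W_ij) = 2 / fan_in
            W = rng.standard_normal((width, width)) * np.sqrt(2.0 / width)
        z = x @ W.T  # pre-activations of this layer
        x = np.tanh(z) if act == "tanh" else np.maximum(z, 0.0)
    return z

for act in ("tanh", "relu"):
    z = last_preactivations(act=act)
    # D'Agostino-Pearson test: a low p-value rejects Gaussianity.
    _, p = stats.normaltest(z[:, 0])
    print(f"{act}: pre-activation variance {z[:, 0].var():.3f}, normality p {p:.3g}")
```

At small widths and large depths like these, one would expect the test to reject Gaussianity for standard activation/initialization pairs, which is precisely the narrow-network regime the talk examines.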

Syllabus

Introduction
Scaling
Framework
Naive heuristic
Outline
Edge of Chaos
Recurrence Equation (see the sketch after this syllabus)
Gaussian Pre-Activations
The Edge of Chaos
Experiments
Gaussian Regulations
Assumption of Edge of Chaos
Preservation of Variance
Solution
Summary
Constraints
Density
Activation Functions
Numerical Approximations
Training Experiments
Training Losses
Conclusion
Future Work
Questions
Data Patterns
Impossibility Results
Cons
Training Loss
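The "Recurrence Equation" and "Preservation of Variance" items above refer to the mean-field variance recurrence underlying Edge of Chaos analyses. As a hedged sketch in the notation common to that literature (Poole et al., 2016; Schoenholz et al., 2017), not necessarily the speaker's own: for an activation $\phi$, weight variance $\sigma_w^2$, and bias variance $\sigma_b^2$, the pre-activation variance $q^\ell$ at layer $\ell$ evolves as

$$q^{\ell+1} = \sigma_w^2 \,\mathbb{E}_{z \sim \mathcal{N}(0,1)}\!\left[\phi\!\left(\sqrt{q^\ell}\, z\right)^2\right] + \sigma_b^2 .$$

Choosing $(\sigma_w, \sigma_b)$ so that this map has a stable fixed point is the variance-preservation condition; the talk's critique is that the derivation assumes Gaussian pre-activations, an assumption that can fail in narrow networks.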


Taught by

Finnish Center for Artificial Intelligence FCAI

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
Good Brain, Bad Brain: Basics
University of Birmingham via FutureLearn
Statistical Learning with R
Stanford University via edX
Machine Learning 1—Supervised Learning
Brown University via Udacity
Fundamentals of Neuroscience, Part 2: Neurons and Networks
Harvard University via edX