Gaussian Pre-Activations in Neural Networks: Myth or Reality?
Offered By: Finnish Center for Artificial Intelligence FCAI via YouTube
Course Description
Overview
Explore the intricacies of Gaussian pre-activations in neural networks in this 45-minute conference talk by Pierre Wolinski at the Finnish Center for Artificial Intelligence. Delve into the construction of activation functions and initialization distributions that ensure Gaussian pre-activations throughout the network's depth, even in narrow neural networks. Examine a critical review of Edge of Chaos claims and a unified view of pre-activation propagation. Gain insights into information propagation in deep and narrow neural networks, comparing ReLU and tanh activation functions under Kaiming and Xavier initializations. Learn about the speaker's background in neural network pruning and Bayesian neural networks, and his current research on information propagation during initialization and training.
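To make the ReLU-vs-tanh comparison concrete, here is a minimal sketch (my own illustration, not the speaker's code) of the kind of experiment the talk discusses: propagate random inputs through a deep, narrow MLP at initialization and track how far the pre-activations drift from Gaussian, layer by layer. The width, depth, and sample count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth, n_samples = 10, 50, 10_000  # "narrow" network: width 10 (illustrative)

def excess_kurtosis(a):
    """Excess kurtosis of a sample; 0 for an exactly Gaussian distribution."""
    a = (a - a.mean()) / a.std()
    return float(np.mean(a ** 4) - 3.0)

def propagate(act, gain):
    """Track non-Gaussianity of the pre-activations at each layer."""
    x = rng.standard_normal((n_samples, width))
    drift = []
    for _ in range(depth):
        # Kaiming/Xavier-style init: weight std = gain / sqrt(fan_in)
        W = rng.standard_normal((width, width)) * gain / np.sqrt(width)
        z = x @ W.T                       # pre-activations of this layer
        drift.append(excess_kurtosis(z.ravel()))
        x = act(z)                        # post-activations feed the next layer
    return drift

relu = lambda z: np.maximum(z, 0.0)
k_relu = propagate(relu, gain=np.sqrt(2.0))  # Kaiming gain for ReLU
k_tanh = propagate(np.tanh, gain=1.0)        # Xavier gain for tanh (square layers)

print("layer  ReLU+Kaiming  tanh+Xavier   (excess kurtosis of pre-activations)")
for l in (0, 9, 24, 49):
    print(f"{l + 1:5d}  {k_relu[l]:12.3f}  {k_tanh[l]:11.3f}")
```

In runs of this kind, the excess kurtosis tends to drift away from zero as depth grows when layers are narrow, which illustrates the gap the talk examines between the Gaussian pre-activation assumption and what narrow networks actually do.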
Syllabus
Introduction
Scaling
Framework
Naive Heuristic
Outline
Edge of Chaos
Recurrence Equation (see the sketch after this syllabus)
Gaussian Pre-Activations
The Edge of Chaos
Experiments
Gaussian Pre-Activations
Assumption of the Edge of Chaos
Preservation of Variance
Solution
Summary
Constraints
Density
Activation Functions
Numerical Approximations
Training Experiments
Training Losses
Conclusion
Future Work
Questions
Data Patterns
Impossibility Results
Cons
Training Loss
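For context on the "Recurrence Equation" and "Edge of Chaos" items above: the standard infinite-width analysis propagates the pre-activation variance through the layer map q_{l+1} = sigma_w^2 * E_z[phi(sqrt(q_l) * z)^2] + sigma_b^2 with z ~ N(0, 1). Below is a hedged Monte Carlo sketch of that map (my illustration; the sigma values are arbitrary); whether this Gaussian-based recurrence remains valid in narrow networks is exactly what the talk questions.

```python
import numpy as np

def variance_map(q, phi, sigma_w2, sigma_b2, n_mc=200_000, seed=0):
    """One step of the Edge-of-Chaos variance recurrence,
    q -> sigma_w^2 * E[phi(sqrt(q) * z)^2] + sigma_b^2, estimated by Monte Carlo."""
    z = np.random.default_rng(seed).standard_normal(n_mc)
    return sigma_w2 * np.mean(phi(np.sqrt(q) * z) ** 2) + sigma_b2

# Iterate to the fixed point q* for tanh; (1.5, 0.1) are illustrative values.
q = 1.0
for _ in range(50):
    q = variance_map(q, np.tanh, sigma_w2=1.5, sigma_b2=0.1)
print(f"fixed-point pre-activation variance q* ~ {q:.4f}")
```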
Taught by
Finnish Center for Artificial Intelligence FCAI
Related Courses
TensorFlow on Google Cloud
Google Cloud via Coursera
Deep Learning Fundamentals with Keras
IBM via edX
Intro to TensorFlow em Português Brasileiro
Google Cloud via Coursera
TensorFlow on Google Cloud - Français
Google Cloud via Coursera
Introduction to Neural Networks and PyTorch
IBM via Coursera