Rapture of the Deep: Highs and Lows of Sparsity in Neural Networks
Offered By: Institut des Hautes Etudes Scientifiques (IHES) via YouTube
Course Description
Overview
Explore the depths of sparsity in neural networks through this 37-minute conference talk by Remi Gribonval from INRIA, hosted by the Institut des Hautes Etudes Scientifiques (IHES). Delve into how sparse connections are naturally promoted in neural networks, both to control complexity and to offer potential interpretability guarantees. Compare classical sparse regularization for inverse problems with multilayer sparse approximation. Discover the role of rescaling-invariances in deep parameterizations, along with their advantages and challenges. Learn about life beyond gradient descent, including an algorithm that substantially speeds up the learning of certain fast transforms via multilayer sparse factorization. Cover topics such as bilinear sparsity, blind deconvolution, ReLU network training with weight decay, optimization with support constraints, butterfly factorization, and the consequences of scale-invariance in neural networks.
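To make the rescaling-invariance mentioned above concrete, here is a minimal Python sketch (not taken from the talk itself) of a well-known fact: multiplying one ReLU layer's weights by a positive factor and dividing the next layer's weights by the same factor leaves the network's input-output map unchanged. All names and dimensions below are illustrative.

```python
# Minimal sketch: rescaling-invariance of a two-layer ReLU network.
# Since relu(lam * z) = lam * relu(z) for lam > 0, scaling W1 by lam and
# W2 by 1/lam leaves the function f(x) = W2 @ relu(W1 @ x) unchanged.
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def two_layer_net(x, W1, W2):
    return W2 @ relu(W1 @ x)

W1 = rng.standard_normal((5, 3))   # illustrative dimensions
W2 = rng.standard_normal((2, 5))
x = rng.standard_normal(3)

lam = 3.7  # any positive rescaling factor
print(np.allclose(two_layer_net(x, W1, W2),
                  two_layer_net(x, lam * W1, W2 / lam)))  # True
```

This invariance is one reason naive parameter norms can be ambiguous regularizers for deep parameterizations, which is the lens through which the talk examines weight decay and conservation laws.

As a second illustration, the "fast transforms via multilayer sparse factorization" theme can be previewed with the classical butterfly structure of the Hadamard transform, a standard identity rather than the talk's own algorithm: the dense 2^n x 2^n Hadamard matrix is a product of n sparse factors with only two nonzeros per row.

```python
# Minimal sketch: the Hadamard matrix H (the n-fold Kronecker power of H_2)
# factors as a product of n sparse "butterfly" factors
# B_k = I_{2^(k-1)} kron H_2 kron I_{2^(n-k)},
# each with two nonzeros per row, giving an O(N log N) transform
# instead of a dense O(N^2) one.
from functools import reduce
import numpy as np

H2 = np.array([[1.0, 1.0], [1.0, -1.0]])

def butterfly_factors(n):
    return [np.kron(np.kron(np.eye(2 ** (k - 1)), H2), np.eye(2 ** (n - k)))
            for k in range(1, n + 1)]

n = 3
product_of_factors = reduce(np.matmul, butterfly_factors(n))

# Dense Hadamard matrix built by repeated Kronecker products, for comparison
H_dense = reduce(np.kron, [H2] * n)
print(np.allclose(product_of_factors, H_dense))  # True
```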
Syllabus
Intro
Based on joint work with
Sparsity & frugality
Sparsity & interpretability
Deep sparsity?
Bilinear sparsity: blind deconvolution
ReLU network training - weight decay
Behind the scenes
Greed is good?
Optimization with support constraints
Application: butterfly factorization
Wandering in equivalence classes
Other consequences of scale-invariance
Conservation laws
Taught by
Institut des Hautes Etudes Scientifiques (IHES)
Related Courses
Neural Networks for Machine Learning (University of Toronto via Coursera)
Good Brain, Bad Brain: Basics (University of Birmingham via FutureLearn)
Statistical Learning with R (Stanford University via edX)
Machine Learning 1—Supervised Learning (Brown University via Udacity)
Fundamentals of Neuroscience, Part 2: Neurons and Networks (Harvard University via edX)