Beyond Lazy Training for Over-parameterized Tensor Decomposition

Offered By: Fields Institute via YouTube

Tags

Tensor Decomposition Courses
Machine Learning Courses
Neural Networks Courses
Gradient Descent Courses
Implicit Regularization Courses

Course Description

Overview

Explore tensor decomposition and over-parameterization in this 37-minute conference talk from the Fields Institute's Mini-symposium on Low-Rank Models and Applications. Delve into the comparison between the lazy training regime and gradient descent for finding an approximate low-rank tensor decomposition. Examine the difficulties of analyzing gradient descent, the failure of lazy training in this setting, and the existence of local minima away from zero. Learn about a novel algorithm that escapes local minima through random correlation and amplifies the initial correlation using the tensor power method. Gain insights into why over-parameterization matters when training neural networks and how it helps avoid bad local optima.
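For a concrete picture of the objects discussed in the talk, the sketch below sets up an over-parameterized CP objective and the two operations mentioned above: plain gradient descent on that objective, and a tensor power iteration that amplifies a random vector's correlation with a true component. This is an illustrative NumPy sketch, not the speakers' implementation; the orthonormal ground-truth components, dimensions, step size, and iteration counts are assumptions made for the example.

```python
# Illustrative sketch (not the speakers' code), assuming a symmetric third-order
# tensor T = sum_i a_i^(x3) with orthonormal components a_i.
import numpy as np

rng = np.random.default_rng(0)
d, r, m = 20, 3, 30          # dimension, true rank, over-parameterized rank (m > r)

# Ground-truth components (orthonormal for simplicity) and the target tensor.
A = np.linalg.qr(rng.standard_normal((d, r)))[0]
T = np.einsum('di,ei,fi->def', A, A, A)

def residual(U):
    """T minus the current rank-m approximation sum_j u_j^(x3)."""
    return T - np.einsum('dj,ej,fj->def', U, U, U)

def loss(U):
    return 0.5 * np.sum(residual(U) ** 2)

def grad(U):
    """Gradient of the over-parameterized CP objective with respect to U (d x m)."""
    R = residual(U)
    return -3.0 * np.einsum('def,ej,fj->dj', R, U, U)

# (1) Plain gradient descent from a small random initialization on the
# over-parameterized objective; this is the dynamics the talk analyzes.
U = 0.1 * rng.standard_normal((d, m))
for _ in range(500):
    U -= 0.05 * grad(U)
print("final loss:", loss(U))

# (2) Tensor power iteration: repeatedly applying T(x, x, .) to a random unit
# vector amplifies whichever true component it happens to correlate with most.
x = rng.standard_normal(d)
x /= np.linalg.norm(x)
for _ in range(20):
    x = np.einsum('def,e,f->d', T, x, x)
    x /= np.linalg.norm(x)
print("max correlation with a true component:", np.max(np.abs(A.T @ x)))
```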

Syllabus

Intro
Low rank models and implicit regularization
Regimes of over-parametrization
Tensor (CP) decomposition
Why naïve algorithm fails
Why gradient descent?
Two-Layer Neural Network
Form of the objective
Difficulties of analyzing gradient descent
Lazy training fails
0 is a high order saddle point
There are local minima away from 0
Our (high level) algorithm
Proof ideas
Escaping local minima by random correlation
Amplify initial correlation by tensor power method
Conclusions and Open Problems


Taught by

Fields Institute

Related Courses

Neural Networks for Machine Learning
University of Toronto via Coursera
Good Brain, Bad Brain: Basics
University of Birmingham via FutureLearn
Statistical Learning with R
Stanford University via edX
Machine Learning 1—Supervised Learning
Brown University via Udacity
Fundamentals of Neuroscience, Part 2: Neurons and Networks
Harvard University via edX