Beyond Lazy Training for Over-parameterized Tensor Decomposition
Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube
Course Description
Overview
Explore a lecture on over-parameterized tensor decomposition beyond the lazy training regime. Delve into the mathematical foundations and algorithms for tensor computations, focusing on how variants of gradient descent can find approximate tensor decompositions. Learn about the limitations of the lazy training regime, the challenges in analyzing gradient descent, and a novel high-level algorithm that overcomes these obstacles. Discover how this research relates to training neural networks and to exploiting low-rank structure in data. Gain insights into the proof ideas, including keeping iterates close to the correct subspace and escaping local minima through random correlation and the tensor power method.
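For concreteness, here is a minimal sketch of the underlying optimization problem: fitting a symmetric rank-r ground-truth tensor T = Σ_i a_i ⊗ a_i ⊗ a_i with an over-parameterized set of m components by plain gradient descent on the squared residual. The dimensions, step size, and iteration budget are illustrative assumptions, and this baseline is the vanilla method the lecture analyzes, not the speakers' modified algorithm.

```python
# A minimal sketch, assuming a symmetric rank-r ground-truth tensor and
# plain gradient descent on the squared residual. This is a baseline for
# the setting discussed in the talk, not the speakers' modified algorithm;
# all sizes and hyperparameters below are illustrative and may need tuning.
import numpy as np

rng = np.random.default_rng(0)
d, r, m = 10, 3, 50      # dimension, true rank, over-parameterized rank (m >> r)

# Ground truth T = sum_i a_i (x) a_i (x) a_i  (symmetric CP tensor)
A = rng.standard_normal((r, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)
T = np.einsum('ia,ib,ic->abc', A, A, A)

U = 0.1 * rng.standard_normal((m, d))    # small random initialization

lr = 0.05
for step in range(2000):
    # Residual R = sum_j u_j^{(x)3} - T of the objective 0.5 * ||R||_F^2
    R = np.einsum('ja,jb,jc->abc', U, U, U) - T
    # For a symmetric residual, the gradient w.r.t. row u_j is 3 * R(u_j, u_j, .)
    U -= lr * 3.0 * np.einsum('abc,ja,jb->jc', R, U, U)

R = np.einsum('ja,jb,jc->abc', U, U, U) - T
print(f"final loss: {0.5 * np.linalg.norm(R)**2:.2e}")
```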
Syllabus
Intro
Tensor (CP) decomposition
Why naïve algorithm fails
Why gradient descent?
Two-Layer Neural Network
Form of the objective
Difficulties of analyzing gradient descent
Lazy training fails
0 is a high-order saddle point
Our (high level) algorithm
Proof ideas
Iterates remain close to correct subspace
Escaping local minima by random correlation
Amplify initial correlation by tensor power method (see the sketch after this syllabus)
Conclusions and Open Problems
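The last two proof ideas above can be illustrated in isolation. A random unit vector already has a small correlation, on the order of 1/sqrt(d), with some ground-truth component, and the tensor power method amplifies that correlation until the vector aligns with the component. The sketch below assumes orthonormal components for simplicity, which is not the talk's exact setting; the variable names and sizes are illustrative.

```python
# A minimal sketch of random correlation plus the tensor power method,
# assuming orthonormal components for simplicity (not the talk's exact
# setting). A random unit vector u has correlation ~ 1/sqrt(d) with some
# component a_i; iterating u <- T(u, u, .) / ||T(u, u, .)|| amplifies the
# largest such correlation until u aligns with that component.
import numpy as np

rng = np.random.default_rng(1)
d, r = 10, 3

Q, _ = np.linalg.qr(rng.standard_normal((d, r)))
A = Q.T                                    # rows a_1, ..., a_r, orthonormal
T = np.einsum('ia,ib,ic->abc', A, A, A)

u = rng.standard_normal(d)
u /= np.linalg.norm(u)                     # random initial correlation ~ 1/sqrt(d)
print("initial correlations:", np.round(A @ u, 3))

for _ in range(30):
    u = np.einsum('abc,a,b->c', T, u, u)   # T(u, u, .) = sum_i <a_i, u>^2 a_i
    u /= np.linalg.norm(u)

print("final correlations:  ", np.round(A @ u, 3))
```

In the over-parameterized, non-orthogonal setting of the talk, this amplification is only one ingredient; the analysis also needs the iterates to remain close to the correct subspace, as the preceding syllabus items indicate.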
Taught by
Institute for Pure & Applied Mathematics (IPAM)
Related Courses
Scientific Computing (University of Washington via Coursera)
Inquiry Science Learning: Perspectives and Practices 3 - Science Content Survey (Rice University via Coursera)
Philosophy and the Sciences: Introduction to the Philosophy of Physical Sciences (University of Edinburgh via Coursera)
Natural Sciences (Modern States via Independent)
A mathematical way to think about biology (Udemy)