Learning Deep Matrix Factorizations Via Gradient Descent - Implicit Bias Towards Low Rank

Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube

Tags

Matrix Factorization Courses
Deep Learning Courses
Gradient Descent Courses

Course Description

Overview

Explore a 37-minute conference talk from the Tensor Methods and Emerging Applications to the Physical and Data Sciences 2021 workshop, focusing on learning deep matrix factorizations via gradient descent. Delve into the concept of implicit bias in deep learning scenarios where the network parameters outnumber the training examples. Examine the simplified setting of linear networks and deep matrix factorizations, investigating how gradient descent converges to low-rank matrices. Gain insights from rigorous theoretical results in matrix estimation, including an analysis of how the effective rank of the iterates evolves. Consider open problems and potential extensions to learning low-rank tensor decompositions, presented by Holger Rauhut of RWTH Aachen University at the Institute for Pure and Applied Mathematics, UCLA.
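The phenomenon the talk studies is easy to reproduce in a few lines. The sketch below is illustrative only, not code from the talk: it runs plain gradient descent on a depth-3 factorization W3 @ W2 @ W1 of a rank-2 target matrix, starting from a small random initialization, and then inspects the singular values of the learned product. All parameters (matrix size, depth, learning rate, target construction) are assumptions chosen for the demonstration.

```python
# Minimal sketch (assumed setup, not the talk's code): gradient descent on a
# depth-3 matrix factorization W = W3 @ W2 @ W1, fit to a rank-2 target M.
import numpy as np

rng = np.random.default_rng(0)
n, depth = 10, 3

# Rank-2 target with singular values 3 and 2.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
M = U[:, :2] @ np.diag([3.0, 2.0]) @ V[:, :2].T

# Deep factorization, initialized close to zero -- small initialization is
# what drives the implicit bias towards low rank.
Ws = [1e-2 * rng.standard_normal((n, n)) for _ in range(depth)]

def product(Ws):
    """Return the end-to-end matrix W_depth ... W_1."""
    P = np.eye(n)
    for W in Ws:
        P = W @ P
    return P

lr = 0.05
for step in range(3000):
    G = product(Ws) - M  # gradient of 0.5*||product - M||_F^2 w.r.t. the product
    grads = []
    for j in range(depth):
        # Chain rule: dL/dW_j = (W_depth ... W_{j+1})^T G (W_{j-1} ... W_1)^T
        left = np.eye(n)
        for W in Ws[j + 1:]:
            left = W @ left
        right = np.eye(n)
        for W in Ws[:j]:
            right = W @ right
        grads.append(left.T @ G @ right.T)
    for j in range(depth):
        Ws[j] -= lr * grads[j]

svals = np.linalg.svd(product(Ws), compute_uv=False)
print("singular values of the learned matrix:", np.round(svals, 4))
# Typically only two singular values end up far from zero (near 3 and 2):
# the effective rank of the iterates matches the rank of the target.
```

Shrinking the initialization scale or increasing the depth tends to make the low-rank bias more pronounced, which is consistent with the regime addressed by the theoretical results discussed in the talk.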

Syllabus

Holger Rauhut: "Learning Deep Matrix Factorizations Via Gradient Descent: Implicit Bias Towards ..."


Taught by

Institute for Pure & Applied Mathematics (IPAM)

Related Courses

Introduction to Recommender Systems
University of Minnesota via Coursera
Searching for Structure in Data (Поиск структуры в данных)
Moscow Institute of Physics and Technology via Coursera
Matrix Factorization and Advanced Techniques
University of Minnesota via Coursera
Introduction to Parallel Programming in OpenMP
Indian Institute of Technology Delhi via Swayam
Recommender Systems
University of Minnesota via Coursera