Learning Deep Matrix Factorizations Via Gradient Descent - Implicit Bias Towards Low Rank

Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube

Tags

Matrix Factorization Courses
Deep Learning Courses
Gradient Descent Courses

Course Description

Overview

Explore a 37-minute conference talk from the Tensor Methods and Emerging Applications to the Physical and Data Sciences 2021 workshop, focusing on learning deep matrix factorizations via gradient descent. Delve into the concept of implicit bias in overparameterized deep learning, where network parameters outnumber training examples. Examine the simplified setting of linear networks and deep matrix factorizations, investigating why gradient descent tends to converge to low-rank matrices. Gain insights from rigorous theoretical results in matrix estimation, including an analysis of the dynamics of the effective rank of the iterates. Consider open problems and potential extensions to learning low-rank tensor decompositions. The talk is presented by Holger Rauhut of RWTH Aachen University at the Institute for Pure and Applied Mathematics, UCLA.
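
The implicit bias discussed in the talk can be illustrated numerically. The following NumPy sketch is not material from the talk; the dimensions, step size, target singular values, and the entropy-based effective-rank measure are all illustrative assumptions. It runs gradient descent on a depth-3 factorization W3 W2 W1 fitted to a rank-2 target from a small random initialization and tracks the effective rank of the product iterates.

    import numpy as np

    rng = np.random.default_rng(0)
    n, depth = 20, 3

    # Rank-2 target with singular values 3.0 and 1.5 (illustrative choices).
    U, _ = np.linalg.qr(rng.standard_normal((n, 2)))
    V, _ = np.linalg.qr(rng.standard_normal((n, 2)))
    M = 3.0 * np.outer(U[:, 0], V[:, 0]) + 1.5 * np.outer(U[:, 1], V[:, 1])

    # Small initialization: the regime in which the implicit low-rank bias
    # is typically observed.
    Ws = [0.05 * rng.standard_normal((n, n)) for _ in range(depth)]

    def product(mats):
        # End-to-end matrix W_N ... W_1 (mats[0] is applied first).
        P = mats[0]
        for W in mats[1:]:
            P = W @ P
        return P

    def effective_rank(A):
        # Entropy-based effective rank of the singular value distribution.
        s = np.linalg.svd(A, compute_uv=False)
        p = s / s.sum()
        p = p[p > 1e-12]
        return float(np.exp(-(p * np.log(p)).sum()))

    lr = 0.02
    for step in range(5001):
        P = product(Ws)
        R = P - M  # residual of the end-to-end map
        # Gradient of 0.5 * ||W_N ... W_1 - M||_F^2 in each factor:
        # dL/dW_i = (W_N ... W_{i+1})^T R (W_{i-1} ... W_1)^T.
        grads = []
        for i in range(depth):
            left = product(Ws[i + 1:]) if i + 1 < depth else np.eye(n)
            right = product(Ws[:i]) if i > 0 else np.eye(n)
            grads.append(left.T @ R @ right.T)
        for W, g in zip(Ws, grads):
            W -= lr * g
        if step % 1000 == 0:
            loss = 0.5 * np.linalg.norm(R) ** 2
            print(f"step {step:4d}  loss {loss:.3e}  eff. rank {effective_rank(P):.2f}")

In runs of this kind, the loss plateaus near the small initialization before dropping, and the effective rank of the iterates falls from that of a generic random product toward the target rank of 2, which is the qualitative behavior analyzed in the talk.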

Syllabus

Holger Rauhut: "Learning Deep Matrix Factorizations Via Gradient Descent: Implicit Bias Towards Low Rank"


Taught by

Institute for Pure & Applied Mathematics (IPAM)

Related Courses

Practical Predictive Analytics: Models and Methods
University of Washington via Coursera
Deep Learning Fundamentals with Keras
IBM via edX
Introduction to Machine Learning
Duke University via Coursera
Intro to Deep Learning with PyTorch
Facebook via Udacity
Introduction to Machine Learning for Coders!
fast.ai via Independent