How Neural Networks Learn Features from Data

Offered By: Institute for Pure & Applied Mathematics (IPAM) via YouTube

Tags

Neural Networks Courses, Machine Learning Courses, Deep Learning Courses, Matrix Factorization Courses, Convolutional Networks Courses, Language Models Courses

Course Description

Overview

Explore the fundamental mechanisms of feature learning in neural networks in this lecture presented at IPAM's Theory and Practice of Deep Learning Workshop. Delve into the unifying concept of the average gradient outer product (AGOP) and its role in capturing the features learned across network architectures, including convolutional networks and large language models. Discover the Recursive Feature Machine (RFM) algorithm and its ability to identify sparse subsets of features crucial for prediction. Gain a deeper understanding of how neural networks extract features from data, connecting this process to classical sparse recovery and low-rank matrix factorization algorithms. Uncover the implications of this research for building more interpretable and effective models, advancing the reliable use of neural networks in technological and scientific applications.
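
For readers who want a concrete picture of the AGOP mentioned above, the following PyTorch sketch estimates it under the standard definition, the average over inputs of the outer product of the model's input gradients. The network, data, and function names are illustrative placeholders, not the speaker's implementation.

import torch

def average_gradient_outer_product(model, X):
    # AGOP estimate: (1/n) * sum_i grad_x f(x_i) grad_x f(x_i)^T,
    # where gradients are taken with respect to the inputs x_i.
    n, d = X.shape
    agop = torch.zeros(d, d)
    for x in X:
        x = x.clone().requires_grad_(True)
        out = model(x).sum()  # reduce vector-valued outputs to a scalar for autograd
        (g,) = torch.autograd.grad(out, x)
        agop += torch.outer(g, g)
    return agop / n

# Hypothetical usage on a small random network and random data
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
X = torch.randn(100, 10)
M = average_gradient_outer_product(model, X)
print(M.shape)  # torch.Size([10, 10]); the top eigenvectors of M indicate learned feature directions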

Syllabus

Adityanarayanan Radhakrishnan - How do neural networks learn features from data? - IPAM at UCLA


Taught by

Institute for Pure & Applied Mathematics (IPAM)

Related Courses

Deep Learning Explained
Microsoft via edX
Deep Learning Foundation
Udemy
Deep Learning with Google Colab
Udemy
The Complete Neural Networks Bootcamp: Theory, Applications
Udemy
A deep understanding of deep learning (with Python intro)
Udemy