Understanding Generalization from Pre-training Loss to Downstream Tasks

Offered By: Simons Institute via YouTube

Tags

Machine Learning, Self-supervised Learning, Inductive Bias, Embeddings, Generalization, Contrastive Learning, Manifold Learning

Course Description

Overview

Explore the mysteries behind pre-trained models and their generalization capabilities in this lecture by Tengyu Ma from Stanford University. Delve into the role of pre-training losses in extracting meaningful structural information from unlabeled data, with a focus on the infinite-data regime. Examine how contrastive loss produces embeddings that capture manifold distances between raw data points and graph distances on positive-pair graphs. Investigate the relationship between directions in the embedding space and cluster relationships in positive-pair graphs. Discover recent advancements that incorporate architectural inductive bias and demonstrate the implicit bias of optimizers in pre-training. Gain insights into the theoretical frameworks and empirical evidence supporting these concepts, shedding light on the behavior of practical pre-trained models in AI and machine learning.
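To make the contrastive-loss idea concrete, here is a minimal NumPy sketch of an InfoNCE-style contrastive loss, the standard form of the loss discussed in self-supervised learning. This is an illustrative example, not code from the lecture; the function name, temperature value, and in-batch-negatives setup are assumptions for the sketch.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss over a batch of (anchor, positive) pairs.

    Each anchor's matching positive sits at the same row index; the other
    rows' positives serve as in-batch negatives. Minimizing this loss pulls
    positive pairs together and pushes other pairs apart, which is what
    lets the embedding geometry reflect the positive-pair graph.
    """
    # L2-normalize so the dot product is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)

    # Similarity of every anchor to every candidate positive.
    logits = a @ p.T / temperature

    # Cross-entropy against the diagonal (the true positive for each anchor),
    # with a max-subtraction for numerical stability.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Intuitively, when positives are small perturbations of their anchors the loss is near zero, while unrelated positives drive it toward log(batch size).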

Syllabus

Understanding Generalization from Pre-training Loss to Downstream Tasks


Taught by

Simons Institute

Related Courses

Stanford Seminar - Audio Research: Transformers for Applications in Audio, Speech and Music
Stanford University via YouTube
How to Represent Part-Whole Hierarchies in a Neural Network - Geoff Hinton's Paper Explained
Yannic Kilcher via YouTube
OpenAI CLIP - Connecting Text and Images - Paper Explained
Aleksa Gordić - The AI Epiphany via YouTube
Learning Compact Representation with Less Labeled Data from Sensors
tinyML via YouTube
Human Activity Recognition - Learning with Less Labels and Privacy Preservation
University of Central Florida via YouTube