
Understanding Generalization from Pre-training Loss to Downstream Tasks

Offered By: Simons Institute via YouTube

Tags

Machine Learning Courses
Self-supervised Learning Courses
Inductive Bias Courses
Embeddings Courses
Generalization Courses
Contrastive Learning Courses
Manifold Learning Courses

Course Description

Overview

Explore the mysteries behind pre-trained models and their generalization capabilities in this lecture by Tengyu Ma of Stanford University. Delve into the role of pre-training losses in extracting meaningful structural information from unlabeled data, with a focus on the infinite-data regime. Examine how the contrastive loss produces embeddings whose geometry captures both the manifold distance between raw data points and the graph distance on the positive-pair graph. Investigate how directions in the embedding space correspond to relationships between clusters in the positive-pair graph. Discover recent advances that incorporate architectural inductive bias and demonstrate the implicit bias of optimizers in pre-training. Gain insight into the theoretical frameworks and empirical evidence behind these results, shedding light on the behavior of practical pre-trained models in AI and machine learning.
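The positive-pair-graph view of contrastive learning described above is often formalized with the spectral contrastive loss (HaoChen et al., 2021, work from the same research group): minimizing it amounts to a low-rank factorization of the positive-pair graph's adjacency matrix, so inner products in the embedding space track graph structure. The PyTorch sketch below is an illustrative implementation under that assumption, not code from the lecture itself; the function name and the use of in-batch negatives are illustrative choices.

```python
import torch


def spectral_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """Spectral contrastive loss over a batch of positive pairs.

    z1, z2: [batch, dim] embeddings of two views of the same examples
    (row i of z1 and row i of z2 form a positive pair).
    """
    # Attraction term: pull matched (positive) pairs together by
    # maximizing their inner products.
    pos = -2.0 * (z1 * z2).sum(dim=1).mean()

    # Repulsion term: treat mismatched rows within the batch as random
    # (negative) pairs and penalize their squared inner products.
    sim = z1 @ z2.t()                                   # [batch, batch]
    mask = ~torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg = (sim[mask] ** 2).mean()

    return pos + neg


# Toy usage: random unit-norm embeddings for a batch of 8 positive pairs.
z1 = torch.nn.functional.normalize(torch.randn(8, 32), dim=1)
z2 = torch.nn.functional.normalize(torch.randn(8, 32), dim=1)
print(spectral_contrastive_loss(z1, z2))
```

With real augmented views rather than random vectors, the attraction term aligns embeddings of the same underlying example while the repulsion term spreads unrelated examples apart, which is how embedding distances come to reflect distances on the positive-pair graph.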

Syllabus

Understanding Generalization from Pre-training Loss to Downstream Tasks


Taught by

Simons Institute

Related Courses

Artificial Intelligence Foundations: Thinking Machines
LinkedIn Learning
Deep Learning for Computer Vision
NPTEL via YouTube
NYU Deep Learning
YouTube
Stanford Seminar - Representation Learning for Autonomous Robots, Anima Anandkumar
Stanford University via YouTube
A Path Towards Autonomous Machine Intelligence - Paper Explained
Yannic Kilcher via YouTube