
Understanding Generalization from Pre-training Loss to Downstream Tasks

Offered By: Simons Institute via YouTube

Tags

Machine Learning, Self-supervised Learning, Inductive Bias, Embeddings, Generalization, Contrastive Learning, Manifold Learning

Course Description

Overview

Explore the mysteries behind pre-trained models and their generalization capabilities in this lecture by Tengyu Ma of Stanford University. Delve into the role of pre-training losses in extracting meaningful structural information from unlabeled data, with a focus on the infinite-data regime. Examine how the contrastive loss produces embeddings that capture the manifold distance between raw data points and the graph distance of the positive-pair graph. Investigate the relationship between directions in the embedding space and cluster relationships in the positive-pair graph. Discover recent advancements that incorporate architectural inductive bias and demonstrate the implicit bias of optimizers during pre-training. Gain insights into the theoretical frameworks and empirical evidence supporting these concepts, shedding light on the behavior of practical pre-trained models.
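
To make the contrastive-loss discussion concrete, the sketch below (not from the lecture itself) implements the spectral contrastive loss from this line of work (HaoChen, Wei, Gaidon, and Ma, 2021), whose minimizers recover a spectral decomposition of the positive-pair graph, so embedding geometry tracks graph distance. The function name and the batch-based estimate of the negative term are illustrative assumptions; it assumes PyTorch-style (batch, dim) embedding tensors for the two views of each positive pair.

```python
import torch

def spectral_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """Spectral contrastive loss over a batch of positive pairs.

    z1, z2: (batch, dim) embeddings of the two augmented views of each example.
    """
    # Positive term: pull together the two embeddings of each positive pair.
    pos = -2.0 * (z1 * z2).sum(dim=1).mean()
    # Negative term: push apart embeddings of independent (non-paired) examples,
    # estimated here from the off-diagonal entries of the batch similarity matrix.
    sim = z1 @ z2.T                              # (batch, batch) inner products
    off_diag = sim - torch.diag(torch.diag(sim)) # zero out the positive pairs
    n = z1.shape[0]
    neg = (off_diag ** 2).sum() / (n * (n - 1))
    return pos + neg

# Example: random embeddings for a batch of 8 positive pairs in R^16.
z1, z2 = torch.randn(8, 16), torch.randn(8, 16)
loss = spectral_contrastive_loss(z1, z2)
```

The squared-similarity negative term is what distinguishes this objective from InfoNCE: it makes the population loss a matrix-factorization objective on the positive-pair graph's adjacency matrix, which is the basis of the generalization guarantees discussed in the lecture.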

Syllabus

Understanding Generalization from Pre-training Loss to Downstream Tasks


Taught by

Simons Institute

Related Courses

Launching into Machine Learning (Japanese version)
Google Cloud via Coursera
Launching into Machine Learning (German version)
Google Cloud via Coursera
Launching into Machine Learning (French version)
Google Cloud via Coursera
Launching into Machine Learning (Spanish version)
Google Cloud via Coursera
Fundamentals of Machine Learning (Основы машинного обучения)
Higher School of Economics via Coursera