Big Self-Supervised Models Are Strong Semi-Supervised Learners
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore a detailed explanation of the SimCLRv2 paper, which demonstrates the significant benefits of self-supervised pre-training for semi-supervised learning. Learn how this benefit grows as labeled examples become scarcer and as models become larger. Dive into key concepts including semi-supervised learning, self-supervised pre-training, contrastive loss, projection head retention, supervised fine-tuning, and unsupervised distillation. Examine the proposed three-step semi-supervised learning algorithm (pre-train with self-supervision, fine-tune on the labeled subset, then distill using unlabeled data) and its impressive results on ImageNet classification. Gain insights into the architecture, experiments, and broader impact of this approach, which achieves state-of-the-art label efficiency for image classification tasks.
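To make the contrastive pre-training step concrete, here is a minimal sketch of an NT-Xent-style contrastive loss of the kind SimCLR-family methods use, assuming PyTorch; the function name, batch layout, and temperature default are illustrative assumptions, not the paper's reference code.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    # z1, z2: [N, D] projection-head outputs for two augmented views of the
    # same N images (illustrative shapes; not the paper's reference code).
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # [2N, D], unit-normalized
    sim = z @ z.t() / temperature                       # [2N, 2N] scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # an example is never its own positive
    # The positive for row i is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

Each image's two views attract each other while repelling every other example in the batch, which is why the projection head producing z1 and z2 matters: SimCLRv2 retains part of that head and fine-tunes from a middle layer instead of discarding it entirely.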
Syllabus
- Intro & Overview
- Semi-Supervised Learning
- Pre-Training via Self-Supervision
- Contrastive Loss
- Retaining Projection Heads
- Supervised Fine-Tuning
- Unsupervised Distillation & Self-Training (see the loss sketch after this syllabus)
- Architecture Recap
- Experiments
- Broader Impact
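The final syllabus topics, supervised fine-tuning and unsupervised distillation, complete the paper's three-step recipe: after the pre-trained network is fine-tuned on the few available labels, the resulting teacher produces soft predictions on unlabeled images and a student is trained to match them. Below is a minimal sketch of such a distillation loss, assuming PyTorch; the function name and temperature default are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    # Cross-entropy between the fine-tuned teacher's softened predictions on
    # unlabeled images and the student's predictions (teacher gives soft labels).
    teacher_probs = F.softmax(teacher_logits / temperature, dim=1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=1)
    return -(teacher_probs * student_log_probs).sum(dim=1).mean()
```

When the student shares the teacher's architecture this step acts as self-training; with a smaller student it compresses the big model while retaining most of its label efficiency.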
Taught by
Yannic Kilcher