Big Self-Supervised Models Are Strong Semi-Supervised Learners

Offered By: Yannic Kilcher via YouTube

Tags

Semi-supervised Learning Courses, Deep Learning Courses, Computer Vision Courses, Self-supervised Learning Courses, Supervised Fine-Tuning Courses

Course Description

Overview

Explore a detailed explanation of the SimCLRv2 paper, which demonstrates the significant benefits of self-supervised pre-training for semi-supervised learning. Learn how this effect becomes more pronounced as labels become scarcer and models grow larger. Dive into key concepts including semi-supervised learning, self-supervised pre-training, contrastive loss, projection head retention, supervised fine-tuning, and unsupervised distillation. Examine the proposed three-step semi-supervised learning algorithm (pre-train, fine-tune, distill; sketched below) and its strong results on ImageNet classification. Gain insights into the architecture, experiments, and broader impact of this approach, which achieves state-of-the-art label efficiency for image classification.
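The pre-training stage described above relies on a contrastive objective; SimCLR-style training uses the NT-Xent (normalized temperature-scaled cross-entropy) loss. Below is a minimal NumPy sketch of that loss for illustration only: the function name, the `temperature` default, and the batch layout (rows i and i+N holding the two augmented views of the same image) are assumptions for this example, not the paper's reference implementation.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent contrastive loss over a batch of 2N projected embeddings.

    Rows i and i+N of `z` are assumed to hold the projection-head outputs
    for the two augmented views of the same image (the positive pair).
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # unit vectors, so dot products are cosine similarities
    logits = (z @ z.T) / temperature                      # pairwise similarity logits, shape (2N, 2N)
    np.fill_diagonal(logits, -np.inf)                     # a view is never compared against itself
    n = z.shape[0] // 2
    positives = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), positives].mean()  # cross-entropy toward each row's positive pair

# Toy usage: 4 images, 2 augmented views each, 128-dim projections
rng = np.random.default_rng(0)
print(nt_xent_loss(rng.normal(size=(8, 128))))
```

Minimizing this loss pulls the two views of the same image together in embedding space while pushing apart all other pairs in the batch.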

Syllabus

- Intro & Overview
- Semi-Supervised Learning
- Pre-Training via Self-Supervision
- Contrastive Loss
- Retaining Projection Heads
- Supervised Fine-Tuning
- Unsupervised Distillation & Self-Training
- Architecture Recap
- Experiments
- Broader Impact
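The "Unsupervised Distillation & Self-Training" step listed above trains a (possibly smaller) student network to match the fine-tuned teacher's temperature-softened class probabilities on unlabeled images. A minimal NumPy sketch of that distillation loss follows; the function names and the `temperature` parameter are assumed placeholders, not the paper's code.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # stabilize before exponentiating
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=1.0):
    """Cross-entropy of the student against the teacher's softened labels.

    Both inputs have shape (batch, num_classes). No ground-truth labels
    are needed, which is what makes this step unsupervised.
    """
    p_teacher = softmax(teacher_logits / temperature)
    log_p_student = np.log(softmax(student_logits / temperature))
    return -(p_teacher * log_p_student).sum(axis=1).mean()
```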


Taught by

Yannic Kilcher

Related Courses

Advanced PyTorch Techniques and Applications
Packt via Coursera
Machine Learning and Deep Learning (ga120)
Waseda University via gacco
Artificial Intelligence Foundations: Machine Learning
LinkedIn Learning
Efficient Data Feeding and Labeling for Model Training
Pluralsight
What are GANs actually - from underlying math to Python code
Udemy