Self-Supervision & Contrastive Frameworks - A Vision-Based Review
Offered By: Stanford University via YouTube
Course Description
Overview
Syllabus
Intro
SS Learning: Invariant Representations
Pre-text Tasks: A Deeper Dive
Contrastive Learning: Instance Discrimination
Contrastive Learning: Problem
SimCLR: Simple Contrastive Learning of Visual Representations
SimCLR: Architecture
SimCLR: Loss Function
SimCLR: Findings
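The SimCLR chapters above cover its NT-Xent (normalized temperature-scaled cross-entropy) loss. As a minimal NumPy sketch (the function name and batch layout are illustrative, not from the lecture): two augmented views of the same N images are embedded, and each embedding must identify its paired view among the other 2N-1 embeddings in the batch.

```python
import numpy as np

def nt_xent_loss(z_a, z_b, temperature=0.5):
    """NT-Xent loss over a batch of paired embeddings, as used by SimCLR.
    z_a, z_b: (N, D) embeddings of two augmented views of the same N images."""
    z = np.concatenate([z_a, z_b], axis=0)          # (2N, D)
    # L2-normalize so dot products are cosine similarities
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature                     # (2N, 2N) similarity matrix
    np.fill_diagonal(sim, -np.inf)                  # exclude self-similarity
    n = len(z_a)
    # the positive for index i is its other view at i+n (or i-n)
    positives = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy: -log softmax probability of the positive pair
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), positives].mean()
```

When the two views are identical, the positive similarity is maximal and the loss approaches its lower bound; random embeddings give a loss near log(2N-1).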
MoCo V2: Momentum Contrast
MoCo V2: Architecture
MoCo V2: Main Principle
MoCo V2: Loss Function
MoCo V2: Findings
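The MoCo V2 chapters cover its main principle: the key encoder is not trained by backpropagation but updated as an exponential moving average of the query encoder, which keeps the queued dictionary of negatives consistent. A minimal sketch of that update (parameter lists stand in for real network weights; the function name is illustrative):

```python
import numpy as np

def momentum_update(query_params, key_params, m=0.999):
    """MoCo's key-encoder update: each key parameter moves toward the
    corresponding query parameter as an exponential moving average."""
    return [m * k + (1.0 - m) * q for q, k in zip(query_params, key_params)]
```

With the default m = 0.999 the key encoder changes very slowly, which the paper identifies as essential for a stable dictionary.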
BYOL: Bootstrap Your Own Latent
BYOL: Architecture
BYOL: Main Principle
BYOL: Findings
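The BYOL chapters cover its main principle: no negative pairs at all — an online network with a predictor head regresses the output of a slowly-updated target network. A minimal sketch of its loss, assuming the predictor and projector outputs are already computed (names are illustrative):

```python
import numpy as np

def byol_loss(online_pred, target_proj):
    """BYOL regression loss: mean squared error between L2-normalized
    online predictions and target projections; no negatives are used."""
    p = online_pred / np.linalg.norm(online_pred, axis=1, keepdims=True)
    z = target_proj / np.linalg.norm(target_proj, axis=1, keepdims=True)
    # ||p - z||^2 for unit vectors simplifies to 2 - 2 * cosine similarity
    return (2 - 2 * (p * z).sum(axis=1)).mean()
```

The loss is 0 when the two outputs align exactly and reaches its maximum of 4 when they point in opposite directions; collapse is avoided by the predictor asymmetry and the momentum target, not by the loss itself.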
SwAV: Swapping Assignments between Views
SwAV: Architecture
SwAV: Loss Function
SwAV: Main Principle
SwAV: Multi-crop
SwAV: Additional Findings
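The SwAV chapters cover its main principle: instead of comparing features directly, each view's prototype scores are turned into soft cluster assignments via a few Sinkhorn-Knopp iterations, which equipartitions the batch across prototypes and prevents collapse. A minimal sketch of that normalization step (parameter names are illustrative):

```python
import numpy as np

def sinkhorn(scores, n_iters=3, eps=0.05):
    """Sinkhorn-Knopp normalization producing soft, equipartitioned cluster
    assignments from prototype scores, as in SwAV's assignment step.
    scores: (N, K) dot products between N features and K prototypes."""
    Q = np.exp(scores / eps).T      # (K, N), sharpened by epsilon
    Q /= Q.sum()
    K, N = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(axis=1, keepdims=True)   # normalize over prototypes...
        Q /= K
        Q /= Q.sum(axis=0, keepdims=True)   # ...then over samples
        Q /= N
    return (Q * N).T                # (N, K): one assignment row per sample
```

Each view's assignment is then predicted from the *other* view's features ("swapped prediction"), via a cross-entropy between the assignment and the softmax of the scores.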
DINO: Self-Distillation with NO labels
DINO: Attention-Maps
ViT (Vision Transformer): Architecture
DINO: Architecture
DINO: Loss Function
DINO: Main Principle
DINO: Multi-crop
DINO: Additional Findings and Compute
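The DINO chapters cover its loss function: a cross-entropy between the output distributions of a momentum teacher and a student, where the teacher's output is centered (to avoid one dimension dominating) and sharpened with a lower temperature. A minimal NumPy sketch under those assumptions (names and default temperatures are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dino_loss(student_out, teacher_out, center, t_s=0.1, t_t=0.04):
    """DINO self-distillation loss: cross-entropy between the centered,
    sharpened teacher distribution and the student distribution."""
    t = softmax((teacher_out - center) / t_t)   # teacher: centered + sharpened
    s = np.log(softmax(student_out / t_s))      # student log-probabilities
    return -(t * s).sum(axis=1).mean()
```

As with BYOL, the teacher is an exponential moving average of the student and receives no gradients; centering and sharpening together are what prevent collapse.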
Taught by
Stanford MedAI
Tags
Related Courses
Stanford Seminar - Audio Research: Transformers for Applications in Audio, Speech and Music (Stanford University via YouTube)
How to Represent Part-Whole Hierarchies in a Neural Network - Geoff Hinton's Paper Explained (Yannic Kilcher via YouTube)
OpenAI CLIP - Connecting Text and Images - Paper Explained (Aleksa Gordić - The AI Epiphany via YouTube)
Learning Compact Representation with Less Labeled Data from Sensors (tinyML via YouTube)
Human Activity Recognition - Learning with Less Labels and Privacy Preservation (University of Central Florida via YouTube)