OpenAI CLIP - Connecting Text and Images - Paper Explained

Offered By: Aleksa Gordić - The AI Epiphany via YouTube

Tags

Natural Language Processing (NLP) Courses
Deep Learning Courses
Computer Vision Courses
Contrastive Learning Courses

Course Description

Overview

Dive into a comprehensive 53-minute video lecture exploring OpenAI's CLIP (Contrastive Language-Image Pre-training) model. Learn about the contrastive learning approach behind CLIP, how it compares with SimCLR, and the intricacies of zero-shot learning. Explore the WIT dataset, prompt programming, and the quality of CLIP's embedding space. Analyze CLIP's performance in few-shot learning scenarios, its robustness to distribution shift, and its limitations. Gain insight into this innovative approach to connecting text and images through natural language supervision.
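
The contrastive objective at the heart of CLIP fits in a few lines. Below is a minimal sketch, not OpenAI's implementation, of the symmetric cross-entropy loss over an image-text similarity matrix in PyTorch; the function name, fixed temperature value, and random tensors standing in for encoder outputs are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric cross-entropy over an (N, N) image-text similarity matrix.

    image_emb, text_emb: (N, D) embeddings for N matched image-text pairs.
    Hypothetical helper; CLIP actually learns the temperature during training.
    """
    # L2-normalize so dot products become cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Entry (i, j) scores image i against text j; matches lie on the diagonal.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(image_emb.size(0), device=image_emb.device)

    # Classify images over texts and texts over images, then average.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy call with random tensors in place of real encoder outputs.
print(clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512)))
```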

Syllabus

OpenAI's CLIP
Detailed explanation of the method
Comparison with SimCLR
How the zero-shot part works
WIT dataset
Why this method? Hint: efficiency
Zero-shot - generalizing to new tasks
Prompt programming and ensembling (illustrated in the sketch after this syllabus)
Zero-shot performance
Few-shot comparison with best baselines
How good is the zero-shot classifier?
Compute error correlation
Quality of CLIP's embedding space
Robustness to distribution shift
Limitations: MNIST failure
A short recap
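
To make the zero-shot classification and prompt-ensembling ideas concrete, here is a short sketch using OpenAI's open-source clip package (installable from github.com/openai/CLIP). The class names, prompt templates, and image path are hypothetical placeholders, and averaging normalized per-template embeddings is one common ensembling choice rather than the only one.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

class_names = ["dog", "cat", "airplane"]                      # hypothetical labels
templates = ["a photo of a {}.", "a blurry photo of a {}."]   # hypothetical prompts

with torch.no_grad():
    # Build one ensembled text embedding per class by averaging over templates.
    class_embs = []
    for name in class_names:
        tokens = clip.tokenize([t.format(name) for t in templates]).to(device)
        emb = model.encode_text(tokens)
        emb = emb / emb.norm(dim=-1, keepdim=True)
        class_embs.append(emb.mean(dim=0))
    text_features = torch.stack(class_embs)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

    # Embed the image and score it against every class embedding.
    image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
    image_features = model.encode_image(image)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(class_names, probs[0].tolist())))
```

Because the classifier is built entirely from text prompts, swapping in a new label set yields a new classifier with no retraining, which is what lets CLIP generalize to new tasks.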


Taught by

Aleksa Gordić - The AI Epiphany

Related Courses

Natural Language Processing
Columbia University via Coursera
Natural Language Processing
Stanford University via Coursera
Introduction to Natural Language Processing
University of Michigan via Coursera
moocTLH: New Challenges in Human Language Technologies
Universidad de Alicante via Miríadax
Natural Language Processing
Indian Institute of Technology, Kharagpur via Swayam