Intro to Multi-Modal ML with OpenAI's CLIP
Offered By: James Briggs via YouTube
Course Description
Overview
Explore OpenAI's CLIP, a multi-modal model capable of understanding relationships between text and images, in this 23-minute tutorial. Learn how to use CLIP via the Hugging Face library to create text and image embeddings, perform text-image similarity searches, and explore alternative image and text search methods. Gain practical insights into multi-modal machine learning and discover the power of CLIP in bridging the gap between textual and visual data processing.
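The embedding workflow described above can be sketched with the Hugging Face `transformers` library. This is a minimal illustration, not the tutorial's exact code; the checkpoint name `openai/clip-vit-base-patch32` is an assumption (a commonly used public CLIP checkpoint), and the image is a blank placeholder standing in for real data.

```python
# Sketch: creating CLIP text and image embeddings with Hugging Face transformers.
# Assumption: the "openai/clip-vit-base-patch32" checkpoint (512-d projections).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

texts = ["a photo of a dog", "a photo of a city skyline"]
image = Image.new("RGB", (224, 224))  # placeholder image for illustration

# The processor tokenizes text and resizes/normalizes images in one call.
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    # Project text and images into the same shared embedding space.
    text_emb = model.get_text_features(
        input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
    )
    image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])

print(text_emb.shape)   # 2 texts x 512 dimensions
print(image_emb.shape)  # 1 image x 512 dimensions
```

Because both modalities land in the same 512-dimensional space, a text vector can be compared directly against image vectors, which is what makes text-image search possible.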
Syllabus
Intro
What is CLIP?
Getting started
Creating text embeddings
Creating image embeddings
Embedding a lot of images
Text-image similarity search
Alternative image and text search
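The similarity-search step in the syllabus reduces to ranking image embeddings by cosine similarity against a text query embedding. A minimal NumPy sketch, using random vectors as stand-ins for real CLIP outputs:

```python
# Sketch: text-image similarity search over precomputed embeddings.
# The random vectors below are placeholders for real CLIP embeddings.
import numpy as np

rng = np.random.default_rng(0)
image_embeddings = rng.normal(size=(100, 512))  # 100 images, 512-d vectors
query_embedding = rng.normal(size=(512,))       # one text-query vector

# Normalize so a dot product equals cosine similarity.
image_norm = image_embeddings / np.linalg.norm(
    image_embeddings, axis=1, keepdims=True
)
query_norm = query_embedding / np.linalg.norm(query_embedding)

scores = image_norm @ query_norm         # cosine similarity per image
top_k = np.argsort(scores)[::-1][:5]     # indices of the 5 best matches
print(top_k, scores[top_k])
```

For large collections, the same ranking is typically delegated to an approximate-nearest-neighbor index rather than a full matrix product, but the normalize-then-dot pattern is the core of the search.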
Taught by
James Briggs
Related Courses
Natural Language Processing
Columbia University via Coursera
Natural Language Processing
Stanford University via Coursera
Introduction to Natural Language Processing
University of Michigan via Coursera
moocTLH: Nuevos retos en las tecnologías del lenguaje humano (New Challenges in Human Language Technologies)
Universidad de Alicante via Miríadax
Natural Language Processing
Indian Institute of Technology, Kharagpur via Swayam