Intro to Multi-Modal ML with OpenAI's CLIP
Offered By: James Briggs via YouTube
Course Description
Overview
Explore OpenAI's CLIP, a multi-modal model capable of understanding relationships between text and images, in this 23-minute tutorial. Learn how to use CLIP via the Hugging Face library to create text and image embeddings, perform text-image similarity searches, and explore alternative image and text search methods. Gain practical insight into multi-modal machine learning and see how CLIP maps text and images into a shared embedding space, making direct text-to-image comparison possible.
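To make that workflow concrete, here is a minimal sketch (not code from the video) of creating CLIP text and image embeddings with the Hugging Face transformers library and comparing them with cosine similarity. The checkpoint name openai/clip-vit-base-patch32 and the local file dog.jpg are illustrative assumptions.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "openai/clip-vit-base-patch32"  # assumed checkpoint
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Text embeddings: tokenize, then project into the shared space
texts = ["a photo of a dog", "a photo of a city skyline"]
text_inputs = processor(text=texts, return_tensors="pt", padding=True)
with torch.no_grad():
    text_emb = model.get_text_features(**text_inputs)   # shape (2, 512)

# Image embeddings: preprocess pixels, then project into the same space
image = Image.open("dog.jpg")  # hypothetical local file
image_inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    image_emb = model.get_image_features(**image_inputs)  # shape (1, 512)

# Cosine similarity: unit-normalize both sides, then dot product
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
similarity = image_emb @ text_emb.T
print(similarity)  # a higher score means a closer text-image match

Because both embeddings live in the same 512-dimensional space, a plain dot product of the normalized vectors serves as the similarity score.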
Syllabus
Intro
What is CLIP?
Getting started
Creating text embeddings
Creating image embeddings
Embedding a lot of images
Text-image similarity search
Alternative image and text search
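As a taste of the "Embedding a lot of images" and "Text-image similarity search" steps above, here is one possible sketch of indexing a batch of images and ranking them against a text query. The file names and batch size are placeholders, and the in-memory numpy index stands in for whatever storage the tutorial uses.

import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "openai/clip-vit-base-patch32"  # assumed checkpoint
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

paths = ["img_000.jpg", "img_001.jpg", "img_002.jpg"]  # hypothetical files

# Embed images in small batches so memory stays bounded
all_embs = []
batch_size = 16
for i in range(0, len(paths), batch_size):
    batch = [Image.open(p) for p in paths[i:i + batch_size]]
    inputs = processor(images=batch, return_tensors="pt")
    with torch.no_grad():
        emb = model.get_image_features(**inputs)
    emb = emb / emb.norm(dim=-1, keepdim=True)  # unit-normalize for cosine search
    all_embs.append(emb.cpu().numpy())
index = np.concatenate(all_embs)  # shape (num_images, 512)

# Text query: embed, normalize, rank every image by dot product
query = processor(text=["two dogs playing in the snow"],
                  return_tensors="pt", padding=True)
with torch.no_grad():
    q = model.get_text_features(**query)
q = (q / q.norm(dim=-1, keepdim=True)).cpu().numpy()
scores = (index @ q.T).squeeze()
best = scores.argsort()[::-1][:3]  # indices of the top-3 matches
print([paths[i] for i in best])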
Taught by
James Briggs
Related Courses
Building a unique NLP project: 1984 book vs 1984 album (Coursera Project Network via Coursera)
Exam Prep AI-102: Microsoft Azure AI Engineer Associate (Whizlabs via Coursera)
Amazon Echo Reviews Sentiment Analysis Using NLP (Coursera Project Network via Coursera)
Amazon Translate: Translate documents with batch translation (Coursera Project Network via Coursera)
Analyze Text Data with Yellowbrick (Coursera Project Network via Coursera)