Building Multimodal Search and RAG

Offered By: DeepLearning.AI via Coursera

Tags

Contrastive Learning Courses Recommender Systems Courses Multimodal AI Courses Retrieval Augmented Generation Courses

Course Description

Overview

Learn how to build multimodal search and RAG systems. RAG systems enhance an LLM by incorporating proprietary data into the prompt context. Typically, RAG applications use text documents, but what if the desired context includes multimedia like images, audio, and video? This course covers the technical aspects of implementing RAG with multimodal data. In this course you will:

  1. Learn how multimodal models are trained through contrastive learning and implement it on a real dataset.
  2. Build any-to-any multimodal search to retrieve relevant context across different data types.
  3. Learn how LLMs are trained to understand multimodal data through visual instruction tuning and use them on multiple image reasoning examples.
  4. Implement an end-to-end multimodal RAG system that analyzes retrieved multimodal context to generate insightful answers.
  5. Explore industry applications like visually analyzing invoices and flowcharts to output structured data.
  6. Create a multi-vector recommender system that suggests relevant items by comparing their similarities across multiple modalities.

As AI systems increasingly need to process and reason over multiple data modalities, learning how to build such systems is an important skill for AI developers. This course equips you with the key skills to embed, retrieve, and generate across different modalities. By gaining a strong foundation in multimodal AI, you'll be prepared to build smarter search, RAG, and recommender systems. Minimal code sketches of several of these techniques follow below.
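Contrastive learning (item 1) trains paired encoders so that matching image–text pairs land close together in a shared embedding space while mismatched pairs are pushed apart. A minimal PyTorch sketch of the symmetric CLIP-style loss, assuming the encoder outputs are already computed:

```python
# A minimal sketch of CLIP-style contrastive training, assuming a batch of
# paired image/text embeddings produced by two encoders.
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings."""
    # Normalize so dot products become cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Pairwise similarity matrix: logits[i][j] = sim(image i, text j).
    logits = image_emb @ text_emb.T / temperature
    # The matching pair sits on the diagonal, so the target class for row i is i.
    targets = torch.arange(len(logits), device=logits.device)
    # Pull matching pairs together and push mismatched pairs apart, both ways.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.T, targets)
    return (loss_i2t + loss_t2i) / 2

# Usage with random stand-ins for encoder outputs:
# loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```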
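Any-to-any search (item 2) works because every modality is embedded into the same vector space, so nearest-neighbor lookup is modality-agnostic. A minimal sketch, assuming a hypothetical embed() that maps text, images, or audio into that shared space (in practice a vector database would handle the lookup):

```python
# A minimal sketch of any-to-any search over a shared embedding space.
import numpy as np

def search(query_vec, corpus_vecs, k=3):
    """Return indices of the k nearest corpus items by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    sims = c @ q
    return np.argsort(-sims)[:k]

# Because every modality lives in one space, a text query can retrieve
# image or audio items and vice versa, e.g. (embed() is assumed):
# hits = search(embed("a dog catching a frisbee"),
#               np.stack([embed(item) for item in media_corpus]))
```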
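For the end-to-end RAG step (items 4 and 5), a retrieved image is passed alongside the question to a vision-capable LLM, which reasons over the visual context to produce the answer. The sketch below uses the OpenAI chat API as one example backend; the course's own stack may differ, and answer_with_image is an illustrative helper:

```python
# A minimal sketch of the generation step of multimodal RAG: ask a
# vision-capable LLM a question about a retrieved image.
import base64
from openai import OpenAI

def answer_with_image(question, image_path):
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Industry-style usage, e.g. extracting structured data from an invoice image:
# answer_with_image("Return the line items and total as JSON.", "invoice.png")
```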
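A multi-vector recommender (item 6) keeps one embedding per modality for each item and blends per-modality similarities into a single score. A minimal sketch, with illustrative modality weights:

```python
# A minimal sketch of a multi-vector recommender: each item is a dict of
# modality -> vector, and overall similarity is a weighted blend of
# per-modality cosine similarities. The weights are illustrative.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(seed, items, weights=None, k=3):
    """Return indices of the k items most similar to the seed item."""
    weights = weights or {"image": 0.6, "text": 0.4}
    scores = [
        sum(w * cosine(seed[m], item[m]) for m, w in weights.items())
        for item in items
    ]
    return np.argsort(scores)[::-1][:k]
```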

Syllabus

  • Building Multimodal Search and RAG

Taught by

Sebastian Witalec

Related Courses

Stanford Seminar - Audio Research: Transformers for Applications in Audio, Speech and Music
Stanford University via YouTube
How to Represent Part-Whole Hierarchies in a Neural Network - Geoff Hinton's Paper Explained
Yannic Kilcher via YouTube
OpenAI CLIP - Connecting Text and Images - Paper Explained
Aleksa Gordić - The AI Epiphany via YouTube
Learning Compact Representation with Less Labeled Data from Sensors
tinyML via YouTube
Human Activity Recognition - Learning with Less Labels and Privacy Preservation
University of Central Florida via YouTube