Building Multimodal AI RAG with LlamaIndex, NVIDIA NIM, and Milvus - LLM App Development
Offered By: NVIDIA via YouTube
Course Description
Overview
Explore the process of building a multimodal AI retrieval-augmented generation (RAG) application in this 17-minute video tutorial. Learn how to convert documents into text using vision-language models such as NeVA 22B and DePlot, store and retrieve embeddings efficiently with GPU-accelerated Milvus, handle user queries with the Llama 3 model served through the NVIDIA NIM APIs, and integrate all of these components with LlamaIndex. Gain practical insight into document processing, vector database management, inference, and orchestration for a smooth Q&A experience. Access the accompanying notebook for hands-on practice, and join the NVIDIA Developer Program for additional resources. The session also touches on how technologies such as LangChain, Mixtral, and the NIM APIs can be combined to develop advanced LLM applications.
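The pipeline described above (embed documents, store vectors, retrieve the most relevant chunks, and assemble a prompt for the LLM) can be illustrated with a toy sketch. This is not the actual Milvus or LlamaIndex API: `ToyVectorStore`, the keyword-based `embed` function, and `build_prompt` are hypothetical stand-ins chosen to show the retrieval-then-prompt flow in self-contained Python.

```python
import math

# Hypothetical keyword vocabulary standing in for a real embedding model.
VOCAB = ["milvus", "gpu", "llama", "nim", "rag"]

def embed(text):
    """Toy embedding: count occurrences of each vocabulary word."""
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorStore:
    """Stand-in for a Milvus collection: stores (text, embedding) pairs."""
    def __init__(self):
        self.entries = []

    def add(self, text):
        self.entries.append((text, embed(text)))

    def top_k(self, query_embedding, k=2):
        ranked = sorted(self.entries,
                        key=lambda e: cosine(e[1], query_embedding),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

def build_prompt(query, store, k=2):
    """Retrieve context, then assemble the prompt the LLM would receive."""
    context = "\n".join(store.top_k(embed(query), k))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Usage: index two document chunks, then answer a query against them.
store = ToyVectorStore()
store.add("Milvus is a GPU accelerated vector database")
store.add("Llama 3 is served through NIM")
prompt = build_prompt("How does Milvus use the GPU", store, k=1)
```

In the real application, `embed` would be a vision-language or text embedding model, `ToyVectorStore` would be a Milvus collection, and the final prompt would be sent to Llama 3 via a NIM endpoint, with LlamaIndex wiring the steps together.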
Syllabus
Building Multimodal AI RAG with LlamaIndex, NVIDIA NIM, and Milvus | LLM App Development
Taught by
NVIDIA Developer
Related Courses
Pinecone Vercel Starter Template and RAG - Live Code Review Part 2 (Pinecone via YouTube)
Will LLMs Kill Search? The Future of Information Retrieval (Aleksa Gordić - The AI Epiphany via YouTube)
RAG But Better: Rerankers with Cohere AI - Improving Retrieval Pipelines (James Briggs via YouTube)
Advanced RAG - Contextual Compressors and Filters - Lecture 4 (Sam Witteveen via YouTube)
LangChain Multi-Query Retriever for RAG - Advanced Technique for Broader Vector Space Search (James Briggs via YouTube)