Building Multimodal AI RAG with LlamaIndex, NVIDIA NIM, and Milvus - LLM App Development
Offered By: NVIDIA via YouTube
Course Description
Overview
Explore the process of building a multimodal AI retrieval-augmented generation (RAG) application in this 17-minute video tutorial. Learn how to convert documents, images, and charts into text using vision-language models such as NeVA 22B and DePlot; store and retrieve embeddings efficiently with GPU-accelerated Milvus; answer user queries with the Llama 3 model served through NVIDIA NIM APIs; and tie all the components together with LlamaIndex. Gain practical insights into document processing, vector database management, inference, and orchestration for a smooth Q&A experience. Access the accompanying notebook for hands-on practice, and join the NVIDIA Developer Program for additional resources. Discover how technologies such as LangChain, Mixtral, and NIM APIs combine to build advanced LLM applications.
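The RAG flow the video walks through (embed document chunks, retrieve the most relevant ones for a query, then hand them to the LLM as context) can be sketched in plain Python. This is a simplified illustration, not the tutorial's actual code: toy bag-of-words vectors stand in for a real embedding model, and in the video Milvus stores the vectors while Llama 3 (via NIM) generates the answer.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would call an
    # embedding model and store the vectors in Milvus.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    # Rank chunks by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    # The retrieved chunks become grounding context for the LLM call.
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"

# Chunks as they might look after VLM-based document-to-text conversion.
chunks = [
    "Milvus is a vector database for storing and searching embeddings.",
    "NeVA and DePlot convert images and charts into text descriptions.",
    "LlamaIndex orchestrates loading, indexing, and querying.",
]
query = "which database stores embeddings?"
top = retrieve(query, chunks)
print(build_prompt(query, top))
```

In the tutorial, LlamaIndex replaces all of this hand-rolled plumbing: it handles chunking, embedding, the Milvus vector store, and the query engine behind a single interface.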
Syllabus
Building Multimodal AI RAG with LlamaIndex, NVIDIA NIM, and Milvus | LLM App Development
Taught by
NVIDIA Developer
Related Courses
Building a Queryable Journal with OpenAI, Markdown, and LlamaIndex - Samuel Chan via YouTube
Building an AI Language Tutor with Pinecone, LlamaIndex, GPT-3, and BeautifulSoup - Samuel Chan via YouTube
Locally-Hosted Offline LLM with LlamaIndex and OPT - Implementing Open-Source Instruction-Tuned Language Models - Samuel Chan via YouTube
Understanding Embeddings in Large Language Models - LlamaIndex and Chroma DB - Samuel Chan via YouTube
A Deep Dive Into Retrieval-Augmented Generation with LlamaIndex - Linux Foundation via YouTube