Navigating Challenges and Enhancing Performance of LLM-based Applications
Offered By: The ASF via YouTube
Course Description
Overview
Explore the intricacies of Retrieval-Augmented Generation (RAG) and its impact on Large Language Model (LLM) applications in this 28-minute conference talk. Delve into the fusion of retrieval and generation models, and understand how RAG improves a model's text comprehension and response accuracy by grounding generation in external information databases. Examine the challenges of evaluating LLM applications, particularly in domain-specific contexts, and discover essential strategies for assessing and optimizing RAG performance. Learn from Atita Arora, a Solution Architect at Qdrant and respected expert in information retrieval systems, as she shares insights on navigating challenges and improving LLM-based applications. Gain valuable knowledge on calibrating information retrieval systems, leveraging vectors in e-commerce search, and fostering diversity and inclusion in the tech industry.
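The talk itself contains no code, but the retrieve-then-generate flow it describes can be illustrated with a minimal sketch. The example below is an assumption-laden illustration, not material from the talk: it assumes a locally running Qdrant instance with a collection named "docs" already populated with embedded passages carrying a "text" payload, uses sentence-transformers for query embedding, and calls the OpenAI chat API as one example of a generation step.

    # Minimal RAG sketch (illustrative only; names and parameters are assumptions).
    from qdrant_client import QdrantClient
    from sentence_transformers import SentenceTransformer
    from openai import OpenAI

    qdrant = QdrantClient(url="http://localhost:6333")   # assumed local Qdrant instance
    embedder = SentenceTransformer("all-MiniLM-L6-v2")   # any embedding model would do
    llm = OpenAI()                                        # any LLM client would do

    def answer(question: str, top_k: int = 3) -> str:
        # 1. Retrieve: embed the question and fetch the nearest stored passages.
        query_vector = embedder.encode(question).tolist()
        hits = qdrant.search(collection_name="docs", query_vector=query_vector, limit=top_k)
        context = "\n".join(hit.payload["text"] for hit in hits)

        # 2. Augment: ground the prompt in the retrieved passages.
        prompt = (
            "Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )

        # 3. Generate: hand the grounded prompt to an LLM.
        response = llm.chat.completions.create(
            model="gpt-4o-mini",  # example model choice
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

Evaluating such a pipeline, as the talk discusses, typically means scoring both the retrieval step (are the fetched passages relevant?) and the generation step (is the answer faithful to them?) rather than judging the final text alone.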
Syllabus
Navigating Challenges and Enhancing Performance of LLM-based Applications
Taught by
The ASF
Related Courses
Qdrant - A Vector Search Engine in Rust (Rust via YouTube)
Introduction to Retrieval Augmented Generation (RAG) (Duke University via Coursera)
Advanced RAG with Llama 3 in LangChain - Building a PDF Chat System (Venelin Valkov via YouTube)
Hands-On AI: RAG using LlamaIndex (LinkedIn Learning)
Local RAG with Llama 3.1 for PDFs - Private Chat with Documents using LangChain and Streamlit (Venelin Valkov via YouTube)