Make LLM Apps Sane Again - Forgetting Incorrect Data in Real Time
Offered By: Conf42 via YouTube
Course Description
Overview
Explore a conference talk on improving Large Language Model (LLM) applications by implementing real-time data correction. Delve into LLM limitations and various correction methods, including fine-tuning, prompt engineering, and Retrieval-Augmented Generation (RAG). Learn about vector embeddings, popular RAG use cases, and the potential risks of compromised RAG data. Discover a solution using real-time vector indexing, and follow along with a practical demonstration of building a chatbot. Gain insights on the importance of reactivity in LLM applications and walk away with key takeaways for enhancing LLM performance and reliability.
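The retrieval step behind RAG that the talk covers can be sketched in a few lines: embed the query and every document, then return the most similar documents as context for the LLM. The sketch below uses a toy bag-of-words embedding and cosine similarity purely for illustration; real RAG pipelines use learned dense embeddings from a model, and the function names here are illustrative, not from the talk.

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words token counts.
    # Real systems use dense vectors from an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "the capital of france is paris",
    "python is a programming language",
    "llms can hallucinate incorrect facts",
]

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query; the top k become
    # the context passed to the LLM alongside the user's question.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

print(retrieve("what is the capital of france?", documents))
```

The key property RAG relies on is that correcting the document store corrects the model's answers, without any retraining.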
Syllabus
Intro
Preamble
Agenda
LLMs
LLM limitations
How to correct the model?
Fine-tuning
Prompt engineering
Problems with manual prompting
RAG
What are vector embeddings?
Popular RAG use cases
What happens if the RAG data is compromised?
Solution: use a real-time vector index
Practice: build a chatbot
Pathway demo
Reactivity is key
Takeaways
Thank you!
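The talk's central idea, forgetting incorrect data in real time, amounts to a retrieval index that supports live deletion: once a compromised or retracted document is removed, it can never be served as context again. A minimal in-memory sketch of that behavior is below; the class and method names are illustrative assumptions, and the talk itself demonstrates this with Pathway's streaming vector index rather than this toy structure.

```python
class RealTimeIndex:
    """Toy index with live upsert/delete. Illustrative only:
    the talk uses a real streaming vector index (Pathway)."""

    def __init__(self):
        self.docs = {}  # doc_id -> text

    def upsert(self, doc_id, text):
        # New or corrected facts become retrievable immediately.
        self.docs[doc_id] = text

    def delete(self, doc_id):
        # "Forgetting": a retracted fact disappears from retrieval at once.
        self.docs.pop(doc_id, None)

    def retrieve(self, query):
        # Naive keyword-overlap scoring stands in for vector similarity.
        q = set(query.lower().split())
        scored = [(len(q & set(t.lower().split())), t) for t in self.docs.values()]
        scored = [s for s in scored if s[0] > 0]
        return max(scored, default=(0, None))[1]

index = RealTimeIndex()
index.upsert("d1", "the ceo is alice")
index.upsert("d2", "headquarters are in berlin")
print(index.retrieve("who is the ceo"))  # matches the "ceo" document
index.delete("d1")                       # fact retracted in real time
print(index.retrieve("who is the ceo"))  # nothing left to retrieve
```

This is the reactivity the takeaways stress: a batch-rebuilt index would keep serving the bad document until the next rebuild, while a real-time index reflects the deletion on the very next query.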
Taught by
Conf42
Related Courses
Discover, Validate & Launch New Business Ideas with ChatGPT (Udemy)
150 Digital Marketing Growth Hacks for Businesses (Udemy)
AI: Executive Briefing (Pluralsight)
The Complete Digital Marketing Guide - 25 Courses in 1 (Udemy)
Learn to build a voice assistant with Alexa (Udemy)