
Make LLM Apps Sane Again - Forgetting Incorrect Data in Real Time

Offered By: Conf42 via YouTube

Tags

Retrieval Augmented Generation Courses, Chatbot Courses, Prompt Engineering Courses, Fine-Tuning Courses

Course Description

Overview

Explore a conference talk on improving Large Language Model (LLM) applications by implementing real-time data correction. Delve into LLM limitations and various correction methods, including fine-tuning, prompt engineering, and Retrieval-Augmented Generation (RAG). Learn about vector embeddings, popular RAG use cases, and the potential risks of compromised RAG data. Discover a solution based on real-time vector indexing, and follow along with a practical demonstration of building a chatbot. Gain insight into the importance of reactivity in LLM applications and walk away with key takeaways for enhancing LLM performance and reliability.
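To make the core idea concrete, here is a minimal sketch of a "forgettable" real-time vector index of the kind the talk advocates: documents can be added or removed at any moment, so retrieval immediately stops surfacing data known to be incorrect. All names here (RealtimeVectorIndex, toy_embed, forget) are hypothetical illustrations, and the hash-based embedding is a stand-in for a real embedding model; the talk's actual demo uses the Pathway framework, whose API is not reproduced here.

```python
# Hypothetical sketch of real-time RAG "forgetting": remove a compromised
# document from the vector index and later queries no longer retrieve it.
import hashlib
import math
from typing import Dict, List, Tuple

def toy_embed(text: str, dim: int = 64) -> List[float]:
    """Stand-in for a real embedding model: hash words into a fixed-size, normalized vector."""
    vec = [0.0] * dim
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class RealtimeVectorIndex:
    """In-memory index supporting upsert, delete, and cosine-similarity search."""

    def __init__(self) -> None:
        self._docs: Dict[str, Tuple[str, List[float]]] = {}

    def upsert(self, doc_id: str, text: str) -> None:
        self._docs[doc_id] = (text, toy_embed(text))

    def forget(self, doc_id: str) -> None:
        # "Forgetting" incorrect data is just removing it from the live index.
        self._docs.pop(doc_id, None)

    def query(self, question: str, k: int = 3) -> List[str]:
        q = toy_embed(question)
        scored = [
            (sum(a * b for a, b in zip(q, emb)), text)
            for text, emb in self._docs.values()
        ]
        scored.sort(reverse=True)
        return [text for _, text in scored[:k]]

if __name__ == "__main__":
    index = RealtimeVectorIndex()
    index.upsert("d1", "The support hotline is open 9am to 5pm")
    index.upsert("d2", "The support hotline is open 24/7")  # incorrect entry
    print(index.query("When is the support hotline open?"))

    index.forget("d2")  # correction arrives: drop the bad document in real time
    print(index.query("When is the support hotline open?"))
```

In a production setup the same pattern applies, except the index is updated by a streaming pipeline (as in the Pathway demo) rather than by manual calls, which is what makes the chatbot's answers react to corrections without retraining or re-prompting.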

Syllabus

Intro
Preamble
Agenda
LLMs
LLM limitations
How to correct the model?
Fine-tuning
Prompt engineering
Problems with manual prompting
RAG
What are vector embeddings?
Popular RAG use cases
What happens if the RAG data is compromised?
Solution: use a real-time vector index
Practice: build a chatbot
Pathway demo
Reactivity is key
Takeaways
Thank you!


Taught by

Conf42

Related Courses

How to Build a Chatbot Without Coding
IBM via Coursera
Building Bots for Journalism: Software You Talk With
Knight Center for Journalism in the Americas via Independent
Microsoft Bot Framework and Conversation as a Platform
Microsoft via edX
AI Chatbots without Programming
IBM via edX
Smarter Chatbots with Node-RED and Watson AI
IBM via edX