YoVDO

Local RAG with Llama 3.1 for PDFs - Private Chat with Documents using LangChain and Streamlit

Offered By: Venelin Valkov via YouTube

Tags

LangChain Courses, Streamlit Courses, LLaMA (Large Language Model Meta AI) Courses, Retrieval Augmented Generation (RAG) Courses, Qdrant Courses

Course Description

Overview

Discover how to build a local Retrieval-Augmented Generation (RAG) system for efficient document processing using Large Language Models (LLMs) in this comprehensive tutorial video. Learn to extract high-quality text from PDFs, split and format documents for optimal LLM performance, create vector stores with Qdrant, implement advanced retrieval techniques, and integrate local and remote LLMs. Follow along to develop a private chat application for your documents using LangChain and Streamlit, covering everything from project structure and UI design to document ingestion, retrieval methods, and deployment on Streamlit Cloud.
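The splitting step described above is central to getting good retrieval quality: documents must be cut into chunks small enough for the LLM's context while preserving boundaries. As a minimal pure-Python sketch of the kind of overlapping, boundary-aware splitting that LangChain's text splitters perform (the function name and parameters here are illustrative, not LangChain's API):

```python
def split_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks, preferring natural boundaries."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        # Prefer to break at a paragraph, sentence, or word boundary
        # inside the current window rather than mid-word.
        if end < len(text):
            for sep in ("\n\n", ". ", " "):
                cut = text.rfind(sep, start, end)
                if cut > start:
                    end = cut + len(sep)
                    break
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Step back by `overlap` characters so context spans chunk borders.
        start = max(end - overlap, start + 1)
    return chunks
```

The overlap means a sentence falling on a chunk border still appears whole in at least one chunk, which is why splitters expose it as a tunable parameter.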

Syllabus

- What is RagBase?
- Text tutorial on MLExpert.io
- How RagBase works
- Project Structure
- UI with Streamlit
- Config
- File Upload
- Document Processing & Ingestion
- Retrieval: Reranker & LLMChainFilter
- QA Chain
- Chat Memory/History
- Create Models
- Start RagBase Locally
- Deploy to Streamlit Cloud
- Conclusion
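The Chat Memory/History step in the syllabus amounts to keeping a bounded window of past exchanges to prepend to each new question. A minimal sketch of that idea (the class and method names are illustrative, not RagBase's or LangChain's actual API):

```python
from collections import deque


class ChatMemory:
    """Windowed chat history: keeps only the last `max_turns` exchanges
    so the context sent to the LLM stays within its token budget."""

    def __init__(self, max_turns: int = 5):
        self.turns = deque(maxlen=max_turns)

    def add(self, question: str, answer: str) -> None:
        self.turns.append((question, answer))

    def as_prompt(self) -> str:
        # Render history as a simple Human/Assistant transcript, the
        # common format for injecting chat context into a prompt.
        return "\n".join(
            f"Human: {q}\nAssistant: {a}" for q, a in self.turns
        )
```

A fixed-size `deque` silently drops the oldest turn once the window is full, which is the simplest way to cap prompt growth in a long chat session.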


Taught by

Venelin Valkov

Related Courses

LLaMA: Open and Efficient Foundation Language Models - Paper Explained
Yannic Kilcher via YouTube
Alpaca & LLaMA - Can it Compete with ChatGPT?
Venelin Valkov via YouTube
Experimenting with Alpaca & LLaMA
Aladdin Persson via YouTube
What's LLaMA? ChatLLaMA? - And Some ChatGPT/InstructGPT
Aladdin Persson via YouTube
Llama Index - Step by Step Introduction
echohive via YouTube