Local RAG with Llama 3.1 for PDFs - Private Chat with Documents using LangChain and Streamlit
Offered By: Venelin Valkov via YouTube
Course Description
Overview
Discover how to build a local Retrieval-Augmented Generation (RAG) system for efficient document processing using Large Language Models (LLMs) in this comprehensive tutorial video. Learn to extract high-quality text from PDFs, split and format documents for optimal LLM performance, create vector stores with Qdrant, implement advanced retrieval techniques, and integrate local and remote LLMs. Follow along to develop a private chat application for your documents using LangChain and Streamlit, covering everything from project structure and UI design to document ingestion, retrieval methods, and deployment on Streamlit Cloud.
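The ingestion pipeline described above (PDF text extraction, splitting, embedding, and a Qdrant vector store) can be approximated in a few lines of LangChain. The sketch below is illustrative rather than RagBase's actual code: the PyPDFLoader, FastEmbed embeddings, chunk sizes, and collection name are all assumptions.

```python
# Illustrative ingestion sketch (not RagBase's actual code): load a PDF,
# split it into overlapping chunks, and index them in an in-memory Qdrant
# collection so everything stays local.
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import FastEmbedEmbeddings
from langchain_community.vectorstores import Qdrant
from langchain_text_splitters import RecursiveCharacterTextSplitter

docs = PyPDFLoader("example.pdf").load()  # hypothetical file path

chunks = RecursiveCharacterTextSplitter(
    chunk_size=1024,  # assumed chunk settings; tune for your documents
    chunk_overlap=128,
).split_documents(docs)

vector_store = Qdrant.from_documents(
    chunks,
    FastEmbedEmbeddings(),           # local embedding model, runs offline
    location=":memory:",             # swap for a Qdrant server URL to persist
    collection_name="ragbase-demo",  # hypothetical collection name
)
retriever = vector_store.as_retriever(search_kwargs={"k": 5})
```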
Syllabus
- What is RagBase?
- Text tutorial on MLExpert.io
- How RagBase works
- Project Structure
- UI with Streamlit
- Config
- File Upload
- Document Processing & Ingestion
- Retrieval: Reranker & LLMChainFilter (sketched after this list)
- QA Chain
- Chat Memory/History
- Create Models
- Start RagBase Locally
- Deploy to Streamlit Cloud
- Conclusion
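The "Retrieval: Reranker & LLMChainFilter" step above pairs a reranker with an LLM-based relevance filter. One way to express that combination with LangChain's contextual compression API is sketched below; FlashrankRerank (which requires the flashrank package), the llama3.1 Ollama tag, and top_n=3 are assumptions, not confirmed details of the video.

```python
# Sketch of compressed retrieval: rerank the retrieved chunks, then let a
# local LLM filter out chunks it judges irrelevant to the query.
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import (
    DocumentCompressorPipeline,
    FlashrankRerank,
    LLMChainFilter,
)
from langchain_community.chat_models import ChatOllama

llm = ChatOllama(model="llama3.1", temperature=0)  # local model via Ollama

compressor = DocumentCompressorPipeline(
    transformers=[
        FlashrankRerank(top_n=3),      # keep the 3 highest-scoring chunks
        LLMChainFilter.from_llm(llm),  # drop chunks the LLM deems irrelevant
    ]
)
compressed_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=retriever,  # `retriever` from the ingestion sketch above
)
```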
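The QA chain and chat memory/history steps can be wired together with LCEL and RunnableWithMessageHistory, which threads per-session history into the prompt. The prompt wording, session bookkeeping, and input keys below are assumptions for the sketch, not the exact RagBase chain.

```python
# Sketch of a history-aware QA chain over the compressed retriever.
from operator import itemgetter

from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using only this context:\n\n{context}"),
    MessagesPlaceholder("chat_history"),
    ("human", "{question}"),
])

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# `compressed_retriever` and `llm` come from the retrieval sketch above.
chain = (
    {
        "context": itemgetter("question") | compressed_retriever | format_docs,
        "question": itemgetter("question"),
        "chat_history": itemgetter("chat_history"),
    }
    | prompt
    | llm
    | StrOutputParser()
)

histories = {}  # session_id -> ChatMessageHistory

def get_history(session_id: str) -> ChatMessageHistory:
    return histories.setdefault(session_id, ChatMessageHistory())

qa = RunnableWithMessageHistory(
    chain,
    get_history,
    input_messages_key="question",
    history_messages_key="chat_history",
)

answer = qa.invoke(
    {"question": "What are the documents about?"},
    config={"configurable": {"session_id": "demo"}},
)
```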
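For the Streamlit UI and file-upload steps, a minimal app accepts PDFs, runs ingestion once, and then chats through the QA chain. The build_qa helper below is hypothetical shorthand for the ingestion and chain-building sketches above, and the widget labels and session key are assumptions.

```python
# Minimal Streamlit chat UI sketch over the QA chain.
import tempfile

import streamlit as st

st.set_page_config(page_title="RagBase")
st.title("Private chat with your documents")

uploaded_files = st.file_uploader(
    "Upload PDF documents", type="pdf", accept_multiple_files=True
)
if uploaded_files and "qa" not in st.session_state:
    with st.spinner("Ingesting documents..."):
        paths = []
        for f in uploaded_files:
            # Persist each upload to a temp file so the PDF loader can read it.
            with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as tmp:
                tmp.write(f.getbuffer())
                paths.append(tmp.name)
        # build_qa is a hypothetical helper wrapping the sketches above.
        st.session_state.qa = build_qa(paths)

if question := st.chat_input("Ask about your documents"):
    st.chat_message("user").write(question)
    response = st.session_state.qa.invoke(
        {"question": question},
        config={"configurable": {"session_id": "streamlit"}},
    )
    st.chat_message("assistant").write(response)
```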
Taught by
Venelin Valkov
Related Courses
- Better Llama with Retrieval Augmented Generation - RAG (James Briggs via YouTube)
- Live Code Review - Pinecone Vercel Starter Template and Retrieval Augmented Generation (Pinecone via YouTube)
- Nvidia's NeMo Guardrails - Full Walkthrough for Chatbots - AI (James Briggs via YouTube)
- Hugging Face LLMs with SageMaker - RAG with Pinecone (James Briggs via YouTube)
- Supercharge Your LLM Applications with RAG (Data Science Dojo via YouTube)