37% Better Output with 15 Lines of Code - Llama 3 Improved RAG

Offered By: All About AI via YouTube

Tags

AI Engineering Courses, Language Models Courses, Code Efficiency Courses, GROQ Courses, Retrieval Augmented Generation Courses, Ollama Courses

Course Description

Overview

Discover how to improve RAG (Retrieval-Augmented Generation) performance by 37% with just 15 lines of code in this video tutorial. Explore the implementation of improved RAG techniques for Llama 3 8B on Ollama and Llama 3 70B on Groq, learn about a common problem in local RAG systems and its solution, understand the mechanics behind the improvement, and see how the two model sizes compare. Gain insight into practical AI engineering techniques and how to significantly boost the output quality of language models with minimal code changes.
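
The 15-line change itself is shown only in the video, but a minimal sketch of one common way to get this kind of improvement in a local RAG pipeline, having the model rewrite the user's question into a better search query before retrieval, might look like the code below. The retrieve function, the query-rewriting approach, and the use of Ollama's /api/generate endpoint are illustrative assumptions here, not the video's actual code.

    # Minimal sketch (assumption): rewrite the query with Llama 3 before retrieval,
    # then answer from the retrieved context. Not the video's exact 15 lines.
    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

    def ask_llama3(prompt: str) -> str:
        """Send a single non-streaming prompt to a local Llama 3 model via Ollama."""
        resp = requests.post(
            OLLAMA_URL,
            json={"model": "llama3", "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    def improved_rag(question: str, retrieve) -> str:
        """`retrieve` is a hypothetical callable: query string -> list of text chunks."""
        # Step 1: have the model rewrite the question into a keyword-rich search query.
        search_query = ask_llama3(
            f"Rewrite this question as a short, keyword-rich search query:\n{question}"
        )
        # Step 2: retrieve context with the rewritten query instead of the raw question.
        context = "\n\n".join(retrieve(search_query))
        # Step 3: answer using only the retrieved context.
        return ask_llama3(
            f"Answer the question using only this context:\n{context}\n\nQuestion: {question}"
        )

The same wrapper pattern would apply to Llama 3 70B on Groq by swapping the local Ollama call for Groq's hosted API.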

Syllabus

Llama 3 Improved RAG Intro
Problem / Solution
Brilliant.org
How this works
Llama 3 70B Groq
Conclusion


Taught by

All About AI

Related Courses

The GenAI Stack - From Zero to Database-Backed Support Bot
Docker via YouTube
Ollama Crash Course: Running AI Models Locally Offline on CPU
1littlecoder via YouTube
AI Anytime, Anywhere - Getting Started with LLMs on Your Laptop
Docker via YouTube
Rust Ollama Tutorial - Interfacing with Ollama API Using ollama-rs
Jeremy Chone via YouTube
Ollama: Libraries, Vision Models, and OpenAI Compatibility Updates
Sam Witteveen via YouTube