Custom RAG Pipelines and LLM Fine-Tuning - A Gradient Tutorial
Offered By: Data Centric via YouTube
Course Description
Overview
Explore the development of a custom RAG pipeline using a fine-tuned 13B-parameter open-source model that mimics Yoda's speech style in this comprehensive tutorial video. Discover valuable engineering tips for deploying fine-tuned models within RAG pipelines and learn efficient model fine-tuning techniques using Gradient's platform. Gain insights into the technical overview, open-source model selection, Gradient workspace setup, fine-tuning process, LoRA explanation, hyper-parameter optimization, and testing of both the fine-tuned model and the RAG pipeline. Access complementary resources, including a blog post, GitHub repository, and additional learning materials, to deepen your understanding of AI, Data Science, and Large Language Models.
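To give a feel for the deployment pattern the video covers, the sketch below strings a toy retriever together with a Gradient-hosted model adapter. It assumes the gradientai Python SDK (the Gradient client with get_base_model, create_model_adapter, fine_tune, and complete) and credentials set in the environment; the base-model slug, adapter name, documents, and training samples are placeholders for illustration, not the 13B model or data used in the video.

from gradientai import Gradient  # assumes the gradientai SDK; reads GRADIENT_ACCESS_TOKEN
                                 # and GRADIENT_WORKSPACE_ID from the environment

# Toy in-memory "document store"; a real pipeline would use an embedding-based vector store.
DOCS = [
    "Gradient lets you fine-tune open-source models by training small LoRA adapters.",
    "A RAG pipeline retrieves relevant context and injects it into the model's prompt.",
    "The fine-tuned model in the tutorial answers in Yoda's inverted speech style.",
]

# A few toy fine-tuning samples in an instruction/response format; real training data
# would be far larger and formatted for the chosen base model's prompt template.
SAMPLES = [
    {"inputs": "### Instruction: What is RAG?\n\n### Response: Retrieval augmented "
               "generation, it is. Context the model retrieves, then answer it does."},
    {"inputs": "### Instruction: What is LoRA?\n\n### Response: Low-rank adapters, "
               "trained they are, while frozen the base weights stay."},
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval, standing in for a proper embedding search."""
    terms = set(question.lower().split())
    ranked = sorted(DOCS, key=lambda d: len(terms & set(d.lower().split())), reverse=True)
    return ranked[:k]

def main() -> None:
    gradient = Gradient()
    base = gradient.get_base_model(base_model_slug="nous-hermes2")  # placeholder slug
    adapter = base.create_model_adapter(name="yoda-rag-demo")       # placeholder name
    try:
        adapter.fine_tune(samples=SAMPLES)  # train the adapter on the toy samples

        question = "What does a RAG pipeline do?"
        context = "\n".join(retrieve(question))
        prompt = f"### Context:\n{context}\n\n### Instruction: {question}\n\n### Response:"
        completion = adapter.complete(query=prompt, max_generated_token_count=200)
        print(completion.generated_output)
    finally:
        adapter.delete()  # clean up the demo adapter
        gradient.close()

if __name__ == "__main__":
    main()

In a production pipeline the adapter would be fine-tuned once and reused by id, with the retriever and prompt template tuned to the base model's expected format.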
Syllabus
Intro
Gradient Intro
Technical Overview
Open-Source Model
Gradient Workspace
Fine-tuning
Brief Explanation of LoRA (see the sketch after this syllabus)
Hyper-parameters
Testing fine-tuned model
Testing RAG pipeline
Outro
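The LoRA segment can be summarised with a short, self-contained PyTorch sketch (not code from the video): the pretrained weight matrix stays frozen while the update is learned as a low-rank product scaled by alpha/r, so only the two small adapter matrices are trained. The layer size, rank, and alpha below are illustrative values.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update: y = base(x) + (alpha/r) * x A^T B^T."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # down-projection
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))        # up-projection, zero-init
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the low-rank correction; at initialisation the correction is exactly zero.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Example: wrap a 4096x4096 projection; only the two small LoRA matrices are trainable.
layer = LoRALinear(nn.Linear(4096, 4096), r=8, alpha=16)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")

Because the low-rank update can be merged back into the frozen weight after training, a LoRA adapter adds no extra inference latency once it is baked into the base model.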
Taught by
Data Centric
Related Courses
How to Do Stable Diffusion LORA Training by Using Web UI on Different Models - Software Engineering Courses - SE Courses via YouTube
MicroPython & WiFi - Kevin McAleer via YouTube
Building a Wireless Community Sensor Network with LoRa - Hackaday via YouTube
ComfyUI - Node Based Stable Diffusion UI - Olivio Sarikas via YouTube
AI Masterclass for Everyone - Stable Diffusion, ControlNet, Depth Map, LORA, and VR - Hugh Hou via YouTube
