Custom RAG Pipelines and LLM Fine-Tuning - A Gradient Tutorial
Offered By: Data Centric via YouTube
Course Description
Overview
Explore the development of a custom RAG pipeline built around a fine-tuned 13B-parameter open-source model that mimics Yoda's speech style in this comprehensive tutorial video. Discover practical engineering tips for deploying fine-tuned models within RAG pipelines and learn how to fine-tune models efficiently using Gradient's platform. Gain insights into the technical overview, open-source model selection, Gradient workspace setup, the fine-tuning process, an explanation of LoRA, hyper-parameter optimization, and testing of both the fine-tuned model and the RAG pipeline. Access complementary resources, including a blog post, a GitHub repository, and additional learning materials, to deepen your understanding of AI, Data Science, and Large Language Models.
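To give a feel for the hosted fine-tuning workflow the tutorial covers, here is a minimal sketch using the gradientai Python SDK. It is an assumption-laden outline, not the video's actual code: the base-model slug, adapter name, sample data, prompt format, and the hard-coded "retrieved" context are all placeholders, and the SDK surface may have changed since the video was published.

```python
# Minimal sketch (all specifics are assumptions, not taken from the video):
# fine-tune a hosted adapter on a few Yoda-style samples with Gradient's SDK,
# then answer a question by stuffing retrieved context into the prompt, RAG-style.
# Requires GRADIENT_ACCESS_TOKEN and GRADIENT_WORKSPACE_ID in the environment.
from gradientai import Gradient

gradient = Gradient()

# Pick an open-source base model; the slug here is a placeholder.
base_model = gradient.get_base_model(base_model_slug="nous-hermes2")
adapter = base_model.create_model_adapter(name="yoda-style-adapter")

# A handful of hypothetical instruction/response pairs in the target style.
samples = [
    {"inputs": "### Instruction: What is RAG?\n\n### Response: Retrieval-augmented generation, it is. Fetch documents first, then answer, you must."},
    {"inputs": "### Instruction: Why fine-tune a model?\n\n### Response: Teach it a new voice, you do. Change its knowledge, you do not."},
]

for _ in range(3):  # a few passes over the small sample set
    adapter.fine_tune(samples=samples)

# RAG-style inference: prepend retrieved context (the retrieval step is omitted here).
retrieved_context = "LoRA trains small low-rank matrices instead of all model weights."
query = (
    "### Instruction: Using this context, explain LoRA.\n"
    f"Context: {retrieved_context}\n\n### Response:"
)
print(adapter.complete(query=query, max_generated_token_count=120).generated_output)

adapter.delete()
gradient.close()
```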
Syllabus
Intro
Gradient Intro
Technical Overview
Open-Source Model
Gradient Workspace
Fine-tuning
Brief Explanation of LoRA (a worked sketch follows this syllabus)
Hyper-parameters
Testing fine-tuned model
Testing RAG pipeline
Outro
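To make the LoRA entry above concrete: the sketch below is a generic numeric illustration of the low-rank update idea, not code from the video or from Gradient's platform; the dimensions, rank, and scaling value are assumptions chosen purely for illustration.

```python
# Minimal numeric illustration of LoRA (all values are illustrative assumptions):
# instead of updating a full d x k weight matrix, train two small matrices
# B (d x r) and A (r x k) with r << min(d, k), and add their scaled product.
import numpy as np

d, k, r = 4096, 4096, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # zero-initialized, so W_adapted == W at start
alpha = 16                               # LoRA scaling hyper-parameter

W_adapted = W + (alpha / r) * (B @ A)    # effective weight used at inference

full_params = d * k
lora_params = d * r + r * k
print(f"full update: {full_params:,} params; LoRA update: {lora_params:,} params "
      f"({100 * lora_params / full_params:.2f}% of the full matrix)")
```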
Taught by
Data Centric
Related Courses
Build a Natural Language Processing Solution with Microsoft Azure - Pluralsight
Challenges and Solutions in Industry Scale Data and AI Systems - Yangqing Jia - Association for Computing Machinery (ACM) via YouTube
An AI Engineer Guide to Comet Model Registry Platform - Prodramp via YouTube
An AI Engineer Guide to Model Monitoring with Comet ML Platform - Prodramp via YouTube
An AI Engineer's Guide to Machine Learning with Keras - Prodramp via YouTube