Custom RAG Pipelines and LLM Fine-Tuning - A Gradient Tutorial
Offered By: Data Centric via YouTube
Course Description
Overview
Explore the development of a custom RAG pipeline built around a fine-tuned 13B-parameter open-source model that mimics Yoda's speech style in this comprehensive tutorial video. Discover practical engineering tips for deploying fine-tuned models within RAG pipelines, and learn efficient model fine-tuning techniques using Gradient's platform. The video covers a technical overview, open-source model selection, Gradient workspace setup, the fine-tuning process, an explanation of LoRA, hyper-parameter optimization, and testing of both the fine-tuned model and the RAG pipeline. Complementary resources, including a blog post, a GitHub repository, and additional learning materials, are available to deepen your understanding of AI, Data Science, and Large Language Models.
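The retrieve-then-generate flow described above can be sketched in a few lines. This is a minimal illustration, not the tutorial's actual code: the similarity function is a bag-of-words stand-in for real embeddings, and `generate` only assembles the prompt that would be sent to the fine-tuned model (e.g. via Gradient's API); all names and documents here are hypothetical.

```python
# Minimal sketch of a RAG pipeline: retrieve relevant context, then
# hand it to the model. Retrieval and generation are both stand-ins.
from collections import Counter
import math

# Toy document store; a real pipeline would use a vector database.
DOCUMENTS = [
    "Gradient lets you fine-tune open-source models in a managed workspace.",
    "LoRA trains small low-rank adapter matrices instead of all model weights.",
    "A RAG pipeline retrieves relevant context and passes it to the model.",
]

def tokenize(text):
    return [w.strip(".,?!").lower() for w in text.split()]

def similarity(query, doc):
    # Bag-of-words cosine similarity as a stand-in for dense embeddings.
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    dot = sum(q[w] * d[w] for w in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    # Return the k documents most similar to the query.
    return sorted(DOCUMENTS, key=lambda doc: similarity(query, doc), reverse=True)[:k]

def generate(query, context):
    # Placeholder for a call to the fine-tuned (Yoda-style) model;
    # here it just assembles the prompt the model would receive.
    return f"Context: {context[0]}\nQuestion: {query}\nAnswer, you must."

query = "What does LoRA train?"
prompt = generate(query, retrieve(query))
```

In a real deployment the retrieval step and the fine-tuned model call would be swapped in behind the same two functions, which is the engineering seam the tutorial focuses on.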
Syllabus
Intro
Gradient Intro
Technical Overview
Open-Source Model
Gradient Workspace
Fine-tuning
Brief Explanation of LoRA
Hyper-parameters
Testing fine-tuned model
Testing RAG pipeline
Outro
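The LoRA segment of the syllabus rests on a simple parameter count: instead of updating a full d_out × d_in weight matrix, LoRA trains two low-rank factors of shapes (d_out × r) and (r × d_in). A back-of-envelope calculation (the 4096×4096 shape and rank 8 are illustrative choices, not values from the video) shows why this is so much cheaper:

```python
# Why LoRA is parameter-efficient: compare trainable parameter counts
# for full fine-tuning vs. a rank-r adapter on one weight matrix.
d_out, d_in, rank = 4096, 4096, 8  # illustrative shapes; rank is a hyper-parameter

full_params = d_out * d_in           # every weight is trainable
lora_params = rank * (d_out + d_in)  # only the two low-rank factors

print(f"full fine-tune : {full_params:,} trainable parameters")
print(f"LoRA (r={rank})   : {lora_params:,} trainable parameters")
print(f"reduction      : {full_params // lora_params}x fewer")
```

The rank r is one of the hyper-parameters covered in the tutorial's optimization section: higher ranks give the adapter more capacity at the cost of more trainable parameters.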
Taught by
Data Centric
Related Courses
TensorFlow: Working with NLP - LinkedIn Learning
Introduction to Video Editing - Video Editing Tutorials - Great Learning via YouTube
HuggingFace Crash Course - Sentiment Analysis, Model Hub, Fine Tuning - Python Engineer via YouTube
GPT3 and Finetuning the Core Objective Functions - A Deep Dive - David Shapiro ~ AI via YouTube
How to Build a Q&A AI in Python - Open-Domain Question-Answering - James Briggs via YouTube