YoVDO

Custom RAG Pipelines and LLM Fine-Tuning - A Gradient Tutorial

Offered By: Data Centric via YouTube

Tags

Retrieval Augmented Generation (RAG) Courses, WebAssembly Courses, LoRA (Low-Rank Adaptation) Courses, Hyperparameters Courses, AI Engineering Courses, Fine-Tuning Courses

Course Description

Overview

Explore the development of a custom RAG pipeline built around a fine-tuned 13B-parameter open-source model that mimics Yoda's speech style in this comprehensive tutorial video. Discover practical engineering tips for deploying fine-tuned models within RAG pipelines, and learn efficient fine-tuning techniques using Gradient's platform. Gain insights into the technical overview, open-source model selection, Gradient workspace setup, the fine-tuning process, an explanation of LoRA, hyper-parameter optimization, and testing of both the fine-tuned model and the RAG pipeline. Complementary resources, including a blog post, a GitHub repository, and additional learning materials, are available to deepen your understanding of AI, Data Science, and Large Language Models.
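To make the pipeline described above concrete, here is a minimal sketch of the RAG pattern: retrieve relevant documents, then assemble them with the user's question into a prompt for the model. This is an illustrative toy (naive keyword-overlap retrieval, no real LLM call); the tutorial itself uses a fine-tuned model served via Gradient's platform, which is not reproduced here.

```python
# Toy RAG sketch: retrieval + prompt assembly.
# The final prompt would be sent to the fine-tuned model (not shown).

def retrieve(query, documents, top_k=2):
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, context_docs):
    """Combine retrieved context and the question into one prompt."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Yoda trained Luke Skywalker on Dagobah.",
    "LoRA adapts large models with low-rank update matrices.",
    "RAG pipelines ground model answers in retrieved documents.",
]
query = "who trained luke skywalker"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)
```

A production pipeline would swap the keyword ranker for an embedding-based vector search and pass the prompt to the fine-tuned model's completion endpoint.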

Syllabus

Intro
Gradient Intro
Technical Overview
Open-Source Model
Gradient Workspace
Fine-tuning
Brief Explanation of LoRA
Hyper-parameters
Testing fine-tuned model
Testing RAG pipeline
Outro
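The LoRA and hyper-parameter items in the syllabus can be illustrated with a quick parameter-count check. This is a sketch: the layer dimensions and rank below are illustrative choices, not values from the video.

```python
# LoRA (Low-Rank Adaptation) replaces a full weight update dW (d x k)
# with two small matrices B (d x r) and A (r x k), where r << min(d, k).
# Trainable parameters per layer drop from d*k to r*(d + k).

d, k = 4096, 4096   # illustrative layer dimensions for a large model
r = 8               # LoRA rank, a key fine-tuning hyper-parameter

full_update = d * k          # parameters in a full fine-tune of this layer
lora_update = r * (d + k)    # parameters in the LoRA adapter

print(f"full fine-tune params/layer: {full_update:,}")   # 16,777,216
print(f"LoRA params/layer (r={r}):   {lora_update:,}")   # 65,536
print(f"reduction: {full_update / lora_update:.0f}x")    # 256x
```

This is why LoRA makes fine-tuning a 13B-parameter model feasible on modest hardware: only the small adapter matrices are trained, while the base weights stay frozen.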


Taught by

Data Centric

Related Courses

Introduction to WebAssembly
Linux Foundation via edX
WebAssembly Components: From Cloud to Edge
Linux Foundation via edX
Chrome University
Google via YouTube
Blazor: Getting Started
LinkedIn Learning
Tech Sense
LinkedIn Learning