Fine-Tuning LLM Models - Generative AI Course
Offered By: freeCodeCamp
Course Description
Overview
Dive into the world of fine-tuning Large Language Models (LLMs) in this comprehensive 2-hour 37-minute course. Master techniques such as LoRA and QLoRA, explore quantization, and fine-tune Llama 2 and Google Gemma models using platforms such as Gradient, gaining both theoretical knowledge and practical skills. Topics include quantization intuition, in-depth LoRA and QLoRA concepts, fine-tuning with Llama 2 and Google Gemma models, 1-bit LLM insights, building no-code LLM pipelines, and customizing models with your own data. Accompanying code is available on GitHub so you can follow along and apply the techniques to real-world scenarios.
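To make the LoRA and QLoRA terminology above concrete, here is a minimal sketch (not taken from the course materials) of the usual pattern for loading a base model in 4-bit precision and attaching LoRA adapters with the Hugging Face transformers, peft, and bitsandbytes libraries. The model name and LoRA hyperparameters below are illustrative assumptions, not values from the course.

```python
# Minimal QLoRA-style setup sketch: 4-bit quantized base model + LoRA adapters.
# Assumes transformers, peft, and bitsandbytes are installed and a GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # assumed base model; gated, requires access approval

# 4-bit NF4 quantization config -- the "Q" in QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters on the attention projections -- the "LoRA" part
lora_config = LoraConfig(
    r=16,                                # adapter rank (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
```

From here, only the adapter weights are updated during training while the quantized base model stays frozen, which is what makes fine-tuning 7B-parameter models on a single GPU feasible.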
Syllabus
⌨️ Introduction
⌨️ Quantization Intuition
⌨️ LoRA and QLoRA In-Depth Intuition
⌨️ Fine-Tuning with Llama 2
⌨️ 1-Bit LLM In-Depth Intuition
⌨️ Fine-Tuning with Google Gemma Models
⌨️ Building LLM Pipelines with No Code
⌨️ Fine-Tuning with Your Own Custom Data
Taught by
freeCodeCamp.org
Related Courses
Fine-Tuning LLM with QLoRA on Single GPU - Training Falcon-7b on ChatBot Support FAQ Dataset (Venelin Valkov via YouTube)
Deploy LLM to Production on Single GPU - REST API for Falcon 7B with QLoRA on Inference Endpoints (Venelin Valkov via YouTube)
Building an LLM Fine-Tuning Dataset - From Reddit Comments to QLoRA Training (sentdex via YouTube)
Generative AI: Fine-Tuning LLM Models Crash Course (Krish Naik via YouTube)
Aligning Open Language Models - Stanford CS25 Lecture (Stanford University via YouTube)