Generative AI: Fine-Tuning LLM Models Crash Course
Offered By: Krish Naik via YouTube
Course Description
Overview
Dive into a comprehensive crash course on generative AI fine-tuning techniques for Large Language Models (LLMs). Explore quantization, QLoRA, LoRA, and 1-bit LLM concepts through theoretical and practical insights. Learn to fine-tune popular models like Llama 2 and Google Gemma, build no-code LLM pipelines, and customize models with your own data. Gain hands-on experience with provided code examples and in-depth explanations of advanced techniques in natural language processing and machine learning.
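As a rough illustration of the quantization-plus-adapter (QLoRA) workflow the course walks through, the sketch below loads a base model in 4-bit precision and attaches LoRA adapters using the Hugging Face transformers and peft libraries. The model name and hyperparameters are illustrative placeholders, not values taken from the course.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM works

# 4-bit NF4 quantization keeps the frozen base weights small (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA trains small low-rank adapter matrices instead of the full weight matrices
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of parameters are trainable

From here, the quantized base model with its adapters can be trained with a standard Hugging Face training loop; only the adapter weights are updated.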
Syllabus
Introduction
Quantization Intuition
LoRA and QLoRA In-Depth Intuition
Fine-Tuning with Llama 2
1-Bit LLM In-Depth Intuition
Fine-Tuning with Google Gemma Models
Building LLM Pipelines with No Code
Fine-Tuning with Your Own Custom Data (see the data-preparation sketch below)
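As a hint of what the custom-data fine-tuning step involves, the sketch below loads an instruction/response dataset with the Hugging Face datasets library and flattens each record into a single training prompt; the file name and field names here are hypothetical.

from datasets import load_dataset

# Hypothetical JSONL file with "instruction" and "response" fields
dataset = load_dataset("json", data_files="my_custom_data.jsonl", split="train")

def to_prompt(example):
    # Flatten each record into one instruction-style training string
    return {
        "text": f"### Instruction:\n{example['instruction']}\n\n"
                f"### Response:\n{example['response']}"
    }

dataset = dataset.map(to_prompt)
print(dataset[0]["text"])  # inspect one formatted training example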
Taught by
Krish Naik
Related Courses
Fine-Tuning LLM with QLoRA on Single GPU - Training Falcon-7b on ChatBot Support FAQ Dataset
Venelin Valkov via YouTube
Deploy LLM to Production on Single GPU - REST API for Falcon 7B with QLoRA on Inference Endpoints
Venelin Valkov via YouTube
Building an LLM Fine-Tuning Dataset - From Reddit Comments to QLoRA Training
sentdex via YouTube
Aligning Open Language Models - Stanford CS25 Lecture
Stanford University via YouTube
Fine-Tuning LLM Models - Generative AI Course
freeCodeCamp