MLOps: Fine-tuning Mistral 7B with PEFT, QLoRA, and MLflow
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Learn how to fine-tune the Mistral 7B language model using PEFT (Parameter-Efficient Fine-Tuning) and QLoRA (quantized low-rank adaptation), integrated with MLflow for experiment tracking and model management. This 28-minute video tutorial demonstrates how to improve a large language model's performance on a target task while keeping GPU memory and compute requirements low. Explore the practical implementation through the provided code examples and gain insight into MLOps practices for managing and deploying fine-tuned models.
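The sketch below illustrates the general shape of such a workflow using the Hugging Face transformers, peft, bitsandbytes, and trl libraries together with MLflow tracking. The checkpoint name, dataset, LoRA hyperparameters, and experiment name are assumptions for demonstration, not the exact configuration used in the video, and the SFTTrainer arguments follow the older trl 0.7.x API (newer trl releases move dataset_text_field and max_seq_length into SFTConfig).

# Minimal sketch: QLoRA fine-tuning of Mistral 7B with PEFT, tracked in MLflow.
# Checkpoint, dataset, and hyperparameters below are illustrative assumptions.
import mlflow
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from trl import SFTTrainer  # trl 0.7.x-style constructor arguments

model_id = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint

# 4-bit quantization (the "Q" in QLoRA) keeps the frozen base weights small.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters: only these low-rank matrices are trained (the PEFT part).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Assumed instruction-tuning dataset with a "text" column.
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

training_args = TrainingArguments(
    output_dir="mistral-7b-qlora",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    num_train_epochs=1,
    logging_steps=10,
    report_to="mlflow",  # transformers' MLflow callback logs params and metrics
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=512,
    tokenizer=tokenizer,
)

mlflow.set_experiment("mistral-7b-qlora")
with mlflow.start_run():
    trainer.train()
    # Save only the LoRA adapter and attach it to the MLflow run as an artifact.
    trainer.model.save_pretrained("mistral-7b-qlora-adapter")
    mlflow.log_artifacts("mistral-7b-qlora-adapter", artifact_path="lora_adapter")

Because only the LoRA adapter weights are trained, the artifact logged to MLflow stays small compared with a full 7B-parameter checkpoint, which makes comparing runs and later deploying the fine-tuned model considerably lighter.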
Syllabus
MLOps MLflow: Fine-tune Mistral 7B, PEFT, QLoRA and MLflow
Taught by
The Machine Learning Engineer
Related Courses
Fine-Tuning LLM with QLoRA on Single GPU - Training Falcon-7b on ChatBot Support FAQ Dataset (Venelin Valkov via YouTube)
Deploy LLM to Production on Single GPU - REST API for Falcon 7B with QLoRA on Inference Endpoints (Venelin Valkov via YouTube)
Building an LLM Fine-Tuning Dataset - From Reddit Comments to QLoRA Training (sentdex via YouTube)
Generative AI: Fine-Tuning LLM Models Crash Course (Krish Naik via YouTube)
Aligning Open Language Models - Stanford CS25 Lecture (Stanford University via YouTube)