
MLOps: Fine-tuning Mistral 7B with PEFT, QLora, and MLFlow

Offered By: The Machine Learning Engineer via YouTube

Tags

MLOps Courses, Machine Learning Courses, Transformers Courses, MLFlow Courses, Fine-Tuning Courses, PEFT Courses, QLoRA Courses, Mistral 7B Courses

Course Description

Overview

Learn how to fine-tune the Mistral 7B language model using PEFT (Parameter-Efficient Fine-Tuning) and QLoRA, integrated with MLflow for experiment tracking and model management. This 28-minute video tutorial demonstrates how to adapt a large language model to a new task while keeping memory and compute requirements low. Follow the provided code examples for the practical implementation, and pick up MLOps practices for managing and deploying fine-tuned models.
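
For orientation, the sketch below outlines the kind of workflow the video covers, assuming the Hugging Face transformers, peft, and bitsandbytes libraries together with mlflow; the checkpoint name, LoRA hyperparameters, dataset, and experiment name are illustrative assumptions, not details taken from the course.

    # Minimal QLoRA + PEFT + MLflow sketch (illustrative, not the course's exact code)
    import torch
    import mlflow
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    model_id = "mistralai/Mistral-7B-v0.1"  # assumed base checkpoint

    # QLoRA: load the frozen base model in 4-bit NF4 quantization to cut memory use
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=bnb_config, device_map="auto"
    )
    model = prepare_model_for_kbit_training(model)

    # PEFT: attach small trainable LoRA adapters instead of updating all 7B weights
    lora_config = LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,          # assumed hyperparameters
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)

    # MLflow: record hyperparameters and the resulting adapter as an experiment run
    mlflow.set_experiment("mistral7b-qlora")             # assumed experiment name
    with mlflow.start_run():
        mlflow.log_params({"r": 16, "lora_alpha": 32, "quant": "nf4"})
        # ... training loop (e.g. transformers.Trainer or trl.SFTTrainer) goes here ...
        model.save_pretrained("adapter")
        mlflow.log_artifacts("adapter", artifact_path="lora_adapter")

Logging only the LoRA adapter keeps MLflow artifacts small, since the quantized base model can be re-downloaded at deployment time and combined with the adapter.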

Syllabus

MLOps MLflow: Fine-tune Mistral 7B, PEFT, QLoRA and MLflow


Taught by

The Machine Learning Engineer

Related Courses

Zephyr 7B Beta - Comparing a 7B LLM with 70B Models
Venelin Valkov via YouTube
Fine-Tuning a Local Mistral 7B Model - Step-by-Step Guide
All About AI via YouTube
Personalizando LLMs: Guía para Fine-Tuning Local de Modelos Open Source en Español
PyCon US via YouTube
Full Fine-Tuning vs LoRA and QLoRA - Comparison and Best Practices
Trelis Research via YouTube
Mistral 7B: Architecture, Evaluation, and Advanced Techniques
Trelis Research via YouTube