MLOps: Fine-tuning Mistral 7B with PEFT, QLoRA, and MLflow
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Learn how to fine-tune the Mistral 7B language model using PEFT (Parameter-Efficient Fine-Tuning) and QLoRA, integrated with MLflow for experiment tracking and model management. This 28-minute video tutorial demonstrates how to improve a large language model's performance on a downstream task while keeping GPU memory and compute requirements low. It walks through the provided code examples and covers MLOps practices for tracking, managing, and deploying fine-tuned models.
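To make the workflow concrete, here is a minimal sketch of QLoRA fine-tuning with PEFT adapters and MLflow tracking, assuming the Hugging Face transformers, peft, bitsandbytes, datasets, and trl packages. The dataset name, LoRA hyperparameters, and training settings are illustrative and are not taken from the video; some SFTTrainer argument names (e.g. dataset_text_field, max_seq_length) follow older trl releases and may differ in newer versions.

```python
import mlflow
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from trl import SFTTrainer

model_id = "mistralai/Mistral-7B-v0.1"

# QLoRA: load the frozen base model in 4-bit NF4 precision to reduce GPU memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# PEFT: train only small low-rank adapter matrices on the attention projections.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Illustrative instruction-tuning dataset; replace with your own.
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

training_args = TrainingArguments(
    output_dir="mistral-7b-qlora",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    num_train_epochs=1,
    logging_steps=10,
    report_to="mlflow",  # Trainer streams metrics to the active MLflow run
)

# MLflow: record hyperparameters and training metrics for the experiment.
mlflow.set_experiment("mistral7b-qlora-peft")
with mlflow.start_run():
    mlflow.log_params({"base_model": model_id, "lora_r": 16, "lora_alpha": 32})
    trainer = SFTTrainer(
        model=model,
        train_dataset=dataset,
        args=training_args,
        tokenizer=tokenizer,
        dataset_text_field="text",
        max_seq_length=512,
    )
    trainer.train()
    trainer.save_model("mistral-7b-qlora-adapter")  # saves only the LoRA adapter weights
```

Because only the LoRA adapter weights are trained and saved, the resulting artifact is small enough to log to the MLflow run and later merge back into the quantized base model for inference.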
Syllabus
MLOps MLflow: Fine-tune Mistral 7B, PEFT, QLoRA and MLflow
Taught by
The Machine Learning Engineer
Related Courses
Predicción del fraude bancario con autoML y Pycaret - Coursera Project Network via Coursera
Clasificación de datos de Satélites con autoML y Pycaret - Coursera Project Network via Coursera
Regresión (ML) en la vida real con PyCaret - Coursera Project Network via Coursera
ML Pipelines on Google Cloud - Google Cloud via Coursera
ML Pipelines on Google Cloud - Pluralsight