MLOps: PEFT Dialog Summarization with Flan T5 Using LoRA
Offered By: The Machine Learning Engineer via YouTube
Course Description
Overview
Explore the implementation of Parameter-Efficient Fine-Tuning (PEFT) using Low-Rank Adaptation (LoRA) to fine-tune a Flan-T5 model for dialog summarization. This 24-minute tutorial walks through the process, demonstrating how PEFT techniques adapt large language models to specific tasks while training only a small fraction of their parameters. Access the accompanying Jupyter notebook on GitHub to follow along and gain hands-on experience applying these machine learning and natural language processing techniques, and learn how to optimize model performance while minimizing computational resources.
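For orientation before watching, the sketch below shows how LoRA adapters are typically attached to Flan-T5 with the Hugging Face peft and transformers libraries. This is a minimal illustration of the general approach, not the tutorial's actual notebook: the dataset choice (knkarthick/dialogsum), the prompt format, and all hyperparameters here are assumptions for demonstration purposes.

```python
# Minimal LoRA fine-tuning sketch for dialog summarization with Flan-T5.
# Assumes: transformers, peft, and datasets are installed; dataset and
# hyperparameters are illustrative, not taken from the video.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    DataCollatorForSeq2Seq,
)
from peft import LoraConfig, get_peft_model, TaskType
from datasets import load_dataset

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# LoRA freezes the base model and trains small low-rank adapter matrices
# injected into the attention projections, so far fewer weights are updated.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                        # rank of the low-rank update matrices (assumed)
    lora_alpha=32,              # scaling factor for the adapter output (assumed)
    lora_dropout=0.05,
    target_modules=["q", "v"],  # query/value projections in T5 attention
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights

# Illustrative dialog-summarization dataset (assumption, not from the video).
dataset = load_dataset("knkarthick/dialogsum")

def preprocess(batch):
    # Prompt format is an assumption; Flan-T5 responds well to instructions.
    inputs = ["Summarize the following dialogue:\n" + d for d in batch["dialogue"]]
    model_inputs = tokenizer(inputs, max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="flan-t5-dialogsum-lora",
        per_device_train_batch_size=8,
        learning_rate=1e-3,     # LoRA tolerates higher LRs than full fine-tuning
        num_train_epochs=3,
    ),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
model.save_pretrained("flan-t5-dialogsum-lora")  # saves only the adapter weights
```

Because only the adapter weights are saved, the resulting checkpoint is a few megabytes rather than the full model size, which is the main practical payoff of the PEFT approach the tutorial covers.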
Syllabus
MLOps: PEFT Dialog Summarization Flan T5 (Lora) #datascience #machinelearning
Taught by
The Machine Learning Engineer
Related Courses
MLOps: OpenVino Quantized Pipeline for Grammatical Error Correction - The Machine Learning Engineer via YouTube
Fine-tuning Flan-T5 for Sequence-to-Sequence Classification with MLFlow - The Machine Learning Engineer via YouTube
MLOps MLFlow: Fine-tuning Flan-T5 for Sequence-to-Sequence Classification in Spanish - The Machine Learning Engineer via YouTube
Fine-tuning Flan-T5 for Text Classification with MLFlow - The Machine Learning Engineer via YouTube
Fine-tuning Flan-T5 for Text Classification Using MLFlow - Spanish Tutorial - The Machine Learning Engineer via YouTube