
MLOps: PEFT Dialog Summarization with Flan T5 Using LoRA

Offered By: The Machine Learning Engineer via YouTube

Tags

Data Science, Machine Learning, MLOps, LoRA (Low-Rank Adaptation), Transformers, Fine-Tuning, PEFT, Flan-T5

Course Description

Overview

Explore the implementation of Parameter-Efficient Fine-Tuning (PEFT) using Low-Rank Adaptation (LoRA) to fine-tune a Flan-T5 model for dialog summarization. This 24-minute tutorial walks through the process, demonstrating how PEFT techniques adapt large language models to specific tasks by training only a small set of additional parameters. Access the accompanying Jupyter notebook on GitHub to follow along and gain hands-on experience applying these machine learning and natural language processing techniques at the intersection of MLOps and data science, and learn to maintain model performance while minimizing computational resources.
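
The sketch below illustrates the kind of LoRA setup the tutorial covers, using the Hugging Face transformers and peft libraries. The checkpoint name, LoRA hyperparameters, and example dialog are illustrative assumptions, not necessarily the ones used in the video or its notebook.

```python
# Minimal sketch: wrap a Flan-T5 checkpoint with LoRA adapters via peft.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

model_name = "google/flan-t5-base"  # assumed checkpoint, for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# LoRA configuration: the base Flan-T5 weights stay frozen and only the
# small low-rank adapter matrices are trained.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                       # rank of the low-rank update matrices (assumed)
    lora_alpha=32,              # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q", "v"],  # attention query/value projections in T5
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # prints the small trainable fraction

# Example inference call on a short dialog (adapters untrained here);
# in practice the wrapped model would first be fine-tuned on a dialog
# summarization dataset.
dialog = "Person A: Are we still meeting at 5? Person B: Yes, see you at the cafe."
inputs = tokenizer("Summarize the following dialog:\n" + dialog, return_tensors="pt")
summary_ids = peft_model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

Because only the adapter weights are updated, fine-tuning requires far less memory than full fine-tuning, and the resulting adapters can be saved and shared separately from the base model.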

Syllabus

MLOps: PEFT Dialog Summarization Flan T5 (Lora) #datascience #machinelearning


Taught by

The Machine Learning Engineer

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent