YoVDO

How to Fine-Tune LLMs for Specialized Enterprise Tasks - Curating Data and Emerging Methods

Offered By: Snorkel AI via YouTube

Tags

Fine-Tuning Courses
Model Evaluation Courses
Snorkel AI Courses

Course Description

Overview

Discover how to fine-tune Large Language Models (LLMs) for specialized enterprise tasks in this 51-minute webinar by Snorkel AI experts. Learn about emerging fine-tuning and alignment methods like DPO, ORPO, and SPIN, and explore techniques for rapidly curating high-quality instruction and preference data. Gain insights into evaluating LLM accuracy for production deployment and see a practical demonstration of the fine-tuning, alignment, and evaluation process. Understand the importance of domain-specific knowledge and high-quality training data in transforming foundation models like Meta's Llama 3 into specialized LLMs. Explore topics such as data considerations, the development process, and creating effective data slices for model training. Enhance your understanding of enterprise AI and LLM fine-tuning through this comprehensive webinar.
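The description above names DPO among the emerging alignment methods and highlights curated instruction and preference data. As a rough illustration of what preference-based fine-tuning can look like in practice, here is a minimal sketch using Hugging Face TRL's DPOTrainer; it is not the code demonstrated in the webinar, and the model name, dataset contents, and hyperparameters are assumptions for demonstration only.

```python
# Minimal sketch of preference-based fine-tuning with DPO via Hugging Face TRL.
# Illustrative only, not the webinar's code; model, data, and settings are assumed.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed base model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Curated preference data: each row pairs a prompt with a preferred ("chosen")
# and a dispreferred ("rejected") response.
preference_data = Dataset.from_dict({
    "prompt": ["Summarize the claim adjustment policy for enterprise accounts."],
    "chosen": ["Enterprise claims are adjusted within 10 business days, subject to ..."],
    "rejected": ["I'm not sure, maybe check the website."],
})

training_args = DPOConfig(
    output_dir="llama3-dpo-specialized",
    beta=0.1,                      # strength of the preference penalty
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=preference_data,
    processing_class=tokenizer,    # older TRL releases use tokenizer= instead
)
trainer.train()
```

In real enterprise use the preference pairs would come from domain experts or programmatic curation rather than a handful of hard-coded strings, which is the data-curation theme the webinar focuses on.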

Syllabus

Introduction
When and why to fine-tune LLMs
Data considerations
Recent methods
Training data
Outline
Mission
Development Process
Domain Expert
Quality Model
Quality Model Example
Data Slices
Writing Data Slices
QA Session
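
The "Data Slices" and "Writing Data Slices" segments above cover targeting specific subsets of training and evaluation data. As a hedged sketch of the general idea (not the webinar's actual code), slices can be expressed as simple Python functions in the style of Snorkel's open-source slicing functions; the DataFrame columns and slice conditions below are assumptions for illustration.

```python
# Illustrative data-slice sketch in the style of Snorkel's open-source library.
# Not taken from the webinar; columns and conditions are assumed for demonstration.
import pandas as pd
from snorkel.slicing import PandasSFApplier, slicing_function

@slicing_function()
def short_prompt(x):
    # Flag examples whose prompt is very short and likely under-specified.
    return len(x.text.split()) < 5

@slicing_function()
def mentions_pricing(x):
    # Flag examples about pricing, a domain the specialized model must handle well.
    return "pricing" in x.text.lower() or "price" in x.text.lower()

df = pd.DataFrame({
    "text": [
        "What is the pricing tier for enterprise accounts?",
        "Reset password",
        "Summarize the onboarding checklist for new vendors.",
    ]
})

# Apply the slicing functions; the result is a per-example membership matrix
# that can be used to evaluate, or up-weight, these slices during fine-tuning.
applier = PandasSFApplier([short_prompt, mentions_pricing])
slice_membership = applier.apply(df)
print(slice_membership)
```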


Taught by

Snorkel AI

Related Courses

Solving the Last Mile Problem of Foundation Models with Data-Centric AI
MLOps.community via YouTube
Foundational Models in Enterprise AI - Challenges and Opportunities
MLOps.community via YouTube
Knowledge Distillation Demystified: Techniques and Applications
Snorkel AI via YouTube
Model Distillation - From Large Models to Efficient Enterprise Solutions
Snorkel AI via YouTube
Curate Training Data via Labeling Functions - 10 to 100x Faster
Snorkel AI via YouTube