How to Fine-Tune LLMs for Specialized Enterprise Tasks - Curating Data and Emerging Methods
Offered By: Snorkel AI via YouTube
Course Description
Overview
Discover how to fine-tune Large Language Models (LLMs) for specialized enterprise tasks in this 51-minute webinar by Snorkel AI experts. Learn about emerging fine-tuning and alignment methods like DPO, ORPO, and SPIN, and explore techniques for rapidly curating high-quality instruction and preference data. Gain insights into evaluating LLM accuracy for production deployment and see a practical demonstration of the fine-tuning, alignment, and evaluation process. Understand the importance of domain-specific knowledge and high-quality training data in transforming foundation models like Meta's Llama 3 into specialized LLMs. Explore topics such as data considerations, the development process, and creating effective data slices for model training. Enhance your understanding of enterprise AI and LLM fine-tuning through this comprehensive webinar.
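The overview above mentions preference-based alignment methods such as DPO. As a rough illustration only (not material from the webinar itself), the Python sketch below computes the DPO objective for a batch of preference pairs, assuming per-sequence log-probabilities from the policy and a frozen reference model are already available; the tensor names and the beta value are illustrative assumptions.

# Minimal sketch of the DPO (Direct Preference Optimization) objective.
# Assumes per-sequence log-probabilities were computed elsewhere for the
# policy model and a frozen reference model; beta is an illustrative value.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss for a batch of (prompt, chosen, rejected) preference pairs."""
    # Log-ratio of policy to reference model for the preferred and dispreferred responses.
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    # Push the policy to widen the margin between chosen and rejected responses.
    logits = beta * (chosen_rewards - rejected_rewards)
    return -F.logsigmoid(logits).mean()

# Toy usage with random log-probabilities for a batch of 4 preference pairs.
torch.manual_seed(0)
batch = torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4)
print(dpo_loss(*batch).item())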
Syllabus
Introduction
When and why to fine-tune LLMs
Data considerations
Recent methods
Training data
Outline
Mission
Development Process
Domain Expert
Quality Model
Quality Model Example
Data Slices
Writing Data Slices (see the sketch after this syllabus)
QA Session
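The "Data Slices" and "Writing Data Slices" items above refer to partitioning training or evaluation data into meaningful subsets so model quality can be measured and improved slice by slice. As a minimal sketch, assuming Snorkel-style slicing functions written as plain Python predicates (the slice names, example records, and labels below are hypothetical, not taken from the webinar):

# Minimal sketch of data slices: tag subsets of an evaluation set with plain
# Python predicates and report per-slice accuracy. All names are hypothetical.
from collections import defaultdict

def slice_long_input(example: dict) -> bool:
    """Examples with long prompts, which often degrade model quality."""
    return len(example["prompt"].split()) > 50

def slice_mentions_pricing(example: dict) -> bool:
    """Domain-specific subset: questions about pricing terms."""
    return "price" in example["prompt"].lower()

SLICES = {"long_input": slice_long_input, "mentions_pricing": slice_mentions_pricing}

def per_slice_accuracy(examples: list[dict]) -> dict[str, float]:
    """Accuracy restricted to each slice, so weak spots are visible."""
    hits, totals = defaultdict(int), defaultdict(int)
    for ex in examples:
        correct = ex["prediction"] == ex["label"]
        for name, belongs_to_slice in SLICES.items():
            if belongs_to_slice(ex):
                totals[name] += 1
                hits[name] += int(correct)
    return {name: hits[name] / totals[name] for name in totals}

# Toy usage with two hypothetical evaluation records.
data = [
    {"prompt": "What is the price of the enterprise plan?", "prediction": "B", "label": "B"},
    {"prompt": "word " * 60 + "summarize this contract", "prediction": "A", "label": "B"},
]
print(per_slice_accuracy(data))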
Taught by
Snorkel AI
Related Courses
TensorFlow: Working with NLP (LinkedIn Learning)
Introduction to Video Editing - Video Editing Tutorials (Great Learning via YouTube)
HuggingFace Crash Course - Sentiment Analysis, Model Hub, Fine Tuning (Python Engineer via YouTube)
GPT3 and Finetuning the Core Objective Functions - A Deep Dive (David Shapiro ~ AI via YouTube)
How to Build a Q&A AI in Python - Open-Domain Question-Answering (James Briggs via YouTube)