Fine-Tuning LLMs: Best Practices and When to Go Small - Lecture 124
Offered By: MLOps.community via YouTube
Syllabus
- Introduction to Mark Kim-Huang
- Join the LLMs in Production Conference Part 2 on June 15-16!
- Fine-Tuning LLMs: Best Practices and When to Go Small
- Model approaches
- You might think that you could just use OpenAI, but only older base models are available
- Why custom LLMs over closed-source models?
- Small models work well for simple tasks
- Types of Fine-Tuning
- Strategies for improving fine-tuning performance
- Challenges
- Define your task
- Task framework
- Defining tasks
- Task clustering diversifies training data and improves out-of-domain performance
- Prompt engineering
- Constructing a prompt
- Synthesize more data
- Constructing a prompt
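The syllabus covers constructing prompts for fine-tuning data. As a rough illustration of what such a builder looks like, here is a minimal sketch of an instruction-style template; the section headers and field names are assumptions for illustration, not the talk's actual format:

```python
def build_prompt(instruction: str, context: str = "") -> str:
    """Assemble an instruction-style prompt for a fine-tuning example.

    The "### Instruction / Context / Response" headers are illustrative;
    the exact template used in the talk is not shown in this listing.
    """
    parts = ["### Instruction:", instruction]
    if context:
        parts += ["### Context:", context]
    parts.append("### Response:")
    return "\n".join(parts)

print(build_prompt("Summarize the ticket.", "Customer reports a login failure."))
```

Keeping the template in one function makes it easy to regenerate the whole training set consistently if the format changes.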
- Increase fine-tuning efficiency with LoRA
- Naive data parallelism with mixed precision is inefficient
- Further reading on mixed precision
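The inefficiency of naive data parallelism with mixed precision comes from every replica holding a full copy of the weights, gradients, and optimizer state. A back-of-envelope sketch, assuming fp16 weights/gradients with fp32 master weights and Adam state (the standard per-parameter accounting from the mixed-precision training literature):

```python
def adam_mixed_precision_bytes_per_param() -> int:
    # fp16 weight (2) + fp16 gradient (2)
    # + fp32 master weight (4) + fp32 Adam momentum (4) + fp32 Adam variance (4)
    return 2 + 2 + 4 + 4 + 4  # 16 bytes per parameter

params = 7e9  # e.g. a 7B-parameter model (illustrative size)
gib = params * adam_mixed_precision_bytes_per_param() / 2**30
print(f"{gib:.0f} GiB of state per data-parallel replica")  # ~104 GiB
```

Since each data-parallel worker carries this full ~104 GiB of state redundantly, memory, not compute, becomes the bottleneck, which motivates parameter-efficient methods like LoRA.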
- Parameter-efficient fine-tuning with LoRA
- LoRA data parallelism with mixed precision
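LoRA (Low-Rank Adaptation) freezes the pretrained weight matrix and learns only a low-rank additive update, which is what makes the data-parallel setup above tractable. A minimal NumPy sketch of the idea (dimensions and scaling constants here are illustrative choices, not values from the talk):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16.0, r=8):
    """y = x @ (W + (alpha / r) * A @ B); W stays frozen, only A and B train."""
    return x @ W + (alpha / r) * (x @ A) @ B

d_in, d_out, r = 512, 512, 8
rng = np.random.default_rng(0)
W = rng.normal(size=(d_in, d_out))      # frozen pretrained weight
A = rng.normal(size=(d_in, r)) * 0.01   # trainable down-projection
B = np.zeros((r, d_out))                # trainable up-projection, zero-init
                                        # so the update starts as a no-op

full = W.size           # 262,144 trainable params for full fine-tuning
lora = A.size + B.size  # 8,192 trainable params with LoRA (32x fewer)
```

Because B is initialized to zero, the adapted model starts out identical to the base model, and only the small A and B matrices (plus their optimizer state) need to be replicated and synchronized across workers.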
- Summary
- Q&A
- Mark's journey to LLMs
- Task clustering: mixing with existing data sets
- LangChain Auto Evaluator for evaluating LLMs
- Cloud platform costs
- Vector database used at Preemo
- Finding a model's reasoning path through prompting
- When to fine-tune versus prompting with a context window
- Wrap-up
Taught by
MLOps.community
Related Courses
3D Printing for Everyone (Tomsk State University via Coursera)
Developing a Multidimensional Data Model (Microsoft via edX)
Launching into Machine Learning, Japanese version (Google Cloud via Coursera)
Art and Science of Machine Learning, Japanese version (Google Cloud via Coursera)
Launching into Machine Learning, German version (Google Cloud via Coursera)