50x Faster Fine-Tuning in 10 Lines of YAML with Ludwig and Ray
Offered By: Anyscale via YouTube
Course Description
Overview
Learn how to fine-tune powerful foundation models such as LLMs on your own data using Ludwig and Ray in this 12-minute conference talk. Explore three methods for fine-tuning models in fewer than 10 lines of YAML: modifying the pretrained model weights, training dense layers on top of fixed pretrained embeddings, and using pretrained embeddings as inputs to tree-based models. Compare model quality against training time and cost, and discover how Ludwig leverages Ray AIR for scalable, optimized performance. Gain insights into using the Ludwig framework for flexible model training, scaling automatically with Ray, applying best practices to speed up fine-tuning, and understanding Ludwig's integration with Ray Train for distributed training. Learn about Ludwig's encoder embedding cache and how Ray Datasets parallelizes this process for efficient fine-tuning on CPU hardware. Explore switching between neural networks and gradient boosted trees through Ludwig's configuration, and leverage Ray AIR's support for distributed PyTorch and LightGBM.
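To make the configuration-driven approach concrete, here is a minimal sketch of a Ludwig fine-tuning config in the spirit of the talk. It assumes the Ludwig 0.6+ config schema; the feature names (review, sentiment), the choice of a BERT encoder, and the hyperparameter values are illustrative rather than taken from the talk.

input_features:
  - name: review              # illustrative text column
    type: text
    encoder:
      type: bert
      trainable: true         # method 1: update the pretrained encoder weights
                              # set trainable: false to freeze the encoder and
                              # train only the layers above it (method 2)
output_features:
  - name: sentiment           # illustrative category target
    type: category
trainer:
  epochs: 3
backend:
  type: ray                   # scale preprocessing and training out with Ray

For method 3, the talk describes feeding the frozen pretrained embeddings into a tree-based model; in recent Ludwig releases this is likewise a configuration change (for example, setting model_type: gbm trains a LightGBM model instead of a neural network), with the encoder embedding cache and Ray Datasets keeping that path fast on CPU hardware, though exact option names vary by Ludwig version.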
Syllabus
50x Faster Fine-Tuning in 10 Lines of YAML with Ludwig and Ray
Taught by
Anyscale
Related Courses
Neural Networks for Machine Learning (University of Toronto via Coursera)
機器學習技法 (Machine Learning Techniques) (National Taiwan University via Coursera)
Machine Learning Capstone: An Intelligent Application with Deep Learning (University of Washington via Coursera)
Прикладные задачи анализа данных (Applied Problems of Data Analysis) (Moscow Institute of Physics and Technology via Coursera)
Leading Ambitious Teaching and Learning (Microsoft via edX)