Lessons From Fine-Tuning Llama-2
Offered By: Anyscale via YouTube
Course Description
Overview
Explore the insights gained from fine-tuning open-source language models for task-specific applications in this 29-minute presentation by Anyscale. Discover how tailored, task-specific models can outperform general-purpose models such as GPT-4 in specialized scenarios. Learn how the Anyscale platform and Ray's suite of libraries enable efficient fine-tuning workflows that address the critical bottleneck of GPU availability. Gain practical takeaways on when to apply fine-tuning, how to frame an LLM fine-tuning problem, and how Ray and its libraries fit into a fine-tuning infrastructure. Understand the requirements for parameter-efficient fine-tuning and how the Anyscale platform supports LLM fine-tuning. An accompanying slide deck provides a comprehensive overview of the presented concepts and techniques.
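The talk's themes of parameter-efficient fine-tuning and Ray-based infrastructure can be made concrete with a short sketch. The following is a minimal illustration, not taken from the presentation itself: it assumes the Hugging Face peft library with LoRA, and the checkpoint name, adapter rank, and target modules are chosen purely for illustration.

```python
# Minimal LoRA sketch (illustrative; not from the talk). Assumes the
# Hugging Face transformers and peft libraries; the Llama-2 checkpoint
# requires gated access on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapter matrices instead of all model
# weights, which is what makes fine-tuning feasible when GPUs are scarce.
lora = LoraConfig(
    r=8,                                  # adapter rank (assumption)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all params
```

To distribute such a training loop across GPU workers, Ray Train's TorchTrainer is one plausible entry point; the worker count and loop body below are assumptions, not details from the talk.

```python
# Sketch of scaling a fine-tuning loop with Ray Train (assumed setup).
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

def train_loop_per_worker():
    # Per-worker logic would go here: build the PEFT-wrapped model,
    # shard the dataset, and run the optimizer steps.
    ...

trainer = TorchTrainer(
    train_loop_per_worker,
    scaling_config=ScalingConfig(num_workers=4, use_gpu=True),  # assumed sizing
)
result = trainer.fit()
```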
Syllabus
Lessons From Fine-Tuning Llama-2
Taught by
Anyscale
Related Courses
Generative AI Engineering and Fine-Tuning Transformers
IBM via Coursera
The Next Million AI Apps - Developing Custom Models for Specialized Tasks
MLOps.community via YouTube
LLM Fine-Tuning - Explained
CodeEmporium via YouTube
Fine-tuning Large Models on Local Hardware Using PEFT and Quantization
EuroPython Conference via YouTube
Fine-Tuning and Customizing LLMs for Enterprise Tasks
Snorkel AI via YouTube