YoVDO

Lessons From Fine-Tuning Llama-2

Offered By: Anyscale via YouTube

Tags

Fine-Tuning Courses Language Models Courses Parameter-Efficient Fine-Tuning Courses Anyscale Courses

Course Description

Overview

Explore the insights gained from fine-tuning open-source language models for task-specific applications in this 29-minute presentation by Anyscale. Discover how tailored, task-specific models can outperform general-purpose models like GPT-4 in specialized scenarios. Learn how the Anyscale platform and Ray's suite of libraries enable efficient fine-tuning workflows while addressing the critical GPU-availability bottleneck. Gain practical takeaways on when to apply fine-tuning, how to frame an LLM fine-tuning problem, and how Ray and its libraries fit into a fine-tuning infrastructure. Understand the requirements for parameter-efficient fine-tuning and how the Anyscale platform supports LLM fine-tuning. An accompanying slide deck provides a comprehensive overview of the presented concepts and techniques.
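To illustrate why parameter-efficient fine-tuning (one of the topics the talk covers) matters in practice, the sketch below computes the trainable-parameter count of a LoRA-style low-rank adapter versus full fine-tuning for a single weight matrix. The dimensions and rank are illustrative assumptions, not figures from the presentation: a d × k weight gets adapters B (d × r) and A (r × k), so only r·(d + k) parameters train instead of d·k.

```python
def lora_trainable_params(d: int, k: int, r: int) -> tuple[int, int]:
    """Trainable-parameter counts for one d x k weight matrix.

    Returns (full fine-tuning count, LoRA adapter count), where the
    adapter factors the update as B @ A with B: d x r and A: r x k.
    """
    full = d * k          # every weight is trainable
    lora = r * (d + k)    # only the two low-rank factors are trainable
    return full, lora


# Illustrative example: a 4096 x 4096 attention projection, rank 8
full, lora = lora_trainable_params(4096, 4096, 8)
print(f"full: {full:,}  lora: {lora:,}  ratio: {lora / full:.4%}")
```

At rank 8 the adapter trains roughly 0.4% of the parameters of the full matrix, which is the kind of saving that makes fine-tuning feasible when GPU availability is the bottleneck.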

Syllabus

Lessons From Fine-Tuning Llama-2


Taught by

Anyscale

Related Courses

Optimizing LLM Inference with AWS Trainium, Ray, vLLM, and Anyscale
Anyscale via YouTube
Scalable and Cost-Efficient AI Workloads with AWS and Anyscale
Anyscale via YouTube
End-to-End LLM Workflows with Anyscale
Anyscale via YouTube
Developing and Serving RAG-Based LLM Applications in Production
Anyscale via YouTube
Deploying Many Models Efficiently with Ray Serve
Anyscale via YouTube