Fine-Tuning Large Language Models at Scale - Workday's Approach
Offered By: Anyscale via YouTube
Course Description
Overview
Discover Workday's innovative approach to fine-tuning Large Language Models (LLMs) at scale in this Ray Summit 2024 presentation. Explore how Trevor DiMartino addresses the challenges of training models on isolated customer data within a secure, multi-tenant environment while managing GPU scarcity and strict data access controls. Learn about Workday's platform design, which utilizes Parameter-Efficient Fine-Tuning (PEFT) techniques like LoRA and KubeRay's autoscaling capabilities to enable cost-efficient, on-demand GPU resource allocation for both research and production environments. Gain insights into how Ray is leveraged at various scales to create a flexible deployment solution, making full-stack development as accessible as full-scale production in Workday's multi-tenant ecosystem.
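The talk centers on LoRA-style Parameter-Efficient Fine-Tuning run on elastically scaled GPU workers. As a rough illustration of the technique (not Workday's actual code), the following is a minimal sketch using the Hugging Face transformers and peft libraries; the base model name, rank, and target modules are assumptions chosen for the example.

```python
# Minimal LoRA (PEFT) sketch: only small low-rank adapter matrices are trained,
# so each tenant's fine-tune touches a tiny fraction of the model's weights.
# Model name and hyperparameters below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "meta-llama/Llama-2-7b-hf"  # assumed base model for illustration
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

lora_config = LoraConfig(
    r=8,                                   # low-rank dimension of the adapters
    lora_alpha=16,                         # scaling factor applied to adapter output
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of weights are trainable
```

In a KubeRay-based setup like the one described in the talk, a training loop built around adapters of this kind can be distributed with Ray Train, letting the autoscaler add GPU workers only while a fine-tuning job is running and release them afterward.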
Syllabus
How Workday Fine-Tunes LLMs at Scale | Ray Summit 2024
Taught by
Anyscale
Related Courses
Managing Devices using Enterprise Mobility Suite (Microsoft via edX)
Firebase Essentials For Android (Google via Udacity)
Research Data Management and Sharing (The University of North Carolina at Chapel Hill via Coursera)
Key Points of the SAP HANA Cloud Platform (SAP Learning)
Windows 10 for the Enterprise (Microsoft Virtual Academy via OpenClassrooms)