Fine-Tuning Large Language Models at Scale - Workday's Approach

Offered By: Anyscale via YouTube

Tags

Fine-Tuning Courses
LoRA (Low-Rank Adaptation) Courses
Data Security Courses
KubeRay Courses
Parameter-Efficient Fine-Tuning Courses

Course Description

Overview

Discover Workday's innovative approach to fine-tuning Large Language Models (LLMs) at scale in this Ray Summit 2024 presentation. Explore how Trevor DiMartino addresses the challenges of training models on isolated customer data within a secure, multi-tenant environment while managing GPU scarcity and strict data access controls. Learn about Workday's platform design, which utilizes Parameter-Efficient Fine-Tuning (PEFT) techniques like LoRA and KubeRay's autoscaling capabilities to enable cost-efficient, on-demand GPU resource allocation for both research and production environments. Gain insights into how Ray is leveraged at various scales to create a flexible deployment solution, making full-stack development as accessible as full-scale production in Workday's multi-tenant ecosystem.
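The LoRA technique mentioned above freezes the pretrained weights and trains only a small low-rank update, which is what makes per-customer fine-tuning cheap enough to run at scale. A minimal NumPy sketch of the core idea (shapes, names, and hyperparameter values are illustrative, not Workday's implementation):

```python
import numpy as np

# Frozen pretrained weight matrix (d_out x d_in); values are illustrative.
d_out, d_in, r = 8, 16, 2          # r is the low-rank bottleneck, r << min(d_out, d_in)
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors: only A and B are updated during fine-tuning.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))           # zero-init so the adapter starts as a no-op

alpha = 4                          # LoRA scaling hyperparameter

def lora_forward(x):
    """Frozen path plus scaled low-rank update: x W^T + (alpha/r) x A^T B^T."""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((3, d_in))
# With B zero-initialized, the adapted output equals the frozen model's output.
assert np.allclose(lora_forward(x), x @ W.T)

# Parameter savings: r*(d_in + d_out) trainable values vs d_in*d_out for full fine-tuning.
print(r * (d_in + d_out), "trainable vs", d_in * d_out, "full")
```

Because only `A` and `B` are stored per tenant, each customer's adapter stays small and isolated while the base model is shared, which fits the multi-tenant data-isolation constraint the talk describes.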

Syllabus

How Workday Fine-Tunes LLMs at Scale | Ray Summit 2024


Taught by

Anyscale

Related Courses

How to Do Stable Diffusion LORA Training by Using Web UI on Different Models
Software Engineering Courses - SE Courses via YouTube
MicroPython & WiFi
Kevin McAleer via YouTube
Building a Wireless Community Sensor Network with LoRa
Hackaday via YouTube
ComfyUI - Node Based Stable Diffusion UI
Olivio Sarikas via YouTube
AI Masterclass for Everyone - Stable Diffusion, ControlNet, Depth Map, LORA, and VR
Hugh Hou via YouTube