FwdLLM: Efficient Federated Finetuning of Large Language Models with Perturbed Inferences

Offered By: USENIX via YouTube

Tags

Federated Learning
LLaMA (Large Language Model Meta AI)

Course Description

Overview

Explore an innovative approach to federated fine-tuning of Large Language Models (LLMs) in this 20-minute conference talk from USENIX ATC '24. Dive into FwdLLM, a novel protocol designed to make Federated Learning (FL) of LLMs efficient on mobile devices. Learn how the researchers from Beijing University of Posts and Telecommunications address the challenge of balancing LLM complexity with mobile resource constraints. Discover the key components of FwdLLM, including backpropagation-free training, adaptive allocation of computational load, and discriminative sampling of perturbed predictions. Gain insight into the significant advantages of this approach, such as faster convergence and a reduced memory footprint, and understand how it enables federated billion-parameter LLMs on commercial off-the-shelf mobile devices for the first time.
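To make the "backpropagation-free training with perturbed inferences" idea concrete, here is a minimal Python sketch of the general zeroth-order technique it builds on: estimating a gradient from forward passes alone by probing the loss along random perturbation directions. This is an illustration of the underlying principle, not the authors' implementation; all names (perturbed_gradient_estimate, num_perturbations, eps) are hypothetical, and FwdLLM's actual designs (e.g., discriminative perturbation sampling) are not reproduced here.

```python
import numpy as np

def perturbed_gradient_estimate(loss_fn, weights, num_perturbations=8, eps=1e-3):
    """Estimate the gradient of loss_fn at `weights` using forward passes only.

    Each random direction v yields a finite-difference estimate of the
    directional derivative, (L(w + eps*v) - L(w)) / eps; scaling v by that
    value and averaging gives a gradient estimate without backpropagation.
    """
    base_loss = loss_fn(weights)
    grad_est = np.zeros_like(weights)
    for _ in range(num_perturbations):
        v = np.random.randn(*weights.shape)  # random perturbation direction
        directional = (loss_fn(weights + eps * v) - base_loss) / eps
        grad_est += directional * v          # accumulate the scaled direction
    return grad_est / num_perturbations

# Toy usage: a quadratic loss whose true gradient is 2 * (w - target).
target = np.array([1.0, -2.0, 0.5])
loss = lambda w: np.sum((w - target) ** 2)

w = np.zeros(3)
for step in range(200):
    w -= 0.05 * perturbed_gradient_estimate(loss, w)
print(w)  # approaches `target` using forward passes only
```

Because each client only runs perturbed forward passes, the memory cost of storing activations for backpropagation disappears, which is what makes this style of training attractive on resource-constrained mobile devices.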

Syllabus

USENIX ATC '24 - FwdLLM: Efficient Federated Finetuning of Large Language Models with Perturbed Inferences


Taught by

USENIX

Related Courses

Secure and Private AI
Facebook via Udacity
Advanced Deployment Scenarios with TensorFlow
DeepLearning.AI via Coursera
Big Data for Reliability and Security
Purdue University via edX
MLOps for Scaling TinyML
Harvard University via edX
Edge Analytics: IoT and Data Science
LinkedIn Learning