Hugging Face Accelerate: Making Device-Agnostic ML Training and Inference Easy at Scale
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore the open-source library Hugging Face Accelerate, designed to simplify machine learning model training and inference across various devices. Learn how this framework maintains a low-level approach, minimizing abstraction while maximizing code flexibility. Discover its evolution over the past two years, including support for training on diverse ML acceleration hardware (CUDA, XLA, NPU, and XPU), lower precision training for improved speed and memory efficiency, and scalable large model inference. Gain insights into Accelerate's impact on the ML landscape and get introduced to its user-friendly, device-agnostic API. By the end of this 23-minute conference talk, acquire the knowledge needed to begin your journey into large-scale computing and local-first deployment of machine learning models using Hugging Face Accelerate.
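As a rough illustration of the device-agnostic API the talk introduces, the sketch below shows a minimal training loop built on Accelerate's Accelerator, prepare(), and backward() calls; the toy model, data, and hyperparameters are illustrative assumptions, not taken from the talk.

```python
# Minimal sketch of a device-agnostic training loop with Hugging Face Accelerate.
# The tiny model and synthetic data are placeholders for illustration only.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# The Accelerator detects the available hardware (CUDA, XLA, NPU, XPU, or CPU).
# Accelerator(mixed_precision="fp16") would enable lower-precision training on supported devices.
accelerator = Accelerator()

model = torch.nn.Linear(16, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(128, 16), torch.randint(0, 2, (128,)))
dataloader = DataLoader(dataset, batch_size=32)

# prepare() wraps the model, optimizer, and dataloader so the same loop
# runs unchanged on a single device or across a distributed setup.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    # accelerator.backward() replaces loss.backward() and handles
    # device placement and precision scaling behind the scenes.
    accelerator.backward(loss)
    optimizer.step()
```

The key point, which the talk expands on, is that the loop itself contains no device-specific code; selecting hardware and precision is left to the Accelerator configuration.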
Syllabus
Hugging Face Accelerate: Making Device-Agnostic ML Training and Inference Easy... - Zachary Mueller
Taught by
Linux Foundation
Related Courses
High Performance Computing - Georgia Institute of Technology via Udacity
Fundamentals of Accelerated Computing with CUDA C/C++ - Nvidia via Independent
High Performance Computing for Scientists and Engineers - Indian Institute of Technology, Kharagpur via Swayam
CUDA programming Masterclass with C++ - Udemy
Neural Network Programming - Deep Learning with PyTorch - YouTube