Optimizing vLLM for Intel CPUs and XPUs - Ray Summit 2024

Offered By: Anyscale via YouTube

Tags

vLLM Courses
Hardware Acceleration Courses

Course Description

Overview

Explore the optimization of vLLM for Intel CPUs and XPUs in this 30-minute conference talk from Ray Summit 2024. Ding Ke and Yuan Zhou present their work on enhancing vLLM performance for Intel architectures to meet the growing demands of GenAI inference. Gain insight into the key technical advancements, challenges, and solutions encountered during the optimization process, and learn how collaboration with the open-source community helped refine the approach and accelerate progress. Examine initial performance data showcasing the efficiency improvements of vLLM on Intel hardware. The talk gives developers and organizations aiming to maximize GenAI inference performance on Intel platforms a technical perspective on hardware-specific optimizations for large language models, useful for anyone building high-performance AI applications.
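
The presentation covers the backend internals; as a rough illustration of the user-facing side only, the sketch below runs vLLM's standard offline-inference API, assuming a vLLM build with the Intel CPU or XPU backend enabled. The VLLM_TARGET_DEVICE build flag mentioned in the comments and the facebook/opt-125m model are illustrative assumptions drawn from vLLM's documentation, not details confirmed by the talk.

    # Minimal sketch: vLLM offline inference, assuming an Intel CPU/XPU-enabled build.
    # Backend selection happens at build time (illustrative, per vLLM's docs):
    #   VLLM_TARGET_DEVICE=cpu pip install -e .   # Intel CPU backend
    #   VLLM_TARGET_DEVICE=xpu pip install -e .   # Intel XPU (GPU) backend
    from vllm import LLM, SamplingParams

    # Load a small model; vLLM dispatches to whichever device backend was built.
    llm = LLM(model="facebook/opt-125m")  # model choice is an assumption for the example

    params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
    outputs = llm.generate(["Explain what a KV cache is in one sentence."], params)

    for out in outputs:
        print(out.outputs[0].text)

Because the device backend is chosen when vLLM is built, the same script runs unchanged across CUDA, CPU, and XPU installations, which is part of what keeps the Intel-specific optimizations discussed in the talk transparent to application code.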

Syllabus

Optimizing vLLM for Intel CPUs and XPUs | Ray Summit 2024


Taught by

Anyscale

Related Courses

Finetuning, Serving, and Evaluating Large Language Models in the Wild
Open Data Science via YouTube
Cloud Native Sustainable LLM Inference in Action
CNCF [Cloud Native Computing Foundation] via YouTube
Optimizing Kubernetes Cluster Scaling for Advanced Generative Models
Linux Foundation via YouTube
LLaMa for Developers
LinkedIn Learning
Scaling Video Ad Classification Across Millions of Classes with GenAI
Databricks via YouTube