Exploring the Latency, Throughput, and Cost Space for LLM Inference
Offered By: MLOps.community via YouTube
Course Description
Overview
Explore the latency, throughput, and cost trade-offs of LLM inference stacks in this 30-minute conference talk by Timothée Lacroix, CTO of Mistral AI. Delve into the process of selecting the best model for a given task, choosing appropriate hardware, and implementing efficient inference code. Examine popular inference stacks and setups, uncovering the factors that drive inference costs. Gain insights into using current open-source models effectively, learn about the limitations of existing open-source serving stacks, and discover the potential advancements that future generations of models may bring to LLM inference.
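To make the latency/throughput/cost space the talk explores concrete, here is a minimal back-of-the-envelope sketch in Python. All numbers (GPU hourly price, sustained throughput, per-token latency) are illustrative assumptions for this description, not figures from the talk itself.

    # Rough cost model for serving an LLM on rented GPU hardware.
    # Every constant below is an assumed, illustrative value.

    GPU_PRICE_PER_HOUR = 4.00      # assumed cloud price for one GPU, in USD
    THROUGHPUT_TOK_PER_S = 2_000   # assumed aggregate tokens/s at some batch size
    LATENCY_MS_PER_TOK = 30        # assumed per-token latency at that batch size

    def cost_per_million_tokens(price_per_hour: float, tokens_per_second: float) -> float:
        """Dollars spent to generate one million tokens at a sustained throughput."""
        tokens_per_hour = tokens_per_second * 3600
        return price_per_hour / tokens_per_hour * 1_000_000

    if __name__ == "__main__":
        cost = cost_per_million_tokens(GPU_PRICE_PER_HOUR, THROUGHPUT_TOK_PER_S)
        print(f"~${cost:.2f} per 1M tokens at {THROUGHPUT_TOK_PER_S} tok/s, "
              f"{LATENCY_MS_PER_TOK} ms/token latency")

The sketch shows the core tension the talk examines: larger batch sizes raise throughput (lowering cost per token) but also raise per-token latency, so the "right" operating point depends on the workload.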
Syllabus
Exploring the Latency/Throughput & Cost Space for LLM Inference // Timothée Lacroix // CTO Mistral
Taught by
MLOps.community
Related Courses
Elastic Cloud Infrastructure: Containers and Services auf Deutsch (Google Cloud via Coursera)
Deep Dive into Amazon Glacier (Amazon via Independent)
AWS Well-Architected Training (Amazon via Independent)
Gestión de compras eficientes para tu empresa (Logyca via edX)
Optimizing Your Google Cloud Costs 日本語版 (Google Cloud via Coursera)