Deploy LLMs More Efficiently with vLLM and Neural Magic
Offered By: Neural Magic via YouTube
Course Description
Overview
Discover the advantages of vLLM, the leading open-source inference server, and explore how Neural Magic collaborates with enterprises to develop and scale vLLM-based model services for improved efficiency and cost-effectiveness. Delve into the history of open-source AI, deployment paradigms, and the benefits of open-source solutions. Gain insights into Neural Magic's mission and its role in vLLM development, and learn about its business model. Explore topics such as hardware support, quantization techniques, and scalable deployment strategies. Examine a case study and understand the importance of a model registry in AI deployment. This 33-minute video provides a comprehensive overview of efficient LLM deployment using vLLM and Neural Magic's expertise.
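As a point of reference for the deployment and quantization topics the video covers, here is a minimal sketch of offline inference with vLLM's Python API; the model name and quantization setting are illustrative assumptions (an AWQ-quantized checkpoint is presumed available), not an example taken from the video.

    # Minimal sketch of offline inference with vLLM (model name is illustrative).
    from vllm import LLM, SamplingParams

    # Load a model; quantization="awq" assumes an AWQ-quantized checkpoint.
    llm = LLM(model="TheBloke/Mistral-7B-Instruct-v0.2-AWQ", quantization="awq")

    # Sampling settings for generation.
    sampling_params = SamplingParams(temperature=0.7, max_tokens=128)

    # Generate completions for a prompt and print the text.
    outputs = llm.generate(["Explain what vLLM is in one sentence."], sampling_params)
    for output in outputs:
        print(output.outputs[0].text)

For production serving, vLLM also provides an OpenAI-compatible HTTP server (for example, python -m vllm.entrypoints.openai.api_server --model <model>), which is the usual entry point for the kind of scalable deployments discussed here.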
Syllabus
Introduction
Our Vision and Mission
History of Open Source AI
Advantages of Open Source
Deployment Paradigms
What is vLLM
Who Neural Magic is
Our Mission
Why vLLM
vLLM Adoption
Hardware Support
Neural Magic's Role in vLLM
Neural Magic's Business
Stable Distribution of vLLM
Quantization
Case Study
Model Registry
Scalable Deployment
Taught by
Neural Magic
Related Courses
Finetuning, Serving, and Evaluating Large Language Models in the Wild (Open Data Science via YouTube)
Cloud Native Sustainable LLM Inference in Action (CNCF [Cloud Native Computing Foundation] via YouTube)
Optimizing Kubernetes Cluster Scaling for Advanced Generative Models (Linux Foundation via YouTube)
LLaMa for Developers (LinkedIn Learning)
Scaling Video Ad Classification Across Millions of Classes with GenAI (Databricks via YouTube)