Finetuning, Serving, and Evaluating Large Language Models in the Wild
Offered By: Open Data Science via YouTube
Course Description
Overview
Welcome to the world of large language models with Dr. Hao Zhang, a postdoctoral researcher at the Sky Lab, UC Berkeley. In this talk, Finetuning, Serving, and Evaluating LLMs in the Wild, Hao shares his hands-on experience with serving and evaluating over 20 LLM-based chatbots, including Vicuna, within the innovative Chatbot Arena. In this video, you’ll get a deep dive into Vicuna, an open-source chatbot fine-tuned from Meta’s Llama, and explore the Chatbot Arena platform designed for real-world model evaluations. Discover the challenges behind serving numerous LLMs, achieving high throughput, and ensuring low latency with limited university-donated GPUs. Hao unveils the key enabling techniques, including paged attention (vLLM, SOSP ’23) and statistical multiplexing with model parallelism (AlpaServe, OSDI ’23), developed in collaboration with the LMSYS Org team at https://lmsys.org.
Syllabus
- Introductions
- Background
- An Example
- Chatbot Arena: Deployment & Elo-based Leaderboard
- Today’s Focus: Behind the Scenes
- Key Insight
- vLLM: Efficient Memory Management for LLM Inference
- Memory Efficiency of vLLM
- vLLM Open-Source Adoption
- Key Idea
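
Note: the syllabus items on vLLM refer to the open-source inference engine built around paged attention, mentioned in the overview above. As a rough, illustrative sketch only (not material from the talk), the snippet below shows batched generation with vLLM's offline inference API; the model name, sampling settings, and prompts are assumptions.

    # Minimal sketch (not from the talk): batched generation with vLLM, whose
    # PagedAttention-based KV-cache management enables high-throughput serving.
    # Assumes `pip install vllm` and a CUDA-capable GPU; model name is illustrative.
    from vllm import LLM, SamplingParams

    llm = LLM(model="lmsys/vicuna-7b-v1.5")  # any Hugging Face-format checkpoint
    sampling = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=128)

    prompts = [
        "Explain paged attention in one paragraph.",
        "Why does batching matter for LLM serving throughput?",
    ]

    # vLLM batches the requests and allocates KV-cache memory in fixed-size
    # blocks, so many concurrent sequences fit on limited GPU memory.
    for request_output in llm.generate(prompts, sampling):
        print(request_output.prompt)
        print(request_output.outputs[0].text.strip())
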
Taught by
Open Data Science
Related Courses
How to Build a Chatbot Without Coding (IBM via Coursera)
Building Bots for Journalism: Software You Talk With (Knight Center for Journalism in the Americas via Independent)
Microsoft Bot Framework and Conversation as a Platform (Microsoft via edX)
AI Chatbots without Programming (IBM via edX)
Smarter Chatbots with Node-RED and Watson AI (IBM via edX)