Self-hosted LLMs Across All Your Devices and GPUs
Offered By: Conf42 via YouTube
Course Description
Overview
Explore the world of self-hosted Large Language Models (LLMs) across various devices and GPUs in this informative conference talk from Conf42 LLMs 2024. Discover the benefits of open-source solutions and learn why alternatives to OpenAI might be preferable. Gain insights into the LlamaEdge API server and its role as a developer platform. Watch live demonstrations showcasing the implementation and operation of self-hosted LLMs. Delve into development and operational aspects, and witness practical applications through additional demos. Enhance your understanding of LLM deployment and management across different hardware configurations in this comprehensive 34-minute presentation.
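As context for the talk's "why not just OpenAI?" theme: LlamaEdge serves an OpenAI-compatible HTTP API, so existing client code can be pointed at a local server instead of OpenAI's endpoint. The sketch below builds a chat-completion request payload; the port, endpoint path, and model name are assumptions for a locally started server, not details taken from the talk.

```python
import json
import urllib.request

# Assumed local endpoint for a self-hosted, OpenAI-compatible server
# (LlamaEdge-style); adjust host/port to match your deployment.
BASE_URL = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "llama-3-8b-chat",  # hypothetical local model name
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Why self-host an LLM?"},
    ],
}

# Serialize the request body; this part runs without a server.
body = json.dumps(payload).encode("utf-8")

# To send it against a running local server, uncomment:
# req = urllib.request.Request(
#     BASE_URL, data=body, headers={"Content-Type": "application/json"}
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the request shape matches OpenAI's chat-completions format, swapping between a hosted and a self-hosted backend is typically just a base-URL change.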
Syllabus
intro
preamble
all open source
demo
why not just openai?
why llamaedge api server?
llamaedge is a developer platform
demo
dev
ops
demo
thank you
Taught by
Conf42
Related Courses
Fundamentals of Accelerated Computing with CUDA C/C++ (Nvidia via Independent)
Using GPUs to Scale and Speed-up Deep Learning (IBM via edX)
Deep Learning (IBM via edX)
Deep Learning with IBM (IBM via edX)
Accelerating Deep Learning with GPUs (IBM via Cognitive Class)