Running an Open Source LLM - Deployment and Cost Considerations

Offered By: Conf42 via YouTube

Tags

Machine Learning
Kubernetes
Google Cloud Platform (GCP)
GPU Computing
Cloud Infrastructure
Hugging Face
Open Source LLMs

Course Description

Overview

Explore the process of running an open-source Large Language Model (LLM) in this conference talk from Conf42 LLMs 2024. Learn how the product was envisioned, get a basic overview of LLMs, and see how Hugging Face is used for model hosting. The talk covers deployment infrastructure on Google Cloud, GPU requirements, and a Kubernetes-based setup, then walks through experimentation results, the challenges of working with open LLMs, cost considerations, and the speaker's key learnings. It also explains why swapping one LLM for another is never a one-to-one switch and how to approach the implementation process.
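The description mentions hosting an open model from Hugging Face and serving it on GPU infrastructure. As a rough, hedged illustration of what that looks like in practice, here is a minimal sketch using the transformers library; the model name, precision, and generation settings are assumptions for the example, not details taken from the talk.

```python
# Minimal sketch (not from the talk): load an open-source LLM from the
# Hugging Face Hub and run a single generation on an available GPU.
# The model ID below is an assumed example, not the speaker's choice.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed example model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision to fit on a single GPU
    device_map="auto",          # place weights on the available device(s)
)

prompt = "Summarize the trade-offs of self-hosting an open-source LLM."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In a cloud deployment like the one the talk describes, a script along these lines would typically run inside a container scheduled onto a GPU node (for example on Kubernetes), with the GPU requirements driving much of the cost discussed in the session.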

Syllabus

intro
preamble
the product
the product we envisioned
basic overview
hugging face
hosting llm
based on google cloud
gpu requirements
deployment infrastructure
kubernetes
experimentation and results
open llms
no one-to-one switch
how much is this going to cost?
learnings
thank you
Taught by

Conf42

Related Courses

Creating Versatile AI Agents Through WebAssembly and Rust
Linux Foundation via YouTube
Building a Q&A App with RAG, LangChain, and Open-Source LLMs - Step-by-Step Guide
Code With Aarohi via YouTube
Self-Hosted LLM Agent on Your Own Laptop or Edge Device
CNCF [Cloud Native Computing Foundation] via YouTube
Open Source LLMs: Viable for Production or a Low-Quality Toy?
Anyscale via YouTube
GPT-4 vs Open Source LLMs: Epic Rap Battles Test Creativity with AutoGen
Data Centric via YouTube