YoVDO

Running an Open Source LLM - Deployment and Cost Considerations

Offered By: Conf42 via YouTube

Tags

Machine Learning Courses Kubernetes Courses Google Cloud Platform (GCP) Courses GPU Computing Courses Cloud Infrastructure Courses Hugging Face Courses Open Source LLMs Courses

Course Description

Overview

Explore the process of running an open-source Large Language Model (LLM) in this conference talk from Conf42 LLMs 2024. Gain insights into envisioning the product, a basic overview of LLMs, and using Hugging Face for model hosting. Learn about deployment infrastructure on Google Cloud, GPU requirements, and a Kubernetes implementation. Discover experimentation results, challenges with open LLMs, cost considerations, and key learnings from the speaker's experience. Understand why there is no one-to-one switch between different LLMs and how to approach the implementation process.
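The talk covers GPU requirements and the question "how much is this going to cost?" together, since serving memory largely determines which GPU instance you must rent. As a rough back-of-the-envelope sketch (not from the talk itself; the model size, overhead factor, and hourly rate below are illustrative assumptions), memory to serve a model is approximately parameter count × bytes per parameter, plus runtime overhead:

```python
# Rough GPU memory and cost estimator for serving an open LLM.
# The overhead factor and hourly GPU price are illustrative
# assumptions, not figures from the talk.

def serving_memory_gb(params_billions: float, bytes_per_param: int = 2,
                      overhead: float = 1.2) -> float:
    """Approximate GPU memory (GB) needed to serve a model.

    bytes_per_param: 2 for fp16/bf16 weights, 1 for 8-bit quantization.
    overhead: multiplier covering KV cache, activations, and buffers.
    """
    return params_billions * bytes_per_param * overhead

def monthly_cost_usd(hourly_rate: float, hours: float = 730.0) -> float:
    """Cost of keeping one GPU instance running for a full month."""
    return hourly_rate * hours

# Example: a 7B-parameter model in fp16 needs roughly 17 GB, so it
# fits on a single 24 GB GPU; 8-bit quantization roughly halves that.
mem = serving_memory_gb(7)           # ≈ 16.8 GB
cost = monthly_cost_usd(0.70)        # hypothetical $0.70/hr on-demand rate
print(f"~{mem:.1f} GB GPU memory, ~${cost:.0f}/month if always on")
```

Estimates like this explain why there is no one-to-one switch between LLMs: a larger replacement model can push you into a bigger (and differently priced) GPU class.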

Syllabus

Intro
Preamble
The product
The product we envisioned
Basic overview
Hugging Face
Hosting the LLM
Based on Google Cloud
GPU requirements
Deployment infrastructure
Kubernetes
Experimentation and results
Open LLMs
No one-to-one switch
How much is this going to cost?
Learnings
Thank you


Taught by

Conf42

Related Courses

Biomolecular Modeling on GPU
Moscow Institute of Physics and Technology via Coursera
Practical Deep Learning For Coders
fast.ai via Independent
GPU Architectures And Programming
Indian Institute of Technology, Kharagpur via Swayam
Perform Real-Time Object Detection with YOLOv3
Coursera Project Network via Coursera
Getting Started with PyTorch
Coursera Project Network via Coursera