YoVDO

No More Runtime Setup - Bundling, Distributing, Deploying, and Scaling LLMs Seamlessly with Ollama Operator

Offered By: CNCF [Cloud Native Computing Foundation] via YouTube

Tags

LLM (Large Language Model) Courses, Kubernetes Courses, CUDA Courses, GPU Computing Courses, Model Deployment Courses, Containerization Courses, llama.cpp Courses

Course Description

Overview

Explore a new approach to bundling, distributing, deploying, and scaling Large Language Models (LLMs) in this 34-minute conference talk by Fanshi Zhang of DaoCloud, presented at a Cloud Native Computing Foundation (CNCF) event. Learn how Ollama Operator simplifies LLM management by eliminating runtime setup challenges. Discover how this tool, built on Ollama's Modelfile format, streamlines the deployment of LLM workloads across operating systems and environments. Gain insights into using a llama.cpp-powered, unified, bundled runtime through simple CRD definitions or the kollama CLI. Delve into the capabilities of Ollama Operator and Ollama for deploying custom large language models, and explore how to leverage Modelfile features within the Kubernetes ecosystem. This presentation offers practical solutions for developers and organizations facing common hurdles in LLM deployment and management.
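The CRD-based workflow mentioned in the description can be sketched as a minimal Kubernetes manifest. This is a hypothetical sketch: the `ollama.ayaka.io/v1` API group, the `Model` kind, and the `phi` model reference are assumptions drawn from the open-source Ollama Operator project and may differ from what the talk actually shows.

```yaml
# Hypothetical sketch of an Ollama Operator Model resource.
# apiVersion, kind, and field names are assumptions based on the
# Ollama Operator project and may not match the talk exactly.
apiVersion: ollama.ayaka.io/v1
kind: Model
metadata:
  name: phi
spec:
  # Ollama model to pull and serve; any model reference built
  # from a Modelfile should work here in principle.
  image: phi
```

Applied with `kubectl apply -f model.yaml`, the operator would pull the model and serve it through an Ollama-compatible endpoint; per the description, the same deployment can alternatively be driven by the kollama CLI instead of raw CRD manifests.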

Syllabus

No More Runtime Setup! Let's Bundle, Distribute, Deploy, Scale LLMs Seamlessly... - Fanshi Zhang


Taught by

CNCF [Cloud Native Computing Foundation]

Related Courses

Fundamentals of Containers, Kubernetes, and Red Hat OpenShift
Red Hat via edX
Configuration Management for Containerized Delivery
Microsoft via edX
Getting Started with Google Kubernetes Engine - Español
Google Cloud via Coursera
Getting Started with Google Kubernetes Engine - 日本語版
Google Cloud via Coursera
Architecting with Google Kubernetes Engine: Foundations en Español
Google Cloud via Coursera