DevOps for AI: Running LLMs in Production with Kubernetes and KubeFlow

Offered By: WeAreDevelopers via YouTube

Tags

Kubernetes Courses, Artificial Intelligence Courses, DevOps Courses, Cloud Computing Courses, Orchestration Courses, Containerization Courses, Kubeflow Courses

Course Description

Overview

Explore the intersection of DevOps and AI in this 34-minute talk from WeAreDevelopers. It covers the practical aspects of deploying and managing Large Language Models (LLMs) in production using Kubernetes and Kubeflow: streamlining deployment, ensuring scalability, and maintaining high performance for AI workloads. The talk shares best practices for containerization, orchestration, and workflow management tailored to LLMs, along with strategies for overcoming common challenges in AI deployment and for applying DevOps principles to machine learning operations. Whether you're a developer, data scientist, or DevOps engineer, it offers practical guidance for running sophisticated AI models in real-world production scenarios.
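The containerized-deployment pattern the talk describes can be sketched as a minimal Kubernetes manifest. This is an illustrative config fragment, not material from the talk: the name `llm-server`, the image path, the port, and the resource figures are all hypothetical placeholders.

```yaml
# Hypothetical manifest: serves a containerized LLM behind a Deployment.
# Image name, port, and resource figures are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-server
spec:
  replicas: 2                      # scale out horizontally for throughput
  selector:
    matchLabels:
      app: llm-server
  template:
    metadata:
      labels:
        app: llm-server
    spec:
      containers:
        - name: model
          image: registry.example.com/llm-server:latest   # placeholder image
          ports:
            - containerPort: 8080
          resources:
            limits:
              nvidia.com/gpu: 1    # request one GPU per replica (NVIDIA device plugin)
```

A Service and an autoscaler would typically sit in front of such a Deployment; Kubeflow adds the workflow and model-serving layers on top of these primitives.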

Syllabus

DevOps for AI: running LLMs in production with Kubernetes and KubeFlow


Taught by

WeAreDevelopers

Related Courses

Building End-to-end Machine Learning Workflows with Kubeflow (Pluralsight)
Smart Analytics, Machine Learning, and AI on GCP (Pluralsight)
Leveraging Cloud-Based Machine Learning on Google Cloud Platform: Real World Applications (LinkedIn Learning)
Distributed TensorFlow - TensorFlow at O'Reilly AI Conference, San Francisco '18 (TensorFlow via YouTube)
KFServing - Model Monitoring with Apache Spark and Feature Store (Databricks via YouTube)