
Cloud-Native LLM Deployments Made Easy Using LangChain

Offered By: CNCF [Cloud Native Computing Foundation] via YouTube

Tags

LangChain Courses, Kubernetes Courses, Generative AI Courses, Model Deployment Courses, Containerization Courses

Course Description

Overview

Explore how to seamlessly deploy large language model (LLM) architectures using LangChain in a cloud-native environment. Learn about the challenges of deploying LLMs with billions of parameters and how LangChain, an open-source framework, simplifies the creation of generative AI interfaces. Discover how to combine LangChain with Kubernetes to manage complex architectures, balance computational requirements, and ensure efficient resource utilization. Follow a step-by-step walkthrough of deploying an end-to-end, containerized LangChain LLM application in a cloud-native setting, demonstrating how quickly and easily trained models can be turned into working applications. Gain insights into streamlining NLP components and leveraging Kubernetes for infrastructure management in this 34-minute conference talk presented by Ezequiel Lanza and Arun Gupta of Intel at a CNCF event.
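
To make the idea concrete, here is a minimal sketch (not code from the talk) of the kind of LangChain application that could be containerized and run behind a Kubernetes Deployment: a small chain wrapped in an HTTP service. The model id, prompt, and route below are illustrative assumptions, not details from the presentation.

```python
# Minimal sketch: a LangChain chain exposed as a stateless FastAPI service,
# suitable for packaging in a container image and scaling as Kubernetes pods.
# Model id, prompt, and endpoint path are placeholders chosen for illustration.

from fastapi import FastAPI
from langchain_core.prompts import PromptTemplate
from langchain_community.llms import HuggingFacePipeline

# Load an open LLM in-process via a Hugging Face pipeline (placeholder model id).
llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 128},
)

# A simple prompt template; a real gen AI interface would be richer.
prompt = PromptTemplate.from_template("Answer briefly: {question}")
chain = prompt | llm

app = FastAPI()

@app.get("/ask")
def ask(question: str) -> dict:
    # Each pod replica serves this endpoint; Kubernetes handles replica count,
    # resource limits, and scheduling for the compute-heavy model.
    return {"answer": chain.invoke({"question": question})}
```

Because the service holds no request state, Kubernetes can scale replicas horizontally and apply resource requests/limits to balance the model's computational demands across the cluster.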

Syllabus

Cloud-Native LLM Deployments Made Easy Using LangChain - Ezequiel Lanza & Arun Gupta, Intel


Taught by

CNCF [Cloud Native Computing Foundation]

Related Courses

Developing a Tabular Data Model
Microsoft via edX
Data Science in Action - Building a Predictive Churn Model
SAP Learning
Serverless Machine Learning with Tensorflow on Google Cloud Platform 日本語版
Google Cloud via Coursera
Intro to TensorFlow em Português Brasileiro
Google Cloud via Coursera
Serverless Machine Learning con TensorFlow en GCP
Google Cloud via Coursera