YoVDO

Cloud-Native LLM Deployments Made Easy Using LangChain

Offered By: CNCF [Cloud Native Computing Foundation] via YouTube

Tags

LangChain Courses, Kubernetes Courses, Generative AI Courses, Model Deployment Courses, Containerization Courses

Course Description

Overview

Explore how to deploy large language model (LLM) architectures using LangChain in a cloud-native environment. Learn about the challenges of deploying LLMs with billions of parameters and how LangChain, an open-source framework, simplifies the creation of generative AI interfaces. Discover how combining LangChain with Kubernetes helps manage complex architectures, balance computational requirements, and ensure efficient resource utilization. Follow a step-by-step walkthrough of deploying an end-to-end containerized LangChain LLM application in a cloud-native setting, demonstrating how to quickly transition trained models into working applications. Gain insights into streamlining NLP components and leveraging Kubernetes for infrastructure management in this 34-minute conference talk presented by Ezequiel Lanza and Arun Gupta from Intel at a CNCF event.
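As a rough illustration of the kind of cloud-native setup the talk walks through, a containerized LangChain application might be deployed with a standard Kubernetes Deployment and Service. This is a minimal sketch only; the image name, port, replica count, and resource figures below are illustrative assumptions, not values taken from the talk.

```yaml
# Hypothetical manifest for a containerized LangChain app.
# All names, images, and resource numbers are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: langchain-app
spec:
  replicas: 2                      # scale out the inference front end
  selector:
    matchLabels:
      app: langchain-app
  template:
    metadata:
      labels:
        app: langchain-app
    spec:
      containers:
        - name: langchain-app
          image: registry.example.com/langchain-app:latest  # hypothetical image
          ports:
            - containerPort: 8000
          resources:               # illustrative requests/limits for an LLM-serving pod
            requests:
              cpu: "2"
              memory: 4Gi
            limits:
              cpu: "4"
              memory: 8Gi
---
apiVersion: v1
kind: Service
metadata:
  name: langchain-app
spec:
  selector:
    app: langchain-app
  ports:
    - port: 80
      targetPort: 8000
```

Resource requests and limits are where Kubernetes helps balance the heavy computational requirements the description mentions: the scheduler places pods only on nodes that can satisfy the requests, and limits cap each replica's consumption.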

Syllabus

Cloud-Native LLM Deployments Made Easy Using LangChain - Ezequiel Lanza & Arun Gupta, Intel


Taught by

CNCF [Cloud Native Computing Foundation]

Related Courses

Prompt Templates for GPT-3.5 and Other LLMs - LangChain
James Briggs via YouTube
Getting Started with GPT-3 vs. Open Source LLMs - LangChain
James Briggs via YouTube
Chatbot Memory for Chat-GPT, Davinci + Other LLMs - LangChain
James Briggs via YouTube
Chat in LangChain
James Briggs via YouTube
LangChain Data Loaders, Tokenizers, Chunking, and Datasets - Data Prep
James Briggs via YouTube