Streaming for LangChain Agents and FastAPI
Offered By: James Briggs via YouTube
Course Description
Overview
Learn how to implement streaming for LangChain Agents and serve it through FastAPI in this comprehensive 28-minute tutorial. Progress from basic LangChain streaming to advanced techniques, including simple terminal streaming with LLMs, parsing streamed outputs with async iterators, and integrating with OpenAI's GPT-3.5-turbo model via LangChain's ChatOpenAI object. Explore custom callback handlers, FastAPI integration, and essential considerations for deploying streaming in production. Access the accompanying code notebooks and FastAPI template code to quickly apply these concepts in real-world scenarios.
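The core pattern the tutorial builds on is a callback handler that receives the model's output one token at a time. A minimal sketch of that interface is shown below; note this is an illustrative plain-Python class, not the tutorial's actual code — real LangChain handlers subclass `BaseCallbackHandler` and are passed to `ChatOpenAI(streaming=True, callbacks=[...])`.

```python
# Minimal sketch of a token-streaming callback handler, in the spirit of
# LangChain's BaseCallbackHandler. Written as a plain class so it runs
# without LangChain installed; the class name and tokens are illustrative.
class TokenCollector:
    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Invoked once per token as the LLM streams its response;
        # here we simply accumulate tokens, but a handler could also
        # print them or forward them to a client.
        self.tokens.append(token)

handler = TokenCollector()
# Simulate the LLM emitting tokens one at a time.
for tok in ["Hello", ",", " world"]:
    handler.on_llm_new_token(tok)
print("".join(handler.tokens))  # → Hello, world
```

With LangChain itself, the same `on_llm_new_token` hook is what a stdout-streaming handler implements; swapping the body of that method is how the custom handlers covered later in the video change where tokens go.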
Syllabus
Streaming for LLMs and Agents
Simple StdOut Streaming in LangChain
Streaming with LangChain Agents
Final Output Streaming
Custom Callback Handlers in LangChain
FastAPI with LangChain Agent Streaming
Confirming we have Agent Streaming
Custom Callback Handlers for Async
Final Things to Consider
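The FastAPI portion of the syllabus hinges on an async streaming pattern: a callback pushes tokens onto an `asyncio.Queue` while an async generator drains the queue for a `StreamingResponse`. The sketch below shows that producer/consumer shape using only the standard library; the class and function names are illustrative assumptions, and in a real app the generator would be wrapped in `fastapi.responses.StreamingResponse`.

```python
import asyncio

class AsyncQueueHandler:
    """Illustrative async callback: forwards each new token to a queue."""
    def __init__(self, queue: asyncio.Queue):
        self.queue = queue

    async def on_llm_new_token(self, token: str, **kwargs) -> None:
        await self.queue.put(token)

async def token_stream(queue: asyncio.Queue):
    # Drain the queue until a None sentinel signals end of generation;
    # FastAPI would iterate this generator inside a StreamingResponse.
    while True:
        token = await queue.get()
        if token is None:
            break
        yield token

async def demo() -> str:
    queue = asyncio.Queue()
    handler = AsyncQueueHandler(queue)
    # Simulate an agent emitting tokens, then signalling completion.
    for tok in ["streaming", " ", "works"]:
        await handler.on_llm_new_token(tok)
    await queue.put(None)
    return "".join([t async for t in token_stream(queue)])

print(asyncio.run(demo()))  # → streaming works
```

In production the agent call and the queue-draining generator run concurrently (e.g. via `asyncio.create_task`), which is one of the "final things to consider" the tutorial flags.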
Taught by
James Briggs
Related Courses
Prompt Templates for GPT-3.5 and Other LLMs - LangChain (James Briggs via YouTube)
Getting Started with GPT-3 vs. Open Source LLMs - LangChain (James Briggs via YouTube)
Chatbot Memory for Chat-GPT, Davinci + Other LLMs - LangChain (James Briggs via YouTube)
Chat in LangChain (James Briggs via YouTube)
LangChain Data Loaders, Tokenizers, Chunking, and Datasets - Data Prep (James Briggs via YouTube)