Streaming for LangChain Agents and FastAPI
Offered By: James Briggs via YouTube
Course Description
Overview
Learn how to implement streaming for LangChain Agents and serve it through FastAPI in this comprehensive 28-minute tutorial. Progress from basic LangChain streaming to more advanced techniques, including simple terminal streaming with LLMs, parsing streamed outputs with async iterators, and integrating with OpenAI's GPT-3.5-turbo model via LangChain's ChatOpenAI object. Explore custom callback handlers, FastAPI integration, and essential considerations for deploying streaming in production. Access the accompanying code notebooks and FastAPI template code to reinforce your learning and quickly apply these concepts in real-world scenarios.
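The core pattern the tutorial builds on is a callback handler whose `on_llm_new_token` method fires once per generated token. The sketch below imitates that flow without external dependencies: `FakeLLM` and its hard-coded tokens are illustrative stand-ins, not LangChain APIs (the real equivalents would be something like `ChatOpenAI(streaming=True, callbacks=[...])` with a subclass of `BaseCallbackHandler`).

```python
# Dependency-free sketch of the token-callback streaming pattern.
# In LangChain you would subclass BaseCallbackHandler and pass it to a
# streaming chat model; FakeLLM below is a stand-in for that model.

class StdOutTokenHandler:
    """Mimics a stdout streaming handler: print each token as it arrives."""
    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token: str) -> None:
        self.tokens.append(token)
        print(token, end="", flush=True)


class FakeLLM:
    """Illustrative stand-in for a streaming LLM (not a LangChain class)."""
    def __init__(self, callbacks):
        self.callbacks = callbacks

    def stream(self, prompt: str):
        for token in ["Hello", ", ", "world", "!"]:  # pretend model output
            for cb in self.callbacks:
                cb.on_llm_new_token(token)
            yield token


handler = StdOutTokenHandler()
llm = FakeLLM(callbacks=[handler])
answer = "".join(llm.stream("say hi"))
```

The handler both prints and accumulates tokens, which is the same separation the course uses when it moves from terminal streaming to parsing the stream for an agent's final output.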
Syllabus
Streaming for LLMs and Agents
Simple StdOut Streaming in LangChain
Streaming with LangChain Agents
Final Output Streaming
Custom Callback Handlers in LangChain
FastAPI with LangChain Agent Streaming
Confirming we have Agent Streaming
Custom Callback Handlers for Async
Final Things to Consider
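For the FastAPI portion of the syllabus, a common approach (which the video's template code roughly follows) is an async callback handler that pushes tokens onto an `asyncio.Queue`, drained by an async generator that FastAPI would wrap in a `StreamingResponse`. The sketch below shows only the queue hand-off using the standard library; the FastAPI wiring and the agent call are omitted, and `produce_tokens` is a hypothetical stand-in for the agent run.

```python
import asyncio

# Sketch of the async hand-off behind FastAPI agent streaming: an async
# callback handler enqueues tokens, and an async generator drains them.
# In the real app, FastAPI would consume token_stream() via StreamingResponse.

class AsyncQueueHandler:
    """Mimics an async callback handler: enqueue tokens, then a sentinel."""
    def __init__(self):
        self.queue: asyncio.Queue = asyncio.Queue()

    async def on_llm_new_token(self, token: str) -> None:
        await self.queue.put(token)

    async def on_llm_end(self) -> None:
        await self.queue.put(None)  # sentinel: generation finished


async def produce_tokens(handler: AsyncQueueHandler) -> None:
    # Hypothetical stand-in for the agent run emitting tokens.
    for token in ["Str", "eam", "ing", " works"]:
        await handler.on_llm_new_token(token)
    await handler.on_llm_end()


async def token_stream(handler: AsyncQueueHandler):
    # The async generator a StreamingResponse would iterate over.
    while True:
        token = await handler.queue.get()
        if token is None:
            break
        yield token


async def main() -> str:
    handler = AsyncQueueHandler()
    asyncio.ensure_future(produce_tokens(handler))
    chunks = [t async for t in token_stream(handler)]
    return "".join(chunks)


result = asyncio.run(main())
```

The sentinel value is what lets the HTTP response close cleanly once the agent finishes, one of the production considerations the final syllabus item covers.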
Taught by
James Briggs
Related Courses
Neural Networks for Machine Learning - University of Toronto via Coursera
機器學習技法 (Machine Learning Techniques) - National Taiwan University via Coursera
Machine Learning Capstone: An Intelligent Application with Deep Learning - University of Washington via Coursera
Прикладные задачи анализа данных (Applied Problems of Data Analysis) - Moscow Institute of Physics and Technology via Coursera
Leading Ambitious Teaching and Learning - Microsoft via edX