LLM Observability and Evaluations
Offered By: Data Council via YouTube
Course Description
Overview
Explore LLM observability and evaluations in this 11-minute video from Data Council featuring Amber Roberts, ML Engineer & Community Leader at Arize AI. Gain insights into debugging high-level abstractions in LLM-powered applications using LangChain and LlamaIndex. Learn how to leverage Arize Phoenix modules to streamline development and maintenance processes for large language models. Discover industry knowledge, technical architectures, and best practices for building cutting-edge data and AI systems. Enhance your understanding of LLM application complexities and improve your ability to evaluate and observe their performance effectively.
Syllabus
LLM Observability and Evaluations (rendered 4/15/24)
Taught by
Data Council
Related Courses
Prompt Templates for GPT-3.5 and Other LLMs - LangChain
James Briggs via YouTube
Getting Started with GPT-3 vs. Open Source LLMs - LangChain
James Briggs via YouTube
Chatbot Memory for Chat-GPT, Davinci + Other LLMs - LangChain
James Briggs via YouTube
Chat in LangChain
James Briggs via YouTube
LangChain Data Loaders, Tokenizers, Chunking, and Datasets - Data Prep
James Briggs via YouTube