LLM Observability and Evaluations
Offered By: Data Council via YouTube
Course Description
Overview
Explore LLM observability and evaluations in this 11-minute video from Data Council featuring Amber Roberts, ML Engineer & Community Leader at Arize AI. Gain insights into debugging the high-level abstractions that LLM-powered applications built with LangChain and LlamaIndex rely on. Learn how to leverage Arize Phoenix modules to streamline development and maintenance of large language model applications. Discover industry knowledge, technical architectures, and best practices for building cutting-edge data and AI systems. Enhance your understanding of the complexities of LLM applications and improve your ability to evaluate and observe their performance effectively.
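As context for the tooling the talk covers, the sketch below shows one common way Arize Phoenix is launched locally and wired up as a trace handler for a LlamaIndex application. This is a minimal illustration, not code from the video; it assumes the arize-phoenix package and the relevant LlamaIndex/Phoenix integration packages are installed, and exact module paths may differ across library versions.

```python
# Minimal sketch (assumed setup, not from the video): launch the local Phoenix UI
# and route LlamaIndex traces to it for observability.
import phoenix as px
import llama_index.core

# Start the local Phoenix app; spans from instrumented applications appear in this UI.
session = px.launch_app()
print(f"Phoenix UI available at {session.url}")

# Register Phoenix as LlamaIndex's global trace handler so query, retrieval,
# and LLM spans are captured automatically (requires the arize_phoenix callback
# integration for your LlamaIndex version).
llama_index.core.set_global_handler("arize_phoenix")
```

A similar pattern exists for LangChain-based applications via Phoenix's instrumentation modules; the video discusses how these traces help debug the framework abstractions rather than prescribing a specific setup.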
Syllabus
LLM Observability and Evaluations (rendered 4/15/24)
Taught by
Data Council
Related Courses
Google BARD and ChatGPT AI for Increased Productivity (Udemy)
Bringing LLM to the Enterprise - Training From Scratch or Just Fine-Tune With Cerebras-GPT (Prodramp via YouTube)
Generative AI and Long-Term Memory for LLMs (James Briggs via YouTube)
Extractive Q&A With Haystack and FastAPI in Python (James Briggs via YouTube)
OpenAssistant First Models Are Here! - Open-Source ChatGPT (Yannic Kilcher via YouTube)