Evaluating LLM-Based Apps - Deepchecks LLM Validation

Offered By: LLMOps Space via YouTube

Tags

LLMOps Courses Artificial Intelligence Courses Machine Learning Courses Benchmarking Courses

Course Description

Overview

Explore the intricacies of evaluating Large Language Model (LLM) based applications in this comprehensive webinar featuring Shir Chorev (CTO) and Yaron (VP Product) of Deepchecks. Delve into crucial topics such as LLM hallucinations, evaluation methodologies, and the role of golden sets in benchmarking LLM performance. Watch a live demonstration of the new Deepchecks LLM evaluation module, designed to address the challenges of assessing LLM-based applications. Gain insight into robust approaches for tackling hallucinations, where models generate outputs not grounded in the given context. Learn about automated and manual evaluation techniques, and understand how to structure effective golden sets for benchmarking. This 48-minute session, hosted by LLMOps Space, a global community for LLM practitioners, offers valuable knowledge for professionals deploying LLMs in production environments.
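To make the golden-set idea concrete, here is a minimal, illustrative sketch of benchmarking a model against a set of prompt/reference pairs using exact-match scoring. This is not the Deepchecks API; `model_answer` and the helper names are hypothetical stand-ins, and real evaluations typically use richer metrics (semantic similarity, LLM-as-judge, groundedness checks).

```python
# Minimal golden-set benchmarking sketch (illustrative only, not the Deepchecks API).
# A golden set is a curated list of (prompt, expected_answer) pairs used as a
# fixed benchmark so different model versions can be compared consistently.

def exact_match_score(prediction: str, reference: str) -> bool:
    """Compare after normalizing surrounding whitespace and case."""
    return prediction.strip().lower() == reference.strip().lower()


def evaluate_on_golden_set(golden_set, model_answer):
    """Return exact-match accuracy of `model_answer` over the golden set.

    golden_set   -- list of (prompt, expected_answer) pairs
    model_answer -- callable: prompt -> model's answer (hypothetical stand-in
                    for your actual LLM call)
    """
    hits = sum(
        exact_match_score(model_answer(prompt), expected)
        for prompt, expected in golden_set
    )
    return hits / len(golden_set)


if __name__ == "__main__":
    golden = [
        ("What is the capital of France?", "Paris"),
        ("2 + 2 = ?", "4"),
    ]
    # A trivial canned "model" just for demonstration.
    canned = {prompt: answer for prompt, answer in golden}
    accuracy = evaluate_on_golden_set(golden, lambda p: canned[p])
    print(f"Exact-match accuracy: {accuracy:.2f}")
```

Exact match is deliberately the simplest possible scoring rule; the webinar's point is that for open-ended LLM outputs you usually need looser, semantics-aware checks layered on top of a fixed golden set like this.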

Syllabus

Evaluating LLM-Based Apps: New Product Release | Deepchecks LLM Validation


Taught by

LLMOps Space

Related Courses

Investment Strategies and Portfolio Analysis
Rice University via Coursera
Advanced R Programming
Johns Hopkins University via Coursera
Supply Chain Analytics
Rutgers University via Coursera
Technological Entrepreneurship (Технологическое предпринимательство)
Moscow Institute of Physics and Technology via Coursera
Learn How To Code: Google's Go (golang) Programming Language
Udemy