Automated Evaluation for RAG Chatbot or Other Generative Tool - Conf42 LLMs 2024
Offered By: Conf42 via YouTube
Course Description
Overview
Explore automated evaluation techniques for RAG chatbots and other generative tools in this 14-minute conference talk from Conf42 LLMs 2024. Discover why testing of generative models is worth automating and learn about several approaches, including string matching, semantic similarity, and LLM-led evaluations. Gain insights into using grading rubrics with Marvin AI and explore additional ideas for effective automated testing. Understand the challenges of evaluating generative models and pick up practical strategies for improving your testing process.
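As a taste of what the talk covers, here is a minimal sketch (not taken from the talk itself) that chains the three approaches mentioned above: exact string matching, semantic similarity over sentence embeddings, and an LLM-led check against a grading rubric. The talk demonstrates the rubric step with Marvin AI; since this listing does not show Marvin's API, the sketch uses a plain OpenAI chat call as a stand-in, and the model names, similarity threshold, and rubric wording are illustrative assumptions.

# Illustrative sketch only: string matching, semantic similarity, and an
# LLM-led grading rubric in one place. Assumes the sentence-transformers and
# openai packages and an OPENAI_API_KEY in the environment; model names,
# the threshold, and the rubric text are assumptions, not from the talk.
import json
from openai import OpenAI
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")
client = OpenAI()

RUBRIC = (
    "Score the ANSWER against the TARGET from 1 to 5 "
    "(5 = same meaning, 1 = wrong or off-topic). "
    'Reply with JSON like {"score": 3, "reason": "..."}.'
)

def string_match(target: str, actual: str) -> bool:
    # Cheapest check: exact match after normalising case and whitespace.
    return target.strip().lower() == actual.strip().lower()

def semantic_similarity(target: str, actual: str) -> float:
    # Cosine similarity between sentence embeddings of target and actual.
    vectors = embedder.encode([target, actual], convert_to_tensor=True)
    return util.cos_sim(vectors[0], vectors[1]).item()

def llm_rubric_score(target: str, actual: str) -> dict:
    # LLM-led eval: ask a grader model to apply the rubric and return JSON.
    prompt = f"{RUBRIC}\n\nTARGET: {target}\nANSWER: {actual}"
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    target = "The refund window is 30 days from the date of purchase."
    actual = "You can request a refund within 30 days of buying the product."
    print("string match:", string_match(target, actual))
    print("semantic similarity:", round(semantic_similarity(target, actual), 3))
    print("rubric grade:", llm_rubric_score(target, actual))

In practice the checks are typically tried in order of cost: string matching is cheap but brittle, semantic similarity catches paraphrases of the target answer, and the rubric check is the most flexible but adds an extra model call per test case.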
Syllabus
Intro
Preamble
Why automate testing?
How to automate testing?
Testing generative models is hard!
String matching
Semantic similarity
LLM-led evals
Closeness between target and actual
Using a grading rubric with Marvin AI
A couple of other ideas
Thank you!
Taught by
Conf42
Related Courses
Building and Managing Superior Skills - State University of New York via Coursera
ChatGPT et IA : mode d'emploi pour managers et RH - CNAM via France Université Numerique
Digital Skills: Artificial Intelligence - Accenture via FutureLearn
AI Foundations for Everyone - IBM via Coursera
Design a Feminist Chatbot - Institute of Coding via FutureLearn