Mastering Large Language Model Evaluations - Techniques for Ensuring Generative AI Reliability
Offered By: Data Science Dojo via YouTube
Course Description
Overview
Dive into the intricacies of evaluating Large Language Models (LLMs) in this comprehensive 59-minute live session. Explore the challenges faced in assessing LLMs powering generative AI applications, including hallucinations, toxicity, prompt injections, and data leaks. Gain insights into essential evaluation techniques, metrics, and tools for ensuring LLM reliability. Learn about automated evaluation solutions for both retrieval-augmented generation (RAG) and non-RAG applications. Discover best practices for setting up accurate evaluations and addressing key issues in LLM assessments. Follow along with a live demo and explore various evaluation methods. Equip yourself with the knowledge to effectively assess and improve the performance of LLMs in real-world applications.
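As a flavor of what automated LLM evaluation can look like, below is a minimal sketch (not taken from the session) of two toy checks for a RAG-style answer: a token-overlap "groundedness" score against the retrieved context, which can serve as a rough hallucination signal, and an F1-style overlap score against a gold reference answer. All function names and thresholds are illustrative assumptions, not the tooling demonstrated in the course.

```python
# Minimal sketch of two toy RAG evaluation metrics (illustrative only).
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, stripped of punctuation."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def groundedness(answer: str, context: str) -> float:
    """Fraction of answer tokens that also appear in the retrieved context.
    A low score is a rough signal that the model may be hallucinating."""
    a, c = tokens(answer), tokens(context)
    return len(a & c) / len(a) if a else 0.0

def answer_overlap(answer: str, reference: str) -> float:
    """F1-style token overlap between the answer and a gold reference."""
    a, r = tokens(answer), tokens(reference)
    if not a or not r:
        return 0.0
    precision = len(a & r) / len(a)
    recall = len(a & r) / len(r)
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

if __name__ == "__main__":
    context = "The Eiffel Tower was completed in 1889 and stands in Paris."
    answer = "The Eiffel Tower, finished in 1889, is located in Paris."
    reference = "It was completed in 1889 in Paris."
    print(f"groundedness: {groundedness(answer, context):.2f}")
    print(f"reference overlap: {answer_overlap(answer, reference):.2f}")
```

In practice, sessions like this one typically combine such simple string-based metrics with model-based judges and dedicated evaluation tooling; this sketch only illustrates the general shape of an automated check.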
Syllabus
Mastering Large Language Model Evaluations: Techniques for Ensuring Generative AI Reliability
Taught by
Data Science Dojo
Related Courses
Building and Managing Superior Skills - State University of New York via Coursera
ChatGPT et IA : mode d'emploi pour managers et RH - CNAM via France Université Numerique
Digital Skills: Artificial Intelligence - Accenture via FutureLearn
AI Foundations for Everyone - IBM via Coursera
Design a Feminist Chatbot - Institute of Coding via FutureLearn