Mastering Large Language Model Evaluations - Techniques for Ensuring Generative AI Reliability

Offered By: Data Science Dojo via YouTube

Tags

Generative AI Courses
Model Evaluation Courses
Prompt Injection Courses

Course Description

Overview

Dive into the intricacies of evaluating Large Language Models (LLMs) in this comprehensive 59-minute live session. Explore the challenges faced in assessing LLMs powering generative AI applications, including hallucinations, toxicity, prompt injections, and data leaks. Gain insights into essential evaluation techniques, metrics, and tools for ensuring LLM reliability. Learn about automated solutions for both RAG and non-RAG applications. Discover best practices for setting up accurate evaluations and addressing key issues in LLM assessments. Follow along with a live demo and explore various evaluation methods. Equip yourself with the knowledge to effectively assess and improve the performance of LLMs in real-world applications.
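To make the idea of an automated evaluation metric concrete, here is a minimal illustrative sketch (not taken from the session): a toy "groundedness" score for RAG outputs, computed as the fraction of answer tokens that also appear in the retrieved context. Production evaluation tools use far richer techniques (semantic similarity, LLM-as-judge), but the basic shape is the same: a function mapping (answer, context) to a score in [0, 1], where low scores flag potentially hallucinated answers.

```python
import re


def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))


def groundedness(answer: str, context: str) -> float:
    """Fraction of answer tokens that are supported by the retrieved context.

    A low score suggests the answer may be hallucinated relative to the
    context. This token-overlap heuristic is purely illustrative.
    """
    answer_tokens = tokenize(answer)
    if not answer_tokens:
        return 0.0
    context_tokens = tokenize(context)
    return len(answer_tokens & context_tokens) / len(answer_tokens)


context = "The Eiffel Tower is in Paris and was completed in 1889."
print(groundedness("The Eiffel Tower is in Paris.", context))      # fully grounded -> 1.0
print(groundedness("It was designed by aliens in 1920.", context))  # mostly unsupported -> low score
```

In a real evaluation pipeline, a metric like this would be run over a dataset of (question, retrieved context, generated answer) triples, with aggregate scores tracked across model or prompt versions.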

Syllabus

Mastering Large Language Model Evaluations: Techniques for Ensuring Generative AI Reliability


Taught by

Data Science Dojo

Related Courses

AI CTF Solutions - DEF CON 31 Hackathon and Kaggle Competition
Rob Mulla via YouTube
Indirect Prompt Injections in the Wild - Real World Exploits and Mitigations
Ekoparty Security Conference via YouTube
Hacking Neural Networks - Introduction and Current Techniques
media.ccc.de via YouTube
The Curious Case of the Rogue SOAR - Vulnerabilities and Exploits in Security Automation
nullcon via YouTube
Indirect Prompt Injection Into LLMs Using Images and Sounds
Black Hat via YouTube