Reliable Hallucination Detection in Large Language Models
Offered By: MLOps.community via YouTube
Course Description
Overview
Explore reliable hallucination detection techniques for large language models in this 35-minute AI in Production talk by Jiaxin Zhang. Delve into the critical aspects of understanding trustworthiness in modern language models by examining existing detection approaches based on self-consistency. Discover two types of hallucinations stemming from question-level and model-level issues that cannot be effectively identified through self-consistency checks alone. Learn about the novel sampling-based method called semantic-aware cross-check consistency (SAC3), which expands on the principle of self-consistency checking. Understand how SAC3 incorporates additional mechanisms to detect both question-level and model-level hallucinations by leveraging semantically equivalent question perturbation and cross-model response consistency checking. Gain insights from extensive empirical analysis demonstrating SAC3's superior performance in detecting non-factual and factual statements across multiple question-answering and open-domain generation benchmarks.
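The talk itself contains no code, but the core idea of SAC3 can be sketched in a few lines. The snippet below is a minimal illustration under assumed interfaces, not the speaker's implementation: `generate`, `paraphrase_question`, and `agree` are hypothetical stand-ins for LLM calls and a semantic-equivalence check (in practice an NLI model or LLM judge). It scores an answer by how often it is reproduced when the question is semantically perturbed (question-level check) and when a second model answers it (model-level check).

```python
from typing import Callable, List

# Minimal sketch of SAC3-style cross-check consistency (not the authors' code).
# `generate(model, question)` and `paraphrase_question(question, n)` are assumed
# stand-ins for LLM API calls; `agree(a, b)` is a placeholder semantic-equivalence
# check between two answers.

def sac3_consistency_score(
    question: str,
    answer: str,
    target_model: str,
    verifier_model: str,
    generate: Callable[[str, str], str],
    paraphrase_question: Callable[[str, int], List[str]],
    agree: Callable[[str, str], bool],
    n_perturbations: int = 5,
) -> float:
    """Return the fraction of cross-checked answers consistent with `answer`.

    A low score suggests a question-level or model-level hallucination:
    the original answer does not survive rephrasing of the question or
    answering by a different model.
    """
    perturbed = paraphrase_question(question, n_perturbations)

    checks = []
    # Question-level check: same model, semantically equivalent questions.
    for q in perturbed:
        checks.append(agree(answer, generate(target_model, q)))
    # Model-level check: a second model answers the original and perturbed questions.
    for q in [question, *perturbed]:
        checks.append(agree(answer, generate(verifier_model, q)))

    return sum(checks) / len(checks)
```

A score near 1 means the answer is stable under both question perturbation and cross-model checking; a score near 0 flags a likely hallucination that plain self-consistency sampling would miss.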
Syllabus
Reliable Hallucination Detection in Large Language Models // Jiaxin Zhang // AI in Production Talk
Taught by
MLOps.community
Related Courses
Data Science: Inferential Thinking through Simulations — University of California, Berkeley via edX
Decision Making Under Uncertainty: Introduction to Structured Expert Judgment — Delft University of Technology via edX
Probabilistic Deep Learning with TensorFlow 2 — Imperial College London via Coursera
Agent Based Modeling — The National Centre for Research Methods via YouTube
Sampling in Python — DataCamp