Hallucination-Free LLMs: Strategies for Monitoring and Mitigation
Offered By: Data Science Dojo via YouTube
Course Description
Overview
Explore strategies for monitoring and mitigating hallucinations in Large Language Models (LLMs) deployed to production. Delve into state-of-the-art solutions for detecting hallucinations, focusing on Uncertainty Quantification and LLM self-evaluation. Learn about leveraging token probabilities to estimate response quality, including simple accuracy estimation and advanced methods for Semantic Uncertainty. Discover how to use LLMs to quantify answer quality and explore cutting-edge algorithms like SelfCheckGPT and LLM-eval. Gain an intuitive understanding of LLM monitoring methods, their strengths and weaknesses, and learn to set up an effective LLM monitoring system. Topics covered include an introduction to LLM monitoring, consistency-based and answer evaluation-based hallucination detection, output uncertainty quantification, semantic uncertainty quantification, and experimental results.
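As a concrete illustration of the token-probability idea described above, here is a minimal sketch (not the course's implementation): it assumes you already have per-token log probabilities from your model's API, and it length-normalizes them into a single confidence score. The `threshold` value is an arbitrary illustration, not a recommended setting.

```python
import math

def sequence_confidence(token_logprobs):
    """Length-normalized confidence: the geometric mean of the
    per-token probabilities, computed from log probabilities."""
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

def flag_low_confidence(token_logprobs, threshold=0.5):
    """Flag a response whose normalized confidence falls below
    an (illustrative) threshold as a possible hallucination."""
    return sequence_confidence(token_logprobs) < threshold

# Hypothetical per-token log-probs for a confident vs. an uncertain answer
confident = [math.log(0.95)] * 4
uncertain = [math.log(0.2), math.log(0.3)]

print(flag_low_confidence(confident))  # not flagged
print(flag_low_confidence(uncertain))  # flagged
```

Length normalization matters here: without it, longer responses would accumulate lower total probability and be flagged regardless of quality.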
Syllabus
Introduction
What is LLM Monitoring
LLM-Based Hallucination Detection: Consistency
LLM-Based Hallucination Detection: Answer Evaluation
Output Uncertainty Quantification
Semantic Uncertainty Quantification
Experiment Results
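The consistency-based detection approach covered in the syllabus (of which SelfCheckGPT is a state-of-the-art example) can be sketched in simplified form: sample several responses to the same prompt and measure how much they agree, on the intuition that hallucinated answers vary across samples while grounded answers stay stable. The sketch below uses plain word-overlap (Jaccard similarity) as a stand-in for the NLI- or BERTScore-style comparisons real systems use; `samples` is assumed to come from repeated generations at a nonzero temperature.

```python
from itertools import combinations

def jaccard(a, b):
    """Word-level Jaccard similarity between two responses."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def consistency_score(samples):
    """Mean pairwise similarity across sampled responses.
    Low scores suggest the model is not answering consistently,
    a signal of possible hallucination."""
    pairs = list(combinations(samples, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

A production system would replace `jaccard` with a semantic comparison, since two factually consistent answers can share few surface words.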
Taught by
Data Science Dojo
Related Courses
Data Science: Inferential Thinking through Simulations (University of California, Berkeley via edX)
Decision Making Under Uncertainty: Introduction to Structured Expert Judgment (Delft University of Technology via edX)
Probabilistic Deep Learning with TensorFlow 2 (Imperial College London via Coursera)
Agent Based Modeling (The National Centre for Research Methods via YouTube)
Sampling in Python (DataCamp)