Hallucination-Free LLMs: Strategies for Monitoring and Mitigation
Offered By: Linux Foundation via YouTube
Course Description
Overview
Explore strategies for monitoring and mitigating hallucinations in Large Language Models (LLMs) in this 39-minute conference talk by Wojtek Kuberski of NannyML. Gain insight into why and how to monitor LLMs in production environments, with a focus on state-of-the-art solutions for hallucination detection. Delve into two main approaches: Uncertainty Quantification and LLM self-evaluation. Learn how token probabilities can be used to estimate the quality of a model's responses, from simple accuracy estimation to more advanced methods such as Semantic Uncertainty. Discover techniques for using LLMs to assess their own output quality, including algorithms such as SelfCheckGPT and LLM-eval. Develop an intuitive understanding of the various LLM monitoring methods, their strengths and weaknesses, and learn how to set up an effective LLM monitoring system.
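To make the token-probability approach above more concrete, here is a minimal, illustrative sketch (not from the talk): it converts per-token log-probabilities, which many LLM APIs can return alongside a generation, into a length-normalized confidence score. The function names and the 0.5 threshold are hypothetical choices for illustration only.

```python
import math
from typing import List

def response_confidence(token_logprobs: List[float]) -> float:
    """Turn per-token log-probabilities into a confidence score in [0, 1].

    Uses the length-normalized (average) log-probability, exponentiated back
    into probability space, so longer answers are not penalized merely for
    containing more tokens.
    """
    if not token_logprobs:
        return 0.0
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)  # geometric mean of the token probabilities

def flag_possible_hallucination(token_logprobs: List[float],
                                threshold: float = 0.5) -> bool:
    """Flag responses whose average token probability falls below a threshold."""
    return response_confidence(token_logprobs) < threshold

# Example log-probabilities as they might be returned alongside an answer.
logprobs = [-0.05, -0.20, -1.60, -0.10, -2.30]
print(round(response_confidence(logprobs), 3))  # 0.427
print(flag_possible_hallucination(logprobs))    # True
```

The self-evaluation family of methods works differently: instead of inspecting token probabilities, the model is queried again and its answers are checked against each other. The sketch below captures the resampling-and-consistency idea behind SelfCheckGPT, under the same caveat; the `generate` and `judge` callables are placeholders for an LLM call with temperature > 0 and a consistency check (for example an NLI model or another LLM prompt).

```python
import random
from typing import Callable

def self_consistency_score(question: str,
                           answer: str,
                           generate: Callable[[str], str],
                           judge: Callable[[str, str], bool],
                           n_samples: int = 5) -> float:
    """Resample answers to the same question and measure how often the
    original answer is unsupported; scores near 1.0 suggest hallucination."""
    samples = [generate(question) for _ in range(n_samples)]
    unsupported = sum(1 for sample in samples if not judge(answer, sample))
    return unsupported / n_samples

# Toy stand-ins so the sketch runs without any model access.
def fake_generate(question: str) -> str:
    return random.choice(["Paris", "Paris", "Lyon"])

def fake_judge(original: str, sample: str) -> bool:
    return original.strip().lower() == sample.strip().lower()

print(self_consistency_score("What is the capital of France?", "Paris",
                             fake_generate, fake_judge))
```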
Syllabus
Hallucination-Free LLMs: Strategies for Monitoring and Mitigation - Wojtek Kuberski, NannyML
Taught by
Linux Foundation
Related Courses
Data Science: Inferential Thinking through Simulations - University of California, Berkeley via edX
Decision Making Under Uncertainty: Introduction to Structured Expert Judgment - Delft University of Technology via edX
Probabilistic Deep Learning with TensorFlow 2 - Imperial College London via Coursera
Agent Based Modeling - The National Centre for Research Methods via YouTube
Sampling in Python - DataCamp