Stopping Hallucinations From Hurting Your LLMs - Part 2
Offered By: MLOps.community via YouTube
Course Description
Overview
Explore the critical issue of hallucinations in Large Language Models (LLMs) in this 15-minute conference talk by Atindriyo Sanyal, co-founder and CTO of Galileo. Delve into what hallucinations mean in modern LLM workflows and understand their impact on model outcomes and downstream consumers. Discover novel, efficient metrics and methods for detecting hallucinations early, aimed at preventing disinformation and poor or biased outputs from LLMs. Learn how addressing this crucial aspect of evaluation can increase trust in your LLM systems. Gain valuable insights drawn from Sanyal's extensive experience building large-scale ML platforms at companies such as Uber and Apple.
Syllabus
Stopping Hallucinations From Hurting Your LLMs // Atindriyo Sanyal // LLMs in Prod Conference Part 2
Taught by
MLOps.community
Related Courses
Epidemiology for Public Health (Imperial College London via Coursera)
Understanding and Counteracting Conscious and Unconscious Bias (Pluralsight)
Overcoming Bias (University of California, Irvine via Coursera)
Ethics and Bias in Artificial Intelligence: Executive Briefing (Pluralsight)
Key Concepts in Organizational DE&I (Rice University via Coursera)