
Stopping Hallucinations From Hurting Your LLMs - Part 2

Offered By: MLOps.community via YouTube

Tags

Machine Learning Courses
MLOps Courses
Disinformation Courses
Bias Courses

Course Description

Overview

Explore the critical issue of hallucinations in Large Language Models (LLMs) in this 15-minute conference talk by Atindriyo Sanyal, founder and CTO of Galileo. Delve into what hallucinations mean in modern LLM workflows and understand their impact on model outcomes and downstream consumers. Discover efficient metrics and methods for early detection of hallucinations, aimed at preventing disinformation and poor or biased outputs from LLMs. Learn how to increase trust in your LLM systems by addressing this crucial aspect of evaluation. Gain insights from Sanyal's experience building large-scale ML platforms at companies like Uber and Apple.

Syllabus

Stopping Hallucinations From Hurting Your LLMs // Atindriyo Sanyal // LLMs in Prod Conference Part 2


Taught by

MLOps.community

Related Courses

Introduction to Artificial Intelligence
Stanford University via Udacity
Natural Language Processing
Columbia University via Coursera
Probabilistic Graphical Models 1: Representation
Stanford University via Coursera
Computer Vision: The Fundamentals
University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course)
California Institute of Technology via Independent