Stopping Hallucinations From Hurting Your LLMs - Part 2

Offered By: MLOps.community via YouTube

Tags

Machine Learning, MLOps, Disinformation, Bias

Course Description

Overview

Explore the critical issue of hallucinations in Large Language Models (LLMs) through this insightful 15-minute conference talk by Atindriyo Sanyal, founder and CTO of Galileo. Delve into the definition of hallucinations in modern LLM workflows and understand their impact on model outcomes and downstream consumers. Discover novel and efficient metrics and methods for early detection of hallucinations, aimed at preventing disinformation and poor or biased outcomes from LLMs. Learn how to increase trust in your LLM systems by addressing this crucial evaluation metric. Gain valuable insights from Sanyal's extensive experience in building large-scale ML platforms at companies like Uber and Apple.
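To make the idea of an early-detection metric concrete, below is a minimal, illustrative sketch of one simple heuristic: scoring how well an LLM answer is grounded in its retrieved context by measuring word overlap. This is not the method presented in the talk, and it does not reflect Galileo's actual metrics; the names (grounding_score, FLAG_THRESHOLD) and the cutoff value are hypothetical choices for demonstration only.

```python
# Illustrative sketch only: flag LLM answers whose content words are poorly
# supported by the retrieved context. NOT the method from the talk; the
# function names and threshold below are hypothetical.
import re

FLAG_THRESHOLD = 0.5  # assumed cutoff below which an answer is flagged

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are",
             "was", "were", "for", "on", "with", "that", "this", "it",
             "as", "by", "near"}

def content_words(text: str) -> set[str]:
    """Lowercase, tokenize, and drop stopwords to keep informative words."""
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    return {t for t in tokens if t not in STOPWORDS}

def grounding_score(answer: str, context: str) -> float:
    """Fraction of the answer's content words found in the context.
    A low score suggests the answer may contain unsupported claims."""
    answer_words = content_words(answer)
    if not answer_words:
        return 1.0  # nothing to check
    context_words = content_words(context)
    return len(answer_words & context_words) / len(answer_words)

if __name__ == "__main__":
    context = "The Eiffel Tower is located in Paris and was completed in 1889."
    answer = ("The Eiffel Tower, designed by Leonardo da Vinci, "
              "stands in Berlin near the Brandenburg Gate.")
    score = grounding_score(answer, context)
    print(f"grounding score: {score:.2f}")
    if score < FLAG_THRESHOLD:
        print("Answer flagged for possible hallucination.")
```

Real-world detectors are considerably more sophisticated (for example, using semantic similarity or model-based consistency checks rather than bag-of-words overlap), but the sketch shows the general shape of a cheap, automated check that can run on every LLM response before it reaches downstream consumers.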

Syllabus

Stopping Hallucinations From Hurting Your LLMs // Atindriyo Sanyal // LLMs in Prod Conference Part 2


Taught by

MLOps.community

Related Courses

Machine Learning Operations (MLOps): Getting Started
Google Cloud via Coursera
Design and Implementation of Machine Learning Systems (Проектирование и реализация систем машинного обучения)
Higher School of Economics via Coursera
Demystifying Machine Learning Operations (MLOps)
Pluralsight
Machine Learning Engineer with Microsoft Azure
Microsoft via Udacity
Machine Learning Engineering for Production (MLOps)
DeepLearning.AI via Coursera