Mitigating LLM Hallucination Risk Through Research-Backed Metrics

Offered By: Databricks via YouTube

Tags

Prompt Engineering Courses
Fine-Tuning Courses
Retrieval Augmented Generation (RAG) Courses

Course Description

Overview

Explore a 42-minute conference talk on mitigating Large Language Model (LLM) hallucination risk using research-backed metrics. Delve into ChainPoll, a methodology for evaluating LLM output quality, particularly in Retrieval-Augmented Generation (RAG) and fine-tuning scenarios. Learn about metrics that correlate highly with human feedback while remaining cost-effective and low-latency. Gain insights into evaluating input quality, including data and RAG context, as well as output quality, with a focus on hallucinations. Discover an evaluation and experimentation framework for prompt engineering with RAG and fine-tuning on custom data. Watch a practical, demo-led guide to implementing guardrails and reducing hallucinations in LLM-powered applications (a rough sketch of the ChainPoll idea follows below). Presented by Vikram Chatterji, CEO and Co-founder of Galileo Technologies Inc., this talk offers valuable knowledge for developers and researchers working with LLMs.
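As described in Galileo's public write-ups, ChainPoll polls a judge LLM several times with a chain-of-thought prompt asking whether a completion is supported by its context, then aggregates the votes into a score. The Python sketch below illustrates that polling loop only; it is not code from the talk, and the prompt wording, the generic judge callable, and the parameter names are assumptions for illustration.

```python
def chainpoll_score(question: str, context: str, answer: str,
                    judge, n_polls: int = 5) -> float:
    """ChainPoll-style hallucination score.

    Polls a judge LLM `n_polls` times with a chain-of-thought prompt
    and returns the fraction of polls that flag the answer as
    unsupported by the context (0.0 = no flags, 1.0 = unanimous).
    `judge` is any callable str -> str wrapping your LLM API.
    """
    prompt = (
        "Does the following answer contain claims that are NOT supported "
        "by the provided context? Think step by step, then end your reply "
        "with a single line: VERDICT: YES or VERDICT: NO.\n\n"
        f"Context:\n{context}\n\n"
        f"Question:\n{question}\n\n"
        f"Answer:\n{answer}"
    )
    votes = 0
    for _ in range(n_polls):
        # Sampling with temperature > 0 lets votes differ across polls.
        reply = judge(prompt)
        if "VERDICT: YES" in reply.upper():
            votes += 1
    return votes / n_polls
```

In use, the score can act as a guardrail: for example, answers with a score above some threshold (say 0.5) could be withheld or routed for review. The repeated polling trades a few extra judge calls for a more stable signal than a single yes/no judgment.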

Syllabus

Mitigating LLM Hallucination Risk Through Research-Backed Metrics


Taught by

Databricks

Related Courses

Discover, Validate & Launch New Business Ideas with ChatGPT
Udemy
150 Digital Marketing Growth Hacks for Businesses
Udemy
AI: Executive Briefing
Pluralsight
The Complete Digital Marketing Guide - 25 Courses in 1
Udemy
Learn to build a voice assistant with Alexa
Udemy