Mitigating LLM Hallucination Risk Through Research-Backed Metrics
Offered By: Databricks via YouTube
Course Description
Overview
Explore a 42-minute conference talk on mitigating Large Language Model (LLM) hallucination risk using research-backed metrics. Delve into ChainPoll, a methodology for evaluating LLM output quality, particularly in Retrieval-Augmented Generation (RAG) and fine-tuning scenarios. Learn about metrics that correlate strongly with human feedback while remaining cost-effective and low-latency. Gain insight into evaluating input quality, including data and RAG context, as well as output quality, with a focus on hallucinations. Discover an evaluation and experimentation framework for prompt engineering with RAG and fine-tuning on custom data. Watch a practical, demo-led guide to implementing guardrails and reducing hallucinations in LLM-powered applications. Presented by Vikram Chatterji, CEO and co-founder of Galileo Technologies Inc., this talk offers valuable knowledge for developers and researchers working with LLMs.
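For orientation before watching, here is a rough sketch of what a ChainPoll-style check might look like, assuming the published recipe of polling a judge LLM several times with a chain-of-thought prompt and reporting the fraction of runs that flag a hallucination. The chainpoll_score helper, the prompt wording, and the llm callable are illustrative assumptions for this listing, not the talk's or Galileo's implementation.

import re
from typing import Callable

# Sketch of a ChainPoll-style hallucination score (assumed recipe):
# ask a judge LLM, with chain-of-thought, whether an answer is supported
# by its retrieved context, repeat the poll n times, and report the
# fraction of runs that flag a hallucination. `llm` is a hypothetical
# stand-in for any prompt -> completion call.

JUDGE_PROMPT = """\
Context:
{context}

Claimed answer:
{answer}

Think step by step about whether the answer contains information that is
not supported by the context. End your reply with a single line saying
either VERDICT: HALLUCINATED or VERDICT: SUPPORTED.
"""

def chainpoll_score(context: str, answer: str,
                    llm: Callable[[str], str], n_polls: int = 5) -> float:
    """Return the fraction of judge runs that flag the answer as hallucinated."""
    prompt = JUDGE_PROMPT.format(context=context, answer=answer)
    votes = 0
    for _ in range(n_polls):
        reply = llm(prompt)  # sample with temperature > 0 so polls can differ
        if re.search(r"VERDICT:\s*HALLUCINATED", reply, re.IGNORECASE):
            votes += 1
    return votes / n_polls

if __name__ == "__main__":
    # Stub judge for demonstration; swap in a real chat-model call in practice.
    fake_judge = lambda p: "The answer adds a date absent from the context.\nVERDICT: HALLUCINATED"
    print(chainpoll_score("Paris is the capital of France.",
                          "Paris became the capital in 1780.", fake_judge))

Sampling the judge at a nonzero temperature lets the polls disagree on borderline cases, which is what turns the vote fraction into a graded score rather than a single binary flag.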
Syllabus
Mitigating LLM Hallucination Risk Through Research-Backed Metrics
Taught by
Databricks
Related Courses
Discover, Validate & Launch New Business Ideas with ChatGPT (Udemy)
150 Digital Marketing Growth Hacks for Businesses (Udemy)
AI: Executive Briefing (Pluralsight)
The Complete Digital Marketing Guide - 25 Courses in 1 (Udemy)
Learn to build a voice assistant with Alexa (Udemy)