Mitigating LLM Hallucination Risk Through Research-Backed Metrics
Offered By: Databricks via YouTube
Course Description
Overview
Explore a 42-minute conference talk on mitigating Large Language Model (LLM) hallucination risks using research-backed metrics. Delve into ChainPoll, a methodology for evaluating LLM output quality, particularly in Retrieval-Augmented Generation (RAG) and fine-tuning scenarios. Learn about metrics that correlate highly with human feedback while remaining cost-effective and low-latency. Gain insights into evaluating input quality, including data and RAG context, as well as output quality focusing on hallucinations. Discover an evaluation and experimentation framework for prompt engineering with RAG and fine-tuning using custom data. Watch a practical, demo-led guide to implementing guardrails and reducing hallucinations in LLM-powered applications. Presented by Vikram Chatterji, CEO and Co-founder of Galileo Technologies Inc, this talk offers valuable knowledge for developers and researchers working with LLMs.
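The talk describes ChainPoll only at a high level, but the published idea behind it is to ask an LLM judge the same hallucination question several times with chain-of-thought reasoning and aggregate the votes into a score. The sketch below is a minimal illustration of that polling pattern, not Galileo's implementation: the prompt wording, the `judge` callable, and the default of 5 polls are all assumptions made for the example.

```python
from typing import Callable

# Hypothetical judge: any callable that takes a prompt string and returns the
# LLM's text response (e.g. a thin wrapper around your chat-completion API).
JudgeFn = Callable[[str], str]

# Assumed prompt template: asks the judge to reason step by step (chain of
# thought) and finish with a bare "yes"/"no" verdict on the last line.
CHAINPOLL_PROMPT = """\
Context:
{context}

Response to evaluate:
{response}

Does the response make claims that are not supported by the context?
Think step by step, then answer on the last line with only "yes" or "no"."""


def chainpoll_hallucination_score(
    context: str,
    response: str,
    judge: JudgeFn,
    num_polls: int = 5,
) -> float:
    """Poll the judge LLM several times and return the fraction of polls
    that flag the response as hallucinated (0.0 = clean, 1.0 = all flagged)."""
    prompt = CHAINPOLL_PROMPT.format(context=context, response=response)
    flagged = 0
    for _ in range(num_polls):
        verdict = judge(prompt).strip().splitlines()[-1].lower()
        if "yes" in verdict:
            flagged += 1
    return flagged / num_polls
```

In a RAG guardrail, a score like this could be thresholded (for instance, block or re-generate when it exceeds 0.5); the exact metrics, prompts, and thresholds used in practice are covered in the talk itself.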
Syllabus
Mitigating LLM Hallucination Risk Through Research-Backed Metrics
Taught by
Databricks
Related Courses
TensorFlow: Working with NLP (LinkedIn Learning)
Introduction to Video Editing - Video Editing Tutorials (Great Learning via YouTube)
HuggingFace Crash Course - Sentiment Analysis, Model Hub, Fine Tuning (Python Engineer via YouTube)
GPT3 and Finetuning the Core Objective Functions - A Deep Dive (David Shapiro ~ AI via YouTube)
How to Build a Q&A AI in Python - Open-Domain Question-Answering (James Briggs via YouTube)