
Mitigating LLM Hallucination Risk Through Research-Backed Metrics

Offered By: Databricks via YouTube

Tags

Prompt Engineering Courses
Fine-Tuning Courses
Retrieval Augmented Generation (RAG) Courses

Course Description

Overview

Explore a 42-minute conference talk on mitigating Large Language Model (LLM) hallucination risks using research-backed metrics. Delve into ChainPoll, a methodology for evaluating LLM output quality, particularly in Retrieval-Augmented Generation (RAG) and fine-tuning scenarios. Learn about metrics that correlate highly with human feedback while remaining cost-effective and low-latency. Gain insights into evaluating input quality, including data and RAG context, as well as output quality, with a focus on hallucinations. Discover an evaluation and experimentation framework for prompt engineering with RAG and fine-tuning on custom data. Watch a practical, demo-led guide to implementing guardrails and reducing hallucinations in LLM-powered applications. Presented by Vikram Chatterji, CEO and Co-founder of Galileo Technologies Inc., this talk offers valuable knowledge for developers and researchers working with LLMs.
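The core idea behind ChainPoll, as described in Galileo's published work, is to poll an LLM judge several times with a chain-of-thought prompt asking whether an answer is supported by its retrieved context, then take the fraction of runs that flag a hallucination as the score. The sketch below illustrates only that polling pattern; the `ask_llm` callable and the prompt wording are placeholders, not Galileo's implementation or API.

```python
# Illustrative ChainPoll-style hallucination scoring (a sketch, not Galileo's code).
# `ask_llm` is a stand-in for whatever chat-completion client you use.

from typing import Callable

JUDGE_PROMPT = """You are checking an answer against its retrieved context.
Context:
{context}

Question:
{question}

Answer:
{answer}

Think step by step, then finish with a single line: VERDICT: HALLUCINATED
or VERDICT: SUPPORTED."""


def chainpoll_score(
    question: str,
    context: str,
    answer: str,
    ask_llm: Callable[[str], str],  # placeholder: returns the judge's raw text
    n_polls: int = 5,
) -> float:
    """Fraction of judge runs that flag the answer as hallucinated (0.0 = clean)."""
    prompt = JUDGE_PROMPT.format(context=context, question=question, answer=answer)
    flagged = 0
    for _ in range(n_polls):
        # Each call should sample independently (temperature > 0) so the poll is meaningful.
        reply = ask_llm(prompt)
        if "VERDICT: HALLUCINATED" in reply.upper():
            flagged += 1
    return flagged / n_polls
```

In an application, a score like this could serve as a guardrail: for example, regenerate the answer or fall back to a safe response when the score exceeds a chosen threshold.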

Syllabus

Mitigating LLM Hallucination Risk Through Research-Backed Metrics


Taught by

Databricks

Related Courses

Better Llama with Retrieval Augmented Generation - RAG
James Briggs via YouTube
Live Code Review - Pinecone Vercel Starter Template and Retrieval Augmented Generation
Pinecone via YouTube
Nvidia's NeMo Guardrails - Full Walkthrough for Chatbots - AI
James Briggs via YouTube
Hugging Face LLMs with SageMaker - RAG with Pinecone
James Briggs via YouTube
Supercharge Your LLM Applications with RAG
Data Science Dojo via YouTube