YoVDO

Reducing Hallucinations and Evaluating LLMs for Production

Offered By: Linux Foundation via YouTube

Tags

Model Evaluation Courses Machine Learning Courses

Course Description

Overview

Explore the challenges of evaluating Large Language Models (LLMs) and reducing hallucinations in their outputs in this conference talk. Gain insights into traditional evaluation metrics such as BLEU and F1 scores, as well as modern approaches such as EleutherAI's LM Evaluation Harness. Examine the underlying causes of LLM hallucinations, including biases in training data and overfitting. Learn about open-source LLM validation modules and best practices for minimizing hallucinations so that LLMs are ready for production use. Designed for practitioners, researchers, and enthusiasts with a basic understanding of language models, this talk provides practical guidance on improving LLM performance and reliability.
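To make the traditional metrics mentioned above concrete, here is a minimal sketch of a SQuAD-style token-level F1 score, one of the metrics the talk contrasts with modern evaluation harnesses. This is an illustrative implementation, not the talk's own code; the function name and the whitespace/lowercase normalization are assumptions for the example.

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a model answer and a reference answer.

    Precision = overlap / predicted tokens, recall = overlap / reference
    tokens; F1 is their harmonic mean. Normalization here is just
    lowercasing and whitespace splitting (an assumption for this sketch).
    """
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both strings.
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# 5 of 6 tokens overlap in each direction, so F1 = 5/6 ≈ 0.83.
print(round(token_f1("the cat sat on the mat", "the cat is on the mat"), 2))
```

BLEU follows a similar overlap idea but uses n-gram precision with a brevity penalty; as the talk notes, such surface-overlap metrics cannot detect a fluent but factually hallucinated answer, which motivates dedicated validation frameworks.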

Syllabus

Reducing Hallucinations and Evaluating LLMs for Production - Divyansh Chaurasia, Deepchecks


Taught by

Linux Foundation


Related Courses

Macroeconometric Forecasting
International Monetary Fund via edX
Machine Learning With Big Data
University of California, San Diego via Coursera
Data Science at Scale - Capstone Project
University of Washington via Coursera
Structural Equation Model and its Applications (Cantonese)
The Chinese University of Hong Kong via Coursera
Data Science in Action - Building a Predictive Churn Model
SAP Learning