
Combining LLMs with Knowledge Bases to Prevent Hallucinations

Offered By: MLOps.community via YouTube

Tags

Information Retrieval, Scalability, Trustworthy AI

Course Description

Overview

Explore strategies for preventing hallucinations in Large Language Models (LLMs) in this 44-minute conference talk by Scott Mackie at LLMs in Prod Con 2. Dive into the concept of "LLM Hallucinations" and learn how to keep LLMs grounded and reliable for real-world applications. Follow along as Mackie demonstrates an "LLM-powered Support Center" implementation to illustrate hallucination-related problems. Discover how integrating a searchable knowledge base can enhance the trustworthiness of AI-generated responses. Examine the scalability of this approach and its potential impact on future AI-driven applications. Gain insights from Mackie's experience as a Staff Engineer at Mem and his work on scaling LLM pipeline systems for AI workspaces.
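The grounding approach the talk describes is commonly implemented as retrieval-augmented generation: fetch relevant passages from a knowledge base and constrain the model to answer only from them. Below is a minimal sketch of that pattern, assuming a small in-memory knowledge base, naive keyword-overlap retrieval, and a hypothetical call_llm function standing in for whatever model API is used; it is an illustration of the general technique, not the implementation shown in the talk.

```python
# Retrieval-augmented generation sketch.
# Assumptions: an in-memory knowledge base, keyword-overlap retrieval,
# and a hypothetical call_llm() stub in place of a real model API.

KNOWLEDGE_BASE = [
    "Refunds are issued within 5 business days of an approved return.",
    "Support hours are 9am-5pm ET, Monday through Friday.",
    "Password resets can be triggered from the account settings page.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank knowledge-base entries by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    return f"[model response grounded in a prompt of {len(prompt)} characters]"

def answer(query: str) -> str:
    """Build a grounded prompt from retrieved context and query the model."""
    context = "\n".join(retrieve(query))
    # Constraining the model to the retrieved context is what reduces
    # hallucinations: it must cite the knowledge base or decline.
    prompt = (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("How long do refunds take?"))
```

The key design choice in this style of system is instructing the model to decline when the retrieved context lacks an answer, rather than letting it improvise, which keeps responses verifiable against the knowledge base.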

Syllabus

Combining LLMs with Knowledge Bases to Prevent Hallucinations // Scott Mackie // LLMs in Prod Con 2


Taught by

MLOps.community

Related Courses

Semantic Web Technologies
openHPI
Fundamentals of Information Retrieval
Rwaq (رواق)
[gacco Special Project] Expanding Your gacco Learning Style with Evernote (ga038)
University of Tokyo via gacco
The Semantic Web: Tools for Effective Publication and Extraction of Information on the Web
Pontificia Universidad Católica de Chile via Coursera
Rapid Learning
University of Science and Technology of China via Coursera