YoVDO

Combining LLMs with Knowledge Bases to Prevent Hallucinations

Offered By: MLOps.community via YouTube

Tags

Information Retrieval Courses
Scalability Courses
Trustworthy AI Courses

Course Description

Overview

Explore strategies for preventing hallucinations in Large Language Models (LLMs) in this 44-minute conference talk by Scott Mackie at LLMs in Prod Con 2. Dive into the concept of "LLM hallucinations" and learn how to keep LLMs grounded and reliable for real-world applications. Follow along as Mackie walks through an "LLM-powered Support Center" implementation to illustrate the problems hallucinations cause. Discover how integrating a searchable knowledge base can make AI-generated responses more trustworthy, and examine how well this approach scales and what it could mean for future AI-driven applications. Gain insights from Mackie's experience as a Staff Engineer at Mem, where he works on scaling LLM pipeline systems for AI workspaces.
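To make the grounding idea concrete, here is a minimal sketch (not taken from the talk) of the pattern it describes: retrieve relevant articles from a support knowledge base and instruct the model to answer only from that retrieved context, refusing when the context is insufficient. The knowledge-base entries, the keyword-overlap retriever, and the prompt wording are illustrative assumptions, not Mackie's actual implementation.

```python
# Hypothetical example of grounding a support-center LLM in a knowledge base.
# The articles and retrieval logic below are illustrative placeholders.

KNOWLEDGE_BASE = [
    {"id": "kb-1", "title": "Password reset",
     "text": "Users can reset passwords from Settings > Security."},
    {"id": "kb-2", "title": "Refund policy",
     "text": "Refunds are available within 30 days of purchase."},
]

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Score articles by simple keyword overlap and return the top k."""
    terms = set(query.lower().split())
    scored = [
        (len(terms & set((a["title"] + " " + a["text"]).lower().split())), a)
        for a in KNOWLEDGE_BASE
    ]
    return [a for score, a in sorted(scored, key=lambda s: -s[0]) if score > 0][:k]

def build_grounded_prompt(question: str) -> str:
    """Combine retrieved articles with the question and tell the model to
    say it doesn't know rather than invent an answer."""
    context = "\n".join(f"[{a['id']}] {a['text']}" for a in retrieve(question))
    return (
        "Answer using ONLY the support articles below. "
        "If they do not contain the answer, say you don't know.\n\n"
        f"Articles:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    # The grounded prompt would then be sent to whichever LLM powers the support center.
    print(build_grounded_prompt("How do I reset my password?"))
```

The key design choice this pattern illustrates is that the model's answer is constrained to retrieved, verifiable content, which is what keeps the responses grounded as the knowledge base grows.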

Syllabus

Combining LLMs with Knowledge Bases to Prevent Hallucinations // Scott Mackie // LLMs in Prod Con 2


Taught by

MLOps.community

Related Courses

Creating Trustworthy and Ethical Artificial Intelligence
SAP Learning
AI and the Law: Implementing Trustworthy AI
Pluralsight
Trustworthy AI for Healthcare Management
Politecnico di Milano via Coursera
Solana Larsen - Who Has Power Over AI?
Stanford University via YouTube
Human-Centered AI: Challenges and Governance in News Automation
Association for Computing Machinery (ACM) via YouTube