
Combining LLMs with Knowledge Bases to Prevent Hallucinations

Offered By: MLOps.community via YouTube

Tags

Information Retrieval Courses
Scalability Courses
Trustworthy AI Courses

Course Description

Overview

Explore strategies for preventing hallucinations in Large Language Models (LLMs) in this 44-minute conference talk by Scott Mackie at LLMs in Prod Con 2. Dive into the concept of "LLM Hallucinations" and learn how to keep LLMs grounded and reliable for real-world applications. Follow along as Mackie demonstrates an "LLM-powered Support Center" implementation to illustrate hallucination-related problems. Discover how integrating a searchable knowledge base can enhance the trustworthiness of AI-generated responses. Examine the scalability of this approach and its potential impact on future AI-driven applications. Gain insights from Mackie's experience as a Staff Engineer at Mem and his work on scaling LLM pipeline systems for AI workspaces.
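The grounding approach described in the talk follows a general retrieval pattern: search the knowledge base for passages relevant to the user's question, then instruct the model to answer only from those passages. The Python sketch below illustrates that pattern in a generic way. It is not Mackie's implementation: the retrieval here is naive keyword overlap rather than embedding search, the sample documents are invented, and call_llm is a hypothetical placeholder for whatever model client you use.

# Minimal sketch of grounding an LLM-powered support answer in a knowledge base.
# Illustrative only; not the implementation shown in the talk.

from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    text: str

# Toy knowledge base standing in for a real support-center corpus.
KNOWLEDGE_BASE = [
    Doc("Refund policy", "Refunds are issued within 14 days of purchase."),
    Doc("Shipping", "Standard shipping takes 3 to 5 business days."),
]

def retrieve(query: str, docs: list[Doc], k: int = 2) -> list[Doc]:
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    q_terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_terms & set(d.text.lower().split())))
    return scored[:k]

def build_grounded_prompt(query: str, docs: list[Doc]) -> str:
    """Constrain the model to the retrieved context to reduce hallucination."""
    context = "\n\n".join(f"[{d.title}]\n{d.text}" for d in docs)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    """Placeholder: swap in a call to your actual LLM client here."""
    raise NotImplementedError("Wire this function to your model provider.")

def answer(query: str) -> str:
    docs = retrieve(query, KNOWLEDGE_BASE)
    return call_llm(build_grounded_prompt(query, docs))

if __name__ == "__main__":
    q = "How long do refunds take?"
    print(build_grounded_prompt(q, retrieve(q, KNOWLEDGE_BASE)))

A production version of this pattern would typically replace the keyword matching with vector search over the knowledge base and return citations to the retrieved documents alongside the answer, which is what makes the responses auditable and trustworthy.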

Syllabus

Combining LLMs with Knowledge Bases to Prevent Hallucinations // Scott Mackie // LLMs in Prod Con 2


Taught by

MLOps.community

Related Courses

Financial Sustainability: The Numbers side of Social Enterprise
+Acumen via NovoEd
Cloud Computing Concepts: Part 2
University of Illinois at Urbana-Champaign via Coursera
Developing Repeatable Models® to Scale Your Impact
+Acumen via Independent
Managing Microsoft Windows Server Active Directory Domain Services
Microsoft via edX
Introduction to Containers (Introduction aux conteneurs)
Microsoft Virtual Academy via OpenClassrooms