How to Improve LLM Factual Accuracy and Reliability
Offered By: Snorkel AI via YouTube
Course Description
Overview
Explore techniques for improving the factual accuracy and reliability of large language models in this 29-minute talk by Matei Zaharia, Co-Founder and Chief Technologist at Databricks. Discover research-based approaches like the Demonstrate-Search-Predict (DSP) framework, which connects LLMs to factual information and enhances application performance over time. Learn about industry-focused solutions, including Databricks' development of "LLMOps" tools within the MLflow open-source framework. Gain insights into converting LLMs' text generation capabilities into dependable, production-grade applications for more truthful and accurate content generation.
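To make the retrieval-grounded idea behind approaches like DSP concrete, here is a minimal, hypothetical sketch of a search-then-predict loop in plain Python. The `llm_complete` callable and the keyword-overlap retriever are placeholder assumptions standing in for whatever model API and search index you actually use; this is not the DSP library's own API.

```python
# Sketch of a search-then-predict pipeline: retrieve supporting passages first,
# then ask the model to answer only from that retrieved context.
# `llm_complete` is a hypothetical callable: prompt string in, completion string out.

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank corpus passages by naive keyword overlap with the question."""
    terms = set(question.lower().split())
    ranked = sorted(corpus, key=lambda p: -len(terms & set(p.lower().split())))
    return ranked[:k]

def grounded_answer(question: str, corpus: list[str], llm_complete) -> str:
    """Ground the model's answer in retrieved passages instead of free recall."""
    passages = retrieve(question, corpus)
    context = "\n".join(f"- {p}" for p in passages)
    prompt = (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm_complete(prompt)
```

In a production setting the keyword retriever would be replaced by a real search index or vector store, and the prompt would typically include worked demonstrations, but the control flow (retrieve, then condition generation on the retrieved evidence) is the core pattern the talk describes for reducing hallucination.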
Syllabus
How Can We Get LLMs To Tell The Truth?
Taught by
Snorkel AI
Related Courses
Solving the Last Mile Problem of Foundation Models with Data-Centric AI (MLOps.community via YouTube)
Foundational Models in Enterprise AI - Challenges and Opportunities (MLOps.community via YouTube)
Knowledge Distillation Demystified: Techniques and Applications (Snorkel AI via YouTube)
Model Distillation - From Large Models to Efficient Enterprise Solutions (Snorkel AI via YouTube)
Curate Training Data via Labeling Functions - 10 to 100x Faster (Snorkel AI via YouTube)