
Prompting Language Models Improves Quoting from Pre-Training Data - EACL 2024

Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube

Tags

Prompt Engineering Courses
Information Retrieval Courses
Wikipedia Courses

Course Description

Overview

Explore a 10-minute conference talk presented by Marc Marone at EACL 2024, discussing the paper "According to …: Prompting Language Models Improves Quoting from Pre-Training Data." Delve into the innovative "according-to prompting" technique, designed to enhance the factual accuracy of Large Language Models (LLMs) by grounding their responses in previously observed text. Learn about the novel QUIP-Score evaluation metric, which measures how well model-generated answers align with underlying text corpora. Examine experiments conducted on Wikipedia, PubMed, and U.S. legal tax code, demonstrating improved grounding and task performance. Discover how LLMs can increase or decrease grounded generations on request, offering potential solutions to combat hallucination and fake information generation in AI language models.
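To make the two ideas in the talk concrete, here is a minimal Python sketch (not the authors' implementation): an "according-to" style prompt that appends a grounding directive naming a target corpus, and a toy QUIP-like overlap score that counts the fraction of an answer's word n-grams found in a reference text. The real QUIP-Score described in the paper tests n-gram membership against the full pre-training corpus; the function and variable names below are illustrative assumptions.

```python
# Sketch of "according-to" prompting and a toy QUIP-like overlap score.
# Assumptions: word-level n-grams against a small in-memory corpus stand in
# for the paper's character n-gram membership test over pre-training data.

from typing import Iterable, Set, Tuple


def according_to_prompt(question: str, corpus_name: str = "Wikipedia") -> str:
    """Append a grounding directive naming the target corpus."""
    return (f"{question} Respond to this question using only information "
            f"that can be attributed to {corpus_name}.")


def ngrams(text: str, n: int = 5) -> Iterable[Tuple[str, ...]]:
    """Yield word n-grams of the lowercased text."""
    tokens = text.lower().split()
    return (tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def quip_like_score(answer: str, corpus_ngrams: Set[Tuple[str, ...]], n: int = 5) -> float:
    """Fraction of the answer's n-grams that also appear in the corpus."""
    answer_ngrams = list(ngrams(answer, n))
    if not answer_ngrams:
        return 0.0
    hits = sum(1 for g in answer_ngrams if g in corpus_ngrams)
    return hits / len(answer_ngrams)


if __name__ == "__main__":
    corpus_text = ("The Eiffel Tower is a wrought-iron lattice tower on the "
                   "Champ de Mars in Paris, France.")
    corpus = set(ngrams(corpus_text, n=5))

    print(according_to_prompt("Where is the Eiffel Tower?"))

    answer = "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars."
    print(f"QUIP-like overlap: {quip_like_score(answer, corpus, n=5):.2f}")
```

A higher overlap suggests the answer quotes more directly from the reference text; the talk reports that grounding directives of this kind raise such overlap on Wikipedia, PubMed, and the U.S. legal tax code.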

Syllabus

"According to …": Prompting Language Models Improves Quoting from Pre-Training Data - EACL 2024


Taught by

Center for Language & Speech Processing (CLSP), JHU

Related Courses

Text, Textuality and Digital Media (Indian Institute of Technology Delhi via Swayam)
Reading Politics of the Supposedly Neutral (media.ccc.de via YouTube)
Informath: Informalization of Formal Mathematics (Hausdorff Center for Mathematics via YouTube)
Are Anonymity-Seekers Just like Everybody Else? An Analysis of Contributions to Wikipedia from Tor (IEEE via YouTube)
Building a Scalable AI Chatbot with Wikipedia Data - Semantic Search and RAG (Kunal Kushwaha via YouTube)