Prompting Language Models Improves Quoting from Pre-Training Data - EACL 2024
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore a 10-minute conference talk presented by Marc Marone at EACL 2024 on the paper "According to …: Prompting Language Models Improves Quoting from Pre-Training Data." Delve into "according-to prompting," a technique designed to improve the factual accuracy of large language models (LLMs) by grounding their responses in text observed during pre-training. Learn about QUIP-Score, a novel evaluation metric that measures how closely model-generated answers overlap with an underlying text corpus. Examine experiments on Wikipedia, PubMed, and the U.S. tax code demonstrating improved grounding and task performance, and discover how LLMs can increase or decrease the amount of grounded text they generate on request, offering a potential tool against hallucination and fabricated information.
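To make the two ideas concrete, below is a minimal Python sketch of an "according-to"-style grounding prompt and a simplified QUIP-style overlap score. The exact prompt wording, function names, and toy in-memory corpus here are illustrative assumptions rather than the paper's setup; the actual QUIP-Score tests character n-gram membership against a Data Portraits sketch (a Bloom-filter index) built over the full pre-training corpus.

# Sketch of "according-to" prompting and a QUIP-style grounding score.
# The tiny in-memory corpus and all names are illustrative assumptions.

def according_to_prompt(question: str, source: str = "Wikipedia") -> str:
    """Prepend a grounding directive steering the model toward quoting."""
    return (f"Respond to this question using only information that can be "
            f"attributed to {source}. Question: {question}")

def quip_style_score(generation: str, corpus_ngrams: set[str], n: int = 25) -> float:
    """Fraction of character n-grams in `generation` found in the corpus.
    Higher values mean the output quotes the corpus more."""
    grams = [generation[i:i + n] for i in range(len(generation) - n + 1)]
    if not grams:
        return 0.0
    hits = sum(1 for g in grams if g in corpus_ngrams)
    return hits / len(grams)

# Toy usage: index a stand-in "corpus", then score two candidate answers.
corpus = ("The Eiffel Tower is a wrought-iron lattice tower on the "
          "Champ de Mars in Paris, France.")
n = 25
corpus_ngrams = {corpus[i:i + n] for i in range(len(corpus) - n + 1)}

quoted = ("The Eiffel Tower is a wrought-iron lattice tower on the "
          "Champ de Mars in Paris, France.")
paraphrase = "It's a big iron structure that tourists visit in Paris."

print(quip_style_score(quoted, corpus_ngrams))      # 1.0: fully grounded
print(quip_style_score(paraphrase, corpus_ngrams))  # 0.0: no exact overlap

Note that the score rewards verbatim overlap only: the paraphrase above is factually fine but scores 0.0, which is exactly the quoting behavior the metric is designed to isolate.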
Syllabus
"According to …": Prompting Language Models Improves Quoting from Pre-Training Data -- EACL 2024
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Semantic Web Technologies
openHPI
Fundamentals of Information Retrieval (أساسيات استرجاع المعلومات)
Rwaq (رواق)
gacco Special Project: Expanding the gacco Learning Style with Evernote (ga038)
University of Tokyo via gacco
The Semantic Web: Tools for the Effective Publication and Extraction of Information on the Web
Pontificia Universidad Católica de Chile via Coursera
Rapid Learning (快速学习)
University of Science and Technology of China via Coursera