Controlling Distribution Shifts in Language Models: A Data-Centric Approach
Offered By: Simons Institute via YouTube
Course Description
Overview
Explore a lecture on controlling distribution shifts in language models through data-centric approaches. Delve into Tatsunori Hashimoto's presentation from Stanford University, part of the Emerging Generalization Settings series at the Simons Institute. Examine the challenges of cross-task and cross-domain generalization in NLP, focusing on the trade-offs between generalization and control in language model pretraining. Discover two complementary strategies: algorithmic data filtering to prioritize benchmark-relevant training data and domain adaptation through large-scale synthesis of domain-specific pretraining data. Gain insights into addressing the gaps between pretraining and target evaluation caused by distribution shifts in language models.
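To make the first strategy concrete, here is a minimal sketch of what benchmark-targeted data filtering could look like; this is an illustrative assumption, not the method presented in the lecture. The function names (unigram_profile, relevance_score, filter_corpus) and the unigram-overlap heuristic are hypothetical stand-ins for whatever relevance scoring the actual pipeline uses.

from collections import Counter


def unigram_profile(texts):
    """Build a normalized unigram frequency profile from reference texts."""
    counts = Counter(tok for text in texts for tok in text.lower().split())
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}


def relevance_score(doc, profile):
    """Average reference-profile probability of the document's tokens."""
    toks = doc.lower().split()
    if not toks:
        return 0.0
    return sum(profile.get(tok, 0.0) for tok in toks) / len(toks)


def filter_corpus(corpus, benchmark_examples, keep_fraction=0.3):
    """Keep the fraction of documents scored most similar to the benchmark."""
    profile = unigram_profile(benchmark_examples)
    ranked = sorted(corpus, key=lambda d: relevance_score(d, profile), reverse=True)
    k = max(1, int(len(ranked) * keep_fraction))
    return ranked[:k]


if __name__ == "__main__":
    # Hypothetical benchmark-style prompts and candidate pretraining documents.
    benchmark = ["What is the capital of France?", "Solve for x: 2x + 3 = 7"]
    corpus = [
        "The capital of Spain is Madrid.",
        "Recipe: whisk two eggs with sugar until pale.",
        "To solve the equation, isolate x on one side.",
    ]
    print(filter_corpus(corpus, benchmark, keep_fraction=0.5))

In practice, a real filtering pipeline would replace the unigram heuristic with a stronger relevance signal (for example, classifier scores or importance weights), but the shape of the computation, scoring candidate documents against a target distribution and keeping the top fraction, is the same.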
Syllabus
Controlling distribution shifts in language models: a data-centric approach.
Taught by
Simons Institute
Related Courses
Improving Retrieval with RAG Fine-tuning (Pluralsight)
Adapting Like Humans: Embodied AI Beyond Datasets and Domains (GAIA via YouTube)
Neural Nets for NLP 2017 - Multilingual and Multitask Learning (Graham Neubig via YouTube)
Covariate Shift - Challenges and Good Practice in Machine Learning (GOTO Conferences via YouTube)
Challenges in Adapting LLMs for Niche Domains - Devoxx Greece 2024 (Devoxx via YouTube)