Why GenAI Needs Careful Training Data Management
Offered By: Snorkel AI via YouTube
Course Description
Overview
Explore the critical importance of managing training data for large language models in this 18-minute talk by Stephen Bach, assistant professor of computer science at Brown University. Discover the three sequential stages of LLM training and learn why harmonizing data across these stages is essential for model effectiveness. Examine two research vignettes from Bach's lab: one showing how to adapt GenAI models to new domains by automatically generating instruction-tuning data, and another revealing potential safety vulnerabilities in GPT-4 for low-resource languages caused by improperly harmonized data. Access the accompanying slides and additional resources to deepen your understanding of data harmonization in GenAI development, and gain insight into how careful data management shapes model performance and safety.
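To make the first vignette concrete, below is a minimal, hypothetical sketch of the general idea of generating instruction-tuning data from unannotated domain text by prompting an existing instruction-following model. The model name, prompt template, and helper function are illustrative assumptions, not the method presented in the talk.

```python
# Hypothetical sketch: turn raw domain passages into synthetic instruction-tuning
# examples by prompting an off-the-shelf instruction-following model.
from transformers import pipeline

# Any instruction-following checkpoint could be used; this choice is an assumption.
generator = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-1.7B-Instruct")

PROMPT_TEMPLATE = (
    "Read the passage below and write one question a domain expert might ask "
    "about it, followed by the answer.\n\nPassage:\n{passage}\n\nQuestion and answer:"
)

def generate_instruction_pairs(passages):
    """Produce synthetic (instruction, response)-style examples from raw domain text."""
    examples = []
    for passage in passages:
        prompt = PROMPT_TEMPLATE.format(passage=passage)
        output = generator(prompt, max_new_tokens=128, do_sample=True)[0]["generated_text"]
        # Keep only the generated continuation as the synthetic training example.
        examples.append({"context": passage, "synthetic_task": output[len(prompt):].strip()})
    return examples

if __name__ == "__main__":
    domain_docs = [
        "Contrast agents improve the visibility of internal structures in MRI scans."
    ]
    for ex in generate_instruction_pairs(domain_docs):
        print(ex["synthetic_task"])
```

The resulting synthetic examples could then be used to fine-tune a smaller model on the target domain; the talk's related Bonito work explores this kind of task-specific training data generation in depth.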
Syllabus
Why GenAI Needs Careful Training Data Management
Taught by
Snorkel AI
Related Courses
Generative AI Advance Fine-Tuning for LLMs (IBM via Coursera)
Instruction Tuning in Advanced Natural Language Processing - Lecture 6 (Graham Neubig via YouTube)
Fine-Tuning Large Language Models Faster Using Bonito for Task-Specific Training Data Generation (Snorkel AI via YouTube)
Fine-tuning LLMs with Hugging Face SFT and QLoRA - LLMOps Techniques (LLMOps Space via YouTube)
Instruction Tuning of Large Language Models - Lecture (Center for Language & Speech Processing (CLSP), JHU via YouTube)