Allen School Colloquium - NLP Research Lab Presentations
Offered By: Paul G. Allen School via YouTube
Course Description
Overview
Explore cutting-edge research in natural language processing through this colloquium featuring five expert speakers:
- Sewon Min proposes using data at inference time to keep language models up-to-date.
- Margaret Li presents efficient LLM training techniques leveraging adaptive computation and sparsity.
- Orevaoghene Ahia analyzes tokenization methods and their impact on model utility and costs.
- Shangbin Feng explores how political bias propagates in language models.
- Niloofar Mireshghallah examines privacy risks in interactive LLM settings.
Gain valuable insights into the latest advancements and challenges across these critical areas of NLP research.
Syllabus
Allen School Colloquium: NLP Research Lab
Taught by
Paul G. Allen School
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Artificial Intelligence for Robotics - Stanford University via Udacity
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent