NLP Evaluations That We Believe In
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore the challenges and advances in Natural Language Processing (NLP) evaluation in this talk by Matt Gardner of the Allen Institute for Artificial Intelligence. Delve into the limitations of current NLP benchmarks and discover approaches to more meaningful and rigorous evaluation. Learn about the Open Reading Benchmark (ORB), which consolidates several reading comprehension datasets, each targeting different aspects of reading comprehension. Examine contrast sets, a technique for building non-i.i.d. test sets that assess a model's capabilities more thoroughly than a standard held-out split. Gain insight into the intersection of open-domain reading comprehension and question semantics, and into the importance of reasoning over open-domain text in NLP research.
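As a rough illustration of the contrast-set idea mentioned above (not code from the talk), the sketch below represents one original example together with manually perturbed variants and scores them with a simple consistency check; the data, the predict function, and all names are hypothetical.

```python
from typing import Callable, Dict, List

# One original reading-comprehension example plus small manual perturbations
# that change the gold answer (the core idea behind contrast sets).
contrast_set: List[Dict[str, str]] = [
    {"question": "Who kicked the longest field goal?", "answer": "Smith"},       # original
    {"question": "Who kicked the shortest field goal?", "answer": "Jones"},      # perturbed
    {"question": "Who kicked the second-longest field goal?", "answer": "Lee"},  # perturbed
]

def contrast_consistency(predict: Callable[[str], str],
                         examples: List[Dict[str, str]]) -> float:
    """Return 1.0 only if the model answers every example in the set correctly."""
    correct = all(predict(ex["question"]) == ex["answer"] for ex in examples)
    return 1.0 if correct else 0.0

# Usage with a stand-in model that always answers "Smith": it gets the
# original example right but fails the perturbed ones, so the set scores 0.0.
if __name__ == "__main__":
    dummy_model = lambda q: "Smith"
    print(contrast_consistency(dummy_model, contrast_set))  # 0.0
```

Averaging this per-set score over many contrast sets yields a stricter measure than accuracy on an i.i.d. test split, because the model must handle every nearby variant of an example rather than only instances drawn from the original data distribution.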
Syllabus
NLP Evaluations that We Believe In -- Matt Gardner (Allen Institute for Artificial Intelligence)
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Fantasy and Science Fiction: The Human Mind, Our Modern World - University of Michigan via Coursera
Spanish Basics - Independent
Poetry in America: Whitman - Harvard University via edX
How to Read a Mind: An Introduction to Understanding Literary Characters - The University of Nottingham via FutureLearn
"A Christmas Carol" by Dickens: BerkeleyX Book Club - University of California, Berkeley via edX