Inconsistency in Conference Peer Review: Revisiting the 2014 NeurIPS Experiment
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Explore a detailed analysis of the 2014 NeurIPS peer review experiment in this informative video. Delve into how subjective conference paper reviews really are, how well reviewers can predict a paper's future impact, and what became of the papers that were rejected. Learn about the experiment's findings, including the lack of correlation between reviewer quality scores and later citation counts for accepted papers, and what this implies for using publication records to assess researcher quality. Gain insights into potential improvements to the reviewing process and understand the broader context of peer review at machine learning conferences.
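To make the correlation finding concrete, here is a minimal Python sketch of that style of analysis: a rank correlation between per-paper reviewer scores and later citation counts. Everything here is an assumption for illustration; the data is synthetic, and the variable names and distributions are not taken from the experiment.

```python
# Hypothetical illustration of the correlation check discussed in the video:
# compare reviewer quality scores with later citation counts.
# All data below is synthetic -- the real experiment used 2014 NeurIPS
# reviews and citation counts collected years afterward.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_papers = 170                                  # pool size is an assumption
scores = rng.normal(6.0, 1.0, n_papers)         # synthetic mean reviewer scores
citations = rng.lognormal(3.0, 1.5, n_papers)   # citation counts are heavy-tailed

# Spearman (rank) correlation is a natural choice because citation counts
# are far from normally distributed.
rho, p = spearmanr(scores, citations)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
# With independent synthetic data, rho hovers near zero -- the same
# qualitative picture the experiment reported for accepted papers.
```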
Syllabus
- Intro & Overview
- Recap: The 2014 NeurIPS Experiment
- How much of reviewing is subjective?
- Validation via simulation (see the sketch after this list)
- Can reviewers predict future impact?
- Discussion & Comments
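The "Validation via simulation" segment asks whether the committee disagreement observed in the experiment is consistent with reviews being largely subjective. Below is a minimal sketch of that style of simulation, with all parameters (acceptance rate, noise model, mixing weights) assumed rather than taken from the video: each committee's score is a shared objective component plus committee-specific noise, and we measure how often two independent committees disagree on acceptance.

```python
# A minimal sketch of a reviewer-subjectivity simulation (assumptions mine,
# not the video's exact model). Score = objective merit + subjective noise;
# two committees independently accept the top fraction of papers.
import numpy as np

rng = np.random.default_rng(1)

def disagreement(subjective_frac, n_papers=100_000, accept_rate=0.23):
    """Among papers accepted by committee A, the fraction committee B rejects."""
    quality = rng.normal(size=n_papers)          # shared 'objective' merit
    decisions = []
    for _ in range(2):                           # two independent committees
        noise = rng.normal(size=n_papers)        # committee-specific taste
        # sqrt weights keep the total score variance constant across mixes
        s = (np.sqrt(1 - subjective_frac) * quality
             + np.sqrt(subjective_frac) * noise)
        decisions.append(s >= np.quantile(s, 1 - accept_rate))
    a, b = decisions
    return 1.0 - (a & b).sum() / a.sum()

for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"subjective fraction {frac:.2f} -> disagreement {disagreement(frac):.1%}")
```

Sweeping the subjective fraction shows which mixes reproduce a disagreement rate as high as the roughly 50% the 2014 experiment observed among accepted papers, which is the logic behind the validation step.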
Taught by
Yannic Kilcher
Related Courses
- Introduction to Artificial Intelligence (Stanford University via Udacity)
- Natural Language Processing (Columbia University via Coursera)
- Probabilistic Graphical Models 1: Representation (Stanford University via Coursera)
- Computer Vision: The Fundamentals (University of California, Berkeley via Coursera)
- Learning from Data (Introductory Machine Learning course) (California Institute of Technology via Independent)