Student Lightning Talks: Text Generation, Reward Consistency, and Evaluation Validity
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore three cutting-edge research presentations in natural language processing and machine learning delivered by students at the Center for Language & Speech Processing at Johns Hopkins University. Dive into Tianjian Li's work on improving text generation models' robustness to noisy training data through Error Norm Truncation. Examine Lingfeng Shen's investigation into reward model inconsistency in Reinforcement Learning from Human Feedback (RLHF) and its impact on chatbot performance. Discover Kaiser Sun's analysis of compositional generalization evaluation datasets and their influence on assessing model capabilities. Gain insights into advanced techniques for enhancing language models, understanding the challenges in RLHF, and critically evaluating benchmarking strategies in NLP research.
Syllabus
Student Lightning Talks - Tianjian, Lingfeng, Kaiser
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Computational Neuroscience - University of Washington via Coursera
Reinforcement Learning - Brown University via Udacity
Reinforcement Learning - Indian Institute of Technology Madras via Swayam
FA17: Machine Learning - Georgia Institute of Technology via edX
Introduction to Reinforcement Learning - Higher School of Economics via Coursera