Multi-Factor Context-Aware Language Modeling - 2018
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore a cutting-edge approach to language modeling in this 48-minute lecture by Mari Ostendorf from the University of Washington. Delve into the concept of multi-factor context-aware language modeling, which addresses the challenge of language variation across different contexts. Learn about a novel mechanism for incorporating contextual factors into recurrent neural network (RNN) language models, allowing for efficient handling of both continuous and discrete variables. Discover how this approach improves upon existing methods by using context vectors to control low-rank transformations of the recurrent layer weight matrix. Examine experimental results demonstrating performance gains across various contexts and datasets, and understand the computational efficiency of this method compared to alternatives. Gain insights from Ostendorf's extensive research in dynamic statistical models for speech and language processing, and her work on integrating acoustic, prosodic, and language cues for speech understanding and generation.
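To make the core idea concrete, here is a minimal, illustrative sketch of a context vector producing a low-rank additive update to the recurrent weight matrix of a simple RNN cell. The names, shapes, nonlinearity, and the exact form of the update are assumptions chosen for brevity, not the lecture's precise formulation.

```python
import numpy as np

# Illustrative sketch (not the lecture's exact model): a context vector c
# generates a low-rank update L(c) @ R(c) that is added to the base
# recurrent weight matrix before each step.

rng = np.random.default_rng(0)

d_h, d_x, d_c, rank = 16, 8, 4, 2   # hidden, input, context, adaptation rank

W_h = rng.standard_normal((d_h, d_h)) * 0.1   # base recurrent weights
W_x = rng.standard_normal((d_h, d_x)) * 0.1   # input weights
b   = np.zeros(d_h)

# Tensors mapping the context vector to the left/right low-rank factors
# (hypothetical parameter names).
Z_left  = rng.standard_normal((d_c, d_h, rank)) * 0.1
Z_right = rng.standard_normal((d_c, rank, d_h)) * 0.1

def adapted_recurrent_matrix(c):
    """Build a context-adapted recurrent matrix W_h + L(c) @ R(c)."""
    L = np.tensordot(c, Z_left,  axes=(0, 0))   # shape (d_h, rank)
    R = np.tensordot(c, Z_right, axes=(0, 0))   # shape (rank, d_h)
    return W_h + L @ R                          # low-rank update of W_h

def rnn_step(h, x, c):
    """One step of a plain RNN whose recurrence depends on context c."""
    W_adapt = adapted_recurrent_matrix(c)
    return np.tanh(W_adapt @ h + W_x @ x + b)

# Toy usage: the same input under two different context vectors yields
# different hidden-state updates.
h = np.zeros(d_h)
x = rng.standard_normal(d_x)
for c in (np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0, 0.0])):
    print(rnn_step(h, x, c)[:3])
```

Because the update has rank much smaller than the hidden dimension, the per-context adaptation adds only a small number of parameters and matrix multiplications, which is the source of the computational efficiency discussed in the lecture.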
Syllabus
Multi-Factor Context-Aware Language Modeling -- Mari Ostendorf (University of Washington) - 2018
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Neural Networks for Machine Learning - University of Toronto via Coursera
Good Brain, Bad Brain: Basics - University of Birmingham via FutureLearn
Statistical Learning with R - Stanford University via edX
Machine Learning 1: Supervised Learning - Brown University via Udacity
Fundamentals of Neuroscience, Part 2: Neurons and Networks - Harvard University via edX