Learning Neural Network Hyperparameters for Machine Translation - 2019
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore neural network hyperparameter optimization for machine translation in this 52-minute conference talk by Kenton Murray, a PhD candidate at the University of Notre Dame. Dive into methods for improving hyperparameter selection without extensive grid searches, focusing on techniques that learn hyperparameter values during training. Examine common regularization techniques, objective functions, and proximal gradient methods. Analyze experiments on 5-gram language modeling and on auto-sizing transformer layers. Discover key takeaways about the non-universality of optimal hyperparameters and the potential of perceptron tuning for beam search. Gain insights into implementing these techniques in PyTorch and applying them to low-resource and morphologically rich language pairs.
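One of the ideas the talk covers, auto-sizing, uses a group regularizer applied through a proximal gradient step so that entire rows of a weight matrix can be driven to zero and the corresponding units pruned. The following is only a minimal PyTorch sketch of that general idea, not the speaker's implementation; the layer shape, regularization strength `lam`, and learning rate are placeholder assumptions.

```python
import torch

def proximal_group_l2(weight: torch.Tensor, lam: float, lr: float) -> None:
    """Apply the proximal operator of a group-L2 (group lasso) penalty in place.

    Each row of `weight` is treated as one group; rows whose norm falls below
    lam * lr are shrunk all the way to zero, which is what allows whole hidden
    units to be removed after training.
    """
    with torch.no_grad():
        row_norms = weight.norm(dim=1, keepdim=True).clamp_min(1e-12)
        shrink = (1.0 - lam * lr / row_norms).clamp_min(0.0)
        weight.mul_(shrink)

# Sketch of one training step: an ordinary gradient step on the data loss,
# followed by the proximal step that handles the non-differentiable regularizer.
layer = torch.nn.Linear(512, 2048)          # placeholder feed-forward layer
optimizer = torch.optim.SGD(layer.parameters(), lr=0.1)

x = torch.randn(8, 512)
loss = layer(x).pow(2).mean()                # stand-in for the real training loss
loss.backward()
optimizer.step()
proximal_group_l2(layer.weight, lam=1e-3, lr=0.1)
```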
Syllabus
Intro
Statistical Machine Translation
Motivation
Grid Search
Method Overview
Common Regularization
Objective Function
Proximal Gradient Methods
Experiments: 5-gram Language Modeling
5-gram Perplexity
Behavior During Training
Key Takeaways
Optimal Hyperparameters Not Universal
Auto-Sizing Transformer Layers
PyTorch Implementation
Beam Search
Perceptron Tuning
Experiment: Tuned Reward
Questions?
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization - DeepLearning.AI via Coursera
How to Win a Data Science Competition: Learn from Top Kagglers - Higher School of Economics via Coursera
Predictive Modeling and Machine Learning with MATLAB - MathWorks via Coursera
Machine Learning Rapid Prototyping with IBM Watson Studio - IBM via Coursera
Hyperparameter Tuning with Neural Network Intelligence - Coursera Project Network via Coursera