Learning Neural Network Hyperparameters for Machine Translation - 2019
Offered By: Center for Language & Speech Processing (CLSP), JHU via YouTube
Course Description
Overview
Explore neural network hyperparameter optimization for machine translation in this 52-minute conference talk by Kenton Murray, a PhD candidate at the University of Notre Dame. Dive into methods for improving hyperparameter selection without extensive grid searches, focusing on techniques that learn hyperparameter values during training rather than fixing them in advance. Examine common regularization techniques, objective functions, and proximal gradient methods. Analyze experiments on 5-gram language modeling and on auto-sizing transformer layers. Discover key takeaways about the non-universality of optimal hyperparameters and the potential of perceptron tuning for beam search. Gain insights into implementing these techniques in PyTorch and their application to low-resource and morphologically rich language pairs.
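The overview mentions proximal gradient methods, auto-sizing, and a PyTorch implementation. As a rough illustration only, and not the speaker's code, the following minimal PyTorch sketch shows one way a group (l2,1) regularizer can be applied with a proximal step after each ordinary gradient update, so that whole rows of a weight matrix can be driven to exactly zero and the layer is effectively auto-sized. The layer shape, learning rate, and regularizer strength below are illustrative assumptions.

import torch
import torch.nn as nn

model = nn.Linear(64, 256)                        # hypothetical layer to auto-size
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lambda_reg = 1e-3                                 # regularizer strength (assumed)

def prox_group_l2(weight, threshold):
    # Proximal operator of the l2,1 (group lasso) penalty, applied row-wise:
    # shrink each row's l2 norm by `threshold`, zeroing rows that fall below it.
    with torch.no_grad():
        norms = weight.norm(dim=1, keepdim=True)
        scale = torch.clamp(1.0 - threshold / (norms + 1e-12), min=0.0)
        weight.mul_(scale)

# one illustrative update on random data
x, y = torch.randn(32, 64), torch.randn(32, 256)
loss = nn.functional.mse_loss(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()                                        # ordinary gradient step
prox_group_l2(model.weight, lambda_reg * 0.1)     # proximal step (lr * lambda)

# Over many such updates, rows whose weights stay small are driven exactly to
# zero and can be pruned, shrinking the layer without a separate grid search.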
Syllabus
Intro
Statistical Machine Translation
Motivation
Grid Search
Method Overview
Common Regularization
Objective Function
Proximal Gradient Methods
Experiments: 5-gram Language Modeling
5-gram Perplexity
Behavior During Training
Key Takeaways
Optimal Hyperparameters Not Universal
Auto-Sizing Transformer Layers
PyTorch Implementation
Beam Search
Perceptron Tuning
Experiment: Tuned Reward
Questions?
Taught by
Center for Language & Speech Processing (CLSP), JHU
Related Courses
Data Analysis and Visualization - Georgia Institute of Technology via Udacity
Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization - DeepLearning.AI via Coursera
機器學習基石下 (Machine Learning Foundations)---Algorithmic Foundations - National Taiwan University via Coursera
Data Science: Machine Learning - Harvard University via edX
Art and Science of Machine Learning auf Deutsch - Google Cloud via Coursera