Federated Hyperparameter Tuning - Challenges, Baselines, and Connections to Weight-Sharing
Offered By: Stanford University via YouTube
Course Description
Overview
Explore the challenges, baselines, and connections to weight-sharing in federated hyperparameter tuning through this comprehensive conference talk. Delve into the complexities of tuning hyperparameters in federated learning environments, where models are trained across distributed networks of heterogeneous devices. Learn about key challenges in federated hyperparameter optimization and discover how standard approaches can be adapted to form baselines. Gain insights into a novel method called FedEx, which accelerates federated hyperparameter tuning by connecting to neural architecture search techniques. Examine theoretical foundations and empirical results demonstrating FedEx's superior performance on benchmarks like Shakespeare, FEMNIST, and CIFAR-10. Understand the importance of efficient hyperparameter tuning in federated learning and its impact on model accuracy and training budgets.
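To make the weight-sharing connection concrete, below is a minimal sketch of FedEx-style tuning on a toy linear-regression task: each client samples a learning rate from a shared categorical distribution, trains locally, and the server both averages the weights (federated averaging) and applies an exponentiated-gradient update to the distribution based on local validation losses. All names, the hyperparameter grid, and the update details are illustrative assumptions, not the talk's actual implementation.

```python
# Toy sketch of FedEx-style federated hyperparameter tuning (assumed setup:
# linear regression, a small learning-rate grid, simulated clients).
import numpy as np

rng = np.random.default_rng(0)
DIM, NUM_CLIENTS, ROUNDS, LOCAL_STEPS = 5, 10, 30, 5
LEARNING_RATES = np.array([0.001, 0.01, 0.1])  # candidate hyperparameters
ETA = 1.0                                      # exponentiated-gradient step size

# Heterogeneous client data: each client's targets come from a perturbed model.
true_w = rng.normal(size=DIM)
clients = []
for _ in range(NUM_CLIENTS):
    X = rng.normal(size=(50, DIM))
    y = X @ (true_w + 0.1 * rng.normal(size=DIM))
    clients.append((X, y))

w = np.zeros(DIM)  # shared global model (the "weight-shared" component)
theta = np.ones(len(LEARNING_RATES)) / len(LEARNING_RATES)  # config distribution

for _ in range(ROUNDS):
    updates, losses, picks = [], [], []
    for X, y in clients:
        k = rng.choice(len(LEARNING_RATES), p=theta)  # sample a local config
        w_local = w.copy()
        for _ in range(LOCAL_STEPS):                  # local gradient steps
            grad = X.T @ (X @ w_local - y) / len(y)
            w_local -= LEARNING_RATES[k] * grad
        val_loss = np.mean((X @ w_local - y) ** 2)    # local validation loss
        updates.append(w_local)
        losses.append(val_loss)
        picks.append(k)
    w = np.mean(updates, axis=0)  # federated averaging of client models
    # Exponentiated-gradient step: penalize configs with high validation loss,
    # importance-weighted by the probability of having sampled them.
    grad_theta = np.zeros_like(theta)
    for k, loss in zip(picks, losses):
        grad_theta[k] += loss / (theta[k] * NUM_CLIENTS)
    theta *= np.exp(-ETA * grad_theta)
    theta /= theta.sum()

print("config distribution:", np.round(theta, 3))
```

Running the sketch should show the distribution concentrating on whichever learning rate yields the lowest client validation loss, which is the intuition behind tuning hyperparameters alongside a single shared model rather than training one model per configuration.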
Syllabus
Introduction
Machine Learning
Outline
Hyperparameter Tuning Global vs Local
Hyperparameter Tuning Methods
Baseline Challenges
Successive Halving
Issues
Resource Limitations
Local vs Global Validation
Baselines vs Bayesian Optimization
Neural Architecture Search
Architecture Search
Weight Sharing
Constraints
Federated Learning
Federated Averaging
Local Hyperparameters
Local Hyperparameter Tuning
Summary
Solution
Methods
Results
Key takeaways
Questions
Taught by
Stanford MedAI
Related Courses
Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization (DeepLearning.AI via Coursera)
How to Win a Data Science Competition: Learn from Top Kagglers (Higher School of Economics via Coursera)
Predictive Modeling and Machine Learning with MATLAB (MathWorks via Coursera)
Machine Learning Rapid Prototyping with IBM Watson Studio (IBM via Coursera)
Hyperparameter Tuning with Neural Network Intelligence (Coursera Project Network via Coursera)