Federated Hyperparameter Tuning - Challenges, Baselines, and Connections
Offered By: Stanford University via YouTube
Course Description
Overview
Explore the challenges, baselines, and connections to weight-sharing in federated hyperparameter tuning through this comprehensive conference talk. Delve into the complexities of tuning hyperparameters in federated learning environments, where models are trained across distributed networks of heterogeneous devices. Learn about key challenges in federated hyperparameter optimization and discover how standard approaches can be adapted to form baselines. Gain insights into a novel method called FedEx, which accelerates federated hyperparameter tuning by connecting to neural architecture search techniques. Examine theoretical foundations and empirical results demonstrating FedEx's superior performance on benchmarks like Shakespeare, FEMNIST, and CIFAR-10. Understand the importance of efficient hyperparameter tuning in federated learning and its impact on model accuracy and training budgets.
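The overview describes FedEx's core idea: treat each client's local hyperparameter configuration like an architecture choice in weight-sharing NAS, sample a configuration per client each round, aggregate model updates with federated averaging, and reweight configurations by their validation performance. A minimal sketch of that loop, on synthetic linear-regression clients with an assumed grid of candidate local learning rates (the grid, model, and step sizes here are illustrative, not taken from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: least-squares clients; candidate local learning rates
# play the role of the per-client hyperparameter configurations.
CONFIGS = [0.005, 0.02, 0.05, 0.1]
NUM_CLIENTS, DIM, ROUNDS = 8, 5, 30

# Synthetic clients sharing one ground-truth weight vector.
true_w = rng.normal(size=DIM)
clients = []
for _ in range(NUM_CLIENTS):
    X = rng.normal(size=(20, DIM))
    y = X @ true_w + rng.normal(scale=0.1, size=20)
    clients.append((X, y))

def local_sgd(w, X, y, lr, steps=5):
    """A few local gradient steps (FedAvg's client update)."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def val_loss(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

w_global = np.zeros(DIM)
theta = np.ones(len(CONFIGS)) / len(CONFIGS)  # distribution over configs
ETA = 0.5                                      # exponentiated-gradient step size

for _ in range(ROUNDS):
    losses = np.zeros(len(CONFIGS))
    counts = np.zeros(len(CONFIGS))
    updates = []
    for X, y in clients:
        c = rng.choice(len(CONFIGS), p=theta)  # sample a config per client
        w_local = local_sgd(w_global, X, y, CONFIGS[c])
        updates.append(w_local)
        losses[c] += val_loss(w_local, X, y)
        counts[c] += 1
    w_global = np.mean(updates, axis=0)        # FedAvg aggregation
    # Exponentiated-gradient-style reweighting: configs with higher
    # average validation loss lose probability mass.
    avg_loss = np.where(counts > 0, losses / np.maximum(counts, 1), 0.0)
    theta *= np.exp(-ETA * avg_loss)
    theta /= theta.sum()

best_lr = CONFIGS[int(np.argmax(theta))]
```

This sketch compresses the method to its shape: one shared global model amortizes training across all candidate configurations (the weight-sharing connection), so tuning costs roughly one federated training run instead of one per configuration.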
Syllabus
Introduction
Machine Learning
Outline
Hyperparameter Tuning Global vs Local
Hyperparameter Tuning Methods
Baseline Challenges
Successive Halving
Issues
Resource Limitations
Local vs Global Validation
Baselines vs Bayesian Optimization
Neural Architecture Search
Architecture Search
Weight Sharing
Constraints
Federated Learning
Federated Averaging
Local Hyperparameters
Local Hyperparameter Tuning
Summary
Solution
Methods
Results
Key takeaways
Questions
Taught by
Stanford MedAI
Tags
Related Courses
Machine Learning Modeling Pipelines in Production - DeepLearning.AI via Coursera
MLOps for Scaling TinyML - Harvard University via edX
Parameter Prediction for Unseen Deep Architectures - With First Author Boris Knyazev - Yannic Kilcher via YouTube
SpineNet - Learning Scale-Permuted Backbone for Recognition and Localization - Yannic Kilcher via YouTube
Synthetic Petri Dish - A Novel Surrogate Model for Rapid Architecture Search - Yannic Kilcher via YouTube