Efficient Distributed Hyperparameter Tuning with Apache Spark
Offered By: Databricks via YouTube
Course Description
Overview
Explore efficient distributed hyperparameter tuning techniques using Apache Spark in this 26-minute talk from Databricks. Learn how to accelerate machine learning model optimization by leveraging Spark's distributed computing capabilities. Discover best practices for utilizing Spark with Hyperopt, including data distribution strategies and cluster sizing. Understand the challenges of parallelizing Sequential Model-Based Optimization methods and how to overcome them. Gain insights into the SparkTrials API and the joblib-spark extension for scaling up training with scikit-learn. Suitable for those familiar with machine learning concepts and interested in scaling their training processes, this talk provides practical knowledge for implementing distributed hyperparameter tuning workflows.
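The talk's own demo code is not reproduced here, but a minimal sketch of the two APIs it covers follows. The search space, objective function, dataset, and parallelism settings are illustrative assumptions, not values from the talk.

    # Sketch 1: Hyperopt with SparkTrials (requires an active Spark cluster/session).
    from hyperopt import fmin, tpe, hp, SparkTrials
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)  # small example dataset, assumed for illustration

    def objective(C):
        # Hyperopt minimizes the returned value, so use negative CV accuracy as the loss.
        model = LogisticRegression(C=C, max_iter=1000)
        return -cross_val_score(model, X, y, cv=3).mean()

    spark_trials = SparkTrials(parallelism=4)  # run up to 4 trials concurrently on Spark workers
    best = fmin(
        fn=objective,
        space=hp.loguniform("C", -4, 4),  # assumed search space for the regularization parameter
        algo=tpe.suggest,                 # Tree-structured Parzen Estimator (an SMBO method)
        max_evals=32,
        trials=spark_trials,
    )

With SparkTrials, the sequential optimizer (TPE) runs on the driver and proposes batches of candidate hyperparameters, while the individual model fits are evaluated in parallel on the workers. The joblib-spark extension takes the complementary approach of distributing scikit-learn's own cross-validated search, as in this assumed grid-search example:

    # Sketch 2: joblib-spark backend for scikit-learn searches.
    from joblib import parallel_backend
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC
    from joblibspark import register_spark

    register_spark()  # registers the "spark" joblib backend

    param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", "auto"]}  # assumed parameter grid
    search = GridSearchCV(SVC(), param_grid, cv=3)

    with parallel_backend("spark", n_jobs=6):
        search.fit(X, y)  # cross-validation fits are dispatched to Spark executors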
Syllabus
Introduction
Scenario
Agenda
Hyperparameter Tuning
Hyperparameter Tuning Challenges
Parallelization and Performance
Data Distribution
Cluster Size
Demo
Recap
Taught by
Databricks
Related Courses
Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization (DeepLearning.AI via Coursera)
How to Win a Data Science Competition: Learn from Top Kagglers (Higher School of Economics via Coursera)
Predictive Modeling and Machine Learning with MATLAB (MathWorks via Coursera)
Machine Learning Rapid Prototyping with IBM Watson Studio (IBM via Coursera)
Hyperparameter Tuning with Neural Network Intelligence (Coursera Project Network via Coursera)