Reducing Cost, Latency, and Manual Efforts in Hyperparameter Tuning at Redicell
Offered By: Anyscale via YouTube
Course Description
Overview
Learn how to optimize hyperparameter tuning in machine learning models using Ray Tune in this conference talk. Discover techniques to reduce cost, latency, and manual effort while building and experimenting with ML/DL models. Explore the benefits of Ray Tune's out-of-the-box features for efficient compute resource management and its scheduling algorithms for pruning underperforming trials early. Gain insights into integrating Ray Tune with tools like MLflow and Weights & Biases for streamlined experiment tracking and logging. Follow along with a demo and learn how to implement these strategies to enhance your model training process.
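To make the approach concrete, below is a minimal sketch of how Ray Tune's asynchronous hyperband (ASHA) scheduler is typically wired up. The training function, metric name, and search space here are illustrative placeholders rather than code from the talk, and Ray's API has changed across releases (newer versions favor tune.Tuner and a separate report API), so treat this as a sketch using the older tune.run / tune.report style.

from ray import tune
from ray.tune.schedulers import ASHAScheduler

def train_model(config):
    # Stand-in training loop: report a metric each epoch so the scheduler
    # can stop (prune) trials that are clearly falling behind.
    accuracy = 0.0
    for epoch in range(20):
        accuracy += config["lr"] * 0.1  # placeholder for real train/eval work
        tune.report(mean_accuracy=accuracy)

analysis = tune.run(
    train_model,
    config={
        "lr": tune.loguniform(1e-4, 1e-1),       # hypothetical search space
        "batch_size": tune.choice([32, 64, 128]),
    },
    num_samples=20,
    metric="mean_accuracy",
    mode="max",
    scheduler=ASHAScheduler(max_t=20, grace_period=2),
    resources_per_trial={"cpu": 1},
)
print("Best config:", analysis.best_config)

Experiment tracking integrations such as MLflow and Weights & Biases are typically attached in the same call via logger callbacks, so per-trial metrics and configs are logged without extra bookkeeping.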
Syllabus
Introduction
What is hyperparameter tuning
Asynchronous hyperband scheduler
Demo
Questions
Taught by
Anyscale
Related Courses
How Google does Machine Learning en Español - Google Cloud via Coursera
Creating Custom Callbacks in Keras - Coursera Project Network via Coursera
Automatic Machine Learning with H2O AutoML and Python - Coursera Project Network via Coursera
AI in Healthcare Capstone - Stanford University via Coursera
AutoML con Pycaret y TPOT - Coursera Project Network via Coursera