Reducing Cost, Latency, and Manual Efforts in Hyperparameter Tuning at Redicell
Offered By: Anyscale via YouTube
Course Description
Overview
Learn how to optimize hyperparameter tuning for machine learning models using Ray Tune in this conference talk. Discover techniques to reduce cost, latency, and manual effort while building and experimenting with ML/DL models. Explore Ray Tune's out-of-the-box features for efficient compute resource management and its scheduling algorithms for pruning underperforming trials early. Gain insights into integrating Ray Tune with tools like MLflow and Weights & Biases for streamlined experiment tracking and logging. Follow along with a demo and learn how to apply these strategies to your own model training process.
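The talk's demo is not reproduced here, but the sketch below illustrates the pieces the overview mentions, assuming a Ray 2.x install: an ASHAScheduler (Ray Tune's asynchronous HyperBand implementation) prunes underperforming trials early, and an MLflowLoggerCallback logs each trial for experiment tracking (a WandbLoggerCallback works analogously for Weights & Biases). The train_model function, search space, and experiment name are placeholders, and import paths and the reporting API differ across Ray versions.

from ray import tune
from ray.tune.schedulers import ASHAScheduler
# On older Ray releases this callback lives at ray.tune.integration.mlflow instead.
from ray.air.integrations.mlflow import MLflowLoggerCallback


def train_model(config):
    # Placeholder training loop (hypothetical): report a metric every "epoch"
    # so the scheduler can compare trials and stop the weak ones early.
    for epoch in range(100):
        loss = (config["lr"] - 0.01) ** 2 + 1.0 / (epoch + 1)
        tune.report(loss=loss)  # newer Ray versions use train.report({...}) instead


# Asynchronous HyperBand: promote only the better-performing trials to longer budgets.
scheduler = ASHAScheduler(
    max_t=100,           # maximum training iterations per trial
    grace_period=5,      # give every trial at least 5 iterations before pruning
    reduction_factor=2,  # keep roughly half of the trials at each rung
)

analysis = tune.run(
    train_model,
    config={"lr": tune.loguniform(1e-4, 1e-1)},  # hypothetical search space
    num_samples=20,
    metric="loss",
    mode="min",
    scheduler=scheduler,
    resources_per_trial={"cpu": 1},
    callbacks=[MLflowLoggerCallback(experiment_name="asha_demo")],
)
print("Best config found:", analysis.best_config)

In this sketch, 20 trials are launched; the callback logs each trial's parameters and reported loss to MLflow, while ASHA terminates trials whose loss lags behind their peers at each rung, which is how the scheduler saves compute without manual babysitting.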
Syllabus
Introduction
What is hyperparameter tuning?
Asynchronous hyperband scheduler
Demo
Questions
Taught by
Anyscale
Related Courses
Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization
DeepLearning.AI via Coursera
Machine Learning in the Enterprise
Google Cloud via Coursera
Art and Science of Machine Learning 日本語版
Google Cloud via Coursera
Art and Science of Machine Learning auf Deutsch
Google Cloud via Coursera
Art and Science of Machine Learning en Español
Google Cloud via Coursera