Reducing Cost, Latency, and Manual Efforts in Hyperparameter Tuning at Redicell
Offered By: Anyscale via YouTube
Course Description
Overview
Learn how to optimize hyperparameter tuning for machine learning models using Ray Tune in this conference talk. Discover techniques to reduce cost, latency, and manual effort when building and experimenting with ML/DL models. Explore Ray Tune's out-of-the-box features for efficient compute-resource management and its scheduling algorithms for pruning underperforming trials. Gain insights into integrating Ray Tune with tools such as MLflow and Weights & Biases for streamlined experiment tracking and logging. Follow along with a demo and learn how to apply these strategies to your own model-training process.
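The workflow the talk describes can be sketched with Ray Tune's Tuner API. This is a configuration sketch, not the speaker's actual code: it assumes a recent Ray 2.x release (the reporting API has changed across versions), and the toy objective inside `train_fn` is purely illustrative.

```python
import math

from ray import tune
from ray.tune.schedulers import ASHAScheduler

def train_fn(config):
    # Toy objective standing in for a real training loop: the "accuracy"
    # peaks near lr = 1e-2 and improves slightly with more epochs.
    for epoch in range(10):
        acc = 1.0 - abs(math.log10(config["lr"]) + 2) / 4 + 0.01 * epoch
        # Reporting per epoch lets the scheduler compare trials mid-training.
        # (Older Ray versions use the keyword form tune.report(accuracy=acc).)
        tune.report({"accuracy": acc})

tuner = tune.Tuner(
    train_fn,
    param_space={"lr": tune.loguniform(1e-4, 1e-1)},
    tune_config=tune.TuneConfig(
        metric="accuracy",
        mode="max",
        num_samples=10,
        scheduler=ASHAScheduler(),  # prunes underperforming trials early
    ),
)
results = tuner.fit()
print(results.get_best_result().config)
```

Experiment-tracking integrations such as Ray's MLflow or Weights & Biases logger callbacks can be attached via the run configuration so each trial's parameters and metrics are logged automatically.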
Syllabus
Introduction
What is hyperparameter tuning
Asynchronous hyperband scheduler
Demo
Questions
Taught by
Anyscale
Related Courses
Getting Started with MLflow (Pluralsight)
PyTorch for Deep Learning Bootcamp (Udemy)
Supercharge Your Training With PyTorch Lightning and Weights & Biases (Weights & Biases via YouTube)
MLOps 101 - A Practical Tutorial on Creating a Machine Learning Project with DagsHub (Data Professor via YouTube)
Reproducible Machine Learning and Experiment Tracking Pipeline with Python and DVC (Venelin Valkov via YouTube)