Accelerate Model Training with a High-Performance Distributed AI/ML Stack for the Cloud
Offered By: Linux Foundation via YouTube
Course Description
Overview
Discover how to accelerate model training using a high-performance distributed AI/ML stack for the cloud in this 45-minute conference talk by Michael Clifford and Erik Erlandson from Red Hat. Learn about the challenges data scientists face in managing resources and infrastructure for large-scale machine learning models. Explore how the open-source projects Ray and Open Data Hub can simplify distributed training and cloud-based resource allocation. Gain insight into how Ray is integrated with Open Data Hub to improve the experience of developing large machine learning models, and watch a real-world demonstration of Ray accelerating an AI/ML workload on Open Data Hub. Understand the role of Project CodeFlare in improving ML workflow tooling in the cloud. By the end of the talk, understand how to build high-performance, scalable AI/ML systems that empower data scientists with limited DevOps expertise to train and deploy models requiring extensive compute resources.
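As a rough illustration of the kind of workload discussed in the talk, the sketch below uses Ray to fan several training trials out in parallel from a single Python script. It is not code from the demo: the dataset, model, and hyperparameter values are purely illustrative, and on Open Data Hub ray.init() would typically connect to a provisioned Ray cluster rather than starting a local runtime.

# A minimal sketch (not from the talk) of parallelizing training trials with Ray.
# Assumes `pip install "ray[default]" scikit-learn`; the dataset, model, and
# hyperparameter values below are illustrative only.
import ray
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Locally this starts a single-node Ray runtime; on a managed platform such as
# Open Data Hub it would instead connect to a Ray cluster provisioned for you.
ray.init()

X, y = load_digits(return_X_y=True)
# Place the dataset in Ray's object store once so every task shares one copy.
X_ref, y_ref = ray.put(X), ray.put(y)

@ray.remote
def train_and_score(X, y, n_estimators):
    """Train one candidate model and return its cross-validated accuracy."""
    model = RandomForestClassifier(n_estimators=n_estimators, random_state=0)
    return n_estimators, cross_val_score(model, X, y, cv=3).mean()

# Fan the trials out across whatever CPUs (or cluster nodes) Ray can see.
futures = [train_and_score.remote(X_ref, y_ref, n) for n in (50, 100, 200, 400)]
results = ray.get(futures)

best_n, best_acc = max(results, key=lambda r: r[1])
print(f"best n_estimators={best_n}, accuracy={best_acc:.3f}")

ray.shutdown()

Scaling this same script from a laptop to a cluster changes only where ray.init() points, which reflects the resource-allocation simplification the talk attributes to running Ray on Open Data Hub.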
Syllabus
Accelerate Model Training with an Easy to Use High-Performance... - Michael Clifford & Erik Erlandson
Taught by
Linux Foundation
Related Courses
How Google does Machine Learning en Español - Google Cloud via Coursera
Creating Custom Callbacks in Keras - Coursera Project Network via Coursera
Automatic Machine Learning with H2O AutoML and Python - Coursera Project Network via Coursera
AI in Healthcare Capstone - Stanford University via Coursera
AutoML con Pycaret y TPOT - Coursera Project Network via Coursera