Accelerate Model Training with a High-Performance Distributed AI/ML Stack for the Cloud
Offered By: Linux Foundation via YouTube
Course Description
Overview
Discover how to accelerate model training using a high-performance distributed AI/ML stack for the cloud in this 45-minute conference talk by Michael Clifford and Erik Erlandson from Red Hat. Learn about the challenges data scientists face in managing resources and infrastructure for large-scale machine learning models. Explore how the open-source projects Ray and Open Data Hub can simplify distributed training and cloud-based resource allocation, and how integrating Ray with Open Data Hub improves the experience of developing large machine learning models. Watch a real-world demonstration of Ray accelerating an AI/ML workload on Open Data Hub, and learn how Project CodeFlare improves ML workflow tooling in the cloud. By the end of this talk, you will understand how to build high-performance, scalable AI/ML systems that empower data scientists with limited DevOps expertise to train and deploy models requiring extensive compute resources.
Syllabus
Accelerate Model Training with an Easy to Use High-Performance... - Michael Clifford & Erik Erlandson
Taught by
Linux Foundation
Related Courses
Startup Engineering (Stanford University via Coursera)
Developing Scalable Apps in Java (Google via Udacity)
Cloud Computing Concepts, Part 1 (University of Illinois at Urbana-Champaign via Coursera)
Cloud Networking (University of Illinois at Urbana-Champaign via Coursera)
Cloud Computing Concepts: Part 2 (University of Illinois at Urbana-Champaign via Coursera)