Simplified Training, Tuning and Deployment of Foundational Models as a Service with Red Hat OpenShift
Offered By: Anyscale via YouTube
Course Description
Overview
Explore the simplified end-to-end lifecycle of foundation models in this 28-minute conference talk from Anyscale. Discover how a cloud-native, scalable stack for training, fine-tuning, and inference is built on Red Hat OpenShift Data Science (RHODS). Learn about new open-source components such as the CodeFlare SDK, the Multi-Cluster App Dispatcher (MCAD), and InstaScale, and how they integrate with Ray and PyTorch for large-scale data preparation, training, and validation. Examine the inference stack based on ModelMesh and KServe for deploying models in production, and understand how Data Science Pipelines orchestrate models, including versioning and tracking. Gain insights into operating this stack across public cloud and on-premises environments, and explore success stories highlighting the benefits of this full-stack approach for a range of foundation model use cases.
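To make the training side of the workflow concrete, below is a minimal sketch, assuming the open-source CodeFlare SDK (Python) discussed in the talk, of how a data scientist might request a Ray cluster on OpenShift for fine-tuning. The cluster name, namespace, and resource sizes are hypothetical, and parameter names vary between SDK releases.

```python
# Illustrative sketch only: module paths and parameters follow the open-source
# CodeFlare SDK around the time of this talk and may differ in newer releases.
# The name, namespace, and sizes below are hypothetical placeholders.
from codeflare_sdk.cluster.cluster import Cluster, ClusterConfiguration

# Describe the Ray cluster to be queued by the Multi-Cluster App Dispatcher
# (MCAD); with instascale enabled, InstaScale acquires machines on demand.
config = ClusterConfiguration(
    name="ft-demo",            # hypothetical cluster name
    namespace="rhods-project", # hypothetical OpenShift project
    num_workers=2,
    num_gpus=1,
    instascale=True,
)

cluster = Cluster(config)
cluster.up()          # submit the request; MCAD dispatches it when capacity is available
cluster.wait_ready()  # block until the Ray head and worker pods are running

# A Ray/PyTorch fine-tuning job can now be submitted to this cluster,
# e.g. through the SDK's job helpers or the Ray job API.

cluster.down()        # tear down the cluster and release InstaScale-provisioned nodes
```

The inference path covered in the talk then exposes the tuned model through KServe (with ModelMesh for multi-model serving), which is typically configured as a Kubernetes custom resource rather than through the SDK.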
Syllabus
Simplified Training, Tuning and Deployment of Foundational Models as a Service with Red Hat OpenShift
Taught by
Anyscale
Related Courses
Custom and Distributed Training with TensorFlow (DeepLearning.AI via Coursera)
Architecting Production-ready ML Models Using Google Cloud ML Engine (Pluralsight)
Building End-to-end Machine Learning Workflows with Kubeflow (Pluralsight)
Deploying PyTorch Models in Production: PyTorch Playbook (Pluralsight)
Inside TensorFlow (TensorFlow via YouTube)