Introduction to Amazon Elastic Inference
Offered By: Amazon Web Services via AWS Skill Builder
Course Description
Overview
Amazon Elastic Inference (Amazon EI) lets you reduce machine learning inference costs by up to 75%. This accelerated compute service for Amazon SageMaker and Amazon EC2 lets you add hardware acceleration to your machine learning inference in fractional sizes of a full GPU instance, so you can avoid over-provisioning GPU compute capacity. In this course, you'll learn about the service's benefits and key features and see a brief demonstration.
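As a rough illustration of how an accelerator is attached in practice, the sketch below deploys a model to a SageMaker endpoint with the SageMaker Python SDK, passing an accelerator_type so a fractional GPU is attached to a CPU host instance. The S3 model path, IAM role, framework version, instance type, and accelerator size are illustrative assumptions, not values from the course.

```python
# Sketch: hosting a model on a SageMaker endpoint with an Elastic Inference
# accelerator attached via the SageMaker Python SDK. Paths, role, and sizes
# below are placeholder assumptions for illustration only.
import sagemaker
from sagemaker.tensorflow import TensorFlowModel

role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

model = TensorFlowModel(
    model_data="s3://my-bucket/my-model/model.tar.gz",  # hypothetical artifact
    role=role,
    framework_version="1.15",  # an EI-enabled TensorFlow Serving version
)

# accelerator_type attaches a fractional GPU (Elastic Inference) to the
# CPU instance hosting the endpoint, instead of provisioning a full GPU instance.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",       # CPU host instance
    accelerator_type="ml.eia2.medium",  # Elastic Inference accelerator size
)
```

A usage note under the same assumptions: the endpoint is invoked exactly as a plain CPU endpoint would be (for example, predictor.predict(payload)); the accelerator size is chosen independently of the host instance, which is the mechanism behind the cost savings described above.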
Related Courses
Introduction to AWS Inferentia and Amazon EC2 Inf1 Instances - Pluralsight
Introduction to AWS Inferentia and Amazon EC2 Inf1 Instances (Korean) - Amazon Web Services via AWS Skill Builder
TensorFlow Lite - Solution for Running ML On-Device - TensorFlow via YouTube
Inference on KubeEdge - Linux Foundation via YouTube
Deep Learning Neural Network Acceleration at the Edge - Linux Foundation via YouTube