Docker Containers for Machine Learning on Specialized AI Hardware
Offered By: Docker via YouTube
Course Description
Overview
Explore the intricacies of running machine learning workloads on specialized AI hardware using Docker in this 35-minute talk by AWS Developer Advocate Shashank Prasanna. Delve into the evolution of specialized processors, from early coprocessors to modern GPUs and AI accelerators like AWS Inferentia and Intel Habana Gaudi. Discover how Docker containers adapt to heterogeneous systems with multiple processor types, ensuring scalability and maintaining their benefits. Gain insights into the future of machine learning workloads across diverse AI silicon, including GPUs, TPUs, and emerging technologies. Learn about the crucial role containers play in managing these complex, multi-processor environments for efficient machine learning deployments.
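As context for how containers reach specialized processors, here is a minimal sketch of exposing a host GPU to a container. The `--gpus` flag and the NVIDIA CUDA base image are standard Docker/NVIDIA tooling, but this exact invocation is an illustration, not taken from the talk, and it assumes the NVIDIA Container Toolkit is installed on the host:

```shell
# Run nvidia-smi inside a CUDA container with access to all host GPUs
# (requires Docker 19.03+ and the NVIDIA Container Toolkit on the host)
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

Other accelerators follow the same pattern of device passthrough plus a vendor runtime, which is part of why containers scale across heterogeneous AI hardware.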
Syllabus
How does Docker run machine learning on specialized AI hardware
Taught by
Docker
Related Courses
Моделирование биологических молекул на GPU (Biomolecular Modeling on GPU) by Moscow Institute of Physics and Technology via Coursera
Practical Deep Learning For Coders by fast.ai via Independent
GPU Architectures And Programming by Indian Institute of Technology, Kharagpur via Swayam
Perform Real-Time Object Detection with YOLOv3 by Coursera Project Network via Coursera
Getting Started with PyTorch by Coursera Project Network via Coursera