Deploying Deep Learning Models for Inference at Production Scale
Offered By: Applied Singularity via YouTube
Course Description
Overview
Explore a comprehensive session from NVIDIA Discovery Bengaluru focused on deploying AI models at production scale. Learn about two key NVIDIA resources: TensorRT, a deep learning inference SDK that optimizes trained neural network models and accelerates inference across NVIDIA GPU platforms, and Triton Inference Server, open-source software that provides a standardized inference serving platform across diverse infrastructures. The accompanying PowerPoint presentation is available for detailed insights. To keep up with the latest advancements in AI, machine learning, deep learning, and generative AI, join the Applied Singularity Meetup group or download their free mobile app, available on iOS and Android.
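Triton serves models from a model repository, where each model directory contains a `config.pbtxt` describing its inputs and outputs. As a rough illustration of the deployment workflow the session covers, a minimal configuration for a TensorRT-optimized image classifier might look like the sketch below (the model name, tensor names, and dimensions are hypothetical, not taken from the talk):

```protobuf
# Hypothetical config.pbtxt for a TensorRT engine served by Triton.
# "tensorrt_plan" tells Triton the model file is a serialized TensorRT engine.
name: "resnet50_trt"
platform: "tensorrt_plan"
max_batch_size: 8

# Input/output names and shapes must match the tensors baked into the engine;
# the values here are illustrative for a 224x224 RGB classifier.
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

With a file like this in place, Triton can load the model and expose it over its standard HTTP/gRPC inference APIs without any model-specific serving code.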
Syllabus
Deploying Deep Learning Models for Inference at Production Scale - at NVIDIA
Taught by
Applied Singularity
Related Courses
Teaching Impacts of Technology: Fundamentals (University of California, San Diego via Coursera)
Microsoft Azure Services and Concepts (Pluralsight)
Virtualización con VMware aplicada al mundo empresarial (Udemy)
Cloud Deployment Options: Executive Briefing (Pluralsight)
Designing Storage Networking for Cisco Data Center Infrastructure (Pluralsight)