Accelerate AI Inference Workloads with Google Cloud TPUs and GPUs
Offered By: Google Cloud Tech via YouTube
Course Description
Overview
Explore key considerations for choosing Cloud Tensor Processing Units (TPUs) and NVIDIA-powered graphics processing unit (GPU) VMs for high-performance AI inference on Google Cloud. Learn about the strengths of each accelerator for various workloads, including large language models and generative AI. Discover deployment and optimization techniques for inference pipelines using TPUs or GPUs, and understand cost implications along with strategies for cost optimization. This 37-minute conference talk from Google Cloud Next 2024 features insights from speakers Alexander Spiridonov, Omer Hasan, Uğur Arpaci, and Kirat Pandya, offering practical guidance for deploying AI models at scale with Google Cloud's range of accelerator options.
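To make the cost-comparison theme concrete, here is a minimal sketch of the kind of back-of-the-envelope arithmetic used when weighing accelerator options: cost per million generated tokens given an hourly VM price and sustained throughput. All prices and throughput figures below are made-up illustrative numbers, not real Google Cloud pricing or benchmark results.

```python
# Hypothetical cost comparison between two accelerator configurations.
# All numbers are illustrative assumptions, NOT real Google Cloud pricing.

def cost_per_million_tokens(hourly_price_usd: float, tokens_per_second: float) -> float:
    """Cost of generating one million tokens at a sustained throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_price_usd / tokens_per_hour * 1_000_000

# Compare two hypothetical configurations (assumed prices and throughputs).
tpu_cost = cost_per_million_tokens(hourly_price_usd=4.0, tokens_per_second=5000)
gpu_cost = cost_per_million_tokens(hourly_price_usd=6.0, tokens_per_second=6000)
print(f"Config A: ${tpu_cost:.3f} per 1M tokens")
print(f"Config B: ${gpu_cost:.3f} per 1M tokens")
```

The cheaper configuration depends on both price and throughput, which is why the talk stresses matching the accelerator to the workload rather than comparing hourly rates alone.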
        
Syllabus
Accelerate AI inference workloads with Google Cloud TPUs and GPUs
Taught by
Google Cloud Tech
Related Courses
Introduction to Artificial Intelligence - Stanford University via Udacity
Probabilistic Graphical Models 1: Representation - Stanford University via Coursera
Artificial Intelligence for Robotics - Stanford University via Udacity
Computer Vision: The Fundamentals - University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course) - California Institute of Technology via Independent
