Accelerate AI Inference Workloads with Google Cloud TPUs and GPUs

Offered By: Google Cloud Tech via YouTube

Tags

Google Cloud Platform (GCP) Courses
Artificial Intelligence Courses
Machine Learning Courses
Cloud Computing Courses
Generative AI Courses
Inference Courses
TPUs Courses
Hardware Acceleration Courses

Course Description

Overview

Explore key considerations for choosing Cloud Tensor Processing Units (TPUs) and NVIDIA-powered graphics processing unit (GPU) VMs for high-performance AI inference on Google Cloud. Learn about the strengths of each accelerator for various workloads, including large language models and generative AI. Discover deployment and optimization techniques for inference pipelines using TPUs or GPUs, and understand cost implications along with strategies for cost optimization. This 37-minute conference talk from Google Cloud Next 2024 features speakers Alexander Spiridonov, Omer Hasan, Uğur Arpaci, and Kirat Pandya, offering practical guidance for deploying AI models at scale with Google Cloud's range of accelerator options.

Syllabus

Accelerate AI inference workloads with Google Cloud TPUs and GPUs


Taught by

Google Cloud Tech

Related Courses

Production Machine Learning Systems
Google Cloud via Coursera
Deep Learning
Kaggle via YouTube
All About AI Accelerators - GPU, TPU, Dataflow, Near-Memory, Optical, Neuromorphic & More
Yannic Kilcher via YouTube
Machine Learning with JAX - From Hero to HeroPro+
Aleksa Gordić - The AI Epiphany via YouTube
PyTorch NLP Model Training and Fine-Tuning on Colab TPU Multi-GPU with Accelerate
1littlecoder via YouTube