Accelerate PyTorch Workloads with PyTorch/XLA
Offered By: Google Cloud Tech via YouTube
Course Description
Overview
Explore how PyTorch/XLA accelerates AI workloads on Google Cloud AI Accelerators in this 31-minute conference talk. Learn about the collaboration between Google, Meta, and AI ecosystem partners to enhance performance and cost-effectiveness for the PyTorch, JAX, and TensorFlow frameworks. Discover the XLA compiler's role in optimizing PyTorch workloads on Cloud TPUs and GPUs. Gain insights into PyTorch/XLA's capabilities for high-performance training and inference of state-of-the-art large language models such as Meta's Llama 2. Understand how PyTorch Lightning facilitates quick and easy fine-tuning of LLMs on Cloud TPUs. Presented by Carlos Mocholi, Damien Sereni, Shauheen Zahirazami, and Rachit Aggarwal at Google Cloud Next.
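The listing itself contains no code, but as a rough sketch of the idiom the talk covers, the snippet below shows how an ordinary PyTorch training step can be pointed at a Cloud TPU (or XLA-backed GPU) through the torch_xla package. The tiny linear model, tensor shapes, and hyperparameters are illustrative placeholders, not material from the talk.

import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

# Acquire the XLA device (a TPU core, or a GPU behind the XLA backend).
device = xm.xla_device()

# Placeholder model and optimizer, moved onto the XLA device as usual.
model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Dummy batch created directly on the device.
x = torch.randn(32, 128, device=device)
y = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# xm.optimizer_step applies the update; barrier=True flushes the lazily
# recorded operations so the XLA compiler can fuse and execute the step.
xm.optimizer_step(optimizer, barrier=True)

PyTorch/XLA records tensor operations lazily and hands the resulting graph to the XLA compiler for fusion and optimization, which is why the step boundary (here via barrier=True, or via mark_step / a parallel data loader in real training loops) matters for performance.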
Syllabus
Accelerate PyTorch workloads with PyTorch/XLA
Taught by
Google Cloud Tech
Related Courses
Production Machine Learning Systems (Google Cloud via Coursera)
Deep Learning (Kaggle via YouTube)
All About AI Accelerators - GPU, TPU, Dataflow, Near-Memory, Optical, Neuromorphic & More (Yannic Kilcher via YouTube)
Machine Learning with JAX - From Hero to HeroPro+ (Aleksa Gordić - The AI Epiphany via YouTube)
PyTorch NLP Model Training and Fine-Tuning on Colab TPU Multi-GPU with Accelerate (1littlecoder via YouTube)