All About AI Accelerators - GPU, TPU, Dataflow, Near-Memory, Optical, Neuromorphic & More
Offered By: Yannic Kilcher via YouTube
Course Description
Overview
Dive into an in-depth interview with AI acceleration expert Adi Fuchs, exploring the landscape of modern AI acceleration technology. Gain insights into the success of GPUs, the concept of "dark silicon," and emerging technologies beyond traditional accelerators. Explore systolic arrays, VLIW, reconfigurable dataflow hardware, near-memory computing, optical and neuromorphic computing, and their impact on AI development. Understand how hardware acts as both an enabler and limiter in AI progress, and discover resources for further exploration of this rapidly evolving field.
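Of the topics listed above, systolic arrays are perhaps the easiest to picture in a few lines of code: a grid of processing elements through which operands flow in a skewed wavefront, so that each element only ever talks to its neighbors. The toy simulation below is purely illustrative (the function name and the cycle-indexing scheme are this sketch's own, not from the interview) and models the timing pattern in software rather than real hardware.

```python
# Toy simulation of a 2D systolic array computing C = A @ B.
# Each processing element (PE) at grid position (i, j) holds an accumulator.
# Operands are skewed so that a[i][k] and b[k][j] "arrive" at PE (i, j)
# on cycle t = i + j + k, mimicking the diagonal wavefront of data
# flowing through a real array. Illustrative only, not actual hardware.

def systolic_matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    # A full pass through an n x n array takes 3n - 2 cycles.
    for t in range(3 * n - 2):
        for i in range(n):
            for j in range(n):
                k = t - i - j  # which operand pair reaches PE (i, j) this cycle
                if 0 <= k < n:
                    C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))  # [[19, 22], [43, 50]]
```

The point of the skewed schedule is that each partial product is consumed the cycle it is produced, which is why hardware systolic arrays (such as those in TPUs) need no large intermediate buffers.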
Syllabus
- Intro
- What does it mean to make hardware for AI?
- Why were GPUs so successful?
- What is "dark silicon"?
- Beyond GPUs: How can we get even faster AI compute?
- A look at today's accelerator landscape
- Systolic Arrays and VLIW
- Reconfigurable dataflow hardware
- The failure of Wave Computing
- What is near-memory compute?
- Optical and Neuromorphic Computing
- Hardware as enabler and limiter
- Everything old is new again
- Where to go to dive deeper?
Taught by
Yannic Kilcher
Related Courses
- Production Machine Learning Systems (Google Cloud via Coursera)
- Deep Learning (Kaggle via YouTube)
- Machine Learning with JAX - From Hero to HeroPro+ (Aleksa Gordić - The AI Epiphany via YouTube)
- PyTorch NLP Model Training and Fine-Tuning on Colab TPU Multi-GPU with Accelerate (1littlecoder via YouTube)
- Solving a Complex Game with AI and All the Google Cloud Power (Devoxx via YouTube)