JAX Crash Course - Accelerating Machine Learning Code
Offered By: AssemblyAI via YouTube
Course Description
Overview
Dive into a comprehensive 27-minute crash course on JAX, exploring its capabilities as a NumPy-compatible library for accelerating machine learning code on CPU, GPU, and TPU. Learn about JAX's key features, including its drop-in replacement for NumPy, just-in-time compilation with jit(), automatic gradient computation using grad(), vectorization with vmap(), and parallelization through pmap(). Discover how JAX compares to other libraries in terms of speed and examine its limitations. Follow along with practical examples, including an implementation of a training loop, and gain insights into when and why to use JAX for high-performance machine learning research.
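The transformations named above can be sketched in a few lines of JAX. This is an illustrative sketch, not code from the course; the function names `predict` and `loss` are hypothetical examples, while `jax.jit`, `jax.grad`, and `jax.vmap` are the real JAX APIs the course covers.

```python
import jax
import jax.numpy as jnp

# jax.numpy is a drop-in replacement for NumPy's array API.
def predict(w, x):
    return jnp.dot(x, w)

def loss(w, x, y):
    return jnp.mean((predict(w, x) - y) ** 2)

# grad() builds a new function that returns the gradient
# of loss() with respect to its first argument, w.
grad_loss = jax.grad(loss)

# jit() compiles the gradient function with XLA, so repeated
# calls with same-shaped inputs run much faster.
fast_grad = jax.jit(grad_loss)

# vmap() vectorizes a per-example function over a batch axis:
# w is shared (None), x is mapped over its leading axis (0).
batched_predict = jax.vmap(predict, in_axes=(None, 0))

w = jnp.ones(3)
x = jnp.arange(6.0).reshape(2, 3)  # batch of 2 examples
y = jnp.array([1.0, 2.0])

g = fast_grad(w, x, y)        # gradient, same shape as w
preds = batched_predict(w, x)  # one prediction per example
```

`pmap()` has the same call signature style but maps the function across multiple devices (e.g. TPU cores) instead of across a batch axis, which is why it only pays off on multi-device hardware.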
Syllabus
Intro & Outline
What is JAX
Speed comparison
Drop-in Replacement for NumPy
jit: just-in-time compiler
Limitations of JIT
grad: Automatic Gradients
vmap: Automatic Vectorization
pmap: Automatic Parallelization
Example Training Loop
What’s the catch?
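The "Example Training Loop" topic can be sketched as follows. This is a minimal illustration under assumed toy data (learning y = 2x + 1), not the course's actual code; the parameter dictionary and `update` function are hypothetical names, while `jax.jit`, `jax.grad`, and `jax.tree_util.tree_map` are standard JAX APIs.

```python
import jax
import jax.numpy as jnp

def loss(params, x, y):
    # Simple linear model: pred = x @ w + b
    pred = jnp.dot(x, params["w"]) + params["b"]
    return jnp.mean((pred - y) ** 2)

@jax.jit
def update(params, x, y, lr=0.1):
    grads = jax.grad(loss)(params, x, y)
    # Apply one gradient-descent step to every leaf of the params pytree.
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

# Hypothetical toy data: targets follow y = 2x + 1.
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (32, 1))
y = 2.0 * x[:, 0] + 1.0

params = {"w": jnp.zeros(1), "b": jnp.zeros(())}
for _ in range(200):
    params = update(params, x, y)
```

Because `update` is decorated with `@jax.jit`, the whole step (forward pass, gradient, and parameter update) is compiled once and reused on every iteration, which is where JAX's speedup over plain NumPy loops comes from.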
Taught by
AssemblyAI
Related Courses
NFNets - High-Performance Large-Scale Image Recognition Without Normalization (Yannic Kilcher via YouTube)
Coding a Neural Network from Scratch in Pure JAX - Machine Learning with JAX - Tutorial 3 (Aleksa Gordić - The AI Epiphany via YouTube)
Diffrax - Numerical Differential Equation Solvers in JAX (Fields Institute via YouTube)
JAX - Accelerated Machine Learning Research via Composable Function Transformations in Python (Fields Institute via YouTube)
Getting Started with Automatic Differentiation (PyCon US via YouTube)