JAX Crash Course - Accelerating Machine Learning Code
Offered By: AssemblyAI via YouTube
Course Description
Overview
Dive into a comprehensive 27-minute crash course on JAX, exploring its capabilities as a NumPy-compatible library for accelerating machine learning code on CPU, GPU, and TPU. Learn about JAX's key features, including its drop-in replacement for NumPy, just-in-time compilation with jit(), automatic gradient computation using grad(), vectorization with vmap(), and parallelization through pmap(). Discover how JAX compares to other libraries in terms of speed and examine its limitations. Follow along with practical examples, including an implementation of a training loop, and gain insights into when and why to use JAX for high-performance machine learning research.
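The transforms named above can be sketched briefly. The snippet below is a minimal illustration (not taken from the video), assuming a recent `jax` install; the function names and data are illustrative.

```python
import jax
import jax.numpy as jnp

# Drop-in NumPy replacement: jnp mirrors the familiar numpy API.
def loss(w, x, y):
    return jnp.mean((x @ w - y) ** 2)

# grad() builds a function returning d(loss)/d(w);
# jit() compiles it with XLA for CPU/GPU/TPU speed.
grad_loss = jax.jit(jax.grad(loss))

w = jnp.zeros(3)
x = jnp.ones((4, 3))
y = jnp.ones(4)
print(grad_loss(w, x, y))  # [-2. -2. -2.]

# vmap() maps a per-example function over a leading batch axis,
# so loss written for one example runs across the whole batch.
per_example_loss = jax.vmap(loss, in_axes=(None, 0, 0))
print(per_example_loss(w, x, y))  # [1. 1. 1. 1.]
```

`pmap()` has the same shape as `vmap()` but splits the batch axis across multiple devices, which is why the course treats the two together.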
Syllabus
Intro & Outline
What is JAX
Speed comparison
Drop-in Replacement for NumPy
jit: just-in-time compiler
Limitations of JIT
grad: Automatic Gradients
vmap: Automatic Vectorization
pmap: Automatic Parallelization
Example Training Loop
What’s the catch?
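A training loop in JAX, like the one the syllabus walks through, typically combines `grad()` for the update direction and `jit()` around the whole step. The sketch below is a hypothetical linear-regression version under those assumptions; it is not the course's exact code.

```python
import jax
import jax.numpy as jnp

def loss(params, x, y):
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@jax.jit  # compile the whole update step once, reuse every iteration
def update(params, x, y, lr=0.1):
    grads = jax.grad(loss)(params, x, y)
    # Plain gradient descent: move each parameter against its gradient.
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

# Synthetic data with known weights [2, -1] and bias 0.5.
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (32, 2))
y = x @ jnp.array([2.0, -1.0]) + 0.5

params = {"w": jnp.zeros(2), "b": 0.0}
for _ in range(200):
    params = update(params, x, y)

print(params["w"])  # approaches [2, -1]
```

Because `update` is functionally pure (it returns new parameters instead of mutating them), `jit()` can trace and compile it safely; that purity requirement is part of the "catch" the final section covers.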
Taught by
AssemblyAI
Related Courses
Introduction to Artificial Intelligence, Stanford University via Udacity
Natural Language Processing, Columbia University via Coursera
Probabilistic Graphical Models 1: Representation, Stanford University via Coursera
Computer Vision: The Fundamentals, University of California, Berkeley via Coursera
Learning from Data (Introductory Machine Learning course), California Institute of Technology via Independent