JAX Crash Course - Accelerating Machine Learning Code
Offered By: AssemblyAI via YouTube
Course Description
Overview
Dive into a comprehensive 27-minute crash course on JAX, exploring its capabilities as a NumPy-compatible library for accelerating machine learning code on CPU, GPU, and TPU. Learn about JAX's key features, including its drop-in replacement for NumPy, just-in-time compilation with jit(), automatic gradient computation using grad(), vectorization with vmap(), and parallelization through pmap(). Discover how JAX compares to other libraries in terms of speed and examine its limitations. Follow along with practical examples, including an implementation of a training loop, and gain insights into when and why to use JAX for high-performance machine learning research.
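To give a feel for the features named above, here is a minimal sketch (not code from the video) of jax.numpy as a NumPy drop-in together with jit(), grad(), and vmap(); the function names predict and loss and the toy data are illustrative assumptions.

import jax
import jax.numpy as jnp

def predict(w, x):
    # jax.numpy mirrors the NumPy API, so this reads like np.dot(x, w)
    return jnp.dot(x, w)

def loss(w, x, y):
    return jnp.mean((predict(w, x) - y) ** 2)

fast_loss = jax.jit(loss)                            # jit: compile with XLA for fast repeated calls
loss_grad = jax.grad(loss)                           # grad: d(loss)/d(w), the first argument
batched_loss = jax.vmap(loss, in_axes=(None, 0, 0))  # vmap: map over the batch axis of x and y

w = jnp.ones(3)
x = jnp.arange(12.0).reshape(4, 3)
y = jnp.ones(4)
print(fast_loss(w, x, y))     # compiled scalar loss
print(loss_grad(w, x, y))     # gradient with shape (3,)
print(batched_loss(w, x, y))  # one loss per example, shape (4,)

pmap() follows the same calling pattern as vmap() but maps the work across multiple accelerators, so it is left out of this single-device sketch.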
Syllabus
Intro & Outline
What is JAX
Speed comparison
Drop-in Replacement for NumPy
jit: just-in-time compiler
Limitations of JIT
grad: Automatic Gradients
vmap: Automatic Vectorization
pmap: Automatic Parallelization
Example Training Loop
What’s the catch?
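To accompany the "Example Training Loop" item above, here is a minimal sketch of the kind of loop the video builds: a jit-compiled gradient-descent step on a toy linear-regression problem. It is not the course's exact code; the model, data, and learning rate are illustrative assumptions.

import jax
import jax.numpy as jnp

def loss(params, x, y):
    w, b = params
    pred = jnp.dot(x, w) + b
    return jnp.mean((pred - y) ** 2)

@jax.jit
def update(params, x, y, lr=0.1):
    # grad() gives d(loss)/d(params); jit() compiles the whole update step
    grads = jax.grad(loss)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

# Toy data: y = 2*x + 1 with a single feature
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (64, 1))
y = 2.0 * x[:, 0] + 1.0

params = (jnp.zeros(1), 0.0)
for step in range(100):
    params = update(params, x, y)

print(params)  # should approach (array([2.0]), 1.0)

The pattern of a pure loss function, grad() for the update direction, and jit() wrapped around the whole step is the idiom the transforms above are designed to support.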
Taught by
AssemblyAI
Related Courses
Amazon FSx for Lustre Primer (Italian) - Amazon Web Services via AWS Skill Builder
Amazon FSx for Lustre Primer (Korean) - Amazon Web Services via AWS Skill Builder
Amazon FSx for Lustre Primer (Portuguese) - Amazon Web Services via AWS Skill Builder
Amazon FSx for Lustre Primer (Spanish) - Amazon Web Services via AWS Skill Builder
Amazon FSx for Lustre Primer (Traditional Chinese) - Amazon Web Services via AWS Skill Builder