Shared Memory Parallelism in Julia with Multi-Threading - Parallel Depth-First Scheduling
Offered By: The Julia Programming Language via YouTube
Course Description
Overview
Syllabus
Welcome!
Why do we need threads?
Task parallelism.
Data parallelism.
Julia's experimental threading infrastructure added in 2015/2016.
Successes of that threading infrastructure.
What we've learned.
The problem is not adding threads to Julia, but making them useful at every level.
Nested parallelism: parallel code calling a function from a library that is itself parallel.
Example: multiplying two n x n matrices.
Example: running code sequentially.
Example: you need O(n^2) space.
Example: running code in parallel on 4 cores with OpenMP, OMP_NESTED = 1.
Example: such parallel code needs O(n^3) space.
Another way: work-stealing.
Problem: the work-stealing algorithm essentially runs like a serial algorithm.
Parallel depth-first scheduling.
partr -- parallel task runtime.
partr implementation.
partr -- priority queues.
partr -- handling nested parallelism.
Possible problem: we do not synchronize at each spawn point.
Why are all these things important?
Q&A: Is Julia better suited than other languages to implementing partr?
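The nested-parallelism pattern the syllabus describes (parallel code calling a library function that is itself parallel) can be sketched in Julia with tasks. This is an illustrative example, not code from the talk; the names `psum` and `nested_total` are hypothetical.

```julia
# A sketch (not code from the talk) of nested parallelism in Julia:
# a parallel caller spawning tasks that themselves run a parallel
# routine, the pattern partr's depth-first scheduler is built for.
using Base.Threads

# Inner parallel routine: sums a vector in two spawned halves.
function psum(v)
    mid = div(length(v), 2)
    left  = Threads.@spawn sum(@view v[1:mid])
    right = Threads.@spawn sum(@view v[mid+1:end])
    return fetch(left) + fetch(right)
end

# Outer parallel caller: each spawned task calls the parallel psum,
# so the task tree nests without any synchronization at each spawn
# point other than the final fetches.
function nested_total(vs)
    tasks = map(v -> Threads.@spawn(psum(v)), vs)
    return sum(fetch.(tasks))
end

vs = [fill(1.0, 100) for _ in 1:4]
println(nested_total(vs))  # prints 400.0
```

With a depth-first scheduler, the inner tasks of one outer task tend to run before unrelated outer tasks are started, which is what keeps the space usage closer to the sequential bound than naive nested OpenMP regions.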
Taught by
The Julia Programming Language
Related Courses
Intro to Parallel Programming - Nvidia via Udacity
Introduction to Linear Models and Matrix Algebra - Harvard University via edX
Introduction to Parallel Programming Using OpenMP and MPI - Tomsk State University via Coursera
Supercomputing - Partnership for Advanced Computing in Europe via FutureLearn
Fundamentals of Parallelism on Intel Architecture - Intel via Coursera