Shared Memory Parallelism in Julia with Multi-Threading - Parallel Depth-First Scheduling
Offered By: The Julia Programming Language via YouTube
Course Description
Overview
Syllabus
Welcome!
Why do we need threads?
Task parallelism.
Data parallelism (see the task/data parallelism sketch after the syllabus).
Julia's experimental threading infrastructure added in 2015/2016.
Successes of aforementioned threading infrastructure.
What we've learned.
The problem is not adding threads to Julia, but making them useful at every level.
Nested parallelism: parallel code calling a function from a library that is also parallel (see the matrix-multiply sketch after the syllabus).
Example: multiplying two n x n matrices.
Example: running the code sequentially.
Example: you need O(n^2) space.
Example: running the code in parallel on 4 cores with OpenMP, OMP_NESTED = 1.
Example: such parallel code needs O(n^3) space.
Another way: work-stealing.
Problem: the work-stealing algorithm essentially runs like a serial algorithm.
Parallel depth-first scheduling.
partr -- parallel task runtime.
partr implementation.
partr -- priority queues.
partr -- handling nested parallelism.
Possible problem: we do not synchronize at each spawn point (see the spawn-without-waiting sketch after the syllabus).
Why are all these things important?
Q&A: Is Julia more suitable than other languages for implementing partr?
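To make the task-parallelism and data-parallelism chapters concrete, here is a minimal sketch using Julia's standard threading primitives, Threads.@spawn and Threads.@threads (available since Julia 1.3, the release that shipped the partr scheduler). The workloads (a sum, a product, doubling an array) are placeholders chosen for illustration, not examples from the talk.

```julia
using Base.Threads

# Task parallelism: two unrelated computations run as separate tasks.
function task_parallel_demo(xs)
    t1 = Threads.@spawn sum(xs)    # one task computes the sum
    t2 = Threads.@spawn prod(xs)   # another task computes the product
    return fetch(t1), fetch(t2)    # wait for and collect both results
end

# Data parallelism: the same operation applied to every element of one array,
# with iterations divided among the available threads.
function data_parallel_demo(xs)
    out = similar(xs)
    Threads.@threads for i in eachindex(xs)
        out[i] = 2 * xs[i]         # each iteration is independent
    end
    return out
end

task_parallel_demo(rand(10))
data_parallel_demo(rand(10))
```

Start Julia with more than one thread (for example `julia -t 4`) for the tasks to actually run in parallel.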
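The nested-parallelism chapters use multiplying two n x n matrices as their running example. The sketch below is an illustrative reconstruction of that situation rather than code from the talk: an outer threaded loop over column blocks of the result calls a routine that is itself parallel, so tasks are spawned at two levels. The plain triple-loop kernel, the block size, and the two-way inner split are arbitrary choices made to keep the example self-contained.

```julia
using Base.Threads

# Straightforward triple-loop kernel for C = A*B on (sub)matrices; no BLAS,
# so all parallelism below comes from Julia tasks.
function kernel!(C, A, B)
    @inbounds for j in axes(B, 2), i in axes(A, 1)
        s = zero(eltype(C))
        for k in axes(A, 2)
            s += A[i, k] * B[k, j]
        end
        C[i, j] = s
    end
    return C
end

# A "library" routine that is itself parallel: it splits its block of work
# in half and spawns a task for one half.
function parallel_block!(C, A, B)
    mid = size(A, 1) ÷ 2
    t = Threads.@spawn kernel!(view(C, 1:mid, :), view(A, 1:mid, :), B)
    kernel!(view(C, mid+1:size(A, 1), :), view(A, mid+1:size(A, 1), :), B)
    wait(t)
    return C
end

# A caller that is also parallel: a threaded loop over column blocks of C,
# each iteration calling the parallel routine above -- nested parallelism.
function nested_matmul(A, B; blk = 64)
    n = size(B, 2)
    C = Matrix{promote_type(eltype(A), eltype(B))}(undef, size(A, 1), n)
    Threads.@threads for b in 1:cld(n, blk)
        cols = ((b - 1) * blk + 1):min(b * blk, n)
        parallel_block!(view(C, :, cols), A, view(B, :, cols))
    end
    return C
end

A, B = rand(200, 200), rand(200, 200)
@assert nested_matmul(A, B) ≈ A * B
```

Because partr gives all of these tasks one shared, depth-first-scheduled thread pool, the inner spawns compose with the outer loop instead of oversubscribing the machine, which is the composability problem the talk addresses.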
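One of the later chapters notes that partr does not synchronize at each spawn point. The sketch below only illustrates the user-facing half of that idea, namely that Threads.@spawn returns a Task immediately and the caller decides where the single synchronization happens; it is not the scheduler-internal analysis the talk goes into.

```julia
using Base.Threads

# Spawn all the work up front without blocking at any spawn point, and
# synchronize once, at the end, when the results are actually needed.
function spawn_then_sync(chunks)
    tasks = [Threads.@spawn sum(c) for c in chunks]  # no waiting here
    return sum(fetch.(tasks))                        # single sync point
end

spawn_then_sync([rand(1_000) for _ in 1:8])
```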
Taught by
The Julia Programming Language
Related Courses
Introduction to Programming for Musicians and Digital Artists (California Institute of the Arts via Coursera)
Introduction to Real-Time Audio Programming in ChucK (California Institute of the Arts via Kadenze)
The Complete Java Certification Course (Udemy)
Java In-Depth: Become a Complete Java Engineer! (Udemy)
Advanced Java programming with JavaFx: Write an email client (Udemy)