Scheduling Array Operations for GPU and Distributed Computing
Offered By: Dyalog User Meetings via YouTube
Course Description
Overview
Explore the world of scheduling array operations in this 22-minute conference talk from Dyalog '22. Dive into Juuso Haavisto's research on static scheduling and its applications in optimizing execution across various hardware and compute infrastructure settings. Learn how type theory aids in parallelizing array operations for graphics processing units and distributed computing. Discover the APL approach to data and its benefits for GPU and multi-core programming. Examine ongoing research in academia and industry, abstract interpretation for software bug detection, and the concept of types in academic contexts. Understand the language trilemma of performance, productivity, and generality. Investigate static semantics and rank polymorphism in array languages, and explore how shape analysis can build constraints that align computer understanding with APL programmers' perspectives.
Syllabus
Introduction, personal background and research goals
How an APL approach to data can help with GPU/multi-core programming
Related on-going research in academia and industry
Abstract interpretation of computer programs to help find software bugs
An abstract academic concept of types
The language trilemma of performance, productivity and generality
Static semantics and rank polymorphism in array languages
Using shape analysis to build constraints which help computers see things the way the APLer does
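The last two syllabus items can be illustrated with a minimal sketch. This is not code from the talk; it uses NumPy as a stand-in for an array language, and the `shapes_conform` helper is a hypothetical simplification of what a static shape analysis might check before a program runs.

```python
# A minimal sketch (not from the talk) of two ideas from the syllabus,
# using NumPy as a stand-in for an array language:
# 1) rank polymorphism: the same operation applies across ranks
# 2) shape analysis: checking operand shapes before running the computation
import numpy as np

def shapes_conform(a_shape, b_shape):
    """Hypothetical static check mimicking NumPy's broadcasting rule:
    trailing axes must be equal or 1. A compiler could run this on
    inferred shapes to reject ill-shaped programs before execution."""
    for x, y in zip(reversed(a_shape), reversed(b_shape)):
        if x != y and x != 1 and y != 1:
            return False
    return True

# Rank polymorphism: `+` works uniformly on a vector and a matrix.
v = np.array([1, 2, 3])          # rank 1, shape (3,)
m = np.arange(6).reshape(2, 3)   # rank 2, shape (2, 3)

assert shapes_conform(m.shape, v.shape)   # (2, 3) + (3,) conforms
print(m + v)                              # v is lifted across each row

assert not shapes_conform((2, 3), (4,))   # rejected without running
```

The point of the static check is that shape errors surface from the types/shapes alone, before any data is touched, which is the constraint-building idea the talk applies to GPU and distributed scheduling.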
Taught by
Dyalog User Meetings
Related Courses
CUDA Advanced Libraries (Johns Hopkins University via Coursera)
CUDA at Scale for the Enterprise (Johns Hopkins University via Coursera)
Parallel Computing with CUDA (Pluralsight)
Learn to Write Unity Compute Shaders (Udemy)
CUDA programming Masterclass with C++ (Udemy)