Scheduling Array Operations for GPU and Distributed Computing
Offered By: Dyalog User Meetings via YouTube
Course Description
Overview
Explore the world of scheduling array operations in this 22-minute conference talk from Dyalog '22. Dive into Juuso Haavisto's research on static scheduling and its application to optimizing execution across different hardware and compute-infrastructure settings. Learn how type theory aids in parallelizing array operations for graphics processing units and distributed computing. Discover the APL approach to data and its benefits for GPU and multi-core programming. Examine ongoing research in academia and industry, abstract interpretation for software bug detection, and the concept of types in academic contexts. Understand the language trilemma of performance, productivity, and generality. Investigate static semantics and rank polymorphism in array languages, and explore how shape analysis can build constraints that align the computer's understanding of array code with the APL programmer's.
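The overview mentions rank polymorphism, the property that one array operation applies uniformly whatever the rank of its argument, which is part of what makes array code attractive to schedule for GPUs and distributed systems. A minimal sketch of that idea follows, written in Python with NumPy purely as an illustration; the talk itself concerns APL/Dyalog, and none of the names below come from it.

    import numpy as np

    # Rank polymorphism: the same definition works on a scalar, a vector,
    # and a matrix; the operation lifts over however many axes the data has.
    def double_and_sum(x):
        # Element-wise doubling followed by a full reduction, for any rank.
        return (2 * x).sum()

    scalar = np.float64(2.0)                                # rank 0
    vector = np.arange(6, dtype=np.float64)                 # rank 1, shape (6,)
    matrix = np.arange(12, dtype=np.float64).reshape(3, 4)  # rank 2, shape (3, 4)

    print(double_and_sum(scalar), double_and_sum(vector), double_and_sum(matrix))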
Syllabus
Introduction, personal background and research goals
How an APL approach to data can help with GPU/multi-core programming
Related on-going research in academia and industry
Abstract interpretation of computer programs to help find software bugs
An abstract academic concept of types
The language trilemma of performance, productivity and generality
Static semantics and rank polymorphism in array languages
Using shape analysis to build constraints which help computers see things the way the APLer does (illustrated in the sketch after this list)
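As a rough illustration of what "shape analysis building constraints" can mean (a hypothetical sketch, not the analysis presented in the talk), the Python functions below derive result shapes, or reject a program, from argument shapes alone, before any array data exists; a compiler with this information can decide how to lay out and schedule the work.

    from typing import Tuple

    Shape = Tuple[int, ...]

    def elementwise_result(a: Shape, b: Shape) -> Shape:
        # Element-wise operations constrain both arguments to the same shape.
        if a != b:
            raise TypeError(f"shape mismatch: {a} vs {b}")
        return a

    def matmul_result(a: Shape, b: Shape) -> Shape:
        # Rank-2 matrix product: the inner dimensions must agree,
        # and the result shape is fully determined by the inputs.
        if len(a) != 2 or len(b) != 2:
            raise TypeError("this sketch handles rank-2 arrays only")
        if a[1] != b[0]:
            raise TypeError(f"inner dimensions differ: {a[1]} vs {b[0]}")
        return (a[0], b[1])

    # "Static" use: the shapes come from the program text, not from running it.
    print(elementwise_result((3, 4), (3, 4)))   # (3, 4)
    print(matmul_result((3, 4), (4, 2)))        # (3, 2)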
Taught by
Dyalog User Meetings
Related Courses
The Benefits of Learning a Different Programming Language (ACCU Conference via YouTube)
A Novice Introduces APL Programming Language (ACCU Conference via YouTube)
The Power of Function Composition (NDC Conferences via YouTube)
Orthotope - APL-Inspired Arrays for Haskell - Lambda Days 2022 (Code Sync via YouTube)
Apple Array Allocation - Static Memory Management for Flat, Immutable Arrays (ACM SIGPLAN via YouTube)