Scheduling Array Operations for GPU and Distributed Computing
Offered By: Dyalog User Meetings via YouTube
Course Description
Overview
Explore the world of scheduling array operations in this 22-minute conference talk from Dyalog '22. Dive into Juuso Haavisto's research on static scheduling and its applications in optimizing execution across various hardware and compute infrastructure settings. Learn how type theory aids in parallelizing array operations for graphics processing units and distributed computing. Discover the APL approach to data and its benefits for GPU and multi-core programming. Examine ongoing research in academia and industry, abstract interpretation for software bug detection, and the concept of types in academic contexts. Understand the language trilemma of performance, productivity, and generality. Investigate static semantics and rank polymorphism in array languages, and explore how shape analysis can build constraints that align computer understanding with APL programmers' perspectives.
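As a quick illustration of the rank polymorphism mentioned above (a sketch of standard Dyalog APL behaviour, not an excerpt from the talk; assumes default ⎕IO=1), the same scalar functions apply unchanged whether the arguments are scalars, vectors, or matrices, with the interpreter deriving the result shape from the argument shapes. Indented lines are session input, flush-left lines are output:

      1 + 10 20 30
11 21 31
      (2 3⍴⍳6) × 10
10 20 30
40 50 60

This shape-driven behaviour is the kind of information the talk's shape analysis aims to track statically.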
Syllabus
Introduction, personal background and research goals
How an APL approach to data can help with GPU/multi-core programming
Related on-going research in academia and industry
Abstract interpretation of computer programs to help find software bugs
An abstract academic concept of types
The language trilemma of performance, productivity and generality
Static semantics and rank polymorphism in array languages
Using shape analysis to build constraints that help computers see arrays the way the APL programmer does (see the sketch after this list)
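As a sketch of the kind of constraint such shape analysis could encode (my illustration, not material from the talk): in a standard Dyalog session, adding vectors of lengths 3 and 4 fails only at run time, whereas a static shape analysis could reject the expression before execution. Exact error formatting varies by interpreter version:

      10 20 30 + 1 2 3 4
LENGTH ERROR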
Taught by
Dyalog User Meetings
Related Courses
Cloud Computing Concepts, Part 1 (University of Illinois at Urbana-Champaign via Coursera)
Cloud Computing Concepts: Part 2 (University of Illinois at Urbana-Champaign via Coursera)
Reliable Distributed Algorithms - Part 1 (KTH Royal Institute of Technology via edX)
Introduction to Apache Spark and AWS (University of London International Programmes via Coursera)
Réalisez des calculs distribués sur des données massives ("Perform distributed computations on massive data") (CentraleSupélec via OpenClassrooms)