Asynchronous MPI Communication with OpenMP Tasks - Spawning Task Dependency Graphs Across Nodes
Offered By: NHR@FAU via YouTube
Course Description
Overview
Explore asynchronous MPI communication with OpenMP tasks in this informative seminar from the NHR PerfLab series. Learn how to improve scalability of parallel codes by replacing block-synchronous execution with fine-grained synchronization using OpenMP tasks and dependencies. Discover the potential of detached tasks introduced in OpenMP 5.0 and their combination with MPI detached communication to build task dependency graphs across MPI processes. Gain insights into integrating MPI detached communication into your projects for real asynchronous communication benefits. Compare parallel performance of different synchronization levels through example code demonstrations. Understand how this approach can also be applied using C++ futures/promises for those not using OpenMP tasks. Presented by Joachim Jenke, a postdoctoral researcher from RWTH Aachen University, this seminar offers valuable knowledge for HPC application developers seeking to enhance correctness and performance.
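To illustrate the idea discussed in the seminar, here is a minimal sketch (not taken from the talk) of how an OpenMP 5.0 detached task can tie the completion of a non-blocking MPI receive into a task dependency graph: a consumer task with an in-dependence on the buffer only starts once the detach event is fulfilled. The MPI detached-communication library presented in the seminar is replaced here by a simple manual polling loop; buffer sizes, tags, and names are illustrative, and an OpenMP 5.0-capable compiler plus MPI_THREAD_MULTIPLE support are assumed.

/*
 * Sketch: detached OpenMP task + non-blocking MPI receive.
 * The detached task's out-dependence on buf is released only when
 * omp_fulfill_event() is called, i.e. when the message has arrived,
 * so the consumer task cannot start before the data is there.
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1024

int main(int argc, char **argv) {
    int provided, rank, size;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double buf[N];

    if (rank == 0 && size > 1) {
        double data[N];
        for (int i = 0; i < N; ++i) data[i] = (double)i;
        MPI_Send(data, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Request req;
        omp_event_handle_t ev;

        #pragma omp parallel
        #pragma omp single
        {
            /* Post the non-blocking receive up front. */
            MPI_Irecv(buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);

            /* Detached task (OpenMP 5.0): its body returns immediately,
               but the task only completes, and thus releases the
               out-dependence on buf, once the event is fulfilled. */
            #pragma omp task detach(ev) depend(out: buf)
            { /* stands for the in-flight MPI transfer */ }

            /* Consumer task: scheduled only after the receive completed. */
            #pragma omp task depend(in: buf)
            printf("rank 1 received, buf[0] = %f\n", buf[0]);

            /* Stand-in for a progress mechanism: poll the request and
               fulfill the detach event once the message has arrived. */
            int done = 0;
            while (!done) {
                MPI_Test(&req, &done, MPI_STATUS_IGNORE);
                #pragma omp taskyield  /* let other tasks run meanwhile */
            }
            omp_fulfill_event(ev);
        } /* implicit barrier: remaining tasks execute here */
    }

    MPI_Finalize();
    return 0;
}

In the detached-communication approach described in the seminar, the manual polling loop would instead be handled by the communication layer, which fulfills the event once the transfer finishes, so the application threads stay free to execute other tasks.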
Syllabus
Date and time: Tuesday, April 4, 2:00 p.m. CET
Taught by
NHR@FAU
Related Courses
High Performance Computing - Georgia Institute of Technology via Udacity
Introduction to Parallel Programming Using OpenMP and MPI (Введение в параллельное программирование с использованием OpenMP и MPI) - Tomsk State University via Coursera
High Performance Computing for Scientists and Engineers - Indian Institute of Technology, Kharagpur via Swayam
High Performance Computing - University of Iceland via YouTube
Introduction to Parallel Programming with OpenMP and MPI - Indian Institute of Technology Delhi via Swayam