Parallel Programming Concepts

Offered By: openHPI

Tags

Programming Courses, Parallel Programming Courses

Course Description

Overview

Since the very beginning of computer technology, processors have been built with ever-increasing clock frequencies and smarter optimizations to achieve faster software execution. Developers and the software industry have grown used to applications becoming faster merely by exchanging the underlying hardware. Since the beginning of the century, however, it has become apparent that this approach no longer works.

Moore's law about the ever-increasing number of transistors per chip is still valid, but power consumption, thermal management, and memory latency issues make serial code acceleration increasingly difficult. Instead, hardware vendors now use the additional transistors for multiple processing elements (‘cores’) per processor chip and for deeper memory hierarchies. Modern hardware turns any desktop, server, or even mobile system into a kind of parallel computer, which makes parallel programming the new default for application development. Exploiting any additional horsepower of the hardware is now the responsibility of the software.

The openHPI online course “Parallel Programming Concepts” presents relevant theoretical and practical foundations for parallel programming. We cover crucial theoretical ideas such as semaphores and actors, the architecture of modern parallel hardware, different programming models such as task parallelism, message passing, and functional programming, as well as several patterns and best practices.
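
As a small, illustrative sketch of one of these concepts (a counting semaphore), the following Python snippet limits how many threads may use a shared resource at the same time. It is not taken from the course materials; names such as MAX_CONCURRENT and worker are illustrative assumptions.

    import threading

    # Allow at most MAX_CONCURRENT threads to hold the semaphore at once.
    MAX_CONCURRENT = 2
    slots = threading.Semaphore(MAX_CONCURRENT)

    def worker(worker_id):
        # Acquire a slot; blocks while MAX_CONCURRENT workers are already inside.
        with slots:
            print(f"worker {worker_id} is using the shared resource")

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()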

The course is suitable for all participants who are interested in a broader overview of parallelism, especially beyond the mere use of multiple threads. Participants should know at least one programming language; no other skills are required.


Syllabus

  • Week 1: Terminology and fundamental concepts
  • Week 2: Shared Memory Parallelism – Basics
  • Week 3: Shared Memory Parallelism – Programming
  • Week 4: Accelerators
  • Week 5: Distributed memory parallelism
  • Week 6: Patterns, best practices and examples
  • Exam

Taught by

Dr. Peter Tröger

Related Courses

2D image processing
Higher School of Economics via Coursera
Abstraction, Problem Decomposition, and Functions
University of Colorado System via Coursera
Advanced CloudFormation: Macros (French)
Amazon Web Services via AWS Skill Builder
Advanced Deep Learning Methods for Healthcare
University of Illinois at Urbana-Champaign via Coursera
Advanced Java Concurrency
Vanderbilt University via Coursera