8 MPI: the parallel backbone

This chapter covers

  • The basics of creating and running an MPI program
  • Sending messages from one process to another
  • Using collective communication to perform common communication patterns
  • Techniques for communication exchanges to link meshes on separate processes
  • Creating custom MPI datatypes for simpler code and better performance
  • Using MPI Cartesian topology functions to streamline communication
  • Writing applications with hybrid MPI+OpenMP

The importance of the Message Passing Interface (MPI) standard lies in its scalability: an MPI program can harness additional compute nodes, so ever-larger problems can be run simply by adding more nodes to the simulation. The name message passing refers to its core capability: sending messages from one process to another.

MPI is ubiquitous in the field of high performance computing. In many scientific fields, using a supercomputer means using an MPI implementation. MPI was launched as an open standard in 1994, and within months it had become the dominant library-based approach to parallel computing. Since 1994, MPI has contributed to scientific breakthroughs in fields ranging from physics to machine learning and self-driving cars.

8.1       The basics for an MPI program

8.1.1   Basic MPI function calls for every MPI program

8.1.2   Compiler wrappers for simpler MPI programs

8.1.3   Using parallel startup commands

8.1.4   Minimum working example of an MPI program

8.2       The send and receive commands for process-to-process communication

8.3       Collective communication: a powerful component of MPI

8.3.1   Using a barrier to synchronize timers

8.3.2   Using the broadcast to handle small file input

8.3.3   Using a reduction to get a single value from across all processes

8.3.4   Using gather to put order in debug printouts

8.3.5   Using scatter and gather to send data out to processes for work

8.4       Data parallel examples

8.4.1   Stream triad to measure bandwidth on the node

8.4.2   Ghost cell exchanges in a two-dimensional mesh

8.4.3   Ghost cell exchanges in a three-dimensional stencil calculation

8.5       Advanced MPI functionality to simplify code and enable optimizations