8 MPI: the parallel backbone
This chapter covers
- The basics of creating and running an MPI program
- Sending messages from one process to another
- Using collective communication to perform common communication patterns
- Techniques for exchanging data to link meshes across separate processes
- Creating custom MPI datatypes for simpler code and better performance
- Using MPI Cartesian topology functions to streamline communication
- Writing applications with hybrid MPI+OpenMP
The importance of the Message Passing Interface (MPI) standard is that it lets a program harness additional compute nodes, so you can run ever-larger problems simply by adding more nodes to the simulation. The name message passing refers to its core capability: sending messages from one process to another.
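To make this concrete, here is a minimal sketch of a two-process message exchange (the file name and the value sent are just illustrative); rank 0 sends a single double to rank 1 with the standard MPI_Send and MPI_Recv calls:

```c
// A minimal sketch of passing a message between two processes.
// Compile with: mpicc msg.c -o msg
// Run with:     mpirun -n 2 ./msg
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
   MPI_Init(&argc, &argv);

   int rank;
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // which process am I?

   if (rank == 0) {
      double value = 3.14;
      // send one double to the process with rank 1; tag 0 labels the message
      MPI_Send(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
   } else if (rank == 1) {
      double value;
      // receive the matching message from rank 0
      MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
               MPI_STATUS_IGNORE);
      printf("Rank 1 received %f from rank 0\n", value);
   }

   MPI_Finalize();
   return 0;
}
```

Each process runs the same program; the rank returned by MPI_Comm_rank determines which branch it takes. Later sections of this chapter develop this pattern into full communication exchanges.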
MPI is ubiquitous in the field of high performance computing. Across many scientific fields, using a supercomputer almost always means using an MPI implementation. MPI was released as an open standard in 1994 and within months became the dominant library-based approach to parallel programming. Since then, MPI has enabled scientific breakthroughs in fields ranging from physics to machine learning and self-driving cars!