CS267: Lecture 7

Message Passing Programming (MPI)

September 19, 2001

Lecturer: Kathy Yelick

Abstract

We give a brief overview of data parallel programming, and how it might be applied to a simple variation of the sharks and fish problem (fish in a current). This will help in understanding some of the constructs that appear in message passing programs, which are the main topic of this lecture. We describe message passing programming as exemplified by the MPI library, which is the most common message passing library available on MPPs and clusters. We start with the three basic components of a parallel programming model: creating parallelism, communication, and synchronization, although for MPI the second is by far the most interesting. We describe the basics of point-to-point and collective communication operations, as well as some of the more advanced features of MPI.
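To make these three components concrete before diving in, here is a minimal sketch of an MPI program in C. It is not taken from the lecture itself; the partial-sum computation is just a toy stand-in for real work. MPI_Init creates the parallelism (one process per rank), MPI_Send and MPI_Recv illustrate point-to-point communication, and MPI_Bcast illustrates a collective operation.

/* Sketch: each rank computes a partial value; ranks 1..size-1 send
 * theirs to rank 0 with point-to-point calls, and a collective
 * broadcast then shares the total with everyone. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);               /* creating parallelism */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* who am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* how many of us? */

    int partial = rank + 1;               /* stand-in for real local work */
    int total = partial;

    if (rank == 0) {
        /* point-to-point: receive each worker's contribution */
        for (int src = 1; src < size; src++) {
            int incoming;
            MPI_Recv(&incoming, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            total += incoming;
        }
    } else {
        /* point-to-point: send this rank's contribution to rank 0 */
        MPI_Send(&partial, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    /* collective: every rank learns the total; a single MPI_Allreduce
     * would accomplish the same thing more idiomatically */
    MPI_Bcast(&total, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d of %d: total = %d\n", rank, size, total);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with mpirun -np 4, each of the four processes prints the same total (10), since the broadcast delivers rank 0's result to all ranks.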

These notes steal liberally from lectures by Bill Saphir, Bill Gropp, and Rusty Lusk, as well as from previous CS267 instructors (Jim Demmel, David Culler, David Bailey, and Bob Lucas).

2001 Lecture Notes

PowerPoint, Postscript, PDF

Readings