Structure of an MPI Job

An MPI job consists of two or more cooperating processes which may be running on the same computer or on different computers.

MPI programs can be compiled with a standard compiler and the appropriate compile and link flags. However, most MPI implementations provide simple wrapper commands, such as mpicc, that supply the MPI flags automatically.

For example, an MPI program can be compiled and then run as 4 processes on a stand-alone computer with:

mypc: mpicc my-mpi-prog.c -o my-mpi-prog
mypc: mpirun -n 4 ./my-mpi-prog

One process in the job is generally designated as the root process. The root process is not required to do anything special, but it typically plays a different role than the rest. Often the root process is responsible for things like partitioning a matrix and distributing it to the other processes, and then gathering the results from the other processes.

Usually, mpirun starts up N identical processes, which then determine for themselves which process they are (by calling an MPI function that returns a different rank value to each process) and then follow different paths depending on the result.

#include <stdio.h>
#include <stdlib.h>
#include <sysexits.h>
#include <mpi.h>

#define ROOT_RANK 0

int main(int argc, char *argv[])
{
    int my_rank;

    /* Initialize data for the MPI functions */
    if ( MPI_Init(&argc, &argv) != MPI_SUCCESS )
    {
        fputs("MPI_Init failed.\n", stderr);
        exit(EX_UNAVAILABLE);
    }

    /* Determine this process's rank within the job */
    if ( MPI_Comm_rank(MPI_COMM_WORLD, &my_rank) != MPI_SUCCESS )
    {
        fputs("MPI_Comm_rank failed.\n", stderr);
        exit(EX_UNAVAILABLE);
    }

    /*
     *  For this job, the process with rank 0 will assume the role of
     *  the "root" process, which will run different code than the
     *  other processes.
     */
    if (my_rank == ROOT_RANK)
    {
        // Root process code
    }
    else
    {
        // Non-root process code
    }

    /* Release resources used by the MPI library */
    MPI_Finalize();

    return EX_OK;
}

Note

Every process in this MPI job contains code that it will never execute: the root skips the non-root branch, and the other processes skip the root branch. This is not considered a problem, since the size of the code is generally dwarfed by the size of the data, so it is not usually worth the effort to create separate programs for the different processes within an MPI job.