Hybrid Program Structure: SPMD

0. Program Structure Implementation Strategy: Single Program, Multiple Data

file: hybrid-MPI+OpenMP/00.spmd/spmd.c

Build inside 00.spmd directory:

make spmd

Execute on the command line inside 00.spmd directory:

mpirun -np <number of processes> ./spmd

Note

This command runs all processes on the machine where you type it. See Running the examples on your cluster for more information about running the code on a cluster of machines. This note applies to all the examples below.

This is a simple example of the single program, multiple data (SPMD) pattern. The program creates the MPI execution environment, obtains the size of MPI_COMM_WORLD, and gets the unique rank of each process. It then enters the OpenMP threaded portion of the code, where the OpenMP functions omp_get_thread_num() and omp_get_num_threads() are called. Each process prints its thread number, number of threads, process rank, number of processes, and hostname. Lastly, all processes terminate the MPI execution environment.

To do:

Compile and run the program varying the number of processes. How many threads are working within each process? Uncomment the #pragma directive, recompile and rerun the program, varying the number of processes as before. Can you explain the behavior of the program in terms of processes and threads?

/* spmd.c
 * ... illustrates the single program multiple data
 *      (SPMD) pattern using MPI and OpenMP commands.
 *
 * Joel Adams, Calvin College, November 2009.
 *
 * Usage: mpirun -np N ./spmd 
 *
 * Exercise:
 * - Build and run the program, 
 *     varying N's value as 1, 2, 3, 4, ...
 * - Compare the results to the source code
 * - Uncomment the commented-out #pragma directive
 * - Rebuild and rerun the program, varying N as before
 * - Compare the results to the source code
 */

#include <stdio.h>    // printf()
#include <stdlib.h>   // atoi()
#include <mpi.h>      // MPI commands
#include <omp.h>      // OpenMP commands

int main(int argc, char** argv) {
    int processID = -1, numProcesses = -1, length = -1;
    char hostName[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &processID);
    MPI_Comm_size(MPI_COMM_WORLD, &numProcesses);
    MPI_Get_processor_name(hostName, &length);

//  #pragma omp parallel
    {
        int threadID = omp_get_thread_num();
        int numThreads = omp_get_num_threads();

        printf("Hello from thread %d of %d from process %d of %d on %s\n",
               threadID, numThreads,
               processID, numProcesses, hostName);
    }

    MPI_Finalize();
    return 0;
}
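
With the #pragma directive still commented out, the block executes on a single thread, so a two-process run produces one line per process reporting "thread 0 of 1". The transcript below is an illustrative sketch (it needs an MPI installation to run, the hostname node01 is made up, and the lines may appear in either order):

```shell
mpirun -np 2 ./spmd
# Hello from thread 0 of 1 from process 0 of 2 on node01
# Hello from thread 0 of 1 from process 1 of 2 on node01
```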

1. Program Structure Implementation Strategy: Single Program, Multiple Data with a User-Defined Number of Threads

file: hybrid-MPI+OpenMP/01.spmd2/spmd.c

Build inside 01.spmd2 directory:

make spmd2

Execute on the command line inside 01.spmd2 directory:

mpirun -np <number of processes> ./spmd2 [numThreads]

Here is a second SPMD example in which the user controls the number of threads: the desired thread count is passed as an optional command-line argument, so you can request as many threads as you like.

To do:

Compile and run the program varying the number of processes and number of threads. Compare the behavior of the program to the source code.

/* spmd2.c
 * ... illustrates the single program multiple data
 *      (SPMD) pattern using MPI and OpenMP commands
 *      with the user controlling numThreads
 *      from the command line.
 *
 * Joel Adams, Calvin College, November 2009.
 *
 * Usage: mpirun -np N ./spmd2 [numThreads]
 *
 * Exercise:
 * - Build and run, varying N = 1, 2, 3, 4, ...
 * - Compare behavior to source code
 * - Rerun with N = 1, varying numThreads = 1, 2, 3, 4, ...
 * - Compare behavior to source code
 * - Rerun with N = 2, varying numThreads = 1, 2, 3, 4, ...
 * - Compare behavior to source code
 * - Rerun with N = 3, varying numThreads = 1, 2, 3, 4, ...
 * - Compare behavior to source code
 * - ...
 */

#include <stdio.h>    // printf()
#include <stdlib.h>   // atoi()
#include <mpi.h>      // MPI commands
#include <omp.h>      // OpenMP commands

int processCommandLine(int argc, char** argv) {
    if (argc == 2) {
        return atoi( argv[1] );
    } else {
        return 1;
    }
}

int main(int argc, char** argv) {
    int processID = -1, numProcesses = -1, length = -1;
    int numThreads = -1;
    char hostName[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &processID);
    MPI_Comm_size(MPI_COMM_WORLD, &numProcesses);
    MPI_Get_processor_name(hostName, &length);

    numThreads = processCommandLine(argc, argv);
    #pragma omp parallel num_threads(numThreads)
    {
        int threadID = omp_get_thread_num();

        printf("Hello from thread %d of %d from process %d of %d on %s\n",
               threadID, numThreads,
               processID, numProcesses, hostName);
    }

    MPI_Finalize();
    return 0;
}