MPI: The Message Passing Interface
Mike Bailey
[email protected]
Oregon State University
MPI: The Basic Idea

[Diagram: multiple CPUs, each with its own memory, connected by a network]

Programs on different CPUs coordinate computations by passing messages between each other.
Setting Up and Finishing

#include <mpi.h>

int main( int argc, char *argv[ ] )
{
    • • •
    MPI_Init( &argc, &argv );
    • • •
    MPI_Finalize( );
    return 0;
}

If you don’t need to process command line arguments, you can also call:

    MPI_Init( NULL, NULL );
MPI Follows an SPMD Model

A “communicator” is a collection of CPUs that are capable of sending messages to each other. This requires that MPI be installed on all of those CPUs.

Getting information about our place in the communicator:

    int numCPUs;    // total # of CPUs involved
    int me;         // which one I am -- my “rank”

    MPI_Comm_size( MPI_COMM_WORLD, &numCPUs );
    MPI_Comm_rank( MPI_COMM_WORLD, &me );
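Putting the pieces together, here is a minimal complete SPMD sketch: every CPU runs this same executable and uses its rank to decide what to do (the printf is just a stand-in for real work):

#include <stdio.h>
#include <mpi.h>

int main( int argc, char *argv[ ] )
{
    int numCPUs;
    int me;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &numCPUs );
    MPI_Comm_rank( MPI_COMM_WORLD, &me );

    // every CPU executes this same line -- only the rank differs:
    printf( "I am rank %d of %d\n", me, numCPUs );

    MPI_Finalize( );
    return 0;
}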
Sending Data from a Source CPU to a Destination CPU

MPI_Send( array, numToSend, type, dst, tag, MPI_COMM_WORLD );

    array:      address of the data to send from
    numToSend:  # of elements (of the given type) to send
    type:       MPI_CHAR, MPI_INT, MPI_LONG, MPI_FLOAT, MPI_DOUBLE, • • •
    dst:        rank of the CPU to send to
    tag:        an integer to differentiate this transmission from any other transmission

Rules:
• One message from a specific src to a specific dst cannot overtake a previous message from the same src to the same dst.
• There are no guarantees on ordering among messages from different src’s.
Receiving Data at a Destination CPU from a Source CPU

MPI_Recv( array, maxCanReceive, type, src, tag, MPI_COMM_WORLD, &status );

    array:          address of the data to receive into
    maxCanReceive:  maximum # of elements (of the given type) that can be received
    type:           MPI_CHAR, MPI_INT, MPI_LONG, MPI_FLOAT, MPI_DOUBLE, • • •
    src:            rank of the CPU we are expecting to get a transmission from
    tag:            an integer to differentiate what transmission we are looking for with this call

Rules:
• The receiver always blocks until the message arrives.
• One message from a specific src to a specific dst cannot overtake a previous message from the same src to the same dst.
• There are no restrictions on ordering among messages from different src’s.
• status is of type MPI_Status -- it can be replaced with MPI_STATUS_IGNORE.
Example

This same code runs on all CPUs:

int numCPUs;
int me;
char *out = "Hello, Beavers!";
char in[128];

MPI_Comm_size( MPI_COMM_WORLD, &numCPUs );
MPI_Comm_rank( MPI_COMM_WORLD, &me );

if( me == 0 )
{
    for( int dst = 1; dst < numCPUs; dst++ )
    {
        MPI_Send( out, strlen(out)+1, MPI_CHAR, dst, 0, MPI_COMM_WORLD );
    }
}
else
{
    MPI_Recv( in, sizeof(in), MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );
}
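To try this, note that most MPI installations provide a compiler wrapper and a launcher; a typical invocation (exact names vary by implementation) is mpicc to compile and mpiexec -n 4 to run the same executable on 4 CPUs.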
MPI Reduction

MPI_Reduce( in_array, out_array, count, type, operator, dst, MPI_COMM_WORLD );

    operator:   MPI_MIN, MPI_MAX, MPI_SUM, MPI_PROD, MPI_MINLOC, MPI_MAXLOC, • • •
    type:       MPI_CHAR, MPI_INT, MPI_LONG, MPI_FLOAT, MPI_DOUBLE, • • •
    dst:        rank of the CPU that is given the answer

[Diagram: “Reduction” -- each CPU contributes its in_array values, which are combined (e.g., with +) into dst’s out_array]
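As a sketch of how this is typically used (partialSum and totalSum are names invented for this example), suppose each CPU has computed part of a sum:

float partialSum = (float)me;   // stand-in for this CPU's real partial result
float totalSum;

// combine every CPU's partialSum with MPI_SUM; only rank 0 (the dst) gets the answer:
MPI_Reduce( &partialSum, &totalSum, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD );

if( me == 0 )
    printf( "Total = %f\n", totalSum );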
MPI Broadcasting

MPI_Bcast( array, count, type, src, MPI_COMM_WORLD );

[Diagram: “Broadcast” -- src’s array is copied to every other CPU in the communicator]
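A minimal sketch, assuming rank 0 has obtained a value that every other CPU needs (numTrials is a name invented here):

int numTrials = 0;
if( me == 0 )
    numTrials = 1000000;    // e.g., parsed from the command line

// afterwards, every CPU's numTrials holds rank 0's value:
MPI_Bcast( &numTrials, 1, MPI_INT, 0, MPI_COMM_WORLD );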
MPI Scatter

MPI_Scatter( snd_array, snd_count, snd_type, rcv_array, rcv_count, rcv_type, src, MPI_COMM_WORLD );

    snd_type, rcv_type:     MPI_CHAR, MPI_INT, MPI_LONG, MPI_FLOAT, MPI_DOUBLE, • • •

Note that snd_count and rcv_count are the number of elements going to each CPU, not the total.

[Diagram: “Scatter” -- src’s snd_array is divided into equal pieces, one piece per CPU]
MPI Scatter Example

#define NUMDATA             1000000
#define EACH_CPUS_SHARE     ?????

float *Array;
float Local[ EACH_CPUS_SHARE ];

if( me == 0 )
{
    Array = new float[ NUMDATA ];
    • • •
    MPI_Scatter( Array, EACH_CPUS_SHARE, MPI_FLOAT, Local, EACH_CPUS_SHARE, MPI_FLOAT, me, MPI_COMM_WORLD );
}
else
{
    MPI_Scatter( NULL, EACH_CPUS_SHARE, MPI_FLOAT, Local, EACH_CPUS_SHARE, MPI_FLOAT, 0, MPI_COMM_WORLD );
}
MPI Gather

MPI_Gather( snd_array, snd_count, snd_type, rcv_array, rcv_count, rcv_type, dst, MPI_COMM_WORLD );

    snd_type, rcv_type:     MPI_CHAR, MPI_INT, MPI_LONG, MPI_FLOAT, MPI_DOUBLE, • • •

As with scatter, snd_count and rcv_count are the number of elements coming from each CPU, not the total.

[Diagram: “Gather” -- each CPU’s snd_array is collected into one large rcv_array on dst]
MPI Gather Example

#define NUMDATA             1000000
#define EACH_CPUS_SHARE     ?????

float *Array;
float Local[ EACH_CPUS_SHARE ];

if( me == 0 )
{
    Array = new float[ NUMDATA ];
    MPI_Gather( Local, EACH_CPUS_SHARE, MPI_FLOAT, Array, EACH_CPUS_SHARE, MPI_FLOAT, me, MPI_COMM_WORLD );
    • • •
}
else
{
    MPI_Gather( Local, EACH_CPUS_SHARE, MPI_FLOAT, NULL, EACH_CPUS_SHARE, MPI_FLOAT, 0, MPI_COMM_WORLD );
}
MPI Barriers
MPI_Barrier( MPI_COMM_WORLD );

[Diagram: “Barrier” -- each CPU that reaches the barrier waits until all CPUs in the communicator have arrived]
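One common use is timing a parallel section. This sketch uses MPI_Wtime( ); DoTheComputation( ) is a hypothetical stand-in for the real work:

MPI_Barrier( MPI_COMM_WORLD );      // wait until every CPU is ready
double time0 = MPI_Wtime( );

DoTheComputation( );                // hypothetical work function

MPI_Barrier( MPI_COMM_WORLD );      // wait until every CPU is done
double time1 = MPI_Wtime( );

if( me == 0 )
    printf( "Elapsed: %f seconds\n", time1 - time0 );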
MPI Derived Types

Idea: In addition to the types MPI_INT, MPI_FLOAT, etc., allow the creation of new MPI types so that you can transmit an “array of structures”.

Reason: There is significant overhead with each transmission. It is better to send one entire array of structures than to send several arrays separately.
MPI_Type_create_struct( count, blocklengths, displacements, types, datatype );
struct point
{
    int pointSize;
    float x, y, z;
};

MPI_Datatype point_t;

int blocklengths[ ] = { 1, 1, 1, 1 };
MPI_Aint displacements[ ] = { 0, 4, 8, 12 };
MPI_Datatype types[ ] = { MPI_INT, MPI_FLOAT, MPI_FLOAT, MPI_FLOAT };

MPI_Type_create_struct( 4, blocklengths, displacements, types, &point_t );
MPI_Type_commit( &point_t );    // a new type must be committed before it can be used
You can now use point_t everywhere you could have used MPI_INT, MPI_FLOAT, etc.
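For example (a sketch; NUMPOINTS and dst are names invented for illustration), one call can now transmit a whole array of structures:

#define NUMPOINTS 1024

struct point pts[ NUMPOINTS ];
• • •
// one transmission carries the entire array of structures:
MPI_Send( pts, NUMPOINTS, point_t, dst, 0, MPI_COMM_WORLD );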