Introduction to Parallel Programming with MPI

Purdue School on HPC, September 4-5, 2008
Dave Seaman ([email protected])
Rosen Center for Advanced Computing

MPI Features
- Messages allow data interchange between processes
- Distributed memory
- Portability

Message Passing
- Simple point-to-point messages transfer data from one process to another.
- Collective operations allow multiple nodes to cooperate in processing shared data.
- Message operations may be blocking or nonblocking.

Distributed Memory
- Most MPI calls assume that each process has its own memory.
- Large arrays need not be stored on a single node but may be split among multiple nodes.
- Implementations may optimize for shared memory, when applicable.

Portability
- MPI is available on a wide variety of computers.
- Vendors often provide their own versions of MPI as part of the OS or as part of an HPC toolkit.
- Free MPI implementations abound (MPICH, MPICH2, OpenMPI and others).

Skeleton MPI Program

#include <mpi.h>

int main(int argc, char *argv[])
{
    int tasks, iam;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &tasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &iam);
    /* Do the work here. */

    MPI_Finalize();
    return 0;
}
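A skeleton like this is built with the wrapper compiler and launched with the process launcher shipped by the MPI implementation. The exact command names vary by vendor; `mpicc` and `mpiexec` shown here are the common ones (`mpirun` on some systems), and the binary name is illustrative:

```shell
mpicc -O2 -o skeleton skeleton.c   # wrapper compiler adds MPI headers/libraries
mpiexec -n 4 ./skeleton            # launch 4 processes, ranks 0 through 3
```

Every launched process runs the same program; the only thing distinguishing them is the rank returned by MPI_Comm_rank.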

Fortran Skeleton

program skeleton
    ! Older codes use: include 'mpif.h'

    use mpi
    implicit none
    integer :: tasks, iam, ioerror
    call MPI_INIT(ioerror)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, tasks, ioerror)
    call MPI_COMM_RANK(MPI_COMM_WORLD, iam, ioerror)
    ! Do the work here.

    call MPI_FINALIZE(ioerror)
end program skeleton

Example 1: hello.c

#define MSG_LENGTH 15
/* … */

int i, tag = 1;
char message[MSG_LENGTH];
MPI_Status status;

/* What follows is the "Do the work here" part in the skeleton. */
if (iam == 0) {
    strcpy(message, "Hello, world!");
    for (i = 1; i < tasks; i++)
        MPI_Send(message, MSG_LENGTH, MPI_CHAR, i, tag, MPI_COMM_WORLD);
} else {
    MPI_Recv(message, MSG_LENGTH, MPI_CHAR, 0, tag, MPI_COMM_WORLD, &status);
    printf("Process %d: %s\n", iam, message);
}

Outline of cycle.c

/* Pass a message around a cycle of processes: 0 -> 1 -> 2 -> 3 -> … -> 0. */
int send_to = (iam + 1) % numprocs;
int receive_from = (iam + numprocs - 1) % numprocs;

/* First post an asynchronous receive. */
MPI_Request in_request, out_request;
int res = MPI_Irecv(message, BUFFER_SIZE, MPI_CHAR, receive_from,
                    TAG, MPI_COMM_WORLD, &in_request);

Outline of cycle.c (Cont.)

if (iam == ROOT_PROC) {
    /* Start the ball rolling… */
    res = MPI_Isend(message, message_length, MPI_CHAR, send_to,
                    TAG, MPI_COMM_WORLD, &out_request);
}

/* Wait for the message to arrive. */
res = MPI_Wait(&in_request, &status);

Outline of cycle.c (Cont.)

/* The message has arrived.  Print or pass it along. */
if (iam == ROOT_PROC) {
    printf("%s", message);
} else {
    MPI_Get_count(&status, MPI_CHAR, &message_length);
    res = MPI_Isend(message, message_length, MPI_CHAR, send_to,
                    TAG, MPI_COMM_WORLD, &out_request);
}

res = MPI_Wait(&out_request, &status);

MPI_Irecv

C Synopsis:
int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source,
              int tag, MPI_Comm comm, MPI_Request *request)

Fortran Synopsis:
<type> BUF(*)
INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM, REQUEST, IERROR
call MPI_IRECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, REQUEST, IERROR)

C++ Synopsis:
MPI::Request MPI::Comm::Irecv(void *buf, int count,
    const MPI::Datatype& datatype, int source, int tag) const

MPI_Isend

C Synopsis:
int MPI_Isend(void *buf, int count, MPI_Datatype datatype, int dest,
              int tag, MPI_Comm comm, MPI_Request *request)

Fortran Synopsis:
<type> BUF(*)
INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR
call MPI_ISEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)

C++ Synopsis:
MPI::Request MPI::Comm::Isend(const void *buf, int count,
    const MPI::Datatype& datatype, int dest, int tag) const

MPI_Wait

C Synopsis:
int MPI_Wait(MPI_Request *request, MPI_Status *status)

Fortran Synopsis:
INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR
call MPI_WAIT(REQUEST, STATUS, IERROR)

C++ Synopsis:
void MPI::Request::Wait(MPI::Status& status)
void MPI::Request::Wait()

MPI_Get_count

C Synopsis:
int MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count)

Fortran Synopsis:
INTEGER STATUS(MPI_STATUS_SIZE), DATATYPE, COUNT, IERROR
call MPI_GET_COUNT(STATUS, DATATYPE, COUNT, IERROR)

C++ Synopsis:
int MPI::Status::Get_count(const MPI::Datatype& datatype) const