DAG Workshop
Workshop on Developing Applications on Grid - GARUDA C-DAC/SSDG/2007
National Grid Computing Initiative - GARUDA

An Overview of Message Passing Interface (MPI)
BS Ramanjaneyulu, System Software Development Group, C-DAC Bangalore
28th – 30th June 2007, DAG’07 at IIT, Mumbai
Presentation Outline

What is MPI?
Where to use it? Why MPI?
Basics of MPI
How to compile and execute MPI programs?
Example program
MPI data types
Point-to-point communication
Collective communication
MPI features
What is MPI?

A message passing library specification.
Supports message passing among processes in parallel computing.
Meant for massively parallel computers, clusters and networks of workstations.
A standard only; implementation is left to individual vendors.
Message Passing Model

Each process has its own local data.
Processes therefore need to send and receive data among themselves.
The message passing model comes to the rescue.

[Figure: several processors (P), each with its own local memory (M), connected by a COMMUNICATION NETWORK]
Where to use MPI?

You need a parallel program.
You are writing a parallel library.
You have irregular data relationships that do not fit a data parallel model.

Why MPI?

Standardization (universally accepted)
Portable and scalable
Rich functionality (more than 120 functions)
Performance opportunities
MPI Implementations

Public domain MPI (MPICH)
SUN MPI
IBM MPI
Intel MPI
C-MPI (from C-DAC)
MPI Basics

Basic steps in an MPI program:
Initialize for communications [ MPI_Init ]
Communicate between processes [ MPI_Send, MPI_Recv, ... ]
Exit when all communications are over [ MPI_Finalize ]
Structure of MPI Program

[Figure: typical structure of an MPI program]
Compile and Execute MPI Programs

Compiling:
mpicc program.c
mpif77 program.f

Executing an MPI program:
mpirun -np 4 -machinefile hosts [other options] [args]
-np gives the number of processes; hosts is the file containing the information on the hosts used.
Contents of hosts file:
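A plausible machine file and build-and-run session might look like the following; the hostnames here are placeholders, not actual cluster nodes:

```shell
# hosts -- one machine name per line (placeholder names):
node1
node2
node3
node4

# typical session, assuming an MPICH-style installation:
# mpicc -o a.out program.c
# mpirun -np 4 -machinefile hosts ./a.out
```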
Format of MPI Calls

C Language Bindings

Format:  rc = MPI_Xxxxx(parameter, ...)
Example: rc = MPI_Bsend(&buf, count, type, dest, tag, comm);

All MPI functions are case sensitive.
All MPI calls begin with “MPI_” followed by the actual function name.
C programs should include the file mpi.h.
The return code rc is an integer; it is set to MPI_SUCCESS upon success.
Format of MPI Calls (Contd…)

Fortran Bindings

Format:  CALL MPI_XXXXX(parameter, ..., ierr)
Example: CALL MPI_BSEND(buf, count, type, dest, tag, comm, ierr)

Case is not important here.
Fortran programs should include the file mpif.h.
An additional parameter ierr takes care of the function status.
MPI Communications

Point-to-Point Communication
Collective Communication
MPI Point-to-Point Communication

Communication between two processes.
A source process sends a message to a destination process.
Communication takes place within a communicator.
The destination process is identified by its rank in the communicator.
Communicator

Messages are sent / received within a given “universe.”
The communicator is that “communication universe.”
MPI_COMM_WORLD is the default communicator.
What is ‘Rank’ in a Communicator?

A unique integer identifier for every process.
Assigned by the system during initialization.
Ranks are contiguous and begin at zero.
Used to specify the source and destination of messages.
Often used to control program execution (if rank = 0 do this / if rank = 1 do that).
MPI Send and Receive

Sending and receiving messages:

[Figure: Process 1 calls Send; Process 2 calls Recv]

Fundamental questions:
To whom is the data sent?
What is sent?
How does the receiver identify it?
MPI Send and Receive

MPI_Send (&buf, count, datatype, dest, tag, comm)
MPI_Recv (&buf, count, datatype, source, tag, comm, &status)

Arguments of MPI_Send & MPI_Recv:
buf: address where the data starts
count: number of elements (items) of data in the message
datatype: datatype of the message passed / received
source or dest: rank of the sending or receiving process
tag: integer to distinguish messages
comm: communicator
Example Program in MPI

#include <stdio.h>
#include <string.h>
#include "mpi.h"

int main (int argc, char **argv)
{
   int MyRank, Numprocs, tag, rc, i;
   MPI_Status status;
   char message[12];

   rc = MPI_Init (&argc, &argv);
   rc = MPI_Comm_size (MPI_COMM_WORLD, &Numprocs);
   rc = MPI_Comm_rank (MPI_COMM_WORLD, &MyRank);
   tag = 100;
   strcpy (message, "Hello_World");

   /* rank 0 distributes the message; every other rank receives it */
   if (MyRank == 0) {
      for (i = 1; i < Numprocs; i++)
         rc = MPI_Send (message, 12, MPI_CHAR, i, tag, MPI_COMM_WORLD);
   }
   else {
      rc = MPI_Recv (message, 12, MPI_CHAR, 0, tag, MPI_COMM_WORLD, &status);
      printf ("Process %d received %s\n", MyRank, message);
   }

   rc = MPI_Finalize ();
   return 0;
}