MPI Parallel Program Design

[Front matter: the preface and table of contents were garbled beyond recovery in extraction. The surviving headings indicate the book covers: an introduction to parallel computing; what MPI is and where it came from; a first MPI program (Hello World in FORTRAN77, C and Fortran90); the basic point-to-point calls and datatype matching; the Jacobi iteration as a running example, in blocking, MPI_SENDRECV and nonblocking forms; master-slave programs; installing and running MPICH under Linux and Windows NT; the four communication modes; nonblocking communication; and an overview of MPI-1 and MPI-2, including derived datatypes, collective operations, one-sided communication (MPI_PUT, MPI_GET, MPI_ACCUMULATE, MPI_WIN_FENCE) and parallel I/O (MPI_FILE_READ_AT, MPI_FILE_WRITE_AT).]
1 Parallel program classification

Parallel execution is classified at the instruction level as SIMD (Single-Instruction Multiple-Data) or MIMD (Multiple-Instruction Multiple-Data). In a SIMD computation such as A = A + 1, where A is an array, the same instruction (add 1) is applied simultaneously to every element of A. MIMD places no such restriction: to evaluate A = B + C + D - E + F*G, regrouped as A = (B+C) + (D-E) + (F*G), the subexpressions B+C, D-E and F*G can be computed at the same time by different instructions. At the program level the corresponding classification is SPMD (Single-Program Multiple-Data) versus MPMD (Multiple-Program Multiple-Data). The letters combine systematically: S or M for single or multiple, I for instruction, P for program, D for data; SIMD and MIMD classify instruction streams, SPMD and MPMD classify program copies, and the remaining combinations (SISD, MISD, SPSD, MPSD) are formed the same way.
Cluster computing runs such parallel programs on networks of commodity machines, which is the setting assumed throughout this book.

[Figures 1 and 4-7, contrasting these models, were lost in extraction. Their content, as recoverable from the captions: Figure 1 shows a SIMD/SPMD data-parallel step, the statement A = B + C applied across whole arrays B and C into A; Figures 5 and 6 show SPMD structures, in which every process runs the same program; Figure 7 shows an MPMD structure, in which different processes run different programs that cooperate.] MPI supports both SPMD and MPMD; an MPMD computation can always be recast in SPMD form by branching on the process rank, and SPMD is the dominant MPI style. The programming languages assumed below are FORTRAN and C (see Section 2.3).
This part introduces MPI itself: what MPI is, how to write, compile and run basic MPI programs, how to install the MPICH implementation under Linux and Windows NT, and how MPI-1 relates to MPI-2.
4 What is MPI?

4.1 Understanding MPI. MPI (Message Passing Interface) is best understood in three ways. First, MPI is a standard, not a programming language: one writes FORTRAN+MPI or C+MPI programs, that is, ordinary FORTRAN77/C/Fortran90/C++ programs that call the MPI library. Second, MPI is a library specification: it defines the names, calling sequences and results of a set of subroutines and functions, which every implementation must provide. Third, MPI is a message-passing programming model, and it has become the de facto standard for that model.

4.2 Goals of MPI. MPI aims at high performance, wide portability and practicality. It drew on the experience of earlier message-passing systems such as PVM, NX, Express and p4, and was first defined with bindings for C and Fortran 77.
4.3 The origin and development of MPI. MPI absorbed ideas from many earlier systems: Venus (IBM), NX/2 (Intel), Express (Parasoft), Vertex (nCUBE), P4 (ANL), PARMACS (ANL), Zipcode (MSU), Chimp (Edinburgh University), PVM (ORNL, UTK, Emory U.), Chameleon (ANL) and PICL (ANL). The standardization effort, initiated by Dongarra, Hempel, Hey and Walker, produced MPI 1.0, later refined as MPI 1.1. MPI-2 subsequently extended the standard, adding among other things parallel I/O, while MPI-1 remains its core: an MPI-2 implementation contains MPI-1 as a subset.

MPI-1 defines language bindings for FORTRAN 77 and C. A Fortran90 program can use the FORTRAN 77 bindings (Fortran90 being essentially a superset of FORTRAN 77) and a C++ program the C bindings; MPI-2 adds genuine Fortran90 and C++ bindings of its own.
4.5 Principal MPI implementations. MPICH is the most important free MPI implementation: it tracks the standard closely (a complete MPI-1 implementation, with MPI-2 features following as the standard develops) and is developed at Argonne and MSU. CHIMP, developed at EPCC (the Edinburgh Parallel Computing Centre) by Alasdair Bruce, James (Hamish) Mills and Gordon Smith, is available from ftp://ftp.epcc.ed.ac.uk/pub/packages/chimp/release/. LAM (Local Area Multicomputer), from Ohio State University, is an MPI implementation aimed at workstation clusters. In summary (Table 2):

free implementation   source                    where to get it
Mpich                 Argonne and MSU           ftp://ftp.mcs.anl.org/pub/mpi
Chimp                 Edinburgh                 ftp://ftp.epcc.ed.ac.uk/pub/packages/chimp/
Lam                   Ohio State University     (LAM/MPI distribution site)

All examples in this book use the FORTRAN and C bindings of MPI.
5 A first MPI program

This chapter presents the classic Hello World program in its MPI forms, in both FORTRAN and C.

5.1 MPI versions of Hello World!

Listing 1 below is the FORTRAN77+MPI version. An MPI FORTRAN program must include the header file mpif.h, which declares the MPI constants and interfaces; it plays the role that mpi.h plays for C. (A Fortran90+MPI program may instead say "use mpi", the MPI-2 module form, as in Listing 4 later; MPI-2 also defines Fortran90 and C++ bindings.) MPI_MAX_PROCESSOR_NAME is an MPI-defined constant giving the maximum length of a processor name; processor_name, myid, numprocs, namelen, rc and ierr are ordinary program variables. Every MPI program begins with MPI_INIT and ends with MPI_FINALIZE; between them, MPI_COMM_RANK returns the rank of the calling process in myid, MPI_COMM_SIZE returns the number of processes in numprocs, and MPI_GET_PROCESSOR_NAME returns the host name in processor_name together with its length in namelen. The write statement is ordinary FORTRAN output. Figure 8 shows the output of a run with 4 processes on the single node tp5; Figure 9 a run across the four nodes tp1, tp3, tp4 and tp5; Figure 10 traces the execution.
      program main
      include 'mpif.h'
      character*(MPI_MAX_PROCESSOR_NAME) processor_name
      integer myid, numprocs, namelen, rc, ierr
      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
      call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
      call MPI_GET_PROCESSOR_NAME(processor_name, namelen, ierr)
      write(*,10) myid, numprocs, processor_name
10    FORMAT('Hello World! Process ',I2,' of ',I1,' on ', 20A)
      call MPI_FINALIZE(rc)
      end

(Listing 1: Hello World in FORTRAN77+MPI.)

Run with 4 processes on the single node tp5:

Hello World! Process 1 of 4 on tp5
Hello World! Process 0 of 4 on tp5
Hello World! Process 2 of 4 on tp5
Hello World! Process 3 of 4 on tp5

(Figure 8: the FORTRAN77+MPI program on one node.)

Run with 4 processes on the nodes tp1, tp3, tp4 and tp5:

Hello World! Process 0 of 4 on tp5
Hello World! Process 1 of 4 on tp1
Hello World! Process 2 of 4 on tp3
Hello World! Process 3 of 4 on tp4

(Figure 9: the FORTRAN77+MPI program on four nodes.)
[Figure 10: execution of the Hello World program on four processes. Each process independently calls MPI_INIT; obtains its rank with MPI_COMM_RANK (myid = 0, 1, 2 and 3 respectively); obtains processor_name = tp5 and namelen = 3 with MPI_GET_PROCESSOR_NAME; executes the write statement, printing "Hello World! Process <myid> of 4 on tp5"; and calls MPI_FINALIZE.]

Listing 3 below is the C+MPI version; its structure parallels the FORTRAN77+MPI version exactly. An MPI C program includes the header mpi.h, the counterpart of mpif.h, and MPI_MAX_PROCESSOR_NAME is again the MPI-defined maximum name length; processor_name, myid, numprocs and namelen are ordinary C variables. MPI_Init and MPI_Finalize again bracket the MPI part of the program. Note the C naming convention: where the FORTRAN77 binding writes MPI_INIT, the C binding writes MPI_Init -- after the reserved prefix MPI_, the first letter is capitalized and the rest are lower case. MPI_Comm_rank returns the rank in myid, MPI_Comm_size the number of processes in numprocs, and MPI_Get_processor_name the host name and its length; fprintf is ordinary C output. Run with 4 processes on tp5, and then across tp1, tp3, tp4 and tp5, the output corresponds to the FORTRAN77+MPI runs:

Hello World! Process 0 of 4 on tp5
Hello World! Process 1 of 4 on tp5
Hello World! Process 3 of 4 on tp5
Hello World! Process 2 of 4 on tp5

(Figure 11: the C+MPI program on one node.)

Hello World! Process 0 of 4 on tp5
Hello World! Process 1 of 4 on tp1
Hello World! Process 2 of 4 on tp3
Hello World! Process 3 of 4 on tp4

(Figure 12: the C+MPI program on four nodes.)
#include "mpi.h"
#include <stdio.h>
#include <math.h>
void main(argc,argv)
int argc;
char *argv[];
{
    int  myid, numprocs;
    int  namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc,&argv);
    MPI_Comm_rank(MPI_COMM_WORLD,&myid);
    MPI_Comm_size(MPI_COMM_WORLD,&numprocs);
    MPI_Get_processor_name(processor_name,&namelen);
    fprintf(stderr,"Hello World! Process %d of %d on %s\n",
            myid, numprocs, processor_name);
    MPI_Finalize();
}

(Listing 3: Hello World in C+MPI.)

      program main
      use mpi
      character*(MPI_MAX_PROCESSOR_NAME) processor_name
      integer myid, numprocs, namelen, rc, ierr
      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
      call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
      call MPI_GET_PROCESSOR_NAME(processor_name, namelen, ierr)
      print *, "Hello World! Process ", myid, " of ", numprocs,
     &         " on ", processor_name
      call MPI_FINALIZE(rc)
      end

(Listing 4: Hello World in Fortran90+MPI.)
5.2 MPI program conventions. All MPI names begin with the prefix MPI_; user programs must not declare variables, functions or subroutines whose names begin with MPI_, to avoid clashes with the library. In the FORTRAN binding all MPI names are written in upper case; in the C binding they have the form MPI_Aaaa_aaa. FORTRAN MPI subroutines return their error code in the final IERROR argument -- one extra argument relative to C -- while C MPI functions return it as the function value; a successful call returns MPI_SUCCESS. The MPI FORTRAN bindings assume ANSI FORTRAN 77 and the C bindings ANSI C, with the necessary declarations supplied by mpif.h and mpi.h respectively.
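As a small illustration of the return-code convention -- a sketch, not from the original text -- a C program can compare the function result against MPI_SUCCESS (note that under the default error handler a failing call normally aborts before returning, so such checks matter mainly when the handler has been changed):

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rc, rank;
    MPI_Init(&argc, &argv);
    /* in C the error code is the return value of the call */
    rc = MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rc != MPI_SUCCESS) {
        fprintf(stderr, "MPI_Comm_rank failed with code %d\n", rc);
        MPI_Abort(MPI_COMM_WORLD, rc);
    }
    MPI_Finalize();
    return 0;
}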
5.3 The general structure of an MPI program. A typical MPI program has the shape: include the MPI header file; declare variables; initialize the MPI environment with MPI_INIT; perform the computation and communication; and shut down with MPI_FINALIZE. The Hello World programs above already exhibit this structure, and they are SPMD (Single Program Multiple Data) programs: every process executes the same code, distinguished only by its rank.
6 The basic MPI functions

MPI is large, but a small subset suffices to write complete programs; this chapter presents that basic subset. Each call is given in three forms (Figure 14 shows the general layout): the language-independent definition, the C binding, and the FORTRAN 77 binding (MPI-2 also defines C++ bindings, not covered here). In the definitions each argument is marked IN, OUT or INOUT: an IN argument is only read by MPI, an OUT argument is only written, and an INOUT argument is both read and written. The same argument must not be passed twice in one call if one of its uses is OUT or INOUT: MPI forbids aliasing of OUT and INOUT arguments.
The aliasing restriction mirrors the analogous (if often unstated) restriction of FORTRAN 77. In C, the following call is syntactically legal but its two buffer arguments overlap:

void copyintbuffer( int *pin, int *pout, int len )
{
    int i;
    for (i=0; i<len; ++i) *pout++ = *pin++;
}

int a[10];
copyintbuffer( a, a+3, 7 );   /* pin and pout overlap */

MPI calls must avoid such overlaps. As an example of the three-form notation, here is MPI_INIT:

MPI_INIT()
int MPI_Init(int *argc, char ***argv)
MPI_INIT(IERROR)
    INTEGER IERROR

In C, MPI_Init takes the addresses of main's argc and argv; in FORTRAN 77 the only argument is the error code IERROR. One further notational point: where the C binding declares a choice buffer as void*, the FORTRAN 77 binding writes <type> BUF(*), meaning the buffer may have any datatype, as in MPI_SEND below.
6.1.2 MPI_INIT

MPI_INIT()
int MPI_Init(int *argc, char ***argv)
MPI_INIT(IERROR)
    INTEGER IERROR
        (MPI call 1: MPI_INIT)

MPI_INIT is the first MPI call of a program: it initializes the MPI execution environment, and no other MPI call may precede it.

6.1.3 MPI_FINALIZE

MPI_FINALIZE()
int MPI_Finalize(void)
MPI_FINALIZE(IERROR)
    INTEGER IERROR
        (MPI call 2: MPI_FINALIZE)

MPI_FINALIZE is the last MPI call: it terminates the MPI environment, and no MPI call may follow it.

6.1.4 MPI_COMM_RANK

MPI_COMM_RANK(comm, rank)
  IN  comm  communicator (handle)
  OUT rank  rank of the calling process in comm (integer)
int MPI_Comm_rank(MPI_Comm comm, int *rank)
MPI_COMM_RANK(COMM, RANK, IERROR)
    INTEGER COMM, RANK, IERROR
        (MPI call 3: MPI_COMM_RANK)

MPI_COMM_RANK returns the rank of the calling process within the communicator comm.
6.1.5 MPI_COMM_SIZE

MPI_COMM_SIZE(comm, size)
  IN  comm  communicator (handle)
  OUT size  number of processes in comm (integer)
int MPI_Comm_size(MPI_Comm comm, int *size)
MPI_COMM_SIZE(COMM, SIZE, IERROR)
    INTEGER COMM, SIZE, IERROR
        (MPI call 4: MPI_COMM_SIZE)

MPI_COMM_SIZE returns the number of processes in the communicator comm.

6.1.6 MPI_SEND

MPI_SEND(buf, count, datatype, dest, tag, comm)
  IN buf       initial address of the send buffer (choice)
  IN count     number of elements to send (nonnegative integer)
  IN datatype  datatype of each element (handle)
  IN dest      rank of the destination (integer)
  IN tag       message tag (integer)
  IN comm      communicator (handle)
int MPI_Send(void* buf, int count, MPI_Datatype datatype, int dest,
             int tag, MPI_Comm comm)
MPI_SEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR
        (MPI call 5: MPI_SEND)

MPI_SEND sends count elements of type datatype, starting at buf, to the process with rank dest, labelling the message with tag. Note that count is a number of data elements, not a number of bytes: the message consists of count consecutive occurrences of datatype. datatype may be a predefined MPI type or a user-defined derived type.
6.1.7 MPI_RECV

MPI_RECV(buf, count, datatype, source, tag, comm, status)
  OUT buf       initial address of the receive buffer (choice)
  IN  count     maximum number of elements to receive (integer)
  IN  datatype  datatype of each element (handle)
  IN  source    rank of the source (integer)
  IN  tag       message tag (integer)
  IN  comm      communicator (handle)
  OUT status    status object (status)
int MPI_Recv(void* buf, int count, MPI_Datatype datatype, int source,
             int tag, MPI_Comm comm, MPI_Status *status)
MPI_RECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM,
            STATUS(MPI_STATUS_SIZE), IERROR
        (MPI call 6: MPI_RECV)

MPI_RECV receives a message from the process with rank source whose tag matches tag. The receive buffer consists of count consecutive elements of type datatype starting at buf; an incoming message longer than the buffer is an error, while a shorter one is allowed -- count is an upper bound. The status argument returns information about the message actually received. In C, status is a structure with fields MPI_SOURCE, MPI_TAG and MPI_ERROR, accessed as status.MPI_SOURCE, status.MPI_TAG and status.MPI_ERROR: the actual source, tag and error code. In FORTRAN, status is an integer array of size MPI_STATUS_SIZE, and status(MPI_SOURCE), status(MPI_TAG) and status(MPI_ERROR) hold the same three values. These fields matter especially when the receive used the wildcards described in Section 6.4.
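The length of the received message is obtained from status with MPI_GET_COUNT, mentioned at the start of the next example. A minimal C sketch -- the buffer size 100, the tag 0 and the payload are illustrative, not from the text:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, buf[100], payload[5] = {1, 2, 3, 4, 5}, nreceived;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        MPI_Send(payload, 5, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* count = 100 is only an upper bound */
        MPI_Recv(buf, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        /* how many MPI_INTs actually arrived? */
        MPI_Get_count(&status, MPI_INT, &nreceived);
        printf("got %d ints from rank %d with tag %d\n",
               nreceived, status.MPI_SOURCE, status.MPI_TAG);
    }
    MPI_Finalize();
    return 0;
}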
The message length recorded in status is extracted with MPI_GET_COUNT (illustrated above). The following program sends the string "Hello, process 1" from process 0 to process 1:

#include <stdio.h>
#include <string.h>
#include "mpi.h"
main( argc, argv )
int argc;
char **argv;
{
    char message[20];
    int  myrank;
    MPI_Status status;
    MPI_Init( &argc, &argv );                 /* initialize MPI */
    MPI_Comm_rank( MPI_COMM_WORLD, &myrank ); /* obtain my rank */
    if (myrank == 0)       /* process 0 */
    {
        /* send the contents of message: strlen(message) elements of
           type MPI_CHAR, to destination 1, tag 99, in MPI_COMM_WORLD */
        strcpy(message, "Hello, process 1");
        MPI_Send(message, strlen(message), MPI_CHAR, 1, 99,
                 MPI_COMM_WORLD);
    }
    else if (myrank == 1)  /* process 1 */
    {
        /* receive up to 20 MPI_CHARs from source 0 with tag 99 in
           MPI_COMM_WORLD; details of the message arrive in status */
        MPI_Recv(message, 20, MPI_CHAR, 0, 99, MPI_COMM_WORLD, &status);
        printf("received :%s:", message);
    }
    MPI_Finalize();        /* shut down MPI */
}
6.2 MPI predefined datatypes. The FORTRAN77 binding defines the following datatypes:

MPI datatype            FORTRAN77 datatype
MPI_INTEGER             INTEGER
MPI_REAL                REAL
MPI_DOUBLE_PRECISION    DOUBLE PRECISION
MPI_COMPLEX             COMPLEX
MPI_LOGICAL             LOGICAL
MPI_CHARACTER           CHARACTER(1)
MPI_BYTE                (no counterpart)
MPI_PACKED              (no counterpart)

The C binding defines:

MPI datatype            C datatype
MPI_CHAR                signed char
MPI_SHORT               signed short int
MPI_INT                 signed int
MPI_LONG                signed long int
MPI_UNSIGNED_CHAR       unsigned char
MPI_UNSIGNED_SHORT      unsigned short int
MPI_UNSIGNED            unsigned int
MPI_UNSIGNED_LONG       unsigned long int
MPI_FLOAT               float
MPI_DOUBLE              double
MPI_LONG_DOUBLE         long double
MPI_BYTE                (no counterpart)
MPI_PACKED              (no counterpart)

MPI_BYTE and MPI_PACKED have no counterpart in either language: a value of type MPI_BYTE is a raw byte (8 bits), transferred uninterpreted. The types above cover ANSI Fortran 77 and ANSI C; an implementation may additionally offer the optional datatypes of Table 5 when the host language supports them:

Table 5: optional MPI datatypes

MPI datatype            language type
MPI_LONG_LONG_INT       long long int (C)
MPI_DOUBLE_COMPLEX      DOUBLE COMPLEX (FORTRAN77)
MPI_REAL2               REAL*2 (FORTRAN77)
MPI_REAL4               REAL*4 (FORTRAN77)
MPI_REAL8               REAL*8 (FORTRAN77)
MPI_INTEGER1            INTEGER*1 (FORTRAN77)
MPI_INTEGER2            INTEGER*2 (FORTRAN77)
MPI_INTEGER4            INTEGER*4 (FORTRAN77)

6.3 MPI datatype matching. A communication involves three type assertions that must agree: (1) the type of the variable in the sending program must match the datatype named in the send call; (2) the send datatype must match the receive datatype; (3) the receive datatype must match the type of the variable in the receiving program. Thus a FORTRAN77 INTEGER is sent and received as MPI_INTEGER and a REAL as MPI_REAL; in C an int corresponds to MPI_INT and a float to MPI_FLOAT.
It would be an error, for example, to send with MPI_INTEGER and receive with MPI_REAL, or to pass a C long where MPI_INT is declared: MPI_INT matches int and MPI_LONG matches long, and the two must not be mixed. Two types relax the matching rules: MPI_BYTE matches any data viewed as raw bytes, and MPI_PACKED matches data explicitly packed with MPI_PACK and unpacked with MPI_UNPACK. The fragments below illustrate legal combinations (rank, tag, comm and status are assumed declared appropriately).

Sender and receiver both use MPI_REAL; the receive count 15 is only an upper bound on the 10 values sent:

      REAL a(20), b(20)
      CALL MPI_COMM_RANK(comm, rank, ierr)
      IF (rank .EQ. 0) THEN
          CALL MPI_SEND(a(1), 10, MPI_REAL, 1, tag, comm, ierr)
      ELSE IF (rank .EQ. 1) THEN
          CALL MPI_RECV(b(1), 15, MPI_REAL, 0, tag, comm, status, ierr)
      END IF

(Example 6: matching MPI_REAL with MPI_REAL.)

Sender uses MPI_REAL, receiver MPI_BYTE; the 40 bytes hold the 10 4-byte REALs (safe only on homogeneous systems):

      REAL a(20), b(20)
      CALL MPI_COMM_RANK(comm, rank, ierr)
      IF (rank .EQ. 0) THEN
          CALL MPI_SEND(a(1), 10, MPI_REAL, 1, tag, comm, ierr)
      ELSE IF (rank .EQ. 1) THEN
          CALL MPI_RECV(b(1), 40, MPI_BYTE, 0, tag, comm, status, ierr)
      END IF

(Example 7: matching MPI_REAL with MPI_BYTE.)

Both sides use MPI_BYTE:

      REAL a(20), b(20)
      CALL MPI_COMM_RANK(comm, rank, ierr)
      IF (rank .EQ. 0) THEN
          CALL MPI_SEND(a(1), 40, MPI_BYTE, 1, tag, comm, ierr)
      ELSE IF (rank .EQ. 1) THEN
          CALL MPI_RECV(b(1), 60, MPI_BYTE, 0, tag, comm, status, ierr)
      END IF

(Example 8: matching MPI_BYTE with MPI_BYTE.)

MPI_CHARACTER is special: it matches a single character of a FORTRAN 77 CHARACTER variable, not the whole variable. In Example 9, process 0 sends the first 5 characters of a, and process 1 stores them into the last 5 characters of b:

      CHARACTER*10 a
      CHARACTER*10 b
      CALL MPI_COMM_RANK(comm, rank, ierr)
      IF (rank .EQ. 0) THEN
          CALL MPI_SEND(a, 5, MPI_CHARACTER, 1, tag, comm, ierr)
      ELSE IF (rank .EQ. 1) THEN
          CALL MPI_RECV(b(6:10), 5, MPI_CHARACTER, 0, tag, comm,
     &                  status, ierr)
      END IF

(Example 9: transferring FORTRAN 77 CHARACTER data.)

Because a Fortran CHARACTER variable may be represented in implementation-dependent ways, MPI transfers CHARACTER data character by character rather than as a whole object.
Recall the form of a send: MPI_SEND(buf, count, datatype, dest, tag, comm). When a message travels between hosts with different data representations, MPI converts the representation of each element according to its datatype; MPI_BYTE alone is transferred without conversion. Thus if a REAL value a is sent and received as MPI_REAL between machines with different floating-point formats, the receiver still obtains the value of a, suitably converted; with MPI_BYTE the bit pattern is copied as-is.

6.4 MPI message structure and matching. An MPI message consists of two parts: the envelope and the data. The envelope comprises the source (or destination), the tag and the communicator -- the arguments dest/source, tag and comm; the data part is described by buf, count and datatype. (Figures 16 and 17 displayed the argument lists of MPI_SEND and MPI_RECV grouped this way.)

The tag distinguishes messages that would otherwise look identical. If process 0 sends two values x and y to process 1,

      CALL MPI_SEND(x, 1, <type>, 1, tag1, comm)   ! first send
      CALL MPI_SEND(y, 1, <type>, 1, tag2, comm)   ! second send

then by giving the two sends different tags the receiver controls which value lands in which variable, regardless of how closely the messages follow each other:

      CALL MPI_RECV(x, 1, <type>, 0, tag1, comm, status)  ! matches tag1 only
      CALL MPI_RECV(y, 1, <type>, 0, tag2, comm, status)  ! matches tag2 only

A receive matches a message on all of source, tag and comm. The source may be given as MPI_ANY_SOURCE, accepting a message from any sender, and the tag as MPI_ANY_TAG, accepting any tag; the communicator has no wildcard. A send, by contrast, must name a specific destination and tag: MPI_ANY_SOURCE and MPI_ANY_TAG are legal only on the receive side.
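A concrete C rendering of the tag rule above -- a sketch whose tags 1 and 2 and payloads are illustrative, not from the text:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank;
    double x = 0.0, y = 0.0;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        x = 3.14; y = 2.72;
        MPI_Send(&x, 1, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD);  /* tag 1 */
        MPI_Send(&y, 1, MPI_DOUBLE, 1, 2, MPI_COMM_WORLD);  /* tag 2 */
    } else if (rank == 1) {
        /* each receive names its tag, so x and y cannot be swapped */
        MPI_Recv(&x, 1, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD, &status);
        MPI_Recv(&y, 1, MPI_DOUBLE, 0, 2, MPI_COMM_WORLD, &status);
        printf("x=%f y=%f\n", x, y);
    }
    MPI_Finalize();
    return 0;
}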
A communicator bundles a communication context with a group of processes. When an MPI program runs with N processes, they are numbered 0 to N-1 within the predefined communicator MPI_COMM_WORLD, which contains all of them; every point-to-point communication takes place within one communicator.

6.5 Summary. The six calls of this chapter -- MPI_INIT, MPI_FINALIZE, MPI_COMM_RANK, MPI_COMM_SIZE, MPI_SEND and MPI_RECV -- already form a complete subset of MPI: in principle any message-passing program can be written with them alone. The rest of MPI adds convenience, expressiveness and performance.
7 Other commonly used MPI calls

7.1 Timing. MPI provides a portable timer:

MPI_WTIME()
double MPI_Wtime(void)
DOUBLE PRECISION MPI_WTIME()
        (MPI call 7: MPI_WTIME)

MPI_WTIME returns the elapsed wall-clock time in seconds since some fixed moment in the past. A typical use brackets the code to be timed:

double starttime, endtime;
...
starttime = MPI_Wtime();
/* the code to be timed */
endtime = MPI_Wtime();
printf("That took %f seconds\n", endtime - starttime);

(Example 10: timing a code section.)

MPI_WTICK()
double MPI_Wtick(void)
DOUBLE PRECISION MPI_WTICK()
        (MPI call 8: MPI_WTICK)

MPI_WTICK returns the resolution of MPI_WTIME in seconds: the time between consecutive ticks of the underlying clock.
The following program (adapted from the MPICH test suite) exercises the timer calls:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include "mpi.h"

int main( int argc, char **argv )
{
    int    err = 0;
    double t1, t2;
    double tick;
    int    i;

    MPI_Init( &argc, &argv );
    t1 = MPI_Wtime();   /* first reading, t1 */
    t2 = MPI_Wtime();   /* second reading, t2 */
    if (t2 - t1 > 0.1 || t2 - t1 < 0.0) {
        /* two back-to-back readings must not differ by 0.1 s */
        err++;
        fprintf( stderr,
            "Two successive calls to MPI_Wtime gave strange results: (%f) (%f)\n",
            t1, t2 );
    }
    /* try up to 10 times to time a sleep of 1 second */
    for (i = 0; i < 10; i++) {
        t1 = MPI_Wtime();   /* start */
        sleep(1);           /* sleep 1 second */
        t2 = MPI_Wtime();   /* stop */
        if (t2 - t1 >= 1.0 && t2 - t1 <= 5.0) break;
            /* plausible measurement: accept it */
        if (t2 - t1 > 5.0) i = 9;
            /* hopelessly large: force the loop to end */
    }
    if (i == 10) {  /* all attempts failed */
        fprintf( stderr,
            "Timer around sleep(1) did not give 1 second; gave %f\n",
            t2 - t1 );
        err++;
    }
    tick = MPI_Wtick();  /* timer resolution */
    if (tick > 1.0 || tick < 0.0) {
        /* a resolution above 1 second (or negative) is suspicious */
        err++;
        fprintf( stderr, "MPI_Wtick gave a strange result: (%f)\n", tick );
    }
    MPI_Finalize( );
    return err;
}

(Example 11: testing the MPI timer calls.)

7.2 Identifying the host. A process often needs the name of the node it runs on, in addition to its rank:

MPI_GET_PROCESSOR_NAME(name, resultlen)
  OUT name       name of the local processor (string)
  OUT resultlen  length of the returned name (integer)
int MPI_Get_processor_name( char *name, int *resultlen )
MPI_GET_PROCESSOR_NAME(NAME, RESULTLEN, IERROR)
    CHARACTER*(*) NAME
    INTEGER RESULTLEN, IERROR
        (MPI call 9: MPI_GET_PROCESSOR_NAME)

MPI_GET_PROCESSOR_NAME returns the name of the processor on which it is called, together with the name's length.

MPI_GET_VERSION(version, subversion)
  OUT version    major version number (integer)
  OUT subversion minor version number (integer)
int MPI_Get_version(int *version, int *subversion)
MPI_GET_VERSION(VERSION, SUBVERSION, IERROR)
    INTEGER VERSION, SUBVERSION, IERROR
        (MPI call 10: MPI_GET_VERSION)

MPI_GET_VERSION returns the MPI version: for MPI x.y, version = x and subversion = y.
      program main
      include 'mpif.h'
      character*(MPI_MAX_PROCESSOR_NAME) name
      integer resultlen, version, subversion, ierr, errs, i
      call MPI_Init( ierr )
      name = " "
C     blank-fill name, then query the processor name:
C     name holds the result, resultlen its length
      call MPI_Get_processor_name( name, resultlen, ierr )
C     query the MPI version
      call MPI_GET_VERSION(version, subversion, ierr)
      errs = 0
      do i = resultlen+1, MPI_MAX_PROCESSOR_NAME
          if (name(i:i) .ne. " ") then
C             the characters beyond resultlen must still be blank
              errs = errs + 1
          endif
      enddo
      if (errs .gt. 0) then
          print *, 'Non-blanks after name'
      else
          print *, name, " MPI version", version, ".", subversion
      endif
      call MPI_Finalize( ierr )
      end

(Example 12: printing the processor name and the MPI version.)
7.3 Has MPI been initialized? MPI_INIT may be called only once. MPI_INITIALIZED reports whether it has already been called, and is the one MPI call that may legally precede MPI_INIT:

MPI_INITIALIZED(flag)
  OUT flag  true if MPI_INIT has been called (logical)
int MPI_Initialized(int *flag)
MPI_INITIALIZED(FLAG, IERROR)
    LOGICAL FLAG
    INTEGER IERROR
        (MPI call 11: MPI_INITIALIZED)

MPI_INITIALIZED returns flag=true if MPI_INIT has been called and flag=false otherwise.

7.4 Aborting. MPI_ABORT makes a best effort to abort all tasks in the group of comm, returning errorcode to the invoking environment:

MPI_ABORT(comm, errorcode)
  IN comm       communicator of the tasks to abort (handle)
  IN errorcode  error code returned to the environment (integer)
int MPI_Abort(MPI_Comm comm, int errorcode)
MPI_ABORT(COMM, ERRORCODE, IERROR)
    INTEGER COMM, ERRORCODE, IERROR
        (MPI call 12: MPI_ABORT)

In the following program the master process aborts the whole job while the other processes wait at a barrier:

#include "mpi.h"
#include <stdio.h>
#include <string.h>
/* masternode == 0 by default; with the argument "lastmaster",
   masternode != 0 (the highest rank becomes the master) */
int main( int argc, char **argv )
{
    int node, size, i;
    int masternode = 0;  /* default master */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &node);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    /* scan the command-line arguments */
    for (i=1; i<argc; i++) {
        fprintf(stderr,"myid=%d,procs=%d,argv[%d]=%s\n",
                node,size,i,argv[i]);
        if (argv[i] && strcmp( "lastmaster", argv[i] ) == 0) {
            masternode = size-1;  /* the last process is the master */
        }
    }
    if (node == masternode) {  /* the master aborts the job */
        fprintf(stderr,"myid=%d is masternode Abort!\n",node);
        MPI_Abort(MPI_COMM_WORLD, 99);
    }
    else {  /* the others wait at a barrier and are aborted there */
        fprintf(stderr,"myid=%d is not masternode Barrier!\n",node);
        MPI_Barrier(MPI_COMM_WORLD);
    }
    MPI_Finalize();
}

(Example 13: aborting an MPI program.)

7.5 A value passed around a ring. The next program repeatedly reads an integer on process 0 and relays it through processes 1, 2, ..., N-1 (the output of a run appears in Figure 19); it ends when a negative value is entered:

#include <stdio.h>
#include "mpi.h"
int main( argc, argv )
int argc;
char **argv;
{
    int rank, value, size;
    MPI_Status status;
    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    /* repeat until a negative value is read */
    do {
        if (rank == 0) {
            fprintf(stderr, "\nPlease give new value=");
            scanf( "%d", &value );  /* process 0 reads the value */
            fprintf(stderr,"%d read <-<- (%d)\n",rank,value);
            if (size > 1) {
                MPI_Send( &value, 1, MPI_INT, rank + 1, 0,
                          MPI_COMM_WORLD );
                fprintf(stderr,"%d send (%d)->-> %d\n",
                        rank,value,rank+1);
                /* pass it to the right neighbor */
            }
        }
        else {
            MPI_Recv( &value, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                      &status );  /* receive from the left neighbor */
            fprintf(stderr,"%d receive (%d)<-<- %d\n",rank,value,rank-1);
            if (rank < size - 1) {
                MPI_Send( &value, 1, MPI_INT, rank + 1, 0,
                          MPI_COMM_WORLD );
                fprintf(stderr,"%d send (%d)->-> %d\n",
                        rank,value,rank+1);
                /* forward to the right neighbor */
            }
        }
        MPI_Barrier(MPI_COMM_WORLD);  /* synchronize each round */
    } while ( value >= 0 );
    MPI_Finalize( );
}

(Example 14: relaying a value through all processes.)
Please give new value=76
0 read <-<- (76)
0 send (76)->-> 1
1 receive (76)<-<- 0
1 send (76)->-> 2
2 receive (76)<-<- 1
2 send (76)->-> 3
3 receive (76)<-<- 2
3 send (76)->-> 4
4 receive (76)<-<- 3
4 send (76)->-> 5
5 receive (76)<-<- 4
5 send (76)->-> 6
6 receive (76)<-<- 5
Please give new value=-3
0 read <-<- (-3)
0 send (-3)->-> 1
1 receive (-3)<-<- 0
2 receive (-3)<-<- 1
3 receive (-3)<-<- 2
4 receive (-3)<-<- 3
4 send (-3)->-> 5
5 receive (-3)<-<- 4
6 receive (-3)<-<- 5
1 send (-3)->-> 2
2 send (-3)->-> 3
3 send (-3)->-> 4
5 send (-3)->-> 6

(Figure 19: output of the ring program on 7 processes.)
In the next program every process greets every other process: each sends a "hello" message to, and receives one from, each of the other processes (Figure 21 sketches the pattern). At least 2 processes are required:

#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
void Hello( void );
int main(int argc, char *argv[])
{
    int  me, option, namelen, size;
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD,&me);
    MPI_Comm_size(MPI_COMM_WORLD,&size);  /* number of processes */
    if (size < 2) {  /* the test needs at least 2 processes */
        fprintf(stderr, "systest requires at least 2 processes" );
        MPI_Abort(MPI_COMM_WORLD,1);
    }
    MPI_Get_processor_name(processor_name,&namelen);
    fprintf(stderr,"Process %d is alive on %s\n", me, processor_name);
    MPI_Barrier(MPI_COMM_WORLD);  /* start together */
    Hello();                      /* exchange the greetings */
    MPI_Finalize();
}

void Hello( void )
/* every pair of processes exchanges a greeting message */
{
    int nproc, me;
    int type = 1;
    int buffer[2], node;
    MPI_Status status;
    MPI_Comm_rank(MPI_COMM_WORLD, &me);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);
    if (me == 0) {  /* process 0 announces the test */
        printf("\nHello test from all to all\n");
        fflush(stdout);
    }
    for (node = 0; node < nproc; node++) {  /* every other process */
        if (node != me) {        /* skip myself */
            buffer[0] = me;      /* who I am */
            buffer[1] = node;    /* whom I am greeting */
            MPI_Send(buffer, 2, MPI_INT, node, type, MPI_COMM_WORLD);
            MPI_Recv(buffer, 2, MPI_INT, node, type, MPI_COMM_WORLD,
                     &status);
            if ( (buffer[0] != node) || (buffer[1] != me) ) {
                /* the reply must carry the partner's id and mine */
                (void) fprintf(stderr, "Hello: %d!=%d or %d!=%d\n",
                               buffer[0], node, buffer[1], me);
                printf("Mismatch on hello process ids; node = %d\n",
                       node);
            }
            printf("Hello from %d to %d\n", me, node);
            fflush(stdout);
        }
    }
}

(Example 15: greetings between every pair of processes.)
7.6 Wildcard receives in practice. In Example 16 (Figure 22), the ROOT process -- here rank 0 -- accepts 100 messages from each of the other N-1 processes in whatever order they arrive, using MPI_ANY_SOURCE and MPI_ANY_TAG, and reads the actual source and tag from status:

#include "mpi.h"
#include <stdio.h>
int main(argc, argv)
int argc;
char **argv;
{
    int rank, size, i, buf[1];
    MPI_Status status;
    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    if (rank == 0) {
        /* the root accepts 100 messages from each other process,
           from any source and with any tag */
        for (i=0; i<100*(size-1); i++) {
            MPI_Recv( buf, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                      MPI_COMM_WORLD, &status );
            printf( "Msg=%d from %d with tag %d\n",
                    buf[0], status.MPI_SOURCE, status.MPI_TAG );
        }
    }
    else {  /* each worker sends 100 messages, tagged 0..99 */
        for (i=0; i<100; i++) {
            buf[0] = rank + i;
            MPI_Send( buf, 1, MPI_INT, 0, i, MPI_COMM_WORLD );
        }
    }
    MPI_Finalize();
}

(Example 16: receiving with MPI_ANY_SOURCE and MPI_ANY_TAG.)
7.7 Deadlock. The order of sends and receives matters. In Example 17 both processes post their receive first: process 0 waits for a message from process 1 while process 1 waits for one from process 0, and since neither send is ever reached, the program deadlocks:

      CALL MPI_COMM_RANK(comm, rank, ierr)
      IF (rank .EQ. 0) THEN
          CALL MPI_RECV(recvbuf, count, MPI_REAL, 1, tag, comm,
     &                  status, ierr)
          CALL MPI_SEND(sendbuf, count, MPI_REAL, 1, tag, comm, ierr)
      ELSE IF (rank .EQ. 1) THEN
          CALL MPI_RECV(recvbuf, count, MPI_REAL, 0, tag, comm,
     &                  status, ierr)
          CALL MPI_SEND(sendbuf, count, MPI_REAL, 0, tag, comm, ierr)
      END IF

(Example 17: an exchange that always deadlocks.)

In the next variant both processes send first (Figure 24). Whether this works depends on the system's buffering: if the outgoing messages can be buffered, both sends complete and the receives then succeed; if not, both sends block waiting for the matching receives and the exchange deadlocks. Such a program is unsafe:

      CALL MPI_COMM_RANK(comm, rank, ierr)
      IF (rank .EQ. 0) THEN
          CALL MPI_SEND(sendbuf, count, MPI_REAL, 1, tag, comm, ierr)
          CALL MPI_RECV(recvbuf, count, MPI_REAL, 1, tag, comm,
     &                  status, ierr)
      ELSE IF (rank .EQ. 1) THEN
          CALL MPI_SEND(sendbuf, count, MPI_REAL, 0, tag, comm, ierr)
          CALL MPI_RECV(recvbuf, count, MPI_REAL, 0, tag, comm,
     &                  status, ierr)
      END IF

(An exchange that may deadlock, depending on buffering.)

The safe formulation pairs the calls: process 0 sends first and then receives, process 1 receives first and then sends, so one matching send/receive pair is always in progress:

      CALL MPI_COMM_RANK(comm, rank, ierr)
      IF (rank .EQ. 0) THEN
          CALL MPI_SEND(sendbuf, count, MPI_REAL, 1, tag, comm, ierr)
          CALL MPI_RECV(recvbuf, count, MPI_REAL, 1, tag, comm,
     &                  status, ierr)
      ELSE IF (rank .EQ. 1) THEN
          CALL MPI_RECV(recvbuf, count, MPI_REAL, 0, tag, comm,
     &                  status, ierr)
          CALL MPI_SEND(sendbuf, count, MPI_REAL, 0, tag, comm, ierr)
      END IF

(A safe, ordered exchange.)

[A figure illustrating matched and unmatched send/receive orderings among four processes A, B, C and D was lost in extraction.]
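Chapter 8 introduces MPI_SENDRECV, which performs the send and the receive in one call and leaves the ordering to MPI. As a preview, a minimal C sketch of the same two-process exchange -- the buffer size 10 and tag 99 are illustrative assumptions:

#include "mpi.h"
int main(int argc, char *argv[])
{
    int rank, other;
    double sendbuf[10] = {0}, recvbuf[10];
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;  /* assumes exactly 2 processes */
    /* send to and receive from the partner in a single call:
       no manual ordering is needed and no deadlock can occur */
    MPI_Sendrecv(sendbuf, 10, MPI_DOUBLE, other, 99,
                 recvbuf, 10, MPI_DOUBLE, other, 99,
                 MPI_COMM_WORLD, &status);
    MPI_Finalize();
    return 0;
}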
7.8 Summary. Together with the six basic calls of Chapter 6, the calls of this chapter -- timing, processor and version inquiry, the initialization test and abort -- cover the needs of most simple MPI programs.

8 Two common MPI program patterns

MPI programs are usually organized in one of two ways: peer mode, in which all processes do the same kind of work, and master-slave mode, in which one process coordinates the others. This chapter illustrates both, taking the Jacobi iteration as the running peer-mode example and introducing several new MPI calls along the way. Although MPI supports MPMD, an MPMD computation can always be recast in SPMD form by branching on the rank, and SPMD remains the dominant MPI style; all the examples here are SPMD.

8.1 The Jacobi iteration in MPI. The Jacobi iteration repeatedly replaces each interior element of an array by the average of its four neighbors. Figure 20 shows the serial code:
      REAL A(N+1,N+1), B(N+1,N+1)
      DO K=1,STEP
          DO J=1,N
              DO I=1,N
                  B(I,J)=0.25*(A(I-1,J)+A(I+1,J)+A(I,J+1)+A(I,J-1))
              END DO
          END DO
          DO J=1,N
              DO I=1,N
                  A(I,J)=B(I,J)
              END DO
          END DO
      END DO

(Figure 20: the serial Jacobi iteration.)

To parallelize it, partition the M x M array A(M,M), M = 4*N, by columns among four processes: process 0 holds A(M, 1:N), process 1 holds A(M, N+1:2*N), process 2 holds A(M, 2*N+1:3*N) and process 3 holds A(M, 3*N+1:M). Updating a block's edge column needs the neighboring block's edge column, so each process stores its M x N block with one extra ghost column on either side, and neighbors exchange edge columns before every sweep. Listing 21 is the complete FORTRAN program (the C version is analogous):

      program main
      implicit none
      include 'mpif.h'
      integer totalsize, mysize, steps
      parameter (totalsize=16)
      parameter (mysize=totalsize/4, steps=10)
      integer n, myid, numprocs, i, j, rc
      real a(totalsize, mysize+2), b(totalsize, mysize+2)
      integer begin_col, end_col, ierr
      integer status(MPI_STATUS_SIZE)
      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
      call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
      print *, "Process ", myid, " of ", numprocs, " is alive"
C     initialize the local block; the fixed boundary value is 8.0
      do j=1,mysize+2
          do i=1,totalsize
              a(i,j)=0.0
          end do
      end do
      if (myid .eq. 0) then
          do i=1,totalsize
              a(i,2)=8.0
          end do
      end if
      if (myid .eq. 3) then
          do i=1,totalsize
              a(i,mysize+1)=8.0
          end do
      end if
      do i=1,mysize+2
          a(1,i)=8.0
          a(totalsize,i)=8.0
      end do
C     Jacobi iteration
      do n=1,steps
C         exchange ghost columns with the neighbors:
C         receive from the right, send to the left,
C         send to the right, receive from the left
          if (myid .lt. 3) then
              call MPI_RECV(a(1,mysize+2),totalsize,MPI_REAL,myid+1,10,
     *                      MPI_COMM_WORLD,status,ierr)
          end if
          if (myid .gt. 0) then
              call MPI_SEND(a(1,2),totalsize,MPI_REAL,myid-1,10,
     *                      MPI_COMM_WORLD,ierr)
          end if
          if (myid .lt. 3) then
              call MPI_SEND(a(1,mysize+1),totalsize,MPI_REAL,myid+1,10,
     *                      MPI_COMM_WORLD,ierr)
          end if
          if (myid .gt. 0) then
              call MPI_RECV(a(1,1),totalsize,MPI_REAL,myid-1,10,
     *                      MPI_COMM_WORLD,status,ierr)
          end if
          begin_col=2
          end_col=mysize+1
          if (myid .eq. 0) then
              begin_col=3
          endif
          if (myid .eq. 3) then
              end_col=mysize
          endif
          do j=begin_col,end_col
              do i=2,totalsize-1
                  b(i,j)=(a(i,j+1)+a(i,j-1)+a(i+1,j)+a(i-1,j))*0.25
              end do
          end do
          do j=begin_col,end_col
              do i=2,totalsize-1
                  a(i,j)=b(i,j)
              end do
          end do
      end do
      do i=2,totalsize-1
          print *, myid,(a(i,j),j=begin_col,end_col)
      end do
      call MPI_Finalize(rc)
      end

(Listing 21: the Jacobi iteration with MPI_SEND and MPI_RECV.)

The send/receive pairs above must be ordered carefully to avoid deadlock. MPI offers a combined operation, MPI_SENDRECV, which performs a send and a receive in one call and leaves the ordering to MPI; with it the Jacobi exchange becomes both simpler and safer.
MPI_SENDRECV(sendbuf, sendcount, sendtype, dest, sendtag,
             recvbuf, recvcount, recvtype, source, recvtag,
             comm, status)
  IN  sendbuf    initial address of the send buffer (choice)
  IN  sendcount  number of elements to send (integer)
  IN  sendtype   datatype of the send elements (handle)
  IN  dest       rank of the destination (integer)
  IN  sendtag    send tag (integer)
  OUT recvbuf    initial address of the receive buffer (choice)
  IN  recvcount  maximum number of elements to receive (integer)
  IN  recvtype   datatype of the receive elements (handle)
  IN  source     rank of the source (integer)
  IN  recvtag    receive tag (integer)
  IN  comm       communicator (handle)
  OUT status     status object (status)
int MPI_Sendrecv(void *sendbuf, int sendcount, MPI_Datatype sendtype,
                 int dest, int sendtag, void *recvbuf, int recvcount,
                 MPI_Datatype recvtype, int source, int recvtag,
                 MPI_Comm comm, MPI_Status *status)
MPI_SENDRECV(SENDBUF, SENDCOUNT, SENDTYPE, DEST, SENDTAG, RECVBUF,
             RECVCOUNT, RECVTYPE, SOURCE, RECVTAG, COMM, STATUS, IERROR)
    <type> SENDBUF(*), RECVBUF(*)
    INTEGER SENDCOUNT, SENDTYPE, DEST, SENDTAG, RECVCOUNT, RECVTYPE,
            SOURCE, RECVTAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR
        (MPI call 13: MPI_SENDRECV)

MPI_SENDRECV combines a send and a receive in one call; the send and receive buffers must not overlap and may differ in length and datatype. The variant MPI_SENDRECV_REPLACE uses a single buffer: the received message replaces the data that was sent from it.
MPI_SENDRECV_REPLACE(buf, count, datatype, dest, sendtag, source,
                     recvtag, comm, status)
  INOUT buf      initial address of the send-and-receive buffer (choice)
  IN count       number of elements (integer)
  IN datatype    datatype of the elements (handle)
  IN dest        rank of the destination (integer)
  IN sendtag     send tag (integer)
  IN source      rank of the source (integer)
  IN recvtag     receive tag (integer)
  IN comm        communicator (handle)
  OUT status     status object (status)
int MPI_Sendrecv_replace(void *buf, int count, MPI_Datatype datatype,
                         int dest, int sendtag, int source, int recvtag,
                         MPI_Comm comm, MPI_Status *status)
MPI_SENDRECV_REPLACE(BUF, COUNT, DATATYPE, DEST, SENDTAG, SOURCE,
                     RECVTAG, COMM, STATUS, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, DEST, SENDTAG, SOURCE, RECVTAG, COMM,
            STATUS(MPI_STATUS_SIZE), IERROR
        (MPI call 14: MPI_SENDRECV_REPLACE)

In the Jacobi iteration rewritten with MPI_SENDRECV (Listing 22), the interior processes combine their paired exchanges into single calls, while the end processes (ranks 0 and 3) still issue a plain send or receive:
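The text gives no example for MPI_SENDRECV_REPLACE, so here is a hedged C sketch of its use -- the ring shift and the tag 0 are illustrative assumptions. Each process passes a value to its right neighbor and keeps, in the same buffer, the value received from the left:

#include <stdio.h>
#include "mpi.h"
int main(int argc, char *argv[])
{
    int rank, size, value, left, right;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    right = (rank + 1) % size;         /* neighbor to send to */
    left  = (rank + size - 1) % size;  /* neighbor to receive from */
    value = rank;
    /* the sent value leaves the buffer and the received one replaces it */
    MPI_Sendrecv_replace(&value, 1, MPI_INT, right, 0, left, 0,
                         MPI_COMM_WORLD, &status);
    printf("process %d now holds %d\n", rank, value);
    MPI_Finalize();
    return 0;
}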
      program main
      implicit none
      include 'mpif.h'
      integer totalsize, mysize, steps
      parameter (totalsize=16)
      parameter (mysize=totalsize/4, steps=10)
      integer n, myid, numprocs, i, j, rc
      real a(totalsize, mysize+2), b(totalsize, mysize+2)
      integer begin_col, end_col, ierr
      integer status(MPI_STATUS_SIZE)
      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
      call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
      print *, "Process ", myid, " of ", numprocs, " is alive"
C     initialization, as in Listing 21
      do j=1,mysize+2
          do i=1,totalsize
              a(i,j)=0.0
          end do
      end do
      if (myid .eq. 0) then
          do i=1,totalsize
              a(i,2)=8.0
          end do
      end if
      if (myid .eq. 3) then
          do i=1,totalsize
              a(i,mysize+1)=8.0
          end do
      end if
      do i=1,mysize+2
          a(1,i)=8.0
          a(totalsize,i)=8.0
      end do
C     Jacobi iteration
      do n=1,steps
C         shift to the right: rank 0 only sends, rank 3 only receives,
C         the interior ranks use MPI_SENDRECV
          if (myid .eq. 0) then
              call MPI_SEND(a(1,mysize+1),totalsize,MPI_REAL,myid+1,10,
     *                      MPI_COMM_WORLD,ierr)
          else if (myid .eq. 3) then
              call MPI_RECV(a(1,1),totalsize,MPI_REAL,myid-1,10,
     *                      MPI_COMM_WORLD,status,ierr)
          else
              call MPI_SENDRECV(a(1,mysize+1),totalsize,MPI_REAL,
     *                      myid+1,10,
     *                      a(1,1),totalsize,MPI_REAL,myid-1,10,
     *                      MPI_COMM_WORLD,status,ierr)
          end if
C         shift to the left, symmetrically
          if (myid .eq. 0) then
              call MPI_RECV(a(1,mysize+2),totalsize,MPI_REAL,myid+1,10,
     *                      MPI_COMM_WORLD,status,ierr)
          else if (myid .eq. 3) then
              call MPI_SEND(a(1,2),totalsize,MPI_REAL,myid-1,10,
     *                      MPI_COMM_WORLD,ierr)
          else
              call MPI_SENDRECV(a(1,2),totalsize,MPI_REAL,myid-1,10,
     *                      a(1,mysize+2),totalsize,MPI_REAL,myid+1,10,
     *                      MPI_COMM_WORLD,status,ierr)
          end if
          begin_col=2
          end_col=mysize+1
          if (myid .eq. 0) then
              begin_col=3
          endif
          if (myid .eq. 3) then
              end_col=mysize
          endif
          do j=begin_col,end_col
              do i=2,totalsize-1
                  b(i,j)=(a(i,j+1)+a(i,j-1)+a(i+1,j)+a(i-1,j))*0.25
              end do
          end do
          do j=begin_col,end_col
              do i=2,totalsize-1
                  a(i,j)=b(i,j)
              end do
          end do
      end do
      do i=2,totalsize-1
          print *, myid,(a(i,j),j=begin_col,end_col)
      end do
      call MPI_Finalize(rc)
      end

(Listing 22: the Jacobi iteration with MPI_SENDRECV.)

Even Listing 22 needs special-case code for the end processes. MPI removes this with the virtual process MPI_PROC_NULL: a send to MPI_PROC_NULL, or a receive from it, returns immediately and has no effect. Every process can therefore execute identical MPI_SENDRECV calls, with nonexistent neighbors replaced by MPI_PROC_NULL. Listing 23 is the Jacobi iteration in this form:

      program main
      implicit none
      include 'mpif.h'
      integer totalsize, mysize, steps
      parameter (totalsize=16)
      parameter (mysize=totalsize/4, steps=10)
      integer n, myid, numprocs, i, j, rc
      real a(totalsize, mysize+2), b(totalsize, mysize+2)
      integer begin_col, end_col, ierr
      integer left, right, tag1, tag2
      integer status(MPI_STATUS_SIZE)
      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
      call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
      print *, "Process ", myid, " of ", numprocs, " is alive"
C     initialization, as before
      do j=1,mysize+2
          do i=1,totalsize
              a(i,j)=0.0
          end do
      end do
      if (myid .eq. 0) then
          do i=1,totalsize
              a(i,2)=8.0
          end do
      end if
      if (myid .eq. 3) then
          do i=1,totalsize
              a(i,mysize+1)=8.0
          end do
      end if
      do i=1,mysize+2
          a(1,i)=8.0
          a(totalsize,i)=8.0
      end do
      tag1=3
      tag2=4
C     set the left and right neighbors;
C     at the ends the neighbor is the virtual process MPI_PROC_NULL
      if (myid .gt. 0) then
          left=myid-1
      else
          left=MPI_PROC_NULL
      end if
      if (myid .lt. 3) then
          right=myid+1
      else
          right=MPI_PROC_NULL
      end if
C     Jacobi iteration: every process runs the same code
      do n=1,steps
          call MPI_SENDRECV(a(1,mysize+1),totalsize,MPI_REAL,right,tag1,
     *                      a(1,1),totalsize,MPI_REAL,left,tag1,
     *                      MPI_COMM_WORLD,status,ierr)
          call MPI_SENDRECV(a(1,2),totalsize,MPI_REAL,left,tag2,
     *                      a(1,mysize+2),totalsize,MPI_REAL,right,tag2,
     *                      MPI_COMM_WORLD,status,ierr)
          begin_col=2
          end_col=mysize+1
          if (myid .eq. 0) then
              begin_col=3
          endif
          if (myid .eq. 3) then
              end_col=mysize
          endif
          do j=begin_col,end_col
              do i=2,totalsize-1
                  b(i,j)=(a(i,j+1)+a(i,j-1)+a(i+1,j)+a(i-1,j))*0.25
              end do
          end do
          do j=begin_col,end_col
              do i=2,totalsize-1
                  a(i,j)=b(i,j)
              end do
          end do
      end do
      do i=2,totalsize-1
          print *, myid,(a(i,j),j=begin_col,end_col)
      end do
      call MPI_Finalize(rc)
      end

(Listing 23: the Jacobi iteration with MPI_SENDRECV and MPI_PROC_NULL.)

8.2 Master-slave mode: matrix-vector multiplication. In the master-slave pattern one process (the master) coordinates and the others (the slaves) compute. The example computes c = A*b (Figure 29 sketches the scheme): the master broadcasts the vector b to all slaves, sends one row of A to each slave, and then, as each dot product returns, hands that slave the next unassigned row until all rows are done. The row index travels in the message tag, and tag 0 tells a slave to stop:
      program main
      include "mpif.h"
      integer MAX_ROWS, MAX_COLS, rows, cols
      parameter (MAX_ROWS=1000, MAX_COLS=1000)
      double precision a(MAX_ROWS,MAX_COLS), b(MAX_COLS), c(MAX_COLS)
      double precision buffer(MAX_COLS), ans
      integer myid, master, numprocs, ierr, status(MPI_STATUS_SIZE)
      integer i, j, numsent, numrcvd, sender
      integer anstype, row
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr)
      master = 0
      rows = 100
      cols = 100
      if (myid .eq. master) then
C         the master initializes A and b
          do i=1,cols
              b(i)=1
              do j=1,rows
                  a(i,j)=i
              end do
          end do
          numsent=0
          numrcvd=0
C         broadcast b to all the slaves
          call MPI_BCAST(b, cols, MPI_DOUBLE_PRECISION, master,
     $                   MPI_COMM_WORLD, ierr)
C         send one row to each slave (at most numprocs-1 rows);
C         the tag i carries the row number
          do i=1,min(numprocs-1,rows)
              do j=1,cols
                  buffer(j)=a(i,j)
              end do
              call MPI_SEND(buffer, cols, MPI_DOUBLE_PRECISION, i,
     $                      i, MPI_COMM_WORLD, ierr)
              numsent=numsent+1
          end do
C         collect one answer per row, handing out new rows as they return
          do i=1,rows
              call MPI_RECV(ans, 1, MPI_DOUBLE_PRECISION,
     $                      MPI_ANY_SOURCE, MPI_ANY_TAG,
     $                      MPI_COMM_WORLD, status, ierr)
              sender=status(MPI_SOURCE)
              anstype=status(MPI_TAG)
C             the tag of the answer is the row number
              c(anstype)=ans
              if (numsent .lt. rows) then
C                 more rows remain: send the next one to this slave
                  do j=1,cols
                      buffer(j)=a(numsent+1,j)
                  end do
                  call MPI_SEND(buffer, cols, MPI_DOUBLE_PRECISION,
     $                          sender, numsent+1, MPI_COMM_WORLD, ierr)
                  numsent=numsent+1
              else
C                 no rows left: a message with tag 0 stops the slave
                  call MPI_SEND(1.0, 0, MPI_DOUBLE_PRECISION, sender,
     $                          0, MPI_COMM_WORLD, ierr)
              end if
          end do
      else
C         the slaves take part in the broadcast of b
          call MPI_BCAST(b, cols, MPI_DOUBLE_PRECISION, master,
     $                   MPI_COMM_WORLD, ierr)
C         receive a row of A; the statement label 90 marks the loop
90        call MPI_RECV(buffer, cols, MPI_DOUBLE_PRECISION, master,
     $                  MPI_ANY_TAG, MPI_COMM_WORLD, status, ierr)
C         tag 0 means no more work
          if (status(MPI_TAG) .ne. 0) then
              row=status(MPI_TAG)
              ans=0.0
              do i=1,cols
                  ans=ans+buffer(i)*b(i)
              end do
C             return the dot product, tagged with the row number
              call MPI_SEND(ans, 1, MPI_DOUBLE_PRECISION, master, row,
     $                      MPI_COMM_WORLD, ierr)
              goto 90
          end if
      endif
      call MPI_FINALIZE(ierr)
      end

(Listing: master-slave matrix-vector multiplication.)
A second master-slave example centralizes output: only the master prints, and the slaves send it their text, either in rank order or in arrival order, selected by the message tag:

#include <stdio.h>
#include <string.h>
#include "mpi.h"
int master_io( void );
int slave_io( void );
int main( argc, argv )
int argc;
char **argv;
{
    int rank, size;
    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    if (rank == 0)
        master_io();  /* process 0 is the master */
    else
        slave_io();   /* the others are slaves */
    MPI_Finalize( );
}

#define MSG_EXIT            1
#define MSG_PRINT_ORDERED   2  /* print in rank order */
#define MSG_PRINT_UNORDERED 3  /* print in arrival order */

/* the master: receives and prints the slaves' messages */
int master_io( void )
{
    int i, j, size, nslave, firstmsg;
    char buf[256], buf2[256];
    MPI_Status status;
    MPI_Comm_size( MPI_COMM_WORLD, &size );  /* number of processes */
    nslave = size - 1;                       /* number of slaves */
    while (nslave > 0) {  /* until every slave has exited */
        MPI_Recv( buf, 256, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG,
                  MPI_COMM_WORLD, &status );
        switch (status.MPI_TAG) {
        case MSG_EXIT:
            nslave--;            /* one slave fewer */
            break;
        case MSG_PRINT_UNORDERED:
            fputs( buf, stdout ); /* print as it arrives */
            break;
        case MSG_PRINT_ORDERED:   /* print in rank order */
            firstmsg = status.MPI_SOURCE;
            for (i=1; i<size; i++) {
                if (i == firstmsg)
                    fputs( buf, stdout );  /* already received */
                else {  /* fetch slave i's ordered message */
                    MPI_Recv( buf2, 256, MPI_CHAR, i, MSG_PRINT_ORDERED,
                              MPI_COMM_WORLD, &status );
                    fputs( buf2, stdout );
                }
            }
            break;
        }
    }
}

/* the slaves: send their output to the master */
int slave_io( void )
{
    char buf[256];
    int rank;
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );  /* my rank */
    sprintf( buf, "Hello from slave %d, ordered print\n", rank );
    MPI_Send( buf, strlen(buf) + 1, MPI_CHAR, 0, MSG_PRINT_ORDERED,
              MPI_COMM_WORLD );   /* ordered output */
    sprintf( buf, "Goodbye from slave %d, ordered print\n", rank );
    MPI_Send( buf, strlen(buf) + 1, MPI_CHAR, 0, MSG_PRINT_ORDERED,
              MPI_COMM_WORLD );   /* ordered output */
    sprintf( buf, "I'm exiting (%d), unordered print\n", rank );
    MPI_Send( buf, strlen(buf) + 1, MPI_CHAR, 0, MSG_PRINT_UNORDERED,
              MPI_COMM_WORLD );   /* unordered output */
    MPI_Send( buf, 0, MPI_CHAR, 0, MSG_EXIT, MPI_COMM_WORLD ); /* done */
}

A run with 10 processes prints:

Hello from slave 1, ordered print
Hello from slave 2, ordered print
Hello from slave 3, ordered print
Hello from slave 4, ordered print
Hello from slave 5, ordered print
Hello from slave 6, ordered print
Hello from slave 7, ordered print
Hello from slave 8, ordered print
Hello from slave 9, ordered print
Goodbye from slave 1, ordered print
Goodbye from slave 2, ordered print
Goodbye from slave 3, ordered print
Goodbye from slave 4, ordered print
Goodbye from slave 5, ordered print
Goodbye from slave 6, ordered print
Goodbye from slave 7, ordered print
Goodbye from slave 8, ordered print
Goodbye from slave 9, ordered print
I'm exiting (1), unordered print
I'm exiting (3), unordered print
I'm exiting (4), unordered print
I'm exiting (7), unordered print
I'm exiting (8), unordered print
I'm exiting (9), unordered print
I'm exiting (2), unordered print
I'm exiting (5), unordered print
I'm exiting (6), unordered print

Note that the ordered messages appear in rank order while the unordered ones appear as they happen to arrive. This chapter presented the two basic program patterns; the following chapters refine the communication calls they rely on.
9 The four communication modes of MPI

MPI point-to-point sends come in four modes: standard mode, buffered mode, synchronous mode and ready mode. The blocking send calls are MPI_SEND, MPI_BSEND, MPI_SSEND and MPI_RSEND, where the prefixes B, S and R stand for buffered, synchronous and ready; all four are received with the ordinary MPI_RECV.

9.1 Standard mode. In standard mode (Figure 32) MPI itself decides whether to buffer the outgoing message: MPI_SEND may return as soon as the message has been buffered, or may block until a matching receive is posted. A correct program must therefore not rely on buffering -- exactly the safety issue discussed in Section 7.7.
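A hedged side-by-side sketch of the four blocking sends in C -- the message size, tags and synchronization scheme are illustrative assumptions, not from the text. A ready send is correct only if the matching receive is already posted, so the receiver pre-posts it, and the preceding synchronous send guarantees the ordering:

#include <stdlib.h>
#include "mpi.h"
int main(int argc, char *argv[])
{
    int rank, i, n = 10, data[10] = {0}, rdata[10], bsize;
    char *bbuf;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 1) {
        MPI_Request req;
        /* pre-post the receive that will match the ready send (tag 3) */
        MPI_Irecv(rdata, n, MPI_INT, 0, 3, MPI_COMM_WORLD, &req);
        for (i = 0; i < 3; i++)   /* tags 0, 1, 2 */
            MPI_Recv(data, n, MPI_INT, 0, i, MPI_COMM_WORLD, &status);
        MPI_Wait(&req, &status);
    } else if (rank == 0) {
        /* standard mode: MPI decides whether to buffer */
        MPI_Send(data, n, MPI_INT, 1, 0, MPI_COMM_WORLD);
        /* buffered mode: completes using the user-attached buffer */
        MPI_Pack_size(n, MPI_INT, MPI_COMM_WORLD, &bsize);
        bbuf = (char *) malloc(bsize + MPI_BSEND_OVERHEAD);
        MPI_Buffer_attach(bbuf, bsize + MPI_BSEND_OVERHEAD);
        MPI_Bsend(data, n, MPI_INT, 1, 1, MPI_COMM_WORLD);
        MPI_Buffer_detach(&bbuf, &bsize);  /* blocks until delivered */
        free(bbuf);
        /* synchronous mode: returns only after the receive has begun;
           by then the tag-3 receive is certainly posted on rank 1 */
        MPI_Ssend(data, n, MPI_INT, 1, 2, MPI_COMM_WORLD);
        /* ready mode: the matching receive is guaranteed to exist */
        MPI_Rsend(data, n, MPI_INT, 1, 3, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}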
9.2 Buffered mode. A buffered-mode send copies the message into a user-supplied buffer and completes immediately, independently of any matching receive:

MPI_BSEND(buf, count, datatype, dest, tag, comm)
  IN buf       initial address of the send buffer (choice)
  IN count     number of elements to send (integer)
  IN datatype  datatype of each element (handle)
  IN dest      rank of the destination (integer)
  IN tag       message tag (integer)
  IN comm      communicator (handle)
int MPI_Bsend(void* buf, int count, MPI_Datatype datatype, int dest,
              int tag, MPI_Comm comm)
MPI_BSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR
        (MPI call 15: MPI_BSEND)

MPI_BSEND has the same arguments as MPI_SEND; the difference is that the buffering is under user control (Figure 33 sketches buffered communication). The buffer is supplied and withdrawn with the following pair of calls:

MPI_BUFFER_ATTACH(buffer, size)
  IN buffer  initial address of the buffer (choice)
  IN size    buffer size in bytes (integer)
int MPI_Buffer_attach( void* buffer, int size )
MPI_BUFFER_ATTACH(BUFFER, SIZE, IERROR)
    <type> BUFFER(*)
    INTEGER SIZE, IERROR
        (MPI call 16: MPI_BUFFER_ATTACH)

MPI_BUFFER_ATTACH hands MPI a buffer of size bytes for use by buffered sends.

MPI_BUFFER_DETACH(buffer, size)
  OUT buffer  address of the withdrawn buffer (choice)
  OUT size    its size in bytes (integer)
int MPI_Buffer_detach( void** buffer, int* size )
MPI_BUFFER_DETACH(BUFFER, SIZE, IERROR)
    <type> BUFFER(*)
    INTEGER SIZE, IERROR
        (MPI call 17: MPI_BUFFER_DETACH)

MPI_BUFFER_DETACH withdraws the buffer, returning its address and size; it blocks until all messages currently in the buffer have been transmitted. The following program demonstrates buffered sends (process 0 sends, process 1 receives with ordinary standard-mode receives):
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"
#define SIZE 6
/* amount of data to transfer */
static int src  = 0;
static int dest = 1;
void Generate_Data ( double *, int );      /* create the test data */
void Normal_Test_Recv ( double *, int );   /* standard-mode receives */
void Buffered_Test_Send ( double *, int ); /* buffered-mode sends */

void Generate_Data(buffer, buff_size)
double *buffer;
int buff_size;
{
    int i;
    for (i = 0; i < buff_size; i++)
        buffer[i] = (double)i+1;
}

void Normal_Test_Recv(buffer, buff_size)
double *buffer;
int buff_size;
{
    int i, j;
    MPI_Status Stat;
    double *b;
    b = buffer;
    /* receive the first buff_size - 1 values */
    MPI_Recv(b, (buff_size - 1), MPI_DOUBLE, src, 2000,
             MPI_COMM_WORLD, &Stat);
    fprintf(stderr,"standard receive a message of %d data\n",
            buff_size-1);
    for (j=0;j<buff_size-1;j++)
        fprintf(stderr," buf[%d]=%f\n",j,b[j]);
    b += buff_size - 1;
    /* receive the last value */
    MPI_Recv(b, 1, MPI_DOUBLE, src, 2000, MPI_COMM_WORLD, &Stat);
    fprintf(stderr,"standard receive a message of one data\n");
    fprintf(stderr,"buf[0]=%f\n",*b);
}

void Buffered_Test_Send(buffer, buff_size)
double *buffer;
int buff_size;
{
    int i, j;
    void *bbuffer;
    int size;
    fprintf(stderr,"buffered send message of %d data\n",buff_size-1);
    for (j=0;j<buff_size-1;j++)
        fprintf(stderr,"buf[%d]=%f\n",j,buffer[j]);
    /* send the first buff_size - 1 values with a buffered send */
    MPI_Bsend(buffer, (buff_size - 1), MPI_DOUBLE, dest, 2000,
              MPI_COMM_WORLD);
    buffer += buff_size - 1;
    fprintf(stderr,"buffered send message of one data\n");
    fprintf(stderr,"buf[0]=%f\n",*buffer);
    /* send the last value */
    MPI_Bsend(buffer, 1, MPI_DOUBLE, dest, 2000, MPI_COMM_WORLD);
    /* detaching blocks until both messages are on their way */
    MPI_Buffer_detach( &bbuffer, &size );
    /* reattach the same buffer */
    MPI_Buffer_attach( bbuffer, size );
}

int main(int argc, char **argv)
{
    int rank;  /* my rank (0 or 1) */
    double buffer[SIZE], *tmpbuffer, *tmpbuf;
    int tsize, bsize;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == src) {  /* the sender */
        Generate_Data(buffer, SIZE);  /* create the data */
        MPI_Pack_size( SIZE, MPI_DOUBLE, MPI_COMM_WORLD, &bsize );
        /* bsize = packed size of SIZE values of type MPI_DOUBLE */
        tmpbuffer = (double *) malloc( bsize + 2*MPI_BSEND_OVERHEAD );
        /* room for the data of both sends plus MPI's overhead */
        if (!tmpbuffer) {
            fprintf( stderr,
                "Could not allocate bsend buffer of size %d\n", bsize );
            MPI_Abort( MPI_COMM_WORLD, 1 );
        }
        MPI_Buffer_attach( tmpbuffer, bsize + 2*MPI_BSEND_OVERHEAD );
        /* hand the buffer to MPI */
        Buffered_Test_Send(buffer, SIZE);     /* buffered sends */
        MPI_Buffer_detach( &tmpbuf, &tsize ); /* withdraw the buffer */
    }
    else if (rank == dest) {  /* the receiver */
        Normal_Test_Recv(buffer, SIZE);  /* standard receives */
    }
    else {
        fprintf(stderr,
            "*** This program uses exactly 2 processes! ***\n");
        MPI_Abort( MPI_COMM_WORLD, 1 );  /* wrong process count */
    }
    MPI_Finalize();
}

(Listing: buffered sends received by standard receives.)

9.3 Synchronous mode. A synchronous-mode send completes only after the matching receive has started (Figure 34):

MPI_SSEND(buf, count, datatype, dest, tag, comm)
  IN buf       initial address of the send buffer (choice)
  IN count     number of elements to send (integer)
  IN datatype  datatype of each element (handle)
  IN dest      rank of the destination (integer)
  IN tag       message tag (integer)
  IN comm      communicator (handle)
int MPI_Ssend(void* buf, int count, MPI_Datatype datatype, int dest,
              int tag, MPI_Comm comm)
MPI_SSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR
        (MPI call 18: MPI_SSEND)
The following program demonstrates synchronous sends: process 0 sends two messages, with tags 1 and 2, and process 1 receives them with matching tags and checks their lengths:

#include <stdio.h>
#include "mpi.h"
#define SIZE 10
static int src  = 0;
static int dest = 1;
int main( int argc, char **argv)
{
    int rank;  /* my rank (0 or 1) */
    int act_size = 0;
    int flag, np, rval, i;
    int buffer[SIZE];
    MPI_Status status, status1, status2;
    int count1, count2;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size( MPI_COMM_WORLD, &np );
    if (np != 2) {
        fprintf(stderr,
            "*** This program uses exactly 2 processes! ***\n");
        MPI_Abort( MPI_COMM_WORLD, 1 );
    }
    act_size = 5;  /* upper bound used by the receives */
    if (rank == src) {  /* the sender */
        act_size = 1;
        MPI_Ssend( buffer, act_size, MPI_INT, dest, 1,
                   MPI_COMM_WORLD );
        /* synchronous send of 1 int with tag 1 */
        fprintf(stderr,"MPI_Ssend %d data,tag=1\n", act_size);
        act_size = 4;
        MPI_Ssend( buffer, act_size, MPI_INT, dest, 2,
                   MPI_COMM_WORLD );
        /* synchronous send of 4 ints with tag 2 */
        fprintf(stderr,"MPI_Ssend %d data,tag=2\n", act_size);
    }
    else if (rank == dest) {  /* the receiver */
        MPI_Recv( buffer, act_size, MPI_INT, src, 1, MPI_COMM_WORLD,
                  &status1 );  /* up to act_size ints, tag 1 */
        MPI_Recv( buffer, act_size, MPI_INT, src, 2, MPI_COMM_WORLD,
                  &status2 );  /* up to act_size ints, tag 2 */
        MPI_Get_count( &status1, MPI_INT, &count1 );
        fprintf(stderr,"receive %d data,tag=%d\n",
                count1, status1.MPI_TAG);
        MPI_Get_count( &status2, MPI_INT, &count2 );
        fprintf(stderr,"receive %d data,tag=%d\n",
                count2, status2.MPI_TAG);
    }
    MPI_Finalize();
}

9.4 Ready mode. A ready-mode send may be started only if the matching receive has already been posted; this can save a handshake on systems able to exploit it, but posting the send too early is an error (Figure 35):

MPI_RSEND(buf, count, datatype, dest, tag, comm)
  IN buf       initial address of the send buffer (choice)
  IN count     number of elements to send (integer)
  IN datatype  datatype of each element (handle)
  IN dest      rank of the destination (integer)
  IN tag       message tag (integer)
  IN comm      communicator (handle)
int MPI_Rsend(void* buf, int count, MPI_Datatype datatype, int dest,
              int tag, MPI_Comm comm)
MPI_RSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR
        (MPI call 19: MPI_RSEND)
The FORTRAN program below synchronizes the two processes so that the ready send is posted only after the matching receive:

C     ready-mode send test
      program rsendtest
      include 'mpif.h'
      integer ierr
      call MPI_Init(ierr)
      call test_rsend
      call MPI_Finalize(ierr)
      end

      subroutine test_rsend
      include 'mpif.h'
      integer TEST_SIZE
      parameter (TEST_SIZE=2000)
      integer ierr, prev, next, count, tag, index, i, outcount,
     $        requests(2), indices(2), rank, size,
     $        status(MPI_STATUS_SIZE), statuses(MPI_STATUS_SIZE,2)
      logical flag
      real send_buf( TEST_SIZE ), recv_buf( TEST_SIZE )
      call MPI_Comm_rank( MPI_COMM_WORLD, rank, ierr )
      call MPI_Comm_size( MPI_COMM_WORLD, size, ierr )
      if (size .ne. 2) then
          print *, 'This test requires exactly 2 processes'
          call MPI_Abort( MPI_COMM_WORLD, 1, ierr )
      endif
      next = rank + 1
      if (next .ge. size) next = 0
      prev = rank - 1
      if (prev .lt. 0) prev = size - 1
      if (rank .eq. 0) then
          print *, " Rsend Test "
      end if
      tag = 1456
      count = TEST_SIZE / 3
      if (rank .eq. 0) then
C         wait for the zero-length ready-to-receive message from
C         process 1; MPI_BOTTOM with count 0 carries no data
          call MPI_Recv( MPI_BOTTOM, 0, MPI_INTEGER, next, tag,
     $                   MPI_COMM_WORLD, status, ierr )
          print *,"Process ",rank," post Ready send"
C         the matching receive is now known to be posted
          call MPI_Rsend(send_buf, count, MPI_REAL, next, tag,
     $                   MPI_COMM_WORLD, ierr)
      else
          print *, "process ",rank," post a receive call"
          call MPI_Irecv(recv_buf, TEST_SIZE, MPI_REAL,
     $                   MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD,
     $                   requests(1), ierr)
C         tell process 0 that the receive is posted
          call MPI_Send( MPI_BOTTOM, 0, MPI_INTEGER, next, tag,
     $                   MPI_COMM_WORLD, ierr )
C         complete the message posted by MPI_Irecv
          call MPI_Wait( requests(1), status, ierr )
          print *,"Process ", rank," Receive Rsend message from ",
     $            status(MPI_SOURCE)
      end if
      end

(Listing 28: a ready-mode send whose matching receive is guaranteed to be posted first.)
10 Installing and running MPICH

MPI is a standard with many implementations; MPICH, developed by Argonne National Laboratory and Mississippi State University, is the most widely used free implementation and runs on nearly every platform. This chapter describes installing and running MPICH under Linux and under Windows NT.

10.1 MPICH under Linux. Fetch the distribution, mpich.tar.gz (or mpich.tar.Z), from ftp://ftp.mcs.anl.org/pub/mpi; a version split into smaller pieces is at ftp://ftp.mcs.anl.org/pub/mpisplit and is reassembled with cat. Then:

(1) Unpack:

    tar zxvf mpich.tar.gz

or equivalently

    gunzip -c mpich.tar.gz | tar xovf -

or, for the .Z file,

    zcat mpich.tar.Z | tar xovf -

(or uncompress mpich.tar.Z followed by tar xvf mpich.tar).

(2) Enter the newly created directory:

    cd mpich

(3) Configure, giving the installation prefix:

    ./configure --prefix=/usr/local/mpich

(4) Build:

    make

(5) Try an example:

    cd examples/basic
    make cpi
    ../../bin/mpirun -np 4 cpi

and, optionally, run the test suite from $(HOME)/mpich with make testing.

(6) Install into the prefix chosen above:

    make install

The installed tree (here for mpich-1.2.1) contains:

$(HOME)/mpich-1.2.1/mpi-2-c++    C++ bindings
$(HOME)/mpich-1.2.1/bin          executables
$(HOME)/mpich-1.2.1/doc          documentation
$(HOME)/mpich-1.2.1/examples     example programs
$(HOME)/mpich-1.2.1/f90modules   Fortran90 modules
$(HOME)/mpich-1.2.1/include      header files
$(HOME)/mpich-1.2.1/lib          libraries
$(HOME)/mpich-1.2.1/man          man pages
$(HOME)/mpich-1.2.1/mpe          the MPE extensions
$(HOME)/mpich-1.2.1/mpid         the device layer
$(HOME)/mpich-1.2.1/romio        ROMIO parallel I/O
$(HOME)/mpich-1.2.1/share        the upshot/jumpshot log viewers
$(HOME)/mpich-1.2.1/src          MPICH source code
$(HOME)/mpich-1.2.1/util         utilities
$(HOME)/mpich-1.2.1/www          HTML documentation
Compiling and linking. MPICH supplies the wrapper compilers mpicc and mpiCC for C and C++, and mpif77 and mpif90 for FORTRAN77 and Fortran90; each invokes the underlying compiler with the MPI include paths and libraries already set. Useful wrapper options include:

-mpilog    generate MPE log files
-mpitrace  trace every MPI call
-mpianim   animate the communication in real time
-show      show the underlying compiler command without running it
-help      brief help
-echo      echo everything the wrapper does

MPI programs built this way are SPMD (Single Program Multiple Data): one executable is started on every node; master/slave behavior is obtained by branching on the rank, as in Chapter 8. Figure 37 shows the overall flow of building and running an MPI program: edit the source, compile with a wrapper, then launch with mpirun.
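For example -- the file names here are illustrative -- the Hello World programs of Chapter 5 would be built and launched as:

    mpif77 -o hello hello.f      # the FORTRAN77 version
    mpicc  -o hello hello.c      # the C version
    mpirun -np 4 hello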
Before running across several machines, each machine must allow the others to start processes on it via rsh: either list the machines in /etc/hosts.equiv, or put the partner host/user pairs in the file .rhosts in the user's home directory. For instance, for the node tp5 of a 16-node cluster tp1, ..., tp16 to start processes as user pact on tp1, the .rhosts of pact on tp1 must contain the line "tp5 pact", and symmetrically for the other nodes.

A compiled MPI program is started with

    mpirun -np N program

which launches N copies of program on the machines listed in $(HOME)/mpich/util/machines/machines.LINUX, for example:

tp5.cs.tsinghua.edu.cn
tp1.cs.tsinghua.edu.cn
tp2.cs.tsinghua.edu.cn
tp3.cs.tsinghua.edu.cn
tp4.cs.tsinghua.edu.cn
tp8.cs.tsinghua.edu.cn

With these six machines listed, executing on tp5.cs.tsinghua.edu.cn:

    cd $(HOME)/mpich/examples/basic
    mpirun -np 6 cpi

runs cpi on all six machines (tp1, tp2, tp3, tp4, tp8 and the local tp5). A different machine file can be named explicitly:

    mpirun -machinefile hosts -np 6 cpi

where hosts is a file in the machines.LINUX format. Full control over which executable runs where is given by a p4 procgroup file:

    mpirun -p4pg pgfile cpi

Each line of pgfile has the form (Figure 38)

    <hostname> <#processes> <path to executable>

for example (Figure 39):

tp5 0 /home/pact/mpich/examples/basic/cpi
tp1 1 /home/pact/mpich/examples/basic/cpi
tp2 1 /home/pact/mpich/examples/basic/cpi
tp3 1 /home/pact/mpich/examples/basic/cpi
tp4 1 /home/pact/mpich/examples/basic/cpi
tp8 1 /home/pact/mpich/examples/basic/cpi

The count 0 on the tp5 line means that the process started on tp5 is process 0 itself, not an additional process.
The general launch command is

    mpirun -np <number of processes> <program name and arguments>

MPICH supports many devices, among them: chameleon (chameleon/pvm, chameleon/p4, ...), meiko, paragon (ch_nx), p4 (ch_p4), ibmspx (IBM SP2, ch_eui), anlspx (ANL's SPx, ch_eui), ksr (KSR 1 and 2, ch_p4), sgi_mp (SGI, ch_shmem), cray_t3d (Cray T3D, t3d), smp (SMPs, ch_shmem) and execer. The full syntax is

    mpirun [mpirun_options...] <progname> [options...]

with, among others, the options:

-arch <architecture>       use machines.<arch> in ${MPIR_HOME}/util/machines
-h                         help
-machine <machine name>    use the startup procedure for <machine name>
-machinefile <machine-file name>  take the machine list from this file
-np <np>                   number of processes
-nolocal                   do not run on the local machine
-stdin filename            standard input for process 0
-t                         test: show what would happen without running
-v                         verbose
-dbx / -gdb / -xxgdb       start process 0 under the named debugger
-tv                        start under totalview

Device-specific options include:

NEC CENJU-3:  -batch (batch mode), -stdout filename, -stderr filename
Nexus:        -nexuspg filename (process-group file, replaces -np and
              -nolocal; -leave_pg keeps the generated file),
              -nexusdb filename (Nexus resource database)
execer:       -e (use execer), -pg (use a p4 procgroup file)
p4:           -p4pg filename (p4 procgroup file, replaces -np and
              -nolocal), -tcppg filename (tcp procgroup file),
              -p4ssport num (p4 secure-server port; num=0 takes the
              port from MPI_P4SSPORT, and setting MPI_USEP4SSPORT with
              MPI_P4SSPORT is equivalent to -p4ssport 0)
batch queues: -mvhome (move the executable to the home directory),
              -mvback files, -maxtime min, -nopoll, -mem value,
              -cpu time (CPU time limit)
IBM SP2:      -cac name (ANL scheduler account)
Intel Paragon: -paragontype name, -paragonname name, -paragonpn name

The -arch and -np options may be repeated to build heterogeneous jobs. To run program on 2 sun4 machines and 3 rs6000 machines:

    mpirun -arch sun4 -np 2 -arch rs6000 -np 3 program

The executable must exist on all machines. %a in a name is replaced by the architecture, so

    mpirun -arch sun4 -np 2 -arch rs6000 -np 3 program.%a

runs program.sun4 on the sun4s and program.rs6000 on the rs6000s, and

    mpirun -arch sun4 -np 2 -arch rs6000 -np 3 /tmp/me/%a/program

takes the executables from /tmp/me/sun4 and /tmp/me/rs6000.

mpiman browses the MPICH documentation, both the UNIX man pages and the HTML pages, through -xmosaic (the default), -mosaic or -netscape (Web browsers), -xman (the X man browser) or -man (plain man); for example, mpiman -man MPI_Send.
mpireconfig regenerates a Makefile (or any file) from its .in template for the installed MPICH:

    mpireconfig filename

processes filename.in into filename.

10.2 MPICH under Windows NT. MPICH.NT is the MPICH implementation for Windows NT; it communicates over TCP/IP sockets or over VIA, works with MS Visual C++ and Digital Fortran 6.0, and provides both the MPI and the PMPI profiling interfaces for C and FORTRAN. Download the all-in-one package from ftp://ftp.mcs.anl.gov/pub/mpi/nt/, unpack it and run setup; by default MPICH.NT installs into c:\Program Files\Argonne National Lab\MPICH.NT, including the MPI launcher and the SDK.

To build an MPI program with MS Visual C++, create a project or makefile that adds [MPICH Home]\include to the include path; compile with /MTd (Debug) or /MT (Release); and link against ws2_32.lib mpichd.lib pmpichd.lib romiod.lib (Debug) or ws2_32.lib mpich.lib pmpich.lib romio.lib (Release). The pmpich*.lib libraries supply the PMPI_ profiling entry points; all the libraries live in [MPICH Home]\lib.

For FORTRAN (Visual Fortran 6+), the interface file mpif.h is provided; set the compiler options /iface:cref /iface:nomixed_str_len_arg so that the calling convention matches the C-built MPICH libraries.

MPICH.NT runs MPI programs through its Remote Shell Server and the launcher MPIRun.exe (a simple launcher that can also run jobs locally without the service). The Remote Shell Server is a DCOM server running under the SYSTEM account on every node; MPIRun contacts it to start the processes of a job. Both MPIRun.exe and the configuration tool MPIConfig live in c:\Program Files\Argonne National Lab\MPICH.NT\RemoteShell\Bin. MPIConfig records the MPI hosts and related settings: Refresh re-scans the network, Find locates the hosts running the service, Verify checks that each DCOM server answers, and Set writes the settings -- the host list ("set HOSTS") used by MPIRun, the temporary directory ("set TEMP", default C:\) used by the remote shell service, and the timeout.
MPIRun starts MPI programs in three forms (Figure 40):

    MPIRun configfile [-logon] [args...]
    MPIRun -np #processes [-logon] [-env "var1=val1 var2=val2..."] executable [args...]
    MPIRun -localonly #processes [-env "var1=val1 var2=val2..."] executable [args...]

Under NT the executable is given with a full path, either local (c:\somepath\myapp.exe) or UNC (\\host\share\somepath\myapp.exe). A configuration file has the form:

exe c:\somepath\myapp.exe     (or \\host\share\somepath\myapp.exe)
[args arg1 arg2 arg3 ...]
[env VAR1=VAL1 VAR2=VAL2 ... VARn=VALn]
hosts
hosta #procs [path\myapp.exe]
hostb #procs [\\host\share\somepath\myapp2.exe]
hostc #procs

Suppose an 8-node NT cluster NT01, NT02, ..., NT08 is to run the MPI program testmpint, installed as c:\mpint\testmpint.exe. The simplest configuration file, mpiconf1 (Figure 42), lists one process per node:

exe c:\mpint\testmpint.exe
hosts
NT01 1
NT02 1
NT03 1
NT04 1
NT05 1
NT06 1
NT07 1
NT08 1

(Figure 42: NT MPI configuration file 1.) Started with

    mpirun mpiconf1

this runs testmpint with 8 processes, one per node.

If the executable sits at a different path on each node, the path is given per host, as in mpiconf2 (Figure 43):

exe c:\mpint\testmpint.exe
hosts
NT01 1 c:\mpint\testmpint.exe
NT02 1 d:\mpint\testmpint2.exe
NT03 1 e:\mpint\testmpint1.exe
NT04 1 c:\testmpint.exe
NT05 1 c:\test\testmpint9.exe
NT06 1 d:\abc\abc.exe
NT07 1 c:\temp\testmpint7.exe
NT08 1 c:\mpint\testmpint.exe

(Figure 43: NT MPI configuration file 2.) mpirun mpiconf2 again starts 8 processes, one per node.

The per-host count may exceed 1, as in mpiconf3 (Figure 44):

exe c:\mpint\testmpint.exe
hosts
NT01 2 c:\mpint\testmpint.exe
NT02 3 d:\mpint\testmpint2.exe
NT03 1 e:\mpint\testmpint1.exe
NT04 4 c:\testmpint.exe
NT05 1 c:\test\testmpint9.exe
NT06 1 d:\abc\abc.exe
NT07 2 c:\temp\testmpint7.exe
NT08 1 c:\mpint\testmpint.exe

(Figure 44: NT MPI configuration file 3.) mpirun mpiconf3 starts 15 processes in total: 2 on NT01, 3 on NT02, and so on.

To run on the local machine only:

    mpirun -localonly 8 testmpint

starts 8 local processes; the general form is MPIRun.exe -localonly #procs. Further options: -tcp selects TCP sockets rather than VIA, and -env "var1=val1 var2=val2 ... varn=valn" sets environment variables for the launched processes.

With -logon, mpirun prompts for a user name and password. These can instead be stored, encrypted, with MPIRegister.exe, found in c:\Program Files\Argonne National Lab\MPICH.NT\RemoteShell\Bin; once registered, the credentials are used by every subsequent mpirun. MPIRegister -remove removes them, after which mpirun prompts again.
11 Frequent mistakes in MPI programs

11.1 Mistakes with definite symptoms

- Forgetting ierr: in Fortran, every MPI subroutine takes the error code as its last argument; omitting ierr may compile but fails at run time.
- Declaring status as a scalar: status must be an INTEGER array of size MPI_STATUS_SIZE; a plain integer is overwritten when MPI_Recv fills the status, corrupting memory.
- Confusing strings with character arrays: a 10-character Fortran string is declared character*10 string10, not character string(10), which is an array of 10 single characters; the two behave differently in MPI calls.
- Using names beginning with MPI_: all such names are reserved for the library; user variables, functions and subroutines must not begin with MPI_.
- Mishandling argc and argv in C: MPI_Init must receive the addresses of main's argc and argv, as MPI_Init(&argc, &argv); MPI may inspect and strip MPI-specific command-line arguments, so argc and argv should not be used before MPI_Init.
- Calling MPI functions before MPI_Init or after MPI_Finalize: with minor exceptions such as MPI_Initialized, no MPI call is allowed outside that bracket.
- Mismatching collective and point-to-point calls: a message sent with MPI_Send cannot be received by MPI_Bcast, and a broadcast cannot be received by MPI_Recv; MPI_Bcast must be executed by every process of the communicator, root and receivers alike. (The skeleton after this list illustrates the last three rules.)
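A minimal C skeleton illustrating those rules -- the variable names and the broadcast value are generic, not from the text:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, n = 0;
    /* no MPI calls, and no use of argc/argv, before this point */
    MPI_Init(&argc, &argv);   /* pass the ADDRESSES of argc and argv */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) n = 42;
    /* every process, root included, calls MPI_Bcast;
       the receivers must NOT use MPI_Recv for a broadcast */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("process %d: n = %d\n", rank, n);
    MPI_Finalize();           /* no MPI calls after this point */
    return 0;
}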
- Assuming MPI is thread-safe: the MPI-1 standard does not require thread safety, and MPICH is not thread-safe; MPI calls should be issued from a single thread per process (pthread threads must not call MPI concurrently).
- Writing unsafe send/receive orderings: if process 1 issues two MPI_Sends to process 2 before process 2 issues its MPI_Recvs -- or both sides send first, as in Section 7.7 -- the program deadlocks whenever the system cannot buffer the messages. The remedies are to pair the operations correctly, to use MPI_Sendrecv, to provide explicit buffering with MPI_Buffer_attach, or to use the nonblocking MPI_Isend and MPI_Irecv of Chapter 12.
- Careless address arithmetic across languages: C and FORTRAN lay out data differently; addresses should be obtained with the MPI address functions (relative to MPI_BOTTOM), and FORTRAN COMMON blocks handled with particular care.

11.2 SPMD structure. An MPI/MPICH program is SPMD -- the same executable runs everywhere -- but different processes may take different branches, so SPMD does not mean every process does the same thing; the same source runs unchanged under both the NT and the Linux MPICH.

11.3 Summary. Most MPI bugs fall into the categories above; checking the argument lists, the MPI_Init/MPI_Finalize bracketing and the pairing of sends and receives resolves the majority of problems. [The remainder of this chapter's closing discussion was lost in extraction.]
12 Nonblocking communication

The blocking calls of the previous chapters return only when the operation is locally complete. One property of blocking communication is that messages between the same pair of processes are nonovertaking: if process 0 sends two messages that both match a receive of process 1, the receive takes the message sent first. In the fragment below the first MPI_RECV uses MPI_ANY_TAG, yet it is guaranteed to receive buf1, the first message; the second receive, with an explicit tag, takes the second:

      CALL MPI_COMM_RANK(comm, rank, ierr)
      IF (rank .EQ. 0) THEN
          CALL MPI_BSEND(buf1, count, MPI_REAL, 1, tag, comm, ierr)
          CALL MPI_BSEND(buf2, count, MPI_REAL, 1, tag, comm, ierr)
      ELSE IF (rank .EQ. 1) THEN
          CALL MPI_RECV(buf1, count, MPI_REAL, 0, MPI_ANY_TAG, comm,
     &                  status, ierr)
          CALL MPI_RECV(buf2, count, MPI_REAL, 0, tag, comm,
     &                  status, ierr)
      END IF

(Message order between a pair of processes is preserved.)

Blocking communication also wastes time whenever the process could compute while a message is in transit, or while an I/O device works at its own pace: the computation of step i can proceed in parallel with the communication of the data for step i+1. Nonblocking calls make this overlap of computation and communication expressible: they start the operation and return at once.
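A minimal C sketch of the overlap idea -- the message size, tag and stand-in workload are illustrative assumptions:

#include "mpi.h"
int main(int argc, char *argv[])
{
    int rank, other, i;
    double out[1000] = {0}, in[1000], sum = 0.0;
    MPI_Request reqs[2];
    MPI_Status  stats[2];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;  /* assumes exactly 2 processes */
    /* start the exchange ... */
    MPI_Isend(out, 1000, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(in,  1000, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, &reqs[1]);
    /* ... compute on data not involved in the transfer ... */
    for (i = 0; i < 1000000; i++) sum += (double)i;
    /* ... and complete the exchange before touching out or in again */
    MPI_Waitall(2, reqs, stats);
    MPI_Finalize();
    return 0;
}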
(Figure 47: overlapping computation and communication.)

Nonblocking communication comes in the same four modes as blocking communication, plus persistent (repeated) forms for communications re-executed many times with the same arguments (Table 7):

communication mode   nonblocking call   persistent form
standard             MPI_ISEND          MPI_SEND_INIT
buffered             MPI_IBSEND         MPI_BSEND_INIT
synchronous          MPI_ISSEND         MPI_SSEND_INIT
ready                MPI_IRSEND         MPI_RSEND_INIT
receive              MPI_IRECV          MPI_RECV_INIT

A nonblocking call only starts its operation; completion is checked or awaited with the calls of Table 8:

                      one        any of several   some     all
test (nonblocking)    MPI_TEST   MPI_TESTANY      MPI_TESTSOME  MPI_TESTALL
wait (blocking)       MPI_WAIT   MPI_WAITANY      MPI_WAITSOME  MPI_WAITALL
MPI_ISEND starts a standard-mode nonblocking send and returns a request handle in request; MPI_IRECV starts a nonblocking receive, likewise returning a request. The request identifies the pending operation in the later completion calls.

MPI_ISEND(buf, count, datatype, dest, tag, comm, request)
  IN  buf       initial address of the send buffer (choice)
  IN  count     number of elements to send (integer)
  IN  datatype  datatype of each element (handle)
  IN  dest      rank of the destination (integer)
  IN  tag       message tag (integer)
  IN  comm      communicator (handle)
  OUT request   request handle (handle)
int MPI_Isend(void* buf, int count, MPI_Datatype datatype, int dest,
              int tag, MPI_Comm comm, MPI_Request *request)
MPI_ISEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR
        (MPI call 20: MPI_ISEND)

MPI_IRECV(buf, count, datatype, source, tag, comm, request)
  OUT buf       initial address of the receive buffer (choice)
  IN  count     maximum number of elements to receive (integer)
  IN  datatype  datatype of each element (handle)
  IN  source    rank of the source (integer)
  IN  tag       message tag (integer)
  IN  comm      communicator (handle)
  OUT request   request handle (handle)
int MPI_Irecv(void* buf, int count, MPI_Datatype datatype, int source,
              int tag, MPI_Comm comm, MPI_Request *request)
MPI_IRECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, REQUEST, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM, REQUEST, IERROR
        (MPI call 21: MPI_IRECV)

12.4 Nonblocking sends in the other modes. The buffered, synchronous and ready modes have nonblocking forms too, named by inserting I (immediate) after the B, S or R prefix: MPI_IBSEND, MPI_ISSEND, MPI_IRSEND. All three have the same argument list as MPI_ISEND.

MPI_ISSEND(buf, count, datatype, dest, tag, comm, request)
int MPI_Issend(void* buf, int count, MPI_Datatype datatype, int dest,
               int tag, MPI_Comm comm, MPI_Request *request)
MPI_ISSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR
        (MPI call 22: MPI_ISSEND)

MPI_IBSEND(buf, count, datatype, dest, tag, comm, request)
int MPI_Ibsend(void* buf, int count, MPI_Datatype datatype, int dest,
               int tag, MPI_Comm comm, MPI_Request *request)
MPI_IBSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR
        (MPI call 23: MPI_IBSEND)

MPI_IRSEND(buf, count, datatype, dest, tag, comm, request)
int MPI_Irsend(void* buf, int count, MPI_Datatype datatype, int dest,
               int tag, MPI_Comm comm, MPI_Request *request)
MPI_IRSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR
        (MPI call 24: MPI_IRSEND)

12.5 Completion: MPI_WAIT and MPI_TEST. A pending operation is finished with MPI_WAIT or probed with MPI_TEST. MPI_WAIT blocks until the operation identified by request completes, returning details in status. MPI_TEST returns immediately: with flag=true if the operation has completed, in which case it behaves exactly like MPI_WAIT, and with flag=false otherwise.
MPI_WAIT(request, status)
  INOUT request  request handle (handle)
  OUT   status   status object (status)
int MPI_Wait(MPI_Request *request, MPI_Status *status)
MPI_WAIT(REQUEST, STATUS, IERROR)
    INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR
        (MPI call 25: MPI_WAIT)

MPI_TEST(request, flag, status)
  INOUT request  request handle (handle)
  OUT   flag     true if the operation has completed (logical)
  OUT   status   status object (status)
int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)
MPI_TEST(REQUEST, FLAG, STATUS, IERROR)
    LOGICAL FLAG
    INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR
        (MPI call 26: MPI_TEST)

For a nonblocking send, MPI_WAIT returns when the send buffer may be reused; for a nonblocking receive, when the received data is available in the buffer:

      CALL MPI_COMM_RANK(comm, rank, ierr)
      IF (rank .EQ. 0) THEN
          CALL MPI_ISEND(a(1), 10, MPI_REAL, 1, tag, comm,
     &                   request, ierr)
C         computation may overlap the send here
          CALL MPI_WAIT(request, status, ierr)
      ELSE IF (rank .EQ. 1) THEN
          CALL MPI_IRECV(a(1), 15, MPI_REAL, 0, tag, comm,
     &                   request, ierr)
C         computation may overlap the receive here
          CALL MPI_WAIT(request, status, ierr)
      END IF

(Example 30: completing nonblocking operations with MPI_WAIT.)
Several pending operations can be completed together. MPI_WAITANY blocks until one of the operations in array_of_requests completes and returns its index (if several have completed, one is chosen arbitrarily); it is equivalent to MPI_WAIT(array_of_requests[i], status) for the i returned in index. MPI_WAITALL completes all of them, as if by

      DO I=1,COUNT
          CALL MPI_WAIT(array_of_requests(I), status, ierr)
      END DO

MPI_WAITSOME falls between the two: it blocks until at least one operation has completed, reporting in outcount how many, with their indices in array_of_indices and their statuses in array_of_statuses.

MPI_WAITANY(count, array_of_requests, index, status)
  IN    count              list length (integer)
  INOUT array_of_requests  array of request handles (array of handles)
  OUT   index              index of the completed operation (integer)
  OUT   status             status object (status)
int MPI_Waitany(int count, MPI_Request *array_of_requests, int *index,
                MPI_Status *status)
MPI_WAITANY(COUNT, ARRAY_OF_REQUESTS, INDEX, STATUS, IERROR)
    INTEGER COUNT, ARRAY_OF_REQUESTS(*), INDEX,
            STATUS(MPI_STATUS_SIZE), IERROR
        (MPI call 27: MPI_WAITANY)

MPI_WAITALL(count, array_of_requests, array_of_statuses)
  IN    count              list length (integer)
  INOUT array_of_requests  array of request handles (array of handles)
  OUT   array_of_statuses  array of status objects (array of status)
int MPI_Waitall(int count, MPI_Request *array_of_requests,
                MPI_Status *array_of_statuses)
MPI_WAITALL(COUNT, ARRAY_OF_REQUESTS, ARRAY_OF_STATUSES, IERROR)
    INTEGER COUNT, ARRAY_OF_REQUESTS(*)
    INTEGER ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*), IERROR
        (MPI call 28: MPI_WAITALL)

MPI_WAITSOME(incount, array_of_requests, outcount, array_of_indices,
             array_of_statuses)
  IN    incount            list length (integer)
  INOUT array_of_requests  array of request handles (array of handles)
  OUT   outcount           number of completed operations (integer)
  OUT   array_of_indices   their indices (array of integers)
  OUT   array_of_statuses  their statuses (array of status)
int MPI_Waitsome(int incount, MPI_Request *array_of_requests,
                 int *outcount, int *array_of_indices,
                 MPI_Status *array_of_statuses)
MPI_WAITSOME(INCOUNT, ARRAY_OF_REQUESTS, OUTCOUNT, ARRAY_OF_INDICES,
             ARRAY_OF_STATUSES, IERROR)
    INTEGER INCOUNT, ARRAY_OF_REQUESTS(*), OUTCOUNT,
            ARRAY_OF_INDICES(*),
            ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*), IERROR
        (MPI call 29: MPI_WAITSOME)
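A hedged C sketch of MPI_Waitsome as a handle-work-as-it-arrives loop -- the four messages, tags and payloads are illustrative assumptions. Completed requests are set to MPI_REQUEST_NULL, so later MPI_Waitsome calls ignore them:

#include <stdio.h>
#include "mpi.h"
#define NMSG 4
int main(int argc, char *argv[])
{
    int rank, i, outcount, done = 0;
    int data[NMSG], idx[NMSG];
    MPI_Request reqs[NMSG];
    MPI_Status  stats[NMSG];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 1) {
        /* post all four receives, one per tag */
        for (i = 0; i < NMSG; i++)
            MPI_Irecv(&data[i], 1, MPI_INT, 0, i, MPI_COMM_WORLD,
                      &reqs[i]);
        /* process each message as soon as it has arrived */
        while (done < NMSG) {
            MPI_Waitsome(NMSG, reqs, &outcount, idx, stats);
            for (i = 0; i < outcount; i++) {
                printf("message %d arrived: %d\n", idx[i], data[idx[i]]);
                done++;
            }
        }
    } else if (rank == 0) {
        for (i = NMSG - 1; i >= 0; i--) {  /* send in reverse order */
            int v = 10 * i;
            MPI_Send(&v, 1, MPI_INT, 1, i, MPI_COMM_WORLD);
        }
    }
    MPI_Finalize();
    return 0;
}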
The test calls are the nonblocking counterparts. MPI_TESTANY returns flag=true with the index of a completed operation if there is one, and flag=false otherwise; MPI_TESTALL returns flag=true only if all the operations have completed; MPI_TESTSOME behaves like MPI_WAITSOME but returns immediately, with outcount=0 when nothing has completed.

MPI_TESTANY(count, array_of_requests, index, flag, status)
  IN    count              list length (integer)
  INOUT array_of_requests  array of request handles (array of handles)
  OUT   index              index of the completed operation, or
                           MPI_UNDEFINED (integer)
  OUT   flag               true if some operation completed (logical)
  OUT   status             status object (status)
int MPI_Testany(int count, MPI_Request *array_of_requests, int *index,
                int *flag, MPI_Status *status)
MPI_TESTANY(COUNT, ARRAY_OF_REQUESTS, INDEX, FLAG, STATUS, IERROR)
    LOGICAL FLAG
    INTEGER COUNT, ARRAY_OF_REQUESTS(*), INDEX,
            STATUS(MPI_STATUS_SIZE), IERROR
        (MPI call 30: MPI_TESTANY)

MPI_TESTALL(count, array_of_requests, flag, array_of_statuses)
  IN    count              list length (integer)
  INOUT array_of_requests  array of request handles (array of handles)
  OUT   flag               true if all operations completed (logical)
  OUT   array_of_statuses  array of status objects (array of status)
int MPI_Testall(int count, MPI_Request *array_of_requests, int *flag,
                MPI_Status *array_of_statuses)
MPI_TESTALL(COUNT, ARRAY_OF_REQUESTS, FLAG, ARRAY_OF_STATUSES, IERROR)
    LOGICAL FLAG
    INTEGER COUNT, ARRAY_OF_REQUESTS(*),
            ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*), IERROR
        (MPI call 31: MPI_TESTALL)

MPI_TESTSOME(incount, array_of_requests, outcount, array_of_indices,
             array_of_statuses)
  IN    incount            list length (integer)
  INOUT array_of_requests  array of request handles (array of handles)
  OUT   outcount           number of completed operations (integer)
  OUT   array_of_indices   their indices (array of integers)
  OUT   array_of_statuses  their statuses (array of status)
int MPI_Testsome(int incount, MPI_Request *array_of_requests,
                 int *outcount, int *array_of_indices,
                 MPI_Status *array_of_statuses)
MPI_TESTSOME(INCOUNT, ARRAY_OF_REQUESTS, OUTCOUNT, ARRAY_OF_INDICES,
             ARRAY_OF_STATUSES, IERROR)
    INTEGER INCOUNT, ARRAY_OF_REQUESTS(*), OUTCOUNT,
            ARRAY_OF_INDICES(*),
            ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*), IERROR
        (MPI call 32: MPI_TESTSOME)

12.6 Cancelling and freeing pending communication. A pending nonblocking operation can be cancelled with MPI_CANCEL:
MPI_CANCEL(request)
  IN request  request handle (handle)
int MPI_Cancel(MPI_Request *request)
MPI_CANCEL(REQUEST, IERROR)
    INTEGER REQUEST, IERROR
        (MPI call 33: MPI_CANCEL)

MPI_CANCEL marks the operation for cancellation and returns at once; the request must still be completed with MPI_WAIT or MPI_TEST (or released with MPI_REQUEST_FREE). If the cancellation overtakes the communication, the operation is cancelled and its status says so; otherwise the communication completes normally. Whether a completed operation was in fact cancelled is read from its status:

MPI_TEST_CANCELLED(status, flag)
  IN  status  status object (status)
  OUT flag    true if the operation was cancelled (logical)
int MPI_Test_cancelled(MPI_Status *status, int *flag)
MPI_TEST_CANCELLED(STATUS, FLAG, IERROR)
    LOGICAL FLAG
    INTEGER STATUS(MPI_STATUS_SIZE), IERROR
        (MPI call 34: MPI_TEST_CANCELLED)

The fragment below cancels a receive and reposts it if the cancellation succeeded:

MPI_Comm_rank( MPI_COMM_WORLD, &rank );
if (rank == 0) {
    MPI_Send( sbuf, 1, MPI_INT, 1, 99, MPI_COMM_WORLD );
    /* process 0 sends */
}
else if (rank == 1) {
    MPI_Irecv( rbuf, 1, MPI_INT, 0, 99, MPI_COMM_WORLD, &request );
    /* process 1 posts a nonblocking receive ...   */
    MPI_Cancel( &request );            /* ... and cancels it */
    MPI_Wait( &request, &status );     /* complete the request */
    MPI_Test_cancelled( &status, &flag );  /* did the cancel win? */
    if (flag)  /* yes: repost the receive */
        MPI_Irecv( rbuf, 1, MPI_INT, 0, 99, MPI_COMM_WORLD, &request );
}
MPI_REQUEST_FREE releases a request object without waiting on it; the communication itself is allowed to complete first, and request is set to MPI_REQUEST_NULL:

MPI_REQUEST_FREE(request)
  INOUT request  request handle, set to MPI_REQUEST_NULL (handle)
int MPI_Request_free(MPI_Request *request)
MPI_REQUEST_FREE(REQUEST, IERROR)
    INTEGER REQUEST, IERROR
        (MPI call 35: MPI_REQUEST_FREE)

Example 32 frees each send request immediately instead of waiting on it, while the receives are still completed with MPI_WAIT:

      CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      IF (rank .EQ. 0) THEN
          DO i=1, n
              CALL MPI_ISEND(outval, 1, MPI_REAL, 1, 0,
     &                       MPI_COMM_WORLD, req, ierr)
              CALL MPI_REQUEST_FREE(req, ierr)
C             the MPI_ISEND request is freed without waiting on it
              CALL MPI_IRECV(inval, 1, MPI_REAL, 1, 0,
     &                       MPI_COMM_WORLD, req, ierr)
              CALL MPI_WAIT(req, status, ierr)
C             the MPI_IRECV is completed with MPI_WAIT
          END DO
      ELSE IF (rank .EQ. 1) THEN
          CALL MPI_IRECV(inval, 1, MPI_REAL, 0, 0,
     &                   MPI_COMM_WORLD, req, ierr)
          CALL MPI_WAIT(req, status, ierr)
C         receive from process 0 first
          DO i=1, n-1
              CALL MPI_ISEND(outval, 1, MPI_REAL, 0, 0,
     &                       MPI_COMM_WORLD, req, ierr)
              CALL MPI_REQUEST_FREE(req, ierr)
              CALL MPI_IRECV(inval, 1, MPI_REAL, 0, 0,
     &                       MPI_COMM_WORLD, req, ierr)
              CALL MPI_WAIT(req, status, ierr)
          END DO
          CALL MPI_ISEND(outval, 1, MPI_REAL, 0, 0,
     &                   MPI_COMM_WORLD, req, ierr)
          CALL MPI_WAIT(req, status, ierr)
      END IF

(Example 32: freeing send requests with MPI_REQUEST_FREE.)

12.7 Probing for messages: MPI_PROBE and MPI_IPROBE. MPI_IPROBE(source, tag, comm, flag, status) checks, without receiving, whether a message matching <source, tag, comm> has arrived. If so, it returns flag=true and fills status exactly as MPI_RECV(..., source, tag, comm, status) would, so the actual source, tag and (via MPI_GET_COUNT) length can be inspected before the receive; a subsequent MPI_RECV with the same arguments receives that very message. If no matching message is pending, flag=false and status is undefined. As in a receive, source may be MPI_ANY_SOURCE and tag may be MPI_ANY_TAG, while comm has no wildcard. MPI_PROBE is the blocking form: it returns only when a matching message is available.

MPI_PROBE(source, tag, comm, status)
  IN source  source rank, or MPI_ANY_SOURCE (integer)
  IN tag     tag value, or MPI_ANY_TAG (integer)
  IN comm    communicator (handle)
  OUT status status object (status)
int MPI_Probe(int source, int tag, MPI_Comm comm, MPI_Status *status)
MPI_PROBE(SOURCE, TAG, COMM, STATUS, IERROR)
    INTEGER SOURCE, TAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR
        (MPI call 36: MPI_PROBE)
128 MPI_IPROBE(source,tag,comm,flag,status)
IN source (source rank, or MPI_ANY_SOURCE)
IN tag (message tag, or MPI_ANY_TAG)
IN comm (communicator)
OUT flag (true if a matching message is pending)
OUT status (status object)
int MPI_Iprobe(int source, int tag, MPI_Comm comm, int *flag, MPI_Status *status)
MPI_IPROBE(SOURCE,TAG,COMM,FLAG,STATUS,IERROR)
LOGICAL FLAG
INTEGER SOURCE,TAG,COMM,STATUS(MPI_STATUS_SIZE),IERROR
MPI 37 MPI_IPROBE
      CALL MPI_COMM_RANK(comm, rank, ierr)
      IF (rank .EQ. 0) THEN
         CALL MPI_SEND(i, 1, MPI_INTEGER, 2, 0, comm, ierr)
C        process 0 sends an integer to process 2
      ELSE IF (rank .EQ. 1) THEN
         CALL MPI_SEND(x, 1, MPI_REAL, 2, 0, comm, ierr)
C        process 1 sends a real to process 2
      ELSE IF (rank .EQ. 2) THEN
         DO i=1,2
            CALL MPI_PROBE(MPI_ANY_SOURCE, 0, comm, status, ierr)
C           process 2 probes for either message
            IF (status(MPI_SOURCE) .EQ. 0) THEN
C              the pending message is from process 0
               CALL MPI_RECV(i, 1, MPI_INTEGER, 0, 0, comm, status, ierr)
            ELSE
C              the pending message is from process 1
               CALL MPI_RECV(x, 1, MPI_REAL, 1, 0, comm, status, ierr)
            END IF
         END DO
      END IF
33 Receiving the probed message by its exact source
      CALL MPI_COMM_RANK(comm, rank, ierr)
      IF (rank .EQ. 0) THEN
         CALL MPI_SEND(i, 1, MPI_INTEGER, 2, 0, comm, ierr)
      ELSE IF (rank .EQ. 1) THEN
111
129
         CALL MPI_SEND(x, 1, MPI_REAL, 2, 0, comm, ierr)
      ELSE IF (rank .EQ. 2) THEN
         DO i=1,2
            CALL MPI_PROBE(MPI_ANY_SOURCE, 0, comm, status, ierr)
            IF (status(MPI_SOURCE) .EQ. 0) THEN
               CALL MPI_RECV(i, 1, MPI_INTEGER, MPI_ANY_SOURCE,
     $                       0, comm, status, ierr)
C              may not receive the message returned by MPI_PROBE
            ELSE
               CALL MPI_RECV(x, 1, MPI_REAL, MPI_ANY_SOURCE,
     $                       0, comm, status, ierr)
C              may not receive the message returned by MPI_PROBE
            END IF
         END DO
      END IF
34 Using MPI_ANY_SOURCE in the receive, instead of the source returned by MPI_PROBE, is erroneous here: the receive may match a different message than the one that was probed.
12.8 The order of nonblocking communications
Nonblocking operations between the same pair of processes are matched in the order in which they are posted. In the example below process 0 first sends a and then b; process 1 posts its receive for a first, so a and b cannot be confused even though both transfers are outstanding at the same time.
      CALL MPI_COMM_RANK(comm, rank, ierr)
      IF (rank .EQ. 0) THEN
         CALL MPI_ISEND(a, 1, MPI_REAL, 1, 0, comm, r1, ierr)
C        process 0 sends a to process 1 first
         CALL MPI_ISEND(b, 1, MPI_REAL, 1, 0, comm, r2, ierr)
C        then sends b
      ELSE IF (rank .EQ. 1) THEN
         CALL MPI_IRECV(a, 1, MPI_REAL, 0, MPI_ANY_TAG, comm, r1, ierr)
C        the first receive posted by process 1 matches a
         CALL MPI_IRECV(b, 1, MPI_REAL, 0, 0, comm, r2, ierr)
C        the second matches b
      END IF
      CALL MPI_WAIT(r1, status, ierr)
      CALL MPI_WAIT(r2, status, ierr)
112
130 C Jacobi Jacobi Jacobi Jacobi program main implicit none include 'mpif.h' integer totalsize,mysize,steps parameter (totalsize=16) parameter (mysize=totalsize/4,steps=10) integer n, myid, numprocs, i, j,rc real a(totalsize,mysize+2),b(totalsize,mysize+2) integer begin_col,end_col,ierr integer left,right,tag1,tag2 integer status(mpi_status_size,4) integer req(4) call MPI_INIT( ierr ) call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr ) call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr ) print *, "Process ", myid, " of ", numprocs, " is alive" C 113
131 do j=1,mysize+2 do i=1,totalsize a(i,j)=0.0 end do end do do i=1,totalsize a(i,1)=8.0 a(i,mysize+2)=8.0 end do if (myid.eq. 0) then do i=1,totalsize a(i,2)=8.0 end do end if if (myid.eq. 3) then do i=1,totalsize a(i,mysize+1)=8.0 end do end if do i=1,mysize+2 a(1,i)=8.0 a(totalsize,i)=8.0 end do tag1=3 tag2=4 C C if (myid.gt. 0) then left=myid-1 else left=mpi_proc_null end if if (myid.lt. 3) then right=myid+1 else right=mpi_proc_null end if begin_col=2 end_col=mysize+1 if (myid.eq. 0) then begin_col=3 114
132 C endif if (myid.eq. 3) then end_col=mysize endif do n=1,steps C do i=2,totalsize-1 b(i,begin_col)=(a(i,begin_col+1)+a(i,begin_col-1)+ * a(i+1,begin_col)+a(i-1,begin_col))*0.25 b(i,end_col)=(a(i,end_col+1)+a(i,end_col-1)+ * a(i+1,end_col)+a(i-1,end_col))*0.25 end do C call MPI_ISEND(b(1,end_col),totalsize,MPI_REAL,right,tag1, * MPI_COMM_WORLD,req(1),ierr) call MPI_ISEND(b(1,begin_col),totalsize,MPI_REAL,left,tag2, * MPI_COMM_WORLD,req(2),ierr) C C C call MPI_IRECV(a(1,1),totalsize,MPI_REAL,left,tag1, * MPI_COMM_WORLD,req(3),ierr) call MPI_IRECV(a(1,mysize+2),totalsize,MPI_REAL,right,tag2, * MPI_COMM_WORLD,req(4),ierr) do j=begin_col+1,end_col-1 do i=2,totalsize-1 b(i,j)=(a(i,j+1)+a(i,j-1)+a(i+1,j)+a(i-1,j))*0.25 end do end do do j=begin_col,end_col do i=2,totalsize-1 a(i,j)=b(i,j) end do end do do i=1,4 CALL MPI_WAIT(req(i),status(1,i),ierr) end do end do do i=2,totalsize-1 print *, myid,(a(i,j),j=begin_col,end_col) 115
133
      end do
      call MPI_Finalize(rc)
      end
36 Jacobi iteration with nonblocking communication
A communication that is repeated many times with the same arguments can use MPI's persistent (repeated) requests. The pattern has four steps: 1. create the persistent request with MPI_SEND_INIT (or one of its variants, or MPI_RECV_INIT); 2. activate it with MPI_START; 3. complete it with MPI_WAIT; 4. when it is no longer needed, release it with MPI_REQUEST_FREE. Steps 2 and 3 may be repeated any number of times between creation and release.
MPI_SEND_INIT(buf,count,datatype,dest,tag,comm,request)
IN buf (initial address of send buffer)
IN count (number of entries to send)
IN datatype (datatype of each entry)
IN dest (rank of destination)
IN tag (message tag)
IN comm (communicator)
OUT request (persistent communication request)
int MPI_Send_init(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
MPI_SEND_INIT(BUF,COUNT,DATATYPE,DEST,TAG,COMM,REQUEST,IERROR)
<type> BUF(*)
INTEGER COUNT,DATATYPE,DEST,TAG,COMM,REQUEST,IERROR
MPI 38 MPI_SEND_INIT
116
134 MPI_SEND_INIT MPI_BSEND_INIT(buf,count,datatype,dest,tag,comm,request) IN buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN tag ( ) IN comm ( ) OUT request ( ) int MPI_Bsend_init(void* buf,int count,mpi_datatype datatype,int dest, int tag, MPI_Comm comm,mpi_request *request) MPI_BSEND_INIT(BUF,COUNT,DATATYPE,DEST,TAG,COMM,REQUEST,IERROR) <type> BUF (*) INTEGER,COUNT,DATATYPE,DEST,TAG,COMM,REQUEST,IERROR MPI 39 MPI_BSEND_INIT MPI_BSEND_INIT MPI_SSEND_INIT(buf,count,datatype,dest,tag,comm,request) IN buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN tag ( ) IN comm ( ) OUT request ( ) int MPI_Ssend_init(void* buf,int count,mpi_datatype datatype,int dest, int tag, MPI_Comm comm,mpi_request *request) MPI_SSEND_INIT(BUF,COUNT,DATATYPE,DEST,TAG,COMM,REQUEST,IERROR) <type> BUF (*) INTEGER COUNT,DATATYPE,DEST,TAG,COMM,REQUEST,IERROR MPI 40 MPI_SSEND_INIT MPI_SSEND_INIT 117
135 MPI_RSEND_INIT(buf,count,datatype,dest,tag,comm,request) IN buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN tag ( ) IN comm ( ) OUT request ( ) int MPI_Rsend_init(void* buf,int count,mpi_datatype datatype,int dest, int tag, MPI_Comm comm,mpi_request *request) MPI_RSEND_INIT(BUF,COUNT,DATATYPE,DEST,TAG,COMM,REQUEST, IERROR) <type> BUF (*) INTEGER COUNT,DATATYPE,DEST,TAG,COMM,REQUEST,IERROR MPI_RSEND_INIT MPI 41 MPI_RSEND_INIT MPI_RECV_INIT(buf,count,datatype,source,tag,comm,request) OUT buf ( ) IN count ( ) IN datatype ( ) IN source MPI_ANY_SOURCE( ) IN tag MPI_ANY_TAG( ) IN comm ( ) OUT request ( ) int MPI_Recv_init(void* buf,int count,mpi_datatype datatype,int source, int tag, MPI_Comm comm,mpi_request *request) MPI_RECV_INIT(BUF,COUNT,DATATYPE,SOURCE,TAG,COMM,REQUEST, IERROR) <type> BUF (*) INTEGER COUNT,DATATYPE,SOURCE,TAG,COMM,REQUEST,IERROR MPI 42 MPI_RECV_INIT MPI_RECV_INIT buf OUT MPI_RECV_INIT ( ) MPI_START 118
136 MPI_START(request) INOUT request int MPI_Start(MPI_Request *request) MPI_START(REQUEST,IERROR) INTEGER REQUEST,IERROR ( ) MPI 43 MPI_START request MPI_START MPI_SEND_INIT MPI_START MPI_ISEND MPI_BSEND_INIT MPI_START MPI_IBSEND MPI_STARTALL(count,array_of_requests) IN count ( ) IN array_of_requests ( ) int MPI_Startall(int count, MPI_Request *array_of_requests) MPI_STARTALL(COUNT, ARRAY_OF_REQUESTS,IERROR) INTEGER COUNT, ARRAY_OF_REQUESTS(*),IERROR MPI 44 MPI_STARTALL MPI_STARTALL array_of_request MPI_START MPI_START MPI_STARTALL MPI_WAIT MPI_TEST MPI_START MPI_STARTALL MPI_REQUEST_FREE MPI_REQUEST_FREE MPI_START MPI_START Jacobi Jacobi program main implicit none include 'mpif.h' 119
137 integer totalsize,mysize,steps parameter (totalsize=16) parameter (mysize=totalsize/4,steps=10) integer n, myid, numprocs, i, j,rc real a(totalsize,mysize+2),b(totalsize,mysize+2) integer begin_col,end_col,ierr integer left,right,tag1,tag2 integer status(mpi_status_size,4) integer req(4) C C call MPI_INIT( ierr ) call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr ) call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr ) print *, "Process ", myid, " of ", numprocs, " is alive" do j=1,mysize+2 do i=1,totalsize a(i,j)=0.0 end do end do do i=1,totalsize a(i,1)=8.0 a(i,mysize+2)=8.0 end do if (myid.eq. 0) then do i=1,totalsize a(i,2)=8.0 end do end if if (myid.eq. 3) then do i=1,totalsize a(i,mysize+1)=8.0 end do end if do i=1,mysize+2 a(1,i)=8.0 a(totalsize,i)=8.0 end do tag1=3 tag2=4 120
138 C if (myid.gt. 0) then left=myid-1 else left=mpi_proc_null end if if (myid.lt. 3) then right=myid+1 else right=mpi_proc_null end if C begin_col=2 end_col=mysize+1 if (myid.eq. 0) then begin_col=3 endif if (myid.eq. 3) then end_col=mysize endif C call MPI_SEND_INIT(b(1,end_col),totalsize,MPI_REAL,right,tag1, * MPI_COMM_WORLD,req(1),ierr) call MPI_SEND_INIT(b(1,begin_col),totalsize,MPI_REAL,left,tag2, * MPI_COMM_WORLD,req(2),ierr) C C call MPI_RECV_INIT(a(1,1),totalsize,MPI_REAL,left,tag1, * MPI_COMM_WORLD,req(3),ierr) call MPI_RECV_INIT(a(1,mysize+2),totalsize,MPI_REAL,right,tag2, * MPI_COMM_WORLD,req(4),ierr) do n=1,steps do i=2,totalsize-1 b(i,begin_col)=(a(i,begin_col+1)+a(i,begin_col-1)+ * a(i+1,begin_col)+a(i-1,begin_col))*0.25 b(i,end_col)=(a(i,end_col+1)+a(i,end_col-1)+ * a(i+1,end_col)+a(i-1,end_col))*0.25 end do C 4 121
139 C C call MPI_STARTALL(4,req,ierr) do j=begin_col+1,end_col-1 do i=2,totalsize-1 b(i,j)=(a(i,j+1)+a(i,j-1)+a(i+1,j)+a(i-1,j))*0.25 end do end do do j=begin_col,end_col do i=2,totalsize-1 a(i,j)=b(i,j) end do end do call MPI_WAITALL(4,req,status,ierr) end do do i=2,totalsize-1 print *, myid,(a(i,j),j=begin_col,end_col) end do C do i=1,4 CALL MPI_REQUEST_FREE(req(i),ierr) end do call MPI_FINALIZE(rc) end 37 Jacobi MPI 122
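Stripped of the Jacobi details, the persistent pattern looks like this in C (a sketch following the same four-step scheme; buf, n, right, tag and steps stand for the obvious quantities, and fill_boundary is a hypothetical helper that produces each step's data):
#include "mpi.h"
void persistent_send(double *buf, int n, int right, int tag, int steps)
{
    MPI_Request req;
    MPI_Status  st;
    int step;
    MPI_Send_init(buf, n, MPI_DOUBLE, right, tag, MPI_COMM_WORLD, &req);
    for (step = 0; step < steps; step++) {
        fill_boundary(buf, n);  /* hypothetical: compute this step's boundary data */
        MPI_Start(&req);        /* reactivate the prebuilt request */
        MPI_Wait(&req, &st);    /* request becomes inactive and reusable */
    }
    MPI_Request_free(&req);     /* release it once the loop is done */
}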
140 13 MPI MPI MPI 13.1 MPI MPI ROOT 1 N 50 N ROOT
141 ROOT ROOT ROOT N N N-1 53 MPI 53
142 0 0 N MPI MPI I II III result= recvbuf Op(message) result 54 MPI 125
143 13.2 MPI_BCAST(buffer,count,datatype,root,comm) IN/OUT buffer ( ) IN count / ( ) IN datatype / ( ) IN root ( ) IN comm ( ) int MPI_Bcast(void* buffer,int count,mpi_datatype datatype,int root, MPI_Comm comm) MPI_BCAST(BUFFER,COUNT,DATATYPE,ROOT,COMM,IERROR) <type> BUFFER(*) INTEGER COUNT,DATATYPE,ROOT,COMM,IERROR MPI 45 MPI_BCAST MPI_BCAST root root comm datatype count datatype count datatype MPI_BCAST A ROOT A A A A A 55 ROOT #include <stdio.h> #include "mpi.h" int main( argc, argv ) 126
144 int argc; char **argv; { int rank, value; MPI_Init( &argc, &argv ); MPI_Comm_rank( MPI_COMM_WORLD, &rank ); do { if (rank == 0) /* 0 */ scanf( "%d", &value ); MPI_Bcast( &value, 1, MPI_INT, 0, MPI_COMM_WORLD );/* */ printf( "Process %d got %d\n", rank, value );/* */ } while (value >= 0); } MPI_Finalize( ); return 0; MPI_GATHER rank N N sendcount sendtype recvcount recvtype, sendbuf sendcount sendtype root comm root comm 127
145 MPI_GATHER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm) IN sendbuf ( ) IN sendcount ( ) IN sendtype ( ) OUT recvbuf ( ) IN recvcount (, ) IN recvtype (, ) IN root ( ) IN comm ( ) int MPI_Gather(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm) MPI_GATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE,ROOT, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR MPI 46 MPI_GATHER A B C D ROOT H A B C D... H ROOT 56 MPI_GATHERV MPI_GATHER recvcounts displs MPI_GATHERV ROOT MPI_GATHER MPI_GATHER sendbuf sendcount sendtype root comm root comm 128
146 MPI_GATHERV(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs,recvtype, root, comm) IN sendbuf ( ) IN sendcount ( ) IN sendtype ( ) OUT recvbuf (, ) IN recvcounts ( ), IN displs, recvbuf IN recvtype ( ) IN root ( ) IN comm ( ) int MPI_Gatherv(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int *recvcounts, int *displs, MPI_Datatype recvtype, int root, MPI_Comm comm) MPI_GATHERV(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNTS, DISPLS, RECVTYPE, ROOT, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER SENDCOUNT, SENDTYPE, RECVCOUNTS(*), DISPLS(*), RECVTYPE, ROOT, COMM, IERROR MPI 47 MPI_GATHERV 100 MPI_Comm comm; int gsize,sendarray[100]; int root,*rbuf;... MPI_Comm_size(comm,&gsize); rbuf=(int *)malloc(gsize*100*sizeof(int)); MPI_Gather(sendarray,100,MPI_INT,rbuf,100,MPI_INT,root,comm); 39 MPI_Gather 100, (100 ), MPI_GATHERV displs 100 MPI_Comm comm; int gsize, sendarray[100]; int root, *rbuf, stride; int *displs, i, *rcounts;... MPI_Comm_size(comm, &gsize); 129
147 rbuf = (int *)malloc(gsize*stride*sizeof(int)); displs = (int *)malloc(gsize*sizeof(int)); rcounts = (int *)malloc(gsize*sizeof(int)); for (i=0; i<gsize; ++i) { displs[i] = i*stride; rcounts[i] = 100; } MPI_Gatherv(sendarray, 100, MPI_INT, rbuf, rcounts, displs, MPI_INT, root, comm); 40 MPI_Gatherv 13.4 MPI_SCATTER ROOT MPI_SCATTER MPI_GATHER A B C D... H ROOT A B C D ROOT H 57 sendcount sendtype recvcount recvtype recvbuf recvcount recvtype root comm root comm MPI_GATHER MPI_GATHERV MPI_SCATTER MPI_SCATTERV MPI_SCATTER MPI_GATHER MPI_SCATTERV MPI_GATHERV MPI_SCATTERV MPI_SCATTER ROOT sendcounts displs, sendcount[i] sendtype i recvcount recvtype recvbuf 130
148 recvcount recvtype root comm root comm MPI_SCATTER(sendbuf,sendcount,sendtype,recvbuf,recvcount,recvtype, root,comm) IN sendbuf ( ) IN sendcount ( ) IN sendtype ( ) OUT recvbuf ( ) IN recvcount ( ) IN recvtype ( ) IN root ( ) IN comm ( ) int MPI_Scatter(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm) MPI_SCATTER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR MPI 48 MPI_SCATTER MPI_SCATTERV(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm) IN sendbuf ( ) IN sendcounts ( ) IN displs ( ) IN sendtype ( ) OUT recvbuf ( ) IN recvcount ( ) IN recvtype ( ) IN root ( ) IN comm ( ) int MPI_Scatterv(void* sendbuf, int *sendcounts, int *displs, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm) MPI_SCATTERV(SENDBUF, SENDCOUNTS, DISPLS, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER SENDCOUNTS(*), DISPLS(*), SENDTYPE, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR MPI 49 MPI_SCATTERV 131
149 100 MPI_Comm comm; int gsize,*sendbuf; int root,rbuf[100];... MPI_Comm_size(comm, &gsize); sendbuf = (int *)malloc(gsize*100*sizeof(int));... MPI_Scatter(sendbuf, 100, MPI_INT, rbuf, 100, MPI_INT, root, comm); 41 MPI_Scatter MPI_Comm comm; int gsize,*sendbuf; int root,rbuf[100],i,*displs,*scounts;... MPI_Comm_size(comm, &gsize); sendbuf = (int *)malloc(gsize*stride*sizeof(int));... displs = (int *)malloc(gsize*sizeof(int)); scounts = (int *)malloc(gsize*sizeof(int)); for (i=0; i<gsize; ++i) { displs[i] = i*stride; scounts[i] = 100; } MPI_Scatterv(sendbuf, scounts, displs, MPI_INT, rbuf, 100, MPI_INT, root, comm); 42 MPI_Scatterv 13.5 MPI_GATHER ROOT MPI_ALLGATHER ROOT MPI_GATHER MPI_ALLGATHER MPI_GATHER MPI_GATHER ROOT MPI_ALLGATHER MPI_ALLGATHER MPI_GATHER MPI_ALLGATHERV MPI_GATHERV MPI_ALLGATHERV j recvbuf j j sendcount sendtype 132
150 recvcounts[j] recvtype MPI_ALLGATHER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype,comm) IN sendbuf ( ) IN sendcount ( ) IN sendtype ( ) OUT recvbuf ( ) IN recvcount ( ) IN recvtype ( ) IN comm ( ) int MPI_Allgather(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm) MPI_ALLGATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, COMM, IERROR MPI 50 MPI_ALLGATHER N N
151 MPI_ALLGATHERV(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm) IN sendbuf ( ) IN sendcount ( ) IN sendtype ( ) OUT recvbuf ( ) IN recvcounts ( ) IN displs ( ) IN recvtype ( ) IN comm ( ) int MPI_Allgatherv(void* sendbuf, int sendcount,mpi_datatype sendtype, void* recvbuf, int *recvcounts, int *displs, MPI_Datatype recvtype, MPI_Comm comm) MPI_ALLGATHERV(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNTS, DISPLS, RECVTYPE, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER SENDCOUNT, SENDTYPE, RECVCOUNTS(*), DISPLS(*), RECVTYPE, COMM, IERROR MPI 51 MPI_ALLGATHERV MPI_Comm comm; int gsize,sendarray[100]; int *rbuf;... MPI_Comm_size(comm, &gsize); rbuf = (int *)malloc(gsize*100*sizeof(int)); MPI_Allgather(sendarray, 100, MPI_INT, rbuf, 100, MPI_INT, comm); MPI_Allgather MPI_Allgatherv MPI_Comm comm; int gsize, sendarray[100]; int root, *rbuf, stride; int *displs, i, *rcounts;... MPI_Comm_size(comm, &gsize); rbuf = (int *)malloc(gsize*stride*sizeof(int)); displs = (int *)malloc(gsize*sizeof(int)); 134
152 rcounts = (int *)malloc(gsize*sizeof(int)); for (i=0; i<gsize; ++i) { displs[i] = i*stride; rcounts[i] = 100; } MPI_Allgatherv(sendarray, 100, MPI_INT, rbuf, rcounts, displs, MPI_INT, root, comm); 44 MPI_Allgatherv 13.6 MPI_ALLTOALL MPI_ALLGATHER MPI_ALLTOALL MPI_ALLTOALL i j j recvbuf i sendcount sendtype recvcount recvtype MPI_ALLTOALL i j MPI_ALLTOALL(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm) IN sendbuf ( ) IN sendcount ( ) IN sendtype ( ) OUT recvbuf ( ) IN recvcount ( ) IN recvtype ( ) IN comm ( ) int MPI_Alltoall(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm) MPI_ALLTOALL(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, COMM, IERROR MPI 52 MPI_ALLTOALL 135
153 0 A 00 A 01 A 02 A 03 A 00 A 10 A 20 A 30 1 A 10 A 11 A 12 A 13 A 01 A 11 A 21 A 31 2 A 20 A 21 A 22 A 23 A 02 A 12 A 22 A 32 3 A 30 A 31 A 32 A 33 A 03 A 13 A 23 A MPI_ALLTOALL MPI_ALLTOALL #include "mpi.h" #include <stdlib.h> #include <stdio.h> #include <string.h> #include <errno.h> int main( argc, argv ) int argc; char *argv[]; { int rank, size; int chunk = 2; /* */ int i,j; int *sb; int *rb; int status, gstatus; MPI_Init(&argc,&argv); MPI_Comm_rank(MPI_COMM_WORLD,&rank); MPI_Comm_size(MPI_COMM_WORLD,&size); sb = (int *)malloc(size*chunk*sizeof(int));/* */ if (!sb ) { perror( "can't allocate send buffer" ); 136
154 } MPI_Abort(MPI_COMM_WORLD,EXIT_FAILURE); } rb = (int *)malloc(size*chunk*sizeof(int));/* */ if (!rb ) { perror( "can't allocate recv buffer"); free(sb); MPI_Abort(MPI_COMM_WORLD,EXIT_FAILURE); } for ( i=0 ; i < size ; i++ ) { for ( j=0 ; j < chunk ; j++ ) { sb[i*chunk+j] = rank + i*chunk+j;/* */ printf("myid=%d,send to id=%d, data[%d]=%d\n",rank,i,j,sb[i*chunk+j]); rb[i*chunk+j] = 0;/* 0*/ } } /* MPI_Alltoall */ MPI_Alltoall(sb,chunk,MPI_INT,rb,chunk,MPI_INT, MPI_COMM_WORLD); for ( i=0 ; i < size ; i++ ) { for ( j=0 ; j < chunk ; j++ ) { printf("myid=%d,recv from id=%d, data[%d]=%d\n",rank,i,j,rb[i*chunk+j]); /* */ } } free(sb); free(rb); MPI_Finalize(); 45 MPI_Alltoall MPI_ALLGATHERV MPI_ALLGATHER MPI_ALLTOALLV MPI_ALLTOALL sdispls rdispls comm MPI_ALLTOALL MPI_ALLTOALLV n 1) 2) 137
155 MPI_ALLTOALLV(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm) IN sendbuf ( ) IN sendcounts ( ) IN sdispls IN sendtype ( ) OUT recvbuf ( ) IN recvcounts ( ) IN rdispls IN recvtype ( ) IN comm ( ) int MPI_Alltoallv(void* sendbuf, int *sendcounts, int *sdispls, MPI_Datatype sendtype, void* recvbuf, int *recvcounts, int *rdispls, MPI_Datatype recvtype, MPI_Comm comm) MPI_ALLTOALLV(SENDBUF, SENDCOUNTS, SDISPLS, SENDTYPE, RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPE, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER SENDCOUNTS(*), SDISPLS(*), SENDTYPE, RECVCOUNTS(*), RDISPLS(*), RECVTYPE, COMM, IERROR MPI 53 MPI_ALLTOALLV 13.7 MPI_BARRIER(comm) IN comm ( ) int MPI_Barrier(MPI_Comm comm) MPI_BARRIER(COMM, IERROR) INTEGER COMM, IERROR MPI_BARRIER MPI 54 MPI_BARRIER #include "mpi.h" #include "test.h" #include <stdlib.h> 138
156 #include <stdio.h> int main( int argc, char **argv ) { int rank, size, i; int *table; int errors=0; MPI_Aint address; MPI_Datatype type, newtype; int lens; MPI_Init( &argc, &argv ); MPI_Comm_rank( MPI_COMM_WORLD, &rank ); MPI_Comm_size( MPI_COMM_WORLD, &size ); /* Make data table */ table = (int *) calloc (size, sizeof(int)); table[rank] = rank + 1; /* */ MPI_Barrier ( MPI_COMM_WORLD ); /* */ for ( i=0; i<size; i++ ) MPI_Bcast( &table[i], 1, MPI_INT, i, MPI_COMM_WORLD ); /* */ for ( i=0; i<size; i++ ) if (table[i]!= i+1) errors++; MPI_Barrier ( MPI_COMM_WORLD );/* */... /* */ MPI_Finalize(); } MPI_REDUCE op root sendbuf count datatype recvbuf count datatype count datatype op root comm op MPI 139
157 MPI_REDUCE(sendbuf,recvbuf,count,datatype,op,root,comm) IN sendbuf ( ) OUT recvbuf ( ) IN count ( ) IN datatype ( ) IN op ( ) IN root ( ) IN comm ( ) int MPI_Reduce(void* sendbuf, void* recvbuf, int count, PI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm) MPI_REDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, ROOT, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER COUNT, DATATYPE, OP, ROOT, COMM, IERROR MPI 55 MPI_REDUCE M Op Op Op Op Op Op 3 Op Op 2 Op Op N-1 ROOT 60 MPI 140
158
13.9 Reduction operations and their datatypes
MPI_ALLREDUCE, MPI_REDUCE, MPI_REDUCE_SCATTER and MPI_SCAN all take the same predefined reduction operations op, listed in table 9:
9 Predefined reduction operations
MPI_MAX     maximum             MPI_MIN     minimum
MPI_SUM     sum                 MPI_PROD    product
MPI_LAND    logical and         MPI_BAND    bitwise and
MPI_LOR     logical or          MPI_BOR     bitwise or
MPI_LXOR    logical xor         MPI_BXOR    bitwise xor
MPI_MAXLOC  maximum and its location
MPI_MINLOC  minimum and its location
MPI_MINLOC and MPI_MAXLOC are discussed separately in section 13.11. Each op may only be applied to datatypes of a suitable class. The basic C and Fortran datatypes fall into the following classes:
10 Datatype classes for reduction
C integer:        MPI_INT, MPI_LONG, MPI_SHORT, MPI_UNSIGNED_SHORT, MPI_UNSIGNED, MPI_UNSIGNED_LONG
Fortran integer:  MPI_INTEGER
Floating point:   MPI_FLOAT, MPI_DOUBLE, MPI_REAL, MPI_DOUBLE_PRECISION, MPI_LONG_DOUBLE
Logical:          MPI_LOGICAL
Complex:          MPI_COMPLEX
Byte:             MPI_BYTE
11 Allowed datatype classes for each operation
MPI_MAX, MPI_MIN:             C and Fortran integer, floating point
MPI_SUM, MPI_PROD:            C and Fortran integer, floating point, complex
MPI_LAND, MPI_LOR, MPI_LXOR:  C integer, logical
MPI_BAND, MPI_BOR, MPI_BXOR:  C and Fortran integer, byte
141
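As a quick illustration of matching op and datatype (a sketch, not from the original text; myid and numprocs are assumed to have been set with MPI_Comm_rank and MPI_Comm_size): MPI_SUM applied to MPI_DOUBLE, with each process contributing one value and process 0 receiving the total.
double mine = (double)(myid + 1);  /* this process's contribution */
double total;
MPI_Reduce(&mine, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
/* on process 0, total now holds 1 + 2 + ... + numprocs */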
159
13.10 Computing π
Since the derivative of arctan(x) is 1/(1+x²), we have
    ∫[0,1] 4/(1+x²) dx = 4·(arctan(1) − arctan(0)) = 4·(π/4 − 0) = π
so with f(x) = 4/(1+x²) the integral of f over [0,1] equals π. The area under the curve f(x) on [0,1] can therefore be used to approximate π numerically: divide [0,1] into N equal strips and apply the midpoint rule,
    π ≈ (1/N) · Σ(i=1..N) f((i−0.5)/N)
Each process evaluates f at every numprocs-th midpoint, and the partial sums are then combined with a reduction.
62 Approximating π by numerical integration
142
160
#include "mpi.h"
#include <stdio.h>
#include <math.h>
double f(double);
double f(double x)  /* the integrand f(x) */
{
    return (4.0 / (1.0 + x*x));
}
int main(int argc,char *argv[])
{
    int done = 0, n, myid, numprocs, i;
    double PI25DT = 3.141592653589793238462643; /* π to 25 digits, for the error estimate */
    double mypi, pi, h, sum, x;
    double startwtime = 0.0, endwtime;
    int namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc,&argv);
    MPI_Comm_size(MPI_COMM_WORLD,&numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD,&myid);
    MPI_Get_processor_name(processor_name,&namelen);
    fprintf(stdout,"Process %d of %d on %s\n", myid, numprocs, processor_name);
    n = 0;
    if (myid == 0)
    {
        printf("Please give N=");
        scanf("%d", &n);
        startwtime = MPI_Wtime();
    }
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD); /* broadcast n to all processes */
    h = 1.0 / (double) n;   /* strip width */
    sum = 0.0;              /* local partial sum */
    for (i = myid + 1; i <= n; i += numprocs) /* each process takes every numprocs-th strip
161
                                              */
    {
        x = h * ((double)i - 0.5);
        sum += f(x);
    }
    mypi = h * sum;  /* this process's share of the integral */
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    /* sum the partial results into π at process 0 */
    if (myid == 0)   /* process 0 prints the answer */
    {
        printf("pi is approximately %.16f, Error is %.16f\n",
               pi, fabs(pi - PI25DT));
        endwtime = MPI_Wtime();
        printf("wall clock time = %f\n", endwtime-startwtime);
        fflush( stdout );
    }
    MPI_Finalize();
}
47 Computing π by numerical integration
MPI_ALLREDUCE is a variant of the reduction in which the combined result is returned to every process of the group, not only to the ROOT.
144
162 MPI_ALLREDUCE(sendbuf, recvbuf, count, datatype, op, comm) IN sendbuf ( ) OUT recvbuf ( ) IN count ( ) IN datatype ( ) IN op ( ) IN comm ( ) int MPI_Allreduce(void* sendbuf, void* recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm) MPI_ALLREDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER COUNT, DATATYPE, OP, COMM, IERROR MPI 56 MPI_ALLREDUCE MPI_REDUCE_SCATTER(sendbuf, recvbuf, recvcounts, datatype, op, comm) IN sendbuf ( ) OUT recvbuf ( ) IN recvcounts IN datatype ( ) IN op ( ) IN comm ( ) int MPI_Reduce_scatter(void* sendbuf, void* recvbuf, int *recvcounts MPI_Datatype datatype, MPI_Op op, MPI_Comm comm) MPI_REDUCE_SCATTER(SENDBUF, RECVBUF, RECVCOUNTS, DATATYPE, OP, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER RECVCOUNTS(*), DATATYPE, OP, COMM, IERROR MPI 57 MPI_REDUCE_SCATTER MPI_REDUCE_SCATTER MPI ROOT MPI_REDUCE_SCATTER sendbuf count datatype 145
163
with count = Σ(i=0..N-1) recvcounts[i]. The send vectors are first combined element by element with op, and the result of length count is then split up: the first recvcounts[0] elements go to process 0, the next recvcounts[1] elements to process 1, and so on, up to the last recvcounts[N-1] elements, which go to process N-1.
[Figure 61: MPI_REDUCE_SCATTER - the reduced vector is scattered so that process i keeps only its own block of recvcounts[i] elements.]
MPI_SCAN performs a prefix reduction: process i receives, in each element, the values of processes 0,...,i combined with op, so the result at process i equals the result at process i-1 combined with process i's own contribution.
146
164 MPI_SCAN(sendbuf, recvbuf, count, datatype, op, comm)
IN sendbuf (starting address of send buffer)
OUT recvbuf (starting address of receive buffer)
IN count (number of elements)
IN datatype (datatype of each element)
IN op (reduction operation)
IN comm (communicator)
int MPI_Scan(void* sendbuf, void* recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
MPI_SCAN(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER COUNT, DATATYPE, OP, COMM, IERROR
MPI 58 MPI_SCAN
The figures that follow compare the reduction variants for three processes whose send buffers hold the vectors A=(A0,A1,A2), B=(B0,B1,B2) and C=(C0,C1,C2).
[Figure 64: MPI_REDUCE - only the ROOT process ends up with the combined vector (A0+B0+C0, A1+B1+C1, A2+B2+C2).]
147
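A minimal sketch of the prefix behaviour (not from the original text): with MPI_SUM, process i receives the sum of the values held by processes 0 through i.
int myval, prefix, rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
myval = rank + 1;                 /* each process contributes rank+1 */
MPI_Scan(&myval, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
/* with 4 processes, prefix is 1, 3, 6, 10 on ranks 0, 1, 2, 3 */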
165
[Figure 65: MPI_ALLREDUCE - every process, not just the ROOT, ends up with the full combined vector (A0+B0+C0, A1+B1+C1, A2+B2+C2).]
[Figure 66: MPI_REDUCE_SCATTER with each recvcounts[i] equal to 1/N of the result - process i keeps only its own element Ai+Bi+Ci of the combined vector.]
148
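For example (a sketch, not from the original text; the local vectors x, y and their length n are assumed), a global dot product is a natural MPI_ALLREDUCE: every process needs the result, so a single allreduce replaces a reduce followed by a broadcast.
double local = 0.0, global;
int i;
for (i = 0; i < n; i++)
    local += x[i] * y[i];              /* local piece of the dot product */
MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
/* every process now holds the complete dot product in 'global' */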
166
[Figure 67: MPI_SCAN - process 0 keeps its own vector (A0,A1,A2), process 1 receives (A0+B0, A1+B1, A2+B2), and process 2 receives the full sums (A0+B0+C0, A1+B1+C1, A2+B2+C2).]
Collective calls must be issued in the same order by every process of a communicator. The first example below is erroneous: the two processes start the two broadcasts in opposite orders, which may deadlock.
switch(rank)
{
    case 0:
        MPI_Bcast(buf1, count, type, 0, comm);
        MPI_Bcast(buf2, count, type, 1, comm);
        break;
    case 1:
        MPI_Bcast(buf2, count, type, 1, comm);  /* opposite order: erroneous */
        MPI_Bcast(buf1, count, type, 0, comm);
        break;
}
...
switch(rank)
{
    case 0:
        MPI_Bcast(buf1, count, type, 0, comm0);
        MPI_Bcast(buf2, count, type, 2, comm2);
        break;
    case 1:
        MPI_Bcast(buf1, count, type, 1, comm1);
149
167
        MPI_Bcast(buf2, count, type, 0, comm0);
        break;
    case 2:
        MPI_Bcast(buf1, count, type, 2, comm2);
        MPI_Bcast(buf2, count, type, 1, comm1);
        break;
}
48 A cyclic dependence among broadcasts on overlapping communicators
Here processes 0 and 1 form comm0, 1 and 2 form comm1, and 2 and 0 form comm2. The broadcast on comm2 cannot complete before the one on comm0, which in turn waits on comm1, which waits on comm2, so the three calls may deadlock: collectives on overlapping communicators must also be ordered consistently.
The relative order of a collective call and a point-to-point call matters in the same way:
switch(rank)
{
    case 0:
        MPI_Bcast(buf1, count, type, 0, comm);
        MPI_Send(buf2, count, type, 1, tag, comm);
        break;
    case 1:
        MPI_Recv(buf2, count, type, 0, tag, comm, &status);
        MPI_Bcast(buf1, count, type, 0, comm);
        break;
}
49 Whether this program completes depends on whether the broadcast behaves synchronously: if it does, process 0 blocks in MPI_Bcast while process 1 blocks in MPI_Recv waiting for the send that follows it. The ordering is therefore unsafe.
switch(rank)
{
    case 0:
        MPI_Bcast(buf1, count, type, 0, comm);
        MPI_Send(buf2, count, type, 1, tag, comm);
        break;
150
168
    case 1:
        MPI_Recv(buf2, count, type, MPI_ANY_SOURCE, tag, comm, &status);
        /* may match the send from process 0 or from process 2 */
        MPI_Bcast(buf1, count, type, 0, comm);
        MPI_Recv(buf2, count, type, MPI_ANY_SOURCE, tag, comm, &status);
        break;
    case 2:
        MPI_Send(buf2, count, type, 1, tag, comm);
        MPI_Bcast(buf1, count, type, 0, comm);
        break;
}
50 With MPI_ANY_SOURCE, which send matches which receive on process 1 depends on whether process 2's send arrives before or after the broadcast, so the outcome is nondeterministic.
13.11 MINLOC and MAXLOC
MPI_MINLOC and MPI_MAXLOC reduce pairs (value, index): they return the extreme value together with the rank (or index) at which it occurs. MPI_MAXLOC applied to the sequence of pairs (u0,0),(u1,1),...,(un-1,n-1) yields the pair (u,r) with
    u = max{ ui : i = 0,...,n-1 }  and  r = the smallest index i for which ui = u.
MPI_MINLOC is defined in the same way with the minimum:
    u = min{ ui : i = 0,...,n-1 }  and  r = the smallest index i for which ui = u.
Both operate on datatypes that pair a value with an integer index; MPI predefines the following pair types for use with MPI_MAXLOC and MPI_MINLOC:
151
169 12 MPI Fortran MPI_2REAL MPI_2DOUBLE_PRECISION MPI_2INTEGER 13 MPI C MPI_FLOAT_INT MPI_DOUBLE_INT MPI_LONG_INT MPI_2INT MPI_SHORT_INT MPI_LONG_DOUBLE_INT MPI_2REAL MPI_TYPE_CONTIGUOUS(2, MPI_REAL, MPI_2REAL) MPI_2INTEGER MPI_2DOUBLE_PRECISION MPI_2INT MPI_2REAL MPI_FLOAT_INT : type[0] = MPI_FLOAT type[1] = MPI_INT disp[0] = 0 disp[1] = sizeof(float) block[0] = 1 block[1] = 1 MPI_TYPE_STRUCT(2, block, disp, type, MPI_FLOAT_INT) MPI_LONG_INT MPI_DOUBLE_INT MPI_FLOAT_INT. 3 ( C ), 3. /* 3 ain[3] */ double ain[3],aout[3]; int ind[3]; struct { double val; int rank; } in[3], out[3];/* */ int i, myrank, root; MPI_Comm_rank(MPI_COMM_WORLD, &myrank); for (i=0; i<3; ++i) { 152
170 in[i].val = ain[i]; in[i].rank = myrank; }/* */ MPI_Reduce(in, out, 3, MPI_DOUBLE_INT, MPI_MAXLOC, root, comm); /* */ if (myrank == root) { /* */ for (i=0; i<3; ++i) { aout[i] = out[i].val; ind[i] = out[i].rank; } } 51 MPI_MAXLOC 14 MPI_MAXLOC 0 (30.5,0) (41.7,0) (35.9,0) 1 (12.1,1) (11.3,1) (13.5,1) 2 (100.7,2) (23.2,2) (98.4,2) MPI_MAXLOC(100.7,2) (41.7,0) (98.4,2) MPI_OP_CREATE(function, commute, op) IN function ( ) IN commute true, false OUT op ( ) int MPI_Op_create(MPI_User_function *function,int commute,mpi_op *op) MPI_OP_CREATE(FUNCTION, COMMUTE, OP, IERROR) EXTERNAL FUNCTION LOGICAL COMMUTE INTEGER OP, IERROR MPI 59 MPI_OP_CREATE MPI MPI_OP_CREATE function op MPI commute=true 153
171 commute=false function : invec, inoutvec,len datatype C : typedef void MPI_User_function(void *invec, void *inoutvec, int *len, MPI_Datatype *datatype); Fortran : FUNCTION USER_FUNCTION(INVEC(*), INOUTVEC(*), LEN, TYPE) <type> INVEC(LEN), INOUTVEC(LEN) INTEGER LEN, TYPE datatype MPI_REDUCE : invec inoutvec,len,datatype u[0],...,u[len-1] invec len datatype v[0],...,v[len-1] inoutvec len datatype w[0],...,w[len-1] inoutvec len datatype w[i]= u[i] v[i],i 0 len-1, invec inoutvec len, inoutvec. len, MPI MPI_ABORT MPI_OP_FREE(op) IN op ( ) int MPI_Op_free(MPI_Op *op) MPI_OP_FREE(OP, IERROR) INTEGER OP, IERROR MPI 60 MPI_OP_FREE MPI_OP_FREE op MPI_OP_NULL typedef struct { double real,imag; } Complex; /* */ void myprod(complex *in, Complex *inout, int *len, MPI_Datatype *dptr) { int i; 154
172 Complex c; for (i=0; i < *len; ++i) { c.real = inout->real*in->real - inout->imag*in->imag; c.imag = inout->real*in->imag + inout->imag*in->real; *inout = c; in++; inout++; } } /* */ /* 100 */ Complex a[100], answer[100]; MPI_Op myop; MPI_Datatype ctype; /* MPI */ MPI_Type_contiguous(2, MPI_DOUBLE, &ctype); MPI_Type_commit(&ctype); /* */ MPI_Op_create(myProd, True, &myop); MPI_Reduce(a, answer, 100, ctype, myop, root, comm); /* ( 100 ) */
173 14 MPI MPI < > ={< 0 0>,< 1 1>,...,< n-1 n-1>} 0 1 i n i n-1 68 ={ 0... n-1} 156
174
A general datatype is described by its type map, a sequence of basic types and displacements:
    Typemap = {(type0,disp0), ..., (typen-1,dispn-1)}
Its lower bound, upper bound and extent are defined by
    lb(Typemap)     = min(dispj),                   0 <= j <= n-1
    ub(Typemap)     = max(dispj + sizeof(typej)),   0 <= j <= n-1
    extent(Typemap) = ub(Typemap) - lb(Typemap) + ε
where ε is the padding required to satisfy alignment. For example, the type {(double,0),(char,8)} (a double at displacement 0 and a char at displacement 8) spans 9 bytes, but on a machine where doubles must be aligned on 8-byte boundaries ε rounds the extent up to 16. The extent determines the stride used when several items of the type are laid out one after another.
MPI_TYPE_CONTIGUOUS(count,oldtype,newtype)
IN count (replication count)
IN oldtype (old datatype)
OUT newtype (new datatype)
int MPI_Type_contiguous(int count,MPI_Datatype oldtype, MPI_Datatype *newtype)
MPI_TYPE_CONTIGUOUS(COUNT,OLDTYPE,NEWTYPE,IERROR)
INTEGER COUNT,OLDTYPE,NEWTYPE,IERROR
MPI 61 MPI_TYPE_CONTIGUOUS
MPI_TYPE_CONTIGUOUS replicates oldtype count times into contiguous locations. With oldtype = {(double,0),(char,8)} and extent 16, count=3 yields the newtype
{(double,0),(char,8),(double,16),(char,24),(double,32),(char,40)}
157
175 69 MPI_TYPE_CONTIGUOUS oldtype {(type 0, disp 0 ), (type n-1,disp n-1 )}, extent = ex. count newtype {(type 0, disp 0 ),..., (type n-1,disp n-1 ), (type 0, disp 0 +ex),..., (type n-1, disp n-1 +ex),..., (type 0, disp 0 +ex(count-1)),..., (type n-1,disp n-1 +ex(count-1))} MPI_TYPE_VECTOR extent MPI_TYPE_VECTOR(count,blocklength,stride,oldtype,newtype) IN count ( ) IN blocklength ( ) IN stride ( ) IN oldtype ( ) OUT newtypr ( ) int MPI_Type_vector(int count,int blocklength,int stride, MPI_Datatype oldtype,mpi_datatype *newtype) MPI_TYPE_VECTOR(COUNT,BLOCKLENGTH,STRIDE,OLDTYPE, NEWTYPE,IERROR) INTEGER COUNT,BLOCKLENGTH,STRIDE,OLDTYPE,NEWTYPE,IERROR MPI 62 MPI_TYPE_VECTOR 158
176 oldtype {(double,0),(char,8)},extent=16. MPI_TYPE_VECTOR(2,3,4,oldtype,newtype) {(double,0),(char,8), (double,16),(char,24), (double,32),(char,40), (double,64),(char,72), (double,80),(char,88),(double,96),(char,104)}., stride 4 70 MPI_TYPE_VECTOR MPI_TYPE_VECTOR(3,1,-2,oldtype,newtype) : {(double,0),(char,8),(double,-32),(char,-24),(double,-64),(char,-56)}., oldtype {(type 0, disp 0 ), (type n-1,disp n-1 )}, extent = ex. bl blocklength. count*bl, {(type 0, disp 0 ),..., (type n-1,disp n-1 type n-1,disp n-1 +ex.(stride+bl-1)),..., (type 0, disp 0 +ex.(count-1)),..., (type n-1, disp n-1 +ex.(count-1)),..., (type 0, disp 0 +ex.(count-1).stride),..., (type n-1, disp n-1 +ex.(stride.(count-1)+bl-1)},..., (type n-1, disp n-1 +ex.(stride.(count-1)+bl-1)}. MPI_TYPE_CONTIGUOUS( count, oldtype, newtype ) MPI_TYPE_VECTOR( count, 1, 1, oldtype, newtype ), MPI_TYPE_VECTOR(1, count, n, 159
177 oldtype, newtype), n. MPI_TYPE_HVECTOR(count,blocklength,stride,oldtype,newtype) IN count ( ) IN blocklength ( ) IN stride ( ) IN oldtype ( ) OUT newtype ( ) int MPI_Type_hvector(int count,int blocklength,mpi_aint stride,mpi_datatype oldtype, MPI_Datatype *newtype) MPI_TYPE_HVECTOR(COUNT,BLOCKLENGTH,STRIDE,OLDTYPE,NEWTYPE,IERROR) INTEGER COUNT,BLOCKLENGTH,STRIDE,OLDTYPE,NEWTYPE,IERROR MPI 63 MPI_TYPE_HVECTOR MPI_TYPE_HVECTOR MPI_TYPE_VECTOR, stride, oldtype {(type 0, disp 0 ), (type n-1,disp n-1 )},extent = ex. bl blocklength. count.bl.n, {(type 0, disp 0 ),..., (type n-1,disp n-1 ), (type 0, disp 0 +ex),..., (type n-1, disp n-1 +ex),..., (type 0, disp 0 +ex.(bl-1)),..., (type n-1, disp n-1 +ex.(bl-1)), (type 0, disp 0 +stride),..., (type n-1, disp n-1 +stride),..., (type 0, disp 0 +stride+ex.(bl-1)),..., (type n-1,disp n-1 +stride+ex.(bl-1)),..., (type 0, disp 0 + stride.(count-1)),..., (type n-1,disp n-1 +stride.(count-1)),..., (type 0, disp 0 +stride.(count-1)+(bl-1).ex),..., (type n-1, dispn-1+stride.(count-1)+(bl-1).ex } MPI_TYPE_INDEXED ( ),. extent. 160
178 MPI_TYPE_INDEXED(count,array_of_blocklengths,array_of_displacemets,oldtype,newtype) IN count IN array_of_blocklengths ( ) IN array_of_displacements ( ) IN oldtype ( ) OUT newtypr ( ) int MPI_Type_indexed(int count,int *array_of_blocklengths, int *array_of_displacements, MPI_Datatype oldtype, MPI_Datatype *newtype) MPI_TYPE_INDEXED(COUNT,ARRAY_OF_BLOCKLENGTHS,ARRAY_OF_DISPLACE MENTS,OLDTYPE,NEWTYPE,IERROR) INTEGER COUNT,ARRAY_OF_BLOCKLENGTHS(*), ARRAY_OF_DISPLACEMENTS(*),OLDTYPE,NEWTYPE,IERROR MPI 64 MPI_TYPE_INDEXED oldtype {(double,0),(char,8)},extent=16. B=(3,1),D=(4,0), MPI_TYPE_INDEXED( 2, B, D, oldtype, newtype ) : {(double,64),(char,72), (double,80),(char,88),(double,96),(char,104), (double,0),(char,8)} MPI_TYPE_INDEXED 161
179 , 64 0., oldtype {(type, disp ), (type,disp )}, extent = ex. B array_of_blocklengths,d array_of_displacements. n.sum(b[i],i=0,...,count-1), {(type 0, disp 0 +D[0].ex),..., (type n-1,disp n-1 +D[0].ex),..., (type 0, disp 0 +(D[0]+B[0]-1).ex),..., (type n-1,disp n-1 +(D[0]+B[0]-1).ex),..., (type 0, disp 0 +D[count-1].ex),..., (type n-1,disp n-1 +D[count-1].ex),..., (type 0, disp 0 +(D[count-1]+B[count-1]-1).ex),..., (type n-1, disp n-1 +(D[count-1]+B[count-1]-1).ex.)}. MPI_TYPE_VECTOR(count,blocklength,stride,oldtype,newtype) MPI_TYPE_INDEXED(count,B,D,oldtype,newtype), D[j]=j.stride, j=0,...,count-1, B[j]=blocklength, j=0,...,count-1. MPI_TYPE_HINDEXED MPI_TYPE_INDEXED, array_of_displacements extent,. MPI_TYPE_HINDEXED(count,array_of_blocklengths,array_of_displacemets,oldtype,newtype) IN count ( ) IN array_of_blocklengths ( ) IN array_of_displacements ( ) IN oldtype ( ) OUT newtypr ( ) int MPI_Type_hindexed(int count,int *array_of_blocklengths, MPI_Aint* array_of_displacements, MPI_Datatype oldtype, MPI_Datatype *newtype) MPI_TYPE_HINDEXED(COUNT,ARRAY_OF_BLOCKLENGTHS, ARRAY_OF_DISPLACEMENTS,OLDTYPE,NEWTYPE,IERROR) INTEGER COUNT, ARRAY_OF_BLOCKLENGTHS(*), ARRAY_OF_DISPLACEMENTS(*), OLDTYPE, NEWTYPE, IERROR MPI 65 MPI_TYPE_HINDEXED oldtype {(type 0, disp 0 ), (type n-1,disp n-1 )},extent = ex. B array_of_blocklengths,d array_of_displacements. n.sum(b[i],i=0,...,count-1), {(type 0, disp 0 +D[0]),..., (type n-1,disp n-1 +D[0]),..., (type 0, disp 0 +(D[0]+B[0]-1).ex),..., (type n-1, disp n-1 +D[0]+ B[0]-1).ex),..., (type 0, disp 0 +D[count-1]),..., (typen-1, disp n-1 +D[count-1]),..., (type 0, disp 0 +D[count-1]+(B[count-1]-1).ex),..., 162
180 (type n-1, disp n-1 +D[count-1]+ B[count-1]-1).ex.)} MPI_TYPE_STRUCT MPI_TYPE_STRUCT(count,array_of_blocklengths,array_of_displacemets,array_of_types, newtype) IN count ( ) IN array_of_blocklengths ( ) IN array_of_displacements ( ) IN array_of_types ( ) OUT newtypr ( ) int MPI_Type_struct(int count,int *array_of_blocklengths, MPI_Aint *array_of_displacements, MPI_Datatype array_of_types, MPI_Datatype *newtype) MPI_TYPE_STRUCT(COUNT,ARRAY_OF_BLOCKLENGTHS,ARRAY_OF_DISPLACEMEN TS, ARRAY_OF_TYPES *,NEWTYPE,IERROR) INTEGER COUNT, ARRAY_OF_BLOCKLENGTHS(*), ARRAY_OF_DISPLACEMENTS(*),ARRAY_OF_TYPES *,NEWTYPE, IERROR MPI 66 MPI_TYPE_STRUCT type1 { double,0 char,8 extent=16. B=(2,1,3),D=(0,16,26),T=(MPI_FLOAT, type1, MPI_CHAR). MPI_TYPE_STRUCT(3,B,D,T,newtype) {(float,0),(float,4),(double,16),(char,24),(char,26),(char,27),(char,28)}. float type1 char MPI_TYPE_STRUCT 163
181 0 MPI_FLOAT 16 type1, 26 MPI_CHAR ( 4 ), T array_of_types, T[i], typemapi ={(type0, disp0), (typen-1,dispn-1 )}, extent = ex. B array_of_blocklengths,d array_of_displacements. n.sum(b[i],i=0,...,count-1), {(type 0, disp 0 +D[0].ex),..., (type n-1,disp n-1 +D[0].ex),..., (type 0, disp 0 +(D[0]+B[0]-1).ex),..., (type n-1, disp n-1 +(D[0]+B[0]-1).ex),..., (type 0, disp 0 +D[count-1].ex),..., (type n-1, disp n-1 +D[count-1].ex),..., (type 0, disp 0 +(D[count-1]+B[count-1]-1).ex),..., (type n-1, disp n-1 +(D[count-1]+B[count-1]-1).ex.)}. MPI_TYPE_HINDEXED(count,B,D,oldtype,newtype) MPI_TYPE_STRUCT( count, B, D, T, newtype), T oldtype MPI MPI_TYPE_COMMIT(datatype) INOUT datatype ( ) int MPI_Type_commit(MPI_Datatype *datatype) MPI_TYPE_COMMIT(DATATYPE,IERROR) INTEGER DATATYPE,IERROR MPI 67 MPI_TYPE_COMMIT MPI_TYPE_FREE(datatype) INOUT datatype ( ) int MPI_Type_free(MPI_Datatype *datatype) MPI_TYPE_FREE(DATATYPE,IERROR) INTEGER DATATYPE,IERROR MPI 68 MPI_TYPE_FREE MPI_TYPE_FREE 164
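In C the same construction reads as follows (a sketch; type1 is assumed to have been built and committed earlier as the {(double,0),(char,8)} type with extent 16):
int          B[3] = {2, 1, 3};
MPI_Aint     D[3] = {0, 16, 26};
MPI_Datatype T[3] = {MPI_FLOAT, type1, MPI_CHAR};
MPI_Datatype newtype;
MPI_Type_struct(3, B, D, T, &newtype); /* 2 floats at 0, one type1 at 16, 3 chars at 26 */
MPI_Type_commit(&newtype);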
182 and sets the handle to MPI_DATATYPE_NULL; communications already using the freed datatype complete normally, and derived datatypes built from it are not affected.
      INTEGER type1, type2
      CALL MPI_TYPE_CONTIGUOUS(5, MPI_REAL, type1, ierr)
C     create the new datatype type1
      CALL MPI_TYPE_COMMIT(type1, ierr)
C     commit type1 so that it can be used in communication
      type2 = type1
C     type2 may now be used in place of type1
      CALL MPI_TYPE_VECTOR(3, 5, 4, MPI_REAL, type1, ierr)
C     reuse the variable type1 for a new datatype
      CALL MPI_TYPE_COMMIT(type1, ierr)
C     commit the new type1
53 Committing datatypes
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"
#define NUMBER_OF_TESTS 10
int main( argc, argv )
int argc;
char **argv;
{
    MPI_Datatype vec1, vec_n;
    int blocklens[2];
    MPI_Aint indices[2];
    MPI_Datatype old_types[2];
    double *buf, *lbuf;
    register double *in_p, *out_p;
    int rank;
    int n, stride;
    double t1, t2, tmin;
    int i, j, k, nloop;
    MPI_Status status;
    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    n = 1000;
165
183 stride = 24; nloop = /n; buf = (double *) malloc( n * stride * sizeof(double) ); if (!buf) { fprintf( stderr, "Could not allocate send/recv buffer of size %d\n", n * stride ); MPI_Abort( MPI_COMM_WORLD, 1 ); } lbuf = (double *) malloc( n * sizeof(double) ); if (!lbuf) { fprintf( stderr, "Could not allocated send/recv lbuffer of size %d\n", n ); MPI_Abort( MPI_COMM_WORLD, 1 ); } if (rank == 0) printf( "Kind\tn\tstride\ttime (sec)\trate (MB/sec)\n" ); /* */ MPI_Type_vector( n, 1, stride, MPI_DOUBLE, &vec1 ); MPI_Type_commit( &vec1 ); tmin = 1000; for (k=0; k<number_of_tests; k++) { if (rank == 0) { /* */ MPI_Sendrecv( MPI_BOTTOM, 0, MPI_INT, 1, 14, MPI_BOTTOM, 0, MPI_INT, 1, 14, MPI_COMM_WORLD, &status ); t1 = MPI_Wtime(); for (j=0; j<nloop; j++) { MPI_Send( buf, 1, vec1, 1, k, MPI_COMM_WORLD ); MPI_Recv( buf, 1, vec1, 1, k, MPI_COMM_WORLD, &status ); } t2 = (MPI_Wtime() - t1) / nloop; if (t2 < tmin) tmin = t2; } else if (rank == 1) { /* */ MPI_Sendrecv( MPI_BOTTOM, 0, MPI_INT, 0, 14, MPI_BOTTOM, 0, MPI_INT, 0, 14, MPI_COMM_WORLD, &status ); for (j=0; j<nloop; j++) { MPI_Recv( buf, 1, vec1, 0, k, MPI_COMM_WORLD, &status ); MPI_Send( buf, 1, vec1, 0, k, MPI_COMM_WORLD ); } 166
184 } } /* */ tmin = tmin / 2.0; if (rank == 0) { printf( "Vector\t%d\t%d\t%f\t%f\n", n, stride, tmin, n * sizeof(double) * 1.0e-6 / tmin ); } MPI_Type_free( &vec1 ); /* */ blocklens[0] = 1; blocklens[1] = 1; indices[0] = 0; indices[1] = stride * sizeof(double); old_types[0] = MPI_DOUBLE; old_types[1] = MPI_UB; MPI_Type_struct( 2, blocklens, indices, old_types, &vec_n ); MPI_Type_commit( &vec_n ); tmin = 1000; for (k=0; k<number_of_tests; k++) { if (rank == 0) { /* */ MPI_Sendrecv( MPI_BOTTOM, 0, MPI_INT, 1, 14, MPI_BOTTOM, 0, MPI_INT, 1, 14, MPI_COMM_WORLD, &status ); t1 = MPI_Wtime(); for (j=0; j<nloop; j++) { MPI_Send( buf, n, vec_n, 1, k, MPI_COMM_WORLD ); MPI_Recv( buf, n, vec_n, 1, k, MPI_COMM_WORLD, &status ); } t2 = (MPI_Wtime() - t1) / nloop; if (t2 < tmin) tmin = t2; } else if (rank == 1) { /* */ MPI_Sendrecv( MPI_BOTTOM, 0, MPI_INT, 0, 14, MPI_BOTTOM, 0, MPI_INT, 0, 14, MPI_COMM_WORLD, &status ); for (j=0; j<nloop; j++) { MPI_Recv( buf, n, vec_n, 0, k, MPI_COMM_WORLD, &status ); MPI_Send( buf, n, vec_n, 0, k, MPI_COMM_WORLD ); } } } 167
185 /* */ tmin = tmin / 2.0; if (rank == 0) { printf( "Struct\t%d\t%d\t%f\t%f\n", n, stride, tmin, n * sizeof(double) * 1.0e-6 / tmin ); } MPI_Type_free( &vec_n ); /* Use user-packing with known stride */ tmin = 1000; for (k=0; k<number_of_tests; k++) { if (rank == 0) { /* Make sure both processes are ready */ MPI_Sendrecv( MPI_BOTTOM, 0, MPI_INT, 1, 14, MPI_BOTTOM, 0, MPI_INT, 1, 14, MPI_COMM_WORLD, &status ); t1 = MPI_Wtime(); for (j=0; j<nloop; j++) { /* If the compiler isn't good at unrolling and changing multiplication to indexing, this won't be as good as it could be */ for (i=0; i<n; i++) lbuf[i] = buf[i*stride]; MPI_Send( lbuf, n, MPI_DOUBLE, 1, k, MPI_COMM_WORLD ); MPI_Recv( lbuf, n, MPI_DOUBLE, 1, k, MPI_COMM_WORLD, &status ); for (i=0; i<n; i++) buf[i*stride] = lbuf[i]; } t2 = (MPI_Wtime() - t1) / nloop; if (t2 < tmin) tmin = t2; } else if (rank == 1) { /* Make sure both processes are ready */ MPI_Sendrecv( MPI_BOTTOM, 0, MPI_INT, 0, 14, MPI_BOTTOM, 0, MPI_INT, 0, 14, MPI_COMM_WORLD, &status ); for (j=0; j<nloop; j++) { MPI_Recv( lbuf, n, MPI_DOUBLE, 0, k, MPI_COMM_WORLD, &status ); for (i=0; i<n; i++) buf[i*stride] = lbuf[i]; for (i=0; i<n; i++) lbuf[i] = buf[i*stride]; MPI_Send( lbuf, n, MPI_DOUBLE, 0, k, MPI_COMM_WORLD ); } } 168
186 } /* Convert to half the round-trip time */ tmin = tmin / 2.0; if (rank == 0) { printf( "User\t%d\t%d\t%f\t%f\n", n, stride, tmin, n * sizeof(double) * 1.0e-6 / tmin ); } /* Use user-packing with known stride, using addition in the user copy code */ tmin = 1000; for (k=0; k<number_of_tests; k++) { if (rank == 0) { /* Make sure both processes are ready */ MPI_Sendrecv( MPI_BOTTOM, 0, MPI_INT, 1, 14, MPI_BOTTOM, 0, MPI_INT, 1, 14, MPI_COMM_WORLD, &status ); t1 = MPI_Wtime(); for (j=0; j<nloop; j++) { /* If the compiler isn't good at unrolling and changing multiplication to indexing, this won't be as good as it could be */ in_p = buf; out_p = lbuf; for (i=0; i<n; i++) { out_p[i] = *in_p; in_p += stride; } MPI_Send( lbuf, n, MPI_DOUBLE, 1, k, MPI_COMM_WORLD ); MPI_Recv( lbuf, n, MPI_DOUBLE, 1, k, MPI_COMM_WORLD, &status ); out_p = buf; in_p = lbuf; for (i=0; i<n; i++) { *out_p = in_p[i]; out_p += stride; } } t2 = (MPI_Wtime() - t1) / nloop; if (t2 < tmin) tmin = t2; } else if (rank == 1) { /* Make sure both processes are ready */ MPI_Sendrecv( MPI_BOTTOM, 0, MPI_INT, 0, 14, MPI_BOTTOM, 0, MPI_INT, 0, 14, MPI_COMM_WORLD, &status ); for (j=0; j<nloop; j++) { MPI_Recv( lbuf, n, MPI_DOUBLE, 0, k, MPI_COMM_WORLD, &status ); in_p = lbuf; out_p = buf; 169
187 for (i=0; i<n; i++) { *out_p = in_p[i]; out_p += stride; } out_p = lbuf; in_p = buf; for (i=0; i<n; i++) { out_p[i] = *in_p; in_p += stride; } MPI_Send( lbuf, n, MPI_DOUBLE, 0, k, MPI_COMM_WORLD ); } } } /* Convert to half the round-trip time */ tmin = tmin / 2.0; if (rank == 0) { printf( "User(add)\t%d\t%d\t%f\t%f\n", n, stride, tmin, n * sizeof(double) * 1.0e-6 / tmin ); } } MPI_Finalize( ); /************ *****************************************/ #include "mpi.h" #include <stdio.h> int main(argc, argv) int argc; char **argv; { int rank, size, i, buf[1]; MPI_Status status; MPI_Init( &argc, &argv ); MPI_Comm_rank( MPI_COMM_WORLD, &rank ); MPI_Comm_size( MPI_COMM_WORLD, &size ); if (rank == 0) { for (i=0; i<100*(size-1); i++) { MPI_Recv( buf, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status ); printf( "Msg from %d with tag %d\n", status.mpi_source, status.mpi_tag ); } } 170
188
    }
    else {
        for (i=0; i<100; i++)
            MPI_Send( buf, 1, MPI_INT, 0, i, MPI_COMM_WORLD );
    }
    MPI_Finalize();
    return 0;
}
14.3 Addresses
MPI_ADDRESS returns the address of a location in memory, expressed relative to MPI_BOTTOM.
MPI_ADDRESS(location,address)
IN location (location in caller memory)
OUT address (address of location, relative to MPI_BOTTOM)
int MPI_Address(void* location, MPI_Aint *address)
MPI_ADDRESS(LOCATION,ADDRESS,IERROR)
<type> LOCATION(*)
INTEGER ADDRESS,IERROR
MPI 69 MPI_ADDRESS
      REAL A(100,100)
      INTEGER I1, I2, DIFF
      CALL MPI_ADDRESS(A(1,1), I1, IERROR)
      CALL MPI_ADDRESS(A(10,10), I2, IERROR)
      DIFF = I2 - I1
54 Using MPI_ADDRESS
Here DIFF is ((10-1)*100+(10-1))*sizeof(real), since Fortran stores A column by column; the values of I1 and I2 themselves are implementation dependent.
The following program combines MPI_Type_struct with MPI_Address to broadcast a C structure:
#include <stdio.h>
#include "mpi.h"
int main( argc, argv )
int argc;
char **argv;
{
    int rank;
171
189
    struct { int a; double b; } value;  /* the structure to broadcast */
    MPI_Datatype mystruct;
    int blocklens[2];
    MPI_Aint indices[2];
    MPI_Datatype old_types[2];
    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    /* One value of each type */
    blocklens[0] = 1;   /* one int */
    blocklens[1] = 1;   /* one double */
    /* The base types */
    old_types[0] = MPI_INT;
    old_types[1] = MPI_DOUBLE;
    /* The addresses of the two fields */
    MPI_Address( &value.a, &indices[0] );
    MPI_Address( &value.b, &indices[1] );
    /* Make the displacements relative to the first field */
    indices[1] = indices[1] - indices[0];
    indices[0] = 0;
    MPI_Type_struct( 2, blocklens, indices, old_types, &mystruct ); /* build the MPI datatype */
    MPI_Type_commit( &mystruct );  /* commit it before use */
    do {
        if (rank == 0)
            scanf( "%d %lf", &value.a, &value.b );  /* process 0 reads the values */
        MPI_Bcast( &value, 1, mystruct, 0, MPI_COMM_WORLD );  /* broadcast the structure */
        printf( "Process %d got %d and %lf\n", rank, value.a, value.b );
    } while (value.a >= 0);
    /* Clean up the type */
    MPI_Type_free( &mystruct );  /* free the datatype */
    MPI_Finalize( );
}
55 Broadcasting a structure built with MPI_Address
172
190 MPI_TYPE_EXTENT(datatype,extent)
IN datatype (datatype)
OUT extent (extent of the datatype)
int MPI_Type_extent(MPI_Datatype datatype, MPI_Aint *extent)
MPI_TYPE_EXTENT(DATATYPE,EXTENT,IERROR)
INTEGER DATATYPE,EXTENT,IERROR
MPI 70 MPI_TYPE_EXTENT
MPI_TYPE_EXTENT returns the extent of a datatype, including any padding implied by alignment.
MPI_TYPE_SIZE(datatype,size)
IN datatype (datatype)
OUT size (size of the datatype in bytes)
int MPI_Type_size(MPI_Datatype datatype, int *size)
MPI_TYPE_SIZE(DATATYPE,SIZE,IERROR)
INTEGER DATATYPE,SIZE,IERROR
MPI 71 MPI_TYPE_SIZE
MPI_TYPE_SIZE returns the number of bytes actually occupied by the data, gaps excluded; in general it is no larger than the value returned by MPI_TYPE_EXTENT.
Suppose MPI_RECV( buf, count, datatype, source, tag, comm, status ) is executed, where datatype has the type map {(type0, disp0),..., (typen-1, dispn-1)}. The received message need not fill a whole number of copies of datatype, so status supports two queries: MPI_GET_ELEMENTS returns the number of basic elements received, while MPI_GET_COUNT returns the number of complete datatype units.
MPI_GET_ELEMENTS(status, datatype, count)
IN status (return status of the receive)
IN datatype (datatype used by the receive)
OUT count (number of basic elements received)
int MPI_Get_elements(MPI_Status *status, MPI_Datatype datatype, int *count)
MPI_GET_ELEMENTS(STATUS,DATATYPE,COUNT,IERROR)
INTEGER STATUS(MPI_STATUS_SIZE),DATATYPE,COUNT,IERROR
MPI 72 MPI_GET_ELEMENTS
173
191
MPI_GET_COUNT(status, datatype, count)
IN status (return status of the receive)
IN datatype (datatype used by the receive)
OUT count (number of complete datatype units received)
int MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count)
MPI_GET_COUNT(STATUS, DATATYPE, COUNT, IERROR)
INTEGER STATUS(MPI_STATUS_SIZE), DATATYPE, COUNT, IERROR
MPI 73 MPI_GET_COUNT
If the received data do not form a whole number of datatype units, MPI_GET_COUNT returns MPI_UNDEFINED, while MPI_GET_ELEMENTS still reports the exact number of basic elements:
      CALL MPI_TYPE_CONTIGUOUS(2, MPI_REAL, Type2, ierr)
C     Type2 consists of two consecutive reals
      CALL MPI_TYPE_COMMIT(Type2, ierr)
      CALL MPI_COMM_RANK(comm, rank, ierr)
      IF (rank .EQ. 0) THEN
         CALL MPI_SEND(a, 2, MPI_REAL, 1, 0, comm, ierr)
C        send 2 reals to process 1
         CALL MPI_SEND(a, 3, MPI_REAL, 1, 0, comm, ierr)
C        send 3 reals to process 1
      ELSE
         CALL MPI_RECV(a, 2, Type2, 0, 0, comm, stat, ierr)
C        receive from process 0 in units of Type2
         CALL MPI_GET_COUNT(stat, Type2, i, ierr)
C        one complete Type2 was received: i=1
         CALL MPI_GET_ELEMENTS(stat, Type2, i, ierr)
C        two REALs were received: i=2
         CALL MPI_RECV(a, 2, Type2, 0, 0, comm, stat, ierr)
C        receive the second message from process 0
         CALL MPI_GET_COUNT(stat, Type2, i, ierr)
C        3 REALs are not a whole number of Type2 units: i=MPI_UNDEFINED
         CALL MPI_GET_ELEMENTS(stat, Type2, i, ierr)
C        three REALs were received: i=3
      END IF
56 MPI_GET_COUNT and MPI_GET_ELEMENTS
174
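A small C sketch of the difference between the two size queries for the {(double,0),(char,8)} type from the start of this chapter (assuming 8-byte alignment for double; dtype is assumed to have been built and committed already):
MPI_Aint extent;
int size;
MPI_Type_extent(dtype, &extent);  /* 16: 9 bytes of data rounded up by alignment */
MPI_Type_size(dtype, &size);      /* 9: sizeof(double) + sizeof(char) */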
192
14.5 Explicit lower and upper bound markers
MPI provides two pseudo-datatypes, MPI_LB and MPI_UB, that occupy no space (extent(MPI_LB) = extent(MPI_UB) = 0) but mark an explicit lower or upper bound of a constructed type. For Typemap = {(type0,disp0),...,(typen-1,dispn-1)}:
    lb(Typemap) = min(dispj) over all j, if no entry has basic type MPI_LB; otherwise min{dispj such that typej = MPI_LB};
    ub(Typemap) = max(dispj + sizeof(typej)) + ε, if no entry has basic type MPI_UB; otherwise max{dispj such that typej = MPI_UB};
    extent(Typemap) = ub(Typemap) - lb(Typemap).
MPI_TYPE_LB(datatype,displacement)
IN datatype (datatype)
OUT displacement (displacement of the lower bound, in bytes)
int MPI_Type_lb(MPI_Datatype datatype, MPI_Aint *displacement)
MPI_TYPE_LB(DATATYPE,DISPLACEMENT,IERROR)
INTEGER DATATYPE,DISPLACEMENT,IERROR
MPI 74 MPI_TYPE_LB
MPI_TYPE_UB(datatype,displacement)
IN datatype (datatype)
OUT displacement (displacement of the upper bound, in bytes)
int MPI_Type_ub(MPI_Datatype datatype, MPI_Aint *displacement)
MPI_TYPE_UB(DATATYPE,DISPLACEMENT,IERROR)
INTEGER DATATYPE,DISPLACEMENT,IERROR
MPI 75 MPI_TYPE_UB
175
193
With D=(-3,0,6), T=(MPI_LB,MPI_INT,MPI_UB) and B=(1,1,1), the call MPI_TYPE_STRUCT(3,B,D,T,type1) creates a datatype with type map {(lb,-3),(int,0),(ub,6)}: its lower bound is -3, its upper bound 6, and its extent 9. MPI_TYPE_CONTIGUOUS(2,type1,type2) then replicates it at stride 9, giving {(lb,-3),(int,0),(int,9),(ub,15)}; the lb and ub markers of the replicas collapse to the outermost bounds.
      REAL a(100,100), b(100,100)
      INTEGER disp(100), blocklen(100), ltype, myrank, ierr
      INTEGER status(MPI_STATUS_SIZE)
C     copy the strictly lower triangular part of a into b
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
C     in column i the strictly lower triangle starts at a(i+1,i),
C     i.e. at element 100*(i-1)+i counting from a(1,1) (Fortran
C     stores arrays by column), and contains 100-i elements
      DO i=1, 100
         disp(i) = 100*(i-1) + i
         blocklen(i) = 100-i
      END DO
C     build and commit the indexed datatype
      CALL MPI_TYPE_INDEXED(100, blocklen, disp, MPI_REAL, ltype, ierr)
      CALL MPI_TYPE_COMMIT(ltype, ierr)
      CALL MPI_SENDRECV(a, 1, ltype, myrank, 0, b, 1, ltype, myrank, 0,
     *                  MPI_COMM_WORLD, status, ierr)
C     each process sends to itself with tag 0, reading from a, writing to b
57 Copying the lower triangular part of a matrix with MPI_TYPE_INDEXED
      REAL a(100,100), b(100,100)
      INTEGER row, xpose, sizeofreal, myrank, ierr
      INTEGER status(MPI_STATUS_SIZE)
C     transpose the matrix a into b
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
      CALL MPI_TYPE_EXTENT(MPI_REAL, sizeofreal, ierr)
C     one row of a (stride 100 in column-major storage) as a vector type
      CALL MPI_TYPE_VECTOR(100, 1, 100, MPI_REAL, row, ierr)
C     rows placed at consecutive real positions: the transposed matrix
      CALL MPI_TYPE_HVECTOR(100, 1, sizeofreal, row, xpose, ierr)
      CALL MPI_TYPE_COMMIT(xpose, ierr)
176
194
C     send a to itself as xpose, receive into b as 100*100 reals: b = transpose(a)
      CALL MPI_SENDRECV(a, 1, xpose, myrank, 0, b, 100*100, MPI_REAL,
     *                  myrank, 0, MPI_COMM_WORLD, status, ierr)
58 Transposing a matrix with MPI_TYPE_VECTOR and MPI_TYPE_HVECTOR
14.6 Pack and unpack
MPI can also send data that the user has packed into a contiguous buffer explicitly. MPI_PACK appends incount items of type datatype, starting at inbuf, to the buffer outbuf of size outcount bytes; the packed buffer is typically sent with MPI_SEND using type MPI_PACKED. position gives the first free byte of outbuf on entry and is advanced past the packed data on return; comm is the communicator over which the packed message will later be sent.
MPI_PACK(inbuf, incount, datatype, outbuf, outcount, position, comm)
IN inbuf (input buffer start)
IN incount (number of input items)
IN datatype (datatype of each input item)
OUT outbuf (output (packing) buffer start)
IN outcount (output buffer size, in bytes)
INOUT position (current position in the output buffer, in bytes)
IN comm (communicator for the packed message)
int MPI_Pack(void* inbuf, int incount, MPI_Datatype datatype, void *outbuf, int outcount, int *position, MPI_Comm comm)
MPI_PACK(INBUF,INCOUNT,DATATYPE,OUTBUF,OUTCOUNT,POSITION,COMM,IERROR)
<type> INBUF(*), OUTBUF(*)
INTEGER INCOUNT,DATATYPE,OUTCOUNT,POSITION,COMM,IERROR
MPI 76 MPI_PACK
177
195 MPI_UNPACK(inbuf, insize, position, outbuf, outcount, datatype, comm ) IN inbuf ( ) IN insize ( ) INOUT position, ( ) OUT outbuf ( ) IN outcount, ( ) IN datatype ( ) IN comm ( ) int MPI_Unpack(void* inbuf, int insize, int *position, void *outbuf, int outcount, MPI_Datatype datatype, MPI_Comm comm) MPI_UNPACK(INBUF,INSIZE, POSITION,OUTBUF,OUTCOUNT, DATATYPE, COMM, IERROR) INBUF(*),OUTBUF(*) INTEGER INSIZE, POSITION,OUTCOUNT,, DATATYPE COMM,IERROR MPI 77 MPI_UNPACK MPI_UNPACK MPI_PACK inbuf insize outbuf,outcount,datatype MPI_RECV insize inbuf position position comm MPI_RECV MPI_UNPACK : MPI_RECV,count MPI_UNPACK,count position MPI_PACKED MPI_PACKED ( MPI_PACKED ) MPI_PACKED MPI_UNPACK MPI_UNPACK position=0 position inbuf insize comm 178
196 MPI_PACK_SIZE(incount, datatype, comm, size)
IN incount (number of items)
IN datatype (datatype of each item)
IN comm (communicator)
OUT size (upper bound on the packed size, in bytes, of incount items of type datatype)
int MPI_Pack_size(int incount, MPI_Datatype datatype, MPI_Comm comm, int *size)
MPI_PACK_SIZE(INCOUNT,DATATYPE,COMM,SIZE,IERROR)
INTEGER INCOUNT,DATATYPE,COMM,SIZE,IERROR
MPI 78 MPI_PACK_SIZE
MPI_PACK_SIZE returns in size an upper bound on the buffer space needed to pack incount items of type datatype for communicator comm.
In the first example two MPI_INT values are packed and sent as one MPI_PACKED message; note that a packed message may be received directly with a matching basic datatype:
int position, i, j, a[2];
char buff[1000];
MPI_Status status;
...
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
if (myrank == 0)
{   /* process 0 packs and sends */
    position = 0;   /* start packing at the beginning of buff */
    MPI_Pack(&i, 1, MPI_INT, buff, 1000, &position, MPI_COMM_WORLD); /* pack i */
    MPI_Pack(&j, 1, MPI_INT, buff, 1000, &position, MPI_COMM_WORLD); /* pack j */
    MPI_Send(buff, position, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
}
else if (myrank == 1)
{   /* process 1 receives the packed message directly as two ints */
    MPI_Recv(a, 2, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
}
59 Packing two integers
In the next example the message carries a count followed by that many reals:
int position, i;
float a[1000];
char buff[1000];
MPI_Status status;
179
197
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
if (myrank == 0)
{   /* process 0 */
    int len[2];
    MPI_Aint disp[2];
    MPI_Datatype type[2], newtype;
    i = 100;                        /* number of reals that follow */
    len[0] = 1;
    len[1] = i;
    MPI_Address(&i, disp);          /* address of i relative to MPI_BOTTOM */
    MPI_Address(a, disp+1);         /* address of a relative to MPI_BOTTOM */
    type[0] = MPI_INT;
    type[1] = MPI_FLOAT;
    MPI_Type_struct(2, len, disp, type, &newtype); /* one int followed by i floats */
    MPI_Type_commit(&newtype);
    position = 0;
    MPI_Pack(MPI_BOTTOM, 1, newtype, buff, 1000, &position, MPI_COMM_WORLD);
    /* pack i and the first i elements of a into buff */
    MPI_Send(buff, position, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
}
else if (myrank == 1)
{
    MPI_Recv(buff, 1000, MPI_PACKED, 0, 0, MPI_COMM_WORLD, &status);
    position = 0;
    MPI_Unpack(buff, 1000, &position, &i, 1, MPI_INT, MPI_COMM_WORLD);
    /* first unpack the count */
    MPI_Unpack(buff, 1000, &position, a, i, MPI_FLOAT, MPI_COMM_WORLD);
    /* then unpack that many floats */
}
60 Packing a count followed by data
The last example packs at the ROOT process and broadcasts the packed buffer:
#include <stdio.h>
#include "mpi.h"
int main( argc, argv )
int argc;
char **argv;
{
    int rank;
180
198 int packsize, position; int a; double b; char packbuf[100]; MPI_Init( &argc, &argv ); MPI_Comm_rank( MPI_COMM_WORLD, &rank ); do { if (rank == 0) {/* 0 */ scanf( "%d %lf", &a, &b ); packsize = 0;/* */ MPI_Pack( &a, 1, MPI_INT, packbuf, 100, &packsize, MPI_COMM_WORLD );/* a */ MPI_Pack( &b, 1, MPI_DOUBLE, packbuf, 100, &packsize, MPI_COMM_WORLD );/* b */ } MPI_Bcast( &packsize, 1, MPI_INT, 0, MPI_COMM_WORLD );/* */ MPI_Bcast( packbuf, packsize, MPI_PACKED, 0, MPI_COMM_WORLD );/* */ if (rank!= 0) { position = 0; MPI_Unpack( packbuf, packsize, &position, &a, 1, MPI_INT, MPI_COMM_WORLD );/* a */ MPI_Unpack( packbuf, packsize, &position, &b, 1, MPI_DOUBLE, MPI_COMM_WORLD );/* b */ } printf( "Process %d got %d and %lf\n", rank, a, b ); } while (a >= 0);/* a */ MPI_Finalize( ); return 0; } MPI MPI-2 I/O 181
199 15 MPI MPI N-1 MPI rank 0 MPI_GROUP_EMPTY MPI_GROUP_NULL MPI_GROUP_EMPTY MPI_GROUP_NULL MPI MPI_INIT MPI_COMM_WORLD MPI_COMM_SELF MPI_COMM_NULL MPI MPI_COMM_WORLD MPI_COMM_GROUP 15.2 MPI MPI_GROUP_SIZE(group,size) IN group OUT size int MPI_Group_size(MPI_Group group,int *size) MPI_GROUP_SIZE(GROUP,SIZE,IERROR) INTEGER GROUP,SIZE,IERROR MPI 79 MPI_GROUP_SIZE MPI_GROUP_SIZE 182
200 MPI_GROUP_RANK(group,rank) IN group OUT rank /MPI_UNDEFINED int MPI_Group_rank(MPI_Group group,int *rank) MPI_GROUP_RANK(GROUP,RANK,IERROR) INTEGER GROUP,RANK,IERROR MPI 80 MPI_GROUP_RANK MPI_GROUP_RANK rank MPI_COMM_RANK MPI_UNDEFINED MPI_GROUP_TRANSLATE_RANKS(group1,n,ranks1,group2,ranks2) IN group1 1 IN n rank1 rank2 IN ranks1 group1 IN group2 2 OUT ranks2 ranks1 group2 int MPI_Group_translate_ranks(MPI_Group group1,int n,int *ranks1, MPI_Group group2,int *ranks2) MPI_GROUP_TRANSLATE_RANKS(GROUP1,N,RANKS1,GROUP2,RANKS2, IERROR) INTEGER GROUP1,N,RANDS1(*),GROUP2,RANKS2,IERROR MPI 81 MPI_GROUP_TRANSLATE_RANKS MPI_GROUP_TRANSLATE_RANKS group1 n rank1 group2 rank2 group2 group1 MPI_UNDEFINED MPI_COMM_WORLD MPI_GROUP_COMPARE(group1,group2,result) IN group1 IN group2 OUT result int MPI_Group_compare(MPI_Group group1,mpi_group group2,int *result) MPI_GROUP_COMPARE(GROUP1,GROUP2,RESULT,IERROR) INTEGER GROUP1,GROUP2,RESULT,IERROR MPI 82 MPI_GROUP_COMPARE 183
201 MPI_GROUP_COMPARE group1 group2 group2 MPI_IDENT group1 group2 MPI_SIMILAR MPI_UNEQUAL MPI_COMM_GROUP(comm,group) IN comm OUT group comm int MPI_Comm_group(MPI_Comm comm, MPI_Group * group) MPI_COMM_GROUP(COMM,GROUP,IERROR) INTEGER COMM,GROUP,IERROR MPI_COMM_GROUP MPI 83 MPI_COMM_GROUP MPI_COMM_GROUP MPI_GROUP_UNION(group1,group2,newgroup) IN group1 IN group2 OUT newgroup int MPI_Group_union(MPI_Group group1,mpi_group group2,mpi_group *newgroup) MPI_GROUP_UNION(GROUP1,GROUP2, NEWGROUP, IERROR) INTEGER GROUP1,GROUP2,NEWGROUP,IERROR MPI 84 MPI_GROUP_UNION MPI_GROUP_UNION newgroup group1 group2 group1 MPI_GROUP_INTERSECTION(group1,group2,newgroup) IN group1 IN group2 OUT newgroup int MPI_Group_intersection(MPI_Group group1,mpi_group group2,mpi_group *newgroup) MPI_GROUP_INTERSECTION(GROUP1,GROUP2,NEWGROUP,IERROR) INTGETER GROUP1,GROUP2,NEWGROUP,IERROR MPI 85 MPI_GROUP_INTERSECTION 184
202 MPI_GROUP_INTERSECTION newgroup group1 group2 MPI_GROUP_DIFFERENCE(group1,group2,newgroup) IN group1 IN group2 OUT newgroup int MPI_Group_difference(MPI_Group group1,mpi_group group2,mpi_group *newgroup) MPI_GROUP_DIFFERENCE(GROUP1,GROUP2,NEWGROUP,IERROR) INTEGER GROUP1,GROUP2,NEWGROUP,IERROR MPI 86 MPI_GROUP_DIFFERENCE MPI_GROUP_DIFFERENCE newgroup group1 group2 MPI_GROUP_EMPTY MPI_GROUP_INCL(group,n,ranks,newgroup) IN group IN n ranks IN ranks OUT newgroup int MPI_Group_incl(MPI_Group group,int n,int *ranks,mpi_group *newgroup) MPI_GROUP_INCL(GROUP,N,RANKS,NEWGROUP,IERROR) INTEGER GROUP,NRANKS(*),NEWGROUP,IERROR MPI 87 MPI_GROUP_INCL MPI_GROUP_INCL n rank[0]... rank[n-1] newgroup n=0 newgroup MPI_GROUP_EMPTY MPI_GROUP_EXCL(group,n,ranks,newgroup) IN group ( ) IN n ranks ( ) IN ranks newgroup OUT newgroup int MPI_Group_excl(MPI_Group group, int n, int *ranks,mpi_group *newgroup) MPI_GROUP_EXCL(GROUP,N,RANKS,NEWGROUP,IERROR) INTEGER GROUP,N,RANKS(*),NEWGROUP,IERROR MPI 88 MPI_GROUP_EXCL 185
203 MPI_GROUP_EXCL newgroup group n ranks[0],...,ranks[n-1] ranks n group n=0, newgroup group MPI_GROUP_RANGE_INCL(group,n,ranges,newgroup) IN group ( ) IN n ranges ( ) IN ranges OUT newgroup int MPI_Group_range_incl(MPI_Group group, int n, int ranges[][3],mpi_group *newgroup) MPI_GROUP_RANGE_INCL(GROUP,N,RANGES,NEWGROUP,IERROR) INTEGER GROUP,N,RANGES(3,*),NEWGROUP,IERROR MPI 89 MPI_GROUP_RANGE_INCL MPI_GROUP_RANGE_INCL group n ranges newgroup ranges (first,last,stride ),...,(first,last,stride ), newgroup group first, first +stride,..., first +(last -first )/stride *stride,... first, first +stride,..., first +(last -first )/stride *stride group (1,9,2) (15,20,3),(21,30,2) 1, 3, 5, 7, 9,15,18,21,23,25,27,29 MPI_GROUP_RANGE_EXCL(group,n,ranges,newgroup) IN group ( ) IN n ranges ( ) IN ranges OUT newgroup int MPI_Group_range_excl(MPI_Group group,int n, int ranges[][3], MPI_Group *newgroup) MPI_GROUP_RANGE_EXCL(GROUP,N,RANGES,NEWGROUP,IERROR) INTEGER GROUP,N,RANGES(3,*),NEWGROUP,IERROR MPI 90 MPI_GROUP_RANGE_EXCL MPI_GROUP_RANGE_EXCL group n rangs newgroup MPI_GROUP_INCL 186
204 MPI_GROUP_FREE(group) IN/OUT group ( ) int MPI_Group_free(MPI_Group *group) MPI_GROUP_FREE(GROUP,IERROR) INTEGER GROUP,IERROR MPI 91 MPI_GROUP_FREE MPI_GROUP_FREE group MPI_GROUP_NULL 15.3 MPI MPI_COMM_SIZE(comm,size) IN comm ( ) OUT size comm ( ) int MPI_Comm_size(MPI_Comm comm, int *size) MPI_COMM_SIZE(COMM,SIZE,IERROR) INTEGER COMM,SIZE,IERROR MPI_COMM_RANK(comm,rank) IN comm ( ) OUT rank int MPI_Comm_rank(MPI_Comm comm, int *rank) MPI_COMM_RANK(COMM,RANK,IERROR) INTEGER COMM,RANK,IERROR rank 187
205 MPI_COMM_COMPARE(comm1,comm2,result) IN comm1 ( ) IN comm2 ( ) OUT result ( ) int MPI_Comm_compare(MPI_Comm comm1,mpi_comm comm2,int *result) MPI_COMM_COMPARE(COMM1,COMM2,RESULT,IERROR) INTEGER COMM1,COMM2,RESULT,IERROR MPI_COMM_COMPARE MPI_IDENT MPI_CONGRUENT MPI 92 MPI_COMM_COMPARE comm1 comm2 MPI_SIMILAR MPI_UNEQUAL MPI MPI_COMM_WORLD MPI MPI_COMM_DUP(comm,newcomm) IN comm ( ) OUT newcomm comm ( ) int MPI_Comm_dup(MPI_Comm comm,mpi_comm *newcomm) MPI_COMM_DUP(COMM,NEWCOMM,IERROR) INTEGER COMM, NEWCOMM,IERROR MPI 93 MPI_COMM_DUP MPI_COMM_DUP comm newcomm newcomm MPI_COMM_CREATE(comm,group,newcomm) IN comm ( ) IN group ( ) OUT newcomm ( ) int MPI_Comm_create(MPI_Comm comm,mpi_group group,mpi_comm *newcomm) MPI_COMM_CREATE(COMM,GROUP,NEWCOMM,IERROR) INTEGER COMM,GROUP,NEWCOMM,IERROR MPI 94 MPI_COMM_CREATE 188
206 MPI_COMM_CREATE group group MPI_COMM_NULL, group group comm MPI_COMM_SPLIT(comm,color,key,newcomm) IN comm ( ) IN color ( ) IN key ( ) OUT newcomm ( ) int MPI_Comm_split(MPI_Comm comm,int color, int key,mpi_comm *newcomm) MPI_COMM_SPLIT(COMM,COLOR,KEY,NEWCOMM,IERROR) INTEGER COMM,COLOR,KEY,NEWCOMM,IERROR MPI 95 MPI_COMM_SPLIT MPI_COMM_SPLIT comm color color color key key color MPI_UNDEFINED newcomm MPI_COMM_NULL color MPI_COMM_FREE(comm) IN/OUT comm int MPI_Comm_free(MPI_Comm *comm) MPI_COMM_FREE(COMM,IERROR) INTEGER COMM,IERROR MPI 96 MPI_COMM_FREE MPI_COMM_FREE MPI_COMM_NULL 0 (commslave) MPI_COMM_WORLD MPI_COMM_WORLD commslave main(int argc,char **argv) { int me,count,count2; 189
207
void *send_buf, *recv_buf, *send_buf2, *recv_buf2;
MPI_Group MPI_GROUP_WORLD, grprem;
MPI_Comm commslave;
static int ranks[] = {0};
...
MPI_Init(&argc, &argv);
MPI_Comm_group(MPI_COMM_WORLD, &MPI_GROUP_WORLD);    /* the group of MPI_COMM_WORLD */
MPI_Comm_rank(MPI_COMM_WORLD, &me);
MPI_Group_excl(MPI_GROUP_WORLD, 1, ranks, &grprem);  /* the group without process 0 */
MPI_Comm_create(MPI_COMM_WORLD, grprem, &commslave); /* communicator without process 0 */
if (me != 0) {   /* every process except 0 */
    ...
    MPI_Reduce(send_buf, recv_buf, count, MPI_INT, MPI_SUM, 1, commslave); /* reduction without process 0 */
    ...
}
/* reduction over all of MPI_COMM_WORLD */
MPI_Reduce(send_buf2, recv_buf2, count2, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
MPI_Comm_free(&commslave);
MPI_Group_free(&MPI_GROUP_WORLD);
MPI_Group_free(&grprem);
MPI_Finalize();
}
208
MPI_COMM_TEST_INTER(comm, flag)
IN comm (handle)  OUT flag (logical)
int MPI_Comm_test_inter(MPI_Comm comm, int *flag)
MPI_COMM_TEST_INTER(COMM, FLAG, IERROR)
INTEGER COMM, IERROR
LOGICAL FLAG
MPI_COMM_TEST_INTER sets flag to true if comm is an inter-communicator and to false otherwise.
MPI 97 MPI_COMM_TEST_INTER
MPI_COMM_REMOTE_SIZE(comm, size)
IN comm (handle)  OUT size  number of processes in the remote group of comm (integer)
int MPI_Comm_remote_size(MPI_Comm comm, int *size)
MPI_COMM_REMOTE_SIZE(COMM, SIZE, IERROR)
INTEGER COMM, SIZE, IERROR
MPI 98 MPI_COMM_REMOTE_SIZE
MPI_COMM_REMOTE_SIZE returns the size of the remote group of the inter-communicator comm.
MPI_COMM_REMOTE_GROUP(comm, group)
IN comm (handle)  OUT group  the remote group of comm (handle)
int MPI_Comm_remote_group(MPI_Comm comm, MPI_Group *group)
MPI_COMM_REMOTE_GROUP(COMM, GROUP, IERROR)
INTEGER COMM, GROUP, IERROR
MPI 99 MPI_COMM_REMOTE_GROUP
MPI_COMM_REMOTE_GROUP returns the remote group of the inter-communicator comm.
191
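Taken together, these calls let a routine discover what kind of communicator it was handed. The helper below is a sketch added for illustration only; report_comm is not an MPI routine, and the only assumption is that comm is a valid communicator:

#include <stdio.h>
#include "mpi.h"

/* Print whether comm is an inter- or intra-communicator, with group sizes. */
void report_comm(MPI_Comm comm)
{
    int flag, lsize, rsize;
    MPI_Comm_test_inter(comm, &flag);
    MPI_Comm_size(comm, &lsize);             /* size of the local group */
    if (flag) {
        MPI_Comm_remote_size(comm, &rsize);  /* legal only for inter-communicators */
        printf("inter-communicator: local group %d, remote group %d\n", lsize, rsize);
    } else {
        printf("intra-communicator with %d processes\n", lsize);
    }
}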
209 MPI_INTERCOMM_CREATE(local_comm,local_leader,peer_comm, remote_leader,tag,newintercomm ) IN local_comm ( ) IN local_leader ( ) IN peer_comm local_leader ( ) IN remote_leader peer_comm IN tag ( ) OUT newintercomm ( ) int MPI_Intercomm_create(MPI_Comm local_comm,int local_leader,mpi_comm peer_comm,int remote_leader,int tag,mpi_comm *newintercomm) MPI_INTERCOMM_CREATE(LOCAL_COMM,LOCAL_LEADER,PEER_COMM,REM OTE_LEADER,TAG,NEWINTERCOMM,IERROR) INTEGER LOCAL_COMM,LOCAL_LEADER,PEER_COMM,REMOTE_LEADER, TAG,NEWINTERCOMM,IERROR MPI 100 MPI_INTERCOMM_CREATE MPI_INTERCOMM_CREATE local_comm local_leader local_leader peer_comm remote_leader remote_leader tag tag MPI_WILD_TAG MPI_COMM_WORLD peer_comm MPI_INTERCOMM_MERGE(intercomm,high,newintracomm) IN intercomm ( ) IN high ( ) OUT newintracomm ( ) int MPI_Intercomm_merge(MPI_Comm intercomm,int high,mpi_comm *newintracomm) MPI_INTERCOMM_MERGE(INTERCOMM,HIGH,INTRACOMM,IERROR) INTEGER INTERCOMM,INTRACOMM,IERROR LOGICAL HIGH MPI 101 MPI_INTERCOMM_MERGE MPI_INTERCOMM_MERGE high high=true high=false true high 192
210
main(int argc, char **argv)
{
    MPI_Comm myComm;        /* intra-communicator of the local sub-group */
    MPI_Comm myFirstComm;   /* first inter-communicator */
    MPI_Comm mySecondComm;  /* second inter-communicator (group 1 only) */
    int membershipKey;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    membershipKey = rank % 3;
    /* split MPI_COMM_WORLD into three sub-groups according to membershipKey */
    MPI_Comm_split(MPI_COMM_WORLD, membershipKey, rank, &myComm);
    /* with 9 processes the sub-groups are:
       membershipKey=0: ranks 0 3 6
       membershipKey=1: ranks 1 4 7
       membershipKey=2: ranks 2 5 8 */
    if (membershipKey == 0) {
        /* group 0 builds an inter-communicator with group 1 */
        MPI_Intercomm_create(myComm, 0, MPI_COMM_WORLD, 1, 1, &myFirstComm);
    }
    else if (membershipKey == 1) {
        /* group 1 builds an inter-communicator with group 0 */
        MPI_Intercomm_create(myComm, 0, MPI_COMM_WORLD, 0, 1, &myFirstComm);
        /* and a second one with group 2 */
        MPI_Intercomm_create(myComm, 0, MPI_COMM_WORLD, 2, 12, &mySecondComm);
    }
    else if (membershipKey == 2) {
        /* group 2 builds an inter-communicator with group 1 */
        MPI_Intercomm_create(myComm, 0, MPI_COMM_WORLD, 1, 12, &myFirstComm);
    }
193
211
    ...
    /* free the inter-communicators; note the deliberate fall-through:
       group 1 frees both communicators, groups 0 and 2 free only one */
    switch (membershipKey) {
    case 1:
        MPI_Comm_free(&mySecondComm);  /* only group 1 created a second one */
    case 0:
    case 2:
        MPI_Comm_free(&myFirstComm);
        break;
    }
    MPI_Finalize();
}
MPI attribute caching allows an application to attach its own information, called attributes, to communicators; the attributes travel with the communicator, for example when it is copied by MPI_COMM_DUP. A new attribute key is allocated with MPI_KEYVAL_CREATE.
MPI_KEYVAL_CREATE(copy_fn, delete_fn, keyval, extra_state)
IN copy_fn    copy callback function for keyval
IN delete_fn  delete callback function for keyval
OUT keyval    key value for future access
IN extra_state  extra state for the callback functions
int MPI_Keyval_create(MPI_Copy_function *copy_fn, MPI_Delete_function *delete_fn, int *keyval, void *extra_state)
MPI_KEYVAL_CREATE(COPY_FN, DELETE_FN, KEYVAL, EXTRA_STATE, IERROR)
EXTERNAL COPY_FN, DELETE_FN
INTEGER KEYVAL, EXTRA_STATE, IERROR
MPI 102 MPI_KEYVAL_CREATE
MPI_KEYVAL_CREATE generates a new attribute key value that can then be used to cache attributes on communicators.
194
212 MPI_COMM_DUP, copy_fn copy_fn MPI_Copy_function, : typedef int MPI_Copy_function(MPI_Comm *oldcomm,int *keyval, void *extra_state,void *attribute_val_in, void **attribute_val_out,int *flag) Fortran : FUNCTION COPY_FUNCTION(OLDCOMM,KEYVAL,EXTRA_STATE,ATTRIBUTE_VAL_IN, ATTRIBUTE_VAL_OUT,FLAG) INTEGER OLDCOMM,KEYVAL,EXTRA_STATE,ATTRIBUTE_VAL_IN, ATTRIBUTE_VAL_OUT LOGICAL FLAG oldcomm flag=0 flag=1 attribute_val_out MPI_SUCCESS ( MPI_COMM_DUP ) copy_fn C FORTRAN MPI_NULL_COPY_FN MPI_DUP_FN MPI_NULL_COPY_FN flag=0 MPI_SUCCESS flag=1 MPI_DUP_FN attribute_val_out attribute_val_in MPI_SUCCESS copy_fn MPI_COMM_FREE MPI_ATTR_DELETE delete_fn delete_fn MPI_Delete_function C typedef int MPI_Delete_function(MPI_Comm *comm,int *keyval, void*attribute_val,void *extra_state) Fortran : FUNCTION DELETE_FUNCTION(COMM,KEYVAL,ATTRIBUTE_VAL,EXTRA_STATE) INTEGER COMM,KEYVAL,ATTRIBUTE_VAL,EXTRA_STATE MPI_COMM_FREE MPI_ATTR_DELETE MPI_ATTR_PUT C FORTRAN delete_fn MPI_NULL_DELETE_FN MPI_SUCCESS MPI_KEYVAL_FREE(keyval) IN keyval ( ) int MPI_Keyval_free(int *keyval) MPI_KEYVAL_FREE(KEYVAL,IERROR) INTEGER KEYVAL,IERROR MPI 103 MPI_KEYVAL_FREE MPI_KEYVAL_FREE keyval MPI_KEYVAL_INVALID 195
213 MPI_ATTR_PUT(comm,keyval,attribute_val) IN comm ( ) IN keyval, MPI_KEY_CREATE ( ) IN attribute_val int MPI_Attr_put(MPI_Comm comm,int keyval,void* attribute_val) MPI_ATTR_PUT(COMM,KEYVAL,ATTRIBUTE_VAL,IERROR) INTEGER COMM,KEYVAL,ATTRIBUTE_VAL,IERROR MPI 104 MPI_ATTR_PUT MPI_ATTR_PUT keyval MPI_ATTR_GET MPI_ATTR_GET(comm,keyval,attribute_val,flag) IN comm ( ) IN keyval ( ) OUT attribute_val OUT flag int MPI_Attr_get(MPI_Comm comm,int keyval,void **attribute_val,int *flag) MPI_ATTR_GET(COMM,KEYVAL,ATTRIBUTE_VAL,FLAG,IERROR) INTEGER COMM,KEYVAL,ATTRIBUTE_VAL,IERROR LOGICAL FLAG MPI 105 MPI_ATTR_GET MPI_ATTR_GET keyval, comm,, flag=false attribute_val flag=true MPI_ATTR_DELETE(comm,keyval) IN comm ( ) IN keyval int MPI_Attr_delete(MPI_Comm comm,int keyval) MPI_ATTR_DELETE(COMM,KEYVAL,IERROR) INTEGER COMM,KEYVAL,IERROR MPI 106 MPI_ATTR_DELETE MPI_ATTR_DELETE keyval delete_fn delete_fn 196
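Before the Fortran example below, a short C sketch (added for illustration, not from the original program) shows the typical put/get/delete sequence; it assumes the predefined no-op callbacks MPI_NULL_COPY_FN and MPI_NULL_DELETE_FN described above:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int keyval, flag;
    static int myval = 120;   /* the attribute value must outlive the put */
    int *got;

    MPI_Init(&argc, &argv);
    MPI_Keyval_create(MPI_NULL_COPY_FN, MPI_NULL_DELETE_FN, &keyval, NULL);
    MPI_Attr_put(MPI_COMM_WORLD, keyval, &myval);      /* cache the attribute */
    MPI_Attr_get(MPI_COMM_WORLD, keyval, &got, &flag); /* retrieve it */
    if (flag)
        printf("cached attribute = %d\n", *got);       /* prints 120 */
    MPI_Attr_delete(MPI_COMM_WORLD, keyval);           /* runs the delete callback */
    MPI_Keyval_free(&keyval);
    MPI_Finalize();
    return 0;
}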
214 MPI_SUCCESS MPI_COMM_DUP MPI_COMM_FREE PROGRAM MAIN include 'mpif.h' integer PM_MAX_TESTS parameter (PM_MAX_TESTS=3) integer PM_TEST_INTEGER, fuzzy, Error, FazAttr integer PM_RANK_SELF integer Faz_World parameter (PM_TEST_INTEGER=12345) logical FazFlag external FazCreate, FazDelete call MPI_INIT(PM_GLOBAL_ERROR) C C C C C C C PM_GLOBAL_ERROR = MPI_SUCCESS call MPI_COMM_SIZE (MPI_COMM_WORLD,PM_NUM_NODES, $ PM_GLOBAL_ERROR) call MPI_COMM_RANK (MPI_COMM_WORLD,PM_RANK_SELF, $ PM_GLOBAL_ERROR) call MPI_keyval_create ( FazCreate, FazDelete, FazTag, & fuzzy, Error ) call MPI_attr_get (MPI_COMM_WORLD, FazTag, FazAttr, & FazFlag, Error) if (FazFlag ) then print *, "True,get attr=",fazattr else print *, "False no attr" end if FazAttr = 120 call MPI_attr_put (MPI_COMM_WORLD, FazTag, FazAttr, Error) call MPI_Comm_Dup (MPI_COMM_WORLD, Faz_WORLD, Error) call MPI_Attr_Get ( Faz_WORLD, FazTag, FazAttr, & FazFlag, Error) if (FazFlag) then 197
215 print *, "True,dup comm get attr=",fazattr else print *,"error" end if call MPI_Comm_free( Faz_WORLD, Error ) C call MPI_FINALIZE (PM_GLOBAL_ERROR) end C C C SUBROUTINE FazCreate (comm, keyval, fuzzy, & attr_in, attr_out, flag, ierr ) INTEGER comm, keyval, fuzzy, attr_in, attr_out LOGICAL flag include 'mpif.h' attr_out = attr_in + 1 flag =.true. ierr = MPI_SUCCESS END C C C SUBROUTINE FazDelete (comm, keyval, attr, extra, ierr ) INTEGER comm, keyval, attr, extra, ierr include 'mpif.h' ierr = MPI_SUCCESS if (keyval.ne. MPI_KEYVAL_INVALID)then attr = attr - 1 end if END MPI MPI MPI 198
216 16 MPI MPI 16.1 (inter-communicator), MPI 16 MPI_CART_CREATE MPI_GRAPH_CREATE MPI_CARTDIM_GET MPI_GRAPHDIMS_GET MPI_CART_GET MPI_GRAPH_GET MPI_CART_MAP MPI_GRAPH_MAP 16.2 MPI_CART_CREATE MPI_CART_CREATE reorder = false ndims dims[0] dims[1]... dims[ndims-1] dims[1]*dims[1]*...*dims[ndims-1] comm_old MPI_COMM_NULL MPI_COMM_SPLIT 199
217 comm_old MPI_CART_CREATE(comm_old, ndims, dims, periods, reorder, comm_cart) IN comm_old IN ndims IN dims ndims IN periods ndims IN reorder OUT comm_cart int MPI_Cart_create(MPI_Comm comm_old, int ndims, int *dims, int *periods, int reorder, MPI_Comm *comm_cart) MPI_CART_CREATE(COMM_OLD, NDIMS, DIMS, PERIODS, REORDER, COMM_CART, IERROR) INTEGER COMM_OLD, NDIMS, DIMS(*), COMM_CART, IERROR LOGICAL PERIODS(*), REORDER MPI 107 MPI_CART_CREATE MPI_DIMS_CREATE(nnodes, ndims,dims) IN nnodes IN ndims INOUT dims ndims int MPI_Dims_create(int nnodes, int ndims, int *dims) MPI_DIMS_CREATE(NNODE, NDIMS, DIMS, IERROR) INTEGER NNODES, NDIMS, DIMS(*), IERROR MPI 108 MPI_DIMS_CREATE MPI_DIMS_CREATE ndims nnodes dims MPI_CART_CREATE i dims[i]=k>0 dims[i] dims[i]=0 dims[i] MPI_TOPO_TEST STATUS MPI_GRAPH MPI_CART MPI_UNDEFINED 200
218 MPI_TOPO_TEST(comm, status) IN comm OUT status comm ( ) int MPI_Topo_test(MPI_Comm comm, int *status) MPI_TOPO_TEST(COMM, STATUS, IERROR) INTEGER COMM, STATUS, IERROR MPI 109 MPI_TOPO_TEST MPI_CART_GET(comm, maxdims, dims, periods, coords) IN comm IN maxdims OUT dims OUT periods OUT coords int MPI_Cart_get(MPI_Comm comm, int maxdims, int *dims, int *periods, int *coords) MPI_CART_GET(COMM, MAXDIMS, DIMS, PERIODS, COORDS, IERROR) INTEGER COMM, MAXDIMS, DIMS(*), COORDS(*), IERROR LOGICAL PERIODS(*) MPI 110 MPI_CART_GET MPI_CART_GET dims periods coords MPI_CART_RANK(comm, coords, rank) IN comm IN coords OUT rank int MPI_Cart_rank(MPI_Comm comm, int *coords, int *rank) MPI_CART_RANK(COMM, COORDS, RANK, IERROR) INTEGER COMM, COORDS(*), RANK, IERROR MPI 111 MPI_CART_RANK MPI_CART_RANK MPI_COMM_RANK 201
219 MPI_CARTDIM_GET(comm, ndims) IN comm OUT ndims int MPI_Cartdim_get(MPI_Comm comm, int *ndims) MPI_CARTDIM_GET(COMM, NDIMS, IERROR) INTEGER COMM, NDIMS, IERROR MPI 112 MPI_CARTDIM_GET MPI_CARTDIM_GET comm ndims MPI_CART_SHIFT(comm, direction, disp, rank_source, rank_dest) IN comm IN direction IN disp OUT rank_source OUT rank_dest int MPI_Cart_shift(MPI_Comm comm, int direction, int disp, int *rank_source, int *rank_dest) MPI_CART_SHIFT(COMM, DIRECTION, DISP, RANK_SOURCE, RANK_DEST, IERROR) INTEGER COMM, DIRECTION, DISP, RANK_SOURCE, RANK_DEST, IERROR MPI 113 MPI_CART_SHIFT MPI_CART_SHIFT comm rank_source direction disp rank_dest rank_source rank_dest MPI_PROC_NULL MPI_CART_COORDS(comm, rank, maxdims, coords) IN comm IN rank IN maxdims OUT coords int MPI_Cart_coords(MPI_Comm comm, int rank, int maxdims, int *coords) MPI_CART_COORDS(COMM, RANK, MAXDIMS, COORDS, IERROR) INTEGER COMM, RANK, MAXDIMS, COORDS(*), IERROR MPI 114 MPI_CART_COORDS 202
220 MPI_CART_COORDS rank coords maxdims MPI_CART_SUB(comm, remain_dims, newcomm) IN comm IN remain_dims OUT newcomm int MPI_Cart_sub(MPI_Comm com, int *remain_dims, MPI_Comm *newcomm) MPI_CART_SUB(COMM, REMAIN_DIMS, NEWCOMM, IERROR) INTEGER COMM, NEWCOMM, IERROR LOGICAL REMAIN_DIMS(*) MPI 115 MPI_CART_SUB MPI_CART_SUB remain_dims remain_dims[i] true remain_dims[i] false comm remain_dims= false,true,true <1,1,1><1,1,2><1,1,3><1,1,4> <1,1><1,2><1,3><1,4> <1,2,1><1,2,2><1,2,3><1,2,4> <2,1><2,2><2,3><2,4> <1,3,1><1,3,2><1,3,3><1,3,4> <3,1><3,2><3,3><3,4> 1 <2,1,1><2,1,2><2,1,3><2,1,4> <1,1><1,2><1,3><1,4> <2,2,1><2,2,2><2,2,3><2,2,4> <2,1><2,2><2,3><2,4> <2,3,1><2,3,2><2,3,3><2,3,4> <3,1><3,2><3,3><3,4>
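The cartesian calls above are easiest to see working together. The sketch below is an illustration added here, not part of the original text; all variable names are local to the example. It builds a balanced non-periodic 2-D grid, translates its rank to coordinates, finds its neighbors along dimension 0, and then uses MPI_CART_SUB to obtain a per-row communicator:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    MPI_Comm grid, row;
    int dims[2] = {0, 0}, periods[2] = {0, 0};
    int remain_dims[2] = {0, 1};   /* keep dimension 1: one communicator per row */
    int size, rank, coords[2], src, dest, row_rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Dims_create(size, 2, dims);              /* choose a balanced 2-D shape */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &grid);
    MPI_Comm_rank(grid, &rank);
    MPI_Cart_coords(grid, rank, 2, coords);      /* rank -> (row, col) */
    MPI_Cart_shift(grid, 0, 1, &src, &dest);     /* neighbors along dimension 0;
                                                    MPI_PROC_NULL at the borders */
    MPI_Cart_sub(grid, remain_dims, &row);       /* the sub-grid containing my row */
    MPI_Comm_rank(row, &row_rank);
    printf("rank %d = (%d,%d), up/down = %d/%d, rank in row = %d\n",
           rank, coords[0], coords[1], src, dest, row_rank);
    MPI_Comm_free(&row);
    MPI_Comm_free(&grid);
    MPI_Finalize();
    return 0;
}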
221
MPI_CART_MAP(comm, ndims, dims, periods, newrank)
IN comm  IN ndims  IN dims  array of ndims entries  IN periods  array of ndims entries  OUT newrank
int MPI_Cart_map(MPI_Comm comm, int ndims, int *dims, int *periods, int *newrank)
MPI_CART_MAP(COMM, NDIMS, DIMS, PERIODS, NEWRANK, IERROR)
INTEGER COMM, NDIMS, DIMS(*), NEWRANK, IERROR
LOGICAL PERIODS(*)
MPI 116 MPI_CART_MAP
MPI_CART_MAP computes a recommended placement of the calling process on the cartesian topology described by ndims, dims and periods, returning it in newrank; a process that is not part of the new topology gets MPI_UNDEFINED.
#include <stdio.h>
#include "mpi.h"
int main( argc, argv )
int argc;
char **argv;
{
    int rank, value, size, false=0;
    int right_nbr, left_nbr;
    MPI_Comm ring_comm;
    MPI_Status status;
    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    /* false: non-periodic, so shifts off either end return MPI_PROC_NULL;
       the processes form the chain MPI_PROC_NULL, 0, 1, ..., size-1, MPI_PROC_NULL */
    MPI_Cart_create( MPI_COMM_WORLD, 1, &size, &false, 1, &ring_comm );
    MPI_Cart_shift( ring_comm, 0, 1, &left_nbr, &right_nbr );  /* my two neighbors */
    MPI_Comm_rank( ring_comm, &rank );   /* my rank in the chain */
204
222
    MPI_Comm_size( ring_comm, &size );   /* number of processes in the chain */
    do {
        if (rank == 0) {    /* process 0 reads a value and starts it down the chain */
            scanf( "%d", &value );
            MPI_Send( &value, 1, MPI_INT, right_nbr, 0, ring_comm );
        }
        else {              /* every other process receives and forwards the value */
            MPI_Recv( &value, 1, MPI_INT, left_nbr, 0, ring_comm, &status );
            MPI_Send( &value, 1, MPI_INT, right_nbr, 0, ring_comm );
        }
        printf( "Process %d got %d\n", rank, value );
    } while (value >= 0);   /* a negative value terminates the loop */
    MPI_Finalize( );
}
MPI_GRAPH_CREATE builds a communicator with graph topology from nnodes, index and edges. With reorder = false the rank of each process in the new communicator is the same as in comm; if the group of comm has more processes than nnodes, the extra processes get MPI_COMM_NULL, as with MPI_COMM_SPLIT. The nodes are numbered 0 to nnodes-1. In C, index[i] stores the total number of neighbors of nodes 0 through i, so node 0 has index[0] neighbors, node 1 has index[1]-index[0], and node i has index[i]-index[i-1] for i = 1, ..., nnodes-1; the array edges lists the neighbors themselves.
205
223 MPI_GRAPH_CREATE(comm_old, nnodes, index, edges, reorder, comm_graph) IN comm_old IN nnodes IN index IN edges IN reorder OUT comm_graph int MPI_Graph_create(MPI_Comm comm_old, int nnodes, int *index, int *edges, int reorder, MPI_Comm *comm_graph) MPI_GRAPH_CREATE(COMM_OLD, NNODES, INDEX, EDGES, REORDER, COMM_GRAPH,IERROR) INTEGER COMM_OLD, NNODES, INDEX(*), EDGES(*), COMM_GRAPH, IERROR LOGICAL REORDER MPI 117 MPI_GRAPH_CREATE nnodes, index edges 18 nnodes = 4 index = 2, 3, 4, 6 edges = 1, 3, 0, 3, 0, 2 206
224
In C, index[0] is the degree of node 0, and index[i] - index[i-1] is the degree of node i for i = 1, ..., nnodes-1; the neighbors of node 0 are stored in edges[j] for 0 <= j <= index[0]-1, and the neighbors of node i, i > 0, in edges[j] for index[i-1] <= j <= index[i]-1.
MPI_GRAPHDIMS_GET(comm, nnodes, nedges)
IN comm  OUT nnodes  OUT nedges
int MPI_Graphdims_get(MPI_Comm comm, int *nnodes, int *nedges)
MPI_GRAPHDIMS_GET(COMM, NNODES, NEDGES, IERROR)
INTEGER COMM, NNODES, NEDGES, IERROR
MPI 118 MPI_GRAPHDIMS_GET
MPI_GRAPHDIMS_GET returns the number of nodes nnodes and the number of edges nedges of the graph topology attached to comm.
MPI_GRAPH_GET(comm, maxindex, maxedges, index, edges)
IN comm  IN maxindex  size of the array index  IN maxedges  size of the array edges  OUT index  OUT edges
int MPI_Graph_get(MPI_Comm comm, int maxindex, int maxedges, int *index, int *edges)
MPI_GRAPH_GET(COMM, MAXINDEX, MAXEDGES, INDEX, EDGES, IERROR)
INTEGER COMM, MAXINDEX, MAXEDGES, INDEX(*), EDGES(*), IERROR
MPI 119 MPI_GRAPH_GET
MPI_GRAPH_GET returns the index and edges arrays that define the graph topology of comm.
MPI_GRAPH_NEIGHBORS_COUNT(comm, rank, nneighbors)
IN comm  IN rank  rank of a process in comm  OUT nneighbors
int MPI_Graph_neighbors_count(MPI_Comm comm, int rank, int *nneighbors)
MPI_GRAPH_NEIGHBORS_COUNT(COMM, RANK, NNEIGHBORS, IERROR)
INTEGER COMM, RANK, NNEIGHBORS, IERROR
MPI 120 MPI_GRAPH_NEIGHBORS_COUNT
207
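The sketch below (added for illustration, not part of the original text) creates exactly the four-node graph of the example above and then queries the neighbors of node 0 using MPI_GRAPH_NEIGHBORS_COUNT (table above) and MPI_GRAPH_NEIGHBORS (described next). It assumes MPI_COMM_WORLD has at least four processes; any extra processes receive MPI_COMM_NULL:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    MPI_Comm graph;
    int index[4] = {2, 3, 4, 6};
    int edges[6] = {1, 3, 0, 3, 0, 2};
    int rank, count, nbrs[2];

    MPI_Init(&argc, &argv);
    MPI_Graph_create(MPI_COMM_WORLD, 4, index, edges, 0, &graph);
    if (graph != MPI_COMM_NULL) {   /* processes beyond the 4 nodes get MPI_COMM_NULL */
        MPI_Comm_rank(graph, &rank);
        if (rank == 0) {
            MPI_Graph_neighbors_count(graph, 0, &count);  /* count = 2 */
            MPI_Graph_neighbors(graph, 0, count, nbrs);   /* nbrs = {1, 3} */
            printf("node 0 has %d neighbors: %d %d\n", count, nbrs[0], nbrs[1]);
        }
        MPI_Comm_free(&graph);
    }
    MPI_Finalize();
    return 0;
}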
225
MPI_GRAPH_NEIGHBORS_COUNT returns in nneighbors the number of neighbors of the process with the given rank.
MPI_GRAPH_NEIGHBORS(comm, rank, maxneighbors, neighbors)
IN comm  IN rank  rank of a process in comm  IN maxneighbors  size of the array neighbors  OUT neighbors
int MPI_Graph_neighbors(MPI_Comm comm, int rank, int maxneighbors, int *neighbors)
MPI_GRAPH_NEIGHBORS(COMM, RANK, MAXNEIGHBORS, NEIGHBORS, IERROR)
INTEGER COMM, RANK, MAXNEIGHBORS, NEIGHBORS(*), IERROR
MPI 121 MPI_GRAPH_NEIGHBORS
MPI_GRAPH_NEIGHBORS returns in neighbors the ranks of the processes adjacent to rank.
MPI_GRAPH_MAP(comm, nnodes, index, edges, newrank)
IN comm  IN nnodes  IN index  IN edges  OUT newrank
int MPI_Graph_map(MPI_Comm comm, int nnodes, int *index, int *edges, int *newrank)
MPI_GRAPH_MAP(COMM, NNODES, INDEX, EDGES, NEWRANK, IERROR)
INTEGER COMM, NNODES, INDEX(*), EDGES(*), NEWRANK, IERROR
MPI 122 MPI_GRAPH_MAP
MPI_GRAPH_MAP, like MPI_CART_MAP, lets MPI compute a placement for the calling process on the graph described by nnodes, index and edges, returning the recommended rank in newrank.
16.4 Jacobi
This section implements the Jacobi iteration once more, this time on a virtual topology built with the MPI topology routines; unlike the earlier FORTRAN versions, the program is written in C.
208
226 (0,0) (0,1) (1,0) (1,1) A [0:127] A [0 :127] [0:127] [128:255] A [128:255] A [128:255] [0 :127] [128:255] ( ) ( ) C 209
227 (0,0) (0,1) (1,0) (1,1) 77 #include "mpi.h" #define arysize 256 #define arysize2 (arysize/2) int main(int argc, char *argv[]) { int n, myid, numprocs, i, j, nsteps=10; float a[arysize2+2][arysize2+2],b[arysize2+2][arysize2+2];/* */ double starttime,endtime; int col_tag,row_tag,send_col,send_row,recv_col,recv_row; int col_neighbor,row_neighbor; MPI_Comm comm2d; MPI_Datatype newtype; int right,left,down,top,top_bound,left_bound,down_bound,right_bound; int periods[2]; int dims[2],begin_row,end_row; MPI_Status status; MPI_Init(&argc,&argv); dims[0] = 2; dims[1] = 2; periods[0]=0; periods[1]=0; MPI_Cart_create( MPI_COMM_WORLD, 2, dims, periods, 0,&comm2d);/* 2 2 comm2d*/ 210
228
    MPI_Comm_rank(comm2d,&myid);
    MPI_Type_vector( arysize2, 1, arysize2+2, MPI_FLOAT, &newtype);  /* one column of the local block */
    MPI_Type_commit( &newtype );    /* commit the new datatype */
    MPI_Cart_shift( comm2d, 0, 1, &left, &right);  /* neighbors in dimension 0 */
    MPI_Cart_shift( comm2d, 1, 1, &down, &top);    /* neighbors in dimension 1 */
    /* initialization: interior points 0.0, physical boundaries 8.0 */
    for(i=0;i<arysize2+2;i++)
        for(j=0;j<arysize2+2;j++)
            a[i][j]=0.0;
    if (top == MPI_PROC_NULL)   { for ( i=0;i<arysize2+2;i++) a[1][i]=8.0; }
    if (down == MPI_PROC_NULL)  { for ( i=0;i<arysize2+2;i++) a[arysize2][i]=8.0; }
    if (left == MPI_PROC_NULL)  { for ( i=0;i<arysize2+2;i++) a[i][1]=8.0; }
    if (right == MPI_PROC_NULL) { for ( i=0;i<arysize2+2;i++) a[i][arysize2]=8.0; }
    col_tag = 5; row_tag = 6;
    printf("laplace Jacobi#C(BLOCK,BLOCK)#myid=%d#step=%d#total arysize=%d*%d\n",
           myid,nsteps,arysize,arysize);
    top_bound=1; left_bound=1; down_bound=arysize2; right_bound=arysize2;
    if (top == MPI_PROC_NULL) top_bound=2;
    if (left == MPI_PROC_NULL) left_bound=2;
    if (down == MPI_PROC_NULL) down_bound=arysize2-1;
    if (right == MPI_PROC_NULL) right_bound=arysize2-1;
    starttime=MPI_Wtime();
    for (n=0; n<nsteps; n++) {
        MPI_Sendrecv( &a[1][1], arysize2, MPI_FLOAT, top, row_tag,
                      &a[arysize2+1][1], arysize2, MPI_FLOAT, down, row_tag,
                      comm2d, &status );   /* exchange boundary rows */
211
229
        MPI_Sendrecv( &a[arysize2][1], arysize2, MPI_FLOAT, down, row_tag,
                      &a[0][1], arysize2, MPI_FLOAT, top, row_tag,
                      comm2d, &status );   /* exchange boundary rows */
        MPI_Sendrecv( &a[1][1], 1, newtype, left, col_tag,
                      &a[1][arysize2+1], 1, newtype, right, col_tag,
                      comm2d, &status );   /* exchange boundary columns */
        MPI_Sendrecv( &a[1][arysize2], 1, newtype, right, col_tag,
                      &a[1][0], 1, newtype, left, col_tag,
                      comm2d, &status );   /* exchange boundary columns */
        for ( i=left_bound;i<right_bound;i++)
            for (j=top_bound;j<down_bound;j++)
                b[i][j] = (a[i][j+1]+a[i][j-1]+a[i+1][j]+a[i-1][j])*0.25;
        for ( i=left_bound;i<right_bound;i++)   /* copy back inside the time-step loop */
            for (j=top_bound;j<down_bound;j++)
                a[i][j] = b[i][j];
    }
    endtime=MPI_Wtime();
    printf("elapse time=%f\n",endtime-starttime);
    MPI_Type_free( &newtype );
    MPI_Comm_free( &comm2d );
    MPI_Finalize();
}
66 Jacobi
16.5 MPI
212
230 17 MPI MPI MPI MPI 17.1 MPI_ERRHANDLER_CREATE ( function, errhandler) IN function OUT errhandler MPI int MPI_Errhandler_create (MPI_Handler_function *function, MPI_Errhandler *errhandler) MPI_ERRHANDLER_CREATE ( FUNCTION, ERRHANDLER, IERROR) EXTERNAL FUNCTION INTEGER ERRHANDLER, IERROR MPI 123 MPI_ERRHANDLER_CREATE MPI_ERRHANDLER_CREATE function MPI MPI errhandler MPI_Handler_function C typedef void (MPI_Handler_function) (MPI_Comm *, int *, ) MPI MPI_ERRHANDLER_SET ( comm, errhandler) IN comm IN errhandler MPI int MPI_Errhandler_set (MPI_Comm comm, MPI_Errhandler errhandler) MPI_ERRHANDLER_SET ( COMM, ERRHANDLER, IERROR) INTEGER COMM, ERRHANDLER, IERROR MPI 124 MPI_ERRHANDLER_SET MPI_ERRHANDLER_SET errhandler comm 213
231 MPI_ERRHANDLER_GET(comm, errhandler) IN comm OUT errhandler MPI int MPI_Errhandler_get (MPI_Comm comm, MPI_Errhandler *errhandler) MPI_ERRHANDLER_GET ( COMM, ERRHANDLER, IERROR) INTEGER COMM, ERRHANDLER, IERROR MPI 125 MPI_ERRHANDLER_GET MPI_ERRHANDLER_GET comm errhandler MPI_ERRHANDLER_FREE ( errhandler) IN errhandler MPI int MPI_Errhandler_free (MPI_Errhandler *errhandler) MPI_ERRHANDLER_FREE ( ERRHANDLER, IERROR) INTEGER ERRHANDLER, IERROR MPI 126 MPI_ERRHANDLER_FREE MPI_ERRHANDLER_FREE errhandler errhandler MPI_ERRHANDLERNULL MPI_ERROR_STRING (errorcode, string, resultlen) IN errorcode MPI OUT string errorcode OUT resultlen string int MPI_Error_string (int errorcode, char *string, int *resultlen) MPI_ERROR_STRING (ERRORCODE, STRING, RESULTLEN, IERROR) INTEGER ERRORCODE, RESULTLEN, IERROR CHARACTER *(*) STRING MPI 127 MPI_ERROR_STRING MPI_ERROR_STRING string string MPI_MAX_ERROR_STRING 214
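The handler calls combine naturally with the error-code calls: switch a communicator from the default MPI_ERRORS_ARE_FATAL to the predefined handler MPI_ERRORS_RETURN, then decode any nonzero return code with MPI_ERROR_STRING. The sketch below is an illustration only; the deliberately invalid count is just one way to provoke an error:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    char msg[MPI_MAX_ERROR_STRING];
    int err, len;

    MPI_Init(&argc, &argv);
    /* make MPI calls return error codes instead of aborting */
    MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
    err = MPI_Send(NULL, -1, MPI_INT, 0, 0, MPI_COMM_WORLD);  /* invalid count */
    if (err != MPI_SUCCESS) {
        MPI_Error_string(err, msg, &len);
        printf("MPI error: %s\n", msg);
    }
    MPI_Finalize();
    return 0;
}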
232 MPI_ERROR_CLASS (errorcode, errorclass) IN errorcode MPI OUT errorclass errorcode int MPI_Error_class (int errorcode, int *errorclass) MPI_ERROR_CLASS (ERRORCODE, ERRORCLASS, IERROR) INTEGER ERRORCODE, ERRORCLASS, IERROR MPI_ERROR_CLASS MPI 128 MPI_ERROR_CLASS 19 MPI_SUCCESS MPI_ERR_BUFFER MPI_ERR_COUNT MPI_ERR_TYPE MPI_ERR_ TAG MPI_ERR_COMM MPI_ERR_RANK MPI_ERR_REQUEST MPI_ERR_ROOT MPI_ERR_GROUP MPI_ERR_OP MPI_ERR_TOPOLOGY MPI_ERR_DIMS MPI_ERR_ARG MPI_ERR_UNKNOWN MPI_ERR_TRUNCATE MPI_ERR_OTHER MPI_ERR_INTERN MPI_ERR_LASTCODE MPI 0 = MPI_SUCCESS MPI_ERR_ MPI_ERR_LASTCODE 17.2 MPI 215
233
18 MPI
18.1 MPI-1 C
int MPI_Abort(MPI_Comm comm, int errorcode)  (aborts the MPI program)
int MPI_Address(void *location, MPI_Aint *address)  (MPI-2: MPI_Get_address)
int MPI_Allgather(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)  (like MPI_Gather, but all processes receive the result)
int MPI_Allgatherv(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int *recvcounts, int *displs, MPI_Datatype recvtype, MPI_Comm comm)  (like MPI_Gatherv, but all processes receive the result)
int MPI_Allreduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)  (like MPI_Reduce, but all processes receive the result)
int MPI_Alltoall(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)
int MPI_Alltoallv(void *sendbuf, int *sendcounts, int *sdispls, MPI_Datatype sendtype, void *recvbuf, int *recvcounts, int *rdispls, MPI_Datatype recvtype, MPI_Comm comm)
int MPI_Attr_delete(MPI_Comm comm, int keyval)  (MPI-2: MPI_Comm_delete_attr)
int MPI_Attr_get(MPI_Comm comm, int keyval, void *attribute_val, int *flag)  (MPI-2: MPI_Comm_get_attr)
int MPI_Attr_put(MPI_Comm comm, int keyval, void *attribute_val)  (MPI-2: MPI_Comm_set_attr)
int MPI_Barrier(MPI_Comm comm)
int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)  (broadcast from root)
int MPI_Bsend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
216
234 int MPI_Bsend_init(void * buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request * request) int MPI_Buffer_attach(void * buffer, int size) int MPI_Buffer_detach(void * buffer, int * size) int MPI_Cancel(MPI_Request * request) int MPI_Cart_coords(MPI_Comm comm, int rank, int maxdims, int * coords) int MPI_Cart_create(MPI_Comm comm_old, int ndims, int * dims, int * periods, int reorder, MPI_Comm * comm_cart ) int MPI_Cart_get(MPI_Comm comm, int maxdims, int * dims, int *periods, int * coords) int MPI_Cart_map(MPI_Comm comm, int * ndims, int * periods, int * newrank) int MPI_Cart_rank(MPI_Comm comm, int * coords, int * rank) int MPI_Cart_shift(MPI_Comm comm, int direction, int disp, int * rank_source, int * rank_dest) int MPI_Cart_sub(MPI_Comm comm, int * remain_dims, MPI_Comm * newcomm) int MPI_Cartdim_get(MPI_Comm comm, int* ndims) int MPI_Comm_compare(MPI_comm comm1, MPI_Comm comm2, int * result) int MPI_Comm_create(MPI_Comm comm, MPI_Group group, MPI_Comm * newcomm) Int MPI_Comm_dup(MPI_Comm comm, MPI_Comm *new_comm) int MPI_Comm_free(MPI_Comm* comm) int MPI_Comm_group(MPI_Comm comm, MPI_Group * group) int MPI_Comm_rank(MPI_Comm comm, int * rank) int MPI_Comm_remote_group(MPI_Comm comm, MPI_Group * group) int MPI_Comm_remote_size(MPI_Comm comm, int * size) int MPI_Comm_set_attr(MPI_Comm comm, int keyval, void * attribute_val) 217
235 int MPI_Comm_size(MPI_Comm comm, int * size) int MPI_Comm_split(MPI_Comm comm, int color, int key, MPI_Comm * newcomm) int MPI_Comm_test_inter(MPI_Comm comm, int * flag) int MPI_Dims_create(int nnodes, int ndims, int * dims) int MPI_Errhandler_create(MPI_handler_function * function, MPI_Errhandler * errhandler) MPI MPI_Comm_create_errhandler int MPI_Errhandler_free(MPI_Errhandler * errhandler) MPI int MPI_Errhandler_get(MPI_Comm comm, MPI_Errhandler * errhandler) MPI_Comm_get_errhandler int MPI_Errhandler_set(MPI_Comm comm, MPI_Errhandler errhandler) MPI MPI_Comm_set_errhandler int MPI_Error_class(int errorcode, int * errorclass) int MPI_Error_string(int errorcode, char * string, int * resultlen) int MPI_Finalize(void) MPI int MPI_Gather(void * sendbuf, int sendcount, MPI_Datatype sendtype, void * recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm) int MPI_Gatherv(void * sendbuf, int sendcount, MPI_Datatype sendtype, void * recvbuf, int * recvcounts, int * displs, MPI_Datatype recvtype, int root, MPI_Comm comm) int MPI_Get_count(MPI_Status * status, MPI_Datatype datatype, int * count) int MPI_Get_elements(MPI_Statue * status, MPI_Datatype datatype, int * elements) int MPI_Get_processor_name(char * name, int * resultlen) int MPI_Get_version(int * version, int * subversion) MPI int MPI_Graph_create(MPI_Comm comm_old, int nnodes, int * index, int * edges, int reorder, MPI_Comm * comm_graph) int MPI_Graph_get(MPI_Comm comm, int maxindex, int maxedges, int * index, int * edges) int MPI_Graph_map(MPI_Comm comm, int nnodes, int * index, int * edges, int * newrank) 218
236
int MPI_Graph_neighbors_count(MPI_Comm comm, int rank, int *nneighbors)
int MPI_Graph_neighbors(MPI_Comm comm, int rank, int maxneighbors, int *neighbors)
int MPI_Graphdims_get(MPI_Comm comm, int *nnodes, int *nedges)
int MPI_Group_compare(MPI_Group group1, MPI_Group group2, int *result)
int MPI_Group_difference(MPI_Group group1, MPI_Group group2, MPI_Group *newgroup)
int MPI_Group_excl(MPI_Group group, int n, int *ranks, MPI_Group *newgroup)
int MPI_Group_free(MPI_Group *group)
int MPI_Group_incl(MPI_Group group, int n, int *ranks, MPI_Group *newgroup)
int MPI_Group_intersection(MPI_Group group1, MPI_Group group2, MPI_Group *newgroup)
int MPI_Group_range_excl(MPI_Group group, int n, int ranges[][3], MPI_Group *newgroup)
int MPI_Group_range_incl(MPI_Group group, int n, int ranges[][3], MPI_Group *newgroup)
int MPI_Group_rank(MPI_Group group, int *rank)
int MPI_Group_size(MPI_Group group, int *size)
int MPI_Group_translate_ranks(MPI_Group group1, int n, int *ranks1, MPI_Group group2, int *ranks2)
int MPI_Group_union(MPI_Group group1, MPI_Group group2, MPI_Group *newgroup)
int MPI_Ibsend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
int MPI_Init(int *argc, char ***argv)  (initializes MPI)
int MPI_Initialized(int *flag)  (tests whether MPI_Init has been called)
int MPI_Intercomm_create(MPI_Comm local_comm, int local_leader, MPI_Comm peer_comm, int remote_leader, int tag, MPI_Comm *newintercomm)
int MPI_Intercomm_merge(MPI_Comm intercomm, int high, MPI_Comm *newintracomm)
int MPI_Iprobe(int source, int tag, MPI_Comm comm, int *flag, MPI_Status *status)
219
237
int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request *request)
int MPI_Irsend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
int MPI_Isend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
int MPI_Issend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)
int MPI_Keyval_create(MPI_Copy_function *copy_fn, MPI_Delete_function *delete_fn, int *keyval, void *extra_state)  (MPI-2: MPI_Comm_create_keyval)
int MPI_Keyval_free(int *keyval)
int MPI_Op_create(MPI_User_function *function, int commute, MPI_Op *op)
int MPI_Op_free(MPI_Op *op)
int MPI_Pack(void *inbuf, int incount, MPI_Datatype datatype, void *outbuf, int outcount, int *position, MPI_Comm comm)
int MPI_Pack_size(int incount, MPI_Datatype datatype, MPI_Comm comm, int *size)
int MPI_Pcontrol(const int level)
int MPI_Probe(int source, int tag, MPI_Comm comm, MPI_Status *status)
int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
int MPI_Recv_init(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request *request)
int MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)  (reduction to root)
int MPI_Reduce_scatter(void *sendbuf, void *recvbuf, int *recvcounts, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
int MPI_Request_free(MPI_Request *request)
220
238 int MPI_Rsend(void * buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm) int MPI_Rsend_init(void * buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request * request) int MPI_Scan(void * sendbuf, void * recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm) int MPI_Scatter(void * sendbuf, int sendcount, MPI_Datatype sendtype, void * recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm) int MPI_Scatterv(void * sendbuf, int * sendcounts, int * displs, MPI_Datatype sendtype, void * recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm) int MPI_Send(void * buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm) int MPI_Send_init(void * buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request * request) int MPI_Sendrecv(void * sendbuf, int sendcount, MPI_Datatype sendtype, int dest, int sendtag, void * recvbuf, int recvcount, MPI_Datatype recvtype, int source, int recvtag, MPI_Comm comm, MPI_Status * status) int MPI_Sendrecv_replace(void * buf, int count, MPI_Datatype datatype, int dest, int sendtag, int source, int recvtag, MPI_Comm comm, MPI_Status * status) int MPI_Ssend(void * buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm) int MPI_Ssend_init(void * buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request * request) int MPI_Start(MPI_Request * request) int MPI_Startall(int count, MPI_Request * array_of_requests) int MPI_Test(MPI_Request * request, int * flag, MPI_Status * status) int MPI_Testall(int count, MPI_Request * array_of_requests, int * flag, MPI_Status * array_of_statuses) int MPI_Testany(int count, MPI_Request * array_of_requests, int * index, int * flag, MPI_Status * status) 221
239 int MPI_Testsome(int incount, MPI_Request * array_of_requests, int * outcount, int * array_of_indices, MPI_Status * array_of_statuses) int MPI_Test_cancelled(MPI_Status * status, int * flag) int MPI_Topo_test(MPI_Comm comm, int * top_type) int MPI_Type_commit(MPI_Datatype * datatype) int MPI_Type_contiguous(int count, MPI_Datatype oldtype, MPI_Datatype * newtype) int MPI_Type_extent(MPI_Datatype datatype, MPI_Aint * extent), MPI_type_get_extent int MPI_Type_free(MPI_Datatype * datatype) int MPI_Type_hindexed(int count, int * array_of_blocklengths, MPI_Aint * array_of_displacements, MPI_Datatype oldtype, MPI_Datatype * newtype),, MPI_type_create_hindexed int MPI_Type_hvector(int count, int blocklength, MPI_Aint stride, MPI_Datatype oldtype, MPI_Datatype * newtype),, MPI_type_create_hvector int MPI_Type_indexed(int cont, int * array_of_blocklengths, int * array_of_displacements, MPI_Datatype oldtype, MPI_Datatype * newtype) int MPI_Type_lb(MPI_Datatype datatype, MPI_Aint * displacement), MPI_type_get_extent int MPI_Type_size(MPI_Datatype datatype, int * size), int MPI_Type_struct(int count, int * array_of_blocklengths, MPI_Aint * array_of_displacements, MPI_Datatype * array_of_types, MPI_Datatype * newtype), MPI_type_create_struct int MPI_Type_ub(MPI_Datatype datatype, MPI_Aint * displacement), MPI_type_get_extent int MPI_Type_vector(int count, int blocklength, int stride, MPI_Datatype oldtype, MPI_Datatype * newtype) int MPI_Unpack(void * inbuf, int insize, int * position, void * outbuf, int outcount, MPI_Datatype datatype, MPI_Comm comm) int MPI_Wait(MPI_Request * request, MPI_Status * status) MPI int MPI_Waitall(int count, MPI_Request * array_of_requests, MPI_Status * array_of_status) 222
240
int MPI_Waitany(int count, MPI_Request *array_of_requests, int *index, MPI_Status *status)
int MPI_Waitsome(int incount, MPI_Request *array_of_requests, int *outcount, int *array_of_indices, MPI_Status *array_of_statuses)
double MPI_Wtick(void)  (resolution of MPI_Wtime)
double MPI_Wtime(void)
18.2 MPI-1 Fortran
MPI_Abort(comm, errorcode, ierror)
integer comm, errorcode, ierror
MPI_Address(location, address, ierror)
<type> location
integer address, ierror  (MPI-2: MPI_Get_address)
MPI_Allgather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, ierror)
<type> sendbuf(*), recvbuf(*)
integer sendcount, sendtype, recvcount, recvtype, comm, ierror  (cf. MPI_Gather)
MPI_Allgatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, ierror)
<type> sendbuf(*), recvbuf(*)
integer sendcount, sendtype, recvcounts(*), displs(*), recvtype, comm, ierror  (cf. MPI_Gatherv)
MPI_Allreduce(sendbuf, recvbuf, count, datatype, op, comm, ierror)
<type> sendbuf(*), recvbuf(*)
integer count, datatype, op, comm, ierror  (cf. MPI_Reduce)
MPI_Alltoall(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, ierror)
<type> sendbuf(*), recvbuf(*)
integer sendcount, sendtype, recvcount, recvtype, comm, ierror
MPI_Alltoallv(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, ierror)
<type> sendbuf(*), recvbuf(*)
integer sendcounts(*), sdispls(*), sendtype, recvcounts(*), rdispls(*), recvtype, comm, ierror
MPI_Attr_delete(comm, keyval, ierror)
223
241
integer comm, keyval, ierror  (MPI-2: MPI_Comm_delete_attr)
MPI_Attr_get(comm, keyval, attribute_val, flag, ierror)
integer comm, keyval, attribute_val, ierror
logical flag  (MPI-2: MPI_Comm_get_attr)
MPI_Attr_put(comm, keyval, attribute_val, ierror)
integer comm, keyval, attribute_val, ierror  (MPI-2: MPI_Comm_set_attr)
MPI_Barrier(comm, ierror)
integer comm, ierror
MPI_Bcast(buffer, count, datatype, root, comm, ierror)
<type> buffer(*)
integer count, datatype, root, comm, ierror  (broadcast from root)
MPI_Bsend(buf, count, datatype, dest, tag, comm, ierror)
<type> buf(*)
integer count, datatype, dest, tag, comm, ierror
MPI_Bsend_init(buf, count, datatype, dest, tag, comm, request, ierror)
<type> buf(*)
integer count, datatype, dest, tag, comm, request, ierror
MPI_Buffer_attach(buffer, size, ierror)
<type> buffer(*)
integer size, ierror
MPI_Buffer_detach(buffer, size, ierror)
<type> buffer(*)
integer size, ierror
MPI_Cancel(request, ierror)
integer request, ierror
MPI_Cart_coords(comm, rank, maxdims, coords, ierror)
integer comm, rank, maxdims, coords(*), ierror
MPI_Cart_create(comm_old, ndims, dims, periods, reorder, comm_cart, ierror)
integer comm_old, ndims, dims(*), comm_cart, ierror
logical periods(*), reorder
MPI_Cart_get(comm, maxdims, dims, periods, coords, ierror)
224
242
integer comm, maxdims, dims(*), coords(*), ierror
logical periods(*)
MPI_Cart_map(comm, ndims, dims, periods, newrank, ierror)
integer comm, ndims, dims(*), newrank, ierror
logical periods(*)
MPI_Cart_rank(comm, coords, rank, ierror)
integer comm, coords(*), rank, ierror
MPI_Cart_shift(comm, direction, disp, rank_source, rank_dest, ierror)
integer comm, direction, disp, rank_source, rank_dest, ierror
MPI_Cart_sub(comm, remain_dims, newcomm, ierror)
integer comm, newcomm, ierror
logical remain_dims(*)
MPI_Cartdim_get(comm, ndims, ierror)
integer comm, ndims, ierror
MPI_Comm_compare(comm1, comm2, result, ierror)
integer comm1, comm2, result, ierror
MPI_Comm_create(comm, group, newcomm, ierror)
integer comm, group, newcomm, ierror
MPI_Comm_dup(comm, newcomm, ierror)
integer comm, newcomm, ierror
MPI_Comm_free(comm, ierror)
integer comm, ierror
MPI_Comm_group(comm, group, ierror)
integer comm, group, ierror
MPI_Comm_rank(comm, rank, ierror)
integer comm, rank, ierror
MPI_Comm_remote_group(comm, group, ierror)
integer comm, group, ierror
MPI_Comm_remote_size(comm, size, ierror)
integer comm, size, ierror
MPI_Comm_set_attr(comm, keyval, attribute_val, ierror)
225
243 integer comm, keyval, ierror integer (kind=mpi_address_kind) attribute_val MPI_Comm_size(comm, size, ierror) integer comm, size, ierror MPI_Comm_split(comm, color, key, newcomm, ierror) integer comm, color, key, newcomm, ierror MPI_Comm_test_inter(comm, flag, ierror) integer comm, ierror Logical flag MPI_Dims_create(nnodes, ndims, dims, ierror) integer nnodes, ndims, dims(*), ierror MPI_Errhandler_create(function, errhandler, ierror) External function integer errhandler, ierror MPI MPI_Comm_create_errhandler MPI_Errhandler_free(comm, errhandler, ierror) integer comm, errhandler, ierror MPI MPI_Errhandler_get(comm, errhandler, ierror) integer errhandler, ierror MPI_Comm_get_errhandler MPI_Errhandler_set(comm, errhandler, ierror) integer comm, errhandler, ierror MPI MPI_Comm_set_errhandler MPI_Error_class(errorcode, errorclass, ierror) integer errorcode, errorclass, ierror MPI_Error_string(errorcode, string, resultlen, ierror) integer errorcode, resultlem, ierror character *(MPI_MAX_ERROR_STRING) string MPI_Finalize(ierror) integer ierror MPI MPI_Gather(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm, ierror) <type> sendbuf(*), recvbuf(*) integer sendcount, sendtype, recvcount, recvtype, root, comm, ierror 226
244
MPI_Gatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm, ierror)
<type> sendbuf(*), recvbuf(*)
integer sendcount, sendtype, recvcounts(*), displs(*), recvtype, root, comm, ierror
MPI_Get_count(status, datatype, count, ierror)
integer status(*), datatype, count, ierror
MPI_Get_elements(status, datatype, elements, ierror)
integer status(*), datatype, elements, ierror
MPI_Get_processor_name(name, resultlen, ierror)
character * (MPI_MAX_PROCESSOR_NAME) name
integer resultlen, ierror
MPI_Get_version(version, subversion, ierror)
integer version, subversion, ierror  (returns the MPI version)
MPI_Graph_create(comm_old, nnodes, index, edges, reorder, comm_graph, ierror)
integer comm_old, nnodes, index(*), edges(*), comm_graph, ierror
logical reorder
MPI_Graph_get(comm, maxindex, maxedges, index, edges, ierror)
integer comm, maxindex, maxedges, index(*), edges(*), ierror
MPI_Graph_map(comm, nnodes, index, edges, newrank, ierror)
integer comm, nnodes, index(*), edges(*), newrank, ierror
MPI_Graph_neighbors_count(comm, rank, nneighbors, ierror)
integer comm, rank, nneighbors, ierror
MPI_Graph_neighbors(comm, rank, maxneighbors, neighbors, ierror)
integer comm, rank, maxneighbors, neighbors(*), ierror
MPI_Graphdims_get(comm, nnodes, nedges, ierror)
integer comm, nnodes, nedges, ierror
MPI_Group_compare(group1, group2, result, ierror)
integer group1, group2, result, ierror
MPI_Group_difference(group1, group2, newgroup, ierror)
integer group1, group2, newgroup, ierror
MPI_Group_excl(group, n, ranks, newgroup, ierror)
227
245 integer group, n, ranks(*), newgroup, ierror, MPI_Group_free(group, ierror) integer group, ierror MPI_Group_incl(group, n, ranks, newgroup, ierror) integer group, n, ranks(*), newgroup, ierror, MPI_Group_intersection(group1, group2, newgroup, ierror) integer group1, group2, newgroup, ierror MPI_Group_range_excl(group, n, ranges, newgroup, ierror) integer group, n, ranges(3, *), newgroup, ierror,, MPI_Group_range_incl(group, n, ranges, newgroup, ierror) integer group, n, ranges(3, *), newgroup, ierror,, MPI_Group_rank(group, rank, ierror) integer group, rank, ierror MPI_Group_size(group, size, ierror) integer group, size, ierror MPI_Group_translate_ranks(group1, n, ranks1, group2, ranks2, ierror) integer group1, n, ranks1(*), group2, ranks2(*), ierror MPI_Group_union(group1, group2, newgroup, ierror) integer group1, group2, newgroup, ierror MPI_Ibsend(buf, count, datatype, dest, tag, comm, request, ierror) <type> buf(*) integer count, datatype, dest, tag, comm, request, ierror MPI_Init(ierror) integer ierror MPI MPI_Initialized(flag, ierror) logical flag integer ierror MPI_Init MPI_Intercomm_create(local_comm, local_leader, peer_comm, remote_leader, tag, newintercomm, ierror) integer local_comm, local_leader, peer_comm, remote_leader, tag, newintercomm, ierror 228
246
MPI_Intercomm_merge(intercomm, high, intracomm, ierror)
integer intercomm, intracomm, ierror
logical high
MPI_Iprobe(source, tag, comm, flag, status, ierror)
integer source, tag, comm, status(*), ierror
logical flag
MPI_Irecv(buf, count, datatype, source, tag, comm, request, ierror)
<type> buf(*)
integer count, datatype, source, tag, comm, request, ierror
MPI_Irsend(buf, count, datatype, dest, tag, comm, request, ierror)
<type> buf(*)
integer count, datatype, dest, tag, comm, request, ierror
MPI_Isend(buf, count, datatype, dest, tag, comm, request, ierror)
<type> buf(*)
integer count, datatype, dest, tag, comm, request, ierror
MPI_Issend(buf, count, datatype, dest, tag, comm, request, ierror)
<type> buf(*)
integer count, datatype, dest, tag, comm, request, ierror
MPI_Keyval_create(copy_fn, delete_fn, keyval, extra_state, ierror)
external copy_fn, delete_fn
integer keyval, extra_state, ierror  (MPI-2: MPI_Comm_create_keyval)
MPI_Keyval_free(keyval, ierror)
integer keyval, ierror
MPI_Op_create(function, commute, op, ierror)
external function
logical commute
integer op, ierror
MPI_Op_free(op, ierror)
integer op, ierror
MPI_Pack(inbuf, incount, datatype, outbuf, outcount, position, comm, ierror)
<type> inbuf(*), outbuf(*)
integer incount, datatype, outcount, position, comm, ierror
MPI_Pack_size(incount, datatype, comm, size, ierror)
229
247
integer incount, datatype, comm, size, ierror
MPI_Pcontrol(level)
integer level
MPI_Probe(source, tag, comm, status, ierror)
integer source, tag, comm, status(*), ierror
MPI_Recv(buf, count, datatype, source, tag, comm, status, ierror)
<type> buf(*)
integer count, datatype, source, tag, comm, status(*), ierror
MPI_Recv_init(buf, count, datatype, source, tag, comm, request, ierror)
<type> buf(*)
integer count, datatype, source, tag, comm, request, ierror
MPI_Reduce(sendbuf, recvbuf, count, datatype, op, root, comm, ierror)
<type> sendbuf(*), recvbuf(*)
integer count, datatype, op, root, comm, ierror  (reduction to root)
MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts, datatype, op, comm, ierror)
<type> sendbuf(*), recvbuf(*)
integer recvcounts(*), datatype, op, comm, ierror
MPI_Request_free(request, ierror)
integer request, ierror
MPI_Rsend(buf, count, datatype, dest, tag, comm, ierror)
<type> buf(*)
integer count, datatype, dest, tag, comm, ierror
MPI_Rsend_init(buf, count, datatype, dest, tag, comm, request, ierror)
<type> buf(*)
integer count, datatype, dest, tag, comm, request, ierror
MPI_Scan(sendbuf, recvbuf, count, datatype, op, comm, ierror)
<type> sendbuf(*), recvbuf(*)
integer count, datatype, op, comm, ierror
MPI_Scatter(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, ierror)
<type> sendbuf(*), recvbuf(*)
integer sendcount, sendtype, recvcount, recvtype, root, comm, ierror
MPI_Scatterv(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm,
230
248
ierror)
<type> sendbuf(*), recvbuf(*)
integer sendcounts(*), displs(*), sendtype, recvcount, recvtype, root, comm, ierror
MPI_Send(buf, count, datatype, dest, tag, comm, ierror)
<type> buf(*)
integer count, datatype, dest, tag, comm, ierror
MPI_Send_init(buf, count, datatype, dest, tag, comm, request, ierror)
<type> buf(*)
integer count, datatype, dest, tag, comm, request, ierror
MPI_Sendrecv(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, status, ierror)
<type> sendbuf(*), recvbuf(*)
integer sendcount, sendtype, dest, sendtag, recvcount, recvtype, source, recvtag, comm, status(*), ierror
MPI_Sendrecv_replace(buf, count, datatype, dest, sendtag, source, recvtag, comm, status, ierror)
<type> buf(*)
integer count, datatype, dest, sendtag, source, recvtag, comm, status(*), ierror
MPI_Ssend(buf, count, datatype, dest, tag, comm, ierror)
<type> buf(*)
integer count, datatype, dest, tag, comm, ierror
MPI_Ssend_init(buf, count, datatype, dest, tag, comm, request, ierror)
<type> buf(*)
integer count, datatype, dest, tag, comm, request, ierror
MPI_Start(request, ierror)
integer request, ierror
MPI_Startall(count, array_of_requests, ierror)
integer count, array_of_requests(*), ierror
MPI_Test(request, flag, status, ierror)
integer request, status(*), ierror
logical flag
MPI_Testall(count, array_of_requests, flag, array_of_statuses, ierror)
integer count, array_of_requests(*),
231
249 array_of_statuses(mpi_status_size, *), ierror logical flag MPI_Testany(count, array_of_request, index, flag, status, ierror) integer count, array_of_requests(*), index, status(*), ierror logical flag MPI_Testsome(incount, array_of_requests, outcount, array_of_indices, array_of_statuses, ierror) integer incount, array_of_requests(*), outcount, array_of_indices(*), array_of_statuses(mpi_status_size, *), ierror MPI_Test_cancelled(status, flag, ierror) integer status(*), ierror MPI_Topo_test(comm, top_type, ierror) integer comm, top_type, ierror MPI_Type_commit(datatype, ierror) integer datatype, ierror MPI_Type_contiguous(count, oldtype, newtype, ierror) integer count, oldtype, newtype, ierror MPI_Type_extent(datatype, extent, ierror) integer datatype, extent, ierror, MPI_type_get_extent MPI_Type_free(datatype, ierror) integer datatype, ierror MPI_Type_hindexed(count, array_of_blocklenghths, array_of_displacements, oldtype, newtype, ierror) integer count, array_of_blocklengths(*), array_of_displacements(*), oldtype, newtype, ierror,, MPI_type_create_hindexed MPI_Type_hvector(count, blocklength, stride, oldtype, newtype, ierror) integer count, blocklength, stride, oldtype, newtype, ierror,, MPI_type_create_hvector MPI_Type_indexed(count, array_of_blocklengths, array_of_displacements, oldtype, newtype, ierror) integer count, array_of_blocklengths(*), array_of_displacements(*), oldtype, newtype, ierror 232
250 MPI_Type_lb(datatype, displacement, ierror) integer datatype, displacement, ierror, MPI_type_get_extent MPI_Type_size(datatype, size, ierror) integer datatype, size, ierror, MPI_Type_struct(count, array_of_blocklengths, array_of_displacements, array_of_types, newtype, ierror) integer count, array_of_blocklengths(*), array_of_displacements(*) array_of_type(*), newtype, ierror, MPI_type_create_struct int MPI_Type_ub(MPI_Datatype datatype, MPI_Aint * displacement), MPI_type_get_extent MPI_Type_ub(datatype, displacement, ierror) integer datatype, displacement, ierror MPI_Type_vector(count, blocklength, stride, oldtype, newtype, ierror) integer count, blocklength, stride, oldtype, newtype, ierror MPI_Unpack(inbuf, insize, position, outbuf, outcount, datatype, comm, ierror) <type> inbuf(*), outbuf(*) integer insize, position, outcount, datatype, comm, ierror MPI_Wait(request, status, ierror) integer request, status(*), ierror MPI MPI_Waitall(count, array_of_requests, array_of_statuses, ierror) integer count, array_of_requests(*), array_of_statuses(mpi_status_size, *), ierror MPI_Waitany(count, array_of_request, index, status, ierror) integer count, array_of_requests(*), index, status(*), ierror MPI_Waitsome(incount, array_of_requests, outcount, array_of_indices, array_of_statuses, ierror) integer incount, array_of_requests(*), outcount, array_of_indices(*), array_of_statuses(mpi_status_size, *), ierror MPI_Wtick() MPI_Wtime MPI_Wtime() 233
251 18.3 MPI-2 C int MPI_Accumulate(void * origin_addr, int origin_count, MPI_Datatype origin_datatype, int target_rank, MPI_Aint target_disp, int target_count, MPI_Datatype target_datatype, MPI_Op op, MPI_Win win) int MPI_Add_error_class(int * errorclass) int MPI_Add_error_code(int errorclass, int * error) int MPI_Add_error_string(int errorcode, char * string) int MPI_Alloc_mem(MPI_Aint size, MPI_Info info, void * baseptr) int MPI_Alltoallw(void * sendbuf, int sendcounts[], int sdispls[], MPI_Datatype sendtypes[], void * recvbuf, int recvcounts[], int rdispls[], MPI_Datatype recvtypes[], MPI_Comm comm) int MPI_Close_port(char * port_name) int MPI_Comm_accept(char * port_name, MPI_Info info, int root, MPI_Comm comm, MPI_Comm * newcomm) MPI_Fint MPI_Comm_c2f(MPI_Comm comm) C Fortran int MPI_Comm_call_errhandler(MPI_Comm comm, int error) int MPI_Comm_connect(char * portname, MPI_Info info, int root, MPI_Comm comm, MPI_Comm * newcomm) int MPI_Comm_create_errhandler(MPI_Comm_errhandler_fn *function, MPI_Errhandler *errhandler) int MPI_Comm_create_keyval(MPI_Comm_copy_attr_function *comm_copy_attr_fn, MPI_Comm_delete_attr_function *comm_delete_attr_fn, int *comm_keyval, void *extra_state) int MPI_Comm_delete_attr(MPI_Comm comm, int comm_keyval) int MPI_Comm_disconnect(MPI_Comm *comm) MPI_Comm MPI_Comm_f2c(MPI_Fint comm) Fortran C int MPI_Comm_free_keyval(int *comm_keyval) 234
252 MPI_Comm_create_keyval int MPI_Comm_get_attr(MPI_Comm comm, int comm_keyval, void attribute_val, int *flag) int MPI_Comm_get_errhandler(MPI_comm comm, MPI_Errhandler *errhandler) int MPI_Comm_get_name(MPI_Comm comm, char *comm_name, int *resultlen) int MPI_Comm_get_parent(MPI_Comm *parent) int MPI_Comm_join(int fd, MPI_Comm *intercom) MPI int MPI_Comm_set_attr(MPI_Comm comm, int comm_keyval, void *attribute_val) int MPI_Comm_set_errhandler(MPI_Comm comm, MPI_Errhandler errhandler) int MPI_Comm_set_name(MPI_Comm comm, char *comm_name) int MPI_Comm_spawn(char *command, char *argv[], int maxprocs, MPI_Info info, int root, MPI_Comm comm, MPI_Comm *intercom, int array_of_errcodes[]) MPI int MPI_Comm_spawn_multiple(int count, char *array_of_commands[], Char **array_of_argv[], int array_of_maxprocs[], MPI_Info array_of_info[], int root, MPI_Comm comm, MPI_Comm *intercom, int array_of_errcodes[]) MPI int MPI_Exscan(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm) MPI_Scan MPI_Fint MPI_File_c2f(MPI_File file) C Fortran int MPI_File_call_errhandler(MPI_File fh, int error) int MPI_File_close(MPI_File *fh) int MPI_File_create_errhanlder(MPI_File_errhandler_fn *function, MPI_Errhandler *errhandler) int MPI_File_delete(char *filename, MPI_Info info) MPI_File MPI_File_f2c(MPI_Fint file) Fortran C int MPI_File_get_amode(MPI_File fh, int *amode) int MPI_File_get_atomicity(MPI_File fh, int *flag) fh flag int MPI_File_get_byte_offset(MPI_File fh, MPI_Offset offset, MPI_Offset *disp) 235
253 int MPI_File_get_errhandler(MPI_File file, MPI_Errhandler *errhandler) int MPI_File_get_group(MPI_file fh, MPI_Group *group) int MPI_File_get_info(MPI_File fh, MPI_Info *info_used) INFO int MPI_File_get_position(MPI_File fh, MPI_Offset *offset) int MPI_File_get_position_shared(MPI_File fh, MPI_Offset *offset) int MPI_File_get_size(MPI_File fh, MPI_Offset *size) int MPI_File_get_type_extent(MPI_File fh, MPI_Datatype datatype, MPI_Aint *extent) int MPI_File_get_view(MPI_File fh, MPI_Offset *disp, MPI_Datatype *etype, MPI_Datatype *filetype, char *datarep) int MPI_File_iread(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Request *request) int MPI_File_iread_at(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype, MPI_Request *request) int MPI_File_iread_shared(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Request *request) int MPI_File_iwrite(MPI_File fh, void *buf, int count, MPI_Datatype, MPI_Request *request) int MPI_File_iwrite_at(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype, MPI_Request *request) int MPI_File_iwrite_shared(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Request *request) int MPI_File_open(MPI_Comm comm, char *filename, int amode, MPI_Info info, MPI_File *fh) int MPI_File_preallocate(MPI_File fh, MPI_Offset size) int MPI_File_read(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status) int MPI_File_read_all(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status) MPI_FILE_READ 236
254 int MPI_File_read_all_begin(MPI_File fh, void *buf, int count, MPI_Datatype datatype) int MPI_File_read_all_end(MPI_File fh, void *buf, MPI_Status *status) int MPI_File_read_at(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype, MPI_Status *status) int MPI_File_read_at_all(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype, MPI_Status *status) MPI_FILE_READ_AT int MPI_File_read_at_all_begin(MPI_File fh, MPI_Offset offset, Void *buf, int count, MPI_Datatype datatype) int MPI_file_read_at_all_end(MPI_File fh, void *buf, MPI_Status *status) int MPI_File_read_ordered(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status) MPI_FILE_READ_SHARED int MPI_File_read_ordered_begin(MPI_File fh, void *buf, int count, MPI_datatype datatype) int MPI_File_read_ordered_end(MPI_File fh, void *buf, MPI_status *status) int MPI_File_read_shared(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status) int MPI_file_seek(MPI_File fh, MPI_Offset offset, int whence) int MPI_file_seek_shared(MPI_File fh, MPI_Offset offset, int whence) int MPI_File_set_atomicity(MPI_File fh, int flag) int MPI_File_set_errhandler(MPI_File file, MPI_Errhandler errhandler) int MPI_File_set_info(MPI_File fh, MPI_Info info) INFO int MPI_File_set_size(MPI_File fh, MPI_Offset size) int MPI_File_set_view(MPI_File fh, MPI_Offset disp, MPOI_Datatype etype, MPI_Datatype filetype, char *datarep, MPI_info info) int MPI_File_sync(MPI_File fh) 237
int MPI_File_write(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
int MPI_File_write_all(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)  (collective version of MPI_File_write)
int MPI_File_write_all_begin(MPI_File fh, void *buf, int count, MPI_Datatype datatype)
int MPI_File_write_all_end(MPI_File fh, void *buf, MPI_Status *status)
int MPI_File_write_at(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
int MPI_File_write_at_all(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
int MPI_File_write_at_all_begin(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype)
int MPI_File_write_at_all_end(MPI_File fh, void *buf, MPI_Status *status)
int MPI_File_write_ordered(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)  (collective version of MPI_File_write_shared)
int MPI_File_write_ordered_begin(MPI_File fh, void *buf, int count, MPI_Datatype datatype)
int MPI_File_write_ordered_end(MPI_File fh, void *buf, MPI_Status *status)
int MPI_File_write_shared(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
int MPI_Finalized(int *flag)  (tests whether MPI_Finalize has completed)
int MPI_Free_mem(void *base)  (frees memory allocated with MPI_Alloc_mem)
int MPI_Get(void *origin_addr, int origin_count, MPI_Datatype origin_datatype, int target_rank, MPI_Aint target_disp, int target_count, MPI_Datatype target_datatype, MPI_Win win)
int MPI_Get_address(void *location, MPI_Aint *address)
int MPI_Grequest_complete(MPI_Request request)  (informs MPI that a generalized request is complete)
int MPI_Grequest_start(MPI_Grequest_query_function *query_fn, MPI_Grequest_free_function *free_fn, MPI_Grequest_cancel_function *cancel_fn, void *extra_state, MPI_Request *request)
MPI_Fint MPI_Group_c2f(MPI_Group group)  (converts a C group handle to a Fortran handle)
MPI_Group MPI_Group_f2c(MPI_Fint group)  (converts a Fortran group handle to a C handle)
MPI_Fint MPI_Info_c2f(MPI_Info info)  (converts a C INFO handle to a Fortran handle)
int MPI_Info_create(MPI_Info *info)  (creates a new INFO object)
int MPI_Info_delete(MPI_Info info, char *key)  (deletes a (key, value) pair from an INFO object)
int MPI_Info_dup(MPI_Info info, MPI_Info *newinfo)  (duplicates an INFO object)
MPI_Info MPI_Info_f2c(MPI_Fint info)  (converts a Fortran INFO handle to a C handle)
int MPI_Info_free(MPI_Info *info)  (frees an INFO object)
int MPI_Info_get(MPI_Info info, char *key, int valuelen, char *value, int *flag)
int MPI_Info_get_nkeys(MPI_Info info, int *nkeys)  (returns the number of keys in an INFO object)
int MPI_Info_get_nthkey(MPI_Info info, int n, char *key)  (returns the n-th key of an INFO object)
int MPI_Info_get_valuelen(MPI_Info info, char *key, int *valuelen, int *flag)
int MPI_Info_set(MPI_Info info, char *key, char *value)  (adds a (key, value) pair to an INFO object)
int MPI_Init_thread(int *argc, char ***argv, int required, int *provided)  (initializes MPI with the requested level of thread support)
int MPI_Is_thread_main(int *flag)
int MPI_Lookup_name(char *service_name, MPI_Info info, char *port_name)
MPI_Fint MPI_Op_c2f(MPI_Op op)  (converts a C operation handle to a Fortran handle)
MPI_Op MPI_Op_f2c(MPI_Fint op)  (converts a Fortran operation handle to a C handle)
int MPI_Open_port(MPI_Info info, char *port_name)
int MPI_Pack_external(char *datarep, void *inbuf, int incount, MPI_Datatype datatype, void *outbuf, MPI_Aint outsize, MPI_Aint *position)
int MPI_Pack_external_size(char *datarep, int incount, MPI_Datatype datatype, MPI_Aint *size)
int MPI_Publish_name(char *service_name, MPI_Info info, char *port_name)
int MPI_Put(void *origin_addr, int origin_count, MPI_Datatype origin_datatype, int target_rank, MPI_Aint target_disp, int target_count, MPI_Datatype target_datatype, MPI_Win win)
int MPI_Query_thread(int *provided)
int MPI_Register_datarep(char *datarep, MPI_Datarep_conversion_function *read_conversion_fn, MPI_Datarep_conversion_function *write_conversion_fn, MPI_Datarep_extent_function *dtype_file_extent_fn, void *extra_state)  (registers a user-defined data representation with MPI)
MPI_Fint MPI_Request_c2f(MPI_Request request)  (converts a C request handle to a Fortran handle)
MPI_Request MPI_Request_f2c(MPI_Fint request)  (converts a Fortran request handle to a C handle)
int MPI_Request_get_status(MPI_Request request, int *flag, MPI_Status *status)
int MPI_Status_c2f(MPI_Status *c_status, MPI_Fint *f_status)  (converts a C status object to Fortran)
int MPI_Status_f2c(MPI_Fint *f_status, MPI_Status *c_status)  (converts a Fortran status object to C)
int MPI_Status_set_cancelled(MPI_Status *status, int flag)  (sets the value later returned by MPI_Test_cancelled)
int MPI_Status_set_elements(MPI_Status *status, MPI_Datatype datatype, int count)  (sets the value later returned by MPI_Get_elements)
MPI_Fint MPI_Type_c2f(MPI_Datatype datatype)  (converts a C datatype handle to a Fortran handle)
int MPI_Type_create_darray(int size, int rank, int ndims, int array_of_gsizes[], int array_of_distribs[], int array_of_dargs[], int array_of_psizes[], int order, MPI_Datatype oldtype, MPI_Datatype *newtype)
int MPI_Type_create_f90_complex(int p, int r, MPI_Datatype *newtype)  (returns the MPI datatype matching a Fortran 90 complex kind)
int MPI_Type_create_f90_integer(int r, MPI_Datatype *newtype)  (returns the MPI datatype matching a Fortran 90 integer kind)
int MPI_Type_create_f90_real(int p, int r, MPI_Datatype *newtype)  (returns the MPI datatype matching a Fortran 90 real kind)
int MPI_Type_create_hindexed(int count, int array_of_blocklengths[], MPI_Aint array_of_displacements[], MPI_Datatype oldtype, MPI_Datatype *newtype)
int MPI_Type_create_hvector(int count, int blocklength, MPI_Aint stride, MPI_Datatype oldtype, MPI_Datatype *newtype)
int MPI_Type_create_indexed_block(int count, int blocklength, int array_of_displacements[], MPI_Datatype oldtype, MPI_Datatype *newtype)
int MPI_Type_create_keyval(MPI_Type_copy_attr_function *type_copy_attr_fn, MPI_Type_delete_attr_function *type_delete_attr_fn, int *type_keyval, void *extra_state)
int MPI_Type_create_resized(MPI_Datatype oldtype, MPI_Aint lb, MPI_Aint extent, MPI_Datatype *newtype)
int MPI_Type_create_struct(int count, int array_of_blocklengths[], MPI_Aint array_of_displacements[], MPI_Datatype array_of_types[], MPI_Datatype *newtype)
int MPI_Type_create_subarray(int ndims, int array_of_sizes[], int array_of_subsizes[], int array_of_starts[], int order, MPI_Datatype oldtype, MPI_Datatype *newtype)
int MPI_Type_delete_attr(MPI_Datatype type, int type_keyval)
int MPI_Type_dup(MPI_Datatype type, MPI_Datatype *newtype)
MPI_Datatype MPI_Type_f2c(MPI_Fint datatype)  (converts a Fortran datatype handle to a C handle)
int MPI_Type_free_keyval(int *type_keyval)  (frees a key value created with MPI_Type_create_keyval)
int MPI_Type_get_attr(MPI_Datatype type, int type_keyval, void *attribute_val, int *flag)
int MPI_Type_get_contents(MPI_Datatype datatype, int max_integers, int max_addresses, int max_datatypes, int array_of_integers[], MPI_Aint array_of_addresses[], MPI_Datatype array_of_datatypes[])
int MPI_Type_get_envelope(MPI_Datatype datatype, int *num_integers, int *num_addresses, int *num_datatypes, int *combiner)
int MPI_Type_get_extent(MPI_Datatype datatype, MPI_Aint *lb, MPI_Aint *extent)
int MPI_Type_get_name(MPI_Datatype type, char *type_name, int *resultlen)
int MPI_Type_get_true_extent(MPI_Datatype datatype, MPI_Aint *true_lb, MPI_Aint *true_extent)
int MPI_Type_match_size(int typeclass, int size, MPI_Datatype *type)  (returns an MPI datatype of the given type class and size)
int MPI_Type_set_attr(MPI_Datatype type, int type_keyval, void *attribute_val)
int MPI_Type_set_name(MPI_Datatype type, char *type_name)
int MPI_Unpack_external(char *datarep, void *inbuf, MPI_Aint insize, MPI_Aint *position, void *outbuf, int outcount, MPI_Datatype datatype)
int MPI_Unpublish_name(char *service_name, MPI_Info info, char *port_name)
MPI_Fint MPI_Win_c2f(MPI_Win win)  (converts a C window handle to a Fortran handle)
int MPI_Win_call_errhandler(MPI_Win win, int error)
int MPI_Win_complete(MPI_Win win)  (completes an RMA access epoch begun with MPI_Win_start)
int MPI_Win_create(void *base, MPI_Aint size, int disp_unit, MPI_Info info, MPI_Comm comm, MPI_Win *win)
int MPI_Win_create_errhandler(MPI_Win_errhandler_fn *function, MPI_Errhandler *errhandler)
int MPI_Win_create_keyval(MPI_Win_copy_attr_function *win_copy_attr_fn, MPI_Win_delete_attr_function *win_delete_attr_fn, int *win_keyval, void *extra_state)
int MPI_Win_delete_attr(MPI_Win win, int win_keyval)
MPI_Win MPI_Win_f2c(MPI_Fint win)  (converts a Fortran window handle to a C handle)
int MPI_Win_fence(int assert, MPI_Win win)  (synchronizes RMA operations on a window)
int MPI_Win_free(MPI_Win *win)
int MPI_Win_free_keyval(int *win_keyval)  (frees a key value created with MPI_Win_create_keyval)
int MPI_Win_get_attr(MPI_Win win, int win_keyval, void *attribute_val, int *flag)
int MPI_Win_get_errhandler(MPI_Win win, MPI_Errhandler *errhandler)
int MPI_Win_get_group(MPI_Win win, MPI_Group *group)
int MPI_Win_get_name(MPI_Win win, char *win_name, int *resultlen)
int MPI_Win_lock(int lock_type, int rank, int assert, MPI_Win win)
int MPI_Win_post(MPI_Group group, int assert, MPI_Win win)
int MPI_Win_set_attr(MPI_Win win, int win_keyval, void *attribute_val)
int MPI_Win_set_errhandler(MPI_Win win, MPI_Errhandler errhandler)
int MPI_Win_set_name(MPI_Win win, char *win_name)
int MPI_Win_start(MPI_Group group, int assert, MPI_Win win)  (begins an RMA access epoch matching MPI_Win_post at the targets)
int MPI_Win_test(MPI_Win win, int *flag)  (nonblocking test for the end of an RMA exposure epoch)
int MPI_Win_unlock(int rank, MPI_Win win)
int MPI_Win_wait(MPI_Win win)  (completes an RMA exposure epoch begun with MPI_Win_post)

18.4 MPI-2 Fortran bindings

MPI_Accumulate(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win, ierror)  <type> origin_addr(*)  integer(kind=mpi_address_kind) target_disp  integer origin_count, origin_datatype, target_rank, target_count, target_datatype, op, win, ierror
MPI_Add_error_class(errorclass, ierror)  integer errorclass, ierror
MPI_Add_error_code(errorclass, errorcode, ierror)  integer errorclass, errorcode, ierror
MPI_Add_error_string(errorcode, string, ierror)  integer errorcode, ierror  character*(*) string
MPI_Alloc_mem(size, info, baseptr, ierror)  integer info, ierror  integer(kind=mpi_address_kind) size, baseptr
MPI_Alltoallw(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, ierror)  <type> sendbuf(*), recvbuf(*)  integer sendcounts(*), sdispls(*), sendtypes(*), recvcounts(*), rdispls(*), recvtypes(*), comm, ierror
MPI_Close_port(port_name, ierror)  character*(*) port_name  integer ierror
MPI_Comm_accept(port_name, info, root, comm, newcomm, ierror)  character*(*) port_name  integer info, root, comm, newcomm, ierror
MPI_Comm_call_errhandler(comm, errorcode, ierror)  integer comm, errorcode, ierror
MPI_Comm_connect(port_name, info, root, comm, newcomm, ierror)  character*(*) port_name  integer info, root, comm, newcomm, ierror
MPI_Comm_create_errhandler(function, errhandler, ierror)  external function  integer errhandler, ierror
MPI_Comm_create_keyval(comm_copy_attr_fn, comm_delete_attr_fn, comm_keyval, extra_state, ierror)  external comm_copy_attr_fn, comm_delete_attr_fn  integer comm_keyval, ierror  integer(kind=mpi_address_kind) extra_state
MPI_Comm_delete_attr(comm, comm_keyval, ierror)  integer comm, comm_keyval, ierror
MPI_Comm_disconnect(comm, ierror)  integer comm, ierror
MPI_Comm_free_keyval(comm_keyval, ierror)  integer comm_keyval, ierror  (frees a key value created with MPI_Comm_create_keyval)
MPI_Comm_get_attr(comm, comm_keyval, attribute_val, flag, ierror)  integer comm, comm_keyval, ierror  integer(kind=mpi_address_kind) attribute_val  logical flag
MPI_Comm_get_errhandler(comm, errhandler, ierror)  integer comm, errhandler, ierror
MPI_Comm_get_name(comm, comm_name, resultlen, ierror)  integer comm, resultlen, ierror  character*(*) comm_name
MPI_Comm_get_parent(parent, ierror)  integer parent, ierror
MPI_Comm_join(fd, intercomm, ierror)  integer fd, intercomm, ierror  (creates an MPI intercommunicator from a socket connection)
MPI_Comm_set_attr(comm, comm_keyval, attribute_val, ierror)  integer comm, comm_keyval, ierror  integer(kind=mpi_address_kind) attribute_val
MPI_Comm_set_errhandler(comm, errhandler, ierror)  integer comm, errhandler, ierror
MPI_Comm_set_name(comm, comm_name, ierror)  integer comm, ierror  character*(*) comm_name
MPI_Comm_spawn(command, argv, maxprocs, info, root, comm, intercomm, array_of_errcodes, ierror)  character*(*) command, argv(*)  integer info, maxprocs, root, comm, intercomm, array_of_errcodes(*), ierror  (starts new MPI processes)
MPI_Comm_spawn_multiple(count, array_of_commands, array_of_argv, array_of_maxprocs, array_of_info, root, comm, intercomm, array_of_errcodes, ierror)  integer count, array_of_info(*), array_of_maxprocs(*), root, comm, intercomm, array_of_errcodes(*), ierror  character*(*) array_of_commands(*), array_of_argv(count, *)  (starts new MPI processes from several executables)
MPI_Exscan(sendbuf, recvbuf, count, datatype, op, comm, ierror)  <type> sendbuf(*), recvbuf(*)  integer count, datatype, op, comm, ierror  (exclusive version of MPI_Scan)
MPI_File_call_errhandler(fh, errorcode, ierror)  integer fh, errorcode, ierror
MPI_File_close(fh, ierror)  integer fh, ierror
MPI_File_create_errhandler(function, errhandler, ierror)  external function  integer errhandler, ierror
MPI_File_delete(filename, info, ierror)  character*(*) filename  integer info, ierror
MPI_File_get_amode(fh, amode, ierror)  integer fh, amode, ierror
MPI_File_get_atomicity(fh, flag, ierror)  integer fh, ierror  logical flag  (returns in flag the atomicity mode of fh)
MPI_File_get_byte_offset(fh, offset, disp, ierror)  integer fh, ierror  integer(kind=mpi_offset_kind) offset, disp
MPI_File_get_errhandler(file, errhandler, ierror)  integer file, errhandler, ierror
MPI_File_get_group(fh, group, ierror)  integer fh, group, ierror
MPI_File_get_info(fh, info_used, ierror)  integer fh, info_used, ierror  (returns a new INFO object with the hints in effect for fh)
MPI_File_get_position(fh, offset, ierror)  integer fh, ierror  integer(kind=mpi_offset_kind) offset
MPI_File_get_position_shared(fh, offset, ierror)  integer fh, ierror  integer(kind=mpi_offset_kind) offset
MPI_File_get_size(fh, size, ierror)  integer fh, ierror  integer(kind=mpi_offset_kind) size
MPI_File_get_type_extent(fh, datatype, extent, ierror)  integer fh, datatype, ierror  integer(kind=mpi_address_kind) extent
MPI_File_get_view(fh, disp, etype, filetype, datarep, ierror)  integer fh, etype, filetype, ierror  character*(*) datarep  integer(kind=mpi_offset_kind) disp
MPI_File_iread(fh, buf, count, datatype, request, ierror)  <type> buf(*)  integer fh, count, datatype, request, ierror
MPI_File_iread_at(fh, offset, buf, count, datatype, request, ierror)  <type> buf(*)  integer fh, count, datatype, request, ierror  integer(kind=mpi_offset_kind) offset
MPI_File_iread_shared(fh, buf, count, datatype, request, ierror)  <type> buf(*)  integer fh, count, datatype, request, ierror
MPI_File_iwrite(fh, buf, count, datatype, request, ierror)  <type> buf(*)  integer fh, count, datatype, request, ierror
MPI_File_iwrite_at(fh, offset, buf, count, datatype, request, ierror)  <type> buf(*)  integer fh, count, datatype, request, ierror  integer(kind=mpi_offset_kind) offset
MPI_File_iwrite_shared(fh, buf, count, datatype, request, ierror)  <type> buf(*)  integer fh, count, datatype, request, ierror
MPI_File_open(comm, filename, amode, info, fh, ierror)  character*(*) filename  integer comm, amode, info, fh, ierror
MPI_File_preallocate(fh, size, ierror)  integer fh, ierror  integer(kind=mpi_offset_kind) size
MPI_File_read(fh, buf, count, datatype, status, ierror)  <type> buf(*)  integer fh, count, datatype, status(mpi_status_size), ierror
MPI_File_read_all(fh, buf, count, datatype, status, ierror)  <type> buf(*)  integer fh, count, datatype, status(mpi_status_size), ierror  (collective version of MPI_FILE_READ)
MPI_File_read_all_begin(fh, buf, count, datatype, ierror)  <type> buf(*)  integer fh, count, datatype, ierror
MPI_File_read_all_end(fh, buf, status, ierror)  <type> buf(*)  integer fh, status(mpi_status_size), ierror
MPI_File_read_at(fh, offset, buf, count, datatype, status, ierror)  <type> buf(*)  integer fh, count, datatype, status(mpi_status_size), ierror  integer(kind=mpi_offset_kind) offset
MPI_File_read_at_all(fh, offset, buf, count, datatype, status, ierror)  <type> buf(*)  integer fh, count, datatype, status(mpi_status_size), ierror  integer(kind=mpi_offset_kind) offset  (collective version of MPI_FILE_READ_AT)
MPI_File_read_at_all_begin(fh, offset, buf, count, datatype, ierror)  <type> buf(*)  integer fh, count, datatype, ierror  integer(kind=mpi_offset_kind) offset
MPI_File_read_at_all_end(fh, buf, status, ierror)  <type> buf(*)  integer fh, status(mpi_status_size), ierror
MPI_File_read_ordered(fh, buf, count, datatype, status, ierror)  <type> buf(*)  integer fh, count, datatype, status(mpi_status_size), ierror  (collective version of MPI_FILE_READ_SHARED)
MPI_File_read_ordered_begin(fh, buf, count, datatype, ierror)  <type> buf(*)  integer fh, count, datatype, ierror
MPI_File_read_ordered_end(fh, buf, status, ierror)  <type> buf(*)  integer fh, status(mpi_status_size), ierror
MPI_File_read_shared(fh, buf, count, datatype, status, ierror)  <type> buf(*)  integer fh, count, datatype, status(mpi_status_size), ierror
MPI_File_seek(fh, offset, whence, ierror)  integer fh, whence, ierror  integer(kind=mpi_offset_kind) offset
MPI_File_seek_shared(fh, offset, whence, ierror)  integer fh, whence, ierror  integer(kind=mpi_offset_kind) offset
MPI_File_set_atomicity(fh, flag, ierror)  integer fh, ierror  logical flag
MPI_File_set_errhandler(file, errhandler, ierror)  integer file, errhandler, ierror
MPI_File_set_info(fh, info, ierror)  integer fh, info, ierror  (sets INFO hints for fh)
MPI_File_set_size(fh, size, ierror)  integer fh, ierror  integer(kind=mpi_offset_kind) size
MPI_File_set_view(fh, disp, etype, filetype, datarep, info, ierror)  integer fh, etype, filetype, info, ierror  character*(*) datarep  integer(kind=mpi_offset_kind) disp
MPI_File_sync(fh, ierror)  integer fh, ierror
MPI_File_write(fh, buf, count, datatype, status, ierror)  <type> buf(*)  integer fh, count, datatype, status(mpi_status_size), ierror
MPI_File_write_all(fh, buf, count, datatype, status, ierror)  <type> buf(*)  integer fh, count, datatype, status(mpi_status_size), ierror  (collective version of MPI_File_write)
MPI_File_write_all_begin(fh, buf, count, datatype, ierror)  <type> buf(*)  integer fh, count, datatype, ierror
MPI_File_write_all_end(fh, buf, status, ierror)  <type> buf(*)  integer fh, status(mpi_status_size), ierror
MPI_File_write_at(fh, offset, buf, count, datatype, status, ierror)  <type> buf(*)  integer fh, count, datatype, status(mpi_status_size), ierror  integer(kind=mpi_offset_kind) offset
MPI_File_write_at_all(fh, offset, buf, count, datatype, status, ierror)  <type> buf(*)  integer fh, count, datatype, status(mpi_status_size), ierror  integer(kind=mpi_offset_kind) offset
MPI_File_write_at_all_begin(fh, offset, buf, count, datatype, ierror)  <type> buf(*)  integer fh, count, datatype, ierror  integer(kind=mpi_offset_kind) offset
MPI_File_write_at_all_end(fh, buf, status, ierror)  <type> buf(*)  integer fh, status(mpi_status_size), ierror
MPI_File_write_ordered(fh, buf, count, datatype, status, ierror)  <type> buf(*)  integer fh, count, datatype, status(mpi_status_size), ierror  (collective version of MPI_File_write_shared)
MPI_File_write_ordered_begin(fh, buf, count, datatype, ierror)  <type> buf(*)  integer fh, count, datatype, ierror
MPI_File_write_ordered_end(fh, buf, status, ierror)  <type> buf(*)  integer fh, status(mpi_status_size), ierror
MPI_File_write_shared(fh, buf, count, datatype, status, ierror)  <type> buf(*)  integer fh, count, datatype, status(mpi_status_size), ierror
MPI_Finalized(flag, ierror)  logical flag  integer ierror  (tests whether MPI_Finalize has completed)
MPI_Free_mem(base, ierror)  <type> base(*)  integer ierror  (frees memory allocated with MPI_Alloc_mem)
MPI_Get(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, ierror)  <type> origin_addr(*)  integer(kind=mpi_address_kind) target_disp  integer origin_count, origin_datatype, target_rank, target_count, target_datatype, win, ierror
MPI_Get_address(location, address, ierror)  <type> location(*)  integer ierror  integer(kind=mpi_address_kind) address
MPI_Grequest_complete(request, ierror)  integer request, ierror  (informs MPI that a generalized request is complete)
MPI_Grequest_start(query_fn, free_fn, cancel_fn, extra_state, request, ierror)  integer request, ierror  external query_fn, free_fn, cancel_fn  integer(kind=mpi_address_kind) extra_state
MPI_Info_create(info, ierror)  integer info, ierror  (creates a new INFO object)
MPI_Info_delete(info, key, ierror)  integer info, ierror  character*(*) key  (deletes a (key, value) pair from an INFO object)
MPI_Info_dup(info, newinfo, ierror)  integer info, newinfo, ierror  (duplicates an INFO object)
MPI_Info_free(info, ierror)  integer info, ierror  (frees an INFO object)
MPI_Info_get(info, key, valuelen, value, flag, ierror)  integer info, valuelen, ierror  character*(*) key, value  logical flag
MPI_Info_get_nkeys(info, nkeys, ierror)  integer info, nkeys, ierror  (returns the number of keys in an INFO object)
MPI_Info_get_nthkey(info, n, key, ierror)  integer info, n, ierror  character*(*) key  (returns the n-th key of an INFO object)
MPI_Info_get_valuelen(info, key, valuelen, flag, ierror)  integer info, valuelen, ierror  logical flag  character*(*) key
MPI_Info_set(info, key, value, ierror)  integer info, ierror  character*(*) key, value  (adds a (key, value) pair to an INFO object)
MPI_Init_thread(required, provided, ierror)  integer required, provided, ierror  (initializes MPI with the requested level of thread support)
MPI_Is_thread_main(flag, ierror)  logical flag  integer ierror
MPI_Lookup_name(service_name, info, port_name, ierror)  character*(*) service_name, port_name  integer info, ierror
MPI_Open_port(info, port_name, ierror)  character*(*) port_name  integer info, ierror
MPI_Pack_external(datarep, inbuf, incount, datatype, outbuf, outsize, position, ierror)  integer incount, datatype, ierror  integer(kind=mpi_address_kind) outsize, position  character*(*) datarep  <type> inbuf(*), outbuf(*)
MPI_Pack_external_size(datarep, incount, datatype, size, ierror)  integer incount, datatype, ierror  integer(kind=mpi_address_kind) size  character*(*) datarep
MPI_Publish_name(service_name, info, port_name, ierror)  integer info, ierror  character*(*) service_name, port_name
MPI_Put(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, ierror)  <type> origin_addr(*)  integer(kind=mpi_address_kind) target_disp  integer origin_count, origin_datatype, target_rank, target_count, target_datatype, win, ierror
MPI_Query_thread(provided, ierror)  integer provided, ierror
MPI_Register_datarep(datarep, read_conversion_fn, write_conversion_fn, dtype_file_extent_fn, extra_state, ierror)  character*(*) datarep  external read_conversion_fn, write_conversion_fn, dtype_file_extent_fn  integer(kind=mpi_address_kind) extra_state  integer ierror  (registers a user-defined data representation with MPI)
MPI_Request_get_status(request, flag, status, ierror)  integer request, status(mpi_status_size), ierror  logical flag
MPI_Sizeof(x, size, ierror)  <type> x  integer size, ierror
MPI_Status_set_cancelled(status, flag, ierror)  integer status(mpi_status_size), ierror  logical flag  (sets the value later returned by MPI_Test_cancelled)
MPI_Status_set_elements(status, datatype, count, ierror)  integer status(mpi_status_size), datatype, count, ierror  (sets the value later returned by MPI_Get_elements)
MPI_Type_create_darray(size, rank, ndims, array_of_gsizes, array_of_distribs, array_of_dargs, array_of_psizes, order, oldtype, newtype, ierror)  integer size, rank, ndims, array_of_gsizes(*), array_of_distribs(*), array_of_dargs(*), array_of_psizes(*), order, oldtype, newtype, ierror
MPI_Type_create_f90_complex(p, r, newtype, ierror)  integer p, r, newtype, ierror  (returns the MPI datatype matching a Fortran 90 complex kind)
MPI_Type_create_f90_integer(r, newtype, ierror)  integer r, newtype, ierror  (returns the MPI datatype matching a Fortran 90 integer kind)
MPI_Type_create_f90_real(p, r, newtype, ierror)  integer p, r, newtype, ierror  (returns the MPI datatype matching a Fortran 90 real kind)
MPI_Type_create_hindexed(count, array_of_blocklengths, array_of_displacements, oldtype, newtype, ierror)  integer count, array_of_blocklengths(*), oldtype, newtype, ierror  integer(kind=mpi_address_kind) array_of_displacements(*)
MPI_Type_create_hvector(count, blocklength, stride, oldtype, newtype, ierror)  integer count, blocklength, oldtype, newtype, ierror  integer(kind=mpi_address_kind) stride
MPI_Type_create_indexed_block(count, blocklength, array_of_displacements, oldtype, newtype, ierror)  integer count, blocklength, array_of_displacements(*), oldtype, newtype, ierror
MPI_Type_create_keyval(type_copy_attr_fn, type_delete_attr_fn, type_keyval, extra_state, ierror)  external type_copy_attr_fn, type_delete_attr_fn  integer type_keyval, ierror  integer(kind=mpi_address_kind) extra_state
MPI_Type_create_resized(oldtype, lb, extent, newtype, ierror)  integer oldtype, newtype, ierror  integer(kind=mpi_address_kind) lb, extent
MPI_Type_create_struct(count, array_of_blocklengths, array_of_displacements, array_of_types, newtype, ierror)  integer count, array_of_blocklengths(*), array_of_types(*), newtype, ierror  integer(kind=mpi_address_kind) array_of_displacements(*)
MPI_Type_create_subarray(ndims, array_of_sizes, array_of_subsizes, array_of_starts, order, oldtype, newtype, ierror)  integer ndims, array_of_sizes(*), array_of_subsizes(*), array_of_starts(*), order, oldtype, newtype, ierror
MPI_Type_delete_attr(type, type_keyval, ierror)  integer type, type_keyval, ierror
MPI_Type_dup(type, newtype, ierror)  integer type, newtype, ierror
MPI_Type_free_keyval(type_keyval, ierror)  integer type_keyval, ierror  (frees a key value created with MPI_Type_create_keyval)
MPI_Type_get_attr(type, type_keyval, attribute_val, flag, ierror)  integer type, type_keyval, ierror  integer(kind=mpi_address_kind) attribute_val  logical flag
MPI_Type_get_contents(datatype, max_integers, max_addresses, max_datatypes, array_of_integers, array_of_addresses, array_of_datatypes, ierror)  integer datatype, max_integers, max_addresses, max_datatypes, array_of_integers(*), array_of_datatypes(*), ierror  integer(kind=mpi_address_kind) array_of_addresses(*)
MPI_Type_get_envelope(datatype, num_integers, num_addresses, num_datatypes, combiner, ierror)  integer datatype, num_integers, num_addresses, num_datatypes, combiner, ierror
MPI_Type_get_extent(datatype, lb, extent, ierror)  integer datatype, ierror  integer(kind=mpi_address_kind) lb, extent
MPI_Type_get_name(type, type_name, resultlen, ierror)  integer type, resultlen, ierror  character*(*) type_name
MPI_Type_get_true_extent(datatype, true_lb, true_extent, ierror)  integer datatype, ierror  integer(kind=mpi_address_kind) true_lb, true_extent
MPI_Type_match_size(typeclass, size, type, ierror)  integer typeclass, size, type, ierror  (returns an MPI datatype of the given type class and size)
MPI_Type_set_attr(type, type_keyval, attribute_val, ierror)  integer type, type_keyval, ierror  integer(kind=mpi_address_kind) attribute_val
MPI_Type_set_name(type, type_name, ierror)  integer type, ierror  character*(*) type_name
MPI_Unpack_external(datarep, inbuf, insize, position, outbuf, outcount, datatype, ierror)  integer outcount, datatype, ierror  integer(kind=mpi_address_kind) insize, position  character*(*) datarep  <type> inbuf(*), outbuf(*)
MPI_Unpublish_name(service_name, info, port_name, ierror)  integer info, ierror  character*(*) service_name, port_name
MPI_Win_call_errhandler(win, errorcode, ierror)  integer win, errorcode, ierror
MPI_Win_complete(win, ierror)  integer win, ierror  (completes an RMA access epoch begun with MPI_Win_start)
MPI_Win_create(base, size, disp_unit, info, comm, win, ierror)  <type> base(*)  integer(kind=mpi_address_kind) size  integer disp_unit, info, comm, win, ierror
MPI_Win_create_errhandler(function, errhandler, ierror)  external function  integer errhandler, ierror
MPI_Win_create_keyval(win_copy_attr_fn, win_delete_attr_fn, win_keyval, extra_state, ierror)  external win_copy_attr_fn, win_delete_attr_fn  integer win_keyval, ierror  integer(kind=mpi_address_kind) extra_state
MPI_Win_delete_attr(win, win_keyval, ierror)  integer win, win_keyval, ierror
MPI_Win_fence(assert, win, ierror)  integer assert, win, ierror  (synchronizes RMA operations on a window)
MPI_Win_free(win, ierror)  integer win, ierror
MPI_Win_free_keyval(win_keyval, ierror)  integer win_keyval, ierror  (frees a key value created with MPI_Win_create_keyval)
MPI_Win_get_attr(win, win_keyval, attribute_val, flag, ierror)  integer win, win_keyval, ierror  integer(kind=mpi_address_kind) attribute_val  logical flag
MPI_Win_get_errhandler(win, errhandler, ierror)  integer win, errhandler, ierror
MPI_Win_get_group(win, group, ierror)  integer win, group, ierror
MPI_Win_get_name(win, win_name, resultlen, ierror)  integer win, resultlen, ierror  character*(*) win_name
MPI_Win_lock(lock_type, rank, assert, win, ierror)  integer lock_type, rank, assert, win, ierror
MPI_Win_post(group, assert, win, ierror)  integer group, assert, win, ierror
MPI_Win_set_attr(win, win_keyval, attribute_val, ierror)  integer win, win_keyval, ierror  integer(kind=mpi_address_kind) attribute_val
MPI_Win_set_errhandler(win, errhandler, ierror)  integer win, errhandler, ierror
MPI_Win_set_name(win, win_name, ierror)  integer win, ierror  character*(*) win_name
MPI_Win_start(group, assert, win, ierror)  integer group, assert, win, ierror  (begins an RMA access epoch matching MPI_Win_post at the targets)
MPI_Win_test(win, flag, ierror)  integer win, ierror  logical flag  (nonblocking test for the end of an RMA exposure epoch)
MPI_Win_unlock(rank, win, ierror)  integer rank, win, ierror
MPI_Win_wait(win, ierror)  integer win, ierror  (completes an RMA exposure epoch begun with MPI_Win_post)
276 MPI MPI-2 MPI MPIF 1994 MPI MPI MPI MPI-2 MPI-2 MPI-1 I/O MPI 259
277 19 MPI-1 MPI MPI MPI MPI-1 MPI MPI_Init PVM / MPI MPI-1 2 MPI
278 ROOT ROOT ROOT ROOT ROOT ROOT ROOT ROOT 261
In a collective call over such an intercommunicator, the process acting as the root passes MPI_ROOT, the remaining processes of the root's group pass MPI_PROC_NULL, and every process of the remote group passes the rank of the root. Besides spawning, separately started jobs can also be connected at run time through ports, a socket-like mechanism described in 19.3.

19.2 Creating new MPI processes

MPI_COMM_SPAWN(command, argv, maxprocs, info, root, comm, intercomm, array_of_errcodes)
IN command (name of the program to start, significant only at root)
IN argv (arguments to pass to command)
IN maxprocs (maximum number of MPI processes to start)
IN info (key-value hints telling the runtime where and how to start the processes)
IN root (rank of the process at which the preceding arguments are examined)
IN comm (intracommunicator of the spawning processes)
OUT intercomm (intercommunicator between the original group and the new group)
OUT array_of_errcodes (one start-up code per requested process)
int MPI_Comm_spawn(char * command, char ** argv, int maxprocs, MPI_Info info, int root, MPI_Comm comm, MPI_Comm * intercomm, int * array_of_errcodes)
MPI_COMM_SPAWN(COMMAND, ARGV, MAXPROCS, INFO, ROOT, COMM, INTERCOMM, ARRAY_OF_ERRCODES, IERROR)
CHARACTER*(*) COMMAND, ARGV(*)
INTEGER INFO, MAXPROCS, ROOT, COMM, INTERCOMM, ARRAY_OF_ERRCODES(*), IERROR
MPI 129 MPI_COMM_SPAWN

MPI_COMM_SPAWN tries to start maxprocs identical copies of the MPI program command with arguments argv and the hints in info, establishing communication with them and returning in intercomm an intercommunicator to the new group. The call is collective over comm; command, argv, maxprocs and info are examined only at the root. The spawned processes form their own MPI_COMM_WORLD, and array_of_errcodes reports whether each process could be started.
280 MPI_INIT MPI_COMM_SPAWN MPI_INIT MPI_COMM_GET_PARENT MPI_COMM_GET_PARENT(parent) OUT parent int MPI_Comm_get_parent(MPI_Comm * parent) MPI_COMM_GET_PARENT(PARENT, IERROR) INTEGER PARENT, IERROR MPI 130 MPI_COMM_GET_PARENT MPI_COMM_GET_PARENT MPI_COMM_SPAWN MPI_COMM_SPAWN_MULTIPLE MPI_COMM_SPAWN MPI_COMM_SPAWN_MULTIPLE MPI_COMM_SPAWN_MULTIPLE 263
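Before turning to MPI_COMM_SPAWN_MULTIPLE, a minimal sketch may help to show how MPI_COMM_SPAWN and MPI_COMM_GET_PARENT fit together. The program spawns two extra copies of itself and broadcasts one integer to them over the resulting intercommunicator; the process count, the value 100 and the use of MPI_Bcast are assumptions made only for this illustration.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    MPI_Comm parent, inter;
    int rank, n = 0, errcodes[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_get_parent(&parent);
    if (parent == MPI_COMM_NULL) {
        /* original job: start two more copies of this very program */
        MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &inter, errcodes);
        n = 100;
        /* intercommunicator broadcast: the root passes MPI_ROOT,
           the other spawning processes pass MPI_PROC_NULL */
        MPI_Bcast(&n, 1, MPI_INT, (rank == 0) ? MPI_ROOT : MPI_PROC_NULL, inter);
    } else {
        /* spawned child: parent is the intercommunicator to the spawning
           job; receive the value broadcast by its root (remote rank 0) */
        MPI_Bcast(&n, 1, MPI_INT, 0, parent);
        printf("spawned process %d received n=%d\n", rank, n);
    }
    MPI_Finalize();
    return 0;
}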
MPI_COMM_SPAWN_MULTIPLE(count, array_of_commands, array_of_argv, array_of_maxprocs, array_of_info, root, comm, intercomm, array_of_errcodes)
IN count (number of commands)
IN array_of_commands (programs to start)
IN array_of_argv (arguments for each command)
IN array_of_maxprocs (number of processes to start for each command)
IN array_of_info (start-up hints for each command)
IN root
IN comm
OUT intercomm
OUT array_of_errcodes
int MPI_Comm_spawn_multiple(int count, char ** array_of_commands, char *** array_of_argv, int * array_of_maxprocs, MPI_Info * array_of_info, int root, MPI_Comm comm, MPI_Comm * intercomm, int * array_of_errcodes)
MPI_COMM_SPAWN_MULTIPLE(COUNT, ARRAY_OF_COMMANDS, ARRAY_OF_ARGV, ARRAY_OF_MAXPROCS, ARRAY_OF_INFO, ROOT, COMM, INTERCOMM, ARRAY_OF_ERRCODES, IERROR)
INTEGER COUNT, ARRAY_OF_MAXPROCS(*), ARRAY_OF_INFO(*), ROOT, COMM, INTERCOMM, ARRAY_OF_ERRCODES(*), IERROR
CHARACTER *(*) ARRAY_OF_COMMANDS(*), ARRAY_OF_ARGV(COUNT, *)
MPI 131 MPI_COMM_SPAWN_MULTIPLE

MPI_Comm_spawn_multiple behaves like MPI_Comm_spawn, except that it can start several different executables, or the same executable with different arguments, in a single call; all started processes share one new MPI_COMM_WORLD.

19.3 Establishing communication between separately started jobs (client/server)
MPI_OPEN_PORT(info, port_name)
IN info
OUT port_name (system-supplied port name)
int MPI_Open_port(MPI_Info info, char * port_name)
MPI_OPEN_PORT(INFO, PORT_NAME, IERROR)
CHARACTER *(*) PORT_NAME
INTEGER INFO, IERROR
MPI 132 MPI_OPEN_PORT

MPI_COMM_ACCEPT(port_name, info, root, comm, newcomm)
IN port_name
IN info
IN root
IN comm
OUT newcomm (intercommunicator with the connected client)
int MPI_Comm_accept(char * port_name, MPI_Info info, int root, MPI_Comm comm, MPI_Comm * newcomm)
MPI_COMM_ACCEPT(PORT_NAME, INFO, ROOT, COMM, NEWCOMM, IERROR)
CHARACTER *(*) PORT_NAME
INTEGER INFO, ROOT, COMM, NEWCOMM, IERROR
MPI 133 MPI_COMM_ACCEPT

MPI_COMM_ACCEPT waits at port_name, with the hints given in info, until a client connects; it is collective over comm and returns the intercommunicator newcomm.

MPI_CLOSE_PORT(port_name)
IN port_name
int MPI_Close_port(char * port_name)
MPI_CLOSE_PORT(PORT_NAME, IERROR)
CHARACTER *(*) PORT_NAME
INTEGER IERROR
MPI 134 MPI_CLOSE_PORT

MPI_CLOSE_PORT releases the port designated by port_name.
283 MPI_COMM_CONNECT(port_name, info, root, comm, newcomm) IN port_name IN info IN root IN comm OUT newcomm int MPI_Comm_connect(char * port_name, MPI_Info info, int root, MPI_Comm comm, MPI_Comm * newcomm) MPI_COMM_CONNECT(PORT_NAME, INFO, ROOT, COMM, NEWCOMM, IERROR) CHARACTER *(*) PORT_NAME INTEGER INFO, ROOT, COMM, NEWCOMM, IERROR MPI 135 MPI_COMM_CONNECT MPI_COMM_CONNECT port_name port_name info root newcomm MPI_COMM_DISCONNECT MPI_COMM_DISCONNECT(comm) INOUT comm int MPI_Comm_disconnect(MPI_Comm * comm) MPI_COMM_DISCONNECT(COMM, IERROR) INTEGER COMM, IERROR MPI 136 MPI_COMM_DISCONNECT comm / / 266
MPI_PUBLISH_NAME(service_name, info, port_name)
IN service_name
IN info
IN port_name
int MPI_Publish_name(char * service_name, MPI_Info info, char * port_name)
MPI_PUBLISH_NAME(SERVICE_NAME, INFO, PORT_NAME, IERROR)
INTEGER INFO, IERROR
CHARACTER *(*) SERVICE_NAME, PORT_NAME
MPI 137 MPI_PUBLISH_NAME

MPI_PUBLISH_NAME makes the pair (service_name, port_name) publicly known, so that a client can find the port knowing only the name of the service.

MPI_LOOKUP_NAME(service_name, info, port_name)
IN service_name
IN info
OUT port_name
int MPI_Lookup_name(char * service_name, MPI_Info info, char * port_name)
MPI_LOOKUP_NAME(SERVICE_NAME, INFO, PORT_NAME, IERROR)
CHARACTER *(*) SERVICE_NAME, PORT_NAME
INTEGER INFO, IERROR
MPI 138 MPI_LOOKUP_NAME

MPI_UNPUBLISH_NAME(service_name, info, port_name)
IN service_name
IN info
IN port_name
int MPI_Unpublish_name(char * service_name, MPI_Info info, char * port_name)
MPI_UNPUBLISH_NAME(SERVICE_NAME, INFO, PORT_NAME, IERROR)
INTEGER INFO, IERROR
CHARACTER *(*) SERVICE_NAME, PORT_NAME
MPI 139 MPI_UNPUBLISH_NAME
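A minimal client/server sketch combining the port and naming calls of this section; the service name "myservice" and the single integer message are assumptions for the example, and the use of MPI_Publish_name/MPI_Lookup_name presupposes that the implementation provides a name service.

/* server side: create a port, publish it, and accept one client */
char port[MPI_MAX_PORT_NAME];
MPI_Comm client;
MPI_Status st;
int x;
MPI_Open_port(MPI_INFO_NULL, port);
MPI_Publish_name("myservice", MPI_INFO_NULL, port);
MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);
MPI_Recv(&x, 1, MPI_INT, 0, 0, client, &st);
MPI_Comm_disconnect(&client);
MPI_Unpublish_name("myservice", MPI_INFO_NULL, port);
MPI_Close_port(port);

/* client side: look the port up by its service name and connect */
char port[MPI_MAX_PORT_NAME];
MPI_Comm server;
int x = 42;
MPI_Lookup_name("myservice", MPI_INFO_NULL, port);
MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);
MPI_Send(&x, 1, MPI_INT, 0, 0, server);
MPI_Comm_disconnect(&server);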
285 19.4 socket MPI socket MPI_COMM_JOIN(fd, intercomm) IN fd socket OUT intercomm socket int MPI_Comm_join(int fd, MPI_Comm * intercomm) MPI_COMM_JOIN(FD, INTERCOMM,IERROR) INTEGER FD, INTERCOMM, IERROR MPI 140 MPI_COMM_JOIN MPI_COMM_JOIN socket socket MPI 19.5 socket 268
MPI-2 remote memory access (RMA) extends the message passing model of MPI with one-sided communication: a single process specifies both sides of a data transfer, reading from or writing to a window of memory that another process has exposed, without the explicit participation of that process in the transfer itself. Communication and synchronization are separated, and an RMA operation is guaranteed complete only after an explicit synchronization call. MPI-2 provides three synchronization mechanisms:
1. fence: MPI_WIN_FENCE is collective over the window's group and, much like a barrier, separates successive RMA epochs;
2. general active target synchronization: the accessing processes bracket their RMA operations with MPI_WIN_START and MPI_WIN_COMPLETE, while the target brackets the exposure of its window with MPI_WIN_POST and MPI_WIN_WAIT; MPI_WIN_START matches MPI_WIN_POST, and MPI_WIN_WAIT returns only after all matching accesses have completed;
3. passive target synchronization: the accessing process alone brackets its operations with MPI_WIN_LOCK and MPI_WIN_UNLOCK; the target makes no synchronization call at all.
MPI_WIN_CREATE(base, size, disp_unit, info, comm, win)
IN base (initial address of the window)
IN size (size of the window in bytes)
IN disp_unit (local unit, in bytes, for displacements)
IN info
IN comm
OUT win
int MPI_Win_create(void * base, MPI_Aint size, int disp_unit, MPI_Info info, MPI_Comm comm, MPI_Win * win)
MPI_WIN_CREATE(BASE, SIZE, DISP_UNIT, INFO, COMM, WIN, IERROR)
<type> BASE(*)
INTEGER (KIND=MPI_ADDRESS_KIND) SIZE
INTEGER DISP_UNIT, INFO, COMM, WIN, IERROR
MPI 141 MPI_WIN_CREATE

MPI_WIN_CREATE is collective over comm: each process exposes a window of size bytes starting at base, with displacement unit disp_unit; info may carry optimization hints, and a process may pass size=0 to contribute no memory of its own.

MPI_WIN_FREE(win)
INOUT win
int MPI_Win_free(MPI_Win * win)
MPI_WIN_FREE(WIN, IERROR)
INTEGER WIN, IERROR
MPI 142 MPI_WIN_FREE

A window object created with MPI_WIN_CREATE is freed with the collective call MPI_WIN_FREE, which may be invoked only after all RMA operations on the window have completed; on return the handle is set to MPI_WIN_NULL.

MPI_PUT(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)
IN origin_addr
IN origin_count
IN origin_datatype
IN target_rank
IN target_disp
IN target_count
IN target_datatype
IN win
int MPI_Put(void * origin_addr, int origin_count, MPI_Datatype origin_datatype, int target_rank, MPI_Aint target_disp, int target_count, MPI_Datatype target_datatype, MPI_Win win)
MPI_PUT(ORIGIN_ADDR, ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK, TARGET_DISP, TARGET_COUNT, TARGET_DATATYPE, WIN, IERROR)
<type> ORIGIN_ADDR(*)
INTEGER (KIND=MPI_ADDRESS_KIND) TARGET_DISP
INTEGER ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK, TARGET_COUNT, TARGET_DATATYPE, WIN, IERROR
MPI 143 MPI_PUT
MPI_PUT transfers origin_count successive items of type origin_datatype, starting at address origin_addr of the calling (origin) process, to the target process target_rank. The data is written into the target's window starting at target_address = base + target_disp * disp_unit, as target_count items of type target_datatype, where base and disp_unit are the values the target specified when the window was created. For example, with a datatype type1 of three elements, MPI_PUT(buf,3,type1,j,4,3,type1,win) writes 3 items of type1 from buf of process i into the window of process j at displacement 4 (Figure 82: MPI_PUT).

MPI_GET(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)
OUT origin_addr
IN origin_count
IN origin_datatype
IN target_rank
IN target_disp
IN target_count
IN target_datatype
IN win
int MPI_Get(void *origin_addr, int origin_count, MPI_Datatype origin_datatype, int target_rank, MPI_Aint target_disp, int target_count, MPI_Datatype target_datatype, MPI_Win win)
MPI_GET(ORIGIN_ADDR, ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK, TARGET_DISP, TARGET_COUNT, TARGET_DATATYPE, WIN, IERROR)
<type> ORIGIN_ADDR(*)
INTEGER (KIND=MPI_ADDRESS_KIND) TARGET_DISP
INTEGER ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK, TARGET_COUNT, TARGET_DATATYPE, WIN, IERROR
MPI 144 MPI_GET

MPI_GET is the converse of MPI_PUT: target_rank, target_disp, target_count and target_datatype describe the data to be read from the target's window, and origin_addr, origin_count and origin_datatype describe where it is stored at the origin. Correspondingly, MPI_GET(buf,3,type1,j,4,3,type1,win) reads 3 items of type1 from displacement 4 of the window of process j into buf of process i (Figure 83: MPI_GET).

MPI_ACCUMULATE (Figure 84) combines communication with computation: the origin data described by origin_addr, origin_count and origin_datatype is combined, using the reduction operation op, with the target data described by target_rank, target_disp, target_count and target_datatype.

MPI_ACCUMULATE(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win)
IN origin_addr
IN origin_count
IN origin_datatype
IN target_rank
IN target_disp
IN target_count
IN target_datatype
IN op
IN win
int MPI_Accumulate(void * origin_addr, int origin_count, MPI_Datatype origin_datatype, int target_rank, MPI_Aint target_disp, int target_count, MPI_Datatype target_datatype, MPI_Op op, MPI_Win win)
MPI_ACCUMULATE(ORIGIN_ADDR, ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK, TARGET_DISP, TARGET_COUNT, TARGET_DATATYPE, OP, WIN, IERROR)
<type> ORIGIN_ADDR(*)
INTEGER (KIND=MPI_ADDRESS_KIND) TARGET_DISP
INTEGER ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK, TARGET_COUNT, TARGET_DATATYPE, OP, WIN, IERROR
MPI 145 MPI_ACCUMULATE

For example, MPI_ACCUMULATE(buf, 3, type1, j, 4, 3, type1, MPI_SUM, win) adds the 3 items of type1 taken from buf to the 3 items of type1 at displacement 4 of the window of process j, using the operation MPI_SUM (Figure 85: MPI_ACCUMULATE).
292 MPI_WIN_GET_GROUP(win, group) IN win OUT group int MPI_Win_get_group(MPI_Win win, MPI_Group * group) MPI_WIN_GET_GROUP(WIN,GROUP, IERROR) INTEGER WIN, GROUP, IERROR MPI 146 MPI_WIN_GET_GROUP MPI_WIN_GET_GROUP win group win MPI_WIN_FENCE(assert, win) IN assert IN win int MPI_Win_fence(int assert, MPI_Win win) MPI_WIN_FENCE(ASSERT,WIN, IERROR) INTEGER ASSERT, WIN, IERROR MPI 147 MPI_WIN_FENCE MPI_WIN_FENCE win 275
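Before the fence semantics are described in detail, a minimal sketch of the fence pattern may make it concrete; the window of four integers and the exchange with the right neighbor are assumptions for the example.

#include "mpi.h"
#define N 4

int rank, size, buf[N], loc[N];
MPI_Win win;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
/* every process exposes buf as a window; the displacement unit is one int */
MPI_Win_create(buf, N * sizeof(int), sizeof(int),
               MPI_INFO_NULL, MPI_COMM_WORLD, &win);
MPI_Win_fence(0, win);            /* opens the RMA epoch on all processes */
/* read N integers from the window of the right neighbor */
MPI_Get(loc, N, MPI_INT, (rank + 1) % size, 0, N, MPI_INT, win);
MPI_Win_fence(0, win);            /* closes the epoch: loc is now valid */
MPI_Win_free(&win);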
293 0 1 N-1 MPI_WIN_FENCE MPI_WIN_FENCE MPI_WIN_FENCE MPI_GET MPI_PUT MPI_WIN_FENCE MPI_WIN_FENCE MPI_WIN_FENCE 86 MPI_WIN_FENCE MPI_WIN_FENCE MPI_WIN_FENCE MPI_WIN_FENCE MPI_WIN_POST MPI_WIN_START MPI_WIN_PUT MPI_WIN_COMPLETE MPI_WIN_WAIT
MPI_WIN_START(group, assert, win)
IN group (group of target processes)
IN assert
IN win
int MPI_Win_start(MPI_Group group, int assert, MPI_Win win)
MPI_WIN_START(GROUP, ASSERT, WIN, IERROR)
INTEGER GROUP, ASSERT, WIN, IERROR
MPI 148 MPI_WIN_START

MPI_WIN_START opens an access epoch for win: only the windows of the processes in group may be accessed, and only between this call and the matching MPI_WIN_COMPLETE. Each target must issue a matching MPI_WIN_POST, and MPI_WIN_START may wait until it has done so.

MPI_WIN_COMPLETE(win)
IN win
int MPI_Win_complete(MPI_Win win)
MPI_WIN_COMPLETE(WIN, IERROR)
INTEGER WIN, IERROR
MPI 149 MPI_WIN_COMPLETE

MPI_WIN_COMPLETE ends the access epoch opened by MPI_WIN_START; when it returns, all RMA operations issued in the epoch are complete at the origin.

MPI_WIN_POST(group, assert, win)
IN group (group of origin processes)
IN assert
IN win
int MPI_Win_post(MPI_Group group, int assert, MPI_Win win)
MPI_WIN_POST(GROUP, ASSERT, WIN, IERROR)
INTEGER GROUP, ASSERT, WIN, IERROR
MPI 150 MPI_WIN_POST

MPI_WIN_POST opens an exposure epoch: the local window may now be accessed by the processes in group. MPI_WIN_POST does not block; it matches the MPI_WIN_START calls of the origin processes.
295 MPI_WIN_WAIT(win) IN win int MPI_Win_wait(MPI_Win win) MPI_WIN_WAIT(WIN, IERROR) INTEGER WIN, IERROR MPI 151 MPI_WIN_WAIT MPI_WIN_WAIT MPI_WIN_POST MPI_WIN_COMPLETE MPI_WIN_TEST(win,flag) IN win OUT flag int MPI_Win_test(MPI_Win win, int * flag) MPI_WIN_TEST(WIN,FLAG,IERROR) INTEGER WIN, IERROR LOGICAL FLAG MPI 152 MPI_WIN_TEST MPI_WIN_TEST flag=true MPI_WIN_WAIT flag=false MPI_WIN_WAIT
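A minimal sketch of the general active target pattern using the four calls above; the choice of process 0 as origin and process 1 as target is an assumption for the example, and buf, N and win are as in the earlier fence sketch.

MPI_Group world_group, grp;
int peer;
MPI_Comm_group(MPI_COMM_WORLD, &world_group);
if (rank == 0) {              /* origin: access epoch towards process 1 */
    peer = 1;
    MPI_Group_incl(world_group, 1, &peer, &grp);
    MPI_Win_start(grp, 0, win);
    MPI_Put(buf, N, MPI_INT, 1, 0, N, MPI_INT, win);
    MPI_Win_complete(win);    /* the put is complete at the origin */
    MPI_Group_free(&grp);
} else if (rank == 1) {       /* target: exposure epoch for process 0 */
    peer = 0;
    MPI_Group_incl(world_group, 1, &peer, &grp);
    MPI_Win_post(grp, 0, win);
    MPI_Win_wait(win);        /* returns after the data has arrived */
    MPI_Group_free(&grp);
}
MPI_Group_free(&world_group);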
296 i j k MPI_WIN_LOCK 1 j j MPI_WIN_PUT 2 1 MPI_WIN_UNLOCK j 2 MPI_WIN_LOCK j MPI_WIN_GET MPI_WIN_UNLOCK j 88 MPI_WIN_LOCK(lock_type, rank, assert, win) IN lock_type IN rank IN assert IN win int MPI_Win_lock(int lock_type, int rank, int assert, MPI_Win win) MPI_WIN_LOCK(LOCK_TYPE, RANK, ASSERT, WIN, IERROR) INTEGER LOCK_TYPE, RANK, ASSERT, WIN, IERROR MPI 153 MPI_WIN_LOCK MPI_WIN_LOCK 279
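The sequence of Figure 88 can be written as the following minimal sketch, in which MPI_WIN_UNLOCK, described next, ends the access epoch; the exclusive lock and the target rank 1 are assumptions for the example (buf, N and win as before).

/* passive target: the target process makes no synchronization call */
MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, win);  /* lock the window at process 1 */
MPI_Put(buf, N, MPI_INT, 1, 0, N, MPI_INT, win);
MPI_Win_unlock(1, win);   /* returns only when the put is complete at the target */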
297 MPI_WIN_UNLOCK(rank, win) IN rank IN win int MPI_Win_unlock(int rank, MPI_Win win) MPI_WIN_UNLOCK(RANK, WIN, IERROR) INTEGER RANK, WIN, IERROR MPI 154 MPI_WIN_UNLOCK MPI_WIN_UNLOCK rank MPI_WIN_LOCK 20.4 MPI-2 280
298 I/O MPI-1 I/O I/O I/O MPI-2 I/O 21.1 MPI-2 I/O MPI MPI MPI_WAIT MPI-2 MPI MPI MPI
299 20 I/O READ_AT WRITE_AT IREAD_AT IWRITE_AT READ WRITE IREAD IWRITE READ_SHARED WRITE_SHARED IREAD_SHARED IWRITE_SHARED READ_AT_ALL WRITE_AT_ALL READ_AT_ALL_BEGIN READ_AT_ALL_END WRITE_AT_ALL_BEGIN WRITE_AT_ALL_END READ_ALL WRITE_ALL READ_ALL_BEGIN READ_ALL_END WRITE_ALL_BEGIN WRITE_ALL_END READ_ORDERED WRITE_ORDERED READ_ORDERED_BEGIN READ_ORDERED_END WRITE_ORDERED_BEGIN WRITE_ORDERED_END 21.2 MPI_FILE_OPEN(comm, filename, amode, info, fh) IN comm IN filename IN amode IN info OUT fh int MPI_File_open(MPI_Comm comm, char * filename, int amode, MPI_Info info, MPI_File * fh) MPI_FILE_OPEN(COMM,FILENAME, AMODE, INFO, FH,IERROR) CHARACTER *(*) FILENAME INTEGER COMM, AMODE, INFO, FH, IERROR MPI 155 MPI_FILE_OPEN MPI_FILE_OPEN comm 282
300 filename filename amode info fh fh MPI_MODE_RDONLY MPI_MODE_RDWR MPI_MODE_WRONLY MPI_MODE_CREATE MPI_MODE_EXCL MPI_MODE_DELETE_ON_CLOSE MPI_MODE_UNIQUE_OPEN MPI_MODE_SEQUENTIAL MPI_MODE_APPEND MPI_FILE_CLOSE(fh) INOUT fh int MPI_File_close(MPI_File * fh) MPI_FILE_CLOSE(FH,IERROR) INTEGER FH, IERROR MPI 156 MPI_FILE_CLOSE MPI_FILE_CLOSE fh fh MPI_FILE_DELETE(filename, info) IN filename IN info int MPI_File_delete(char * filename, MPI_Info info) MPI_FILE_DELETE(FILENAME, INFO, IERROR) CHARACTER *(*) FILENAME INTEGER INFO, IERROR MPI 157 MPI_FILE_DELETE MPI_FILE_DELETE filename 283
301 MPI_FILE_SET_SIZE(fh,size) INOUT fh IN size int MPI_File_set_size(MPI_File fh, MPI_Offset size) MPI_FILE_SET_SIZE(FH, SIZE, IERROR) INTEGER FH, IERROR INTEGER (KIND=MPI_OFFSET_KIND) SIZE MPI 158 MPI_FILE_SET_SIZE MPI_FILE_SET_SIZE fh size size MPI_FILE_PREALLOCATE(fh, size) INOUT fh IN size int MPI_File_preallocate(MPI_File fh, MPI_Offset size) MPI_FILE_PREALLOCATE(FH, SIZE, IERROR) INTEGER FH, IERROR INTEGER (KIND=MPI_OFFSET_KIND) SIZE MPI 159 MPI_FILE_PREALLOCATE MPI_FILE_PREALLOCATE fh size size size size size MPI_FILE_GET_SIZE(fh,size) IN fh OUT size int MPI_File_get_size(MPI_File fh, MPI_Offset * size) MPI_FILE_GET_SIZE(FH, SIZE, IERROR) INTEGER FH, IERROR INTEGER (KIND=MPI_OFFSET_KIND) SIZE MPI 160 MPI_FILE_GET_SIZE MPI_FILE_GET_SIZE 284
MPI_FILE_GET_GROUP(fh, group)
IN fh
OUT group
int MPI_File_get_group(MPI_File fh, MPI_Group * group)
MPI_FILE_GET_GROUP(FH, GROUP, IERROR)
INTEGER FH, GROUP, IERROR
MPI 161 MPI_FILE_GET_GROUP

MPI_FILE_GET_GROUP returns in group the group of the communicator with which fh was opened.

MPI_FILE_GET_AMODE(fh, amode)
IN fh
OUT amode
int MPI_File_get_amode(MPI_File fh, int * amode)
MPI_FILE_GET_AMODE(FH, AMODE, IERROR)
INTEGER FH, AMODE, IERROR
MPI 162 MPI_FILE_GET_AMODE

MPI_FILE_GET_AMODE returns in amode the access mode with which fh was opened.

MPI_FILE_SET_INFO(fh, info)
INOUT fh
IN info
int MPI_File_set_info(MPI_File fh, MPI_Info info)
MPI_FILE_SET_INFO(FH, INFO, IERROR)
INTEGER FH, INFO, IERROR
MPI 163 MPI_FILE_SET_INFO

MPI_FILE_SET_INFO associates the hints in info with fh.

MPI_FILE_GET_INFO(fh, info_used)
IN fh
OUT info_used
int MPI_File_get_info(MPI_File fh, MPI_Info * info_used)
MPI_FILE_GET_INFO(FH, INFO_USED, IERROR)
INTEGER FH, INFO_USED, IERROR
MPI 164 MPI_FILE_GET_INFO

MPI_FILE_GET_INFO returns in info_used a new info object containing the hints currently in effect for fh.

21.3 Data access with explicit offsets

The calls in this section address the file directly by an offset given in the call itself; no separate SEEK is needed.

MPI_FILE_READ_AT(fh, offset, buf, count, datatype, status)
IN fh
IN offset
OUT buf
IN count
IN datatype
OUT status
int MPI_File_read_at(MPI_File fh, MPI_Offset offset, void * buf, int count, MPI_Datatype datatype, MPI_Status * status)
MPI_FILE_READ_AT(FH, OFFSET, BUF, COUNT, DATATYPE, STATUS, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR
INTEGER(KIND=MPI_OFFSET_KIND) OFFSET
MPI 165 MPI_FILE_READ_AT

MPI_FILE_READ_AT reads from fh, starting at position offset, count items of datatype into buf; the outcome of the operation is returned in status.
For example, with element type type1 and offset=100, the call MPI_FILE_READ_AT(fh,100,buf,5,type1,status) reads 5 items of type1, starting at offset 100 of the file, into buf (Figure 89: MPI_FILE_READ_AT).

MPI_FILE_WRITE_AT(fh, offset, buf, count, datatype, status)
INOUT fh
IN offset
IN buf
IN count
IN datatype
OUT status
int MPI_File_write_at(MPI_File fh, MPI_Offset offset, void * buf, int count, MPI_Datatype datatype, MPI_Status * status)
MPI_FILE_WRITE_AT(FH, OFFSET, BUF, COUNT, DATATYPE, STATUS, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR
INTEGER (KIND=MPI_OFFSET_KIND) OFFSET
MPI 166 MPI_FILE_WRITE_AT

MPI_FILE_WRITE_AT is the converse of MPI_FILE_READ_AT: it writes, starting at position offset of fh, count items of datatype taken from buf, returning the outcome in status. Correspondingly, MPI_FILE_WRITE_AT(fh,100,buf,5,type1,status) writes 5 items of type1 from buf to the file starting at offset 100 (Figure 90: MPI_FILE_WRITE_AT).

MPI_FILE_READ_AT_ALL(fh, offset, buf, count, datatype, status)
IN fh
IN offset
OUT buf
IN count
IN datatype
OUT status
int MPI_File_read_at_all(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype, MPI_Status * status)
MPI_FILE_READ_AT_ALL(FH, OFFSET, BUF, COUNT, DATATYPE, STATUS, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR
INTEGER (KIND=MPI_OFFSET_KIND) OFFSET
MPI 167 MPI_FILE_READ_AT_ALL

MPI_FILE_READ_AT_ALL is the collective version of MPI_FILE_READ_AT: every process of the group that opened fh must call it.
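A minimal sketch of explicit-offset access: every process writes its own block of a common file and reads it back; the file name "datafile" and the block of 10 integers are assumptions for the example.

MPI_File fh;
MPI_Status st;
int buf[10];
MPI_Offset off = (MPI_Offset)rank * 10 * sizeof(int);  /* rank-dependent position */
MPI_File_open(MPI_COMM_WORLD, "datafile",
              MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);
MPI_File_write_at(fh, off, buf, 10, MPI_INT, &st);     /* write own block */
MPI_File_read_at(fh, off, buf, 10, MPI_INT, &st);      /* read it back */
MPI_File_close(&fh);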
306 MPI_FILE_WRITE_AT_ALL(fh, offset, buf, count, datatype, status) INOUT fh IN offset IN buf IN count IN datatype OUT status int MPI_File_write_at_all(MPI_File fh, MPI_Offset offset, void * buf, int count, MPI_Datatype datatype, MPI_Status * status) MPI_FILE_WRITE_AT_ALL(FH, OFFSET, BUF, COUNT, DATATYPE, STATUS, IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR INTEGER (KIND=MPI_OFFSET_KIND) OFFSET MPI 168 MPI_FILE_WRITE_AT_ALL MPI_FILE_WRITE_AT_ALL MPI_FILE_READ_AT_ALL fh MPI_FILE_WRITE_AT MPI_FILE_IREAD_AT MPI_FILE_READ_AT fh offset count datatype buf request request MPI_WAIT MPI_TEST 289
307 MPI_FILE_IREAD_AT(fh, offset,buf, count, datatype, request) IN fh IN offset OUT buf IN count IN datatype OUT request int MPI_File_iread_at(MPI_File fh, MPI_Offset offset, void * buf, int count, MPI_Datatype datatype, MPI_Request * request) MPI_FILE_IREAD_AT(FH,OFFSET,BUF,COUNT,DATATYPE,REQUEST,IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, REQUEST, IERROR INTEGER (KIND=MPI_OFFSET_KIND) OFFSET MPI 169 MPI_FILE_IREAD_AT MPI_FILE_IWRITE_AT(fh, offset,buf, count, datatype, request) INOUT fh IN offset IN buf IN count IN datatype OUT request int MPI_File_iwrite_at(MPI_File fh, MPI_Offset offset, void * buf, int count, MPI_Datatype datatype, MPI_Request * request) MPI_FILE_IWRITE_AT(FH,OFFSET,BUF,COUNT,DATATYPE,REQUEST,IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, REQUEST, IERROR INTEGER (KIND=MPI_OFFSET_KIND) OFFSET MPI 170 MPI_FILE_IWRITE_AT MPI_FILE_IWRITE_AT MPI_FILE_WRITE_AT fh offset count datatype request buf request MPI_WAIT 290
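A minimal sketch of the nonblocking variant, overlapping the read with computation; fh, off and buf are as in the previous sketch.

MPI_Request req;
MPI_Status st;
MPI_File_iread_at(fh, off, buf, 10, MPI_INT, &req);  /* start the read, return at once */
/* ... computation that does not use buf ... */
MPI_Wait(&req, &st);                                 /* now buf is valid */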
Split collective data access separates a collective transfer into a begin part and an end part, so that computation can proceed between them; no request object and no MPI_WAIT are involved. At most one split collective operation may be outstanding per file handle, every process of the group (ranks 0 to N-1) must issue the matching MPI_FILE_..._BEGIN and MPI_FILE_..._END calls in the same order, and the buffer passed to the begin call must not be touched until the end call returns (Figure 91).

MPI_FILE_READ_AT_ALL_BEGIN(fh, offset, buf, count, datatype)
IN fh
IN offset
OUT buf
IN count
IN datatype
int MPI_File_read_at_all_begin(MPI_File fh, MPI_Offset offset, void * buf, int count, MPI_Datatype datatype)
MPI_FILE_READ_AT_ALL_BEGIN(FH, OFFSET, BUF, COUNT, DATATYPE, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, IERROR
INTEGER (KIND=MPI_OFFSET_KIND) OFFSET
MPI 171 MPI_FILE_READ_AT_ALL_BEGIN

MPI_FILE_READ_AT_ALL_BEGIN starts a collective read from fh at offset of count items of datatype; the data becomes available in buf only after the matching end call.
309 buf MPI_FILE_READ_AT_ALL_END MPI_FILE_READ_AT_ALL_END buf MPI_FILE_READ_AT_ALL_END(fh, buf, status) IN fh OUT buf OUT status int MPI_File_read_at_all_end(MPI_File fh, void * buf, MPI_Status *status) MPI_FILE_READ_AT_ALL_END(FH, BUF, STATUS, IERROR) <type> BUF(*) INTEGER FH, STATUS(MPI_STATUS_SIZE), IERROR MPI 172 MPI_FILE_READ_AT_ALL_END MPI_FILE_READ_AT_ALL_END MPI_FILE_READ_AT_ALL_BEGIN fh buf MPI_FILE_READ_AT_ALL_BEGIN buf MPI_FILE_WRITE_AT_ALL_BEGIN(fh, offset, buf, count, datatype) INOUT fh IN offset IN buf IN count IN datatype int MPI_File_write_at_all_begin(MPI_File fh, MPI_Offset offset, void * buf, int count, MPI_Datatype datatype) MPI_FILE_WRITE_AT_ALL_BEGIN(FH, OFFSET, BUF, COUNT, DATATYPE, IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, IERROR INTEGER (KIND=MPI_OFFSET_KIND) OFFSET MPI 173 MPI_FILE_WRITE_AT_ALL_BEGIN MPI_FILE_WRITE_AT_ALL_BEGIN fh offset buf count datatype MPI_FILE_READ_AT_ALL_BEGIN MPI_FILE_WRITE_AT_ALL_END 292
310 MPI_FILE_WRITE_AT_ALL_END(fh, buf, status) INOUT fh IN buf OUT status int MPI_File_write_at_all_end(MPI_File fh, void * buf, MPI_Status *status) MPI_FILE_WRITE_AT_ALL_END(FH, BUF, STATUS, IERROR) <type> BUF(*) INTEGER FH, STATUS(MPI_STATUS_SIZE), IERROR MPI 174 MPI_FILE_WRITE_AT_ALL_END MPI_FILE_WRITE_AT_ALL_END MPI_FILE_WRITE_AT_ALL_BEGIN buf
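A minimal sketch of the split collective form; between the begin and the end call the process may compute, but must not touch buf (fh, off, buf and st as before).

MPI_File_write_at_all_begin(fh, off, buf, 10, MPI_INT);
/* ... computation that does not use buf ... */
MPI_File_write_at_all_end(fh, buf, &st);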
A process's view of a file is defined by the triple <displacement, etype, filetype>, established with MPI_FILE_SET_VIEW (Figure 93). The displacement disp is a byte offset from the beginning of the file; it can be used, for example, to skip a file header. The etype is the elementary datatype in which offsets and file pointers are counted. The filetype is a datatype built out of etypes; the view consists of the filetype tiled over the file repeatedly, starting at disp, and the process accesses only the data falling into the non-hole portions of the N-fold repeated filetype.

MPI_FILE_SET_VIEW(fh, disp, etype, filetype, datarep, info)
INOUT fh
IN disp
IN etype
IN filetype
IN datarep
IN info
int MPI_File_set_view(MPI_File fh, MPI_Offset disp, MPI_Datatype etype, MPI_Datatype filetype, char * datarep, MPI_Info info)
MPI_FILE_SET_VIEW(FH, DISP, ETYPE, FILETYPE, DATAREP, INFO, IERROR)
CHARACTER *(*) DATAREP
INTEGER FH, ETYPE, FILETYPE, INFO, IERROR
INTEGER (KIND=MPI_OFFSET_KIND) DISP
MPI 175 MPI_FILE_SET_VIEW

MPI_FILE_SET_VIEW changes the view of fh; it is collective over the group that opened fh. The string datarep selects the data representation (Figure 94): native stores the data in the file exactly as in memory, with no conversion and no loss of precision, which is fastest but not portable; internal is an implementation-defined representation that the same MPI implementation can read on any of its platforms; external32 is the standard representation defined by MPI, portable across MPI implementations.

MPI_FILE_GET_VIEW(fh, disp, etype, filetype, datarep)
IN fh
OUT disp
OUT etype
OUT filetype
OUT datarep
int MPI_File_get_view(MPI_File fh, MPI_Offset * disp, MPI_Datatype * etype, MPI_Datatype * filetype, char * datarep)
MPI_FILE_GET_VIEW(FH, DISP, ETYPE, FILETYPE, DATAREP, IERROR)
INTEGER FH, ETYPE, FILETYPE, IERROR
CHARACTER *(*) DATAREP
INTEGER (KIND=MPI_OFFSET_KIND) DISP
MPI 176 MPI_FILE_GET_VIEW

MPI_FILE_GET_VIEW returns the current view of fh: the displacement in disp, the elementary type in etype, the filetype in filetype and the data representation in datarep, as established by MPI_FILE_SET_VIEW.

MPI_FILE_SEEK(fh, offset, whence)
INOUT fh
IN offset
IN whence (how offset is to be interpreted)
int MPI_File_seek(MPI_File fh, MPI_Offset offset, int whence)
MPI_FILE_SEEK(FH, OFFSET, WHENCE, IERROR)
INTEGER FH, WHENCE, IERROR
INTEGER (KIND=MPI_OFFSET_KIND) OFFSET
MPI 177 MPI_FILE_SEEK

MPI_FILE_SEEK moves the individual file pointer of fh according to whence, which must be MPI_SEEK_SET, MPI_SEEK_CUR or MPI_SEEK_END:
MPI_SEEK_SET: the pointer is set to offset
MPI_SEEK_CUR: the pointer is set to the current position plus offset
MPI_SEEK_END: the pointer is set to the end of the file plus offset
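A minimal sketch of a view in use: each process builds a vector filetype so that the processes see interleaved blocks of the same file, then writes through its individual file pointer; the block length of 2 integers and the count of 100 blocks are assumptions for the example (fh, rank and size as before).

MPI_Datatype filetype;
/* 100 blocks of 2 ints each, strided so that the blocks of the
   other size-1 processes fall into the holes */
MPI_Type_vector(100, 2, 2 * size, MPI_INT, &filetype);
MPI_Type_commit(&filetype);
/* the byte displacement shifts each process to its own first block */
MPI_File_set_view(fh, (MPI_Offset)rank * 2 * sizeof(int),
                  MPI_INT, filetype, "native", MPI_INFO_NULL);
MPI_File_write(fh, buf, 200, MPI_INT, &st);  /* 200 ints land in this process's blocks only */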
MPI_FILE_GET_POSITION(fh, offset)
IN fh
OUT offset
int MPI_File_get_position(MPI_File fh, MPI_Offset * offset)
MPI_FILE_GET_POSITION(FH, OFFSET, IERROR)
INTEGER FH, IERROR
INTEGER (KIND=MPI_OFFSET_KIND) OFFSET
MPI 178 MPI_FILE_GET_POSITION

MPI_FILE_GET_POSITION returns in offset the current position of the individual file pointer, counted in etype units relative to the current view (Figure 95 shows a pointer standing at position 3).

MPI_FILE_GET_BYTE_OFFSET(fh, offset, disp)
IN fh
IN offset
OUT disp
int MPI_File_get_byte_offset(MPI_File fh, MPI_Offset offset, MPI_Offset * disp)
MPI_FILE_GET_BYTE_OFFSET(FH, OFFSET, DISP, IERROR)
INTEGER FH, IERROR
INTEGER (KIND=MPI_OFFSET_KIND) OFFSET, DISP
MPI 179 MPI_FILE_GET_BYTE_OFFSET

MPI_FILE_GET_BYTE_OFFSET converts the view-relative position offset into the corresponding absolute byte position of the file, returned in disp.

MPI_FILE_READ(fh, buf, count, datatype, status)
INOUT fh
OUT buf
IN count
IN datatype
OUT status
int MPI_File_read(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Status * status)
MPI_FILE_READ(FH, BUF, COUNT, DATATYPE, STATUS, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR
MPI 180 MPI_FILE_READ

MPI_FILE_READ reads from fh, at the current position of the individual file pointer, count items of datatype into buf, returning the outcome in status.

MPI_FILE_WRITE(fh, buf, count, datatype, status)
INOUT fh
IN buf
IN count
IN datatype
OUT status
int MPI_File_write(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Status * status)
MPI_FILE_WRITE(FH, BUF, COUNT, DATATYPE, STATUS, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR
MPI 181 MPI_FILE_WRITE

MPI_FILE_WRITE is the converse of MPI_FILE_READ: it writes, at the current position of the individual file pointer, count items of datatype from buf to fh, returning the outcome in status.

MPI_FILE_READ_ALL(fh, buf, count, datatype, status)
INOUT fh
OUT buf
IN count
IN datatype
OUT status
int MPI_File_read_all(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Status * status)
MPI_FILE_READ_ALL(FH, BUF, COUNT, DATATYPE, STATUS, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR
MPI 182 MPI_FILE_READ_ALL

MPI_FILE_READ_ALL is the collective version of MPI_FILE_READ: every process of the group that opened fh reads count items of datatype into its buf, with the outcome in status.

MPI_FILE_WRITE_ALL(fh, buf, count, datatype, status)
INOUT fh
IN buf
IN count
IN datatype
OUT status
int MPI_File_write_all(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Status * status)
MPI_FILE_WRITE_ALL(FH, BUF, COUNT, DATATYPE, STATUS, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR
MPI 183 MPI_FILE_WRITE_ALL

MPI_FILE_WRITE_ALL is the collective version of MPI_FILE_WRITE.

The following calls are nonblocking: they start the transfer and return immediately; completion is established with MPI_WAIT (or MPI_TEST) on the returned request, which MPI-2 extends to file requests.

MPI_FILE_IREAD(fh, buf, count, datatype, request)
INOUT fh
OUT buf
IN count
IN datatype
OUT request
int MPI_File_iread(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Request * request)
MPI_FILE_IREAD(FH, BUF, COUNT, DATATYPE, REQUEST, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, REQUEST, IERROR
MPI 184 MPI_FILE_IREAD

MPI_FILE_IREAD starts a nonblocking read from fh of count items of datatype into buf and returns at once a request object in request; MPI_WAIT on request completes the read.

MPI_FILE_IWRITE(fh, buf, count, datatype, request)
INOUT fh
IN buf
IN count
IN datatype
OUT request
int MPI_File_iwrite(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Request * request)
MPI_FILE_IWRITE(FH, BUF, COUNT, DATATYPE, REQUEST, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, REQUEST, IERROR
MPI 185 MPI_FILE_IWRITE

MPI_FILE_IWRITE starts the corresponding nonblocking write of count items of datatype from buf to fh; completion is again established with MPI_WAIT on request.

MPI_FILE_READ_ALL_BEGIN(fh, buf, count, datatype)
INOUT fh
OUT buf
IN count
IN datatype
int MPI_File_read_all_begin(MPI_File fh, void * buf, int count, MPI_Datatype datatype)
MPI_FILE_READ_ALL_BEGIN(FH, BUF, COUNT, DATATYPE, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, IERROR
MPI 186 MPI_FILE_READ_ALL_BEGIN

MPI_FILE_READ_ALL_BEGIN starts the split collective version of MPI_FILE_READ_ALL; the transfer of count items of datatype into buf completes only at the matching MPI_FILE_READ_ALL_END.
319 count datatype buf MPI_FILE_READ_ALL_END MPI_FILE_READ_ALL_END(fh, buf,status) INOUT fh OUT buf OUT status int MPI_File_read_all_end(MPI_File fh, void * buf, MPI_Status * status) MPI_FILE_READ_ALL_END(FH, BUF, STATUS, IERROR) <type> BUF(*) INTEGER FH, STATUS(MPI_STATUS_SIZE), IERROR MPI 187 MPI_FILE_READ_ALL_END MPI_FILE_READ_ALL_END MPI_FILE_READ_ALL_BEGIN MPI_FILE_READ_ALL_END MPI_FILE_WRITE_ALL_BEGIN(fh, buf,count,datatype) INOUT fh IN buf IN count IN datatype int MPI_File_write_all_begin(MPI_File fh, void * buf, int count, MPI_Datatype datatype) MPI_FILE_WRITE_ALL_BEGIN(FH, BUF, COUNT, DATATYPE, IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, IERROR MPI 188 MPI_FILE_WRITE_ALL_BEGIN MPI_FILE_WRITE_ALL_BEGIN MPI_FILE_READ_ALL_BEGIN fh buf count datatype fh MPI_FILE_WRITE_ALL_END 302
320 MPI_FILE_WRITE_ALL_END(fh, buf,status) INOUT fh IN buf OUT status int MPI_File_write_all_end(MPI_File fh, void * buf, MPI_Status * status) MPI_FILE_WRITE_ALL_END(FH, BUF, STATUS, IERROR) <type> BUF(*) INTEGER FH, STATUS(MPI_STATUS_SIZE), IERROR MPI 189 MPI_FILE_WRITE_ALL_END MPI_FILE_WRITE_ALL_END MPI_FILE_WRITE_ALL_BEGIN MPI_FILE_WRITE_ALL_END 21.5 MPI_FILE_SEEK_SHARED(fh, offset, whence) INOUT fh IN offset IN whence int MPI_File_seek_shared(MPI_File fh, MPI_Offset offset, int whence) MPI_FILE_SEEK_SHARED(FH, OFFSET, WHENCE, IERROR) INTEGER FH, WHENCE, IERROR INTEGER (KIND=MPI_OFFSET_KIND) OFFSET MPI 190 MPI_FILE_SEEK_SHARED MPI_FILE_SEEK_SHARED MPI_FILE_SEEK MPI_FILE_SEEK MPI_FILE_GET_POSITION_SHARED MPI_FILE_GET_POSITION 303
321 MPI_FILE_GET_POSITION_SHARED(fh, offset) IN fh OUT offset int MPI_File_get_position(MPI_File fh, MPI_Offset * offset) MPI_FILE_GET_POSITION_SHARED(FH, OFFSET, IERROR) INTEGER FH, IERROR INTEGER (KIND=MPI_OFFSET_KIND) OFFSET MPI 191 MPI_FILE_GET_POSITION_SHARED MPI_FILE_READ_SHARED(fh, buf,count,datatype,status) INOUT fh OUT buf IN count IN datatype OUT status int MPI_File_read_shared(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Status * status) MPI_FILE_READ_SHARED(FH, BUF, COUNT,DATATYPE, STATUS,IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR MPI 192 MPI_FILE_READ_SHARED MPI_FILE_READ_SHARED fh count datatype buf status MPI_FILE_WRITE_SHARED fh buf count datatype status MPI_FILE_WRITE_SHARED MPI_FILE_READ_SHARED 304
322 MPI_FILE_WRITE_SHARED(fh, buf,count,datatype,status) INOUT fh IN buf IN count IN datatype OUT status int MPI_File_write_shared(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Status * status) MPI_FILE_WRITE_SHARED(FH, BUF, COUNT,DATATYPE, STATUS,IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR MPI 193 MPI_FILE_WRITE_SHARED MPI_FILE_READ_ORDERED(fh, buf, count, datatype, status) INOUT fh OUT buf IN count IN datatype OUT status int MPI_File_read_ordered(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Status * status) MPI_FILE_READ_ORDERED(FH, BUF, COUNT, DATATYPE, STATUS, IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR MPI 194 MPI_FILE_READ_ORDERED MPI_FILE_READ_ORDERED MPI_FILE_READ_SHARED rank N-1 count datatype buf status MPI_FILE_WRITE_ORDERED MPI_FILE_WRITE_SHARED rank N-1 buf count datatype status 305
323 MPI_FILE_WRITE_ORDERED(fh, buf, count, datatype, status) INOUT fh IN buf IN count IN datatype OUT status int MPI_File_write_ordered(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Status * status) MPI_FILE_WRITE_ORDERED(FH, BUF, COUNT, DATATYPE, STATUS, IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR MPI 195 MPI_FILE_WRITE_ORDERED MPI_FILE_IREAD_SHARED MPI_FILE_READ_SHARED fh count datatype buf MPI_FILE_READ_SHARED request MPI_WAIT MPI_FILE_IREAD_SHARED(fh, buf,count,datatype,request) INOUT fh OUT buf IN count IN datatype OUT request int MPI_File_iread_shared(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Request * request) MPI_FILE_IREAD_SHARED(FH, BUF, COUNT,DATATYPE, REQUEST,IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, REQUEST, IERROR MPI 196 MPI_FILE_IREAD_SHARED MPI_FILE_IWRITE_SHARED fh buf count datatype request request MPI_WAIT 306
MPI_FILE_IWRITE_SHARED(fh, buf, count, datatype, request)
INOUT fh
IN buf
IN count
IN datatype
OUT request
int MPI_File_iwrite_shared(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Request * request)
MPI_FILE_IWRITE_SHARED(FH, BUF, COUNT, DATATYPE, REQUEST, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, REQUEST, IERROR
MPI 197 MPI_FILE_IWRITE_SHARED

MPI_FILE_READ_ORDERED_BEGIN(fh, buf, count, datatype)
INOUT fh
OUT buf
IN count
IN datatype
int MPI_File_read_ordered_begin(MPI_File fh, void * buf, int count, MPI_Datatype datatype)
MPI_FILE_READ_ORDERED_BEGIN(FH, BUF, COUNT, DATATYPE, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, IERROR
MPI 198 MPI_FILE_READ_ORDERED_BEGIN

MPI_FILE_READ_ORDERED_BEGIN starts the split collective version of MPI_FILE_READ_ORDERED on fh: the processes read, in rank order at the shared file pointer, count items of datatype into buf. The matching MPI_FILE_READ_ORDERED_END takes fh and buf and returns the outcome in status; buf may be used only after it returns.
MPI_FILE_READ_ORDERED_END(fh, buf, status)
INOUT fh
OUT buf
OUT status
int MPI_File_read_ordered_end(MPI_File fh, void * buf, MPI_Status * status)
MPI_FILE_READ_ORDERED_END(FH, BUF, STATUS, IERROR)
<type> BUF(*)
INTEGER FH, STATUS(MPI_STATUS_SIZE), IERROR
MPI 199 MPI_FILE_READ_ORDERED_END

MPI_FILE_WRITE_ORDERED_BEGIN(fh, buf, count, datatype)
INOUT fh
IN buf
IN count
IN datatype
int MPI_File_write_ordered_begin(MPI_File fh, void * buf, int count, MPI_Datatype datatype)
MPI_FILE_WRITE_ORDERED_BEGIN(FH, BUF, COUNT, DATATYPE, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, IERROR
MPI 200 MPI_FILE_WRITE_ORDERED_BEGIN

MPI_FILE_WRITE_ORDERED_BEGIN starts the split collective version of MPI_FILE_WRITE_ORDERED on fh: the processes write, in rank order at the shared file pointer, count items of datatype from buf. The operation is completed by the matching MPI_FILE_WRITE_ORDERED_END.

MPI_FILE_WRITE_ORDERED_END(fh, buf, status)
INOUT fh
IN buf
OUT status
int MPI_File_write_ordered_end(MPI_File fh, void * buf, MPI_Status * status)
MPI_FILE_WRITE_ORDERED_END(FH, BUF, STATUS, IERROR)
<type> BUF(*)
INTEGER FH, STATUS(MPI_STATUS_SIZE), IERROR
MPI 201 MPI_FILE_WRITE_ORDERED_END
326 MPI_FILE_WRITE_ORDERED_END fh buf status buf MPI_FILE_GET_TYPE_EXTENT(fh, datatype, extent) IN fh IN datatype OUT extent int MPI_File_get_type_extent(MPI_File fh, MPI_Datatype datatype, MPI_Aint * extent) MPI_FILE_GET_TYPE_EXTENT(FH, DATATYPE,EXTENT, IERROR) INTEGER FH, DATATYPE, IERROR INTEGER (KIND=MPI_ADDRESS_KIND) EXTENT MPI 202 MPI_FILE_GET_TYPE_EXTENT MPI_FILE_GET_TYPE_EXTENT fh datatype extent dtype_file_extent_fn MPI_REGISTER_DATAREP(datarep, read_conversion_fn, write_conversion_fn, dtype_file_extent_fn,extra_state) IN datarep IN read_conversion_fn IN write_conversion_fn IN dtype_file_extent_fn IN extra_state int MPI_Register_datarep(char * datarep, MPI_Datarep_conversion_function * read_conversion_fn, MPI_Datarep_conversion_function * write_conversion_fn, MPI_Datarep_extent_function * dtype_file_extent_fn, void * extra_state) MPI_REGISTER_DATAREP(DATAREP,READ_CONVERSION_FN, WRITE_CONVERSION_FN,DTYPE_FILE_EXTENT_FN,EXTRA_STATE,IERROR) EXTERNAL READ_CONVERSION_FN, WRITE_CONVERSION_FN DTYPE_FILE_EXTENT_FN INTEGER (KIND=MPI_ADDRESS_KIND) EXTRA_STATE INTEGER IERROR MPI 203 MPI_REGISTER_DATAREP MPI_REGISTER_DATAREP datarep MPI_FILE_SET_VIEW datarep read_conversion_fn write_conversion_fn dtype_file_extent_fn 309
MPI_FILE_SET_ATOMICITY(fh, flag)
INOUT fh
IN flag
int MPI_File_set_atomicity(MPI_File fh, int flag)
MPI_FILE_SET_ATOMICITY(FH, FLAG, IERROR)
INTEGER FH, IERROR
LOGICAL FLAG
MPI 204 MPI_FILE_SET_ATOMICITY

MPI_FILE_SET_ATOMICITY is collective over the group that opened fh: flag=true selects atomic mode, flag=false selects nonatomic mode.

MPI_FILE_GET_ATOMICITY(fh, flag)
IN fh
OUT flag
int MPI_File_get_atomicity(MPI_File fh, int * flag)
MPI_FILE_GET_ATOMICITY(FH, FLAG, IERROR)
INTEGER FH, IERROR
LOGICAL FLAG
MPI 205 MPI_FILE_GET_ATOMICITY

MPI_FILE_GET_ATOMICITY returns in flag the mode currently set with MPI_FILE_SET_ATOMICITY for fh: flag=true means atomic, flag=false nonatomic.

MPI_FILE_SYNC(fh)
INOUT fh
int MPI_File_sync(MPI_File fh)
MPI_FILE_SYNC(FH, IERROR)
INTEGER FH, IERROR
MPI 206 MPI_FILE_SYNC

MPI_FILE_SYNC is collective over the group that opened fh; it forces all data previously written to fh by the calling process to be transferred to the storage device.
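A minimal sketch of the two ways to make a write by one process visible to a read by another without collective calls: either switch the file to atomic mode, or use the sync-barrier-sync sequence; that fh was opened on MPI_COMM_WORLD is an assumption for the example.

/* variant 1: atomic mode */
MPI_File_set_atomicity(fh, 1);

/* variant 2: nonatomic mode with explicit synchronization */
if (rank == 0) {
    MPI_File_write_at(fh, 0, buf, 10, MPI_INT, &st);
    MPI_File_sync(fh);        /* push the data to the storage device */
}
MPI_Barrier(MPI_COMM_WORLD);
if (rank == 1) {
    MPI_File_sync(fh);        /* pick up the new file state */
    MPI_File_read_at(fh, 0, buf, 10, MPI_INT, &st);
}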
328 21.6 MPI A1(100) 4 P1(4) A1 P1 A1(1:25) A1(26:50) A1(51:75) A1(76:100) A1 P1(1) P1(2) P1(3) P1(4) P1 A1 P1 97 A1(1:100:4) A1(2:100:4) A1(3:100:4) A1(4:100:4) A1 P1(1) P1(2) P1(3) P1(4) P1 98 A1 P1 2 /1,2/9,10/.. /3,4/11,12/.. /5,6/13,14/.. /7,8/15,16/.. A1(1:100:8) A1(3:100:8) A1(5:100:8) A1(7:100:8) A1(2:100:8) A1(4:100:8) A1(6:100:8) A1(8:100:8) A1 P1(1) P1(2) P1(3) P1(4) P
329 MPI_TYPE_CREATE_DARRAY(size,rank,ndims,array_of_gsizes,array_of_distribs, array_of_dargs,array_of_psizes,order,oldtype,newtype) IN size IN rank IN ndims IN array_of_gsize IN array_of_distribs IN array_of_dargs IN array_of_psizes IN older C FORTRAN IN oldtype OUT newtype int MPI_Type_create_darray(int size, int rank, int ndims, int array_of_gsizes[], int array_of_distribs[], int array_of_dargs[], int array_of_psizes[], int order, MPI_Datatype oldtype, MPI_Datatype * newtype) MPI_TYPE_CREATE_DARRAY(SIZE,RANK,NDIMS,ARRAY_OF_GSIZES, ARRAY_OF_DISTRIBS,ARRAY_OF_DARGS, ARRAY_OF_PSIZE,ORDER, ORDER, OLDTYPE, NEWTYPE, IERROR) INTEGER SIZE,RANK,NDIMS, ARRAY_OF_GSIZE(*), ARRAY_OF_DISTRIBS(*), ARRAY_OF_DARGS(*), ARRAY_OF_PSIZES(*), ORDER, OLDTYPE, MPI 207 MPI_TYPE_CREATE_DARRAY MPI_TYPE_CREATE_DARRAY ndims array_of_gsize array_of_distribs array_of_dargs array_of_dargs MPI_DISTRIBUTE_DFLT_DARG size array_of_psizes older FORTRAN MPI_ORDER_FORTRAN C MPI_ORDER_C oldtype newtype m*n 2 6 2*3 MPI_Type_create_darray filetype filetype gsizes[0]=m; gzises[1]=n; distribs[0]=mpi_distribute_block; distribs[1]=mpi_distribute_block; 312
330 dargs[0]=mpi_distribute_dflt_darg; dargs[1]=mpi_distribute_dflt_darg; psizes[0]=2; psizes[1]=3; MPI_Comm_rank(MPI_COMM_WORLD,&rank); MPI_Type_create_darray(6,rank,2,gsizes,distribs,dargs,psizes,MPI_ORDER_C,MPI_FLOAT, &filetype); MPI_Type_commit(&filetype); MPI_File_open(MPI_COMM_WORLD, "datafile",mpi_mode_create MPI_MODE_WRONLY,MPI_INFO_NULL,&fh); MPI_File_set_view(fh,0,MPI_FLOAT,filetype,"native",MPI_INFO_NULL); local_array_size=num_local_rows*num_local_cols; MPI_File_write_all(fh,local_array,local_array_size,MPI_FLOAT,&status); MPI_File_close(&fh); 67 MPI_TYPE_CREATE_SUBARRAY(ndims,array_of_sizes, array_of_subsizes, array_of_starts, order, oldtype, newtype) IN ndims IN array_of_sizes IN array_of_subsizes IN array_of_starts IN order IN oldtype OUT newtype int MPI_Type_create_subarray(int ndims, int array_of_sizes[], int array_of_subsizes[], int array_of_starts[], int order, MPI_Datatype oldtype, MPI_Datatype * newtype) MPI_TYPE_CREAT_SUBARRAY(NDIMS, ARRAY_OF_SIZES, ARRAY_OF_SUBSIZES,ARRAY_OF_STARTS,ORDER,OLDTYPE,NEWTYPE, IERROR) INTEGER NDIMS, ARRAY_OF_SIZES(*), ARRAY_OF_SUBSIZES(*), ARRAY_OF_STARTS(*), ORDER, OLDTYPE, NEWTYPE, IERROR MPI 208 MPI_TYPE_CREATE_SUBARRAY MPI_TYPE_CREATE_SUBARRAY ndims array_of_sizes array_of_subsizes array_of_starts order oldtype newtype m*n 2 6 2*3 m/2 n/3 start_indices[0]=coords[0]*lsizes[0] 313
The same m*n array over a 2*3 process grid can also be described with MPI_TYPE_CREATE_SUBARRAY:

MPI_TYPE_CREATE_SUBARRAY(ndims, array_of_sizes, array_of_subsizes, array_of_starts, order, oldtype, newtype)
  IN  ndims              number of array dimensions
  IN  array_of_sizes     number of elements of type oldtype in each dimension of the full array
  IN  array_of_subsizes  number of elements of type oldtype in each dimension of the subarray
  IN  array_of_starts    starting coordinates of the subarray in each dimension
  IN  order              array storage order flag
  IN  oldtype            old datatype
  OUT newtype            new datatype
int MPI_Type_create_subarray(int ndims, int array_of_sizes[], int array_of_subsizes[], int array_of_starts[], int order, MPI_Datatype oldtype, MPI_Datatype *newtype)
MPI_TYPE_CREATE_SUBARRAY(NDIMS, ARRAY_OF_SIZES, ARRAY_OF_SUBSIZES, ARRAY_OF_STARTS, ORDER, OLDTYPE, NEWTYPE, IERROR)
  INTEGER NDIMS, ARRAY_OF_SIZES(*), ARRAY_OF_SUBSIZES(*), ARRAY_OF_STARTS(*), ORDER, OLDTYPE, NEWTYPE, IERROR
MPI Call 208: MPI_TYPE_CREATE_SUBARRAY

MPI_TYPE_CREATE_SUBARRAY builds a newtype that describes an ndims-dimensional subarray of a larger ndims-dimensional array: array_of_sizes gives the extents of the full array, array_of_subsizes the extents of the subarray, array_of_starts its starting coordinates, and order the storage order. Each local piece of our example is m/2 by n/3, and its starting coordinates in the global array are
  start_indices[0] = coords[0]*lsizes[0]
  start_indices[1] = coords[1]*lsizes[1]
where coords are the Cartesian coordinates of the process:

gsizes[0]=m;  gsizes[1]=n;
psizes[0]=2;  psizes[1]=3;
lsizes[0]=m/psizes[0];  lsizes[1]=n/psizes[1];
dims[0]=2;  dims[1]=3;
periods[0]=periods[1]=1;
MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &comm);
MPI_Comm_rank(comm, &rank);
MPI_Cart_coords(comm, rank, 2, coords);
start_indices[0]=coords[0]*lsizes[0];
start_indices[1]=coords[1]*lsizes[1];
MPI_Type_create_subarray(2, gsizes, lsizes, start_indices,
                         MPI_ORDER_C, MPI_FLOAT, &filetype);
MPI_Type_commit(&filetype);
MPI_File_open(MPI_COMM_WORLD, "datafile",
              MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
MPI_File_set_view(fh, 0, MPI_FLOAT, filetype, "native", MPI_INFO_NULL);
/* the local array has a ghost (halo) area of width 4 on every side */
memsizes[0]=lsizes[0]+8;  memsizes[1]=lsizes[1]+8;
start_indices[0]=start_indices[1]=4;  /* skip the ghost area */
MPI_Type_create_subarray(2, memsizes, lsizes, start_indices,
                         MPI_ORDER_C, MPI_FLOAT, &memtype);
MPI_Type_commit(&memtype);
MPI_File_write_all(fh, local_array, 1, memtype, &status);
MPI_File_close(&fh);

This kind of parallel I/O for distributed arrays is an important part of the MPI-2 I/O interface.
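The memtype/filetype pair works for reads as well. A minimal sketch (not from the book) that reads each process's block back into the interior of its haloed local array:

MPI_File_open(MPI_COMM_WORLD, "datafile", MPI_MODE_RDONLY,
              MPI_INFO_NULL, &fh);
MPI_File_set_view(fh, 0, MPI_FLOAT, filetype, "native", MPI_INFO_NULL);
MPI_File_read_all(fh, local_array, 1, memtype, &status);
MPI_File_close(&fh);

Because memtype already describes where the lsizes[0]*lsizes[1] interior elements sit inside the larger memsizes[0]*memsizes[1] allocation, a count of 1 transfers the whole block and the ghost cells are left untouched.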
MPI Network Resources

The MPI standard documents produced by the MPI Forum (MPIF) are available from netlib and from the Forum's own archive at ftp://ftp.mpi-forum.org/pub/docs/. The ANL/MSU MPICH implementation, together with its documentation, can be obtained from ftp://ftp.mcs.anl.gov/pub/mpi; the LAM implementation is another freely available MPI. Discussions on MPI take place in the newsgroup comp.parallel.mpi. The example programs from the book Using MPI are available at ftp://ftp.mcs.anl.gov/pub/mpi/using/examples, and those from Using MPI-2 at ftp://ftp.mcs.anl.gov/pub/mpi/using2/examples.
References

[Ado98] Jean-Marc Adamo. Multi-threaded Object-oriented MPI-based Message Passing Interface: the ARCH Library. Boston: Kluwer Academic, 1998.
[Ads97] Jeanne C. Adams. Fortran 95 Handbook. 1997.
[Akn87] Akinori Yonezawa and Mario Tokoro (eds.). Object-oriented Concurrent Programming. Cambridge, Mass.: MIT Press, 1987.
[Akl89] Selim G. Akl. The Design and Analysis of Parallel Algorithms. Englewood Cliffs, N.J.: Prentice Hall, 1989.
[Alv98] Vassil Alexandrov, Jack Dongarra (eds.). Recent Advances in Parallel Virtual Machine and Message Passing Interface: 5th European PVM/MPI User's Group Meeting, Liverpool, UK, September 7-9, 1998: Proceedings. Berlin; New York: Springer, 1998.
[Ans91] Gregory R. Andrews. Concurrent Programming: Principles and Practice. Redwood City, Calif.: Benjamin/Cummings, 1991.
[Bab88] Robert G. Babb (ed.). Programming Parallel Processors. Reading, Mass.: Addison-Wesley, 1988.
[Bar92] Barr E. Bauer. Practical Parallel Programming. San Diego: Academic Press, 1992.
[Brc93] Bruce P. Lester. The Art of Parallel Programming. Englewood Cliffs, N.J.: Prentice Hall, 1993.
[Bus88] Alan Burns. Programming in Occam 2. Wokingham, England; Reading, Mass.: Addison-Wesley, 1988.
[Car90] Nicholas Carriero, David Gelernter. How to Write Parallel Programs: A First Course. Cambridge, Mass.: MIT Press, 1990.
[Chay88] K. Mani Chandy, Jayadev Misra. Parallel Program Design: A Foundation. Reading, Mass.: Addison-Wesley, 1988.
[Chs92] Andrew Cheese. Parallel Execution of Parlog. Berlin: Springer-Verlag, 1992.
[Con92] Michael H. Coffin. Parallel Programming: A New Approach. Summit, NJ: Silicon Press, 1992.
[Fok95] Lloyd D. Fosdick et al. An Introduction to High-Performance Scientific Computing. 1995.
[For94] Ian Foster. Designing and Building Parallel Programs: Concepts and Tools for Parallel Software Engineering. Reading, Mass.: Addison-Wesley, 1994.
[Gen88] Narain Gehani, Andrew McGettrick. Concurrent Programming. Wokingham, England: Addison-Wesley, 1988.
[Gen89] Narain Gehani, William D. Roome. The Concurrent C Programming Language. Summit, NJ: Silicon Press, 1989.
[Gej97] Robert A. van de Geijn. Using PLAPACK: Parallel Linear Algebra Package. 1997.
[Get94] Al Geist et al. PVM: Parallel Virtual Machine -- A User's Guide and Tutorial for Network Parallel Computing. 1994.
[Gry87] Steve Gregory. Parallel Logic Programming in PARLOG: The Language and Its Implementation. Wokingham, England; Reading, Mass.: Addison-Wesley, 1987.
[Grp99a] William Gropp, Ewing Lusk, Anthony Skjellum. Using MPI: Portable Parallel Programming with the Message-Passing Interface, 2nd edition. Cambridge, Mass.: MIT Press, 1999.
[Grp99b] William Gropp, Ewing Lusk, Rajeev Thakur. Using MPI-2: Advanced Features of the Message-Passing Interface. Cambridge, Mass.: MIT Press, 1999.
[Har91] Philip J. Hatcher, Michael J. Quinn. Data-Parallel Programming on MIMD Computers. Cambridge, Mass.: MIT Press, 1991.
[Kol94] Charles H. Koelbel, David B. Loveman et al. The High Performance Fortran Handbook. 1994.
[Pen96] Guy-René Perrin, Alain Darte (eds.). The Data Parallel Programming Model: Foundations, HPF Realization, and Scientific Applications. Berlin; New York: Springer, 1996.
[Pet87] R. H. Perrott. Parallel Programming. Wokingham, England: Addison-Wesley, 1987.
[Pos88] Constantine D. Polychronopoulos. Parallel Programming and Compilers. Boston: Kluwer Academic, 1988.
[Ral91] Susann Ragsdale (ed.). Parallel Programming. New York: McGraw-Hill, 1991.
[Sat88] Gary Sabot. The Paralation Model: Architecture-Independent Parallel Programming. Cambridge, Mass.: MIT Press, 1988.
[Snr97] Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, Jack Dongarra. MPI: The Complete Reference. MIT Press, 1997.
[Snr98] Marc Snir et al. MPI -- The Complete Reference, 2nd edition. Cambridge, Mass.: MIT Press, 1998.
[Snw92] C. R. Snow. Concurrent Programming. New York: Cambridge University Press, 1992.
[Tik91] Evan Tick. Parallel Logic Programming. Cambridge, Mass.: MIT Press, 1991.
[Wim96] William H. Press et al. Numerical Recipes in Fortran 90: The Art of Parallel Scientific Computing, 2nd edition. Cambridge; New York: Cambridge University Press, 1996.
[Wis90] Shirley A. Williams. Programming Models for Parallel Systems. New York: J. Wiley, 1990.
[Win96] Gregory V. Wilson et al. (eds.). Parallel Programming Using C++. 1996.
[Win95] Gregory V. Wilson. Practical Parallel Programming. 1995.
[Yag87] Rong Yang. P-Prolog, a Parallel Logic Programming Language. Singapore: World Scientific, 1987.
[Yun93] C. K. Yuen et al. Parallel Lisp Systems: A Study of Languages and Architectures. London: Chapman & Hall, 1993.
Glossary

Aliased Argument, Asynchronous Communication, Attributes, Bandwidth, Blocking Communication, Blocking Receive, Blocking Send, Buffer, Buffered Communication Mode, Caching of Attributes, Cartesian Topology, Collective Communication, Communication Modes, Communication Processor, Communicator, Context, Contiguous Data, Datatypes, Deadlock, Event, Graph Topology, Group, Heterogeneous Computing, Intercommunicator, Intracommunicator, Latency, MPI (Message Passing Interface), MPIF (MPI Forum), Multicomputer, Nonblocking Communication, Nonblocking Receive, Nonblocking Send, Node, Persistent Requests, Physical Topology, Point-to-Point Communication, Portability, Process, Processor, PVM (Parallel Virtual Machine), Rank, Ready Communication Mode, Reduce, Request Object, Safe Programs, Standard Communication Mode, Status Object, Subgroup, Synchronization, Synchronous Communication Mode, Thread, Topology, Type Map, Type Signature, User-Defined Topology, Virtual Shared Memory, Virtual Topology
Index of MPI Calls

MPI 1 MPI_INIT; MPI 2 MPI_FINALIZE; MPI 3 MPI_COMM_RANK; MPI 4 MPI_COMM_SIZE; MPI 5 MPI_SEND; MPI 6 MPI_RECV; MPI 7 MPI_WTIME; MPI 8 MPI_WTICK; MPI 9 MPI_GET_PROCESSOR_NAME; MPI 10 MPI_GET_VERSION; MPI 11 MPI_INITIALIZED; MPI 12 MPI_ABORT; MPI 13 MPI_SENDRECV; MPI 14 MPI_SENDRECV_REPLACE; MPI 15 MPI_BSEND; MPI 16 MPI_BUFFER_ATTACH; MPI 17 MPI_BUFFER_DETACH; MPI 18 MPI_SSEND; MPI 19 MPI_RSEND; MPI 20 MPI_ISEND; MPI 21 MPI_IRECV; MPI 22 MPI_ISSEND; MPI 23 MPI_IBSEND; MPI 24 MPI_IRSEND; MPI 25 MPI_WAIT; MPI 26 MPI_TEST; MPI 27 MPI_WAITANY; MPI 28 MPI_WAITALL; MPI 29 MPI_WAITSOME; MPI 30 MPI_TESTANY; MPI 31 MPI_TESTALL; MPI 32 MPI_TESTSOME; MPI 33 MPI_CANCEL; MPI 34 MPI_TEST_CANCELLED; MPI 35 MPI_REQUEST_FREE; MPI 36 MPI_PROBE; MPI 37 MPI_IPROBE; MPI 38 MPI_SEND_INIT; MPI 39 MPI_BSEND_INIT; MPI 40 MPI_SSEND_INIT; MPI 41 MPI_RSEND_INIT; MPI 42 MPI_RECV_INIT; MPI 43 MPI_START; MPI 44 MPI_STARTALL; MPI 45 MPI_BCAST; MPI 46 MPI_GATHER; MPI 47 MPI_GATHERV; MPI 48 MPI_SCATTER; MPI 49 MPI_SCATTERV; MPI 50 MPI_ALLGATHER; MPI 51 MPI_ALLGATHERV; MPI 52 MPI_ALLTOALL; MPI 53 MPI_ALLTOALLV; MPI 54 MPI_BARRIER; MPI 55 MPI_REDUCE; MPI 56 MPI_ALLREDUCE; MPI 57 MPI_REDUCE_SCATTER; MPI 58 MPI_SCAN; MPI 59 MPI_OP_CREATE; MPI 60 MPI_OP_FREE; MPI 61 MPI_TYPE_CONTIGUOUS; MPI 62 MPI_TYPE_VECTOR; MPI 63 MPI_TYPE_HVECTOR; MPI 64 MPI_TYPE_INDEXED; MPI 65 MPI_TYPE_HINDEXED; MPI 66 MPI_TYPE_STRUCT; MPI 67 MPI_TYPE_COMMIT; MPI 68 MPI_TYPE_FREE; MPI 69 MPI_ADDRESS; MPI 70 MPI_TYPE_EXTENT; MPI 71 MPI_TYPE_SIZE; MPI 72 MPI_GET_ELEMENTS; MPI 73 MPI_GET_COUNT; MPI 74 MPI_TYPE_LB; MPI 75 MPI_TYPE_UB; MPI 76 MPI_PACK; MPI 77 MPI_UNPACK; MPI 78 MPI_PACK_SIZE; MPI 79 MPI_GROUP_SIZE; MPI 80 MPI_GROUP_RANK; MPI 81 MPI_GROUP_TRANSLATE_RANKS; MPI 82 MPI_GROUP_COMPARE; MPI 83 MPI_COMM_GROUP; MPI 84 MPI_GROUP_UNION; MPI 85 MPI_GROUP_INTERSECTION; MPI 86 MPI_GROUP_DIFFERENCE; MPI 87 MPI_GROUP_INCL; MPI 88 MPI_GROUP_EXCL; MPI 89 MPI_GROUP_RANGE_INCL; MPI 90 MPI_GROUP_RANGE_EXCL; MPI 91 MPI_GROUP_FREE; MPI 92 MPI_COMM_COMPARE; MPI 93 MPI_COMM_DUP; MPI 94 MPI_COMM_CREATE; MPI 95 MPI_COMM_SPLIT; MPI 96 MPI_COMM_FREE; MPI 97 MPI_COMM_TEST_INTER; MPI 98 MPI_COMM_REMOTE_SIZE; MPI 99 MPI_COMM_REMOTE_GROUP; MPI 100 MPI_INTERCOMM_CREATE; MPI 101 MPI_INTERCOMM_MERGE; MPI 102 MPI_KEYVAL_CREATE; MPI 103 MPI_KEYVAL_FREE; MPI 104 MPI_ATTR_PUT; MPI 105 MPI_ATTR_GET; MPI 106 MPI_ATTR_DELETE; MPI 107 MPI_CART_CREATE; MPI 108 MPI_DIMS_CREATE; MPI 109 MPI_TOPO_TEST; MPI 110 MPI_CART_GET; MPI 111 MPI_CART_RANK; MPI 112 MPI_CARTDIM_GET; MPI 113 MPI_CART_SHIFT; MPI 114 MPI_CART_COORDS; MPI 115 MPI_CART_SUB; MPI 116 MPI_CART_MAP; MPI 117 MPI_GRAPH_CREATE; MPI 118 MPI_GRAPHDIMS_GET; MPI 119 MPI_GRAPH_GET; MPI 120 MPI_GRAPH_NEIGHBORS_COUNT; MPI 121 MPI_GRAPH_NEIGHBORS; MPI 122 MPI_GRAPH_MAP; MPI 123 MPI_ERRHANDLER_CREATE; MPI 124 MPI_ERRHANDLER_SET; MPI 125 MPI_ERRHANDLER_GET; MPI 126 MPI_ERRHANDLER_FREE; MPI 127 MPI_ERROR_STRING; MPI 128 MPI_ERROR_CLASS; MPI 129 MPI_COMM_SPAWN; MPI 130 MPI_COMM_GET_PARENT; MPI 131 MPI_COMM_SPAWN_MULTIPLE; MPI 132 MPI_OPEN_PORT; MPI 133 MPI_COMM_ACCEPT; MPI 134 MPI_CLOSE_PORT; MPI 135 MPI_COMM_CONNECT; MPI 136 MPI_COMM_DISCONNECT; MPI 137 MPI_PUBLISH_NAME; MPI 138 MPI_LOOKUP_NAME; MPI 139 MPI_UNPUBLISH_NAME; MPI 140 MPI_COMM_JOIN; MPI 141 MPI_WIN_CREATE; MPI 142 MPI_WIN_FREE; MPI 143 MPI_PUT; MPI 144 MPI_GET; MPI 145 MPI_ACCUMULATE; MPI 146 MPI_WIN_GET_GROUP; MPI 147 MPI_WIN_FENCE; MPI 148 MPI_WIN_START; MPI 149 MPI_WIN_COMPLETE; MPI 150 MPI_WIN_POST; MPI 151 MPI_WIN_WAIT; MPI 152 MPI_WIN_TEST; MPI 153 MPI_WIN_LOCK; MPI 154 MPI_WIN_UNLOCK; MPI 155 MPI_FILE_OPEN; MPI 156 MPI_FILE_CLOSE; MPI 157 MPI_FILE_DELETE; MPI 158 MPI_FILE_SET_SIZE; MPI 159 MPI_FILE_PREALLOCATE; MPI 160 MPI_FILE_GET_SIZE; MPI 161 MPI_FILE_GET_GROUP; MPI 162 MPI_FILE_GET_AMODE; MPI 163 MPI_FILE_SET_INFO; MPI 164 MPI_FILE_GET_INFO; MPI 165 MPI_FILE_READ_AT; MPI 166 MPI_FILE_WRITE_AT; MPI 167 MPI_FILE_READ_AT_ALL; MPI 168 MPI_FILE_WRITE_AT_ALL; MPI 169 MPI_FILE_IREAD_AT; MPI 170 MPI_FILE_IWRITE_AT; MPI 171 MPI_FILE_READ_AT_ALL_BEGIN; MPI 172 MPI_FILE_READ_AT_ALL_END; MPI 173 MPI_FILE_WRITE_AT_ALL_BEGIN; MPI 174 MPI_FILE_WRITE_AT_ALL_END; MPI 175 MPI_FILE_SET_VIEW; MPI 176 MPI_FILE_GET_VIEW; MPI 177 MPI_FILE_SEEK; MPI 178 MPI_FILE_GET_POSITION; MPI 179 MPI_FILE_GET_BYTE_OFFSET; MPI 180 MPI_FILE_READ; MPI 181 MPI_FILE_WRITE; MPI 182 MPI_FILE_READ_ALL; MPI 183 MPI_FILE_WRITE_ALL; MPI 184 MPI_FILE_IREAD; MPI 185 MPI_FILE_IWRITE; MPI 186 MPI_FILE_READ_ALL_BEGIN; MPI 187 MPI_FILE_READ_ALL_END; MPI 188 MPI_FILE_WRITE_ALL_BEGIN; MPI 189 MPI_FILE_WRITE_ALL_END; MPI 190 MPI_FILE_SEEK_SHARED; MPI 191 MPI_FILE_GET_POSITION_SHARED; MPI 192 MPI_FILE_READ_SHARED; MPI 193 MPI_FILE_WRITE_SHARED; MPI 194 MPI_FILE_READ_ORDERED; MPI 195 MPI_FILE_WRITE_ORDERED; MPI 196 MPI_FILE_IREAD_SHARED; MPI 197 MPI_FILE_IWRITE_SHARED; MPI 198 MPI_FILE_READ_ORDERED_BEGIN; MPI 199 MPI_FILE_READ_ORDERED_END; MPI 200 MPI_FILE_WRITE_ORDERED_BEGIN; MPI 201 MPI_FILE_WRITE_ORDERED_END; MPI 202 MPI_FILE_GET_TYPE_EXTENT; MPI 203 MPI_REGISTER_DATAREP; MPI 204 MPI_FILE_SET_ATOMICITY; MPI 205 MPI_FILE_GET_ATOMICITY; MPI 206 MPI_FILE_SYNC; MPI 207 MPI_TYPE_CREATE_DARRAY; MPI 208 MPI_TYPE_CREATE_SUBARRAY
Appendix 1  MPI Predefined Datatypes and Constants

1. C predefined datatypes:
  MPI_CHAR            char
  MPI_BYTE            (untyped byte)
  MPI_SHORT           short int
  MPI_INT             int
  MPI_LONG            long int
  MPI_FLOAT           float
  MPI_DOUBLE          double
  MPI_UNSIGNED_CHAR   unsigned char
  MPI_UNSIGNED_SHORT  unsigned short int
  MPI_UNSIGNED        unsigned int
  MPI_UNSIGNED_LONG   unsigned long int
  MPI_LONG_DOUBLE     long double (some systems may not implement)
  MPI_LONG_LONG_INT   long long int (some systems may not implement)

2. C pair datatypes for MPI_MAXLOC and MPI_MINLOC:
  MPI_FLOAT_INT        struct { float, int }
  MPI_LONG_INT         struct { long, int }
  MPI_DOUBLE_INT       struct { double, int }
  MPI_SHORT_INT        struct { short, int }
  MPI_2INT             struct { int, int }
  MPI_LONG_DOUBLE_INT  struct { long double, int }

3. Special-purpose datatypes:
  MPI_PACKED  for MPI_Pack and MPI_Unpack
  MPI_UB      for MPI_Type_struct; an upper-bound indicator
  MPI_LB      for MPI_Type_struct; a lower-bound indicator

4. Fortran predefined datatypes:
  MPI_REAL              REAL
  MPI_INTEGER           INTEGER
  MPI_LOGICAL           LOGICAL
  MPI_DOUBLE_PRECISION  DOUBLE PRECISION
  MPI_COMPLEX           COMPLEX
  MPI_DOUBLE_COMPLEX    complex*16
5. Optional Fortran datatypes:
  MPI_INTEGER1  integer*1
  MPI_INTEGER2  integer*2
  MPI_INTEGER4  integer*4
  MPI_REAL4     real*4
  MPI_REAL8     real*8

6. Fortran pair datatypes for MPI_MAXLOC and MPI_MINLOC:
  MPI_2INTEGER           INTEGER, INTEGER
  MPI_2REAL              REAL, REAL
  MPI_2DOUBLE_PRECISION  DOUBLE PRECISION, DOUBLE PRECISION
  MPI_2COMPLEX           COMPLEX, COMPLEX
  MPI_2DOUBLE_COMPLEX    complex*16, complex*16

7. Communicators: C type MPI_Comm, Fortran type INTEGER. Predefined communicators: MPI_COMM_WORLD, MPI_COMM_SELF.

8. Groups: C type MPI_Group, Fortran type INTEGER. Predefined group: MPI_GROUP_EMPTY.

9. Results of communicator and group comparisons: MPI_IDENT, MPI_CONGRUENT, MPI_SIMILAR, MPI_UNEQUAL.

10. Predefined reduction operations for MPI_REDUCE, MPI_ALLREDUCE, MPI_REDUCE_SCATTER, and MPI_SCAN (C type MPI_Op, Fortran type INTEGER): MPI_MAX, MPI_MIN, MPI_SUM, MPI_PROD, MPI_LAND,
MPI_BAND, MPI_LOR, MPI_BOR, MPI_LXOR, MPI_BXOR, MPI_MINLOC, MPI_MAXLOC.

11. Predefined attribute keys (C and Fortran): MPI_TAG_UB (largest usable tag value), MPI_HOST (rank of the host process), MPI_IO (rank of a process that can perform I/O), MPI_WTIME_IS_GLOBAL (whether the MPI_WTIME clocks are synchronized).

12. Null handles: MPI_COMM_NULL, MPI_OP_NULL, MPI_GROUP_NULL, MPI_DATATYPE_NULL, MPI_REQUEST_NULL, MPI_ERRHANDLER_NULL.

13. Miscellaneous constants: MPI_MAX_PROCESSOR_NAME, MPI_MAX_ERROR_STRING, MPI_UNDEFINED, MPI_UNDEFINED_RANK, MPI_KEYVAL_INVALID, MPI_BSEND_OVERHEAD, MPI_PROC_NULL, MPI_ANY_SOURCE (wildcard source), MPI_ANY_TAG (wildcard tag), MPI_BOTTOM.

14. Topology types: MPI_GRAPH, MPI_CART.
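The pair datatypes of items 2 and 6 exist precisely for MPI_MINLOC and MPI_MAXLOC. A minimal sketch (not from the book) that finds the global maximum of a double and the rank that holds it:

#include <stdio.h>
#include <mpi.h>

void global_max(double local, MPI_Comm comm)
{
    struct { double value; int rank; } in, out;
    int rank;

    MPI_Comm_rank(comm, &rank);
    in.value = local;   /* the value being compared */
    in.rank  = rank;    /* the index carried along with it */

    MPI_Allreduce(&in, &out, 1, MPI_DOUBLE_INT, MPI_MAXLOC, comm);

    if (rank == 0)      /* out is identical on every process */
        printf("max = %f on rank %d\n", out.value, out.rank);
}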
15. Status object: C type MPI_Status, with the fields MPI_SOURCE, MPI_TAG, and MPI_ERROR.

16. Miscellaneous types and predefined functions: MPI_Aint (C address-sized integer), MPI_Handler_function (C error-handler function type), MPI_User_function (C user-defined reduction function type), MPI_Copy_function with the predefined MPI_NULL_COPY_FN and MPI_DUP_FN, MPI_Delete_function with the predefined MPI_NULL_DELETE_FN; predefined error handlers MPI_ERRORS_ARE_FATAL and MPI_ERRORS_RETURN.

17. MPI error classes:
  MPI_SUCCESS        no error
  MPI_ERR_BUFFER     invalid buffer argument
  MPI_ERR_COUNT      invalid count argument
  MPI_ERR_TYPE       invalid datatype argument
  MPI_ERR_TAG        invalid tag argument
  MPI_ERR_COMM       invalid communicator
  MPI_ERR_RANK       invalid rank
  MPI_ERR_ROOT       invalid root
  MPI_ERR_GROUP      invalid group
  MPI_ERR_OP         invalid operation
  MPI_ERR_TOPOLOGY   invalid topology
  MPI_ERR_DIMS       invalid dimension argument
  MPI_ERR_ARG        invalid argument of some other kind
  MPI_ERR_UNKNOWN    unknown error
  MPI_ERR_TRUNCATE   message truncated on receive
  MPI_ERR_OTHER      known error not in this list
  MPI_ERR_INTERN     internal MPI error
  MPI_ERR_IN_STATUS  error code is in the status object
  MPI_ERR_PENDING    pending request
  MPI_ERR_REQUEST    invalid request handle
  MPI_ERR_LASTCODE   last error code
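A minimal sketch (not from the book) showing how the status fields of item 15, the wildcard constants of item 13, and MPI_Error_string work together after a wildcard receive:

#include <stdio.h>
#include <mpi.h>

void receive_any(MPI_Comm comm)
{
    int buf, err, len;
    char msg[MPI_MAX_ERROR_STRING];
    MPI_Status status;

    err = MPI_Recv(&buf, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                   comm, &status);
    if (err != MPI_SUCCESS) {
        MPI_Error_string(err, msg, &len);   /* turn the code into text */
        printf("MPI_Recv failed: %s\n", msg);
        return;
    }
    printf("got %d from rank %d with tag %d\n",
           buf, status.MPI_SOURCE, status.MPI_TAG);
}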
Appendix 2  Calls Provided by MPICH

1. MPI calls:
MPI_Abort, MPI_Address, MPI_Allgather, MPI_Allgatherv, MPI_Allreduce, MPI_Alltoall, MPI_Alltoallv, MPI_Attr_delete, MPI_Attr_get, MPI_Attr_put, MPI_Barrier, MPI_Bcast, MPI_Bsend, MPI_Bsend_init, MPI_Buffer_attach, MPI_Buffer_detach, MPI_Cancel, MPI_Cart_coords, MPI_Cart_create, MPI_Cart_get, MPI_Cart_map, MPI_Cart_rank, MPI_Cart_shift, MPI_Cart_sub, MPI_Cartdim_get, MPI_CHAR, MPI_Comm_compare, MPI_Comm_create, MPI_Comm_dup, MPI_Comm_free, MPI_Comm_group, MPI_Comm_rank, MPI_Comm_remote_group, MPI_Comm_remote_size, MPI_Comm_size, MPI_Comm_split, MPI_Comm_test_inter, MPI_Dims_create, MPI_DUP_FN, MPI_Errhandler_create, MPI_Errhandler_free, MPI_Errhandler_get, MPI_Errhandler_set, MPI_Error_class, MPI_Error_string, MPI_File_c2f, MPI_File_close, MPI_File_delete, MPI_File_f2c, MPI_File_get_amode, MPI_File_get_atomicity, MPI_File_get_byte_offset, MPI_File_get_errhandler, MPI_File_get_group, MPI_File_get_info, MPI_File_get_position, MPI_File_get_position_shared, MPI_File_get_size, MPI_File_get_type_extent, MPI_File_get_view, MPI_File_iread, MPI_File_iread_at, MPI_File_iread_shared, MPI_File_iwrite, MPI_File_iwrite_at, MPI_File_iwrite_shared, MPI_File_open, MPI_File_preallocate, MPI_File_read, MPI_File_read_all, MPI_File_read_all_begin, MPI_File_read_all_end, MPI_File_read_at, MPI_File_read_at_all, MPI_File_read_at_all_begin, MPI_File_read_at_all_end, MPI_File_read_ordered, MPI_File_read_ordered_begin, MPI_File_read_ordered_end, MPI_File_read_shared, MPI_File_seek, MPI_File_seek_shared, MPI_File_set_atomicity, MPI_File_set_errhandler, MPI_File_set_info, MPI_File_set_size, MPI_File_set_view, MPI_File_sync, MPI_File_write, MPI_File_write_all, MPI_File_write_all_begin, MPI_File_write_all_end, MPI_File_write_at, MPI_File_write_at_all, MPI_File_write_at_all_begin, MPI_File_write_at_all_end, MPI_File_write_ordered, MPI_File_write_ordered_begin, MPI_File_write_ordered_end, MPI_File_write_shared, MPI_Finalize, MPI_Finalized, MPI_Gather, MPI_Gatherv, MPI_Get_count, MPI_Get_elements, MPI_Get_processor_name, MPI_Get_version, MPI_Graph_create, MPI_Graph_get, MPI_Graph_map, MPI_Graph_neighbors, MPI_Graph_neighbors_count, MPI_Graphdims_get, MPI_Group_compare, MPI_Group_difference, MPI_Group_excl, MPI_Group_free, MPI_Group_incl, MPI_Group_intersection, MPI_Group_range_excl, MPI_Group_range_incl, MPI_Group_rank, MPI_Group_size, MPI_Group_translate_ranks, MPI_Group_union, MPI_Ibsend, MPI_Info_c2f, MPI_Info_create, MPI_Info_delete, MPI_Info_dup, MPI_Info_f2c, MPI_Info_free, MPI_Info_get, MPI_Info_get_nkeys, MPI_Info_get_nthkey, MPI_Info_get_valuelen, MPI_Info_set, MPI_Init, MPI_Init_thread, MPI_Initialized, MPI_Int2handle, MPI_Intercomm_create, MPI_Intercomm_merge, MPI_Iprobe, MPI_Irecv, MPI_Irsend, MPI_Isend, MPI_Issend, MPI_Keyval_create, MPI_Keyval_free, MPI_NULL_COPY_FN, MPI_NULL_DELETE_FN, MPI_Op_create, MPI_Op_free, MPI_Pack, MPI_Pack_size, MPI_Pcontrol, MPI_Probe, MPI_Recv, MPI_Recv_init, MPI_Reduce, MPI_Reduce_scatter, MPI_Request_c2f, MPI_Request_free, MPI_Rsend, MPI_Rsend_init, MPI_Scan, MPI_Scatter, MPI_Scatterv, MPI_Send, MPI_Send_init, MPI_Sendrecv, MPI_Sendrecv_replace, MPI_Ssend, MPI_Ssend_init, MPI_Start, MPI_Startall, MPI_Status_c2f, MPI_Status_set_cancelled, MPI_Status_set_elements, MPI_Test, MPI_Test_cancelled, MPI_Testall, MPI_Testany, MPI_Testsome, MPI_Topo_test, MPI_Type_commit, MPI_Type_contiguous, MPI_Type_create_darray, MPI_Type_create_indexed_block, MPI_Type_create_subarray, MPI_Type_extent, MPI_Type_free, MPI_Type_get_contents, MPI_Type_get_envelope, MPI_Type_hindexed, MPI_Type_hvector, MPI_Type_indexed, MPI_Type_lb, MPI_Type_size, MPI_Type_struct, MPI_Type_ub, MPI_Type_vector, MPI_Unpack, MPI_Wait, MPI_Waitall, MPI_Waitany, MPI_Waitsome, MPI_Wtick, MPI_Wtime, MPIO_Request_c2f, MPIO_Request_f2c, MPIO_Test, MPIO_Wait

2. MPE calls:
CLOG logging routines: CLOG_commtype, CLOG_cput, CLOG_csync, CLOG_Finalize, CLOG_get_new_event, CLOG_get_new_state, CLOG_Init, CLOG_init_buffers, CLOG_mergelogs, CLOG_mergend, CLOG_msgtype, CLOG_newbuff, CLOG_nodebuffer2disk, CLOG_Output, CLOG_procbuf, CLOG_reclen, CLOG_rectype, CLOG_reinit_buff, CLOG_treesetup
MPE routines: MPE_Add_RGB_color, MPE_CaptureFile, MPE_Close_graphics, MPE_Comm_global_rank, MPE_Counter_create, MPE_Counter_free, MPE_Counter_nxtval, MPE_Decomp1d, MPE_Describe_event, MPE_Describe_state, MPE_Draw_circle, MPE_Draw_line, MPE_Draw_logic, MPE_Draw_point, MPE_Draw_points, MPE_Draw_string, MPE_Fill_circle, MPE_Fill_rectangle, MPE_Finish_log, MPE_Get_mouse_press, MPE_GetTags, MPE_Iget_mouse_press, MPE_Init_log, MPE_Initialized_logging, MPE_IO_Stdout_to_file, MPE_Line_thickness, MPE_Log_event, MPE_Log_get_event_number, MPE_Log_receive, MPE_Log_send, MPE_Make_color_array, MPE_Num_colors, MPE_Open_graphics, MPE_Print_datatype_pack_action, MPE_Print_datatype_unpack_action, MPE_ReturnTags, MPE_Seq_begin, MPE_Seq_end, MPE_Start_log, MPE_Stop_log, MPE_TagsEnd, MPE_Update
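As a closing illustration, here is a minimal sketch (not from the book) of the MPE logging routines listed above; it assumes an MPICH installation where mpe.h is available and the program is linked against the MPE logging library:

#include "mpi.h"
#include "mpe.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int ev_start, ev_end, i;
    double s = 0.0;

    MPI_Init(&argc, &argv);
    MPE_Init_log();

    /* allocate two event numbers and describe the state they bracket */
    ev_start = MPE_Log_get_event_number();
    ev_end   = MPE_Log_get_event_number();
    MPE_Describe_state(ev_start, ev_end, "compute", "red");

    MPE_Log_event(ev_start, 0, "begin");
    for (i = 0; i < 1000000; i++)      /* the work being timed */
        s += i * 1e-6;
    MPE_Log_event(ev_end, 0, "end");

    printf("s = %f\n", s);             /* keep the loop from being optimized away */
    MPE_Finish_log("mylog");           /* every process must call this */
    MPI_Finalize();
    return 0;
}

The resulting log file can then be examined with the log viewers distributed with MPE.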