Parallel Programming with MPI




This book is an introduction to parallel programming with MPI, the Message Passing Interface. It presents the MPI bindings for both FORTRAN and C, moving from simple example programs to the basic MPI-1 call set and on to the MPI-2 extensions, so that the reader can write portable message-passing parallel programs in either language. Programs written to MPI-1 remain valid under MPI-2.

Contents (abridged; the detailed page listings, preface, and lists of figures, tables and example programs of the original front matter are omitted):

Part I: Fundamentals — 1. Parallel computers; 2. Parallel programming models; 3. Designing parallel programs.
Part II: Basic MPI — 4. An overview of MPI; 5. A first MPI program; 6. The six basic MPI calls; 7. More MPI calls; 8. Example MPI programs (Jacobi iteration); 9. Communication modes; 10. Installing and running MPICH on Linux and Windows NT; 11. Summary.
Part III: Intermediate MPI — 12. Non-blocking communication; 13. Collective communication; 14. Derived datatypes; 15. Process groups and communicators; 16. Further examples; 17. Summary; 18. The MPI-1 and MPI-2 call interfaces for C and Fortran.
Part IV: MPI-2 extensions — 19. Dynamic process management; 20. Remote memory access; 21. Parallel I/O.

Part I: Fundamentals of Parallel Programming

Chapter 1: Parallel Computers

1.1 Classification of parallel computers

1.1.1 By instruction and data streams, parallel computers fall into SIMD (Single-Instruction Multiple-Data) and MIMD (Multiple-Instruction Multiple-Data) machines. In a SIMD machine every processing element executes the same instruction on its own data: the statement A=A+1 can be applied to an entire array A in one step, each processor adding 1 to its own element. A MIMD machine lets processors execute different instructions at the same time: in evaluating A=B+C+D-E+F*G, rewritten as A=(B+C)+(D-E)+(F*G), the subexpressions B+C, D-E and F*G are independent and can be computed simultaneously on different processors.

By the program each processor runs, parallel computers are classified as SPMD (Single-Program Multiple-Data) or MPMD (Multiple-Program Multiple-Data). Combining S (Single) or M (Multiple) with I (Instruction) or P (Program) and with S or M for Data gives the classes SISD, SIMD, MISD, MIMD and SPSD, SPMD, MPSD, MPMD (Figure 1).

1.1.2 By memory organization, parallel computers divide into shared-memory and distributed-memory machines (Figures 2 and 3); cluster computing has made the distributed-memory class widely available.

1.2 Typical parallel computer architectures (Figure 3).

1.3 Summary.

Chapter 2: Parallel Programming Models

2.1 The data-parallel model grew out of SIMD machines and is today usually realized in SPMD form: the same operation, such as A=B+C, is applied element by element, each processor holding its own portion of B, C and A (Table 1). The data-parallel model suits SIMD/SPMD execution, while message passing supports SIMD, MIMD, SPMD and MPMD alike.

2.2 In the message-passing model each process has its own address space, and processes cooperate only by explicitly sending and receiving messages; MPI is the standard library interface for this model (Figure 4).

2.3 Rather than defining a new programming language, the message-passing model is provided as a library bound to existing languages such as FORTRAN and C.

Chapter 3: Designing Parallel Programs

3.1 Decomposing a problem for parallel execution.

3.2 Corresponding to the SIMD/MIMD machine classes, parallel programs are organized in SPMD or MPMD style. In the SPMD style a single program text is executed by every process, each operating on its own portion of the data (Figures 5 and 6); in the MPMD style different processes run different programs (Figure 7). SPMD is by far the more common style in practice.

3.3 Summary.

Part II: Basic MPI

This part introduces MPI itself, a first MPI program, the basic MPI calls, complete example programs, and the installation and use of MPICH under Linux and Windows NT. Everything covered here is common to MPI-1 and MPI-2.

Chapter 4: An Overview of MPI

4.1 What is MPI? MPI is a message-passing library interface specification, not a programming language: programs are written in FORTRAN or C and call MPI library routines (FORTRAN+MPI, C+MPI). There are three ways to view MPI: (1) it is a standard, defined independently of any implementation, with bindings for FORTRAN77/C (and, in MPI-2, Fortran90/C++); (2) it is a library, of which many implementations exist; (3) it is the de facto standard programming interface for message-passing parallel computing.

4.2 The goals of MPI are portability, ease of use and efficient communication, with bindings for C and Fortran 77, building on the experience of earlier systems such as PVM, NX, Express and p4.

4.3 A brief history of MPI. MPI drew on many earlier message-passing systems, among them Venus (IBM), NX/2 (Intel), Express (Parasoft), Vertex (nCUBE), P4 (ANL), PARMACS (ANL), Zipcode (MSU), Chimp (Edinburgh University), PVM (ORNL, UTK, Emory U.), Chameleon (ANL) and PICL (ANL). Dongarra, Hempel, Hey and Walker put forward the initial draft that became MPI 1.0, later revised as MPI 1.1. The MPI-2 standard extends MPI-1 with dynamic process management, remote memory access and parallel I/O; MPI-1 programs remain valid under MPI-2.

4.4 Language bindings. MPI-1 defines bindings for FORTRAN 77 and C; a Fortran90 program can use the FORTRAN 77 binding and a C++ program the C binding. MPI-2 adds native bindings for Fortran90 and C++ alongside FORTRAN 77 and C.

4.5 Principal free implementations of MPI:
- MPICH (Argonne and MSU), the most widely used portable implementation; http://www-unix.mcs.anl.gov/mpi/mpich — the current version MPICH-1.2.1 implements MPI-1 together with parts of MPI-2.
- CHIMP, from the Edinburgh Parallel Computing Centre (EPCC), developed from 1991 to 1994 by Alasdair Bruce, James (Hamish) Mills and Gordon Smith; ftp://ftp.epcc.ed.ac.uk/pub/packages/chimp/release/
- LAM (Local Area Multicomputer), from Ohio State University, currently LAM/MPI 6.3.2; http://www.mpi.nd.edu/lam/download/

Table 2: Free MPI implementations
Mpich — Argonne and MSU — http://www-unix.mcs.anl.gov/mpi/mpich
Chimp — Edinburgh — ftp://ftp.epcc.ed.ac.uk/pub/packages/chimp/
Lam — Ohio State University — http://www.mpi.nd.edu/lam/

4.6 Summary.

Chapter 5: A First MPI Program

5.1 "Hello World!" in MPI. Just as C programming traditionally begins with a Hello World program, we begin MPI with a parallel Hello World.

5.1.1 The FORTRAN77+MPI version (Listing 1). Every MPI FORTRAN program must include the header mpif.h, which declares the MPI constants and interfaces; the corresponding header for C is mpi.h. Under MPI-2 a Fortran90 program may replace the statement include 'mpif.h' with use mpi (Listing 4 gives such a Fortran90 version). The program declares processor_name with length MPI_MAX_PROCESSOR_NAME, an MPI-defined constant, together with the integers myid, numprocs, namelen, rc and ierr. MPI_INIT and MPI_FINALIZE begin and end the MPI part of the program. MPI_COMM_RANK returns the rank of the calling process in myid, MPI_COMM_SIZE returns the number of processes in numprocs, and MPI_GET_PROCESSOR_NAME returns the host name in processor_name and its length in namelen. The write statement is ordinary FORTRAN output. Run with 4 processes on the host tp5, ranks 0 to 3 each print one line, as shown in Figure 8; the order of the lines may vary from run to run.

      program main
      include 'mpif.h'
      character * (MPI_MAX_PROCESSOR_NAME) processor_name
      integer myid, numprocs, namelen, rc, ierr
      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
      call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
      call MPI_GET_PROCESSOR_NAME(processor_name, namelen, ierr)
      write(*,10) myid,numprocs,processor_name
10    FORMAT('Hello World! Process ',I2,' of ',I1,' on ', 20A)
      call MPI_FINALIZE(rc)
      end

Listing 1: FORTRAN77+MPI

Hello World! Process 1 of 4 on tp5
Hello World! Process 0 of 4 on tp5
Hello World! Process 2 of 4 on tp5
Hello World! Process 3 of 4 on tp5

Figure 8: Output of the FORTRAN77+MPI program run with 4 processes on one host (tp5)

Run instead on the four hosts tp1, tp3, tp4 and tp5, the 4 processes print:

Hello World! Process 0 of 4 on tp5
Hello World! Process 1 of 4 on tp1
Hello World! Process 2 of 4 on tp3
Hello World! Process 3 of 4 on tp4

Figure 9: Output of the FORTRAN77+MPI program on four hosts

Each of the four processes of the Hello World program executes the same sequence of steps (Figure 10): MPI_INIT; then MPI_COMM_RANK, giving myid=0, 1, 2 or 3; then MPI_GET_PROCESSOR_NAME, giving processor_name="tp5" and namelen=3 on each; then the write statement, producing "Hello World! Process myid of 4 on tp5"; and finally MPI_FINALIZE.

5.1.2 The C+MPI version. Listing 3 is the C version of the same program. A C MPI program includes the header mpi.h instead of mpif.h; the constant MPI_MAX_PROCESSOR_NAME plays the same role as in FORTRAN77.

As in the FORTRAN77 version, processor_name, myid, numprocs and namelen hold the host name, the rank, the number of processes and the name length, and MPI_Init and MPI_Finalize begin and end the MPI part of the program. Note the naming difference between the bindings: FORTRAN77 MPI calls are written entirely in upper case, while the C calls all carry the prefix MPI_ followed by a capitalized first word, as in MPI_Comm_rank. MPI_Comm_rank returns the rank in myid, MPI_Comm_size the number of processes in numprocs, and MPI_Get_processor_name the host name and its length; fprintf is ordinary C output. Run with 4 processes on tp5, ranks 0 to 3 print:

Hello World! Process 0 of 4 on tp5
Hello World! Process 1 of 4 on tp5
Hello World! Process 3 of 4 on tp5
Hello World! Process 2 of 4 on tp5

Figure 11: Output of the C+MPI program run with 4 processes on one host (tp5)

Run on the four hosts tp1, tp3, tp4 and tp5, the output parallels that of the FORTRAN77 version:

Hello World! Process 0 of 4 on tp5
Hello World! Process 1 of 4 on tp1
Hello World! Process 2 of 4 on tp3
Hello World! Process 3 of 4 on tp4

Figure 12: Output of the C+MPI program on four hosts

#include "mpi.h"
#include <stdio.h>
#include <math.h>

void main(argc,argv)
int argc;
char *argv[];
{
    int  myid, numprocs;
    int  namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc,&argv);
    MPI_Comm_rank(MPI_COMM_WORLD,&myid);
    MPI_Comm_size(MPI_COMM_WORLD,&numprocs);
    MPI_Get_processor_name(processor_name,&namelen);
    fprintf(stderr,"Hello World! Process %d of %d on %s\n",
            myid, numprocs, processor_name);
    MPI_Finalize();
}

Listing 3: C+MPI

      program main
      use mpi
      character * (MPI_MAX_PROCESSOR_NAME) processor_name
      integer myid, numprocs, namelen, rc, ierr
      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
      call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
      call MPI_GET_PROCESSOR_NAME(processor_name, namelen, ierr)
      print *,"Hello World! Process ",myid," of ", numprocs,
     *        " on ", processor_name
      call MPI_FINALIZE(rc)
      end

Listing 4: Fortran90+MPI

The general framework of an MPI program is shown in Figure 13.

5.2 Conventions of MPI programs. All MPI identifiers begin with the prefix MPI_, and user programs should not declare names beginning with MPI_. In the FORTRAN binding the MPI calls are written in upper case; the C binding uses the form MPI_Aaaa_aaa. A successful MPI call returns MPI_SUCCESS: in FORTRAN the error code is returned through the final IERROR argument, in C through the function result. Remember also that FORTRAN conventionally counts from 1 while C counts from 0. A FORTRAN 77 MPI program is an ANSI FORTRAN 77 program extended with the include file mpif.h; mpif.h must be included in every program unit that makes MPI calls.

5.3 Summary. An MPI program is an ordinary FORTRAN or C program augmented with MPI library calls. As the Hello World example shows, one and the same program text is executed by every process, each process distinguished only by its rank: this is the SPMD (Single Program Multiple Data) style of parallel programming.

Chapter 6: The Six Basic MPI Calls

MPI-1 defines 128 calls and MPI-2 brings the total to 287, but MPI is not hard to learn: a subset of only six calls suffices to write complete message-passing programs. This chapter introduces that six-call subset.

6.1 The six basic interfaces, given in their FORTRAN 77 and C forms.

6.1.1 Conventions for the call descriptions (Figure 14). Each call is shown first in a language-independent form, then in its C and FORTRAN 77 bindings (the MPI-2 C++ bindings are not shown). Each argument is marked IN, OUT or INOUT: an IN argument is only read by MPI, an OUT argument is only written, and an INOUT argument is both read and updated. An argument marked OUT or INOUT must not be aliased with any other argument passed to the same MPI call.

For example, the following C copy routine is legal as ordinary C, but the call shown would be forbidden for an MPI routine whose first two arguments were marked IN and OUT, because the two buffers overlap:

void copyIntBuffer( int *pin, int *pout, int len )
{
    int i;
    for (i=0; i<len; ++i) *pout++ = *pin++;
}
...
int a[10];
copyIntBuffer( a, a+3, 7 );   /* overlapping buffers */

The description format is illustrated with MPI_INIT:

MPI_INIT()
int MPI_Init(int *argc, char ***argv)
MPI_INIT(IERROR)
    INTEGER IERROR

In C, MPI_Init takes argc and argv so that MPI can process command-line arguments; the FORTRAN77 form has only the error-return argument IERROR. Where the C binding declares a buffer argument as void*, the FORTRAN77 binding writes <type>: the same buffer argument of a call such as MPI_SEND may hold data of any type.

6.1.2 MPI initialization

MPI_INIT()
int MPI_Init(int *argc, char ***argv)
MPI_INIT(IERROR)
    INTEGER IERROR

MPI call 1: MPI_INIT. MPI_INIT must be the first MPI call of the program; it initializes the MPI execution environment.

6.1.3 MPI termination

MPI_FINALIZE()
int MPI_Finalize(void)
MPI_FINALIZE(IERROR)
    INTEGER IERROR

MPI call 2: MPI_FINALIZE. MPI_FINALIZE is the last MPI call of the program; after it returns, no further MPI calls may be made.

6.1.4 Determining the rank of the current process

MPI_COMM_RANK(comm, rank)
  IN  comm  communicator
  OUT rank  rank of the calling process in comm
int MPI_Comm_rank(MPI_Comm comm, int *rank)
MPI_COMM_RANK(COMM, RANK, IERROR)
    INTEGER COMM, RANK, IERROR

MPI call 3: MPI_COMM_RANK. MPI_COMM_RANK returns, in rank, the identifier of the calling process within the communicator comm.

6.1.5 Determining the number of processes

MPI_COMM_SIZE(comm, size)
  IN  comm  communicator
  OUT size  number of processes in comm
int MPI_Comm_size(MPI_Comm comm, int *size)
MPI_COMM_SIZE(COMM, SIZE, IERROR)
    INTEGER COMM, SIZE, IERROR

MPI call 4: MPI_COMM_SIZE

6.1.6 Message sending

MPI_SEND(buf, count, datatype, dest, tag, comm)
  IN buf      initial address of the send buffer
  IN count    number of elements to send
  IN datatype datatype of each element
  IN dest     rank of the destination process
  IN tag      message tag
  IN comm     communicator
int MPI_Send(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
MPI_SEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR

MPI call 5: MPI_SEND. MPI_SEND sends count elements of type datatype, starting at address buf, to the process of rank dest, attaching the tag tag. Note that count is a number of elements of type datatype, not a number of bytes.

6.1.7 Message receiving

MPI_RECV(buf, count, datatype, source, tag, comm, status)
  OUT buf      initial address of the receive buffer
  IN  count    maximum number of elements to receive
  IN  datatype datatype of each element
  IN  source   rank of the source process
  IN  tag      message tag
  IN  comm     communicator
  OUT status   status object
int MPI_Recv(void* buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
MPI_RECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR

MPI call 6: MPI_RECV. MPI_RECV receives a message whose envelope matches source, tag and comm, placing up to count elements of type datatype into buf. The received message may contain fewer than count elements, but receiving more than count elements is an error.

6.1.8 The status argument. In C, status is a structure with the fields MPI_SOURCE, MPI_TAG and MPI_ERROR, accessed as status.MPI_SOURCE, status.MPI_TAG and status.MPI_ERROR; they give the source, the tag and the error code of the received message. In FORTRAN, status is an integer array of size MPI_STATUS_SIZE, and status(MPI_SOURCE), status(MPI_TAG) and status(MPI_ERROR) hold the same three values.
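The number of elements actually received is not a field of status but is derived from it. A minimal sketch of querying it with MPI_Get_count (assuming an installed MPI implementation such as MPICH; compile with mpicc and run with two processes, e.g. mpirun -np 2):

```c
#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, count;
    double buf[100];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        /* send only 30 of the 100 elements */
        MPI_Send(buf, 30, MPI_DOUBLE, 1, 7, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* the receive allows up to 100 elements ... */
        MPI_Recv(buf, 100, MPI_DOUBLE, 0, 7, MPI_COMM_WORLD, &status);
        /* ... and MPI_Get_count reports how many actually arrived (30),
           while status carries the source and tag of the message */
        MPI_Get_count(&status, MPI_DOUBLE, &count);
        printf("received %d elements from %d with tag %d\n",
               count, status.MPI_SOURCE, status.MPI_TAG);
    }
    MPI_Finalize();
    return 0;
}
```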

The count of data items actually received is obtained from status with the call MPI_GET_COUNT.

6.1.9 A simple example program. Process 0 sends the string "Hello, process 1" to process 1, which receives and prints it:

#include <stdio.h>
#include <string.h>
#include "mpi.h"

int main( int argc, char **argv )
{
    char message[20];
    int  myrank;
    MPI_Status status;

    MPI_Init( &argc, &argv );                 /* initialize MPI */
    MPI_Comm_rank( MPI_COMM_WORLD, &myrank ); /* find out our rank */
    if (myrank == 0) {       /* code executed by process 0 */
        /* send the characters of message, of type MPI_CHAR, to
           process 1 with tag 99 in MPI_COMM_WORLD */
        strcpy(message, "Hello, process 1");
        MPI_Send(message, strlen(message)+1, MPI_CHAR, 1, 99, MPI_COMM_WORLD);
    }
    else if (myrank == 1) {  /* code executed by process 1 */
        /* receive at most 20 characters of type MPI_CHAR from
           process 0, tag 99, in MPI_COMM_WORLD; status describes
           the received message */
        MPI_Recv(message, 20, MPI_CHAR, 0, 99, MPI_COMM_WORLD, &status);
        printf("received :%s:", message);
    }
    MPI_Finalize();          /* shut down MPI */

}

Listing 5: A simple send/receive program

6.2 MPI predefined datatypes. A send and its matching receive must name the same type of data. MPI defines predefined datatypes corresponding to the types of the host language; Tables 3 and 4 give the correspondence for FORTRAN77 and C.

Table 3: MPI predefined datatypes for FORTRAN77
MPI datatype             FORTRAN77 datatype
MPI_INTEGER              INTEGER
MPI_REAL                 REAL
MPI_DOUBLE_PRECISION     DOUBLE PRECISION
MPI_COMPLEX              COMPLEX
MPI_LOGICAL              LOGICAL
MPI_CHARACTER            CHARACTER(1)
MPI_BYTE                 (none)
MPI_PACKED               (none)

Table 4: MPI predefined datatypes for C
MPI datatype             C datatype
MPI_CHAR                 signed char
MPI_SHORT                signed short int
MPI_INT                  signed int
MPI_LONG                 signed long int
MPI_UNSIGNED_CHAR        unsigned char
MPI_UNSIGNED_SHORT       unsigned short int
MPI_UNSIGNED             unsigned int
MPI_UNSIGNED_LONG        unsigned long int
MPI_FLOAT                float
MPI_DOUBLE               double
MPI_LONG_DOUBLE          long double
MPI_BYTE                 (none)
MPI_PACKED               (none)

MPI_BYTE and MPI_PACKED have no counterpart in FORTRAN77 or C: MPI_BYTE denotes an uninterpreted byte (8 bits), and MPI_PACKED describes explicitly packed data. If the host compiler supports types beyond ANSI C and Fortran 77, MPI provides the optional datatypes of Table 5.

Table 5: Optional MPI datatypes
MPI datatype             language datatype
MPI_LONG_LONG_INT        long long int (C)
MPI_DOUBLE_COMPLEX       DOUBLE COMPLEX (FORTRAN77)
MPI_REAL2                REAL*2 (FORTRAN77)
MPI_REAL4                REAL*4 (FORTRAN77)
MPI_REAL8                REAL*8 (FORTRAN77)
MPI_INTEGER1             INTEGER*1 (FORTRAN77)
MPI_INTEGER2             INTEGER*2 (FORTRAN77)
MPI_INTEGER4             INTEGER*4 (FORTRAN77)

6.3 MPI datatype matching and data conversion

6.3.1 The type-matching rules. A communication involves three steps (Figure 15): the data is copied out of the send buffer, transmitted, and copied into the receive buffer. Type matching must hold at all three steps: (1) the type of the variable in the sending program must match the type named in the send call; (2) the type named in the send call must match the type named in the receive call; (3) the type named in the receive call must match the type of the variable in the receiving program. In FORTRAN77 a variable declared INTEGER is communicated as MPI_INTEGER and a REAL as MPI_REAL; in C an int corresponds to MPI_INT and a float to MPI_FLOAT.

Thus MPI_INTEGER matches MPI_INTEGER and MPI_REAL matches MPI_REAL; in C, an int sent as MPI_INT must be received as MPI_INT and a long sent as MPI_LONG as MPI_LONG. The exceptions are MPI_BYTE and MPI_PACKED: MPI_BYTE matches any data considered byte by byte, and MPI_PACKED matches data packed with MPI_PACK and unpacked with MPI_UNPACK. Listing 6 is a correct match of MPI_REAL with MPI_REAL:

REAL a(20), b(20)
CALL MPI_COMM_RANK(comm, rank, ierr)
IF (rank .EQ. 0) THEN
    CALL MPI_SEND(a(1), 10, MPI_REAL, 1, tag, comm, ierr)
ELSE IF (rank .EQ. 1) THEN
    CALL MPI_RECV(b(1), 15, MPI_REAL, 0, tag, comm, status, ierr)
END IF

Listing 6: Matching MPI_REAL with MPI_REAL

The following attempt to receive data sent as MPI_REAL with the type MPI_BYTE is not allowed:

REAL a(20), b(20)
CALL MPI_COMM_RANK(comm, rank, ierr)
IF (rank .EQ. 0) THEN
    CALL MPI_SEND(a(1), 10, MPI_REAL, 1, tag, comm, ierr)
ELSE IF (rank .EQ. 1) THEN
    CALL MPI_RECV(b(1), 40, MPI_BYTE, 0, tag, comm, status, ierr)
END IF

Listing 7: Mismatched MPI_REAL and MPI_BYTE (erroneous)

REAL a(20), b(20)
CALL MPI_COMM_RANK(comm, rank, ierr)
IF (rank .EQ. 0) THEN
    CALL MPI_SEND(a(1), 40, MPI_BYTE, 1, tag, comm, ierr)
ELSE IF (rank .EQ. 1) THEN
    CALL MPI_RECV(b(1), 60, MPI_BYTE, 0, tag, comm, status, ierr)
END IF

Listing 8: Matching MPI_BYTE with MPI_BYTE

MPI_BYTE matches MPI_BYTE regardless of the underlying host type, so Listing 6 and Listing 8 are legal while Listing 7 is not. The type MPI_CHARACTER matches a single character of a FORTRAN 77 CHARACTER variable, not the whole variable:

CHARACTER*10 a
CHARACTER*10 b
CALL MPI_COMM_RANK(comm, rank, ierr)
IF (rank .EQ. 0) THEN
    CALL MPI_SEND(a, 5, MPI_CHARACTER, 1, tag, comm, ierr)
ELSE IF (rank .EQ. 1) THEN
    CALL MPI_RECV(b(6), 5, MPI_CHARACTER, 0, tag, comm, status, ierr)
END IF

Listing 9: Use of MPI_CHARACTER

In Listing 9 the last five characters of b on process 1 are replaced by the first five characters of a on process 0. Fortran CHARACTER variables may be represented internally in ways MPI cannot inspect, so MPI treats them character by character.

6.3.2 Data conversion

The data named by MPI_SEND(buf, count, datatype, dest, tag, comm) may be represented differently on different machines, for instance with 32-bit integers on one and 64-bit integers on another. MPI distinguishes two kinds of conversion: type conversion, which changes the datatype itself and is ruled out by the type-matching rules (except through MPI_BYTE and MPI_PACKED), and representation conversion, which changes only the byte representation of one and the same datatype. When a message crosses machines with different representations, the MPI implementation performs the representation conversion automatically, so that a value sent from a variable a arrives in a variable b with its meaning preserved (Figure 10).

6.4 The MPI message

6.4.1 An MPI message consists of two parts: the envelope <source, destination, tag, communicator> and the data <start address, count, datatype>, as the argument lists of MPI_SEND and MPI_RECV show.

Figure 16 shows this division of the arguments for MPI_SEND, and Figure 17 for MPI_RECV(buf, count, datatype, source, tag, comm, status). The tag distinguishes messages between the same pair of processes. Suppose process 0 executes, in this order,

MPI_SEND(x, 1, ..., 1, tag1, comm)
MPI_SEND(y, 1, ..., 1, tag2, comm)

and process 1 needs y before x even though x was sent first. Process 1 simply posts

MPI_RECV(y, 1, ..., 0, tag2, comm, status)
MPI_RECV(x, 1, ..., 0, tag1, comm, status)

and the tags tag1 and tag2 route each message to the right receive; without tags the two messages could not be told apart (Figure 18).

6.4.2 Wildcards on the receive side. A receive may relax the envelope matching: source = MPI_ANY_SOURCE accepts a message from any source, and tag = MPI_ANY_TAG accepts any tag; there is no wildcard for the communicator comm. With both MPI_ANY_SOURCE and MPI_ANY_TAG, any message within comm is accepted. A send, by contrast, must always name an explicit destination and tag: send and receive are asymmetric in this respect.

6.4.3 MPI communicators. An MPI program run with N processes numbers them 0 to N-1. The predefined communicator MPI_COMM_WORLD contains all processes of the run, and all the examples so far communicate within it.

6.5 Summary. The six calls MPI_INIT, MPI_FINALIZE, MPI_COMM_RANK, MPI_COMM_SIZE, MPI_SEND and MPI_RECV form a complete subset of MPI: in principle, any message-passing program can be written with these six calls alone.
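As a recap, here is a minimal sketch using exactly the six calls of this chapter (assuming an installed MPI implementation; compile with mpicc, run with an even number of processes, e.g. mpirun -np 2): each even-ranked process sends its rank to the next-higher rank.

```c
#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, size, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);                  /* call 1: initialize  */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* call 3: my rank     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* call 4: # processes */

    if (rank % 2 == 0 && rank + 1 < size) {
        value = rank;
        /* call 5: send one MPI_INT to the odd neighbour, tag 0 */
        MPI_Send(&value, 1, MPI_INT, rank + 1, 0, MPI_COMM_WORLD);
    } else if (rank % 2 == 1) {
        /* call 6: receive one MPI_INT from the even neighbour */
        MPI_Recv(&value, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD, &status);
        printf("rank %d received %d\n", rank, value);
    }

    MPI_Finalize();                          /* call 2: shut down   */
    return 0;
}
```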

Chapter 7: More MPI Calls

Beyond the basic six, MPI provides a number of convenient calls that appear frequently in real programs.

7.1 MPI timing calls

MPI_WTIME()
double MPI_Wtime(void)
DOUBLE PRECISION MPI_WTIME()

MPI call 7: MPI_WTIME. MPI_WTIME returns the elapsed wall-clock time, in seconds, since some fixed moment in the past. A code section is timed by calling it before and after:

double starttime, endtime;
...
starttime = MPI_Wtime();
...                           /* section being timed */
endtime = MPI_Wtime();
printf("That took %f seconds\n", endtime-starttime);

Listing 10

MPI_WTICK()
double MPI_Wtick(void)
DOUBLE PRECISION MPI_WTICK()

MPI call 8: MPI_WTICK. MPI_WTICK returns the resolution, in seconds, of the clock used by MPI_WTIME.

The following program tests the MPI timing calls:

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"
#include "test.h"

int main( int argc, char **argv )
{
    int err = 0;
    double t1, t2;
    double tick;
    int i;

    MPI_Init( &argc, &argv );
    t1 = MPI_Wtime();   /* first reading t1 */
    t2 = MPI_Wtime();   /* second reading t2 */
    if (t2 - t1 > 0.1 || t2 - t1 < 0.0) {
        /* two back-to-back readings should differ by far less than 0.1 s */
        err++;
        fprintf( stderr,
          "Two successive calls to MPI_Wtime gave strange results: (%f) (%f)\n",
          t1, t2 );
    }
    /* try at most 10 times to time a sleep of 1 second */
    for (i = 0; i < 10; i++) {
        t1 = MPI_Wtime();   /* start time */
        sleep(1);           /* sleep 1 second */
        t2 = MPI_Wtime();   /* end time */
        if (t2 - t1 >= (1.0 - 0.01) && t2 - t1 <= 5.0) break; /* plausible */
        if (t2 - t1 > 5.0) i = 9;                             /* give up */
    }
    /* all 10 attempts failed */
    if (i == 10) {
        fprintf( stderr,
          "Timer around sleep(1) did not give 1 second; gave %f\n", t2 - t1 );
        err++;
    }
    tick = MPI_Wtick();  /* timer resolution */
    if (tick > 1.0 || tick < 0.0) {
        /* the resolution should be a small positive number of seconds */
        err++;
        fprintf( stderr, "MPI_Wtick gave a strange result: (%f)\n", tick );
    }
    MPI_Finalize( );
    return 0;
}

Listing 11: Testing the MPI timing calls

7.2 Identifying the host and the MPI version

MPI_GET_PROCESSOR_NAME(name, resultlen)
  OUT name       name of the host the process is running on
  OUT resultlen  length of the returned name
int MPI_Get_processor_name ( char *name, int *resultlen)
MPI_GET_PROCESSOR_NAME(NAME, RESULTLEN, IERROR)
    CHARACTER *(*) NAME
    INTEGER RESULTLEN, IERROR

MPI call 9: MPI_GET_PROCESSOR_NAME

MPI_GET_VERSION(version, subversion)
  OUT version     MPI major version number
  OUT subversion  MPI minor version number
int MPI_Get_version(int *version, int *subversion)
MPI_GET_VERSION(VERSION, SUBVERSION, IERROR)
    INTEGER VERSION, SUBVERSION, IERROR

MPI call 10: MPI_GET_VERSION. MPI_GET_VERSION returns the version of the MPI standard the implementation supports, as version.subversion.

      program main
      include 'mpif.h'
      character*(MPI_MAX_PROCESSOR_NAME) name
      integer resultlen, version, subversion, ierr, errs, i
      call MPI_Init( ierr )
      name = " "
C     get the processor name and its length
      call MPI_Get_processor_name( name, resultlen, ierr )
C     name holds the host name, resultlen its length
      call MPI_GET_VERSION(version, subversion, ierr)
C     query the MPI version
      errs = 0
      do i=resultlen+1, MPI_MAX_PROCESSOR_NAME
         if (name(i:i) .ne. " ") then
C           the characters beyond resultlen must all be blanks
            errs = errs + 1
         endif
      enddo
      if (errs .gt. 0) then
         print *, 'Non-blanks after name'
      else
         print *, name, " MPI version", version, ".", subversion
      endif
      call MPI_Finalize( ierr )
      end

Listing 12: Querying the host name and the MPI version

7.3 Testing initialization and aborting a run

MPI_INIT may be called only once; MPI_INITIALIZED tests whether it has already been called.

MPI_INITIALIZED(flag)
  OUT flag  true if MPI_INIT has been called
int MPI_Initialized(int *flag)
MPI_INITIALIZED(FLAG, IERROR)
    LOGICAL FLAG
    INTEGER IERROR

MPI call 11: MPI_INITIALIZED

MPI_INITIALIZED returns flag=true if MPI_INIT has been called and flag=false otherwise; it is the only MPI call that may be made before MPI_INIT.

MPI_ABORT(comm, errorcode)
  IN comm       communicator of the processes to abort
  IN errorcode  error code returned to the environment
int MPI_Abort(MPI_Comm comm, int errorcode)
MPI_ABORT(COMM, ERRORCODE, IERROR)
    INTEGER COMM, ERRORCODE, IERROR

MPI call 12: MPI_ABORT. MPI_ABORT forces all processes of comm to terminate abnormally. The following program lets one process, the master, abort the whole run:

#include "mpi.h"
#include <stdio.h>
#include <string.h>

/* if "lastmaster" is not on the command line, the master is node 0;
   otherwise it is the last node */
int main( int argc, char **argv )
{
    int node, size, i;
    int masternode = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &node);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* scan the command line */
    for (i=1; i<argc; i++) {
        fprintf(stderr,"myid=%d,procs=%d,argv[%d]=%s\n",node,size,i,argv[i]);
        if (argv[i] && strcmp( "lastmaster", argv[i] ) == 0) {
            masternode = size-1;   /* the last node becomes the master */
        }
    }
    if (node == masternode) {      /* the master aborts the run */
        fprintf(stderr,"myid=%d is masternode Abort!\n",node);
        MPI_Abort(MPI_COMM_WORLD, 99);
    }

    else {                         /* the other nodes wait at the barrier */
        fprintf(stderr,"myid=%d is not masternode Barrier!\n",node);
        MPI_Barrier(MPI_COMM_WORLD);
    }
    MPI_Finalize();
}

Listing 13: Using MPI_ABORT

7.4 Passing a value around a ring. In the next example (Figure 19) process 0 reads an integer from the terminal and sends it to process 1; each process i receives the value from process i-1 and passes it on to process i+1, until the last process N-1 receives it. The loop repeats until a negative value is read.

#include <stdio.h>
#include "mpi.h"

int main( int argc, char **argv )
{
    int rank, value, size;
    MPI_Status status;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    /* repeat until a negative value is entered */
    do {

        if (rank == 0) {
            fprintf(stderr, "\nPlease give new value=");
            /* process 0 reads a value from the terminal */
            scanf( "%d", &value );
            fprintf(stderr,"%d read <-<- (%d)\n",rank,value);
            if (size > 1) {
                MPI_Send( &value, 1, MPI_INT, rank + 1, 0, MPI_COMM_WORLD );
                fprintf(stderr,"%d send (%d)->-> %d\n", rank,value,rank+1);
                /* and passes it to the next process */
            }
        }
        else {
            MPI_Recv( &value, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                      &status );
            /* receive the value from the previous process */
            fprintf(stderr,"%d receive (%d)<-<- %d\n",rank,value,rank-1);
            if (rank < size - 1) {
                MPI_Send( &value, 1, MPI_INT, rank + 1, 0, MPI_COMM_WORLD );
                fprintf(stderr,"%d send (%d)->-> %d\n", rank,value,rank+1);
                /* pass it on, unless we are the last process */
            }
        }
        MPI_Barrier(MPI_COMM_WORLD); /* wait until the value has gone round */
    } while ( value >= 0 );
    MPI_Finalize( );
}

Listing 14: Passing a value around a ring

Figure 20 shows a run with 7 processes and the inputs 76 and -3:

Please give new value=76
0 read <-<- (76)
0 send (76)->-> 1
1 receive (76)<-<- 0
1 send (76)->-> 2
2 receive (76)<-<- 1
2 send (76)->-> 3
3 receive (76)<-<- 2
3 send (76)->-> 4
4 receive (76)<-<- 3
4 send (76)->-> 5
5 receive (76)<-<- 4
5 send (76)->-> 6
6 receive (76)<-<- 5
Please give new value=-3
0 read <-<- (-3)
0 send (-3)->-> 1
1 receive (-3)<-<- 0
2 receive (-3)<-<- 1
3 receive (-3)<-<- 2
4 receive (-3)<-<- 3
4 send (-3)->-> 5
5 receive (-3)<-<- 4
6 receive (-3)<-<- 5
1 send (-3)->-> 2
2 send (-3)->-> 3
3 send (-3)->-> 4
5 send (-3)->-> 6

Figure 20: Output of the ring program with 7 processes

7.5 Greetings between all pairs of processes. In the next example every process sends a hello message to every other process and receives one in return (Figure 21).

#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>

void Hello( void );

int main(int argc, char *argv[])
{
    int me, option, namelen, size;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD,&me);
    MPI_Comm_size(MPI_COMM_WORLD,&size);
    /* the test needs at least 2 processes */
    if (size < 2) {
        fprintf(stderr, "systest requires at least 2 processes" );
        MPI_Abort(MPI_COMM_WORLD,1);
    }
    MPI_Get_processor_name(processor_name,&namelen);
    /* announce that this process is alive */
    fprintf(stderr,"Process %d is alive on %s\n", me, processor_name);
    MPI_Barrier(MPI_COMM_WORLD); /* wait until every process has reported */
    Hello();                     /* exchange the greetings */
    MPI_Finalize();
}

void Hello( void )
/* send a greeting to every other process and check its reply */
{
    int nproc, me;
    int type = 1;
    int buffer[2], node;
    MPI_Status status;

    MPI_Comm_rank(MPI_COMM_WORLD, &me);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);
    if (me == 0) {      /* process 0 announces the test */
        printf("\nHello test from all to all\n");
        fflush(stdout);
    }
    for (node = 0; node < nproc; node++) {  /* loop over all processes */
        if (node != me) {                   /* skip ourselves */
            buffer[0] = me;                 /* our own rank */
            buffer[1] = node;               /* rank of the destination */
            MPI_Send(buffer, 2, MPI_INT, node, type, MPI_COMM_WORLD);
            MPI_Recv(buffer, 2, MPI_INT, node, type, MPI_COMM_WORLD,
                     &status);
            if ( (buffer[0] != node) || (buffer[1] != me) ) {
                /* the reply should carry the partner's rank and ours */
                (void) fprintf(stderr, "Hello: %d!=%d or %d!=%d\n",
                               buffer[0], node, buffer[1], me);
                printf("Mismatch on hello process ids; node = %d\n", node);
            }
            printf("Hello from %d to %d\n", me, node);
            fflush(stdout);
        }
    }
}

Listing 15: Greetings from every process to every other process

7.6 Receiving with wildcards. In Listing 16 (Figure 22) every process 1, 2, ..., N-1 sends 100 messages to the root, process 0; the root receives 100*(size-1) messages from any source with any tag and reports, for each, its content together with the source and tag taken from status.

#include "mpi.h"
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, i, buf[1];
    MPI_Status status;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    if (rank == 0) {
        for (i=0; i<100*(size-1); i++) {
            MPI_Recv( buf, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                      MPI_COMM_WORLD, &status );
            printf( "Msg=%d from %d with tag %d\n",
                    buf[0], status.MPI_SOURCE, status.MPI_TAG );
        }
    }
    else {
        for (i=0; i<100; i++) {
            buf[0] = rank + i;
            MPI_Send( buf, 1, MPI_INT, 0, i, MPI_COMM_WORLD );
        }
    }
    MPI_Finalize();
}

Listing 16: Receiving with MPI_ANY_SOURCE and MPI_ANY_TAG

7.7 Writing safe MPI programs. Point-to-point communication can deadlock when sends and receives are ordered carelessly. In Listing 17 both processes first receive and then send:

CALL MPI_COMM_RANK(comm, rank, ierr)
IF (rank .EQ. 0) THEN
    CALL MPI_RECV(recvbuf, count, MPI_REAL, 1, tag, comm, status, ierr)
    CALL MPI_SEND(sendbuf, count, MPI_REAL, 1, tag, comm, ierr)
ELSE IF (rank .EQ. 1) THEN
    CALL MPI_RECV(recvbuf, count, MPI_REAL, 0, tag, comm, status, ierr)
    CALL MPI_SEND(sendbuf, count, MPI_REAL, 0, tag, comm, ierr)
END IF

Listing 17: A communication ordering that always deadlocks

Each process waits in MPI_RECV for a message the other has not yet sent, so neither send is ever reached (Figure 23).

In Listing 18 both processes send first and then receive:

CALL MPI_COMM_RANK(comm, rank, ierr)
IF (rank .EQ. 0) THEN
    CALL MPI_SEND(sendbuf, count, MPI_REAL, 1, tag, comm, ierr)
    CALL MPI_RECV(recvbuf, count, MPI_REAL, 1, tag, comm, status, ierr)
ELSE IF (rank .EQ. 1) THEN
    CALL MPI_SEND(sendbuf, count, MPI_REAL, 0, tag, comm, ierr)
    CALL MPI_RECV(recvbuf, count, MPI_REAL, 0, tag, comm, status, ierr)
END IF

Listing 18: An ordering that deadlocks when the system cannot buffer the sends

Whether Listing 18 succeeds depends on the implementation (Figure 24): if the system can buffer the outgoing messages, both sends of process 0 and process 1 complete and the receives then succeed; if not, each MPI_SEND blocks waiting for the matching receive and the program deadlocks. A safe ordering pairs each send directly with the matching receive (Figure 25):

CALL MPI_COMM_RANK(comm, rank, ierr)
IF (rank .EQ. 0) THEN
    CALL MPI_SEND(sendbuf, count, MPI_REAL, 1, tag, comm, ierr)
    CALL MPI_RECV(recvbuf, count, MPI_REAL, 1, tag, comm, status, ierr)
ELSE IF (rank .EQ. 1) THEN
    CALL MPI_RECV(recvbuf, count, MPI_REAL, 0, tag, comm, status, ierr)
    CALL MPI_SEND(sendbuf, count, MPI_REAL, 0, tag, comm, ierr)
END IF

Listing 19: A safe ordering: process 0 sends while process 1 receives, then the roles reverse

This ordering is always safe: process 0's send matches process 1's receive, after which process 1's send matches process 0's receive, with no dependence on system buffering (Figure 25).
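The safe pairing of Listing 19 must be coded differently on each rank. MPI also provides MPI_SENDRECV (introduced in the next chapter), which performs the send and the receive in one call and cannot deadlock. A minimal C sketch of the same exchange (assuming an installed MPI implementation; compile with mpicc, run with mpirun -np 2):

```c
#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, size, other;
    double sendbuf = 0.0, recvbuf = -1.0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size >= 2 && rank < 2) {
        other = 1 - rank;          /* rank 0 talks to 1, and vice versa */
        sendbuf = (double) rank;
        /* send and receive in a single call: MPI schedules the two
           operations so that the exchange cannot deadlock, regardless
           of system buffering */
        MPI_Sendrecv(&sendbuf, 1, MPI_DOUBLE, other, 99,
                     &recvbuf, 1, MPI_DOUBLE, other, 99,
                     MPI_COMM_WORLD, &status);
        printf("rank %d got %f from rank %d\n", rank, recvbuf, other);
    }
    MPI_Finalize();
    return 0;
}
```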

7.8 Summary. This chapter introduced MPI calls beyond the basic six — timing, host and version queries, initialization tests, aborting, barriers and wildcard receives — together with small example programs, and discussed how to order sends and receives so that an MPI program cannot deadlock.

Chapter 8: Example MPI Programs

With only the basic calls one can already write useful parallel programs. This chapter develops complete examples, beginning with a parallel Jacobi iteration. Although MPI supports both the SPMD and the MPMD styles, SPMD is by far the most common in practice, and all the programs in this chapter are SPMD: one program text, executed by every process, with the rank deciding what each process does.

8.1 Parallel Jacobi iteration with MPI

8.1.1 The serial algorithm. The Jacobi iteration replaces each interior element of a matrix by the average of its four neighbours, sweeping repeatedly over the matrix (Listing 20):

      REAL A(N+1,N+1), B(N+1,N+1)
      ...
      DO K=1,STEP
         DO J=1,N
            DO I=1,N
               B(I,J)=0.25*(A(I-1,J)+A(I+1,J)+A(I,J+1)+A(I,J-1))
            END DO
         END DO
         DO J=1,N
            DO I=1,N
               A(I,J)=B(I,J)
            END DO
         END DO
      END DO

Listing 20: Serial Jacobi iteration

8.1.2 The MPI parallel Jacobi program. For the parallel version the matrix A(M,M), with M=4*N, is divided by columns among 4 processes (Figure 26): process 0 holds A(M,1:N), process 1 holds A(M,N+1:2*N), process 2 holds A(M,2*N+1:3*N) and process 3 holds A(M,3*N+1:M).

Updating a column needs its left and right neighbour columns, so each process stores its own M*N block plus two extra ghost columns, N+2 columns in all, and before each sweep neighbouring processes exchange their boundary columns (Figure 27). The boundary of the whole matrix is fixed at 8.0 and the interior starts at 0.0. Listing 21 is the resulting FORTRAN program for 4 processes:

C     Jacobi iteration with explicit MPI_SEND/MPI_RECV halo exchange
      program main
      implicit none
      include 'mpif.h'
      integer totalsize, mysize, steps
      parameter (totalsize=16)
      parameter (mysize=totalsize/4, steps=10)
      integer n, myid, numprocs, i, j, rc
      real a(totalsize,mysize+2), b(totalsize,mysize+2)
      integer begin_col, end_col, ierr
      integer status(MPI_STATUS_SIZE)

      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
      call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )

      print *, "Process ", myid, " of ", numprocs, " is alive"
C     initialize the local block
      do j=1,mysize+2
         do i=1,totalsize
            a(i,j)=0.0
         end do
      end do
      if (myid .eq. 0) then
         do i=1,totalsize
            a(i,2)=8.0
         end do
      end if
      if (myid .eq. 3) then
         do i=1,totalsize
            a(i,mysize+1)=8.0
         end do
      end if
      do i=1,mysize+2
         a(1,i)=8.0
         a(totalsize,i)=8.0
      end do
C     Jacobi iteration
      do n=1,steps
C        exchange boundary columns with the neighbours:
C        receive from the right, send to the left,
C        send to the right, receive from the left
         if (myid .lt. 3) then
            call MPI_RECV(a(1,mysize+2),totalsize,MPI_REAL,myid+1,10,
     *                    MPI_COMM_WORLD,status,ierr)
         end if
         if (myid .gt. 0) then
            call MPI_SEND(a(1,2),totalsize,MPI_REAL,myid-1,10,
     *                    MPI_COMM_WORLD,ierr)
         end if
         if (myid .lt. 3) then
            call MPI_SEND(a(1,mysize+1),totalsize,MPI_REAL,myid+1,10,
     *                    MPI_COMM_WORLD,ierr)
         end if
         if (myid .gt. 0) then
            call MPI_RECV(a(1,1),totalsize,MPI_REAL,myid-1,10,

     *                    MPI_COMM_WORLD,status,ierr)
         end if
         begin_col=2
         end_col=mysize+1
         if (myid .eq. 0) then
            begin_col=3
         endif
         if (myid .eq. 3) then
            end_col=mysize
         endif
         do j=begin_col,end_col
            do i=2,totalsize-1
               b(i,j)=(a(i,j+1)+a(i,j-1)+a(i+1,j)+a(i-1,j))*0.25
            end do
         end do
         do j=begin_col,end_col
            do i=2,totalsize-1
               a(i,j)=b(i,j)
            end do
         end do
      end do
      do i=2,totalsize-1
         print *, myid,(a(i,j),j=begin_col,end_col)
      end do
      call MPI_Finalize(rc)
      end

Listing 21: Jacobi iteration with MPI_SEND and MPI_RECV

8.1.3 Jacobi iteration with the combined send-receive call. The paired halo exchange of Listing 21 can be written more compactly with MPI's combined send-receive call:

MPI_SENDRECV(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, status)
  IN  sendbuf   initial address of the send buffer
  IN  sendcount number of elements to send
  IN  sendtype  datatype of the send elements
  IN  dest      rank of the destination
  IN  sendtag   send tag
  OUT recvbuf   initial address of the receive buffer
  IN  recvcount maximum number of elements to receive
  IN  recvtype  datatype of the receive elements
  IN  source    rank of the source
  IN  recvtag   receive tag
  IN  comm      communicator
  OUT status    status object
int MPI_Sendrecv(void *sendbuf, int sendcount, MPI_Datatype sendtype, int dest, int sendtag, void *recvbuf, int recvcount, MPI_Datatype recvtype, int source, int recvtag, MPI_Comm comm, MPI_Status *status)
MPI_SENDRECV(SENDBUF, SENDCOUNT, SENDTYPE, DEST, SENDTAG, RECVBUF, RECVCOUNT, RECVTYPE, SOURCE, RECVTAG, COMM, STATUS, IERROR)
    <type> SENDBUF(*), RECVBUF(*)
    INTEGER SENDCOUNT, SENDTYPE, DEST, SENDTAG, RECVCOUNT, RECVTYPE, SOURCE, RECVTAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR

MPI call 13: MPI_SENDRECV. MPI_SENDRECV performs a send and a receive in one call; the MPI implementation orders the two operations so that the combined call cannot deadlock. The variant MPI_SENDRECV_REPLACE uses a single buffer for both the outgoing and the incoming message:

MPI_SENDRECV_REPLACE(buf,count,datatype,dest,sendtag,source,recvtag,comm, status) INOUT buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN sendtag ( ) IN source ( ) IN recvtag ( ) IN comm ( ) OUT status (status) int MPI_Sendrecv_replace(void *buf, int count, MPI_Datatype datatype, int dest, int sendtag, int source,int recvtag, MPI_Comm comm, MPI_Status *status) MPI_SENDRECV_REPLACE(BUF, COUNT, DATATYPE, DEST, SENDTAG, SOURCE, RECVTAG, COMM, STATUS, IERROR) BUF(*) INTEGER COUNT, DATATYPE, DEST, SENDTAG, SOURCE, RECVTAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR MPI 14 MPI_SENDRECV_REPLACE Jacobi MPI_SENDRECV 28 28 MPI_SENDRECV Jacobi 57

22 MPI_SENDRECV program main implicit none include 'mpif.h' integer totalsize,mysize,steps parameter (totalsize=16) parameter (mysize=totalsize/4,steps=10) integer n, myid, numprocs, i, j,rc real a(totalsize,mysize+2),b(totalsize,mysize+2) integer begin_col,end_col,ierr integer status(mpi_status_size) call MPI_INIT( ierr ) call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr ) call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr ) print *, "Process ", myid, " of ", numprocs, " is alive" C do j=1,mysize+2 do i=1,totalsize a(i,j)=0.0 end do end do if (myid.eq. 0) then do i=1,totalsize a(i,2)=8.0 end do end if if (myid.eq. 3) then do i=1,totalsize a(i,mysize+1)=8.0 end do end if do i=1,mysize+2 a(1,i)=8.0 a(totalsize,i)=8.0 end do C 58

do n=1,steps C C if (myid.eq. 0) then call MPI_SEND(a(1,mysize+1),totalsize,MPI_REAL,myid+1,10, * MPI_COMM_WORLD,ierr) else if (myid.eq. 3) then call MPI_RECV(a(1,1),totalsize,MPI_REAL,myid-1,10, * MPI_COMM_WORLD,status,ierr) else call MPI_SENDRECV(a(1,mysize+1),totalsize,MPI_REAL,myid+1,10, * a(1,1),totalsize,mpi_real,myid-1,10, * MPI_COMM_WORLD,status,ierr) end if if (myid.eq. 0) then call MPI_RECV(a(1,mysize+2),totalsize,MPI_REAL,myid+1,10, * MPI_COMM_WORLD,status,ierr) else if (myid.eq. 3) then call MPI_SEND(a(1,2),totalsize,MPI_REAL,myid-1,10, * MPI_COMM_WORLD,ierr) else call MPI_SENDRECV(a(1,2),totalsize,MPI_REAL,myid-1,10, * a(1,mysize+2),totalsize,mpi_real,myid+1,10, * MPI_COMM_WORLD,status,ierr) end if begin_col=2 end_col=mysize+1 if (myid.eq. 0) then begin_col=3 endif if (myid.eq. 3) then end_col=mysize endif do j=begin_col,end_col do i=2,totalsize-1 b(i,j)=(a(i,j+1)+a(i,j-1)+a(i+1,j)+a(i-1,j))*0.25 end do end do do j=begin_col,end_col do i=2,totalsize-1 a(i,j)=b(i,j) 59

end do end do end do do i=2,totalsize-1 print *, myid,(a(i,j),j=begin_col,end_col) end do call MPI_Finalize(rc) end 22 MPI_SENDRECV Jacobi 8.1.4 Jacobi MPI_PROC_NULL MPI MPI_PRC_NULL MPI_PROC_NULL Jacobi program main implicit none include 'mpif.h' integer totalsize,mysize,steps parameter (totalsize=16) parameter (mysize=totalsize/4,steps=10) integer n, myid, numprocs, i, j,rc real a(totalsize,mysize+2),b(totalsize,mysize+2) integer begin_col,end_col,ierr integer left,right,tag1,tag2 integer status(mpi_status_size) call MPI_INIT( ierr ) call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr ) call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr ) print *, "Process ", myid, " of ", numprocs, " is alive" 60

C do j=1,mysize+2 do i=1,totalsize a(i,j)=0.0 end do end do if (myid.eq. 0) then do i=1,totalsize a(i,2)=8.0 end do end if if (myid.eq. 3) then do i=1,totalsize a(i,mysize+1)=8.0 end do end if do i=1,mysize+2 a(1,i)=8.0 a(totalsize,i)=8.0 end do C C C C tag1=3 tag2=4 if (myid.gt. 0) then left=myid-1 else left=mpi_proc_null end if if (myid.lt. 3) then right=myid+1 else right=mpi_proc_null end if Jacobi do n=1,steps call MPI_SENDRECV(a(1,mysize+1),totalsize,MPI_REAL,right,tag1, * a(1,1),totalsize,mpi_real,left,tag1, * MPI_COMM_WORLD,status,ierr) call MPI_SENDRECV(a(1,2),totalsize,MPI_REAL,left,tag2, * a(1,mysize+2),totalsize,mpi_real,right,tag2, * MPI_COMM_WORLD,status,ierr) 61
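The left/right neighbour set-up used with MPI_PROC_NULL above can be isolated into two small helpers. This is a sketch only: `PROC_NULL_SENTINEL` is an invented stand-in for the real MPI_PROC_NULL constant.

```c
#define PROC_NULL_SENTINEL (-1)  /* stand-in for MPI_PROC_NULL */

/* Left neighbour of `rank` in a non-periodic chain; the first rank
   gets the null sentinel, so a send or receive aimed at it becomes a
   no-op, exactly as with MPI_PROC_NULL. */
static int left_neighbor(int rank, int size)
{
    (void)size;  /* unused for the left edge */
    return rank > 0 ? rank - 1 : PROC_NULL_SENTINEL;
}

/* Right neighbour; the last rank gets the null sentinel. */
static int right_neighbor(int rank, int size)
{
    return rank < size - 1 ? rank + 1 : PROC_NULL_SENTINEL;
}
```

Because boundary ranks simply receive a null partner, the main loop needs no `if (myid .eq. 0)` or `if (myid .eq. 3)` special cases around the communication calls, which is the whole point of the MPI_PROC_NULL version of the Jacobi program.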

begin_col=2 end_col=mysize+1 if (myid.eq. 0) then begin_col=3 endif if (myid.eq. 3) then end_col=mysize endif do j=begin_col,end_col do i=2,totalsize-1 b(i,j)=(a(i,j+1)+a(i,j-1)+a(i+1,j)+a(i-1,j))*0.25 end do end do do j=begin_col,end_col do i=2,totalsize-1 a(i,j)=b(i,j) end do end do end do do i=2,totalsize-1 print *, myid,(a(i,j),j=begin_col,end_col) end do call MPI_Finalize(rc) end 23 Jacobi 8.2 MPI 8.2.1 C=A B 29 B A B A A 62

A B 29 program main include "mpif.h" integer MAX_ROWS,MAX_COLS, rows, cols parameter (MAX_ROWS=1000, MAX_COLS=1000) double precision a(max_rows, MAX_COLS),b(MAX_COLS),c(MAX_COLS) double precision buffer (MAX_COLS), ans integer myid, master, numprocs, ierr, status(mpi_status_size) integer i,j,numsent, numrcvd, sender integer anstype, row call MPI_INIT(ierr) call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr) call MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr) master=0 rows=100 cols=100 C if (myid.eq. master) then A B do i=1,cols b(i)=1 do j=1,rows a(i,j)=i end do end do numsent=0 numrcvd=0 63

C C C C C C C C C C C C B call MPI_BCAST(b,cols,MPI_DOUBLE_PRECISION,master, $ MPI_COMM_WORLD, ierr) A numprocs-1 do i=1,min(numprocs-1,rows) do j=1,cols buffer(j)=a(i,j) end do call MPI_SEND(buffer, cols, MPI_DOUBLE_PRECISION,i, $ i,mpi_comm_world, ierr) numsent=numsent+1 end do do i=1,row call MPI_RECV(ans, 1,MPI_DOUBLE_PRECISION, MPI_ANY_SOURCE, $ MPI_ANY_TAG,MPI_COMM_WORLD, status, ierr) sender=status(mpi_source) anstype=status(mpi_tag) C c(anstype)=ans if (numsent.lt. rows) then do j=1,cols buffer(j)=a(numsent+1,j) end do call MPI_SEND(buffer,cols, MPI_DOUBLE_PRECISION, sender, $ numsent+1,mpi_comm_world, ierr) numsent=numsent+1 else 0 call MPI_SEND(1.0,0,MPI_DOUBLE_PRECISION,sender, $ 0, MPI_COMM_WORLD, ierr) end if else B call MPI_BCAST(b,cols,MPI_DOUBLE_PRECISION,master, $ MPI_COMM_WORLD, ierr) 64

C A 90 call MPI_RECV(buffer,cols, MPI_DOUBLE_PRECISION, master, $ MPI_ANY_TAG, MPI_COMM_WORLD, status,ierr) C 0 if (status(mpi_tag).ne. 0) then row=status(mpi_tag) ans=0.0 do i=1,cols ans=ans+buffer(i)*b(i) end do C call MPI_SEND(ans, 1, MPI_DOUBLE_PRECISION, master, row, $ MPI_COMM_WORLD, ierr) goto 90 end if endif call MPI_FINALIZE(ierr) end 24 8.2.2 30 0 1 2... 30 65
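In the master/slave matrix-vector program, each slave's unit of work is the inner product of one row of A with the vector b. A minimal serial sketch of that unit (the helper name `row_times_vector` is invented):

```c
/* Inner product of one matrix row with the vector b: the work a slave
   performs for each row it receives, before sending the scalar result
   back to the master tagged with the row number. */
static double row_times_vector(const double *row, const double *b, int cols)
{
    double ans = 0.0;
    for (int i = 0; i < cols; i++)
        ans += row[i] * b[i];
    return ans;
}
```

The master's job is then pure bookkeeping: hand out rows, collect scalars, and use the message tag to know which entry of c each answer fills.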

#include <stdio.h>
#include <string.h>   /* strlen */
#include "mpi.h"

#define MSG_EXIT            1
#define MSG_PRINT_ORDERED   2   /* output printed in rank order */
#define MSG_PRINT_UNORDERED 3   /* output printed on arrival */

int master_io(void);
int slave_io(void);

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        master_io();  /* process 0 gathers and prints */
    else
        slave_io();   /* every other process produces output */
    MPI_Finalize();
    return 0;
}

/* Master: receive output lines from the slaves and print them */
int master_io(void)
{
    int i, size, nslave, firstmsg;
    char buf[256], buf2[256];
    MPI_Status status;

    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    nslave = size - 1;                      /* number of slaves */
    while (nslave > 0) {                    /* until every slave has exited */
        MPI_Recv(buf, 256, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        switch (status.MPI_TAG) {
        case MSG_EXIT:                      /* one slave fewer */
            nslave--;
            break;
        case MSG_PRINT_UNORDERED:           /* print immediately */
            fputs(buf, stdout);
            break;
        case MSG_PRINT_ORDERED:             /* print in rank order */
            firstmsg = status.MPI_SOURCE;
            for (i = 1; i < size; i++) {
                if (i == firstmsg)
                    fputs(buf, stdout);     /* already received */
                else {                      /* receive from rank i explicitly */
                    MPI_Recv(buf2, 256, MPI_CHAR, i, MSG_PRINT_ORDERED,
                             MPI_COMM_WORLD, &status);
                    fputs(buf2, stdout);
                }
            }
            break;
        }
    }
    return 0;
}

/* Slave: send output lines to the master instead of printing */
int slave_io(void)
{
    char buf[256];
    int rank;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    sprintf(buf, "Hello from slave %d ordered print\n", rank);
    MPI_Send(buf, strlen(buf) + 1, MPI_CHAR, 0, MSG_PRINT_ORDERED,
             MPI_COMM_WORLD);               /* ordered output */
    sprintf(buf, "Goodbye from slave %d, ordered print\n", rank);
    MPI_Send(buf, strlen(buf) + 1, MPI_CHAR, 0, MSG_PRINT_ORDERED,
             MPI_COMM_WORLD);               /* ordered output */
    sprintf(buf, "I'm exiting (%d),unordered print\n", rank);
    MPI_Send(buf, strlen(buf) + 1, MPI_CHAR, 0, MSG_PRINT_UNORDERED,
             MPI_COMM_WORLD);               /* unordered output */
    MPI_Send(buf, 0, MPI_CHAR, 0, MSG_EXIT, MPI_COMM_WORLD);  /* done */
    return 0;
}

Listing 25  Master/slave program with ordered and unordered output

Figure 31 shows the output of a run with 10 processes (one master, slaves 1 to 9):

Hello from slave 1,ordered print
Hello from slave 2,ordered print
Hello from slave 3,ordered print
Hello from slave 4,ordered print
Hello from slave 5,ordered print
Hello from slave 6,ordered print
Hello from slave 7,ordered print
Hello from slave 8,ordered print
Hello from slave 9,ordered print

Goodbye from slave 1,ordered print Goodbye from slave 2,ordered print Goodbye from slave 3,ordered print Goodbye from slave 4,ordered print Goodbye from slave 5,ordered print Goodbye from slave 6,ordered print Goodbye from slave 7,ordered print Goodbye from slave 8,ordered print Goodbye from slave 9,ordered print I'm exiting (1),unordered print I'm exiting (3),unordered print I'm exiting (4),unordered print I'm exiting (7),unordered print I'm exiting (8),unordered print I'm exiting (9),unordered print I'm exiting (2),unordered print I'm exiting (5),unordered print I'm exiting (6),unordered print 31 8.3 MPI MPI MPI MPI 68
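The MSG_PRINT_ORDERED branch of the master effectively reorders messages that arrived in arbitrary source order into rank order. The invented helper `slot_for` captures that lookup in plain, MPI-free C:

```c
/* Find which arrival slot holds the message from rank `want`, given
   the source rank recorded for each arrival (as status.MPI_SOURCE
   would record it in the master).  Returns -1 if rank `want` has not
   sent anything yet. */
static int slot_for(const int *arrival_rank, int n, int want)
{
    for (int k = 0; k < n; k++)
        if (arrival_rank[k] == want)
            return k;
    return -1;
}
```

The real master avoids buffering everything by receiving from each remaining rank explicitly, but the effect is the same: output appears in rank order regardless of arrival order.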

9  MPI communication modes

Besides the standard mode used so far, MPI defines three further communication modes for point-to-point messages: buffered mode, synchronous mode, and ready mode. The corresponding send calls are MPI_SEND, MPI_BSEND, MPI_SSEND, and MPI_RSEND; the prefixes B, S, and R name the buffered, synchronous, and ready modes, and all four sends are matched by the same receive call, MPI_RECV.

9.1  Standard mode

In standard communication mode (Figure 32) it is up to MPI whether the outgoing message is buffered. If MPI buffers it, the send can complete before a matching receive has been posted; if not, the send blocks until the matching receive starts and the data can be delivered. 69

32 9.2 33 MPI_BSEND(buf, count, datatype, dest, tag, comm) IN buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN tag ( ) IN comm ( ) int MPI_Bsend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm) MPI_BSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR) <type>buf(*) INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR MPI 15 MPI_BSEND MPI_BSEND MPI_SEND 70

33 MPI MPI MPI_BUFFER_ATTACH( buffer, size) IN buffer ( ) IN size ( ) int MPI_Buffer_attach( void* buffer, int size) MPI_BUFFER_ATTACH( BUFFER, SIZE, IERROR) <type>bufferr(*) INTEGER SIZE, IERROR MPI 16 MPI_BUFFER_ATTACH MPI_BUFFER_ATTACH size MPI MPI_BUFFER_DETACH( buffer, size) OUT buffer ( ) OUT size ( ) int MPI_Buffer_detach( void** buffer, int* size) MPI_BUFFER_DETACH( BUFFER, SIZE, IERROR) <type>buffer(*) INTEGER SIZE, IERROR MPI 17 MPI_BUFFER_DETACH MPI_BUFFER_DETACH size buffer 5 71
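A workable size for the buffer handed to MPI_BUFFER_ATTACH is the packed data size plus a fixed per-message overhead. The sketch below is hedged: it assumes `count * sizeof(double)` as a lower bound on what MPI_Pack_size would report, and uses an invented `OVERHEAD` constant in place of the implementation-defined MPI_BSEND_OVERHEAD.

```c
/* Invented stand-in for MPI_BSEND_OVERHEAD, whose actual value is
   implementation-defined and provided by mpi.h. */
#define OVERHEAD 96

/* Conservative buffer size for `nmsgs` buffered sends of `count`
   doubles each: packed data plus per-message overhead, per message. */
static int bsend_buffer_size(int count, int nmsgs)
{
    int packed = count * (int)sizeof(double); /* lower bound on pack size */
    return nmsgs * (packed + OVERHEAD);
}
```

This mirrors the pattern in the example program, which calls MPI_Pack_size and then attaches `bsize + MPI_BSEND_OVERHEAD` bytes per pending message; attaching too little makes MPI_BSEND fail once the buffer fills.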

#include <stdio.h> #include <stdlib.h> #include "mpi.h" #define SIZE 6 /* */ static int src = 0; static int dest = 1; void Generate_Data ( double *, int ); /* */ void Normal_Test_Recv ( double *, int ); /* */ void Buffered_Test_Send ( double *, int ); /* */ void Generate_Data(buffer, buff_size) double *buffer; int buff_size; { int i; for (i = 0; i < buff_size; i++) buffer[i] = (double)i+1; } void Normal_Test_Recv(buffer, buff_size) double *buffer; int buff_size; { int i, j; MPI_Status Stat; double *b; b = buffer; /* buff_size - 1 */ MPI_Recv(b, (buff_size - 1), MPI_DOUBLE, src, 2000, MPI_COMM_WORLD, &Stat); fprintf(stderr,"standard receive a message of %d data\n",buff_size-1); for (j=0;j<buff_size-1;j++) fprintf(stderr," buf[%d]=%f\n",j,b[j]); b += buff_size - 1; /* */ MPI_Recv(b, 1, MPI_DOUBLE, src, 2000, MPI_COMM_WORLD, &Stat); fprintf(stderr,"standard receive a message of one data\n"); fprintf(stderr,"buf[0]=%f\n",*b); 72

} void Buffered_Test_Send(buffer, buff_size) double *buffer; int buff_size; { int i, j; void *bbuffer; int size; fprintf(stderr,"buffered send message of %d data\n",buff_size-1); for (j=0;j<buff_size-1;j++) fprintf(stderr,"buf[%d]=%f\n",j,buffer[j]); /* buff_size - 1 */ MPI_Bsend(buffer, (buff_size - 1), MPI_DOUBLE, dest, 2000, MPI_COMM_WORLD); buffer += buff_size - 1; fprintf(stderr,"buffered send message of one data\n"); fprintf(stderr,"buf[0]=%f\n",*buffer); /* 1 */ MPI_Bsend(buffer, 1, MPI_DOUBLE, dest, 2000, MPI_COMM_WORLD); /* */ MPI_Buffer_detach( &bbuffer, &size ); /* */ MPI_Buffer_attach( bbuffer, size ); } int main(int argc, char **argv) { int rank; /* My Rank (0 or 1) */ double buffer[size], *tmpbuffer, *tmpbuf; int tsize, bsize; char *Current_Test = NULL; MPI_Init(&argc, &argv); MPI_Comm_rank(MPI_COMM_WORLD, &rank); if (rank == src) /* */ Generate_Data(buffer, SIZE);/* */ MPI_Pack_size( SIZE, MPI_DOUBLE, MPI_COMM_WORLD, &bsize ); /* SIZE MPI_DOUBLE */ tmpbuffer = (double *) malloc( bsize + 2*MPI_BSEND_OVERHEAD ); /* */ 73

if (!tmpbuffer) { fprintf( stderr, "Could not allocate bsend buffer of size %d\n", bsize ); MPI_Abort( MPI_COMM_WORLD, 1 ); } MPI_Buffer_attach( tmpbuffer, bsize + MPI_BSEND_OVERHEAD ); /* MPI MPI */ Buffered_Test_Send(buffer, SIZE);/* */ MPI_Buffer_detach( &tmpbuf, &tsize );/* */ } else if (rank == dest) { /* */ Normal_Test_Recv(buffer, SIZE);/* */ } } else { fprintf(stderr, "*** This program uses exactly 2 processes! ***\n"); /* */ MPI_Abort( MPI_COMM_WORLD, 1 ); } MPI_Finalize(); 26 9.3 MPI_SSEND(buf, count, datatype, dest, tag, comm) IN buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN tag ( ) IN comm ( ) int MPI_Ssend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm) MPI_SSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR) <type> BUF(*) INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR) MPI 18 MPI_SSEND 34 74

34 1 4 tag 1 2 #include <stdio.h> #include "mpi.h" #define SIZE 10 /* Amount of time in seconds to wait for the receipt of the second Ssend message */ static int src = 0; static int dest = 1; int main( int argc, char **argv) { int rank; /* My Rank (0 or 1) */ int act_size = 0; int flag, np, rval, i; int buffer[size]; MPI_Status status, status1, status2; int count1, count2; MPI_Init(&argc, &argv); MPI_Comm_rank(MPI_COMM_WORLD, &rank); MPI_Comm_size( MPI_COMM_WORLD, &np ); if (np!= 2) { fprintf(stderr, "*** This program uses exactly 2 processes! ***\n"); MPI_Abort( MPI_COMM_WORLD, 1 ); } act_size = 5;/* */ if (rank == src) { /* */ act_size = 1; MPI_Ssend( buffer, act_size, MPI_INT, dest, 1, MPI_COMM_WORLD ); /* tag 1*/ fprintf(stderr,"mpi_ssend %d data,tag=1\n", act_size); 75

} act_size = 4; MPI_Ssend( buffer, act_size, MPI_INT, dest, 2, MPI_COMM_WORLD ); /* 4 tag 2*/ fprintf(stderr,"mpi_ssend %d data,tag=2\n", act_size); } else if (rank == dest) {/* */ MPI_Recv( buffer, act_size, MPI_INT, src, 1, MPI_COMM_WORLD, &status1 ); /* act_size tag 1*/ MPI_Recv( buffer, act_size, MPI_INT, src, 2, MPI_COMM_WORLD, &status2 ); /* act_size tag 2*/ MPI_Get_count( &status1, MPI_INT, &count1 );/* 1 */ fprintf(stderr,"receive %d data,tag=%d\n",count1,status1.mpi_tag); MPI_Get_count( &status2, MPI_INT, &count2 );/* 2 */ fprintf(stderr,"receive %d data,tag=%d\n",count2,status2.mpi_tag); } MPI_Finalize(); 27 9.4 MPI_RSEND(buf, count, datatype, dest, tag, comm) IN buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN tag ( ) IN comm ( ) int MPI_Rsend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm) MPI_RSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR) <type>buf(*) INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR MPI 19 MPI_RSEND 35 76

35 1 4 1 2 3 4 36 1 2 3 4 3 4 3 4 2 3 1 2 1 4 C program rsendtest include 'mpif.h' integer ierr call MPI_Init(ierr) call test_rsend 77

call MPI_Finalize(ierr) end subroutine test_rsend include 'mpif.h' integer TEST_SIZE parameter (TEST_SIZE=2000) integer ierr, prev, next, count, tag, index, i, outcount, $ requests(2), indices(2), rank, size, $ status(mpi_status_size), statuses(mpi_status_size,2) logical flag real send_buf( TEST_SIZE ), recv_buf ( TEST_SIZE ) call MPI_Comm_rank( MPI_COMM_WORLD, rank, ierr ) call MPI_Comm_size( MPI_COMM_WORLD, size, ierr ) if (size.ne. 2) then print *, 'This test requires exactly 2 processes' call MPI_Abort( 1, MPI_COMM_WORLD, ierr ) endif C C C C C C next = rank + 1 if (next.ge. size) next = 0 prev = rank - 1 if (prev.lt. 0) prev = size - 1 if (rank.eq. 0) then print *, " Rsend Test " end if tag = 1456 count = TEST_SIZE / 3 if (rank.eq. 0) then call MPI_Recv( MPI_BOTTOM, 0, MPI_INTEGER, next, tag, $ MPI_COMM_WORLD, status, ierr ) 0 0 MPI_BOTTOM MPI print *,"Process ",rank," post Ready send" call MPI_Rsend(send_buf, count, MPI_REAL, next, tag, $ MPI_COMM_WORLD, ierr) else print *, "process ",rank," post a receive call" call MPI_Irecv(recv_buf, TEST_SIZE, MPI_REAL, 78

C C C C C $ MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, $ requests(1), ierr) 1 call MPI_Send( MPI_BOTTOM, 0, MPI_INTEGER, next, tag, $ MPI_COMM_WORLD, ierr ) MPI_Irecv call MPI_Wait( requests(1), status, ierr ) print *,"Process ", rank," Receive Rsend message from ", $ status(mpi_source) end if end 28 MPI 79

10 MPICH MPI MPI MPICH MPICH Linux NT MPI MPICH MPI MPICH MPI MPICH MPI MPICH MPI MPICH Argonne National Laboratory Mississippi State University IBM MPI 10.1 Linux MPICH 10.1.1 1 MPICH mpich.tar.gz mpich.tar.z mpich.tar.gz gunzip http://www.mcs.anl.org/mpi/mpich/ ftp ftp://ftp.mcs.anl.org/pub/mpi ftp://ftp.mcs.anl.org/pub/mpisplit ftp://ftp.mcs.anl.org/pub/mpisplit cat 2 tar zxvf mpich.tar.gz gunzip c mpich.tar.gz tar xovf zcat mpich.tar.z tar xovf uncompress mpich.tar.z tar xvf mpich.tar 3 mpich cd mpich 1.1.1 1.1.2 4 Makefile./configure prefix./configure prefix=/usr/local/mpich-1.2.1 make configure MPI make MPI 80

5 cd examples/basic make cpi../../bin/mpirun np 4 cpi $(HOME)/mpich make testing 6 mpich make install prefix 10.1.2 $ HOME /mpich-1.2.1/mpi-2-c++ mpich C++ $ HOME /mpich-1.2.1/bin mpich $ HOME /mpich-1.2.1/doc mpich $ HOME /mpich-1.2.1/examples mpich $ HOME /mpich-1.2.1/f90modules mpich Fortran90 $ HOME /mpich-1.2.1/include mpich $ HOME /mpich-1.2.1/lib mpich $ HOME /mpich-1.2.1/man mpich $ HOME /mpich-1.2.1/mpe mpich $ HOME /mpich-1.2.1/mpid mpich $ HOME /mpich-1.2.1/romio mpich I/O $ HOME /mpich-1.2.1/share upshot jumpshot $ HOME /mpich-1.2.1/src mpich $ HOME /mpich-1.2.1/util mpich $ HOME /mpich-1.2.1/www mpich MPI 81

10.1.3 mpicc/mpicc/mpif77/mpif90 mpicc C++ MPI mpicc C mpif77 mpif90 FORTRAN77 Fortran90 MPI MPI mpicc C -mpilog MPE log -mpitrace MPI -mpilog -mpianim -show -help -echo C++/C/FORTRAN77/Fortran90 10.1.4 MPI SPMD Single Program Multiple Data MPI MASTER/SLAVER MPI MPI C FORTRAN MPI 1 2 N MPI 37 MPI 82

MPI 37 1 MPI MPI 2 3 mpirun MPI 10.1.5 MPI MPI /etc/hosts.equiv MPI tp5 16 MPI tp1,tp2,...,tp16 tp1,...,tp16 /etc/hosts.equiv tp5 tp5 /etc/hosts.equiv.rhosts MPI home.rhosts tp1 pact tp5 pact tp1 pact home.rhosts tp5 pact MPI 10.1.6 MPI mpirun np N program N program MPI $(HOME)/mpich/util/machines/machines.LINUX tp5.cs.tsinghua.edu.cn tp1.cs.tsinghua.edu.cn tp2.cs.tsinghua.edu.cn tp3.cs.tsinghua.edu.cn tp4.cs.tsinghua.edu.cn tp8.cs.tsinghua.edu.cn 83

6 MPI tp5.cs.tsinghua.edu.cn $(HOME)/mpich/examples/basic/ mpirun np 6 cpi {tp1,tp2,tp3,tp4,tp8} $(HOME)/mpich/examples/basic/ cpi mashines.linux mpirun machinefile hosts np 6 cpi hosts mpirun p4pg pgfile cpi pgfile 38 < > < > < > < > < > < > < > < > < > 38 39 tp5 0 /home/pact/mpich/examples/basic/cpi tp1 1 /home/pact/mpich/examples/basic/cpi tp2 1 /home/pact/mpich/examples/basic/cpi tp3 1 /home/pact/mpich/examples/basic/cpi tp4 1 /home/pact/mpich/examples/basic/cpi tp8 1 /home/pact/mpich/examples/basic/cpi 39 0 tp5 0 tp5 MPI mpirun MPI MPI MPI 84

mpirun -np <number of processes> <program name and arguments> MPI MPI chameleon ( chameleon/pvm, chameleon/p4,...) meiko ( meiko ) paragon (paragon ch_nx ) p4 ( ch_p4 ) ibmspx (IBM SP2 ch_eui) anlspx (ANLs SPx ch_eui) ksr (KSR 1 2 ch_p4) sgi_mp (SGI ch_shmem) cray_t3d (Cray T3D t3d) smp (SMPs ch_shmem) execer ( ) MPI MPI mpirun [mpirun_options...] <progname> [options...] -arch <architecture> ${MPIR_HOME}/util/machines machines.<arch> -h -machine <machine name> use startup procedure for <machine name> -machinefile <machine-file name> -np <np> -nolocal -stdin filename -t -v -dbx dbx -gdb gdb -xxgdb xxgdb -tv totalview NEC - CENJU-3 -batch -stdout filename -stderr filename Nexus -nexuspg filename -np -nolocal -leave_pg -nexusdb filename Nexus -e execer -pg p4 execer -leave_pg P4 -p4pg filename -np -nolocal 85

-leave_pg -tcppg filename tcp -np nolocal -leave_pg -p4ssport num p4 num num=0 MPI_P4SSPORT MPI_USEP4SSPORT MPI_P4SSPORT -p4ssport 0 -mvhome home -mvback files -maxtime min -nopoll -mem value -cpu time CPU IBM SP2 -cac name ANL Intel Paragon -paragontype name -paragonname name shells -paragonpn name Paragon -arch -np MPI sun4 rs6000 sun4 2 rs6000 3 mpirun -arch sun4 -np 2 -arch rs6000 -np 3 program sun4 program.sun4 rs6000 program.rs6000 %a mpirun -arch sun4 -np 2 -arch rs6000 -np 3 program.%a /tmp/me/sun4 /tmp/me/rs6000 mpirun -arch sun4 -np 2 -arch rs6000 -np 3 /tmp/me/%a/program 10.1.7 mpiman MPI UNIX man Web HTML mpiman xman, X -xmosaic xmosaic Web -mosaic mosaic Web -netscape netscape Web -xman X xman -man man program ( mpiman -man MPI_Send) mpireconfig 86

make MPICH make mpireconfig filename filename filename.in 10.2 Windows NT MPICH NT MPICH MPICH.NT.1.2.0.4 tcp/ip, VIA sockets VI MS Visual C++ 6.0 Digital Fortran 6.0 FORTRAN MPI PMPI C FORTRAN 10.2.1 ftp://ftp.mcs.anl.gov/pub/mpi/nt/mpich.nt.1.2.0.4.all.zip setup MPICH NT c:\program Files\Argonne National Lab\MPICH.NT.1.2.0.4 MPI launcher MPI sdk 10.2.2 C/C++ MPI MS Visual C++ makefile project include [MPICH Home]\include Debug - /MTd Release - /MT Debug - ws2_32.lib mpichd.lib pmpichd.lib romiod.lib Release - ws2_32.lib mpich.lib pmpich.lib romio.lib pmpich*.lib MPI PMPI_ * lib [MPICH Home]\lib MPI build 87

FORTRAN FORTRAN Visual Fortran 6+ mpif.h Visual Fortran 6+ /iface:cref /iface:nomixed_str_len_arg C/C++ NT MPICH VIA 10.2.3 NT MPICH Remote Shell Server MPIRun.exe Simple Launcher MPIRun.exe MPICH Remote Shell Server MPI DCOM server SYSTEM MPIRun Remote Shell Server MPIRun MPI Remote Shell Server MPIRun.exe MPI MPIRun -np MPIRun.exe c:\program Files\Argonne National Lab\MPICH.NT.1.2.0.4\RemoteShell\Bin MPI MPI MPIConfig MPIConfig c:\program Files\Argonne National Lab\MPICH.NT.1.2.0.4\RemoteShell\Bin MPI MPIConfig MPI Refresh: Find: Verify: DCOM server Set: "set HOSTS" MPIRun "set TEMP" remote shell service MPI C:\ 88

timeout Remote Shell Server MPIRun.exe MPI 40 MPIRun configfile [-logon] [args...] MPIRun -np #processes [-logon] [-env "var1=val1 var2=val2..."] executable [args...] MPIRun -localonly #processes [-env "var1=val1 var2=val2..."] executable [args...] 41 40 NT MPI exe c:\somepath\myapp.exe \\host\share\somepath\myapp.exe [args arg1 arg2 arg3...] [env VAR1=VAL1 VAR2=VAL2... VARn=VALn] hosts hosta #procs [path\myapp.exe] hostb #procs [\\host\share\somepath\myapp2.exe] hostc #procs... 41 NT MPI 8 NT01 NT02... NT08 MPI testmpint c:\mpint 42 mpiconf1 exe c:\mpint\testmpint.exe hosts NT01 1 NT02 1 NT03 1 NT04 1 NT05 1 NT06 1 NT07 1 NT08 1 42 NT MPI 1 mpirun mpiconf1 testmpint 8 89

43 mpiconf2 exe c:\mpint\testmpint.exe hosts NT01 1 c:\mpint\testmpint.exe NT02 1 d:\mpint\testmpint2.exe NT03 1 e:\mpint\testmpint1.exe NT04 1 c:\testmpint.exe NT05 1 c:\test\testmpint9.exe NT06 1 d:\abc\abc.exe NT07 1 c:\temp\testmpint7.exe NT08 1 c:\mpint\testmpint.exe 43 NT MPI 2 mpirun mpiconf2 testmpint 8 44 mpiconf3 exe c:\mpint\testmpint.exe hosts NT01 2 c:\mpint\testmpint.exe NT02 3 d:\mpint\testmpint2.exe NT03 1 e:\mpint\testmpint1.exe NT04 4 c:\testmpint.exe NT05 1 c:\test\testmpint9.exe NT06 1 d:\abc\abc.exe NT07 2 c:\temp\testmpint7.exe NT08 1 c:\mpint\testmpint.exe 44 NT MPI 3 mpirun mpiconf2 MPI 15 NT01 2 NT02 3... NT08 1 90

mpirun -localonly 8 testmpint 8 MPIRun.exe -localonly #procs -tcp -tcp sockets -env "var1=val1 var2=val2 var3=val3...varn=valn" -logon mpirun mpiregister.exe MPIRegister.exe c:\program Files\Argonne National Lab\ MPICH.NT.1.2.0.4\RemoteShell\Bin\MPIRegister.exe MPIRun.exe mpirun MPIRegister MPIRegister -remove mpirun MPIRegister -remove 10.2.4 MPI 91

11 MPI 11.1 ierr Fortran MPI ierr C Fortran Fortran status status MPI_Recv status Fortran C 10 string10 character*10 string10 character string10 10 10 Fortran MPI MPI_ MPI MPI_ argc argv MPI argc argv C MPI argc argv MPI_Init argc argv MPI argc MPI argv MPI argc argv MPI_Init MPI_Finalize MPI MPI_Init MPI_Finalize MPI MPI_Recv MPI_Bcast MPI_Bcast MPI_Bcast MPI_Send MPI MPI_Recv MPI_Bcast MPI_Bcast MPI_Recv MPI 92

MPI MPI MPI MPI pthread MPICH-1.1.1 MPICH MPI_Send MPI_Recv 1 2 MPI 1 2 MPI_Send 2 MPI_Send 1 MPI_Recv 2MPI_Recv 1 1 2 MPI_Sendrecv MPI_Send MPI_Recv MPI_Buffer_attach MPI_Isend MPI_Irecv C FORTRAN C FORTRAN, address( MPI_BOTTOM ) FORTRAN COMMON, C 11.2 MPI MPICH MPI SPMD 93
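One standard remedy for the send/send deadlock described above is to break the symmetry of the exchange: in every communicating pair, one side sends first and the other receives first. The tiny predicate below (an invented name, `sends_first`) encodes one such rule, letting the lower rank of each pair send first:

```c
/* Deadlock-avoiding ordering for a symmetric blocking exchange: the
   lower rank of each pair sends first and the higher rank receives
   first, so two blocking MPI_Send calls never face each other. */
static int sends_first(int my_rank, int partner_rank)
{
    return my_rank < partner_rank;
}
```

Exactly one process in each pair satisfies the predicate, which is what guarantees progress; MPI_SENDRECV, buffered sends, and the nonblocking calls achieve the same safety without reordering the code by hand.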

NT Lilux MPI 11.3 MPI 94

MPI MPI MPI 95

96 12 MPI 12.1 1 2 45 46 0 1 1

CALL MPI_COMM_RANK(comm, rank, ierr) IF (rank.eq.0) THEN CALL MPI_BSEND(buf1, count, MPI_REAL, 1, tag, comm, ierr) CALL MPI_BSEND(buf2, count, MPI_REAL, 1, tag, comm, ierr) ELSE IF rank.eq.1 THEN CALL MPI_RECV(buf1, count, MPI_REAL, 0, MPI_ANY_TAG, comm, status, ierr) CALL MPI_RECV(buf2, count, MPI_REAL, 0, tag, comm, status, ierr) END IF 29 0 1 1 1 2 1 2 i 1 i 46 12.2 MPI I/O I/O 97

47 MPI + MPI MPI 7 MPI MPI_ISEND MPI_IRECV MPI_IBSEND MPI_ISSEND MPI_IRSEND MPI_SEND_INIT MPI_RECV_INIT MPI_BSEND_INIT MPI_SSEND_INIT MPI_RSEND_INIT 98

8 MPI_TEST MPI_TESTANY MPI_TESTSOME MPI_TESTALL MPI_WAIT MPI_WAITANY MPI_WAITSOME MPI_WAITALL 48 12.3 99

49 MPI_ISEND MPI_ISEND request MPI_IRECV request MPI_ISEND(buf, count, datatype, dest, tag, comm, request) IN buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN tag ( ) IN comm ( ) OUT request ( ) int MPI_Isend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request) MPI_ISEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR) <type>buf(*) INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR MPI 20 MPI_ISEND MPI_IRECV(buf, count, datatype, source, tag, comm, request) OUT buf ( ) IN count ( ) IN datatype ( ) IN source ( ) IN tag ( ) IN comm ( ) OUT request ( ) int MPI_Irecv(void* buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request *request) MPI_IRECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, REQUEST, IERROR) <type>buf(*) INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM, REQUEST, IERROR 100

MPI 21 MPI_IRECV 12.4 MPI B,S,R I(immediate) MPI_ISSEND MPI_ISSEND(buf, count, datatype, dest, tag, comm, request) IN buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN tag ( ) IN comm ( ) OUT request ( ) int MPI_Issend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request) MPI_ISSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR) <type>buf(*) INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR MPI 22 MPI_ISSEND MPI_IBSEND MPI_IBSEND(buf, count, datatype, dest, tag, comm, request) IN buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN tag ( ) IN comm ( ) OUT request ( ) int MPI_Ibsend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request) MPI_IBSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR) <type>buf(*) INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR) 101

MPI 23 MPI_IBSEND MPI_IRSEND MPI_IRSEND(buf, count, datatype, dest, tag, comm, request) IN buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN tag ( ) IN comm ( ) OUT request ( ) int MPI_Irsend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request) MPI_IRSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR) <type>buf(*) INTEGER COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR MPI 24 MPI_IRSEND 12.5 12.5.1 MPI MPI_WAIT MPI_TEST MPI_WAIT status MPI_WAIT MPI_TEST MPI_TEST MPI_WAIT flag=true MPI_TEST MPI_WAIT flag=false 102

MPI_WAIT(request, status) INOUT request ( ) OUT status ( ) int MPI_Wait(MPI_Request *request, MPI_Status *status) MPI_WAIT(REQUEST, STATUS, IERROR) INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR MPI 25 MPI_WAIT MPI_TEST(request, flag, status) INOUT request ( ) OUT flag ( ) OUT status ( ) int MPI_Test(MPI_Request*request, int *flag, MPI_Status *status) MPI_TEST(REQUEST, FLAG, STATUS, IERROR) LOGICAL FLAG INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR MPI 26 MPI_TEST C C C C CALL MPI_COMM_RANK(comm, rank, ierr) IF(rank.EQ.0) THEN CALL MPI_ISEND(a(1), 10, MPI_REAL, 1, tag, comm, request, ierr) CALL MPI_WAIT(request, status, ierr) ELSE (rank.eq. 1) THEN CALL MPI_IRECV(a(1), 15, MPI_REAL, 0, tag, comm, request, ierr) CALL MPI_WAIT(request, status, ierr) END IF 30 MPI_WAIT 103

12.5.2 MPI MPI_WAIT MPI_WAITANY MPI_WAITANY index=i MPI_WAITANY I MPI_WAIT(array_of_requests[I],status) MPI_WAITALL DO I=1,COUNT MPI_WAIT(array_of_requests[I],status) END DO MPI_WAITSOME MPI_WAITANY MPI_WAITALL outcount array_of_requests array_of_indices array_of_statuses MPI_WAITANY(count, array_of_requests, index, status) IN count ( ) INOUT array_of_requests ( ) OUT index ( ) OUT status ( ) int MPI_Waitany(int count, MPI_Request *array_of_requests, int *index, MPI_Status *status) MPI_WAITANY(COUNT, ARRAY_OF_REQUESTS, INDEX, STATUS, IERROR) INTEGER COUNT, ARRAY_OF_REQUESTS(*), INDEX, STATUS(MPI_STATUS_SIZE) IERROR MPI 27 MPI_WAITANY 104

MPI_WAITALL( count, array_of_requests, array_of_statuses) IN count ( ) INOUT array_of_requests ( ) OUT array_of_statuses ( ) int MPI_Waitall(int count, MPI_Request *array_of_requests, MPI_Status *array_of_statuses) MPI_WAITALL(COUNT, ARRAY_OF_REQUESTS, ARRAY_OF_STATUSES, IERROR) INTEGER COUNT, ARRAY_OF_REQUESTS(*) INTEGER ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*), IERROR MPI 28 MPI_WAITALL MPI_WAITSOME(incount,array_of_requests,outcount,array_of_indices,array_of_statuses) IN incount ( ) INOUT array_of_requests ( ) OUT outcount ( ) OUT array_of_indices ( ) OUT array_of_statuses ( ) int MPI_Waitsome(int incount,mpi_request *array_of_request, int *outcount, int *array_of_indices, MPI_Status *array_of_statuses) MPI_WAITSOME(INCOUNT, ARRAY_OF_REQUESTS, OUTCOUNT, ARRAY_OF_INDICES,ARRAY_OF_STATUSES, IERROR) INTEGER INCOUNT, ARRAY_OF_REQUESTS(*), OUTCOUNT, ARRAY_OF_INDICES(*) ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*), IERROR MPI 29 MPI_WAITSOME MPI_TESTANY flag=true,, flag=false MPI_TESTALL flag=true flag=false MPI_TESTSOME MPI_WAITSOME outcount array_of_requests array_of_indices array_of_statuses outcount=0 105

MPI_TESTANY(count, array_of_requests, index, flag, status) IN count ( ) INOUT array_of_requests ( ) OUT index MPI_UNDEFINED ( ) OUT flag ( ) OUT status ( ) int MPI_Testany(int count, MPI_Request *array_of_requests, int *index, int *flag, MPI_Status *status) MPI_TESTANY(COUNT, ARRAY_OF_REQUESTS, INDEX, FLAG, STATUS, IERROR) LOGICAL FLAG INTEGER COUNT, ARRAY_OF_REQUESTS(*), INDEX, STATUS(MPI_STATUS_SIZE) IERROR MPI 30 MPI_TESTANY MPI_TESTALL(count, array_of_requests, flag, array_of_statuses) IN count ( ) INOUT array_of_requests ( ) OUT flag ( ) OUT array_of_statuses ( ) int MPI_Testall(int count, MPI_Request *array_of_requests, int *flag, MPI_Status *array_of_statuses) MPI_TESTALL(COUNT, ARRAY_OF_REQUESTS, FLAG, ARRAY_OF_STATUSES, IERROR) LOGICAL FLAG INTEGER COUNT, ARRAY_OF_REQUESTS(*), ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*), IERROR MPI 31 MPI_TESTALL 106

MPI_TESTSOME(incount,array_of_requests,outcount,array_of_indices,array_of_statuses) IN incount ( ) INOUT array_of_requests ( ) OUT outcount ( ) OUT array_of_indices ( ) OUT array_of_statuses ( ) int MPI_Testsome(int incount,mpi_request *array_of_request, int *outcount, int *array_of_indices, MPI_Status *array_of_statuses) MPI_TESTSOME(INCOUNT, ARRAY_OF_REQUESTS, OUTCOUNT, ARRAY_OF_INDICES,ARRAY_OF_STATUSES, IERROR) INTEGER INCOUNT, ARRAY_OF_REQUESTS(*), OUTCOUNT, ARRAY_OF_INDICES(*) ARRAY_OF_STATUSES(MPI_STATUS_SIZE,*), IERROR MPI 32 MPI_TESTSOME 12.6 MPI MPI 12.6.1 MPI_CANCEL 107

MPI_CANCEL(request) IN request ( int MPI_Cancel(MPI_Request *request) MPI_CANCEL(REQUEST,IERROR) INTEGER REQUEST,IERROR MPI 33 MPI_CANCEL MPI_WAIT MPI_TEST status MPI_TEST_CANCELLED(status,flag) IN status ( ) OUT flag ( ) int MPI_Test_cancelled(MPI_Status status, int *flag) MPI_TEST_CANCELLED(STATUS,FLAG,IERROR) LOGICAL FLAG INTEGER STATUS(MPI_STATUS_SIZE),IERROR MPI 34 MPI_TEST_CANCELLED MPI_TEST_CANCELLED MPI_TEST_CANCELLED flag=true MPI_Comm_rank( MPI_COMM_WORLD, &rank ); if (rank == 0) { MPI_Send(sbuf, 1, MPI_INT, 1, 99, MPI_COMM_WORLD ); /* 0 */ } else if (rank ==1) { MPI_Irecv( rbuf, 1, MPI_INT, 0, 99, MPI_COMM_WORLD, request); /* 1 */ MPI_Cancel( request); /* */ MPI_Wait(&request,&status);/* */ MPI_Test_cancelled(&status,&flag);/* */ if (flag) MPI_Irecv( rbuf, 1, MPI_INT, 0, 99 MPI_COMM_WORLD, request);/* */ } 31 108

12.6.2 MPI_REQUEST_FREE request MPI_REQUEST_NULL MPI_REQUEST_FREE(request) INOUT request int MPI_Request_free(MPI_Request * request) MPI_REQUEST_FREE(REQUEST, IERROR) INTEGER REQUEST, IERROR MPI 35 MPI_REQUEST_FREE C C C C C C CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank) IF(rank.EQ.0) THEN DO i=1, n CALL MPI_ISEND(outval, 1, MPI_real, 1, 0, req, ierr) CALL MPI_REQUEST_FREE(req, ierr) req MPI_ISEND CALL MPI_IRECV(inval, 1, MPI_REAL, 1, 0, req, ierr) req CALL MPI_WAIT(req, status, ierr) MPI_IRECV END DO ELSE IF(rank.EQ.1) THEN CALL MPI_IRECV(inva, 1, MPI_REAL, 0, 0, req, ierr) CALL MPI_WAIT(req, status) 0 DO I=1, n-1 CALL MPI_ISEND(outval, 1, MPI_REAL, 0, 0, req, ierr) CALL MPI_REQUEST_FREE(req, ierr) req MPI_ISEND CALL MPI_IRECV(inval, 1, MPI_REAL, 0, 0, req, ierr) req CALL MPI_WAIT(req, status, ierr) 109

C END IF MPI_IRECV END DO CALL MPI_ISEND(outval, 1, MPI_REAL, 0, 0, req, ierr) CALL MPI_WAIT(req, status) 32 MPI_REQUEST_FREE 12.7 MPI MPI_PROBE MPI_IPROBE MPI_IPROBE(source,tag,comm,flag,status) MPI_IPROBE <source, tag, comm>, flag=true status MPI_RECV(..., source, tag,cmm,status) status MPI_IPROBE status MPI_RECV status MPI_IPROBE MPI_IPROBE flag=false status MPI_IPROBE flag=true status source,tag MPI_IPROBE source MPI_ANY_SOURCE source tag tag MPI_ANY_TAG comm MPI_PROBE MPI_IPROBE. MPI_PROBE (source,tag,cmm,status) IN source MPI_ANY_SOURCE( ) IN tag tag tag MPI_ANY_TAG IN comm OUT status int MPI_Probe(int source,int tag,mpi_comm comm,mpi_status *status) MPI_PROBE(SOURCE,TAG,COMM,STATUS,IERROR) INTEGER SOURCE,TAG,COMM,STATUS(MPI_STATUS_SIZE),IERROR MPI 36 MPI_PROBE 110

MPI_IPROBE(source,tag,comm,flag,status) IN source MPI_ANY_SOURCE IN tag tag tag MPI_ANY_TAG IN comm OUT flag OUT status int MPI_Iprobe(int source,int tag,mpi_comm comm,int *flag, MPI_Status *status) MPI_IPROBE(SOURCE,TAG,COMM,FLAG,STATUS,TERROR) LOGICAL FLAG INTEGER SOURCE,TAG,COMM,STATUS(MPI_STATUS_SIZE),IERROR MPI 37 MPI_IPROBE C C C C C CALL MPI_COMM_RANK(comm,rank,ierr) IF (rank.eq. 0) THEN CALL MPI_SEND(i,1,MPI_INTEGER,2,0,comm,ierr) 0 2 ELSE IF (rank.eq.1) THEN CALL MPI_SEND(x,1,MPI_REAL,2,0,comm,ierr) 1 2 ELSE IF (rank.eq.2 ) THEN DO i=1,2 CALL MPI_PROBE(MPI_ANY_SOURCE,0, comm,status,ierr) 2 IF (status(mpi_source) = 0) THEN 0 CALL MPI_RECV(i,1,MPI_INTEGER,0,0,status,ierr) ELSE 1 CALL MPI_RECV(x,1,MPI_REAL,1,0,status,ierr) END IF END DO END IF 33 CALL MPI_COMM_RANK(comm,rank,ierr) IF (rank.eq.0) THEN CALL MPI_SEND(i,1,MPI_INTEGER,2,0,comm,ierr) ELSE IF(rank.EQ.1) THEN 111

C C C C CALL MPI_SEND(x,1,MPI_REAL,2,0,comm,ierr) ELSE IF ( rank.eq. 2) THEN DO i=1,2 CALL MPI_PROBE(MPI_ANY_SOURCE,0 comm,status,ierr) IF (status(mpi_source)=0) THEN CALL MPI_RECV(i,1,MPI_INTEGER,MPI_ANY_SOURCE $ 0,status,ierr) MPI_PROBE ELSE CALL MPI_RECV(x,1,MPI_REAL,MPI_ANY_SOURCE, 0,status,ierr) MPI_PROBE END IF END DO END IF 34 MPI_ANY_SOURCE source MPI_PROBE 12.8 A B B C C C C CALL MPI_COMM_RANK(comm, rank, ierr) IF (RANK.EQ.0) THEN CALL MPI_ISEND(a, 1, MPI_REAL, 1, 0, comm, r1, ierr) 0 1 a CALL MPI_ISEND(b, 1, MPI_REAL, 1, 0, comm, r2, ierr) 0 1 b ELSE IF ( rank.eq.1) CALL MPI_IRECV(a, 1, MPI_REAL, 0, MPI_ANY_TAG, comm, r1, ierr) 1 a b b CALL MPI_IRECV(b, 1, MPI_REAL, 0, 0, comm, r2, ierr) 1 b END IF CALL MPI_WAIT(r1,status) CALL MPI_WAIT(r2,status) 112

C 35 12.9 Jacobi Jacobi Jacobi Jacobi 1 2 3 4 program main implicit none include 'mpif.h' integer totalsize,mysize,steps parameter (totalsize=16) parameter (mysize=totalsize/4,steps=10) integer n, myid, numprocs, i, j,rc real a(totalsize,mysize+2),b(totalsize,mysize+2) integer begin_col,end_col,ierr integer left,right,tag1,tag2 integer status(mpi_status_size,4) integer req(4) call MPI_INIT( ierr ) call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr ) call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr ) print *, "Process ", myid, " of ", numprocs, " is alive" C 113

do j=1,mysize+2 do i=1,totalsize a(i,j)=0.0 end do end do do i=1,totalsize a(i,1)=8.0 a(i,mysize+2)=8.0 end do if (myid.eq. 0) then do i=1,totalsize a(i,2)=8.0 end do end if if (myid.eq. 3) then do i=1,totalsize a(i,mysize+1)=8.0 end do end if do i=1,mysize+2 a(1,i)=8.0 a(totalsize,i)=8.0 end do tag1=3 tag2=4 C C if (myid.gt. 0) then left=myid-1 else left=mpi_proc_null end if if (myid.lt. 3) then right=myid+1 else right=mpi_proc_null end if begin_col=2 end_col=mysize+1 if (myid.eq. 0) then begin_col=3 114

C endif if (myid.eq. 3) then end_col=mysize endif do n=1,steps C do i=2,totalsize-1 b(i,begin_col)=(a(i,begin_col+1)+a(i,begin_col-1)+ * a(i+1,begin_col)+a(i-1,begin_col))*0.25 b(i,end_col)=(a(i,end_col+1)+a(i,end_col-1)+ * a(i+1,end_col)+a(i-1,end_col))*0.25 end do C call MPI_ISEND(b(1,end_col),totalsize,MPI_REAL,right,tag1, * MPI_COMM_WORLD,req(1),ierr) call MPI_ISEND(b(1,begin_col),totalsize,MPI_REAL,left,tag2, * MPI_COMM_WORLD,req(2),ierr) C C C call MPI_IRECV(a(1,1),totalsize,MPI_REAL,left,tag1, * MPI_COMM_WORLD,req(3),ierr) call MPI_IRECV(a(1,mysize+2),totalsize,MPI_REAL,right,tag2, * MPI_COMM_WORLD,req(4),ierr) do j=begin_col+1,end_col-1 do i=2,totalsize-1 b(i,j)=(a(i,j+1)+a(i,j-1)+a(i+1,j)+a(i-1,j))*0.25 end do end do do j=begin_col,end_col do i=2,totalsize-1 a(i,j)=b(i,j) end do end do do i=1,4 CALL MPI_WAIT(req(i),status(1,i),ierr) end do end do do i=2,totalsize-1 print *, myid,(a(i,j),j=begin_col,end_col) 115

C end do call MPI_Finalize(rc) end 36 Jacobi 12.10 MPI MPI MPI 1 MPI_SEND_INIT 2 MPI_START 3 MPI_WAIT 4 MPI_REQUEST_FREE MPI_START MPI_START MPI_REQUEST_FREE MPI_SEND_INIT(buf,count,datatype,dest,tag,comm,request) IN buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN tag ( ) IN comm ( ) OUT request ( ) int MPI_Send_init(void* buf, int count, MPI_Data type,int dest, int tag, MPI_Comm comm, MPI_Request *request) MPI_SEND_INIT(BUF,COUNT,DATATYPE,DEST,TAG,COMM,REQUEST, IERRR) <type> BUF (*) INTEGER COUNT,DATATYPE,DEST,TAG,COMM,REQUEST,IERRROR MPI 38 MPI_SEND_INIT 116

MPI_SEND_INIT MPI_BSEND_INIT(buf,count,datatype,dest,tag,comm,request) IN buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN tag ( ) IN comm ( ) OUT request ( ) int MPI_Bsend_init(void* buf,int count,mpi_datatype datatype,int dest, int tag, MPI_Comm comm,mpi_request *request) MPI_BSEND_INIT(BUF,COUNT,DATATYPE,DEST,TAG,COMM,REQUEST,IERROR) <type> BUF (*) INTEGER,COUNT,DATATYPE,DEST,TAG,COMM,REQUEST,IERROR MPI 39 MPI_BSEND_INIT MPI_BSEND_INIT MPI_SSEND_INIT(buf,count,datatype,dest,tag,comm,request) IN buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN tag ( ) IN comm ( ) OUT request ( ) int MPI_Ssend_init(void* buf,int count,mpi_datatype datatype,int dest, int tag, MPI_Comm comm,mpi_request *request) MPI_SSEND_INIT(BUF,COUNT,DATATYPE,DEST,TAG,COMM,REQUEST,IERROR) <type> BUF (*) INTEGER COUNT,DATATYPE,DEST,TAG,COMM,REQUEST,IERROR MPI 40 MPI_SSEND_INIT MPI_SSEND_INIT 117

MPI_RSEND_INIT(buf,count,datatype,dest,tag,comm,request) IN buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN tag ( ) IN comm ( ) OUT request ( ) int MPI_Rsend_init(void* buf,int count,mpi_datatype datatype,int dest, int tag, MPI_Comm comm,mpi_request *request) MPI_RSEND_INIT(BUF,COUNT,DATATYPE,DEST,TAG,COMM,REQUEST, IERROR) <type> BUF (*) INTEGER COUNT,DATATYPE,DEST,TAG,COMM,REQUEST,IERROR MPI_RSEND_INIT MPI 41 MPI_RSEND_INIT MPI_RECV_INIT(buf,count,datatype,source,tag,comm,request) OUT buf ( ) IN count ( ) IN datatype ( ) IN source MPI_ANY_SOURCE( ) IN tag MPI_ANY_TAG( ) IN comm ( ) OUT request ( ) int MPI_Recv_init(void* buf,int count,mpi_datatype datatype,int source, int tag, MPI_Comm comm,mpi_request *request) MPI_RECV_INIT(BUF,COUNT,DATATYPE,SOURCE,TAG,COMM,REQUEST, IERROR) <type> BUF (*) INTEGER COUNT,DATATYPE,SOURCE,TAG,COMM,REQUEST,IERROR MPI 42 MPI_RECV_INIT MPI_RECV_INIT buf OUT MPI_RECV_INIT ( ) MPI_START 118

MPI_START(request) INOUT request int MPI_Start(MPI_Request *request) MPI_START(REQUEST,IERROR) INTEGER REQUEST,IERROR ( ) MPI 43 MPI_START request MPI_START MPI_SEND_INIT MPI_START MPI_ISEND MPI_BSEND_INIT MPI_START MPI_IBSEND MPI_STARTALL(count,array_of_requests) IN count ( ) IN array_of_requests ( ) int MPI_Startall(int count, MPI_Request *array_of_requests) MPI_STARTALL(COUNT, ARRAY_OF_REQUESTS,IERROR) INTEGER COUNT, ARRAY_OF_REQUESTS(*),IERROR MPI 44 MPI_STARTALL MPI_STARTALL array_of_request MPI_START MPI_START MPI_STARTALL MPI_WAIT MPI_TEST MPI_START MPI_STARTALL MPI_REQUEST_FREE MPI_REQUEST_FREE MPI_START MPI_START 12.11 Jacobi Jacobi program main implicit none include 'mpif.h' 119

integer totalsize,mysize,steps parameter (totalsize=16) parameter (mysize=totalsize/4,steps=10) integer n, myid, numprocs, i, j,rc real a(totalsize,mysize+2),b(totalsize,mysize+2) integer begin_col,end_col,ierr integer left,right,tag1,tag2 integer status(mpi_status_size,4) integer req(4) C C call MPI_INIT( ierr ) call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr ) call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr ) print *, "Process ", myid, " of ", numprocs, " is alive" do j=1,mysize+2 do i=1,totalsize a(i,j)=0.0 end do end do do i=1,totalsize a(i,1)=8.0 a(i,mysize+2)=8.0 end do if (myid.eq. 0) then do i=1,totalsize a(i,2)=8.0 end do end if if (myid.eq. 3) then do i=1,totalsize a(i,mysize+1)=8.0 end do end if do i=1,mysize+2 a(1,i)=8.0 a(totalsize,i)=8.0 end do tag1=3 tag2=4 120

C if (myid.gt. 0) then left=myid-1 else left=mpi_proc_null end if if (myid.lt. 3) then right=myid+1 else right=mpi_proc_null end if C begin_col=2 end_col=mysize+1 if (myid.eq. 0) then begin_col=3 endif if (myid.eq. 3) then end_col=mysize endif C call MPI_SEND_INIT(b(1,end_col),totalsize,MPI_REAL,right,tag1, * MPI_COMM_WORLD,req(1),ierr) call MPI_SEND_INIT(b(1,begin_col),totalsize,MPI_REAL,left,tag2, * MPI_COMM_WORLD,req(2),ierr) C C call MPI_RECV_INIT(a(1,1),totalsize,MPI_REAL,left,tag1, * MPI_COMM_WORLD,req(3),ierr) call MPI_RECV_INIT(a(1,mysize+2),totalsize,MPI_REAL,right,tag2, * MPI_COMM_WORLD,req(4),ierr) do n=1,steps do i=2,totalsize-1 b(i,begin_col)=(a(i,begin_col+1)+a(i,begin_col-1)+ * a(i+1,begin_col)+a(i-1,begin_col))*0.25 b(i,end_col)=(a(i,end_col+1)+a(i,end_col-1)+ * a(i+1,end_col)+a(i-1,end_col))*0.25 end do C 4 121

C C call MPI_STARTALL(4,req,ierr) do j=begin_col+1,end_col-1 do i=2,totalsize-1 b(i,j)=(a(i,j+1)+a(i,j-1)+a(i+1,j)+a(i-1,j))*0.25 end do end do do j=begin_col,end_col do i=2,totalsize-1 a(i,j)=b(i,j) end do end do call MPI_WAITALL(4,req,status,ierr) end do do i=2,totalsize-1 print *, myid,(a(i,j),j=begin_col,end_col) end do C do i=1,4 CALL MPI_REQUEST_FREE(req(i),ierr) end do call MPI_FINALIZE(rc) end 37 Jacobi 12.12 MPI 122

13 MPI MPI MPI 13.1 MPI MPI 13.1.1 ROOT 1 N 50 N ROOT 1 51 123

124 50 ROOT ROOT ROOT N N 52 52 13.1.2 0 1 N-1 53 MPI 53

0 0 N-1 -- 1 13.1.3 MPI MPI I II III result= recvbuf Op(message) result 54 MPI 125

13.2 MPI_BCAST(buffer,count,datatype,root,comm) IN/OUT buffer ( ) IN count / ( ) IN datatype / ( ) IN root ( ) IN comm ( ) int MPI_Bcast(void* buffer,int count,mpi_datatype datatype,int root, MPI_Comm comm) MPI_BCAST(BUFFER,COUNT,DATATYPE,ROOT,COMM,IERROR) <type> BUFFER(*) INTEGER COUNT,DATATYPE,ROOT,COMM,IERROR MPI 45 MPI_BCAST MPI_BCAST root root comm datatype count datatype count datatype MPI_BCAST A ROOT A A A A A 55 ROOT #include <stdio.h> #include "mpi.h" int main( argc, argv ) 126

int argc; char **argv; { int rank, value; MPI_Init( &argc, &argv ); MPI_Comm_rank( MPI_COMM_WORLD, &rank ); do { if (rank == 0) /* 0 */ scanf( "%d", &value ); MPI_Bcast( &value, 1, MPI_INT, 0, MPI_COMM_WORLD );/* */ printf( "Process %d got %d\n", rank, value );/* */ } while (value >= 0); } MPI_Finalize( ); return 0; 38 13.3 MPI_GATHER rank N N sendcount sendtype recvcount recvtype, sendbuf sendcount sendtype root comm root comm 127

MPI_GATHER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm) IN sendbuf ( ) IN sendcount ( ) IN sendtype ( ) OUT recvbuf ( ) IN recvcount (, ) IN recvtype (, ) IN root ( ) IN comm ( ) int MPI_Gather(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm) MPI_GATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE,ROOT, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR MPI 46 MPI_GATHER A B C D ROOT H A B C D... H ROOT 56 MPI_GATHERV MPI_GATHER recvcounts displs MPI_GATHERV ROOT MPI_GATHER MPI_GATHER sendbuf sendcount sendtype root comm root comm 128

MPI_GATHERV(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs,recvtype, root, comm) IN sendbuf ( ) IN sendcount ( ) IN sendtype ( ) OUT recvbuf (, ) IN recvcounts ( ), IN displs, recvbuf IN recvtype ( ) IN root ( ) IN comm ( ) int MPI_Gatherv(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int *recvcounts, int *displs, MPI_Datatype recvtype, int root, MPI_Comm comm) MPI_GATHERV(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNTS, DISPLS, RECVTYPE, ROOT, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER SENDCOUNT, SENDTYPE, RECVCOUNTS(*), DISPLS(*), RECVTYPE, ROOT, COMM, IERROR MPI 47 MPI_GATHERV 100 MPI_Comm comm; int gsize,sendarray[100]; int root,*rbuf;... MPI_Comm_size(comm,&gsize); rbuf=(int *)malloc(gsize*100*sizeof(int)); MPI_Gather(sendarray,100,MPI_INT,rbuf,100,MPI_INT,root,comm); 39 MPI_Gather 100, (100 ), MPI_GATHERV displs 100 MPI_Comm comm; int gsize, sendarray[100]; int root, *rbuf, stride; int *displs, i, *rcounts;... MPI_Comm_size(comm, &gsize); 129

rbuf = (int *)malloc(gsize*stride*sizeof(int)); displs = (int *)malloc(gsize*sizeof(int)); rcounts = (int *)malloc(gsize*sizeof(int)); for (i=0; i<gsize; ++i) { displs[i] = i*stride; rcounts[i] = 100; } MPI_Gatherv(sendarray, 100, MPI_INT, rbuf, rcounts, displs, MPI_INT, root, comm); 40 MPI_Gatherv 13.4 MPI_SCATTER ROOT MPI_SCATTER MPI_GATHER A B C D... H ROOT A B C D ROOT H 57 sendcount sendtype recvcount recvtype recvbuf recvcount recvtype root comm root comm MPI_GATHER MPI_GATHERV MPI_SCATTER MPI_SCATTERV MPI_SCATTER MPI_GATHER MPI_SCATTERV MPI_GATHERV MPI_SCATTERV MPI_SCATTER ROOT sendcounts displs, sendcount[i] sendtype i recvcount recvtype recvbuf 130

recvcount recvtype root comm root comm MPI_SCATTER(sendbuf,sendcount,sendtype,recvbuf,recvcount,recvtype, root,comm) IN sendbuf ( ) IN sendcount ( ) IN sendtype ( ) OUT recvbuf ( ) IN recvcount ( ) IN recvtype ( ) IN root ( ) IN comm ( ) int MPI_Scatter(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm) MPI_SCATTER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR MPI 48 MPI_SCATTER MPI_SCATTERV(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm) IN sendbuf ( ) IN sendcounts ( ) IN displs ( ) IN sendtype ( ) OUT recvbuf ( ) IN recvcount ( ) IN recvtype ( ) IN root ( ) IN comm ( ) int MPI_Scatterv(void* sendbuf, int *sendcounts, int *displs, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm) MPI_SCATTERV(SENDBUF, SENDCOUNTS, DISPLS, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER SENDCOUNTS(*), DISPLS(*), SENDTYPE, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR MPI 49 MPI_SCATTERV 131

100 MPI_Comm comm; int gsize,*sendbuf; int root,rbuf[100];... MPI_Comm_size(comm, &gsize); sendbuf = (int *)malloc(gsize*100*sizeof(int));... MPI_Scatter(sendbuf, 100, MPI_INT, rbuf, 100, MPI_INT, root, comm); 41 MPI_Scatter 100 100 MPI_Comm comm; int gsize,*sendbuf; int root,rbuf[100],i,*displs,*scounts;... MPI_Comm_size(comm, &gsize); sendbuf = (int *)malloc(gsize*stride*sizeof(int));... displs = (int *)malloc(gsize*sizeof(int)); scounts = (int *)malloc(gsize*sizeof(int)); for (i=0; i<gsize; ++i) { displs[i] = i*stride; scounts[i] = 100; } MPI_Scatterv(sendbuf, scounts, displs, MPI_INT, rbuf, 100, MPI_INT, root, comm); 42 MPI_Scatterv 13.5 MPI_GATHER ROOT MPI_ALLGATHER ROOT MPI_GATHER MPI_ALLGATHER MPI_GATHER MPI_GATHER ROOT MPI_ALLGATHER MPI_ALLGATHER MPI_GATHER MPI_ALLGATHERV MPI_GATHERV MPI_ALLGATHERV j recvbuf j j sendcount sendtype 132

recvcounts[j] recvtype MPI_ALLGATHER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype,comm) IN sendbuf ( ) IN sendcount ( ) IN sendtype ( ) OUT recvbuf ( ) IN recvcount ( ) IN recvtype ( ) IN comm ( ) int MPI_Allgather(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm) MPI_ALLGATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, COMM, IERROR MPI 50 MPI_ALLGATHER 0 1... N-1 0 01 N-1 58 133

MPI_ALLGATHERV(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm) IN sendbuf ( ) IN sendcount ( ) IN sendtype ( ) OUT recvbuf ( ) IN recvcounts ( ) IN displs ( ) IN recvtype ( ) IN comm ( ) int MPI_Allgatherv(void* sendbuf, int sendcount,mpi_datatype sendtype, void* recvbuf, int *recvcounts, int *displs, MPI_Datatype recvtype, MPI_Comm comm) MPI_ALLGATHERV(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNTS, DISPLS, RECVTYPE, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER SENDCOUNT, SENDTYPE, RECVCOUNTS(*), DISPLS(*), RECVTYPE, COMM, IERROR MPI 51 MPI_ALLGATHERV MPI_Comm comm; int gsize,sendarray[100]; int *rbuf;... MPI_Comm_size(comm, &gsize); rbuf = (int *)malloc(gsize*100*sizeof(int)); MPI_Allgather(sendarray, 100, MPI_INT, rbuf, 100, MPI_INT, comm); 100 43 MPI_Allgather MPI_Allgatherv MPI_Comm comm; int gsize, sendarray[100]; int root, *rbuf, stride; int *displs, i, *rcounts;... MPI_Comm_size(comm, &gsize); rbuf = (int *)malloc(gsize*stride*sizeof(int)); displs = (int *)malloc(gsize*sizeof(int)); 134

rcounts = (int *)malloc(gsize*sizeof(int)); for (i=0; i<gsize; ++i) { displs[i] = i*stride; rcounts[i] = 100; } MPI_Allgatherv(sendarray, 100, MPI_INT, rbuf, rcounts, displs, MPI_INT, root, comm); 44 MPI_Allgatherv 13.6 MPI_ALLTOALL MPI_ALLGATHER MPI_ALLTOALL MPI_ALLTOALL i j j recvbuf i sendcount sendtype recvcount recvtype MPI_ALLTOALL i j MPI_ALLTOALL(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm) IN sendbuf ( ) IN sendcount ( ) IN sendtype ( ) OUT recvbuf ( ) IN recvcount ( ) IN recvtype ( ) IN comm ( ) int MPI_Alltoall(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm) MPI_ALLTOALL(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, COMM, IERROR MPI 52 MPI_ALLTOALL 135

0 A 00 A 01 A 02 A 03 A 00 A 10 A 20 A 30 1 A 10 A 11 A 12 A 13 A 01 A 11 A 21 A 31 2 A 20 A 21 A 22 A 23 A 02 A 12 A 22 A 32 3 A 30 A 31 A 32 A 33 A 03 A 13 A 23 A 33 59 MPI_ALLTOALL MPI_ALLTOALL #include "mpi.h" #include <stdlib.h> #include <stdio.h> #include <string.h> #include <errno.h> int main( argc, argv ) int argc; char *argv[]; { int rank, size; int chunk = 2; /* */ int i,j; int *sb; int *rb; int status, gstatus; MPI_Init(&argc,&argv); MPI_Comm_rank(MPI_COMM_WORLD,&rank); MPI_Comm_size(MPI_COMM_WORLD,&size); sb = (int *)malloc(size*chunk*sizeof(int));/* */ if (!sb ) { perror( "can't allocate send buffer" ); 136

} MPI_Abort(MPI_COMM_WORLD,EXIT_FAILURE); } rb = (int *)malloc(size*chunk*sizeof(int));/* */ if (!rb ) { perror( "can't allocate recv buffer"); free(sb); MPI_Abort(MPI_COMM_WORLD,EXIT_FAILURE); } for ( i=0 ; i < size ; i++ ) { for ( j=0 ; j < chunk ; j++ ) { sb[i*chunk+j] = rank + i*chunk+j;/* */ printf("myid=%d,send to id=%d, data[%d]=%d\n",rank,i,j,sb[i*chunk+j]); rb[i*chunk+j] = 0;/* 0*/ } } /* MPI_Alltoall */ MPI_Alltoall(sb,chunk,MPI_INT,rb,chunk,MPI_INT, MPI_COMM_WORLD); for ( i=0 ; i < size ; i++ ) { for ( j=0 ; j < chunk ; j++ ) { printf("myid=%d,recv from id=%d, data[%d]=%d\n",rank,i,j,rb[i*chunk+j]); /* */ } } free(sb); free(rb); MPI_Finalize(); 45 MPI_Alltoall MPI_ALLGATHERV MPI_ALLGATHER MPI_ALLTOALLV MPI_ALLTOALL sdispls rdispls comm MPI_ALLTOALL MPI_ALLTOALLV n 1) 2) 137

MPI_ALLTOALLV(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm) IN sendbuf ( ) IN sendcounts ( ) IN sdispls IN sendtype ( ) OUT recvbuf ( ) IN recvcounts ( ) IN rdispls IN recvtype ( ) IN comm ( ) int MPI_Alltoallv(void* sendbuf, int *sendcounts, int *sdispls, MPI_Datatype sendtype, void* recvbuf, int *recvcounts, int *rdispls, MPI_Datatype recvtype, MPI_Comm comm) MPI_ALLTOALLV(SENDBUF, SENDCOUNTS, SDISPLS, SENDTYPE, RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPE, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER SENDCOUNTS(*), SDISPLS(*), SENDTYPE, RECVCOUNTS(*), RDISPLS(*), RECVTYPE, COMM, IERROR MPI 53 MPI_ALLTOALLV 13.7 MPI_BARRIER(comm) IN comm ( ) int MPI_Barrier(MPI_Comm comm) MPI_BARRIER(COMM, IERROR) INTEGER COMM, IERROR MPI_BARRIER MPI 54 MPI_BARRIER #include "mpi.h" #include "test.h" #include <stdlib.h> 138

#include <stdio.h> int main( int argc, char **argv ) { int rank, size, i; int *table; int errors=0; MPI_Aint address; MPI_Datatype type, newtype; int lens; MPI_Init( &argc, &argv ); MPI_Comm_rank( MPI_COMM_WORLD, &rank ); MPI_Comm_size( MPI_COMM_WORLD, &size ); /* Make data table */ table = (int *) calloc (size, sizeof(int)); table[rank] = rank + 1; /* */ MPI_Barrier ( MPI_COMM_WORLD ); /* */ for ( i=0; i<size; i++ ) MPI_Bcast( &table[i], 1, MPI_INT, i, MPI_COMM_WORLD ); /* */ for ( i=0; i<size; i++ ) if (table[i]!= i+1) errors++; MPI_Barrier ( MPI_COMM_WORLD );/* */... /* */ MPI_Finalize(); } 46 13.8 MPI_REDUCE op root sendbuf count datatype recvbuf count datatype count datatype op root comm op MPI 139

MPI_REDUCE(sendbuf,recvbuf,count,datatype,op,root,comm) IN sendbuf ( ) OUT recvbuf ( ) IN count ( ) IN datatype ( ) IN op ( ) IN root ( ) IN comm ( ) int MPI_Reduce(void* sendbuf, void* recvbuf, int count, PI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm) MPI_REDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, ROOT, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER COUNT, DATATYPE, OP, ROOT, COMM, IERROR MPI 55 MPI_REDUCE M Op Op Op Op Op Op 3 Op Op 2 Op Op 1 0 1 N-1 ROOT 60 MPI 140

13.9 MPI MPI MPI_ALLREDUCE op, MPI_REDUCE, MPI_REDUCE_SCATTER MPI_SCAN 9 MPI MPI_MAX MPI_MIN MPI_SUM MPI_PROD MPI_LAND MPI_BAND MPI_LOR MPI_BOR MPI_LXOR MPI_BXOR MPI_MAXLOC MPI_MINLOC MPI_MINLOC MPI_MAXLOC 4.9.3. MPI op datatype. : 10 C FORTRAN MPI C FORTRAN C Fortran MPI MPI_INT MPI_LONG MPI_SHORT MPI_UNSIGNED_SHORT MPI_UNSIGNED MPI_UNSIGNED_LONG MPI_INTEGER MPI_FLOAT MPI_DOUBLE MPI_REAL MPI_DOUBLE_PRECISION MPI_LONG_DOUBLE MPI_LOGICAL MPI_COMPLEX MPI_BYTE : 11 MPI_MAX, MPI_MIN MPI_SUM, MPI_PROD MPI_LAND, MPI_LOR, MPI_LXOR MPI_BAND, MPI_BOR, MPI_BXOR C,Fortran, C,Fortran, C, C,Fortran, 141

13.10 p π 1 1 1 arctan( ) arctan( 1) arctan( 0) arctan( 1) π dx = x = = = 01 + x 2 0 4 f(x)=4/(1+x 2 ) 1 f ( x ) dx = π 0 f(x) 4 f(x)=4/(1+x 2 ) 2 0 0.2 0.4 0.6 0.8 1.0 61 π f(x) 0 1 π 5 π 0 1 N π N i = 1 f ( N 2 i 1 1 1 i 0. 5 2 N ) N = N f ( ) N i = 1 62 π 142

π #include "mpi.h" #include <stdio.h> #include <math.h> double f(double); double f(double x) /* f(x) */ { return (4.0 / (1.0 + x*x)); } int main(int argc,char *argv[]) { int done = 0, n, myid, numprocs, i; double PI25DT = 3.141592653589793238462643; /* π */ double mypi, pi, h, sum, x; double startwtime = 0.0, endwtime; int namelen; char processor_name[mpi_max_processor_name]; MPI_Init(&argc,&argv); MPI_Comm_size(MPI_COMM_WORLD,&numprocs); MPI_Comm_rank(MPI_COMM_WORLD,&myid); MPI_Get_processor_name(processor_name,&namelen); fprintf(stdout,"process %d of %d on %s\n", myid, numprocs, processor_name); n = 0 if (myid == 0) { printf("please give N="); scanf(&n); startwtime = MPI_Wtime(); } MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);/* n */ h = 1.0 / (double) n;/* */ sum = 0.0; /* */ for (i = myid + 1; i <= n; i += numprocs) /* numprocs 4 0-1 100 143

0 1 5 9 13... 97 1 2 6 10 14... 98 2 3 7 11 15... 99 3 4 8 12 16... 100 */ { x = h * ((double)i - 0.5); sum += f(x); } mypi = h * sum;/* */ } MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD); /* π */ if (myid == 0) /* 0 */ { printf("pi is approximately %.16f, Error is %.16f\n", pi, fabs(pi - PI25DT)); endwtime = MPI_Wtime(); printf("wall clock time = %f\n", endwtime-startwtime); fflush( stdout ); } MPI_Finalize(); 47 π 13.11 MPI_ALLREDUCE ROOT 144

MPI_ALLREDUCE(sendbuf, recvbuf, count, datatype, op, comm) IN sendbuf ( ) OUT recvbuf ( ) IN count ( ) IN datatype ( ) IN op ( ) IN comm ( ) int MPI_Allreduce(void* sendbuf, void* recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm) MPI_ALLREDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER COUNT, DATATYPE, OP, COMM, IERROR MPI 56 MPI_ALLREDUCE 13.12 MPI_REDUCE_SCATTER(sendbuf, recvbuf, recvcounts, datatype, op, comm) IN sendbuf ( ) OUT recvbuf ( ) IN recvcounts IN datatype ( ) IN op ( ) IN comm ( ) int MPI_Reduce_scatter(void* sendbuf, void* recvbuf, int *recvcounts MPI_Datatype datatype, MPI_Op op, MPI_Comm comm) MPI_REDUCE_SCATTER(SENDBUF, RECVBUF, RECVCOUNTS, DATATYPE, OP, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER RECVCOUNTS(*), DATATYPE, OP, COMM, IERROR MPI 57 MPI_REDUCE_SCATTER MPI_REDUCE_SCATTER MPI ROOT MPI_REDUCE_SCATTER sendbuf count datatype 145

count= irecvcount[i] recvcounts[0] 0 recvcounts[1] 1 recvcounts[n-1] N-1 Op Op M-1 recvcounts[n-1] Op Op Op Op 2 recvcounts[2] Op Op 1 recvcounts[1] Op Op 0 recvcounts[0] 0 1 N-1 63 13.13 MPI_SCAN i 0,...,i i i i-1 i i i+1 0 146

MPI_SCAN(sendbuf, recvbuf, count, datatype, op, comm) IN sendbuf ( ) OUT recvbuf ( ) IN count ( ) IN datatype ( ) IN op ( ) IN comm ( ) int MPI_Scan(void* sendbuf, void* recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm) MPI_SCAN(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER COUNT, DATATYPE, OP, COMM, IERROR MPI 58 MPI_SCAN 13.14 0 A2 B2 C2 2 A1 B1 C1 1 A0 B0 C0 0 A0+B0+C0 A1+B1+C1 A2+B2+C2 0 1 2 64 ROOT ROOT 147

ROOT A2 B2 C2 2 A0+B0+C0 A1+B1+C1 A2+B2+C2 A1 B1 C1 1 A0+B0+C0 A1+B1+C1 A2+B2+C2 A0 B0 C0 0 A0+B0+C0 A1+B1+C1 A2+B2+C2 0 1 2 65 ROOT ROOT 1/N N A2 B2 C2 2 A2+B2+C2 A1 B1 C1 1 A1+B1+C1 A0 B0 C0 0 A0+B0+C0 0 1 2 66 148

A2 B2 C2 2 A0+B0+C0 A1+B1+C1 A2+B2+C2 A1 B1 C1 1 A0+B0 A1+B1 A2+B2 A0 B0 C0 0 A0 B0 C0 0 1 2 67 13.15, : switch(rank) { case 0: MPI_Bcast(buf1, count, type, 0, comm); MPI_Bcast(buf2, count, type, 1, comm); break; case 1: MPI_Bcast(buf2, count, type, 1, comm); MPI_Bcast(buf1, count, type, 0, comm); break; }... switch(rank) { case 0: MPI_Bcast(buf1, count, type, 0, comm0); MPI_Bcast(buf2, count, type, 2, comm2); break; case 1: MPI_Bcast(buf1, count, type, 1, comm1); 149

} MPI_Bcast(buf2, count, type, 0, comm0); break; case 2: MPI_Bcast(buf1, count, type, 2, comm2); MPI_Bcast(buf2, count, type, 1, comm1); break; 48 comm {0,1}, comm0 {0,1}, comm1 {1,2}, comm2 {2,0}, comm2 comm0 comm0 comm1 comm1 comm2. switch(rank) { case 0: MPI_Bcast(buf1, count, type, 0, comm); MPI_Send(buf2, count, type, 1, tag, comm); break; case 1: MPI_Recv(buf2, count, type, 0, tag, comm, status); MPI_Bcast(buf1, count, type, 0, comm); break; } 49 0 1 0 0 0 1,, 1,,. switch(rank) { case 0: MPI_Bcast(buf1, count, type, 0, comm); MPI_Send(buf2, count, type, 1, tag, comm); break; 150

} case 1: MPI_Recv(buf2, count, type, MPI_ANY_SOURCE, tag, comm, status); /* 1 2 */ MPI_Bcast(buf1, count, type, 0, comm); MPI_Recv(buf2, count, type, MPI_ANY_SOURCE, tag, comm, status); break; case 2: MPI_Send(buf2, count, type, 1, tag, comm); MPI_Bcast(buf1, count, type, 0, comm); break; 50, 0 1, 2 1 1, 13.16 MINLOC MAXLOC MPI_MINLOC MPI_MAXLOC ( ) MPI_MAXLOC (u 0,0),(u 1,1),..,(u n-1,n-1) (u,r) u = u r = n max 1 i = 0 { u i }, u i < u r, i = 0,.., r 1 u r MPI_MINLOC MPI_MINLOC (u 0,0),(u 1,1),..,(u n-1,n-1) (u,r) u = u r = n min 1 i = 0 { u i }, u i > u r, i = 0,.., r 1 MPI_MINLOC MPI_MAXLOC, ( ) MPI,MPI_MAXLOC MPI_MINLOC : 151

12 MPI Fortran MPI_2REAL MPI_2DOUBLE_PRECISION MPI_2INTEGER 13 MPI C MPI_FLOAT_INT MPI_DOUBLE_INT MPI_LONG_INT MPI_2INT MPI_SHORT_INT MPI_LONG_DOUBLE_INT MPI_2REAL MPI_TYPE_CONTIGUOUS(2, MPI_REAL, MPI_2REAL) MPI_2INTEGER MPI_2DOUBLE_PRECISION MPI_2INT MPI_2REAL MPI_FLOAT_INT : type[0] = MPI_FLOAT type[1] = MPI_INT disp[0] = 0 disp[1] = sizeof(float) block[0] = 1 block[1] = 1 MPI_TYPE_STRUCT(2, block, disp, type, MPI_FLOAT_INT) MPI_LONG_INT MPI_DOUBLE_INT MPI_FLOAT_INT. 3 ( C ), 3. /* 3 ain[3] */ double ain[3],aout[3]; int ind[3]; struct { double val; int rank; } in[3], out[3];/* */ int i, myrank, root; MPI_Comm_rank(MPI_COMM_WORLD, &myrank); for (i=0; i<3; ++i) { 152

in[i].val = ain[i]; in[i].rank = myrank; }/* */ MPI_Reduce(in, out, 3, MPI_DOUBLE_INT, MPI_MAXLOC, root, comm); /* */ if (myrank == root) { /* */ for (i=0; i<3; ++i) { aout[i] = out[i].val; ind[i] = out[i].rank; } } 51 MPI_MAXLOC 14 MPI_MAXLOC 0 (30.5,0) (41.7,0) (35.9,0) 1 (12.1,1) (11.3,1) (13.5,1) 2 (100.7,2) (23.2,2) (98.4,2) MPI_MAXLOC(100.7,2) (41.7,0) (98.4,2) 13.17 MPI_OP_CREATE(function, commute, op) IN function ( ) IN commute true, false OUT op ( ) int MPI_Op_create(MPI_User_function *function,int commute,mpi_op *op) MPI_OP_CREATE(FUNCTION, COMMUTE, OP, IERROR) EXTERNAL FUNCTION LOGICAL COMMUTE INTEGER OP, IERROR MPI 59 MPI_OP_CREATE MPI MPI_OP_CREATE function op MPI commute=true 153

commute=false function : invec, inoutvec,len datatype C : typedef void MPI_User_function(void *invec, void *inoutvec, int *len, MPI_Datatype *datatype); Fortran : FUNCTION USER_FUNCTION(INVEC(*), INOUTVEC(*), LEN, TYPE) <type> INVEC(LEN), INOUTVEC(LEN) INTEGER LEN, TYPE datatype MPI_REDUCE : invec inoutvec,len,datatype u[0],...,u[len-1] invec len datatype v[0],...,v[len-1] inoutvec len datatype w[0],...,w[len-1] inoutvec len datatype w[i]= u[i] v[i],i 0 len-1, invec inoutvec len, inoutvec. len, MPI MPI_ABORT MPI_OP_FREE(op) IN op ( ) int MPI_Op_free(MPI_Op *op) MPI_OP_FREE(OP, IERROR) INTEGER OP, IERROR MPI 60 MPI_OP_FREE MPI_OP_FREE op MPI_OP_NULL typedef struct { double real,imag; } Complex; /* */ void myprod(complex *in, Complex *inout, int *len, MPI_Datatype *dptr) { int i; 154

Complex c; for (i=0; i < *len; ++i) { c.real = inout->real*in->real - inout->imag*in->imag; c.imag = inout->real*in->imag + inout->imag*in->real; *inout = c; in++; inout++; } } /* */ /* 100 */ Complex a[100], answer[100]; MPI_Op myop; MPI_Datatype ctype; /* MPI */ MPI_Type_contiguous(2, MPI_DOUBLE, &ctype); MPI_Type_commit(&ctype); /* */ MPI_Op_create(myProd, True, &myop); MPI_Reduce(a, answer, 100, ctype, myop, root, comm); /* ( 100 ) */ 52 13.18 155

14 MPI MPI 14.1 -- < > ={< 0 0>,< 1 1>,...,< n-1 n-1>} 0 1 i n-1 0 1 i n-1 68 ={ 0... n-1} 156

typemap={(type 0,disp 0 ),...,(type n-1,disp n-1 )}, lb(typemap)=min {disp j }, 0=<j<=n-1 ub(typemap)=max(disp j +sizeof(type j )), 0=<j<=n-1 extent(typemap)=ub(typemap)-lb(typemap)+ε ε type={(double,0),(char,8)}( 0, 8 ) double 8 extent 16( 9 8 ), extent 16 14.2 14.2.1 MPI_TYPE_CONTIGUOUS, MPI_TYPE_CONTIGUOUS(count,oldtype,newtype) IN count ( ) IN oldtype ( ) OUT newtype ( ) int MPI_Type_contiguous(int count,mpi_datatype oldtype, MPI_Datatype *newtype) MPI_TYPE_CONTIGUOUS(COUNT,OLDTYPE,NEWTYPE,IERROR) INTEGER COUNT,OLDTYPE,NEWTYPE,IERROR MPI 61 MPI_TYPE_CONTIGUOUS MPI_TYPE_CONTIGUOUS oldtype {(doubel,0),(char,8)}, extent=16, count=3, newtype {(double,0),(char,8),(double,16),(char,24),(double,32),(char,40)} 157

69 MPI_TYPE_CONTIGUOUS oldtype {(type 0, disp 0 ), (type n-1,disp n-1 )}, extent = ex. count newtype {(type 0, disp 0 ),..., (type n-1,disp n-1 ), (type 0, disp 0 +ex),..., (type n-1, disp n-1 +ex),..., (type 0, disp 0 +ex(count-1)),..., (type n-1,disp n-1 +ex(count-1))}. 14.2.2 MPI_TYPE_VECTOR extent MPI_TYPE_VECTOR(count,blocklength,stride,oldtype,newtype) IN count ( ) IN blocklength ( ) IN stride ( ) IN oldtype ( ) OUT newtypr ( ) int MPI_Type_vector(int count,int blocklength,int stride, MPI_Datatype oldtype,mpi_datatype *newtype) MPI_TYPE_VECTOR(COUNT,BLOCKLENGTH,STRIDE,OLDTYPE, NEWTYPE,IERROR) INTEGER COUNT,BLOCKLENGTH,STRIDE,OLDTYPE,NEWTYPE,IERROR MPI 62 MPI_TYPE_VECTOR 158

oldtype {(double,0),(char,8)},extent=16. MPI_TYPE_VECTOR(2,3,4,oldtype,newtype) {(double,0),(char,8), (double,16),(char,24), (double,32),(char,40), (double,64),(char,72), (double,80),(char,88),(double,96),(char,104)}., stride 4 70 MPI_TYPE_VECTOR MPI_TYPE_VECTOR(3,1,-2,oldtype,newtype) : {(double,0),(char,8),(double,-32),(char,-24),(double,-64),(char,-56)}., oldtype {(type 0, disp 0 ), (type n-1,disp n-1 )}, extent = ex. bl blocklength. count*bl, {(type 0, disp 0 ),..., (type n-1,disp n-1 type n-1,disp n-1 +ex.(stride+bl-1)),..., (type 0, disp 0 +ex.(count-1)),..., (type n-1, disp n-1 +ex.(count-1)),..., (type 0, disp 0 +ex.(count-1).stride),..., (type n-1, disp n-1 +ex.(stride.(count-1)+bl-1)},..., (type n-1, disp n-1 +ex.(stride.(count-1)+bl-1)}. MPI_TYPE_CONTIGUOUS( count, oldtype, newtype ) MPI_TYPE_VECTOR( count, 1, 1, oldtype, newtype ), MPI_TYPE_VECTOR(1, count, n, 159

oldtype, newtype) for any n.

MPI_TYPE_HVECTOR is identical to MPI_TYPE_VECTOR except that the stride is given in bytes rather than in units of the oldtype extent.

MPI_TYPE_HVECTOR(count, blocklength, stride, oldtype, newtype)
IN  count        number of blocks (nonnegative integer)
IN  blocklength  number of elements in each block (nonnegative integer)
IN  stride       spacing between the starts of consecutive blocks, in bytes (integer)
IN  oldtype      old datatype (handle)
OUT newtype      new datatype (handle)

int MPI_Type_hvector(int count, int blocklength, MPI_Aint stride, MPI_Datatype oldtype, MPI_Datatype *newtype)

MPI_TYPE_HVECTOR(COUNT, BLOCKLENGTH, STRIDE, OLDTYPE, NEWTYPE, IERROR)
    INTEGER COUNT, BLOCKLENGTH, STRIDE, OLDTYPE, NEWTYPE, IERROR

MPI Call 63: MPI_TYPE_HVECTOR

If oldtype = {(type_0,disp_0), ..., (type_{n-1},disp_{n-1})} with extent ex, and bl stands for blocklength, the new datatype has count*bl*n entries:

newtype = {(type_0,disp_0), ..., (type_{n-1},disp_{n-1}),
           (type_0,disp_0+ex), ..., (type_{n-1},disp_{n-1}+ex),
           ...,
           (type_0,disp_0+(bl-1)*ex), ..., (type_{n-1},disp_{n-1}+(bl-1)*ex),
           (type_0,disp_0+stride), ..., (type_{n-1},disp_{n-1}+stride),
           ...,
           (type_0,disp_0+stride+(bl-1)*ex), ..., (type_{n-1},disp_{n-1}+stride+(bl-1)*ex),
           ...,
           (type_0,disp_0+stride*(count-1)), ..., (type_{n-1},disp_{n-1}+stride*(count-1)),
           ...,
           (type_0,disp_0+stride*(count-1)+(bl-1)*ex), ..., (type_{n-1},disp_{n-1}+stride*(count-1)+(bl-1)*ex)}.

14.2.3 MPI_TYPE_INDEXED

MPI_TYPE_INDEXED replicates an old datatype into a sequence of blocks, where each block may have a different length and a different displacement; displacements are given in units of the oldtype extent.

MPI_TYPE_INDEXED(count, array_of_blocklengths, array_of_displacements, oldtype, newtype)
IN  count                   number of blocks (nonnegative integer)
IN  array_of_blocklengths   number of elements in each block (array of nonnegative integers)
IN  array_of_displacements  displacement of each block, in units of the oldtype extent (array of integers)
IN  oldtype                 old datatype (handle)
OUT newtype                 new datatype (handle)

int MPI_Type_indexed(int count, int *array_of_blocklengths, int *array_of_displacements, MPI_Datatype oldtype, MPI_Datatype *newtype)

MPI_TYPE_INDEXED(COUNT, ARRAY_OF_BLOCKLENGTHS, ARRAY_OF_DISPLACEMENTS, OLDTYPE, NEWTYPE, IERROR)
    INTEGER COUNT, ARRAY_OF_BLOCKLENGTHS(*), ARRAY_OF_DISPLACEMENTS(*), OLDTYPE, NEWTYPE, IERROR

MPI Call 64: MPI_TYPE_INDEXED

For example, with oldtype = {(double,0),(char,8)} and extent 16, let B = (3,1) and D = (4,0). MPI_TYPE_INDEXED(2, B, D, oldtype, newtype) produces

{(double,64),(char,72), (double,80),(char,88), (double,96),(char,104), (double,0),(char,8)}:

a block of three copies starting at displacement 4 * 16 = 64, followed by a block of one copy starting at displacement 0.

Fig. 71: the action of MPI_TYPE_INDEXED

In general, if oldtype = {(type_0,disp_0), ..., (type_{n-1},disp_{n-1})} with extent ex, and B stands for array_of_blocklengths, D for array_of_displacements, the new datatype has n * sum(B[i], i=0,...,count-1) entries:

newtype = {(type_0,disp_0+D[0]*ex), ..., (type_{n-1},disp_{n-1}+D[0]*ex),
           ...,
           (type_0,disp_0+(D[0]+B[0]-1)*ex), ..., (type_{n-1},disp_{n-1}+(D[0]+B[0]-1)*ex),
           ...,
           (type_0,disp_0+D[count-1]*ex), ..., (type_{n-1},disp_{n-1}+D[count-1]*ex),
           ...,
           (type_0,disp_0+(D[count-1]+B[count-1]-1)*ex), ..., (type_{n-1},disp_{n-1}+(D[count-1]+B[count-1]-1)*ex)}.

MPI_TYPE_VECTOR(count, blocklength, stride, oldtype, newtype) is equivalent to MPI_TYPE_INDEXED(count, B, D, oldtype, newtype) with D[j] = j*stride and B[j] = blocklength for j = 0, ..., count-1.

MPI_TYPE_HINDEXED is identical to MPI_TYPE_INDEXED except that the displacements in array_of_displacements are given in bytes rather than in units of the oldtype extent.

MPI_TYPE_HINDEXED(count, array_of_blocklengths, array_of_displacements, oldtype, newtype)
IN  count                   number of blocks (nonnegative integer)
IN  array_of_blocklengths   number of elements in each block (array of nonnegative integers)
IN  array_of_displacements  byte displacement of each block (array of integers)
IN  oldtype                 old datatype (handle)
OUT newtype                 new datatype (handle)

int MPI_Type_hindexed(int count, int *array_of_blocklengths, MPI_Aint *array_of_displacements, MPI_Datatype oldtype, MPI_Datatype *newtype)

MPI_TYPE_HINDEXED(COUNT, ARRAY_OF_BLOCKLENGTHS, ARRAY_OF_DISPLACEMENTS, OLDTYPE, NEWTYPE, IERROR)
    INTEGER COUNT, ARRAY_OF_BLOCKLENGTHS(*), ARRAY_OF_DISPLACEMENTS(*), OLDTYPE, NEWTYPE, IERROR

MPI Call 65: MPI_TYPE_HINDEXED

If oldtype = {(type_0,disp_0), ..., (type_{n-1},disp_{n-1})} with extent ex, and B stands for array_of_blocklengths, D for array_of_displacements, the new datatype has n * sum(B[i], i=0,...,count-1) entries:

newtype = {(type_0,disp_0+D[0]), ..., (type_{n-1},disp_{n-1}+D[0]),
           ...,
           (type_0,disp_0+D[0]+(B[0]-1)*ex), ..., (type_{n-1},disp_{n-1}+D[0]+(B[0]-1)*ex),
           ...,
           (type_0,disp_0+D[count-1]), ..., (type_{n-1},disp_{n-1}+D[count-1]),
           ...,
           (type_0,disp_0+D[count-1]+(B[count-1]-1)*ex), ...,

           (type_{n-1},disp_{n-1}+D[count-1]+(B[count-1]-1)*ex)}.

14.2.4 MPI_TYPE_STRUCT

MPI_TYPE_STRUCT is the most general constructor: in addition to varying block lengths and byte displacements, each block may be built from a different old datatype.

MPI_TYPE_STRUCT(count, array_of_blocklengths, array_of_displacements, array_of_types, newtype)
IN  count                   number of blocks (nonnegative integer)
IN  array_of_blocklengths   number of elements in each block (array of nonnegative integers)
IN  array_of_displacements  byte displacement of each block (array of integers)
IN  array_of_types          datatype of the elements in each block (array of handles)
OUT newtype                 new datatype (handle)

int MPI_Type_struct(int count, int *array_of_blocklengths, MPI_Aint *array_of_displacements, MPI_Datatype *array_of_types, MPI_Datatype *newtype)

MPI_TYPE_STRUCT(COUNT, ARRAY_OF_BLOCKLENGTHS, ARRAY_OF_DISPLACEMENTS, ARRAY_OF_TYPES, NEWTYPE, IERROR)
    INTEGER COUNT, ARRAY_OF_BLOCKLENGTHS(*), ARRAY_OF_DISPLACEMENTS(*), ARRAY_OF_TYPES(*), NEWTYPE, IERROR

MPI Call 66: MPI_TYPE_STRUCT

For example, let type1 = {(double,0),(char,8)} with extent 16, and let B = (2,1,3), D = (0,16,26), T = (MPI_FLOAT, type1, MPI_CHAR). MPI_TYPE_STRUCT(3, B, D, T, newtype) produces

{(float,0),(float,4), (double,16),(char,24), (char,26),(char,27),(char,28)}:

two floats starting at displacement 0, one copy of type1 starting at displacement 16, and three chars starting at displacement 26.

Fig. 72: the action of MPI_TYPE_STRUCT

Displacement 0 carries the MPI_FLOAT block, displacement 16 the type1 block, and displacement 26 the MPI_CHAR block; note the 1-byte gap at displacement 25 and the alignment padding after the last char.

In general, let T stand for array_of_types, where T[i] has type map typemap_i = {(type_0^i, disp_0^i), ..., (type_{n_i-1}^i, disp_{n_i-1}^i)} and extent ex_i, and let B stand for array_of_blocklengths, D for array_of_displacements. The new datatype has sum(B[i]*n_i, i=0,...,count-1) entries:

newtype = {(type_0^0, disp_0^0+D[0]), ..., (type_{n_0-1}^0, disp_{n_0-1}^0+D[0]),
           ...,
           (type_0^0, disp_0^0+D[0]+(B[0]-1)*ex_0), ..., (type_{n_0-1}^0, disp_{n_0-1}^0+D[0]+(B[0]-1)*ex_0),
           ...,
           (type_0^{count-1}, disp_0^{count-1}+D[count-1]), ...,
           (type_{n_{count-1}-1}^{count-1}, disp_{n_{count-1}-1}^{count-1}+D[count-1]+(B[count-1]-1)*ex_{count-1})}.

MPI_TYPE_HINDEXED(count, B, D, oldtype, newtype) is equivalent to MPI_TYPE_STRUCT(count, B, D, T, newtype) where every entry of T equals oldtype.

14.2.5 Committing and freeing datatypes

A derived datatype must be committed before it can be used in communication; MPI_TYPE_COMMIT turns the type description into a form the implementation can use efficiently.

MPI_TYPE_COMMIT(datatype)
INOUT datatype  datatype to be committed (handle)

int MPI_Type_commit(MPI_Datatype *datatype)

MPI_TYPE_COMMIT(DATATYPE, IERROR)
    INTEGER DATATYPE, IERROR

MPI Call 67: MPI_TYPE_COMMIT

MPI_TYPE_FREE(datatype)
INOUT datatype  datatype to be freed (handle)

int MPI_Type_free(MPI_Datatype *datatype)

MPI_TYPE_FREE(DATATYPE, IERROR)
    INTEGER DATATYPE, IERROR

MPI Call 68: MPI_TYPE_FREE

MPI_TYPE_FREE marks the datatype for deallocation and sets the handle to

MPI_DATATYPE_NULL. Communication already in progress with the datatype completes normally; only the caller's handle is invalidated.

INTEGER type1, type2
CALL MPI_TYPE_CONTIGUOUS(5, MPI_REAL, type1, ierr)
C   construct a new datatype in type1
CALL MPI_TYPE_COMMIT(type1, ierr)
C   commit type1; it can now be used in communication
type2 = type1
C   type2 can also be used: it refers to the same committed datatype
CALL MPI_TYPE_VECTOR(3, 5, 4, MPI_REAL, type1, ierr)
C   construct a new, uncommitted datatype in type1
CALL MPI_TYPE_COMMIT(type1, ierr)
C   commit the new type1

Example 53 compares several ways of transferring the same strided data: a vector datatype, a struct datatype with an MPI_UB marker, and packing by hand in user code.

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

#define NUMBER_OF_TESTS 10

int main( argc, argv )
int argc;
char **argv;
{
    MPI_Datatype vec1, vec_n;
    int          blocklens[2];
    MPI_Aint     indices[2];
    MPI_Datatype old_types[2];
    double       *buf, *lbuf;
    register double *in_p, *out_p;
    int          rank;
    int          n, stride;
    double       t1, t2, tmin;
    int          i, j, k, nloop;
    MPI_Status   status;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    n      = 1000;

stride = 24; nloop = 100000/n; buf = (double *) malloc( n * stride * sizeof(double) ); if (!buf) { fprintf( stderr, "Could not allocate send/recv buffer of size %d\n", n * stride ); MPI_Abort( MPI_COMM_WORLD, 1 ); } lbuf = (double *) malloc( n * sizeof(double) ); if (!lbuf) { fprintf( stderr, "Could not allocated send/recv lbuffer of size %d\n", n ); MPI_Abort( MPI_COMM_WORLD, 1 ); } if (rank == 0) printf( "Kind\tn\tstride\ttime (sec)\trate (MB/sec)\n" ); /* */ MPI_Type_vector( n, 1, stride, MPI_DOUBLE, &vec1 ); MPI_Type_commit( &vec1 ); tmin = 1000; for (k=0; k<number_of_tests; k++) { if (rank == 0) { /* */ MPI_Sendrecv( MPI_BOTTOM, 0, MPI_INT, 1, 14, MPI_BOTTOM, 0, MPI_INT, 1, 14, MPI_COMM_WORLD, &status ); t1 = MPI_Wtime(); for (j=0; j<nloop; j++) { MPI_Send( buf, 1, vec1, 1, k, MPI_COMM_WORLD ); MPI_Recv( buf, 1, vec1, 1, k, MPI_COMM_WORLD, &status ); } t2 = (MPI_Wtime() - t1) / nloop; if (t2 < tmin) tmin = t2; } else if (rank == 1) { /* */ MPI_Sendrecv( MPI_BOTTOM, 0, MPI_INT, 0, 14, MPI_BOTTOM, 0, MPI_INT, 0, 14, MPI_COMM_WORLD, &status ); for (j=0; j<nloop; j++) { MPI_Recv( buf, 1, vec1, 0, k, MPI_COMM_WORLD, &status ); MPI_Send( buf, 1, vec1, 0, k, MPI_COMM_WORLD ); } 166

} } /* */ tmin = tmin / 2.0; if (rank == 0) { printf( "Vector\t%d\t%d\t%f\t%f\n", n, stride, tmin, n * sizeof(double) * 1.0e-6 / tmin ); } MPI_Type_free( &vec1 ); /* */ blocklens[0] = 1; blocklens[1] = 1; indices[0] = 0; indices[1] = stride * sizeof(double); old_types[0] = MPI_DOUBLE; old_types[1] = MPI_UB; MPI_Type_struct( 2, blocklens, indices, old_types, &vec_n ); MPI_Type_commit( &vec_n ); tmin = 1000; for (k=0; k<number_of_tests; k++) { if (rank == 0) { /* */ MPI_Sendrecv( MPI_BOTTOM, 0, MPI_INT, 1, 14, MPI_BOTTOM, 0, MPI_INT, 1, 14, MPI_COMM_WORLD, &status ); t1 = MPI_Wtime(); for (j=0; j<nloop; j++) { MPI_Send( buf, n, vec_n, 1, k, MPI_COMM_WORLD ); MPI_Recv( buf, n, vec_n, 1, k, MPI_COMM_WORLD, &status ); } t2 = (MPI_Wtime() - t1) / nloop; if (t2 < tmin) tmin = t2; } else if (rank == 1) { /* */ MPI_Sendrecv( MPI_BOTTOM, 0, MPI_INT, 0, 14, MPI_BOTTOM, 0, MPI_INT, 0, 14, MPI_COMM_WORLD, &status ); for (j=0; j<nloop; j++) { MPI_Recv( buf, n, vec_n, 0, k, MPI_COMM_WORLD, &status ); MPI_Send( buf, n, vec_n, 0, k, MPI_COMM_WORLD ); } } } 167

/* */ tmin = tmin / 2.0; if (rank == 0) { printf( "Struct\t%d\t%d\t%f\t%f\n", n, stride, tmin, n * sizeof(double) * 1.0e-6 / tmin ); } MPI_Type_free( &vec_n ); /* Use user-packing with known stride */ tmin = 1000; for (k=0; k<number_of_tests; k++) { if (rank == 0) { /* Make sure both processes are ready */ MPI_Sendrecv( MPI_BOTTOM, 0, MPI_INT, 1, 14, MPI_BOTTOM, 0, MPI_INT, 1, 14, MPI_COMM_WORLD, &status ); t1 = MPI_Wtime(); for (j=0; j<nloop; j++) { /* If the compiler isn't good at unrolling and changing multiplication to indexing, this won't be as good as it could be */ for (i=0; i<n; i++) lbuf[i] = buf[i*stride]; MPI_Send( lbuf, n, MPI_DOUBLE, 1, k, MPI_COMM_WORLD ); MPI_Recv( lbuf, n, MPI_DOUBLE, 1, k, MPI_COMM_WORLD, &status ); for (i=0; i<n; i++) buf[i*stride] = lbuf[i]; } t2 = (MPI_Wtime() - t1) / nloop; if (t2 < tmin) tmin = t2; } else if (rank == 1) { /* Make sure both processes are ready */ MPI_Sendrecv( MPI_BOTTOM, 0, MPI_INT, 0, 14, MPI_BOTTOM, 0, MPI_INT, 0, 14, MPI_COMM_WORLD, &status ); for (j=0; j<nloop; j++) { MPI_Recv( lbuf, n, MPI_DOUBLE, 0, k, MPI_COMM_WORLD, &status ); for (i=0; i<n; i++) buf[i*stride] = lbuf[i]; for (i=0; i<n; i++) lbuf[i] = buf[i*stride]; MPI_Send( lbuf, n, MPI_DOUBLE, 0, k, MPI_COMM_WORLD ); } } 168

} /* Convert to half the round-trip time */ tmin = tmin / 2.0; if (rank == 0) { printf( "User\t%d\t%d\t%f\t%f\n", n, stride, tmin, n * sizeof(double) * 1.0e-6 / tmin ); } /* Use user-packing with known stride, using addition in the user copy code */ tmin = 1000; for (k=0; k<number_of_tests; k++) { if (rank == 0) { /* Make sure both processes are ready */ MPI_Sendrecv( MPI_BOTTOM, 0, MPI_INT, 1, 14, MPI_BOTTOM, 0, MPI_INT, 1, 14, MPI_COMM_WORLD, &status ); t1 = MPI_Wtime(); for (j=0; j<nloop; j++) { /* If the compiler isn't good at unrolling and changing multiplication to indexing, this won't be as good as it could be */ in_p = buf; out_p = lbuf; for (i=0; i<n; i++) { out_p[i] = *in_p; in_p += stride; } MPI_Send( lbuf, n, MPI_DOUBLE, 1, k, MPI_COMM_WORLD ); MPI_Recv( lbuf, n, MPI_DOUBLE, 1, k, MPI_COMM_WORLD, &status ); out_p = buf; in_p = lbuf; for (i=0; i<n; i++) { *out_p = in_p[i]; out_p += stride; } } t2 = (MPI_Wtime() - t1) / nloop; if (t2 < tmin) tmin = t2; } else if (rank == 1) { /* Make sure both processes are ready */ MPI_Sendrecv( MPI_BOTTOM, 0, MPI_INT, 0, 14, MPI_BOTTOM, 0, MPI_INT, 0, 14, MPI_COMM_WORLD, &status ); for (j=0; j<nloop; j++) { MPI_Recv( lbuf, n, MPI_DOUBLE, 0, k, MPI_COMM_WORLD, &status ); in_p = lbuf; out_p = buf; 169

for (i=0; i<n; i++) { *out_p = in_p[i]; out_p += stride; } out_p = lbuf; in_p = buf; for (i=0; i<n; i++) { out_p[i] = *in_p; in_p += stride; } MPI_Send( lbuf, n, MPI_DOUBLE, 0, k, MPI_COMM_WORLD ); } } } /* Convert to half the round-trip time */ tmin = tmin / 2.0; if (rank == 0) { printf( "User(add)\t%d\t%d\t%f\t%f\n", n, stride, tmin, n * sizeof(double) * 1.0e-6 / tmin ); } } MPI_Finalize( ); /************ *****************************************/ #include "mpi.h" #include <stdio.h> int main(argc, argv) int argc; char **argv; { int rank, size, i, buf[1]; MPI_Status status; MPI_Init( &argc, &argv ); MPI_Comm_rank( MPI_COMM_WORLD, &rank ); MPI_Comm_size( MPI_COMM_WORLD, &size ); if (rank == 0) { for (i=0; i<100*(size-1); i++) { MPI_Recv( buf, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status ); printf( "Msg from %d with tag %d\n", status.mpi_source, status.mpi_tag ); } } 170

    }
    else {
        for (i=0; i<100; i++)
            MPI_Send( buf, 1, MPI_INT, 0, i, MPI_COMM_WORLD );
    }
    MPI_Finalize();
    return 0;
}

14.3 Address functions

MPI_ADDRESS returns the byte address of a location in memory, relative to MPI_BOTTOM. It is mainly used to build the displacement arrays for MPI_Type_struct and for communication addressed from MPI_BOTTOM.

MPI_ADDRESS(location, address)
IN  location  location in the caller's memory (choice)
OUT address   address of location, relative to MPI_BOTTOM (integer)

int MPI_Address(void* location, MPI_Aint *address)

MPI_ADDRESS(LOCATION, ADDRESS, IERROR)
    <type> LOCATION(*)
    INTEGER ADDRESS, IERROR

MPI Call 69: MPI_ADDRESS

REAL A(100,100)
INTEGER I1, I2, DIFF
CALL MPI_ADDRESS(A(1,1), I1, IERROR)
CALL MPI_ADDRESS(A(10,10), I2, IERROR)
DIFF = I2 - I1

Example 54: using MPI_ADDRESS

Here DIFF is set to [(10-1)*100 + (10-1)]*sizeof(real) = 909*sizeof(real), since Fortran stores A in column-major order; the values of I1 and I2 themselves are implementation dependent. The next example uses MPI_Address together with MPI_Type_struct to broadcast a C structure:

#include <stdio.h>
#include "mpi.h"

int main( argc, argv )
int argc;
char **argv;
{
    int rank;

    struct { int a; double b; } value;  /* the structure to broadcast */
    MPI_Datatype mystruct;
    int          blocklens[2];
    MPI_Aint     indices[2];
    MPI_Datatype old_types[2];

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    /* One value of each type */
    blocklens[0] = 1;  /* one int */
    blocklens[1] = 1;  /* one double */
    /* The base types */
    old_types[0] = MPI_INT;
    old_types[1] = MPI_DOUBLE;
    /* The locations of the members */
    MPI_Address( &value.a, &indices[0] );
    MPI_Address( &value.b, &indices[1] );
    /* Make the displacements relative to the start of the structure */
    indices[1] = indices[1] - indices[0];
    indices[0] = 0;
    MPI_Type_struct( 2, blocklens, indices, old_types, &mystruct );  /* build the MPI datatype */
    MPI_Type_commit( &mystruct );  /* commit it before use */

    do {
        if (rank == 0)
            scanf( "%d %lf", &value.a, &value.b );  /* process 0 reads the data */
        MPI_Bcast( &value, 1, mystruct, 0, MPI_COMM_WORLD );  /* broadcast the structure */
        printf( "Process %d got %d and %lf\n", rank, value.a, value.b );
    } while (value.a >= 0);

    /* Clean up the type */
    MPI_Type_free( &mystruct );
    MPI_Finalize( );
    return 0;
}

Example 55: broadcasting a structure with a derived datatype

14.4 Datatype size and extent inquiry

MPI_TYPE_EXTENT(datatype, extent)
IN  datatype  datatype (handle)
OUT extent    extent of the datatype (integer)

int MPI_Type_extent(MPI_Datatype datatype, MPI_Aint *extent)

MPI_TYPE_EXTENT(DATATYPE, EXTENT, IERROR)
    INTEGER DATATYPE, EXTENT, IERROR

MPI Call 70: MPI_TYPE_EXTENT

MPI_TYPE_EXTENT returns the extent of a datatype, including any padding implied by alignment or by explicit lb/ub markers.

MPI_TYPE_SIZE(datatype, size)
IN  datatype  datatype (handle)
OUT size      size of the datatype, in bytes (integer)

int MPI_Type_size(MPI_Datatype datatype, int *size)

MPI_TYPE_SIZE(DATATYPE, SIZE, IERROR)
    INTEGER DATATYPE, SIZE, IERROR

MPI Call 71: MPI_TYPE_SIZE

MPI_TYPE_SIZE returns the total number of bytes of data in the type map; unlike MPI_TYPE_EXTENT, it does not count the gaps.

Suppose MPI_RECV( buf, count, datatype, dest, tag, comm, status ) is executed, where datatype has type map {(type_0,disp_0), ..., (type_{n-1},disp_{n-1})}. The received message need not fill a whole number of copies of datatype. From status, MPI_GET_ELEMENTS returns the number of basic (primitive) elements received, while MPI_GET_COUNT returns the number of complete datatype copies.

MPI_GET_ELEMENTS(status, datatype, count)
IN  status    status of the receive (Status)
IN  datatype  datatype used by the receive (handle)
OUT count     number of basic elements received (integer)

int MPI_Get_elements(MPI_Status *status, MPI_Datatype datatype, int *count)

MPI_GET_ELEMENTS(STATUS, DATATYPE, COUNT, IERROR)
    INTEGER STATUS(MPI_STATUS_SIZE), DATATYPE, COUNT, IERROR

MPI Call 72: MPI_GET_ELEMENTS

MPI_GET_COUNT(status, datatype, count)
IN  status    status of the receive (Status)
IN  datatype  datatype used by the receive (handle)
OUT count     number of complete datatype copies received (integer)

int MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count)

MPI_GET_COUNT(STATUS, DATATYPE, COUNT, IERROR)
    INTEGER STATUS(MPI_STATUS_SIZE), DATATYPE, COUNT, IERROR

MPI Call 73: MPI_GET_COUNT

If the number of basic elements received is not a multiple of the number of elements in one copy of datatype, MPI_GET_COUNT returns MPI_UNDEFINED; MPI_GET_ELEMENTS still returns the exact element count.

...
CALL MPI_TYPE_CONTIGUOUS(2, MPI_REAL, Type2, ierr)
C   Type2 consists of two REALs
CALL MPI_TYPE_COMMIT(Type2, ierr)
...
CALL MPI_COMM_RANK(comm, rank, ierr)
IF (rank.EQ.0) THEN
    CALL MPI_SEND(a, 2, MPI_REAL, 1, 0, comm, ierr)
C   send 2 REALs to process 1
    CALL MPI_SEND(a, 3, MPI_REAL, 1, 0, comm, ierr)
C   send 3 REALs to process 1
ELSE
    CALL MPI_RECV(a, 2, Type2, 0, 0, comm, stat, ierr)
C   receive from process 0 using Type2
    CALL MPI_GET_COUNT(stat, Type2, i, ierr)
C   a whole number of Type2 copies arrived: returns i=1
    CALL MPI_GET_ELEMENTS(stat, Type2, i, ierr)
C   in units of REAL: returns i=2
    CALL MPI_RECV(a, 2, Type2, 0, 0, comm, stat, ierr)
C   receive the second message from process 0
    CALL MPI_GET_COUNT(stat, Type2, i, ierr)
C   3 REALs do not form a whole number of Type2 copies: returns i=MPI_UNDEFINED
    CALL MPI_GET_ELEMENTS(stat, Type2, i, ierr)
C   in units of REAL: returns i=3
END IF

Example 56: MPI_GET_COUNT versus MPI_GET_ELEMENTS

MPI_GET_ELEMENTS may also be applied to the status returned by a probe.

14.5 The pseudo datatypes MPI_UB and MPI_LB

MPI provides two pseudo datatypes, MPI_LB and MPI_UB, that occupy no space (extent(MPI_LB) = extent(MPI_UB) = 0) but let the user set the lower and upper bound of a datatype explicitly. For typemap = {(type_0,disp_0), ..., (type_{n-1},disp_{n-1})}:

lb(typemap) = min_j { disp_j }                    if no entry has basic type lb,
              min_j { disp_j : type_j = lb }      otherwise;

ub(typemap) = max_j { disp_j + sizeof(type_j) }   if no entry has basic type ub,
              max_j { disp_j : type_j = ub }      otherwise;

extent(typemap) = ub(typemap) - lb(typemap) + ε

where ε is the alignment padding (zero when explicit markers set the bounds).

MPI_TYPE_LB(datatype, displacement)
IN  datatype      datatype (handle)
OUT displacement  displacement of the lower bound, in bytes (integer)

int MPI_Type_lb(MPI_Datatype datatype, MPI_Aint *displacement)

MPI_TYPE_LB(DATATYPE, DISPLACEMENT, IERROR)
    INTEGER DATATYPE, DISPLACEMENT, IERROR

MPI Call 74: MPI_TYPE_LB

MPI_TYPE_UB(datatype, displacement)
IN  datatype      datatype (handle)
OUT displacement  displacement of the upper bound, in bytes (integer)

int MPI_Type_ub(MPI_Datatype datatype, MPI_Aint *displacement)

MPI_TYPE_UB(DATATYPE, DISPLACEMENT, IERROR)
    INTEGER DATATYPE, DISPLACEMENT, IERROR

MPI Call 75: MPI_TYPE_UB

For example, let D = (-3, 0, 6), T = (MPI_LB, MPI_INT, MPI_UB) and B = (1, 1, 1). MPI_TYPE_STRUCT(3, B, D, T, type1) creates the datatype {(lb,-3),(int,0),(ub,6)}, with extent 9 and an integer at displacement 0. MPI_TYPE_CONTIGUOUS(2, type1, type2) then creates {(lb,-3),(int,0),(int,9),(ub,15)}: the second copy starts one extent (9 bytes) after the first, and lb/ub entries that are not extremal can be dropped.

REAL a(100,100), b(100,100)
INTEGER disp(100), blocklen(100), ltype, myrank, ierr
INTEGER status(MPI_STATUS_SIZE)
C   copy the part of a below the diagonal to the same locations in b
CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
C   in column i the entries below the diagonal are a(i+1:100,i):
C   100-i elements starting at 0-based Fortran offset 100*(i-1)+i
DO i=1, 100
    disp(i) = 100*(i-1) + i
    blocklen(i) = 100-i
END DO
C   build an indexed datatype describing the triangle
CALL MPI_TYPE_INDEXED(100, blocklen, disp, MPI_REAL, ltype, ierr)
CALL MPI_TYPE_COMMIT(ltype, ierr)
CALL MPI_SENDRECV(a, 1, ltype, myrank, 0, b, 1, ltype, myrank, 0,
+                 MPI_COMM_WORLD, status, ierr)
C   each process sends to itself (tag 0), copying the triangle of a into b

Example 57: copying a triangular part of a matrix with MPI_TYPE_INDEXED

The next example transposes a matrix by combining a vector datatype with an hvector datatype:

REAL a(100,100), b(100,100)
INTEGER row, xpose, sizeofreal, myrank, ierr
INTEGER status(MPI_STATUS_SIZE)
C   transpose matrix a into b
CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
CALL MPI_TYPE_EXTENT(MPI_REAL, sizeofreal, ierr)
C   datatype for one row of a (100 elements with stride 100)
CALL MPI_TYPE_VECTOR(100, 1, 100, MPI_REAL, row, ierr)
C   datatype for the whole matrix in row-major order:
C   100 copies of row, each shifted by only sizeof(REAL) bytes
CALL MPI_TYPE_HVECTOR(100, 1, sizeofreal, row, xpose, ierr)
CALL MPI_TYPE_COMMIT(xpose, ierr)

C   send a as xpose (row-major order) and receive into b as 100*100 REALs
CALL MPI_SENDRECV(a, 1, xpose, myrank, 0, b, 100*100, MPI_REAL,
+                 myrank, 0, MPI_COMM_WORLD, status, ierr)

Example 58: transposing a matrix with MPI_TYPE_VECTOR and MPI_TYPE_HVECTOR

14.6 Pack and unpack

MPI_PACK packs incount values of type datatype from inbuf into the contiguous buffer outbuf of size outcount bytes; the packed data can later be sent as type MPI_PACKED, just as if it had been sent with MPI_SEND. position gives the first free byte in outbuf on entry and is advanced past the packed data on return; comm must be the communicator that will later carry the packed message.

MPI_PACK(inbuf, incount, datatype, outbuf, outcount, position, comm)
IN    inbuf     input buffer start (choice)
IN    incount   number of input data items (integer)
IN    datatype  datatype of each input item (handle)
OUT   outbuf    output (pack) buffer start (choice)
IN    outcount  output buffer size, in bytes (integer)
INOUT position  current position in the pack buffer, in bytes (integer)
IN    comm      communicator for the packed message (handle)

int MPI_Pack(void* inbuf, int incount, MPI_Datatype datatype, void *outbuf, int outcount, int *position, MPI_Comm comm)

MPI_PACK(INBUF, INCOUNT, DATATYPE, OUTBUF, OUTCOUNT, POSITION, COMM, IERROR)
    <type> INBUF(*), OUTBUF(*)
    INTEGER INCOUNT, DATATYPE, OUTCOUNT, POSITION, COMM, IERROR

MPI Call 76: MPI_PACK

MPI_UNPACK(inbuf, insize, position, outbuf, outcount, datatype, comm)
IN    inbuf     input (pack) buffer start (choice)
IN    insize    input buffer size, in bytes (integer)
INOUT position  current position in the pack buffer, in bytes (integer)
OUT   outbuf    output buffer start (choice)
IN    outcount  number of items to be unpacked (integer)
IN    datatype  datatype of each output item (handle)
IN    comm      communicator for the packed message (handle)

int MPI_Unpack(void* inbuf, int insize, int *position, void *outbuf, int outcount, MPI_Datatype datatype, MPI_Comm comm)

MPI_UNPACK(INBUF, INSIZE, POSITION, OUTBUF, OUTCOUNT, DATATYPE, COMM, IERROR)
    <type> INBUF(*), OUTBUF(*)
    INTEGER INSIZE, POSITION, OUTCOUNT, DATATYPE, COMM, IERROR

MPI Call 77: MPI_UNPACK

MPI_UNPACK is the inverse of MPI_PACK: it unpacks outcount items of type datatype from the buffer inbuf of size insize into outbuf, as if they had been received with MPI_RECV. position gives the first byte to be read on entry and is advanced past the consumed data on return; comm is the communicator on which the packed message was received. Note the difference from MPI_RECV: there, count is an upper bound on the number of items received; here, exactly outcount items are unpacked.

A message sent with any datatype may be received as MPI_PACKED, and a message sent as MPI_PACKED may be received with any matching datatype. To unpack a received MPI_PACKED message, call MPI_UNPACK with position = 0 initially, the received byte count as insize, and the same communicator.

MPI_PACK_SIZE(incount, datatype, comm, size)
IN  incount   count argument that will be passed to MPI_PACK (integer)
IN  datatype  datatype argument that will be passed to MPI_PACK (handle)
IN  comm      communicator argument that will be passed to MPI_PACK (handle)
OUT size      upper bound on the packed size of incount items of datatype, in bytes (integer)

int MPI_Pack_size(int incount, MPI_Datatype datatype, MPI_Comm comm, int *size)

MPI_PACK_SIZE(INCOUNT, DATATYPE, COMM, SIZE, IERROR)
    INTEGER INCOUNT, DATATYPE, COMM, SIZE, IERROR

MPI Call 78: MPI_PACK_SIZE

MPI_PACK_SIZE returns in size an upper bound on the space that packing incount items of datatype will consume, which can be used to allocate the pack buffer. In the following example two ints are packed, sent as MPI_PACKED, and received directly as MPI_INT:

int position, i, j, a[2];
char buff[1000];
...
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
if (myrank == 0) {
    /* process 0 packs and sends */
    position = 0;  /* start at the beginning of the buffer */
    MPI_Pack(&i, 1, MPI_INT, buff, 1000, &position, MPI_COMM_WORLD);  /* pack i */
    MPI_Pack(&j, 1, MPI_INT, buff, 1000, &position, MPI_COMM_WORLD);  /* pack j */
    MPI_Send(buff, position, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
}
else if (myrank == 1) {
    /* process 1 receives the packed message directly as two ints */
    MPI_Recv(a, 2, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
}

Example 59: sending packed data, receiving it as MPI_INT

The next example packs an int (the element count, here 100) followed by that many floats into a single message:

int position, i;
float a[1000];
char buff[1000];
MPI_Status status;
...

MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
if (myrank == 0) {
    /* process 0 builds and sends the message */
    int len[2];
    MPI_Aint disp[2];
    MPI_Datatype type[2], newtype;

    i = 100;  /* number of floats to send */
    len[0] = 1;
    len[1] = i;
    MPI_Address(&i, disp);   /* address of i relative to MPI_BOTTOM */
    MPI_Address(a, disp+1);  /* address of a relative to MPI_BOTTOM */
    type[0] = MPI_INT;
    type[1] = MPI_FLOAT;
    MPI_Type_struct(2, len, disp, type, &newtype);  /* an int followed by 100 floats */
    MPI_Type_commit(&newtype);

    /* pack the whole structure, addressing from MPI_BOTTOM */
    position = 0;
    MPI_Pack(MPI_BOTTOM, 1, newtype, buff, 1000, &position, MPI_COMM_WORLD);  /* pack i and a into buff */
    /* send the packed message */
    MPI_Send(buff, position, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
}
else if (myrank == 1) {
    MPI_Recv(buff, 1000, MPI_PACKED, 0, 0, MPI_COMM_WORLD, &status);  /* receive the packed message */
    position = 0;
    MPI_Unpack(buff, 1000, &position, &i, 1, MPI_INT, MPI_COMM_WORLD);   /* first unpack the count */
    MPI_Unpack(buff, 1000, &position, a, i, MPI_FLOAT, MPI_COMM_WORLD);  /* then unpack i floats */
}

Example 60: packing a count followed by the data

The next example reads values on the root process and distributes them with two broadcasts, one for the packed size and one for the packed data:

#include <stdio.h>
#include "mpi.h"

int main( argc, argv )
int argc;
char **argv;
{
    int rank;

int packsize, position; int a; double b; char packbuf[100]; MPI_Init( &argc, &argv ); MPI_Comm_rank( MPI_COMM_WORLD, &rank ); do { if (rank == 0) {/* 0 */ scanf( "%d %lf", &a, &b ); packsize = 0;/* */ MPI_Pack( &a, 1, MPI_INT, packbuf, 100, &packsize, MPI_COMM_WORLD );/* a */ MPI_Pack( &b, 1, MPI_DOUBLE, packbuf, 100, &packsize, MPI_COMM_WORLD );/* b */ } MPI_Bcast( &packsize, 1, MPI_INT, 0, MPI_COMM_WORLD );/* */ MPI_Bcast( packbuf, packsize, MPI_PACKED, 0, MPI_COMM_WORLD );/* */ if (rank!= 0) { position = 0; MPI_Unpack( packbuf, packsize, &position, &a, 1, MPI_INT, MPI_COMM_WORLD );/* a */ MPI_Unpack( packbuf, packsize, &position, &b, 1, MPI_DOUBLE, MPI_COMM_WORLD );/* b */ } printf( "Process %d got %d and %lf\n", rank, a, b ); } while (a >= 0);/* a */ MPI_Finalize( ); return 0; } 61 14.7 MPI MPI-2 I/O 181

15 MPI MPI 15.1 0 1... N-1 MPI rank 0 MPI_GROUP_EMPTY MPI_GROUP_NULL MPI_GROUP_EMPTY MPI_GROUP_NULL MPI MPI_INIT MPI_COMM_WORLD MPI_COMM_SELF MPI_COMM_NULL MPI MPI_COMM_WORLD MPI_COMM_GROUP 15.2 MPI MPI_GROUP_SIZE(group,size) IN group OUT size int MPI_Group_size(MPI_Group group,int *size) MPI_GROUP_SIZE(GROUP,SIZE,IERROR) INTEGER GROUP,SIZE,IERROR MPI 79 MPI_GROUP_SIZE MPI_GROUP_SIZE 182

MPI_GROUP_RANK(group, rank)
IN  group  group (handle)
OUT rank   rank of the calling process in group, or MPI_UNDEFINED (integer)

int MPI_Group_rank(MPI_Group group, int *rank)

MPI_GROUP_RANK(GROUP, RANK, IERROR)
    INTEGER GROUP, RANK, IERROR

MPI Call 80: MPI_GROUP_RANK

MPI_GROUP_RANK is the group counterpart of MPI_COMM_RANK: it returns the rank of the calling process in group, or MPI_UNDEFINED if the caller is not a member.

MPI_GROUP_TRANSLATE_RANKS(group1, n, ranks1, group2, ranks2)
IN  group1  first group (handle)
IN  n       number of ranks in ranks1 and ranks2 (integer)
IN  ranks1  array of valid ranks in group1
IN  group2  second group (handle)
OUT ranks2  corresponding ranks in group2, or MPI_UNDEFINED (array of integers)

int MPI_Group_translate_ranks(MPI_Group group1, int n, int *ranks1, MPI_Group group2, int *ranks2)

MPI_GROUP_TRANSLATE_RANKS(GROUP1, N, RANKS1, GROUP2, RANKS2, IERROR)
    INTEGER GROUP1, N, RANKS1(*), GROUP2, RANKS2(*), IERROR

MPI Call 81: MPI_GROUP_TRANSLATE_RANKS

MPI_GROUP_TRANSLATE_RANKS takes n ranks ranks1 in group1 and returns in ranks2 the ranks of the same processes in group2; a process not in group2 is reported as MPI_UNDEFINED. This is useful, for example, to find the MPI_COMM_WORLD ranks of the members of a subgroup.

MPI_GROUP_COMPARE(group1, group2, result)
IN  group1  first group (handle)
IN  group2  second group (handle)
OUT result  comparison result (integer)

int MPI_Group_compare(MPI_Group group1, MPI_Group group2, int *result)

MPI_GROUP_COMPARE(GROUP1, GROUP2, RESULT, IERROR)
    INTEGER GROUP1, GROUP2, RESULT, IERROR

MPI Call 82: MPI_GROUP_COMPARE

MPI_GROUP_COMPARE group1 group2 group2 MPI_IDENT group1 group2 MPI_SIMILAR MPI_UNEQUAL MPI_COMM_GROUP(comm,group) IN comm OUT group comm int MPI_Comm_group(MPI_Comm comm, MPI_Group * group) MPI_COMM_GROUP(COMM,GROUP,IERROR) INTEGER COMM,GROUP,IERROR MPI_COMM_GROUP MPI 83 MPI_COMM_GROUP MPI_COMM_GROUP MPI_GROUP_UNION(group1,group2,newgroup) IN group1 IN group2 OUT newgroup int MPI_Group_union(MPI_Group group1,mpi_group group2,mpi_group *newgroup) MPI_GROUP_UNION(GROUP1,GROUP2, NEWGROUP, IERROR) INTEGER GROUP1,GROUP2,NEWGROUP,IERROR MPI 84 MPI_GROUP_UNION MPI_GROUP_UNION newgroup group1 group2 group1 MPI_GROUP_INTERSECTION(group1,group2,newgroup) IN group1 IN group2 OUT newgroup int MPI_Group_intersection(MPI_Group group1,mpi_group group2,mpi_group *newgroup) MPI_GROUP_INTERSECTION(GROUP1,GROUP2,NEWGROUP,IERROR) INTGETER GROUP1,GROUP2,NEWGROUP,IERROR MPI 85 MPI_GROUP_INTERSECTION 184

MPI_GROUP_INTERSECTION newgroup group1 group2 MPI_GROUP_DIFFERENCE(group1,group2,newgroup) IN group1 IN group2 OUT newgroup int MPI_Group_difference(MPI_Group group1,mpi_group group2,mpi_group *newgroup) MPI_GROUP_DIFFERENCE(GROUP1,GROUP2,NEWGROUP,IERROR) INTEGER GROUP1,GROUP2,NEWGROUP,IERROR MPI 86 MPI_GROUP_DIFFERENCE MPI_GROUP_DIFFERENCE newgroup group1 group2 MPI_GROUP_EMPTY MPI_GROUP_INCL(group,n,ranks,newgroup) IN group IN n ranks IN ranks OUT newgroup int MPI_Group_incl(MPI_Group group,int n,int *ranks,mpi_group *newgroup) MPI_GROUP_INCL(GROUP,N,RANKS,NEWGROUP,IERROR) INTEGER GROUP,NRANKS(*),NEWGROUP,IERROR MPI 87 MPI_GROUP_INCL MPI_GROUP_INCL n rank[0]... rank[n-1] newgroup n=0 newgroup MPI_GROUP_EMPTY MPI_GROUP_EXCL(group,n,ranks,newgroup) IN group ( ) IN n ranks ( ) IN ranks newgroup OUT newgroup int MPI_Group_excl(MPI_Group group, int n, int *ranks,mpi_group *newgroup) MPI_GROUP_EXCL(GROUP,N,RANKS,NEWGROUP,IERROR) INTEGER GROUP,N,RANKS(*),NEWGROUP,IERROR MPI 88 MPI_GROUP_EXCL 185

MPI_GROUP_EXCL newgroup group n ranks[0],...,ranks[n-1] ranks n group n=0, newgroup group MPI_GROUP_RANGE_INCL(group,n,ranges,newgroup) IN group ( ) IN n ranges ( ) IN ranges OUT newgroup int MPI_Group_range_incl(MPI_Group group, int n, int ranges[][3],mpi_group *newgroup) MPI_GROUP_RANGE_INCL(GROUP,N,RANGES,NEWGROUP,IERROR) INTEGER GROUP,N,RANGES(3,*),NEWGROUP,IERROR MPI 89 MPI_GROUP_RANGE_INCL MPI_GROUP_RANGE_INCL group n ranges newgroup ranges (first,last,stride ),...,(first,last,stride ), newgroup group first, first +stride,..., first +(last -first )/stride *stride,... first, first +stride,..., first +(last -first )/stride *stride group (1,9,2) (15,20,3),(21,30,2) 1, 3, 5, 7, 9,15,18,21,23,25,27,29 MPI_GROUP_RANGE_EXCL(group,n,ranges,newgroup) IN group ( ) IN n ranges ( ) IN ranges OUT newgroup int MPI_Group_range_excl(MPI_Group group,int n, int ranges[][3], MPI_Group *newgroup) MPI_GROUP_RANGE_EXCL(GROUP,N,RANGES,NEWGROUP,IERROR) INTEGER GROUP,N,RANGES(3,*),NEWGROUP,IERROR MPI 90 MPI_GROUP_RANGE_EXCL MPI_GROUP_RANGE_EXCL group n rangs newgroup MPI_GROUP_INCL 186

MPI_GROUP_FREE(group) IN/OUT group ( ) int MPI_Group_free(MPI_Group *group) MPI_GROUP_FREE(GROUP,IERROR) INTEGER GROUP,IERROR MPI 91 MPI_GROUP_FREE MPI_GROUP_FREE group MPI_GROUP_NULL 15.3 MPI MPI_COMM_SIZE(comm,size) IN comm ( ) OUT size comm ( ) int MPI_Comm_size(MPI_Comm comm, int *size) MPI_COMM_SIZE(COMM,SIZE,IERROR) INTEGER COMM,SIZE,IERROR MPI_COMM_RANK(comm,rank) IN comm ( ) OUT rank int MPI_Comm_rank(MPI_Comm comm, int *rank) MPI_COMM_RANK(COMM,RANK,IERROR) INTEGER COMM,RANK,IERROR rank 187

MPI_COMM_COMPARE(comm1,comm2,result) IN comm1 ( ) IN comm2 ( ) OUT result ( ) int MPI_Comm_compare(MPI_Comm comm1,mpi_comm comm2,int *result) MPI_COMM_COMPARE(COMM1,COMM2,RESULT,IERROR) INTEGER COMM1,COMM2,RESULT,IERROR MPI_COMM_COMPARE MPI_IDENT MPI_CONGRUENT MPI 92 MPI_COMM_COMPARE comm1 comm2 MPI_SIMILAR MPI_UNEQUAL MPI MPI_COMM_WORLD MPI MPI_COMM_DUP(comm,newcomm) IN comm ( ) OUT newcomm comm ( ) int MPI_Comm_dup(MPI_Comm comm,mpi_comm *newcomm) MPI_COMM_DUP(COMM,NEWCOMM,IERROR) INTEGER COMM, NEWCOMM,IERROR MPI 93 MPI_COMM_DUP MPI_COMM_DUP comm newcomm newcomm MPI_COMM_CREATE(comm,group,newcomm) IN comm ( ) IN group ( ) OUT newcomm ( ) int MPI_Comm_create(MPI_Comm comm,mpi_group group,mpi_comm *newcomm) MPI_COMM_CREATE(COMM,GROUP,NEWCOMM,IERROR) INTEGER COMM,GROUP,NEWCOMM,IERROR MPI 94 MPI_COMM_CREATE 188

MPI_COMM_CREATE group group MPI_COMM_NULL, group group comm MPI_COMM_SPLIT(comm,color,key,newcomm) IN comm ( ) IN color ( ) IN key ( ) OUT newcomm ( ) int MPI_Comm_split(MPI_Comm comm,int color, int key,mpi_comm *newcomm) MPI_COMM_SPLIT(COMM,COLOR,KEY,NEWCOMM,IERROR) INTEGER COMM,COLOR,KEY,NEWCOMM,IERROR MPI 95 MPI_COMM_SPLIT MPI_COMM_SPLIT comm color color color key key color MPI_UNDEFINED newcomm MPI_COMM_NULL color MPI_COMM_FREE(comm) IN/OUT comm int MPI_Comm_free(MPI_Comm *comm) MPI_COMM_FREE(COMM,IERROR) INTEGER COMM,IERROR MPI 96 MPI_COMM_FREE MPI_COMM_FREE MPI_COMM_NULL 0 (commslave) MPI_COMM_WORLD MPI_COMM_WORLD commslave main(int argc,char **argv) { int me,count,count2; 189

    void *send_buf, *recv_buf, *send_buf2, *recv_buf2;
    MPI_Group MPI_GROUP_WORLD, grprem;
    MPI_Comm commslave;
    static int ranks[] = {0};
    ...
    MPI_Init(&argc, &argv);
    MPI_Comm_group(MPI_COMM_WORLD, &MPI_GROUP_WORLD);
    /* get the group of MPI_COMM_WORLD */
    MPI_Comm_rank(MPI_COMM_WORLD, &me);
    MPI_Group_excl(MPI_GROUP_WORLD, 1, ranks, &grprem);
    /* the group without process 0 */
    MPI_Comm_create(MPI_COMM_WORLD, grprem, &commslave);
    /* a communicator without process 0 */
    if (me != 0) {
        /* computation on the slave processes only */
        ...
        MPI_Reduce(send_buf, recv_buf, count, MPI_INT, MPI_SUM, 1, commslave);
        ...
    }
    /* all processes, rank 0 included, take part in this reduction */
    MPI_Reduce(send_buf2, recv_buf2, count2, MPI_INT, MPI_SUM, 0,
               MPI_COMM_WORLD);
    MPI_Comm_free(&commslave);
    MPI_Group_free(&MPI_GROUP_WORLD);
    MPI_Group_free(&grprem);
    /* release the group and communicator objects */
    MPI_Finalize();
}

Example 62: excluding process 0 with MPI_GROUP_EXCL and MPI_COMM_CREATE

15.4 Intercommunicators

MPI_COMM_TEST_INTER(comm, flag)
IN  comm  communicator (handle)
OUT flag  true if comm is an intercommunicator (logical)

int MPI_Comm_test_inter(MPI_Comm comm, int *flag)

MPI_COMM_TEST_INTER(COMM, FLAG, IERROR)
    INTEGER COMM, IERROR
    LOGICAL FLAG

MPI Call 97: MPI_COMM_TEST_INTER

MPI_COMM_TEST_INTER sets flag to true if comm is an intercommunicator and to false otherwise.

MPI_COMM_REMOTE_SIZE(comm, size)
IN  comm  intercommunicator (handle)
OUT size  number of processes in the remote group of comm (integer)

int MPI_Comm_remote_size(MPI_Comm comm, int *size)

MPI_COMM_REMOTE_SIZE(COMM, SIZE, IERROR)
    INTEGER COMM, SIZE, IERROR

MPI Call 98: MPI_COMM_REMOTE_SIZE

MPI_COMM_REMOTE_SIZE returns the size of the remote group of an intercommunicator.

MPI_COMM_REMOTE_GROUP(comm, group)
IN  comm   intercommunicator (handle)
OUT group  remote group of comm (handle)

int MPI_Comm_remote_group(MPI_Comm comm, MPI_Group *group)

MPI_COMM_REMOTE_GROUP(COMM, GROUP, IERROR)
    INTEGER COMM, GROUP, IERROR

MPI Call 99: MPI_COMM_REMOTE_GROUP

MPI_COMM_REMOTE_GROUP returns the remote group of an intercommunicator.

MPI_INTERCOMM_CREATE(local_comm,local_leader,peer_comm, remote_leader,tag,newintercomm ) IN local_comm ( ) IN local_leader ( ) IN peer_comm local_leader ( ) IN remote_leader peer_comm IN tag ( ) OUT newintercomm ( ) int MPI_Intercomm_create(MPI_Comm local_comm,int local_leader,mpi_comm peer_comm,int remote_leader,int tag,mpi_comm *newintercomm) MPI_INTERCOMM_CREATE(LOCAL_COMM,LOCAL_LEADER,PEER_COMM,REM OTE_LEADER,TAG,NEWINTERCOMM,IERROR) INTEGER LOCAL_COMM,LOCAL_LEADER,PEER_COMM,REMOTE_LEADER, TAG,NEWINTERCOMM,IERROR MPI 100 MPI_INTERCOMM_CREATE MPI_INTERCOMM_CREATE local_comm local_leader local_leader peer_comm remote_leader remote_leader tag tag MPI_WILD_TAG MPI_COMM_WORLD peer_comm MPI_INTERCOMM_MERGE(intercomm,high,newintracomm) IN intercomm ( ) IN high ( ) OUT newintracomm ( ) int MPI_Intercomm_merge(MPI_Comm intercomm,int high,mpi_comm *newintracomm) MPI_INTERCOMM_MERGE(INTERCOMM,HIGH,INTRACOMM,IERROR) INTEGER INTERCOMM,INTRACOMM,IERROR LOGICAL HIGH MPI 101 MPI_INTERCOMM_MERGE MPI_INTERCOMM_MERGE high high=true high=false true high 192

The following program (run with 9 processes, ranks 0 through 8) splits MPI_COMM_WORLD into three groups and builds intercommunicators between group 0 and group 1 (tag 1) and between group 1 and group 2 (tag 12):

main(int argc, char **argv)
{
    MPI_Comm myComm;        /* intra-communicator of the local subgroup */
    MPI_Comm myFirstComm;   /* first intercommunicator */
    MPI_Comm mySecondComm;  /* second intercommunicator (group 1 only) */
    int membershipKey;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    membershipKey = rank % 3;
    /* split MPI_COMM_WORLD by membershipKey */
    MPI_Comm_split(MPI_COMM_WORLD, membershipKey, rank, &myComm);
    /* with 9 processes this yields:

                   membershipKey=0  membershipKey=1  membershipKey=2
       new rank=0:        0                1                2
       new rank=1:        3                4                5
       new rank=2:        6                7                8
    */

    if (membershipKey == 0) {
        /* group 0 creates an intercommunicator with group 1 */
        MPI_Intercomm_create(myComm, 0, MPI_COMM_WORLD, 1,
                             1, &myFirstComm);
    }
    else if (membershipKey == 1) {
        /* group 1 creates intercommunicators with group 0 and group 2 */
        MPI_Intercomm_create(myComm, 0, MPI_COMM_WORLD, 0,
                             1, &myFirstComm);
        MPI_Intercomm_create(myComm, 0, MPI_COMM_WORLD, 2,
                             12, &mySecondComm);
    }
    else if (membershipKey == 2) {
        /* group 2 creates an intercommunicator with group 1 */
        MPI_Intercomm_create(myComm, 0, MPI_COMM_WORLD, 1,
                             12, &myFirstComm);
    }

... } switch(membershipkey)/* */ { case 1: MPI_COMM_free(&mySecondComm);/* 1 */ case 0: case 2: MPI_COMM_free(&myFirstComm); break; } MPI_Finalize(); 63 15.5 MPI Cache, MPI MPI_COMM_DUP * " " MPI * MPI_KEYVAL_CREATE(copy_fn,delte_fn,keyval,extra_state) IN copy_fn keyval IN delete_fn keyval OUT keyval OUT extra_state int MPI_Keyval_create(MPI_Copyfunction *copy_fn,mpi_delete_function *delete_fn,int *keyval,void* extra_state) MPI_KEYVAL_CREATE(COPY_FN,DELETE_FN,KEYVAL,EXTRA_STATE,IERROR) EXTERNAL COPY_FN,DELETE_FN INTEGER KEYVAL,EXTRA_STATE,IERROR MPI 102 MPI_KEYVAL_CREATE MPI_KEYVAL_CREATE,, 194

MPI_COMM_DUP, copy_fn copy_fn MPI_Copy_function, : typedef int MPI_Copy_function(MPI_Comm *oldcomm,int *keyval, void *extra_state,void *attribute_val_in, void **attribute_val_out,int *flag) Fortran : FUNCTION COPY_FUNCTION(OLDCOMM,KEYVAL,EXTRA_STATE,ATTRIBUTE_VAL_IN, ATTRIBUTE_VAL_OUT,FLAG) INTEGER OLDCOMM,KEYVAL,EXTRA_STATE,ATTRIBUTE_VAL_IN, ATTRIBUTE_VAL_OUT LOGICAL FLAG oldcomm flag=0 flag=1 attribute_val_out MPI_SUCCESS ( MPI_COMM_DUP ) copy_fn C FORTRAN MPI_NULL_COPY_FN MPI_DUP_FN MPI_NULL_COPY_FN flag=0 MPI_SUCCESS flag=1 MPI_DUP_FN attribute_val_out attribute_val_in MPI_SUCCESS copy_fn MPI_COMM_FREE MPI_ATTR_DELETE delete_fn delete_fn MPI_Delete_function C typedef int MPI_Delete_function(MPI_Comm *comm,int *keyval, void*attribute_val,void *extra_state) Fortran : FUNCTION DELETE_FUNCTION(COMM,KEYVAL,ATTRIBUTE_VAL,EXTRA_STATE) INTEGER COMM,KEYVAL,ATTRIBUTE_VAL,EXTRA_STATE MPI_COMM_FREE MPI_ATTR_DELETE MPI_ATTR_PUT C FORTRAN delete_fn MPI_NULL_DELETE_FN MPI_SUCCESS MPI_KEYVAL_FREE(keyval) IN keyval ( ) int MPI_Keyval_free(int *keyval) MPI_KEYVAL_FREE(KEYVAL,IERROR) INTEGER KEYVAL,IERROR MPI 103 MPI_KEYVAL_FREE MPI_KEYVAL_FREE keyval MPI_KEYVAL_INVALID 195

MPI_ATTR_PUT(comm, keyval, attribute_val)
  IN comm           communicator to which the attribute is attached (handle)
  IN keyval         key value, as returned by MPI_KEYVAL_CREATE (integer)
  IN attribute_val  attribute value

int MPI_Attr_put(MPI_Comm comm, int keyval, void *attribute_val)
MPI_ATTR_PUT(COMM, KEYVAL, ATTRIBUTE_VAL, IERROR)
    INTEGER COMM, KEYVAL, ATTRIBUTE_VAL, IERROR

MPI Call 104  MPI_ATTR_PUT

MPI_ATTR_PUT caches attribute_val on comm under the key keyval; the value can later be retrieved with MPI_ATTR_GET.

MPI_ATTR_GET(comm, keyval, attribute_val, flag)
  IN  comm           communicator (handle)
  IN  keyval         key value (integer)
  OUT attribute_val  attribute value, if one is cached
  OUT flag           whether an attribute is cached under keyval

int MPI_Attr_get(MPI_Comm comm, int keyval, void **attribute_val, int *flag)
MPI_ATTR_GET(COMM, KEYVAL, ATTRIBUTE_VAL, FLAG, IERROR)
    INTEGER COMM, KEYVAL, ATTRIBUTE_VAL, IERROR
    LOGICAL FLAG

MPI Call 105  MPI_ATTR_GET

MPI_ATTR_GET retrieves the attribute cached on comm under keyval. If no attribute is cached, flag=false is returned and attribute_val is undefined; otherwise flag=true and attribute_val holds the cached value.

MPI_ATTR_DELETE(comm, keyval)
  IN comm    communicator (handle)
  IN keyval  key value of the attribute to delete

int MPI_Attr_delete(MPI_Comm comm, int keyval)
MPI_ATTR_DELETE(COMM, KEYVAL, IERROR)
    INTEGER COMM, KEYVAL, IERROR

MPI Call 106  MPI_ATTR_DELETE

MPI_ATTR_DELETE removes the attribute cached under keyval, invoking the delete callback delete_fn registered with the key; the call fails if delete_fn returns an error.

MPI_SUCCESS MPI_COMM_DUP MPI_COMM_FREE PROGRAM MAIN include 'mpif.h' integer PM_MAX_TESTS parameter (PM_MAX_TESTS=3) integer PM_TEST_INTEGER, fuzzy, Error, FazAttr integer PM_RANK_SELF integer Faz_World parameter (PM_TEST_INTEGER=12345) logical FazFlag external FazCreate, FazDelete call MPI_INIT(PM_GLOBAL_ERROR) C C C C C C C PM_GLOBAL_ERROR = MPI_SUCCESS call MPI_COMM_SIZE (MPI_COMM_WORLD,PM_NUM_NODES, $ PM_GLOBAL_ERROR) call MPI_COMM_RANK (MPI_COMM_WORLD,PM_RANK_SELF, $ PM_GLOBAL_ERROR) call MPI_keyval_create ( FazCreate, FazDelete, FazTag, & fuzzy, Error ) call MPI_attr_get (MPI_COMM_WORLD, FazTag, FazAttr, & FazFlag, Error) if (FazFlag ) then print *, "True,get attr=",fazattr else print *, "False no attr" end if FazAttr = 120 call MPI_attr_put (MPI_COMM_WORLD, FazTag, FazAttr, Error) call MPI_Comm_Dup (MPI_COMM_WORLD, Faz_WORLD, Error) call MPI_Attr_Get ( Faz_WORLD, FazTag, FazAttr, & FazFlag, Error) if (FazFlag) then 197

print *, "True,dup comm get attr=",fazattr else print *,"error" end if call MPI_Comm_free( Faz_WORLD, Error ) C call MPI_FINALIZE (PM_GLOBAL_ERROR) end C C C SUBROUTINE FazCreate (comm, keyval, fuzzy, & attr_in, attr_out, flag, ierr ) INTEGER comm, keyval, fuzzy, attr_in, attr_out LOGICAL flag include 'mpif.h' attr_out = attr_in + 1 flag =.true. ierr = MPI_SUCCESS END C C C SUBROUTINE FazDelete (comm, keyval, attr, extra, ierr ) INTEGER comm, keyval, attr, extra, ierr include 'mpif.h' ierr = MPI_SUCCESS if (keyval.ne. MPI_KEYVAL_INVALID)then attr = attr - 1 end if END 64 15.6 MPI MPI MPI 198

16 Process topologies

16.1 Introduction

A virtual topology attaches a communication structure, such as a grid or a general graph, to the processes of a communicator. A topology can only be associated with an intra-communicator, not with an inter-communicator. Table 16 lists the MPI topology functions:

  MPI_CART_CREATE    MPI_GRAPH_CREATE
  MPI_CARTDIM_GET    MPI_GRAPHDIMS_GET
  MPI_CART_GET       MPI_GRAPH_GET
  MPI_CART_MAP       MPI_GRAPH_MAP

16.2 Cartesian topologies

MPI_CART_CREATE creates a new communicator whose processes carry a Cartesian topology. With reorder = false, each process keeps the rank it had in comm_old. ndims gives the number of dimensions, and dims[0], dims[1], ..., dims[ndims-1] the number of processes in each dimension; if the product dims[0]*dims[1]*...*dims[ndims-1] is smaller than the size of comm_old, the remaining processes receive MPI_COMM_NULL, in the same way as with MPI_COMM_SPLIT.

MPI_CART_CREATE(comm_old, ndims, dims, periods, reorder, comm_cart)
  IN  comm_old   input communicator
  IN  ndims      number of dimensions of the Cartesian grid
  IN  dims       integer array of size ndims: number of processes in each dimension
  IN  periods    logical array of size ndims: whether each dimension is periodic
  IN  reorder    whether ranks may be reordered
  OUT comm_cart  communicator with the new Cartesian topology

int MPI_Cart_create(MPI_Comm comm_old, int ndims, int *dims, int *periods, int reorder, MPI_Comm *comm_cart)
MPI_CART_CREATE(COMM_OLD, NDIMS, DIMS, PERIODS, REORDER, COMM_CART, IERROR)
    INTEGER COMM_OLD, NDIMS, DIMS(*), COMM_CART, IERROR
    LOGICAL PERIODS(*), REORDER

MPI Call 107  MPI_CART_CREATE

MPI_DIMS_CREATE(nnodes, ndims, dims)
  IN    nnodes  number of nodes in the grid
  IN    ndims   number of dimensions
  INOUT dims    integer array of size ndims giving the number of processes in each dimension

int MPI_Dims_create(int nnodes, int ndims, int *dims)
MPI_DIMS_CREATE(NNODES, NDIMS, DIMS, IERROR)
    INTEGER NNODES, NDIMS, DIMS(*), IERROR

MPI Call 108  MPI_DIMS_CREATE

MPI_DIMS_CREATE helps the user choose a balanced distribution of nnodes processes over ndims dimensions; the resulting dims array can then be passed to MPI_CART_CREATE. For any dimension i where dims[i] = k > 0 on entry, dims[i] is left unchanged; only the entries with dims[i] = 0 are filled in by the call.

MPI_TOPO_TEST returns in STATUS the topology type of a communicator: MPI_GRAPH, MPI_CART, or MPI_UNDEFINED if no topology has been attached.

MPI_TOPO_TEST(comm, status) IN comm OUT status comm ( ) int MPI_Topo_test(MPI_Comm comm, int *status) MPI_TOPO_TEST(COMM, STATUS, IERROR) INTEGER COMM, STATUS, IERROR MPI 109 MPI_TOPO_TEST MPI_CART_GET(comm, maxdims, dims, periods, coords) IN comm IN maxdims OUT dims OUT periods OUT coords int MPI_Cart_get(MPI_Comm comm, int maxdims, int *dims, int *periods, int *coords) MPI_CART_GET(COMM, MAXDIMS, DIMS, PERIODS, COORDS, IERROR) INTEGER COMM, MAXDIMS, DIMS(*), COORDS(*), IERROR LOGICAL PERIODS(*) MPI 110 MPI_CART_GET MPI_CART_GET dims periods coords MPI_CART_RANK(comm, coords, rank) IN comm IN coords OUT rank int MPI_Cart_rank(MPI_Comm comm, int *coords, int *rank) MPI_CART_RANK(COMM, COORDS, RANK, IERROR) INTEGER COMM, COORDS(*), RANK, IERROR MPI 111 MPI_CART_RANK MPI_CART_RANK MPI_COMM_RANK 201

MPI_CARTDIM_GET(comm, ndims) IN comm OUT ndims int MPI_Cartdim_get(MPI_Comm comm, int *ndims) MPI_CARTDIM_GET(COMM, NDIMS, IERROR) INTEGER COMM, NDIMS, IERROR MPI 112 MPI_CARTDIM_GET MPI_CARTDIM_GET comm ndims MPI_CART_SHIFT(comm, direction, disp, rank_source, rank_dest) IN comm IN direction IN disp OUT rank_source OUT rank_dest int MPI_Cart_shift(MPI_Comm comm, int direction, int disp, int *rank_source, int *rank_dest) MPI_CART_SHIFT(COMM, DIRECTION, DISP, RANK_SOURCE, RANK_DEST, IERROR) INTEGER COMM, DIRECTION, DISP, RANK_SOURCE, RANK_DEST, IERROR MPI 113 MPI_CART_SHIFT MPI_CART_SHIFT comm rank_source direction disp rank_dest rank_source rank_dest MPI_PROC_NULL MPI_CART_COORDS(comm, rank, maxdims, coords) IN comm IN rank IN maxdims OUT coords int MPI_Cart_coords(MPI_Comm comm, int rank, int maxdims, int *coords) MPI_CART_COORDS(COMM, RANK, MAXDIMS, COORDS, IERROR) INTEGER COMM, RANK, MAXDIMS, COORDS(*), IERROR MPI 114 MPI_CART_COORDS 202

MPI_CART_COORDS rank coords maxdims MPI_CART_SUB(comm, remain_dims, newcomm) IN comm IN remain_dims OUT newcomm int MPI_Cart_sub(MPI_Comm com, int *remain_dims, MPI_Comm *newcomm) MPI_CART_SUB(COMM, REMAIN_DIMS, NEWCOMM, IERROR) INTEGER COMM, NEWCOMM, IERROR LOGICAL REMAIN_DIMS(*) MPI 115 MPI_CART_SUB MPI_CART_SUB remain_dims remain_dims[i] true remain_dims[i] false comm 2 3 4 remain_dims= false,true,true <1,1,1><1,1,2><1,1,3><1,1,4> <1,1><1,2><1,3><1,4> <1,2,1><1,2,2><1,2,3><1,2,4> <2,1><2,2><2,3><2,4> <1,3,1><1,3,2><1,3,3><1,3,4> <3,1><3,2><3,3><3,4> 1 <2,1,1><2,1,2><2,1,3><2,1,4> <1,1><1,2><1,3><1,4> <2,2,1><2,2,2><2,2,3><2,2,4> <2,1><2,2><2,3><2,4> <2,3,1><2,3,2><2,3,3><2,3,4> <3,1><3,2><3,3><3,4> 2 73 203

MPI_CART_MAP IN comm IN ndims IN dims IN periods comm, ndims, dims, periods, newrank ndims ndims OUT newrank int MPI_Cart_map(MPI_comm comm, int ndims, int * dims, int * periods, int *newrank) MPI_CART_MAP(COMM, NDIMS, DIMS, PERIODS, NEWRANK, IERROR) INTEGER COMM, NDIMS, DIMS(*), NEWRANK, IERROR LOGICAL PERIODS(*) MPI 116 MPI_CART_MAP ndims dims MPI_CART_MAP newrank MPI_UNDEFINED #include <stdio.h> #include "mpi.h" int main( argc, argv ) int argc; char **argv; { int rank, value, size, false=0; int right_nbr, left_nbr; MPI_Comm ring_comm; MPI_Status status; MPI_Init( &argc, &argv ); MPI_Comm_size( MPI_COMM_WORLD, &size ); MPI_Cart_create( MPI_COMM_WORLD, 1, &size, &false, 1, &ring_comm );/* false MPI_PROC_NULL 1 MPI_PROC_NULL, 0, 1,..., size-1, MPI_PROC_NULL */ MPI_Cart_shift( ring_comm, 0, 1, &left_nbr, &right_nbr );/* */ MPI_Comm_rank( ring_comm, &rank );/* */ 204

MPI_Comm_size( ring_comm, &size );/* number of processes in the ring */
    do {
        if (rank == 0) {/* process 0 reads a value and starts it around the ring */
            scanf( "%d", &value );
            MPI_Send( &value, 1, MPI_INT, right_nbr, 0, ring_comm );/* send to the right neighbor */
        }
        else {
            MPI_Recv( &value, 1, MPI_INT, left_nbr, 0, ring_comm, &status );/* receive from the left neighbor */
            MPI_Send( &value, 1, MPI_INT, right_nbr, 0, ring_comm );/* pass it on to the right neighbor */
        }
        printf( "Process %d got %d\n", rank, value );/* print the value received */
    } while (value >= 0);/* a negative value terminates the loop */
    MPI_Finalize( );
}

Program 65  Passing a value along a chain of processes

16.3 Graph topologies

MPI_GRAPH_CREATE builds a communicator with a general graph topology, described by nnodes, index and edges. If reorder = false, each process keeps the rank it had in comm. If the group of comm has more than nnodes processes, the extra processes receive MPI_COMM_NULL, as with MPI_COMM_SPLIT. The nnodes processes are numbered 0 to nnodes-1. Using C conventions, index[i] stores the total number of neighbors of nodes 0 through i, so node 0 has index[0] neighbors, node 1 has index[1]-index[0], and node i (1 <= i <= nnodes-1) has index[i]-index[i-1]; the neighbor lists of node 0, node 1, ..., node nnodes-1 are stored consecutively in edges.

MPI_GRAPH_CREATE(comm_old, nnodes, index, edges, reorder, comm_graph) IN comm_old IN nnodes IN index IN edges IN reorder OUT comm_graph int MPI_Graph_create(MPI_Comm comm_old, int nnodes, int *index, int *edges, int reorder, MPI_Comm *comm_graph) MPI_GRAPH_CREATE(COMM_OLD, NNODES, INDEX, EDGES, REORDER, COMM_GRAPH,IERROR) INTEGER COMM_OLD, NNODES, INDEX(*), EDGES(*), COMM_GRAPH, IERROR LOGICAL REORDER MPI 117 MPI_GRAPH_CREATE 0 1 3 2 74 17 0 2 1 3 1 1 0 2 1 3 3 2 0 2 nnodes, index edges 18 nnodes = 4 index = 2, 3, 4, 6 edges = 1, 3, 0, 3, 0, 2 206

, C, index[0] 0, index[i] - index[i-1] i, i=1,..., nnodes-1; 0 edges[j], 0 j index[0]-1, i, i > 0, edges[j], index[i-1] j index[i]-1 MPI_GRAPHDIMS_GET(comm, nnodes, nedges) IN comm OUT nnodes OUT nedges int MPI_Graphdims_get(MPI_Comm comm, int *nnodes, int *nedges) MPI_GRAPHDIMS_GET(COMM NNODES, NEDGES, IERROR) INTEGER COMM, NNODES, NEDGES, IERROR MPI 118 MPI_GRAPHDIMS_GET MPI_GRAPHDIMS_GET comm nnodes nedges MPI_GRAPH_GET(comm, maxindex, maxedges, index, edges) IN comm IN maxindex index IN maxedges edges OUT index OUT edges int MPI_Graph_get(MPI_Comm comm, int maxindex, int maxedges, int *index,int *edges) MPI_GRAPH_GET(COMM, MAXINDEX, MAXEDGES, INDEX, EDGES, IERROR) INTEGER COMM, MAXINDEX, MAXEDGES, INDEX(*),EDGES(*), IERROR MPI 119 MPI_GRAPH_GET MPI_GRAPH_GET index edges MPI_GRAPH_NEIGHBORS_COUNT(comm, rank, nneighbors) IN comm IN rank comm OUT nneighbors int MPI_Graph_neighbors_count(MPI_Comm comm, int rank, int *nneighbors) MPI_GRAPH_NEIGHBORS_COUNT(COMM, RANK, NNEIGHBORS, IERROR) INTEGER COMM, RANK, NNEIGHBORS, IERROR MPI 120 MPI_GRAPH_NEIGHBORS_COUNT 207

MPI_GRAPH_NEIGHBORS_COUNT returns in nneighbors the number of neighbors of the process with rank rank.

MPI_GRAPH_NEIGHBORS(comm, rank, maxneighbors, neighbors)
  IN  comm          communicator with a graph topology
  IN  rank          rank of a process in comm
  IN  maxneighbors  size of array neighbors
  OUT neighbors     ranks of the processes that are neighbors of rank

int MPI_Graph_neighbors(MPI_Comm comm, int rank, int maxneighbors, int *neighbors)
MPI_GRAPH_NEIGHBORS(COMM, RANK, MAXNEIGHBORS, NEIGHBORS, IERROR)
    INTEGER COMM, RANK, MAXNEIGHBORS, NEIGHBORS(*), IERROR

MPI Call 121  MPI_GRAPH_NEIGHBORS

MPI_GRAPH_NEIGHBORS returns in neighbors the ranks of the neighbors of process rank.

MPI_GRAPH_MAP(comm, nnodes, index, edges, newrank)
  IN  comm     input communicator
  IN  nnodes   number of graph nodes
  IN  index    integer array describing node degrees
  IN  edges    integer array describing graph edges
  OUT newrank  reordered rank of the calling process

int MPI_Graph_map(MPI_Comm comm, int nnodes, int *index, int *edges, int *newrank)
MPI_GRAPH_MAP(COMM, NNODES, INDEX, EDGES, NEWRANK, IERROR)
    INTEGER COMM, NNODES, INDEX(*), EDGES(*), NEWRANK, IERROR

MPI Call 122  MPI_GRAPH_MAP

MPI_GRAPH_MAP is the graph counterpart of MPI_CART_MAP: given the graph described by nnodes, index and edges, MPI computes a placement and returns in newrank the new rank of the calling process, or MPI_UNDEFINED if the process does not belong to the topology.

16.4 An application of topologies: the Jacobi iteration

This section applies the topology functions to the Jacobi iteration introduced earlier. The program below is written in C; a FORTRAN version would be analogous.

The 256 x 256 array is distributed over 2 x 2 processes in (BLOCK, BLOCK) fashion, so each process owns a 128 x 128 block (Figure 75):

  process (0,0): A[0:127][0:127]       process (0,1): A[0:127][128:255]
  process (1,0): A[128:255][0:127]     process (1,1): A[128:255][128:255]

To update the points on the edge of its block, a process needs one row or column of data owned by each neighboring process, so the local array is allocated with an extra border row or column along each side (Figure 76).

(0,0) (0,1) (1,0) (1,1) 77 #include "mpi.h" #define arysize 256 #define arysize2 (arysize/2) int main(int argc, char *argv[]) { int n, myid, numprocs, i, j, nsteps=10; float a[arysize2+2][arysize2+2],b[arysize2+2][arysize2+2];/* */ double starttime,endtime; int col_tag,row_tag,send_col,send_row,recv_col,recv_row; int col_neighbor,row_neighbor; MPI_Comm comm2d; MPI_Datatype newtype; int right,left,down,top,top_bound,left_bound,down_bound,right_bound; int periods[2]; int dims[2],begin_row,end_row; MPI_Status status; MPI_Init(&argc,&argv); dims[0] = 2; dims[1] = 2; periods[0]=0; periods[1]=0; MPI_Cart_create( MPI_COMM_WORLD, 2, dims, periods, 0,&comm2d);/* 2 2 comm2d*/ 210

MPI_Comm_rank(comm2d,&myid);
    MPI_Type_vector( arysize2, 1, arysize2+2, MPI_FLOAT, &newtype);/* column data type for the vertical border exchange */
    MPI_Type_commit( &newtype );/* commit the new data type */
    MPI_Cart_shift( comm2d, 0, 1, &left, &right);/* horizontal neighbors */
    MPI_Cart_shift( comm2d, 1, 1, &down, &top);/* vertical neighbors */
    /* initialize the local array, including the fixed boundary values */
    for(i=0;i<arysize2+2;i++)
        for(j=0;j<arysize2+2;j++)
            a[i][j]=0.0;
    if (top == MPI_PROC_NULL) {
        for ( i=0;i<arysize2+2;i++) a[1][i]=8.0;
    }
    if (down == MPI_PROC_NULL) {
        for ( i=0;i<arysize2+2;i++) a[arysize2][i]=8.0;
    }
    if (left == MPI_PROC_NULL) {
        for ( i=0;i<arysize2+2;i++) a[i][1]=8.0;
    }
    if (right == MPI_PROC_NULL) {
        for ( i=0;i<arysize2+2;i++) a[i][arysize2]=8.0;
    }
    col_tag = 5; row_tag = 6;
    printf("Laplace Jacobi#C(BLOCK,BLOCK)#myid=%d#step=%d#total arysize=%d*%d\n",myid,nsteps,arysize,arysize);
    top_bound=1; left_bound=1; down_bound=arysize2; right_bound=arysize2;
    if (top == MPI_PROC_NULL) top_bound=2;
    if (left == MPI_PROC_NULL) left_bound=2;
    if (down == MPI_PROC_NULL) down_bound=arysize2-1;
    if (right == MPI_PROC_NULL) right_bound=arysize2-1;
    starttime=MPI_Wtime();
    for (n=0; n<nsteps; n++) {
        MPI_Sendrecv( &a[1][1], arysize2, MPI_FLOAT, top, row_tag, &a[arysize2+1][1], arysize2, MPI_FLOAT, down, row_tag, comm2d, &status );/* exchange the top border row */

MPI_Sendrecv( &a[arysize2][1], arysize2, MPI_FLOAT, down, row_tag, &a[0][1], arysize2, MPI_FLOAT, top, row_tag, comm2d, &status );/* exchange the bottom border row */
        MPI_Sendrecv( &a[1][1], 1, newtype, left, col_tag, &a[1][arysize2+1], 1, newtype, right, col_tag, comm2d, &status );/* exchange the left border column */
        MPI_Sendrecv( &a[1][arysize2], 1, newtype, right, col_tag, &a[1][0], 1, newtype, left, col_tag, comm2d, &status );/* exchange the right border column */
        for ( i=left_bound;i<right_bound;i++)
            for (j=top_bound;j<down_bound;j++)
                b[i][j] = (a[i][j+1]+a[i][j-1]+a[i+1][j]+a[i-1][j])*0.25;
        for ( i=left_bound;i<right_bound;i++)
            for (j=top_bound;j<down_bound;j++)
                a[i][j] = b[i][j];
    }
    endtime=MPI_Wtime();
    printf("elapse time=%f\n",endtime-starttime);
    MPI_Type_free( &newtype );
    MPI_Comm_free( &comm2d );
    MPI_Finalize();
}

Program 66  Jacobi iteration on a 2 x 2 Cartesian process grid

16.5 Summary

This chapter introduced MPI's virtual process topologies, Cartesian and general graph, together with the functions that create, query and map them.

17 MPI error handling

This chapter describes how MPI reports errors and how a program can control what happens when an error occurs.

17.1 Error handlers

MPI_ERRHANDLER_CREATE(function, errhandler)
  IN  function    user routine to be used as an MPI error handler
  OUT errhandler  MPI error handler (handle)

int MPI_Errhandler_create(MPI_Handler_function *function, MPI_Errhandler *errhandler)
MPI_ERRHANDLER_CREATE(FUNCTION, ERRHANDLER, IERROR)
    EXTERNAL FUNCTION
    INTEGER ERRHANDLER, IERROR

MPI Call 123  MPI_ERRHANDLER_CREATE

MPI_ERRHANDLER_CREATE registers the user routine function as an MPI error handler and returns a handle to it in errhandler. In C, MPI_Handler_function is defined as:

typedef void (MPI_Handler_function)(MPI_Comm *, int *, ...);

The first argument is the communicator on which the error occurred and the second is the error code; any further arguments are implementation dependent.

MPI_ERRHANDLER_SET(comm, errhandler)
  IN comm        communicator (handle)
  IN errhandler  MPI error handler (handle)

int MPI_Errhandler_set(MPI_Comm comm, MPI_Errhandler errhandler)
MPI_ERRHANDLER_SET(COMM, ERRHANDLER, IERROR)
    INTEGER COMM, ERRHANDLER, IERROR

MPI Call 124  MPI_ERRHANDLER_SET

MPI_ERRHANDLER_SET attaches the error handler errhandler to communicator comm.

MPI_ERRHANDLER_GET(comm, errhandler)
  IN  comm        communicator (handle)
  OUT errhandler  MPI error handler currently associated with comm (handle)

int MPI_Errhandler_get(MPI_Comm comm, MPI_Errhandler *errhandler)
MPI_ERRHANDLER_GET(COMM, ERRHANDLER, IERROR)
    INTEGER COMM, ERRHANDLER, IERROR

MPI Call 125  MPI_ERRHANDLER_GET

MPI_ERRHANDLER_GET returns in errhandler the error handler currently attached to comm.

MPI_ERRHANDLER_FREE(errhandler)
  IN errhandler  MPI error handler to free (handle)

int MPI_Errhandler_free(MPI_Errhandler *errhandler)
MPI_ERRHANDLER_FREE(ERRHANDLER, IERROR)
    INTEGER ERRHANDLER, IERROR

MPI Call 126  MPI_ERRHANDLER_FREE

MPI_ERRHANDLER_FREE marks the error handler for deallocation and sets errhandler to MPI_ERRHANDLER_NULL.

MPI_ERROR_STRING(errorcode, string, resultlen)
  IN  errorcode  error code returned by an MPI call
  OUT string     text corresponding to errorcode
  OUT resultlen  length of string

int MPI_Error_string(int errorcode, char *string, int *resultlen)
MPI_ERROR_STRING(ERRORCODE, STRING, RESULTLEN, IERROR)
    INTEGER ERRORCODE, RESULTLEN, IERROR
    CHARACTER *(*) STRING

MPI Call 127  MPI_ERROR_STRING

MPI_ERROR_STRING returns in string the message text associated with an error code; the buffer string must be at least MPI_MAX_ERROR_STRING characters long.

MPI_ERROR_CLASS(errorcode, errorclass)
  IN  errorcode   error code returned by an MPI call
  OUT errorclass  error class that errorcode belongs to

int MPI_Error_class(int errorcode, int *errorclass)
MPI_ERROR_CLASS(ERRORCODE, ERRORCLASS, IERROR)
    INTEGER ERRORCODE, ERRORCLASS, IERROR

MPI Call 128  MPI_ERROR_CLASS

MPI_ERROR_CLASS maps an implementation-specific error code onto one of the standard error classes of Table 19:

  MPI_SUCCESS       no error
  MPI_ERR_BUFFER    invalid buffer pointer
  MPI_ERR_COUNT     invalid count argument
  MPI_ERR_TYPE      invalid datatype argument
  MPI_ERR_TAG       invalid tag argument
  MPI_ERR_COMM      invalid communicator
  MPI_ERR_RANK      invalid rank
  MPI_ERR_REQUEST   invalid request handle
  MPI_ERR_ROOT      invalid root
  MPI_ERR_GROUP     invalid group
  MPI_ERR_OP        invalid operation
  MPI_ERR_TOPOLOGY  invalid topology
  MPI_ERR_DIMS      invalid dimension argument
  MPI_ERR_ARG       invalid argument of some other kind
  MPI_ERR_UNKNOWN   unknown error
  MPI_ERR_TRUNCATE  message truncated on receive
  MPI_ERR_OTHER     known error not in this list
  MPI_ERR_INTERN    internal (implementation) error
  MPI_ERR_LASTCODE  last error code

Error classes are ordered so that 0 = MPI_SUCCESS and every MPI_ERR_* class is at most MPI_ERR_LASTCODE.

17.2 Summary

This chapter described MPI's error-handling mechanism: user-defined error handlers attached to communicators, and the translation of error codes into error classes and message strings.

18 MPI MPI / 18.1 MPI-1 C int MPI_Abort(MPI_Comm comm, int errorcode) MPI MPI int MPI_Address(void * location, MPI_Aint * address) MPI_Get_address int MPI_Allgather(void * sendbuff, int sendcount, MPI_Datatype sendtype, void * recvbuf, int * recvcounts, int * displs, MPI_Datatype recvtype, MPI_Comm comm) MPI_Gather int MPI_Allgatherv(void * sendbuff, int sendcount, MPI_Datatype sendtype, void * recvbuf, int recvcounts, int * displs, MPI_Datatype recvtype, MPI_Comm comm) MPI_Gatherv int MPI_Allreduce(void * sendbuf, void * recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm) MPI_Reduce int MPI_Alltoall(void * sendbuf, void * recvbuf, int count, MPI_Datatype datatype, void * recvbuf, int * recvcounts, int * rdispls, MPI_Datatype recvtype, MPI_Comm comm) int MPI_Alltoallv(void * sendbuf, int * sendcount, int * sdispls, MPI_Datatype sendtype, void * recvbuf, int * recvcounts, int * rdispls, MPI_Datatype recvtype, MPI_Comm comm), Int MPI_Attr_delete(MPI_Comm comm, int keyval) MPI_Comm_delete_attr int MPI_Attr_get(MPI_Comm comm, int keyval, void * attribute_val, int * flag) MPI_Comm_get_attr int MPI_Attr_put(MPI_Comm comm, int keyval, void * attribute_val) MPI_Comm_set_attr int MPI_Barrier(MPI_Comm comm) int MPI_Bcast(void * buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm) root int MPI_Bsend(void * buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm) 216

int MPI_Bsend_init(void * buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request * request) int MPI_Buffer_attach(void * buffer, int size) int MPI_Buffer_detach(void * buffer, int * size) int MPI_Cancel(MPI_Request * request) int MPI_Cart_coords(MPI_Comm comm, int rank, int maxdims, int * coords) int MPI_Cart_create(MPI_Comm comm_old, int ndims, int * dims, int * periods, int reorder, MPI_Comm * comm_cart ) int MPI_Cart_get(MPI_Comm comm, int maxdims, int * dims, int *periods, int * coords) int MPI_Cart_map(MPI_Comm comm, int * ndims, int * periods, int * newrank) int MPI_Cart_rank(MPI_Comm comm, int * coords, int * rank) int MPI_Cart_shift(MPI_Comm comm, int direction, int disp, int * rank_source, int * rank_dest) int MPI_Cart_sub(MPI_Comm comm, int * remain_dims, MPI_Comm * newcomm) int MPI_Cartdim_get(MPI_Comm comm, int* ndims) int MPI_Comm_compare(MPI_comm comm1, MPI_Comm comm2, int * result) int MPI_Comm_create(MPI_Comm comm, MPI_Group group, MPI_Comm * newcomm) Int MPI_Comm_dup(MPI_Comm comm, MPI_Comm *new_comm) int MPI_Comm_free(MPI_Comm* comm) int MPI_Comm_group(MPI_Comm comm, MPI_Group * group) int MPI_Comm_rank(MPI_Comm comm, int * rank) int MPI_Comm_remote_group(MPI_Comm comm, MPI_Group * group) int MPI_Comm_remote_size(MPI_Comm comm, int * size) int MPI_Comm_set_attr(MPI_Comm comm, int keyval, void * attribute_val) 217

int MPI_Comm_size(MPI_Comm comm, int * size) int MPI_Comm_split(MPI_Comm comm, int color, int key, MPI_Comm * newcomm) int MPI_Comm_test_inter(MPI_Comm comm, int * flag) int MPI_Dims_create(int nnodes, int ndims, int * dims) int MPI_Errhandler_create(MPI_handler_function * function, MPI_Errhandler * errhandler) MPI MPI_Comm_create_errhandler int MPI_Errhandler_free(MPI_Errhandler * errhandler) MPI int MPI_Errhandler_get(MPI_Comm comm, MPI_Errhandler * errhandler) MPI_Comm_get_errhandler int MPI_Errhandler_set(MPI_Comm comm, MPI_Errhandler errhandler) MPI MPI_Comm_set_errhandler int MPI_Error_class(int errorcode, int * errorclass) int MPI_Error_string(int errorcode, char * string, int * resultlen) int MPI_Finalize(void) MPI int MPI_Gather(void * sendbuf, int sendcount, MPI_Datatype sendtype, void * recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm) int MPI_Gatherv(void * sendbuf, int sendcount, MPI_Datatype sendtype, void * recvbuf, int * recvcounts, int * displs, MPI_Datatype recvtype, int root, MPI_Comm comm) int MPI_Get_count(MPI_Status * status, MPI_Datatype datatype, int * count) int MPI_Get_elements(MPI_Statue * status, MPI_Datatype datatype, int * elements) int MPI_Get_processor_name(char * name, int * resultlen) int MPI_Get_version(int * version, int * subversion) MPI int MPI_Graph_create(MPI_Comm comm_old, int nnodes, int * index, int * edges, int reorder, MPI_Comm * comm_graph) int MPI_Graph_get(MPI_Comm comm, int maxindex, int maxedges, int * index, int * edges) int MPI_Graph_map(MPI_Comm comm, int nnodes, int * index, int * edges, int * newrank) 218

int MPI_Graph_neighbors_count(MPI_Comm comm, int rank, int * nneighbors) int MPI_Graph_neighbors(MPI_Comm comm, int rank, int * maxneighbors, int * neighbors) int MPI_Graphdims_Get(MPI_Comm comm, int * nnodes, int * nedges) int MPI_Group_compare(MPI_Group group1, MPI_Group group2, int * result) int MPI_Group_diffenence(MPI_Group group1, MPI_Group group2, MPI_Group * newgroup) int MPI_Group_excl(MPI_Group group, int n, int * ranks, MPI_Group * newgroup), int MPI_Group_free(MPI_Group * group) int MPI_Group_incl(MPI_Group group, int n, int * ranks, MPI_Group * newgroup), int MPI_Group_intersection(MPI_Group group1, MPI_Group group2, MPI_Group * newgroup) int MPI_Group_range_excl(MPI_Group group, int n, int ranges[][3], MPI_group * newgroup),, int MPI_Group_range_incl(MPI_Group group, int n, int ranges[][3], MPI_Group * newgroup),, int MPI_Group_rank(MPI_Group group, int * rank) int MPI_Group_size(MPI_Group group, int * size) int MPI_Group_translate_ranks(MPI_Group group1, int n, int * ranks1, MPI_Group group2, int * ranks2) int MPI_Group_union(MPI_Group group1, MPI_Group group2, MPI_Group * newgroup) int MPI_Ibsend(void * buf, int count, MPI_Datatype datatype, int dest, int tga, MPI_Comm comm, MPI_Request * request) int MPI_Init(int * argc, char *** argv) MPI Int MPI_Initialized(int * flag) MPI_Init int MPI_Intercomm_create(MPI_Comm local_comm, int local_leader, MPI_Comm peer_comm, int remote_leader, int tag, MPI_Comm * newintercomm) int MPI_Intercomm_merge(MPI_Comm intercomm, int high, MPI_Comm * newintracomm) int MPI_Iprobe(int source, int tag, MPI_Comm comm, int * flag, MPI_Status * status) 219

int MPI_Irecv(void * buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request * request) int MPI_Irsend(viud * buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request * request) int MPI_Isend(void * buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request * request) int MPI_Issend(void * buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request * request) int MPI_Keyval_create(MPI_Copy_function * copy_fn, MPI_Delete_function * delete_fn, int * keyval, void * extra_state), MPI_Comm_create_keyval int MPI_Keyval_free(int * keyval) int MPI_Op_create(MPI_Uop function, int commute, MPI_Op * op) int MPI_Op_free(MPI_Op * op) int MPI_Pack(void * inbuf, int incount, MPI_Datatype datetype, void * outbuf, int outcount, int * position, MPI_Comm comm), int MPI_Pack_size(int incount, MPI_Datatype datatype, MPI_Comm comm, int * size) int MPI_Pcontrol(const int level) int MPI_Probe(int source, int tag, MPI_Comm comm, MPI_Status * status) int MPI_Recv(void * buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status * status) int MPI_Recv_init(void * buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request * request) int MPI_Reduce(void * sendbuf, void * recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm) root, int MPI_Reduce_scatter(void * sendbuf, void * recvbuf, int * recvcounts, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm) int MPI_Request_free(MPI_Request * request) 220

int MPI_Rsend(void * buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm) int MPI_Rsend_init(void * buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request * request) int MPI_Scan(void * sendbuf, void * recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm) int MPI_Scatter(void * sendbuf, int sendcount, MPI_Datatype sendtype, void * recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm) int MPI_Scatterv(void * sendbuf, int * sendcounts, int * displs, MPI_Datatype sendtype, void * recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm) int MPI_Send(void * buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm) int MPI_Send_init(void * buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request * request) int MPI_Sendrecv(void * sendbuf, int sendcount, MPI_Datatype sendtype, int dest, int sendtag, void * recvbuf, int recvcount, MPI_Datatype recvtype, int source, int recvtag, MPI_Comm comm, MPI_Status * status) int MPI_Sendrecv_replace(void * buf, int count, MPI_Datatype datatype, int dest, int sendtag, int source, int recvtag, MPI_Comm comm, MPI_Status * status) int MPI_Ssend(void * buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm) int MPI_Ssend_init(void * buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request * request) int MPI_Start(MPI_Request * request) int MPI_Startall(int count, MPI_Request * array_of_requests) int MPI_Test(MPI_Request * request, int * flag, MPI_Status * status) int MPI_Testall(int count, MPI_Request * array_of_requests, int * flag, MPI_Status * array_of_statuses) int MPI_Testany(int count, MPI_Request * array_of_requests, int * index, int * flag, MPI_Status * status) 221

int MPI_Testsome(int incount, MPI_Request * array_of_requests, int * outcount, int * array_of_indices, MPI_Status * array_of_statuses) int MPI_Test_cancelled(MPI_Status * status, int * flag) int MPI_Topo_test(MPI_Comm comm, int * top_type) int MPI_Type_commit(MPI_Datatype * datatype) int MPI_Type_contiguous(int count, MPI_Datatype oldtype, MPI_Datatype * newtype) int MPI_Type_extent(MPI_Datatype datatype, MPI_Aint * extent), MPI_type_get_extent int MPI_Type_free(MPI_Datatype * datatype) int MPI_Type_hindexed(int count, int * array_of_blocklengths, MPI_Aint * array_of_displacements, MPI_Datatype oldtype, MPI_Datatype * newtype),, MPI_type_create_hindexed int MPI_Type_hvector(int count, int blocklength, MPI_Aint stride, MPI_Datatype oldtype, MPI_Datatype * newtype),, MPI_type_create_hvector int MPI_Type_indexed(int cont, int * array_of_blocklengths, int * array_of_displacements, MPI_Datatype oldtype, MPI_Datatype * newtype) int MPI_Type_lb(MPI_Datatype datatype, MPI_Aint * displacement), MPI_type_get_extent int MPI_Type_size(MPI_Datatype datatype, int * size), int MPI_Type_struct(int count, int * array_of_blocklengths, MPI_Aint * array_of_displacements, MPI_Datatype * array_of_types, MPI_Datatype * newtype), MPI_type_create_struct int MPI_Type_ub(MPI_Datatype datatype, MPI_Aint * displacement), MPI_type_get_extent int MPI_Type_vector(int count, int blocklength, int stride, MPI_Datatype oldtype, MPI_Datatype * newtype) int MPI_Unpack(void * inbuf, int insize, int * position, void * outbuf, int outcount, MPI_Datatype datatype, MPI_Comm comm) int MPI_Wait(MPI_Request * request, MPI_Status * status) MPI int MPI_Waitall(int count, MPI_Request * array_of_requests, MPI_Status * array_of_status) 222

int MPI_Waitany(int count, MPI_Request *array_of_requests, int *index,mpi_status *status) int MPI_Waitsome(int incount, MPI_Request * array_pf_requests, int * outcount, int * array_of_indices, MPI_Status * array_of_statuses) double MPI_Wtick(void) MPI_Wtime double MPI_Wtime(void) 18.2 MPI-1 Fortran MPI_Abort(comm, errorcode, ierror) integer comm, errorcode, ierror MPI MPI MPI_Address(location, address, eerror) <type>location integer address, ierror MPI_Get_address MPI_Allgather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, ierror ) <type> sendbuf(*), recvbuf(*) integer sendcount, sendtype, recvcount, recvtype, comm, ierror MPI_Gather MPI_Allgatherv(sendbuf, sendcount, sendtype, recvbuf, recbcounts, displs, recvtype, comm, ierror) <type>sendbuf(*), recvbuf(*) integer sendcount, sendtype, recvcounts(*), displs(*), recvtype, comm, ierror MPI_Gatherv MPI_Allreduce(sendbuf, recvbuf, count, datatype, op, comm, ierror) <type> sendbuf(*), recvbuf(*) integer count, datatype, op, comm, ierror MPI_Reduce MPI_Alltoall(sendbuf, sendcount, sendtyupe, recvbuf, recvcount, recvtype, comm, ierror) <type> sendbuf(*), recvbuf(*) integer sendcount, sendtype, recvcount, recvtype, comm, ierror MPI_Alltoallv(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, ierror) <type> sendbuf(*), recvbuf(*) integer sendcounts(*), sdispls(*), sendtype, recvcounts(*), rdispls(*), recvtype, comm, ierror, MPI_Attr_delete(comm, keyval, ierror) 223

integer comm, keyval, ierror MPI_Comm_delete_attr MPI_Attr_get(comm, keyval, attribute_val, flag, ierror) integer comm, keyval, attribute_val, ierror Logical flag MPI_Comm_get_attr MPI_Attr_put(comm, keyval, attribute_val, ierror) integer comm, keyval, attribute_val, ierror MPI_Comm_set_attr MPI_Barrier(comm, ierror) integer comm, ierror MPI_Bcast(buffer, count, datatype, root, comm, ierror) <type> buffer(*) integer count, datatype, root, comm, ierror root MPI_Bsend(buf, count, datatype, dest, tag, comm, ierror ) <type> buf(*) integer cont, datatype, dest, tag, comm, ierror MPI_Bsend_init(buf, cont, datatype, dest, tag, comm, request, ierror) <type> buf(*) integer count, datatype, dest, tag, comm, request, ierror MPI_Biffer_attch(buffer, size, ierror) <type> buffer(*) integer size, ierror MPI_Biffer_detach(buffer, size, ierror) <type> buffer(*) integer size, ierror MPI_Cancel(request, ierror) integer request, ierror MPI_Cart_coords(comm, rank, maxdims, coords, ierror) integer comm, rank, maxdims, coords(*), ierror MPI_Cart_creat(comm_old, ndims, dims, periods, reorder, comm_cart, ierror) integer comm_old, ndims, dims(*), comm_cart, ierror Logical periods(*), reorder MPI_Cart_get(comm, maxdims, dims, periods, coords, ierror) 224

integer comm, maxdims, dims(*), coords(*), ierror Logical periods(*) MPI_Cart_map(comm, ndims, dims, periods, newrank, ierror) integer comm, ndims, dims(*), newrank, ierror Logical periods(*) MPI_Cart_rank(comm, coords, rank, ierror) integer comm, coords(*), rank, ierror MPI_Cart_shift(comm, direction, disp, rank_source, rank_dest, ierror MPI_Cart_sub(comm, remain_dims, newcomm, ierror) integer comm, newcomm, ierror Logical remain_dims(*) MPI_Cartdim_get(comm, ndism, ierror) integer comm, ndims, ierror MPI_Comm_compare(comm1, comm2, result, ierror) integer comm, group, newcomm, ierror MPI_Comm_creat(comm, group, newcomm, ierror) integer comm, group, newcomm, ierror MPI_Comm_dup(comm, newcomm, ierror) integer comm, newcomm, ierror MPI_Comm_free(comm, ierror) integer comm, ierror MPI_Comm_group(comm, group, ierror) integer comm, group, ierror MPI_Comm_rank(comm, rank, ierror) integer comm, rank, ierror MPI_comm_remote_group(comm, group, ierror) integer comm, group, ierror MPI_comm_remote_size(comm, size, ierror) integer comm, size, ierror MPI_Comm_set_attr(comm, keyval, attribute_val, ierror) 225

integer comm, keyval, ierror integer (kind=mpi_address_kind) attribute_val MPI_Comm_size(comm, size, ierror) integer comm, size, ierror MPI_Comm_split(comm, color, key, newcomm, ierror) integer comm, color, key, newcomm, ierror MPI_Comm_test_inter(comm, flag, ierror) integer comm, ierror Logical flag MPI_Dims_create(nnodes, ndims, dims, ierror) integer nnodes, ndims, dims(*), ierror MPI_Errhandler_create(function, errhandler, ierror) External function integer errhandler, ierror MPI MPI_Comm_create_errhandler MPI_Errhandler_free(comm, errhandler, ierror) integer comm, errhandler, ierror MPI MPI_Errhandler_get(comm, errhandler, ierror) integer errhandler, ierror MPI_Comm_get_errhandler MPI_Errhandler_set(comm, errhandler, ierror) integer comm, errhandler, ierror MPI MPI_Comm_set_errhandler MPI_Error_class(errorcode, errorclass, ierror) integer errorcode, errorclass, ierror MPI_Error_string(errorcode, string, resultlen, ierror) integer errorcode, resultlem, ierror character *(MPI_MAX_ERROR_STRING) string MPI_Finalize(ierror) integer ierror MPI MPI_Gather(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm, ierror) <type> sendbuf(*), recvbuf(*) integer sendcount, sendtype, recvcount, recvtype, root, comm, ierror 226

MPI_Gatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm, ierror) <type> sendbuf(*), recvbuf(*) integer sendcount, sendtype, recvcounts(*), displs(*), recvtype, root, comm, ierror MPI_Get_count(status, datatype, count, ierror) integer status(*), datatype, count, ierror MPI_Get_elements(status, datatype, elements, ierror) integer status(*), datatype, elements, ierror MPI_Get_processor_name(name, resultlen, ierror) character * (MPI_MAX_PROCESSOR_NAME) name integer resultlen, ierror MPI_Get_version(version, subversion, ierror) integer version, subversion, ierror MPI MPI_Graph_create(comm_old, nnodes, index, edges, reorder, comm_graph, ierror) integer comm_old, nnodes, index(*), edges(*), comm_graph, ierror Logical reorder MPI_Graph_get(comm, maxindex, maxedges, index, edges, ierror) integer comm, maxindex, maxedges, index(*), edges(*), error MPI_Graph_map(comm, nnodes, index, edges, newrank, error) integer comm, nnodes, index(*), edges(*), newrank, error MPI_Graph_neighbors_count(comm, rank, nneighbors, ierror) integer comm, rank, nneighbors, ierror MPI_Graph_neighbors(comm, rank, maxneighbors, neighbors, ierror) integer comm, rank, maxneighbors, neighbors(*), ierror MPI_Graphdims_Get(comm, nnodes, nedges, ierror) integer comm, nnodes, nedges, ierror MPI_Group_compare(group1, group2, result, ierror) integer group1, group2, result, ierror MPI_Group_difference(group1, group2, newgroup, ierror) integer group1, group2, newgroup, ierror MPI_Gropu_excl(gropu, n, ranks, newgroup, ierror) 227

    integer group, n, ranks(*), newgroup, ierror

MPI_Group_free(group, ierror)
    integer group, ierror

MPI_Group_incl(group, n, ranks, newgroup, ierror)
    integer group, n, ranks(*), newgroup, ierror

MPI_Group_intersection(group1, group2, newgroup, ierror)
    integer group1, group2, newgroup, ierror

MPI_Group_range_excl(group, n, ranges, newgroup, ierror)
    integer group, n, ranges(3, *), newgroup, ierror

MPI_Group_range_incl(group, n, ranges, newgroup, ierror)
    integer group, n, ranges(3, *), newgroup, ierror

MPI_Group_rank(group, rank, ierror)
    integer group, rank, ierror

MPI_Group_size(group, size, ierror)
    integer group, size, ierror

MPI_Group_translate_ranks(group1, n, ranks1, group2, ranks2, ierror)
    integer group1, n, ranks1(*), group2, ranks2(*), ierror

MPI_Group_union(group1, group2, newgroup, ierror)
    integer group1, group2, newgroup, ierror

MPI_Ibsend(buf, count, datatype, dest, tag, comm, request, ierror)
    <type> buf(*)
    integer count, datatype, dest, tag, comm, request, ierror

MPI_Init(ierror)
    integer ierror
    (initializes the MPI execution environment)

MPI_Initialized(flag, ierror)
    logical flag
    integer ierror
    (tests whether MPI_Init has been called)

MPI_Intercomm_create(local_comm, local_leader, peer_comm, remote_leader, tag, newintercomm, ierror)
    integer local_comm, local_leader, peer_comm, remote_leader, tag, newintercomm, ierror

MPI_Intercomm_merge(intercomm, high, intracomm, ierror)
    integer intercomm, intracomm, ierror
    logical high

MPI_Iprobe(source, tag, comm, flag, status, ierror)
    integer source, tag, comm, status(*), ierror
    logical flag

MPI_Irecv(buf, count, datatype, source, tag, comm, request, ierror)
    <type> buf(*)
    integer count, datatype, source, tag, comm, request, ierror

MPI_Irsend(buf, count, datatype, dest, tag, comm, request, ierror)
    <type> buf(*)
    integer count, datatype, dest, tag, comm, request, ierror

MPI_Isend(buf, count, datatype, dest, tag, comm, request, ierror)
    <type> buf(*)
    integer count, datatype, dest, tag, comm, request, ierror

MPI_Issend(buf, count, datatype, dest, tag, comm, request, ierror)
    <type> buf(*)
    integer count, datatype, dest, tag, comm, request, ierror

MPI_Keyval_create(copy_fn, delete_fn, keyval, extra_state, ierror)
    external copy_fn, delete_fn
    integer keyval, extra_state, ierror
    (superseded in MPI-2 by MPI_Comm_create_keyval)

MPI_Keyval_free(keyval, ierror)
    integer keyval, ierror

MPI_Op_create(function, commute, op, ierror)
    external function
    logical commute
    integer op, ierror

MPI_Op_free(op, ierror)
    integer op, ierror

MPI_Pack(inbuf, incount, datatype, outbuf, outcount, position, comm, ierror)
    <type> inbuf(*), outbuf(*)
    integer incount, datatype, outcount, position, comm, ierror

MPI_Pack_size(incount, datatype, comm, size, ierror)

    integer incount, datatype, comm, size, ierror

MPI_Pcontrol(level)
    integer level

MPI_Probe(source, tag, comm, status, ierror)
    integer source, tag, comm, status(*), ierror

MPI_Recv(buf, count, datatype, source, tag, comm, status, ierror)
    <type> buf(*)
    integer count, datatype, source, tag, comm, status(*), ierror

MPI_Recv_init(buf, count, datatype, source, tag, comm, request, ierror)
    <type> buf(*)
    integer count, datatype, source, tag, comm, request, ierror

MPI_Reduce(sendbuf, recvbuf, count, datatype, op, root, comm, ierror)
    <type> sendbuf(*), recvbuf(*)
    integer count, datatype, op, root, comm, ierror

MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts, datatype, op, comm, ierror)
    <type> sendbuf(*), recvbuf(*)
    integer recvcounts(*), datatype, op, comm, ierror

MPI_Request_free(request, ierror)
    integer request, ierror

MPI_Rsend(buf, count, datatype, dest, tag, comm, ierror)
    <type> buf(*)
    integer count, datatype, dest, tag, comm, ierror

MPI_Rsend_init(buf, count, datatype, dest, tag, comm, request, ierror)
    <type> buf(*)
    integer count, datatype, dest, tag, comm, request, ierror

MPI_Scan(sendbuf, recvbuf, count, datatype, op, comm, ierror)
    <type> sendbuf(*), recvbuf(*)
    integer count, datatype, op, comm, ierror

MPI_Scatter(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, ierror)
    <type> sendbuf(*), recvbuf(*)
    integer sendcount, sendtype, recvcount, recvtype, root, comm, ierror

MPI_Scatterv(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm,

ierror)
    <type> sendbuf(*), recvbuf(*)
    integer sendcounts(*), displs(*), sendtype, recvcount, recvtype, root, comm, ierror

MPI_Send(buf, count, datatype, dest, tag, comm, ierror)
    <type> buf(*)
    integer count, datatype, dest, tag, comm, ierror

MPI_Send_init(buf, count, datatype, dest, tag, comm, request, ierror)
    <type> buf(*)
    integer count, datatype, dest, tag, comm, request, ierror

MPI_Sendrecv(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, status, ierror)
    <type> sendbuf(*), recvbuf(*)
    integer sendcount, sendtype, dest, sendtag, recvcount, recvtype, source, recvtag, comm, status(*), ierror

MPI_Sendrecv_replace(buf, count, datatype, dest, sendtag, source, recvtag, comm, status, ierror)
    <type> buf(*)
    integer count, datatype, dest, sendtag, source, recvtag, comm, status(*), ierror

MPI_Ssend(buf, count, datatype, dest, tag, comm, ierror)
    <type> buf(*)
    integer count, datatype, dest, tag, comm, ierror

MPI_Ssend_init(buf, count, datatype, dest, tag, comm, request, ierror)
    <type> buf(*)
    integer count, datatype, dest, tag, comm, request, ierror

MPI_Start(request, ierror)
    integer request, ierror

MPI_Startall(count, array_of_requests, ierror)
    integer count, array_of_requests(*), ierror

MPI_Test(request, flag, status, ierror)
    integer request, status(*), ierror
    logical flag

MPI_Testall(count, array_of_requests, flag, array_of_statuses, ierror)
    integer count, array_of_requests(*),

array_of_statuses(mpi_status_size, *), ierror
    logical flag

MPI_Testany(count, array_of_requests, index, flag, status, ierror)
    integer count, array_of_requests(*), index, status(*), ierror
    logical flag

MPI_Testsome(incount, array_of_requests, outcount, array_of_indices, array_of_statuses, ierror)
    integer incount, array_of_requests(*), outcount, array_of_indices(*), array_of_statuses(mpi_status_size, *), ierror

MPI_Test_cancelled(status, flag, ierror)
    integer status(*), ierror
    logical flag

MPI_Topo_test(comm, top_type, ierror)
    integer comm, top_type, ierror

MPI_Type_commit(datatype, ierror)
    integer datatype, ierror

MPI_Type_contiguous(count, oldtype, newtype, ierror)
    integer count, oldtype, newtype, ierror

MPI_Type_extent(datatype, extent, ierror)
    integer datatype, extent, ierror
    (superseded in MPI-2 by MPI_Type_get_extent)

MPI_Type_free(datatype, ierror)
    integer datatype, ierror

MPI_Type_hindexed(count, array_of_blocklengths, array_of_displacements, oldtype, newtype, ierror)
    integer count, array_of_blocklengths(*), array_of_displacements(*), oldtype, newtype, ierror
    (superseded in MPI-2 by MPI_Type_create_hindexed)

MPI_Type_hvector(count, blocklength, stride, oldtype, newtype, ierror)
    integer count, blocklength, stride, oldtype, newtype, ierror
    (superseded in MPI-2 by MPI_Type_create_hvector)

MPI_Type_indexed(count, array_of_blocklengths, array_of_displacements, oldtype, newtype, ierror)
    integer count, array_of_blocklengths(*), array_of_displacements(*), oldtype, newtype, ierror

MPI_Type_lb(datatype, displacement, ierror)
    integer datatype, displacement, ierror
    (superseded in MPI-2 by MPI_Type_get_extent)

MPI_Type_size(datatype, size, ierror)
    integer datatype, size, ierror

MPI_Type_struct(count, array_of_blocklengths, array_of_displacements, array_of_types, newtype, ierror)
    integer count, array_of_blocklengths(*), array_of_displacements(*), array_of_types(*), newtype, ierror
    (superseded in MPI-2 by MPI_Type_create_struct)

MPI_Type_ub(datatype, displacement, ierror)
    integer datatype, displacement, ierror
    (superseded in MPI-2 by MPI_Type_get_extent)

MPI_Type_vector(count, blocklength, stride, oldtype, newtype, ierror)
    integer count, blocklength, stride, oldtype, newtype, ierror

MPI_Unpack(inbuf, insize, position, outbuf, outcount, datatype, comm, ierror)
    <type> inbuf(*), outbuf(*)
    integer insize, position, outcount, datatype, comm, ierror

MPI_Wait(request, status, ierror)
    integer request, status(*), ierror

MPI_Waitall(count, array_of_requests, array_of_statuses, ierror)
    integer count, array_of_requests(*), array_of_statuses(mpi_status_size, *), ierror

MPI_Waitany(count, array_of_requests, index, status, ierror)
    integer count, array_of_requests(*), index, status(*), ierror

MPI_Waitsome(incount, array_of_requests, outcount, array_of_indices, array_of_statuses, ierror)
    integer incount, array_of_requests(*), outcount, array_of_indices(*), array_of_statuses(mpi_status_size, *), ierror

MPI_Wtick()
    (returns the resolution of MPI_Wtime, in seconds)

MPI_Wtime()
    (returns elapsed wall-clock time on the calling process, in seconds)
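The point-to-point calls listed above can be exercised with a short program. The following is a minimal sketch in C (the book's other binding language), assuming an MPI implementation such as MPICH or Open MPI is installed; it is compiled with mpicc and launched with mpirun -np 2. The tag value and buffer length are illustrative only.

```c
/* Minimal sketch of MPI-1 point-to-point communication.
 * Assumes an installed MPI implementation; run with: mpirun -np 2 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);               /* start the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process      */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* number of processes       */

    if (size >= 2) {
        double buf[4] = {0.0};
        if (rank == 0) {
            for (int i = 0; i < 4; i++) buf[i] = i + 1.0;
            MPI_Send(buf, 4, MPI_DOUBLE, 1, 99, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Status status;
            int count;
            MPI_Recv(buf, 4, MPI_DOUBLE, 0, 99, MPI_COMM_WORLD, &status);
            MPI_Get_count(&status, MPI_DOUBLE, &count); /* entries received */
            printf("rank 1 received %d doubles\n", count);
        }
    }

    MPI_Finalize();                       /* end the MPI environment   */
    return 0;
}
```

MPI_Recv pairs with any of the send variants above (MPI_Send, MPI_Ssend, MPI_Rsend, MPI_Ibsend); only the completion semantics on the sending side differ.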

18.3 MPI-2 C Bindings

int MPI_Accumulate(void *origin_addr, int origin_count, MPI_Datatype origin_datatype, int target_rank, MPI_Aint target_disp, int target_count, MPI_Datatype target_datatype, MPI_Op op, MPI_Win win)
int MPI_Add_error_class(int *errorclass)
int MPI_Add_error_code(int errorclass, int *errorcode)
int MPI_Add_error_string(int errorcode, char *string)
int MPI_Alloc_mem(MPI_Aint size, MPI_Info info, void *baseptr)
int MPI_Alltoallw(void *sendbuf, int sendcounts[], int sdispls[], MPI_Datatype sendtypes[], void *recvbuf, int recvcounts[], int rdispls[], MPI_Datatype recvtypes[], MPI_Comm comm)
int MPI_Close_port(char *port_name)
int MPI_Comm_accept(char *port_name, MPI_Info info, int root, MPI_Comm comm, MPI_Comm *newcomm)
MPI_Fint MPI_Comm_c2f(MPI_Comm comm)
    (converts a C communicator handle to a Fortran handle)
int MPI_Comm_call_errhandler(MPI_Comm comm, int errorcode)
int MPI_Comm_connect(char *port_name, MPI_Info info, int root, MPI_Comm comm, MPI_Comm *newcomm)
int MPI_Comm_create_errhandler(MPI_Comm_errhandler_fn *function, MPI_Errhandler *errhandler)
int MPI_Comm_create_keyval(MPI_Comm_copy_attr_function *comm_copy_attr_fn, MPI_Comm_delete_attr_function *comm_delete_attr_fn, int *comm_keyval, void *extra_state)
int MPI_Comm_delete_attr(MPI_Comm comm, int comm_keyval)
int MPI_Comm_disconnect(MPI_Comm *comm)
MPI_Comm MPI_Comm_f2c(MPI_Fint comm)
    (converts a Fortran communicator handle to a C handle)
int MPI_Comm_free_keyval(int *comm_keyval)

    (frees a keyval created by MPI_Comm_create_keyval)
int MPI_Comm_get_attr(MPI_Comm comm, int comm_keyval, void *attribute_val, int *flag)
int MPI_Comm_get_errhandler(MPI_Comm comm, MPI_Errhandler *errhandler)
int MPI_Comm_get_name(MPI_Comm comm, char *comm_name, int *resultlen)
int MPI_Comm_get_parent(MPI_Comm *parent)
int MPI_Comm_join(int fd, MPI_Comm *intercomm)
int MPI_Comm_set_attr(MPI_Comm comm, int comm_keyval, void *attribute_val)
int MPI_Comm_set_errhandler(MPI_Comm comm, MPI_Errhandler errhandler)
int MPI_Comm_set_name(MPI_Comm comm, char *comm_name)
int MPI_Comm_spawn(char *command, char *argv[], int maxprocs, MPI_Info info, int root, MPI_Comm comm, MPI_Comm *intercomm, int array_of_errcodes[])
int MPI_Comm_spawn_multiple(int count, char *array_of_commands[], char **array_of_argv[], int array_of_maxprocs[], MPI_Info array_of_info[], int root, MPI_Comm comm, MPI_Comm *intercomm, int array_of_errcodes[])
int MPI_Exscan(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
    (exclusive variant of MPI_Scan)
MPI_Fint MPI_File_c2f(MPI_File file)
    (converts a C file handle to a Fortran handle)
int MPI_File_call_errhandler(MPI_File fh, int errorcode)
int MPI_File_close(MPI_File *fh)
int MPI_File_create_errhandler(MPI_File_errhandler_fn *function, MPI_Errhandler *errhandler)
int MPI_File_delete(char *filename, MPI_Info info)
MPI_File MPI_File_f2c(MPI_Fint file)
    (converts a Fortran file handle to a C handle)
int MPI_File_get_amode(MPI_File fh, int *amode)
int MPI_File_get_atomicity(MPI_File fh, int *flag)
int MPI_File_get_byte_offset(MPI_File fh, MPI_Offset offset, MPI_Offset *disp)

int MPI_File_get_errhandler(MPI_File file, MPI_Errhandler *errhandler)
int MPI_File_get_group(MPI_File fh, MPI_Group *group)
int MPI_File_get_info(MPI_File fh, MPI_Info *info_used)
int MPI_File_get_position(MPI_File fh, MPI_Offset *offset)
int MPI_File_get_position_shared(MPI_File fh, MPI_Offset *offset)
int MPI_File_get_size(MPI_File fh, MPI_Offset *size)
int MPI_File_get_type_extent(MPI_File fh, MPI_Datatype datatype, MPI_Aint *extent)
int MPI_File_get_view(MPI_File fh, MPI_Offset *disp, MPI_Datatype *etype, MPI_Datatype *filetype, char *datarep)
int MPI_File_iread(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Request *request)
int MPI_File_iread_at(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype, MPI_Request *request)
int MPI_File_iread_shared(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Request *request)
int MPI_File_iwrite(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Request *request)
int MPI_File_iwrite_at(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype, MPI_Request *request)
int MPI_File_iwrite_shared(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Request *request)
int MPI_File_open(MPI_Comm comm, char *filename, int amode, MPI_Info info, MPI_File *fh)
int MPI_File_preallocate(MPI_File fh, MPI_Offset size)
int MPI_File_read(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
int MPI_File_read_all(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
    (collective version of MPI_File_read)

int MPI_File_read_all_begin(MPI_File fh, void *buf, int count, MPI_Datatype datatype)
int MPI_File_read_all_end(MPI_File fh, void *buf, MPI_Status *status)
int MPI_File_read_at(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
int MPI_File_read_at_all(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
    (collective version of MPI_File_read_at)
int MPI_File_read_at_all_begin(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype)
int MPI_File_read_at_all_end(MPI_File fh, void *buf, MPI_Status *status)
int MPI_File_read_ordered(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
    (collective, ordered version of MPI_File_read_shared)
int MPI_File_read_ordered_begin(MPI_File fh, void *buf, int count, MPI_Datatype datatype)
int MPI_File_read_ordered_end(MPI_File fh, void *buf, MPI_Status *status)
int MPI_File_read_shared(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
int MPI_File_seek(MPI_File fh, MPI_Offset offset, int whence)
int MPI_File_seek_shared(MPI_File fh, MPI_Offset offset, int whence)
int MPI_File_set_atomicity(MPI_File fh, int flag)
int MPI_File_set_errhandler(MPI_File file, MPI_Errhandler errhandler)
int MPI_File_set_info(MPI_File fh, MPI_Info info)
int MPI_File_set_size(MPI_File fh, MPI_Offset size)
int MPI_File_set_view(MPI_File fh, MPI_Offset disp, MPI_Datatype etype, MPI_Datatype filetype, char *datarep, MPI_Info info)
int MPI_File_sync(MPI_File fh)

int MPI_File_write(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
int MPI_File_write_all(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
    (collective version of MPI_File_write)
int MPI_File_write_all_begin(MPI_File fh, void *buf, int count, MPI_Datatype datatype)
int MPI_File_write_all_end(MPI_File fh, void *buf, MPI_Status *status)
int MPI_File_write_at(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
int MPI_File_write_at_all(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
int MPI_File_write_at_all_begin(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype)
int MPI_File_write_at_all_end(MPI_File fh, void *buf, MPI_Status *status)
int MPI_File_write_ordered(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
    (collective, ordered version of MPI_File_write_shared)
int MPI_File_write_ordered_begin(MPI_File fh, void *buf, int count, MPI_Datatype datatype)
int MPI_File_write_ordered_end(MPI_File fh, void *buf, MPI_Status *status)
int MPI_File_write_shared(MPI_File fh, void *buf, int count, MPI_Datatype datatype, MPI_Status *status)
int MPI_Finalized(int *flag)
    (tests whether MPI_Finalize has been called)
int MPI_Free_mem(void *base)
    (frees memory allocated with MPI_Alloc_mem)
int MPI_Get(void *origin_addr, int origin_count, MPI_Datatype origin_datatype, int target_rank, MPI_Aint target_disp, int target_count, MPI_Datatype target_datatype, MPI_Win win)
int MPI_Get_address(void *location, MPI_Aint *address)
int MPI_Grequest_complete(MPI_Request request)
int MPI_Grequest_start(MPI_Grequest_query_function *query_fn, MPI_Grequest_free_function

*free_fn, MPI_Grequest_cancel_function *cancel_fn, void *extra_state, MPI_Request *request)
MPI_Fint MPI_Group_c2f(MPI_Group group)
    (converts a C group handle to a Fortran handle)
MPI_Group MPI_Group_f2c(MPI_Fint group)
    (converts a Fortran group handle to a C handle)
MPI_Fint MPI_Info_c2f(MPI_Info info)
    (converts a C info handle to a Fortran handle)
int MPI_Info_create(MPI_Info *info)
int MPI_Info_delete(MPI_Info info, char *key)
    (deletes a (key, value) pair from an info object)
int MPI_Info_dup(MPI_Info info, MPI_Info *newinfo)
MPI_Info MPI_Info_f2c(MPI_Fint info)
    (converts a Fortran info handle to a C handle)
int MPI_Info_free(MPI_Info *info)
int MPI_Info_get(MPI_Info info, char *key, int valuelen, char *value, int *flag)
int MPI_Info_get_nkeys(MPI_Info info, int *nkeys)
int MPI_Info_get_nthkey(MPI_Info info, int n, char *key)
int MPI_Info_get_valuelen(MPI_Info info, char *key, int *valuelen, int *flag)
int MPI_Info_set(MPI_Info info, char *key, char *value)
int MPI_Init_thread(int *argc, char ***argv, int required, int *provided)
    (initializes MPI with the requested level of thread support)
int MPI_Is_thread_main(int *flag)
int MPI_Lookup_name(char *service_name, MPI_Info info, char *port_name)
MPI_Fint MPI_Op_c2f(MPI_Op op)
MPI_Op MPI_Op_f2c(MPI_Fint op)
int MPI_Open_port(MPI_Info info, char *port_name)
int MPI_Pack_external(char *datarep, void *inbuf, int incount, MPI_Datatype datatype, void *outbuf, MPI_Aint outsize, MPI_Aint *position)

int MPI_Pack_external_size(char *datarep, int incount, MPI_Datatype datatype, MPI_Aint *size)
int MPI_Publish_name(char *service_name, MPI_Info info, char *port_name)
int MPI_Put(void *origin_addr, int origin_count, MPI_Datatype origin_datatype, int target_rank, MPI_Aint target_disp, int target_count, MPI_Datatype target_datatype, MPI_Win win)
int MPI_Query_thread(int *provided)
int MPI_Register_datarep(char *datarep, MPI_Datarep_conversion_function *read_conversion_fn, MPI_Datarep_conversion_function *write_conversion_fn, MPI_Datarep_extent_function *dtype_file_extent_fn, void *extra_state)
MPI_Fint MPI_Request_c2f(MPI_Request request)
MPI_Request MPI_Request_f2c(MPI_Fint request)
int MPI_Request_get_status(MPI_Request request, int *flag, MPI_Status *status)
int MPI_Status_c2f(MPI_Status *c_status, MPI_Fint *f_status)
int MPI_Status_f2c(MPI_Fint *f_status, MPI_Status *c_status)
int MPI_Status_set_cancelled(MPI_Status *status, int flag)
    (sets the value later returned by MPI_Test_cancelled)
int MPI_Status_set_elements(MPI_Status *status, MPI_Datatype datatype, int count)
    (sets the value later returned by MPI_Get_elements)
MPI_Fint MPI_Type_c2f(MPI_Datatype datatype)
int MPI_Type_create_darray(int size, int rank, int ndims, int array_of_gsizes[], int array_of_distribs[], int array_of_dargs[], int array_of_psizes[], int order, MPI_Datatype oldtype, MPI_Datatype *newtype)
int MPI_Type_create_f90_complex(int p, int r, MPI_Datatype *newtype)
int MPI_Type_create_f90_integer(int r, MPI_Datatype *newtype)
int MPI_Type_create_f90_real(int p, int r, MPI_Datatype *newtype)
int MPI_Type_create_hindexed(int count, int array_of_blocklengths[], MPI_Aint array_of_displacements[], MPI_Datatype oldtype, MPI_Datatype *newtype)

int MPI_Type_create_hvector(int count, int blocklength, MPI_Aint stride, MPI_Datatype oldtype, MPI_Datatype *newtype)
int MPI_Type_create_indexed_block(int count, int blocklength, int array_of_displacements[], MPI_Datatype oldtype, MPI_Datatype *newtype)
int MPI_Type_create_keyval(MPI_Type_copy_attr_function *type_copy_attr_fn, MPI_Type_delete_attr_function *type_delete_attr_fn, int *type_keyval, void *extra_state)
int MPI_Type_create_resized(MPI_Datatype oldtype, MPI_Aint lb, MPI_Aint extent, MPI_Datatype *newtype)
int MPI_Type_create_struct(int count, int array_of_blocklengths[], MPI_Aint array_of_displacements[], MPI_Datatype array_of_types[], MPI_Datatype *newtype)
int MPI_Type_create_subarray(int ndims, int array_of_sizes[], int array_of_subsizes[], int array_of_starts[], int order, MPI_Datatype oldtype, MPI_Datatype *newtype)
int MPI_Type_delete_attr(MPI_Datatype type, int type_keyval)
int MPI_Type_dup(MPI_Datatype type, MPI_Datatype *newtype)
MPI_Datatype MPI_Type_f2c(MPI_Fint datatype)
    (converts a Fortran datatype handle to a C handle)
int MPI_Type_free_keyval(int *type_keyval)
    (frees a keyval created by MPI_Type_create_keyval)
int MPI_Type_get_attr(MPI_Datatype type, int type_keyval, void *attribute_val, int *flag)
int MPI_Type_get_contents(MPI_Datatype datatype, int max_integers, int max_addresses, int max_datatypes, int array_of_integers[], MPI_Aint array_of_addresses[], MPI_Datatype array_of_datatypes[])
int MPI_Type_get_envelope(MPI_Datatype datatype, int *num_integers, int *num_addresses, int *num_datatypes, int *combiner)
int MPI_Type_get_extent(MPI_Datatype datatype, MPI_Aint *lb, MPI_Aint *extent)
int MPI_Type_get_name(MPI_Datatype type, char *type_name, int *resultlen)
int MPI_Type_get_true_extent(MPI_Datatype datatype, MPI_Aint *true_lb, MPI_Aint *true_extent)
int MPI_Type_match_size(int typeclass, int size, MPI_Datatype *type)

    (returns an MPI datatype matching the given type class and size)
int MPI_Type_set_attr(MPI_Datatype type, int type_keyval, void *attribute_val)
int MPI_Type_set_name(MPI_Datatype type, char *type_name)
int MPI_Unpack_external(char *datarep, void *inbuf, MPI_Aint insize, MPI_Aint *position, void *outbuf, int outcount, MPI_Datatype datatype)
int MPI_Unpublish_name(char *service_name, MPI_Info info, char *port_name)
MPI_Fint MPI_Win_c2f(MPI_Win win)
    (converts a C window handle to a Fortran handle)
int MPI_Win_call_errhandler(MPI_Win win, int errorcode)
int MPI_Win_complete(MPI_Win win)
    (completes an RMA access epoch started with MPI_Win_start)
int MPI_Win_create(void *base, MPI_Aint size, int disp_unit, MPI_Info info, MPI_Comm comm, MPI_Win *win)
int MPI_Win_create_errhandler(MPI_Win_errhandler_fn *function, MPI_Errhandler *errhandler)
int MPI_Win_create_keyval(MPI_Win_copy_attr_function *win_copy_attr_fn, MPI_Win_delete_attr_function *win_delete_attr_fn, int *win_keyval, void *extra_state)
int MPI_Win_delete_attr(MPI_Win win, int win_keyval)
MPI_Win MPI_Win_f2c(MPI_Fint win)
    (converts a Fortran window handle to a C handle)
int MPI_Win_fence(int assert, MPI_Win win)
    (synchronizes RMA operations on a window)
int MPI_Win_free(MPI_Win *win)
int MPI_Win_free_keyval(int *win_keyval)
    (frees a keyval created by MPI_Win_create_keyval)
int MPI_Win_get_attr(MPI_Win win, int win_keyval, void *attribute_val, int *flag)
int MPI_Win_get_errhandler(MPI_Win win, MPI_Errhandler *errhandler)
int MPI_Win_get_group(MPI_Win win, MPI_Group *group)
int MPI_Win_get_name(MPI_Win win, char *win_name, int *resultlen)
int MPI_Win_lock(int lock_type, int rank, int assert, MPI_Win win)

int MPI_Win_post(MPI_Group group, int assert, MPI_Win win)
int MPI_Win_set_attr(MPI_Win win, int win_keyval, void *attribute_val)
int MPI_Win_set_errhandler(MPI_Win win, MPI_Errhandler errhandler)
int MPI_Win_set_name(MPI_Win win, char *win_name)
int MPI_Win_start(MPI_Group group, int assert, MPI_Win win)
    (starts an RMA access epoch, pairing with MPI_Win_post)
int MPI_Win_test(MPI_Win win, int *flag)
    (nonblocking test for completion of an RMA exposure epoch)
int MPI_Win_unlock(int rank, MPI_Win win)
int MPI_Win_wait(MPI_Win win)
    (completes an RMA exposure epoch started with MPI_Win_post)

18.4 MPI-2 Fortran Bindings

MPI_Accumulate(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win, ierror)
    <type> origin_addr(*)
    integer(kind=mpi_address_kind) target_disp
    integer origin_count, origin_datatype, target_rank, target_count, target_datatype, op, win, ierror

MPI_Add_error_class(errorclass, ierror)
    integer errorclass, ierror

MPI_Add_error_code(errorclass, errorcode, ierror)
    integer errorclass, errorcode, ierror

MPI_Add_error_string(errorcode, string, ierror)
    integer errorcode, ierror
    character*(*) string

MPI_Alloc_mem(size, info, baseptr, ierror)
    integer info, ierror
    integer(kind=mpi_address_kind) size, baseptr

MPI_Alltoallw(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, ierror)
    <type> sendbuf(*), recvbuf(*)
    integer sendcounts(*), sdispls(*), sendtypes(*), recvcounts(*),

rdispls(*), recvtypes(*), comm, ierror

MPI_Close_port(port_name, ierror)
    character*(*) port_name
    integer ierror

MPI_Comm_accept(port_name, info, root, comm, newcomm, ierror)
    character*(*) port_name
    integer info, root, comm, newcomm, ierror

MPI_Comm_call_errhandler(comm, errorcode, ierror)
    integer comm, errorcode, ierror

MPI_Comm_connect(port_name, info, root, comm, newcomm, ierror)
    character*(*) port_name
    integer info, root, comm, newcomm, ierror

MPI_Comm_create_errhandler(function, errhandler, ierror)
    external function
    integer errhandler, ierror

MPI_Comm_create_keyval(comm_copy_attr_fn, comm_delete_attr_fn, comm_keyval, extra_state, ierror)
    external comm_copy_attr_fn, comm_delete_attr_fn
    integer comm_keyval, ierror
    integer(kind=mpi_address_kind) extra_state

MPI_Comm_delete_attr(comm, comm_keyval, ierror)
    integer comm, comm_keyval, ierror

MPI_Comm_disconnect(comm, ierror)
    integer comm, ierror

MPI_Comm_free_keyval(comm_keyval, ierror)
    integer comm_keyval, ierror
    (frees a keyval created by MPI_Comm_create_keyval)

MPI_Comm_get_attr(comm, comm_keyval, attribute_val, flag, ierror)
    integer comm, comm_keyval, ierror
    integer(kind=mpi_address_kind) attribute_val
    logical flag

MPI_Comm_get_errhandler(comm, errhandler, ierror)
    integer comm, errhandler, ierror

MPI_Comm_get_name(comm, comm_name, resultlen, ierror)
    integer comm, resultlen, ierror
    character*(*) comm_name

MPI_Comm_get_parent(parent, ierror)
    integer parent, ierror

MPI_Comm_join(fd, intercomm, ierror)
    integer fd, intercomm, ierror

MPI_Comm_set_attr(comm, comm_keyval, attribute_val, ierror)
    integer comm, comm_keyval, ierror
    integer(kind=mpi_address_kind) attribute_val

MPI_Comm_set_errhandler(comm, errhandler, ierror)
    integer comm, errhandler, ierror

MPI_Comm_set_name(comm, comm_name, ierror)
    integer comm, ierror
    character*(*) comm_name

MPI_Comm_spawn(command, argv, maxprocs, info, root, comm, intercomm, array_of_errcodes, ierror)
    character*(*) command, argv(*)
    integer info, maxprocs, root, comm, intercomm, array_of_errcodes(*), ierror

MPI_Comm_spawn_multiple(count, array_of_commands, array_of_argv, array_of_maxprocs, array_of_info, root, comm, intercomm, array_of_errcodes, ierror)
    integer count, array_of_info(*), array_of_maxprocs(*), root, comm, intercomm, array_of_errcodes(*), ierror
    character*(*) array_of_commands(*), array_of_argv(count, *)

MPI_Exscan(sendbuf, recvbuf, count, datatype, op, comm, ierror)
    <type> sendbuf(*), recvbuf(*)
    integer count, datatype, op, comm, ierror
    (exclusive variant of MPI_Scan)

MPI_File_call_errhandler(fh, errorcode, ierror)
    integer fh, errorcode, ierror

MPI_File_close(fh, ierror)
    integer fh, ierror

MPI_File_create_errhandler(function, errhandler, ierror)
    external function

    integer errhandler, ierror

MPI_File_delete(filename, info, ierror)
    character*(*) filename
    integer info, ierror

MPI_File_get_amode(fh, amode, ierror)
    integer fh, amode, ierror

MPI_File_get_atomicity(fh, flag, ierror)
    integer fh, ierror
    logical flag

MPI_File_get_byte_offset(fh, offset, disp, ierror)
    integer fh, ierror
    integer(kind=mpi_offset_kind) offset, disp

MPI_File_get_errhandler(file, errhandler, ierror)
    integer file, errhandler, ierror

MPI_File_get_group(fh, group, ierror)
    integer fh, group, ierror

MPI_File_get_info(fh, info_used, ierror)
    integer fh, info_used, ierror

MPI_File_get_position(fh, offset, ierror)
    integer fh, ierror
    integer(kind=mpi_offset_kind) offset

MPI_File_get_position_shared(fh, offset, ierror)
    integer fh, ierror
    integer(kind=mpi_offset_kind) offset

MPI_File_get_size(fh, size, ierror)
    integer fh, ierror
    integer(kind=mpi_offset_kind) size

MPI_File_get_type_extent(fh, datatype, extent, ierror)
    integer fh, datatype, ierror
    integer(kind=mpi_address_kind) extent

MPI_File_get_view(fh, disp, etype, filetype, datarep, ierror)
    integer fh, etype, filetype, ierror

    character*(*) datarep
    integer(kind=mpi_offset_kind) disp

MPI_File_iread(fh, buf, count, datatype, request, ierror)
    <type> buf(*)
    integer fh, count, datatype, request, ierror

MPI_File_iread_at(fh, offset, buf, count, datatype, request, ierror)
    <type> buf(*)
    integer fh, count, datatype, request, ierror
    integer(kind=mpi_offset_kind) offset

MPI_File_iread_shared(fh, buf, count, datatype, request, ierror)
    <type> buf(*)
    integer fh, count, datatype, request, ierror

MPI_File_iwrite(fh, buf, count, datatype, request, ierror)
    <type> buf(*)
    integer fh, count, datatype, request, ierror

MPI_File_iwrite_at(fh, offset, buf, count, datatype, request, ierror)
    <type> buf(*)
    integer fh, count, datatype, request, ierror
    integer(kind=mpi_offset_kind) offset

MPI_File_iwrite_shared(fh, buf, count, datatype, request, ierror)
    <type> buf(*)
    integer fh, count, datatype, request, ierror

MPI_File_open(comm, filename, amode, info, fh, ierror)
    character*(*) filename
    integer comm, amode, info, fh, ierror

MPI_File_preallocate(fh, size, ierror)
    integer fh, ierror
    integer(kind=mpi_offset_kind) size

MPI_File_read(fh, buf, count, datatype, status, ierror)
    <type> buf(*)
    integer fh, count, datatype, status(mpi_status_size), ierror

MPI_File_read_all(fh, buf, count, datatype, status, ierror)
    <type> buf(*)
    integer fh, count, datatype, status(mpi_status_size), ierror
    (collective version of MPI_File_read)

MPI_File_read_all_begin(fh, buf, count, datatype, ierror)
    <type> buf(*)
    integer fh, count, datatype, ierror

MPI_File_read_all_end(fh, buf, status, ierror)
    <type> buf(*)
    integer fh, status(mpi_status_size), ierror

MPI_File_read_at(fh, offset, buf, count, datatype, status, ierror)
    <type> buf(*)
    integer fh, count, datatype, status(mpi_status_size), ierror
    integer(kind=mpi_offset_kind) offset

MPI_File_read_at_all(fh, offset, buf, count, datatype, status, ierror)
    <type> buf(*)
    integer fh, count, datatype, status(mpi_status_size), ierror
    integer(kind=mpi_offset_kind) offset
    (collective version of MPI_File_read_at)

MPI_File_read_at_all_begin(fh, offset, buf, count, datatype, ierror)
    <type> buf(*)
    integer fh, count, datatype, ierror
    integer(kind=mpi_offset_kind) offset

MPI_File_read_at_all_end(fh, buf, status, ierror)
    <type> buf(*)
    integer fh, status(mpi_status_size), ierror

MPI_File_read_ordered(fh, buf, count, datatype, status, ierror)
    <type> buf(*)
    integer fh, count, datatype, status(mpi_status_size), ierror
    (collective, ordered version of MPI_File_read_shared)

MPI_File_read_ordered_begin(fh, buf, count, datatype, ierror)
    <type> buf(*)
    integer fh, count, datatype, ierror

MPI_File_read_ordered_end(fh, buf, status, ierror)
    <type> buf(*)
    integer fh, status(mpi_status_size), ierror

MPI_File_read_shared(fh, buf, count, datatype, status, ierror)
    <type> buf(*)

    integer fh, count, datatype, status(mpi_status_size), ierror

MPI_File_seek(fh, offset, whence, ierror)
    integer fh, whence, ierror
    integer(kind=mpi_offset_kind) offset

MPI_File_seek_shared(fh, offset, whence, ierror)
    integer fh, whence, ierror
    integer(kind=mpi_offset_kind) offset

MPI_File_set_atomicity(fh, flag, ierror)
    integer fh, ierror
    logical flag

MPI_File_set_errhandler(file, errhandler, ierror)
    integer file, errhandler, ierror

MPI_File_set_info(fh, info, ierror)
    integer fh, info, ierror

MPI_File_set_size(fh, size, ierror)
    integer fh, ierror
    integer(kind=mpi_offset_kind) size

MPI_File_set_view(fh, disp, etype, filetype, datarep, info, ierror)
    integer fh, etype, filetype, info, ierror
    character*(*) datarep
    integer(kind=mpi_offset_kind) disp

MPI_File_sync(fh, ierror)
    integer fh, ierror

MPI_File_write(fh, buf, count, datatype, status, ierror)
    <type> buf(*)
    integer fh, count, datatype, status(mpi_status_size), ierror

MPI_File_write_all(fh, buf, count, datatype, status, ierror)
    <type> buf(*)
    integer fh, count, datatype, status(mpi_status_size), ierror
    (collective version of MPI_File_write)

MPI_File_write_all_begin(fh, buf, count, datatype, ierror)
    <type> buf(*)
    integer fh, count, datatype, ierror

MPI_File_write_all_end(fh, buf, status, ierror)
    <type> buf(*)
    integer fh, status(mpi_status_size), ierror

MPI_File_write_at(fh, offset, buf, count, datatype, status, ierror)
    <type> buf(*)
    integer fh, count, datatype, status(mpi_status_size), ierror
    integer(kind=mpi_offset_kind) offset

MPI_File_write_at_all(fh, offset, buf, count, datatype, status, ierror)
    <type> buf(*)
    integer fh, count, datatype, status(mpi_status_size), ierror
    integer(kind=mpi_offset_kind) offset

MPI_File_write_at_all_begin(fh, offset, buf, count, datatype, ierror)
    <type> buf(*)
    integer fh, count, datatype, ierror
    integer(kind=mpi_offset_kind) offset

MPI_File_write_at_all_end(fh, buf, status, ierror)
    <type> buf(*)
    integer fh, status(mpi_status_size), ierror

MPI_File_write_ordered(fh, buf, count, datatype, status, ierror)
    <type> buf(*)
    integer fh, count, datatype, status(mpi_status_size), ierror
    (collective, ordered version of MPI_File_write_shared)

MPI_File_write_ordered_begin(fh, buf, count, datatype, ierror)
    <type> buf(*)
    integer fh, count, datatype, ierror

MPI_File_write_ordered_end(fh, buf, status, ierror)
    <type> buf(*)
    integer fh, status(mpi_status_size), ierror

MPI_File_write_shared(fh, buf, count, datatype, status, ierror)
    <type> buf(*)
    integer fh, count, datatype, status(mpi_status_size), ierror

MPI_Finalized(flag, ierror)
    logical flag
    integer ierror
    (tests whether MPI_Finalize has been called)

MPI_Free_mem(base, ierror)
    <type> base(*)
    integer ierror
MPI_Get(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, ierror)
    <type> origin_addr(*)
    integer(kind=mpi_address_kind) target_disp
    integer origin_count, origin_datatype, target_rank, target_count, target_datatype, win, ierror
MPI_Get_address(location, address, ierror)
    <type> location(*)
    integer ierror
    integer(kind=mpi_address_kind) address
MPI_Grequest_complete(request, ierror)
    integer request, ierror
MPI_Grequest_start(query_fn, free_fn, cancel_fn, extra_state, request, ierror)
    integer request, ierror
    external query_fn, free_fn, cancel_fn
    integer(kind=mpi_address_kind) extra_state
MPI_Info_create(info, ierror)
    integer info, ierror
MPI_Info_delete(info, key, ierror)
    integer info, ierror
    character*(*) key
MPI_Info_dup(info, newinfo, ierror)
    integer info, newinfo, ierror
MPI_Info_free(info, ierror)
    integer info, ierror
MPI_Info_get(info, key, valuelen, value, flag, ierror)
    integer info, valuelen, ierror
    character*(*) key, value
    logical flag
MPI_Info_get_nkeys(info, nkeys, ierror)

    integer info, nkeys, ierror
MPI_Info_get_nthkey(info, n, key, ierror)
    integer info, n, ierror
    character*(*) key
MPI_Info_get_valuelen(info, key, valuelen, flag, ierror)
    integer info, valuelen, ierror
    logical flag
    character*(*) key
MPI_Info_set(info, key, value, ierror)
    integer info, ierror
    character*(*) key, value
MPI_Init_thread(required, provided, ierror)
    integer required, provided, ierror
MPI_Is_thread_main(flag, ierror)
    logical flag
    integer ierror
MPI_Lookup_name(service_name, info, port_name, ierror)
    character*(*) service_name, port_name
    integer info, ierror
MPI_Open_port(info, port_name, ierror)
    character*(*) port_name
    integer info, ierror
MPI_Pack_external(datarep, inbuf, incount, datatype, outbuf, outsize, position, ierror)
    integer incount, datatype, ierror
    integer(kind=mpi_address_kind) outsize, position
    character*(*) datarep
    <type> inbuf(*), outbuf(*)
MPI_Pack_external_size(datarep, incount, datatype, size, ierror)
    integer incount, datatype, ierror
    integer(kind=mpi_address_kind) size
    character*(*) datarep
MPI_Publish_name(service_name, info, port_name, ierror)
    integer info, ierror
    character*(*) service_name, port_name

MPI_Put(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, ierror)
    <type> origin_addr(*)
    integer(kind=mpi_address_kind) target_disp
    integer origin_count, origin_datatype, target_rank, target_count, target_datatype, win, ierror
MPI_Query_thread(provided, ierror)
    integer provided, ierror
MPI_Register_datarep(datarep, read_conversion_fn, write_conversion_fn, dtype_file_extent_fn, extra_state, ierror)
    character*(*) datarep
    external read_conversion_fn, write_conversion_fn, dtype_file_extent_fn
    integer(kind=mpi_address_kind) extra_state
    integer ierror
MPI_Request_get_status(request, flag, status, ierror)
    integer request, status(mpi_status_size), ierror
    logical flag
MPI_Sizeof(x, size, ierror)
    <type> x
    integer size, ierror
MPI_Status_set_cancelled(status, flag, ierror)
    integer status(mpi_status_size), ierror
    logical flag
MPI_Status_set_elements(status, datatype, count, ierror)
    integer status(mpi_status_size), datatype, count, ierror
MPI_Type_create_darray(size, rank, ndims, array_of_gsizes, array_of_distribs, array_of_dargs, array_of_psizes, order, oldtype, newtype, ierror)
    integer size, rank, ndims, array_of_gsizes(*), array_of_distribs(*), array_of_dargs(*), array_of_psizes(*), order, oldtype, newtype, ierror
MPI_Type_create_f90_complex(p, r, newtype, ierror)
    integer p, r, newtype, ierror
MPI_Type_create_f90_integer(r, newtype, ierror)
    integer r, newtype, ierror

MPI_Type_create_f90_real(p, r, newtype, ierror)
    integer p, r, newtype, ierror
MPI_Type_create_hindexed(count, array_of_blocklengths, array_of_displacements, oldtype, newtype, ierror)
    integer count, array_of_blocklengths(*), oldtype, newtype, ierror
    integer(kind=mpi_address_kind) array_of_displacements(*)
MPI_Type_create_hvector(count, blocklength, stride, oldtype, newtype, ierror)
    integer count, blocklength, oldtype, newtype, ierror
    integer(kind=mpi_address_kind) stride
MPI_Type_create_indexed_block(count, blocklength, array_of_displacements, oldtype, newtype, ierror)
    integer count, blocklength, array_of_displacements(*), oldtype, newtype, ierror
MPI_Type_create_keyval(type_copy_attr_fn, type_delete_attr_fn, type_keyval, extra_state, ierror)
    external type_copy_attr_fn, type_delete_attr_fn
    integer type_keyval, ierror
    integer(kind=mpi_address_kind) extra_state
MPI_Type_create_resized(oldtype, lb, extent, newtype, ierror)
    integer oldtype, newtype, ierror
    integer(kind=mpi_address_kind) lb, extent
MPI_Type_create_struct(count, array_of_blocklengths, array_of_displacements, array_of_types, newtype, ierror)
    integer count, array_of_blocklengths(*), array_of_types(*), newtype, ierror
    integer(kind=mpi_address_kind) array_of_displacements(*)
MPI_Type_create_subarray(ndims, array_of_sizes, array_of_subsizes, array_of_starts, order, oldtype, newtype, ierror)
    integer ndims, array_of_sizes(*), array_of_subsizes(*), array_of_starts(*), order, oldtype, newtype, ierror
MPI_Type_delete_attr(type, type_keyval, ierror)
    integer type, type_keyval, ierror
MPI_Type_dup(type, newtype, ierror)
    integer type, newtype, ierror

MPI_Type_free_keyval(type_keyval, ierror)
    integer type_keyval, ierror
MPI_Type_get_attr(type, type_keyval, attribute_val, flag, ierror)
    integer type, type_keyval, ierror
    integer(kind=mpi_address_kind) attribute_val
    logical flag
MPI_Type_get_contents(datatype, max_integers, max_addresses, max_datatypes, array_of_integers, array_of_addresses, array_of_datatypes, ierror)
    integer datatype, max_integers, max_addresses, max_datatypes, array_of_integers(*), array_of_datatypes(*), ierror
    integer(kind=mpi_address_kind) array_of_addresses(*)
MPI_Type_get_envelope(datatype, num_integers, num_addresses, num_datatypes, combiner, ierror)
    integer datatype, num_integers, num_addresses, num_datatypes, combiner, ierror
MPI_Type_get_extent(datatype, lb, extent, ierror)
    integer datatype, ierror
    integer(kind=mpi_address_kind) lb, extent
MPI_Type_get_name(type, type_name, resultlen, ierror)
    integer type, resultlen, ierror
    character*(*) type_name
MPI_Type_get_true_extent(datatype, true_lb, true_extent, ierror)
    integer datatype, ierror
    integer(kind=mpi_address_kind) true_lb, true_extent
MPI_Type_match_size(typeclass, size, type, ierror)
    integer typeclass, size, type, ierror
MPI_Type_set_attr(type, type_keyval, attribute_val, ierror)
    integer type, type_keyval, ierror
    integer(kind=mpi_address_kind) attribute_val
MPI_Type_set_name(type, type_name, ierror)
    integer type, ierror
    character*(*) type_name
MPI_Unpack_external(datarep, inbuf, insize, position, outbuf, outcount, datatype, ierror)

    integer outcount, datatype, ierror
    integer(kind=mpi_address_kind) insize, position
    character*(*) datarep
    <type> inbuf(*), outbuf(*)
MPI_Unpublish_name(service_name, info, port_name, ierror)
    integer info, ierror
    character*(*) service_name, port_name
MPI_Win_call_errhandler(win, errorcode, ierror)
    integer win, errorcode, ierror
MPI_Win_complete(win, ierror)
    integer win, ierror
MPI_Win_create(base, size, disp_unit, info, comm, win, ierror)
    <type> base(*)
    integer(kind=mpi_address_kind) size
    integer disp_unit, info, comm, win, ierror
MPI_Win_create_errhandler(function, errhandler, ierror)
    external function
    integer errhandler, ierror
MPI_Win_create_keyval(win_copy_attr_fn, win_delete_attr_fn, win_keyval, extra_state, ierror)
    external win_copy_attr_fn, win_delete_attr_fn
    integer win_keyval, ierror
    integer(kind=mpi_address_kind) extra_state
MPI_Win_delete_attr(win, win_keyval, ierror)
    integer win, win_keyval, ierror
MPI_Win_fence(assert, win, ierror)
    integer assert, win, ierror
MPI_Win_free(win, ierror)
    integer win, ierror
MPI_Win_free_keyval(win_keyval, ierror)
    integer win_keyval, ierror
MPI_Win_get_attr(win, win_keyval, attribute_val, flag, ierror)
    integer win, win_keyval, ierror

    integer(kind=mpi_address_kind) attribute_val
    logical flag
MPI_Win_get_errhandler(win, errhandler, ierror)
    integer win, errhandler, ierror
MPI_Win_get_group(win, group, ierror)
    integer win, group, ierror
MPI_Win_get_name(win, win_name, resultlen, ierror)
    integer win, resultlen, ierror
    character*(*) win_name
MPI_Win_lock(lock_type, rank, assert, win, ierror)
    integer lock_type, rank, assert, win, ierror
MPI_Win_post(group, assert, win, ierror)
    integer group, assert, win, ierror
MPI_Win_set_attr(win, win_keyval, attribute_val, ierror)
    integer win, win_keyval, ierror
    integer(kind=mpi_address_kind) attribute_val
MPI_Win_set_errhandler(win, errhandler, ierror)
    integer win, errhandler, ierror
MPI_Win_set_name(win, win_name, ierror)
    integer win, ierror
    character*(*) win_name
MPI_Win_start(group, assert, win, ierror)
    integer group, assert, win, ierror
MPI_Win_test(win, flag, ierror)
    integer win, ierror
    logical flag
MPI_Win_unlock(rank, win, ierror)
    integer rank, win, ierror
MPI_Win_wait(win, ierror)
    integer win, ierror

18.5 258

MPI MPI-2 MPI MPIF 1994 MPI MPI MPI MPI-2 MPI-2 MPI-1 I/O MPI 259

19 MPI-1 MPI MPI MPI-2 19.1 MPI-1 MPI MPI_Init PVM / MPI MPI-1 2 MPI-2 0 0 1 2 1 2 2 3 4 3 3 78 260

0 0 1 2 1 0 0 1 2 1 79 0 0 1 1 2 2 3 4 4 0 0 1 1 2 2 3 4 4 80 ROOT ROOT ROOT ROOT ROOT ROOT ROOT ROOT 261

ROOT MPI_ROOT ROOT ROOT MPI_PROC_NULL ROOT socket 19.2 MPI MPI-2 MPI_COMM_SPAWN(command, argv,maxprocs,info,root,comm,intercomm,array_of_errcodes) IN command IN argv command IN maxprocs MPI IN info IN root IN comm OUT intercomm OUT array_of_errcodes Int MPI_Comm_spawn(char * command, char ** argv, int maxprocs, MPI_Info info, int root, MPI_Comm comm, MPI_Comm * intercomm, int * array_of_errcodes) MPI_COMM_SPAWN(COMMAND, ARGV, MAXPROCS, INFO, ROOT, COMM, INTERCOMM, ARRAY_OF_ERRCODES, IERROR) INTEGER INFO, MAXPROCS, ROOT, COMM, INTERCOMM ARRAY_OF_ERRCODES(*), IERROR MPI 129 MPI_COMM_SPAWN MPI MPI_COMM_SPAWN MPI_COMM_SPAWN command argv maxprocs info MPI ROOT ROOT comm intercomm array_of_errcodes 262

MPI_INIT MPI_COMM_SPAWN MPI_INIT MPI_COMM_GET_PARENT MPI_COMM_GET_PARENT(parent) OUT parent int MPI_Comm_get_parent(MPI_Comm * parent) MPI_COMM_GET_PARENT(PARENT, IERROR) INTEGER PARENT, IERROR MPI 130 MPI_COMM_GET_PARENT MPI_COMM_GET_PARENT MPI_COMM_SPAWN MPI_COMM_SPAWN_MULTIPLE MPI_COMM_SPAWN MPI_COMM_SPAWN_MULTIPLE MPI_COMM_SPAWN_MULTIPLE 263
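A minimal C sketch of the spawn pattern described above. The worker executable name "./worker" and the count of 4 are hypothetical; the same source can serve as both parent and worker because MPI_COMM_GET_PARENT returns MPI_COMM_NULL in a process that was not spawned.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Comm parent, workers;
    int msg;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    if (parent == MPI_COMM_NULL) {
        /* Not spawned: act as the parent and spawn 4 workers. */
        MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &workers, MPI_ERRCODES_IGNORE);
        msg = 42;
        /* Root side of an intercommunicator broadcast uses MPI_ROOT. */
        MPI_Bcast(&msg, 1, MPI_INT, MPI_ROOT, workers);
    } else {
        /* Spawned worker: receive the broadcast from the parent (rank 0
           of the remote group). */
        MPI_Bcast(&msg, 1, MPI_INT, 0, parent);
        printf("worker received %d\n", msg);
    }
    MPI_Finalize();
    return 0;
}
```

Run with more than one process only if the resource manager allows dynamic process creation; the spawned processes communicate with the parent exclusively through the returned intercommunicator.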

MPI_COMM_SPAWN_MULTIPLE(count, array_of_commands, array_of_argv, array_of_maxprocs, array_of_info, root, comm, intercomm, array_of_errcodes)
IN count
IN array_of_commands
IN array_of_argv
IN array_of_maxprocs
IN array_of_info
IN root
IN comm
OUT intercomm
OUT array_of_errcodes
int MPI_Comm_spawn_multiple(int count, char ** array_of_commands, char *** array_of_argv, int * array_of_maxprocs, MPI_Info * array_of_info, int root, MPI_Comm comm, MPI_Comm * intercomm, int * array_of_errcodes)
MPI_COMM_SPAWN_MULTIPLE(COUNT, ARRAY_OF_COMMANDS, ARRAY_OF_ARGV, ARRAY_OF_MAXPROCS, ARRAY_OF_INFO, ROOT, COMM, INTERCOMM, ARRAY_OF_ERRCODES, IERROR)
INTEGER COUNT, ARRAY_OF_MAXPROCS(*), ARRAY_OF_INFO(*), ROOT, COMM, INTERCOMM, ARRAY_OF_ERRCODES(*), IERROR
CHARACTER *(*) ARRAY_OF_COMMANDS(*), ARRAY_OF_ARGV(COUNT, *)
MPI 131 MPI_COMM_SPAWN_MULTIPLE
MPI_Comm_spawn_multiple generalizes MPI_Comm_spawn: it launches count possibly different executables in a single call, and all spawned processes share one intercommunicator with the parents.
19.3

MPI_OPEN_PORT(info, port_name)
IN info
OUT port_name
int MPI_Open_port(MPI_Info info, char * port_name)
MPI_OPEN_PORT(INFO, PORT_NAME, IERROR)
CHARACTER *(*) PORT_NAME
INTEGER INFO, IERROR
MPI 132 MPI_OPEN_PORT
MPI_COMM_ACCEPT(port_name, info, root, comm, newcomm)
IN port_name
IN info
IN root
IN comm
OUT newcomm
int MPI_Comm_accept(char * port_name, MPI_Info info, int root, MPI_Comm comm, MPI_Comm * newcomm)
MPI_COMM_ACCEPT(PORT_NAME, INFO, ROOT, COMM, NEWCOMM, IERROR)
CHARACTER *(*) PORT_NAME
INTEGER INFO, ROOT, COMM, NEWCOMM, IERROR
MPI 133 MPI_COMM_ACCEPT
MPI_COMM_ACCEPT waits at port_name for a client to connect and returns the client/server intercommunicator in newcomm.
MPI_CLOSE_PORT(port_name)
IN port_name
int MPI_Close_port(char * port_name)
MPI_CLOSE_PORT(PORT_NAME, IERROR)
CHARACTER *(*) PORT_NAME
INTEGER IERROR
MPI 134 MPI_CLOSE_PORT
MPI_CLOSE_PORT releases the port identified by port_name.

MPI_COMM_CONNECT(port_name, info, root, comm, newcomm) IN port_name IN info IN root IN comm OUT newcomm int MPI_Comm_connect(char * port_name, MPI_Info info, int root, MPI_Comm comm, MPI_Comm * newcomm) MPI_COMM_CONNECT(PORT_NAME, INFO, ROOT, COMM, NEWCOMM, IERROR) CHARACTER *(*) PORT_NAME INTEGER INFO, ROOT, COMM, NEWCOMM, IERROR MPI 135 MPI_COMM_CONNECT MPI_COMM_CONNECT port_name port_name info root newcomm MPI_COMM_DISCONNECT MPI_COMM_DISCONNECT(comm) INOUT comm int MPI_Comm_disconnect(MPI_Comm * comm) MPI_COMM_DISCONNECT(COMM, IERROR) INTEGER COMM, IERROR MPI 136 MPI_COMM_DISCONNECT comm / / 266

MPI_PUBLISH_NAME(service_name, info, port_name)
IN service_name
IN info
IN port_name
int MPI_Publish_name(char * service_name, MPI_Info info, char * port_name)
MPI_PUBLISH_NAME(SERVICE_NAME, INFO, PORT_NAME, IERROR)
INTEGER INFO, IERROR
CHARACTER *(*) SERVICE_NAME, PORT_NAME
MPI 137 MPI_PUBLISH_NAME
MPI_LOOKUP_NAME(service_name, info, port_name)
IN service_name
IN info
OUT port_name
int MPI_Lookup_name(char * service_name, MPI_Info info, char * port_name)
MPI_LOOKUP_NAME(SERVICE_NAME, INFO, PORT_NAME, IERROR)
CHARACTER *(*) SERVICE_NAME, PORT_NAME
INTEGER INFO, IERROR
MPI 138 MPI_LOOKUP_NAME
MPI_UNPUBLISH_NAME(service_name, info, port_name)
IN service_name
IN info
IN port_name
int MPI_Unpublish_name(char * service_name, MPI_Info info, char * port_name)
MPI_UNPUBLISH_NAME(SERVICE_NAME, INFO, PORT_NAME, IERROR)
INTEGER INFO, IERROR
CHARACTER *(*) SERVICE_NAME, PORT_NAME
MPI 139 MPI_UNPUBLISH_NAME
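The two sides of the connection sequence above can be sketched as follows in C. The service name "ocean" is made up; everything else uses only the calls defined in this section.

```c
#include <mpi.h>

/* Server: open a port, publish it under a service name, accept one
   client, then tear everything down in reverse order. */
static void run_server(void)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm client;

    MPI_Open_port(MPI_INFO_NULL, port);
    MPI_Publish_name("ocean", MPI_INFO_NULL, port);
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client); /* blocks */
    /* ... exchange messages over the intercommunicator `client` ... */
    MPI_Comm_disconnect(&client);
    MPI_Unpublish_name("ocean", MPI_INFO_NULL, port);
    MPI_Close_port(port);
}

/* Client: resolve the service name to a port name and connect. */
static void run_client(void)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm server;

    MPI_Lookup_name("ocean", MPI_INFO_NULL, port);
    MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);
    /* ... communicate with the server over `server` ... */
    MPI_Comm_disconnect(&server);
}
```

Publishing a name requires a running name service (normally provided by the MPI runtime); without one, the client must obtain the port string out of band, e.g. from a file or the command line.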

19.4 socket MPI socket MPI_COMM_JOIN(fd, intercomm) IN fd socket OUT intercomm socket int MPI_Comm_join(int fd, MPI_Comm * intercomm) MPI_COMM_JOIN(FD, INTERCOMM,IERROR) INTEGER FD, INTERCOMM, IERROR MPI 140 MPI_COMM_JOIN MPI_COMM_JOIN socket socket MPI 19.5 socket 268

269 20 MPI-2 MPI-2 20.1 MPI-2 MPI MPI-2 MPI-2 MPI-2 MPI-2 MPI-2 1 fence 2 MPI_WIN_START MPI_WIN_COMPLETE MPI_WIN_POST MPI_WIN_WAIT MPI_WIN_POST MPI_WIN_WAIT MPI_WIN_START MPI_WIN_POST MPI_WIN_COMPLETE 3

20.2 20.2.1 MPI_WIN_CREATE(base, size, disp_unit, info, comm,win) IN base IN size IN disp_unit IN info IN comm OUT win int MPI_Win_create(void * base, MPI_Aint size, int disp_unit, MPI_Info info, MPI_Comm comm, MPI_Win * win) MPI_WIN_CREATE(BASE, SIZE, DISP_UNIT, INFO, COMM, WIN, IERROR) <type> BASE(*) INTEGER (KIND=MPI_ADDRESS_KIND) SIZE INTEGER DISP_UNIT, INFO, COMM, WIN, IERROR MPI 141 MPI_WIN_CREATE MPI_WIN_CREATE base size disp_unit info size=0 comm MPI_WIN_FREE(win) INOUT win int MPI_Win_free(MPI_Win * win) MPI_WIN_FREE(WIN, IERROR) INTEGER WIN, IERROR MPI 142 MPI_WIN_FREE MPI_WIN_CREAT MPI_WIN_FREE 270

MPI_WIN_NULL......... 0 1 N-1 81 20.2.2 MPI_PUT(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win) IN origin_addr IN origin_count IN origin_datatype IN target_rank IN target_disp IN target_count IN target_datatype IN win int MPI_Put(void * origin_addr, int origin_count, MPI_Datatype origin_datatype, int target_rank, MPI_Aint target_disp, int target_count, MPI_Datatype target_datatype, MPI_Win win) MPI_PUT(ORIGIN_ADDR, ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK, TARGET_DISP, TARGET_COUNT, TARGET_DATATYPE, WIN, IERROR) <type> ORIGIN_ADDR(*) INTEGER (KIND=MPI_ADDRESS_KIND) TARGET_DISP INTEGER ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK, TARGET_COUNT, TARGET_DATATYPE, WIN, IERROR MPI 143 MPI_PUT 271

MPI_PUT transfers origin_count elements of origin_datatype starting at origin_addr in the origin process to the process with rank target_rank; the data is placed at target_address = base + target_disp * disp_unit, where base and disp_unit are the values the target supplied when it created the window, and target_count and target_datatype describe the target buffer. For example, with 3 elements of datatype type1, the call MPI_PUT(buf, 3, type1, j, 4, 3, type1, win) writes buf of process i to displacement 4 in the window of process j.
(Figure 82: MPI_PUT)
20.2.3
MPI_GET(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)
OUT origin_addr
IN origin_count
IN origin_datatype
IN target_rank
IN target_disp
IN target_count
IN target_datatype
IN win
int MPI_Get(void *origin_addr, int origin_count, MPI_Datatype origin_datatype, int target_rank, MPI_Aint target_disp, int target_count, MPI_Datatype target_datatype, MPI_Win win)
MPI_GET(ORIGIN_ADDR, ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK, TARGET_DISP, TARGET_COUNT, TARGET_DATATYPE, WIN, IERROR)
<type> ORIGIN_ADDR(*)
INTEGER (KIND=MPI_ADDRESS_KIND) TARGET_DISP
INTEGER ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK, TARGET_COUNT, TARGET_DATATYPE, WIN, IERROR
MPI 144 MPI_GET

MPI_GET MPI_PUT target_rank target_disp target_count target_datatype origin_add origin_count origin_datatype 3 type1 MPI_GET(buf,3,type1,j,4,3,type1,win) buf i 4 3 type1 0 1 2 3 4 j 83 MPI_GET 20.2.4 MPI_ACCUMULATE 84 origin_addr origin_count origin_datatype target_rank target_disp target_count target_datatype op 273

MPI_ACCUMULATE(origin_addr, origin_count, origin_datatype, target_rank, target_disp target_count, target_datatype, op, win) IN origin_addr IN origin_count IN origin_datatype IN target_rank IN target_disp IN target_count IN target_datatype IN op IN win int MPI_Accumulate(void * origin_addr, int origin_count, MPI_Datatype origin_datatype, int target_rank, MPI_Aint target_disp, int target_count, MPI_Datatype target_datatype, MPI_Op op, MPI_Win win) MPI_ACCUMULATE(ORIGIN_ADDR, ORIGIN_COUNT, ORIGIN_DATATYPE, TARGET_RANK, TARGET_DISP, TARGET_COUNT, TARGET_DATATYPE, OP, WIN, IERROR) <type> ORIGIN_ADDR(*) INTEGER (KIND=MPI_ADDRESS_KIND) TARGET_DISP INTEGER ORIGIN_ADDR, ORIGIN_DATATYPE, TARGET_RANK, TARGET_COUNT, TARGET_DATATYPE, OP, WIN, IERROR MPI 145 MPI_ACCUMULATE 3 type1 3 type1 buf MPI_SUM MPI_ACCUMULATE(buf, 3, type1, j, 4, 3, type1, MPI_SUM, win) 85 MPI_ACCUMULATE 274

MPI_WIN_GET_GROUP(win, group) IN win OUT group int MPI_Win_get_group(MPI_Win win, MPI_Group * group) MPI_WIN_GET_GROUP(WIN,GROUP, IERROR) INTEGER WIN, GROUP, IERROR MPI 146 MPI_WIN_GET_GROUP MPI_WIN_GET_GROUP win group win 20.3 20.3.1 MPI_WIN_FENCE(assert, win) IN assert IN win int MPI_Win_fence(int assert, MPI_Win win) MPI_WIN_FENCE(ASSERT,WIN, IERROR) INTEGER ASSERT, WIN, IERROR MPI 147 MPI_WIN_FENCE MPI_WIN_FENCE win 275

0 1 N-1 MPI_WIN_FENCE MPI_WIN_FENCE MPI_WIN_FENCE MPI_GET MPI_PUT MPI_WIN_FENCE MPI_WIN_FENCE MPI_WIN_FENCE 86 MPI_WIN_FENCE MPI_WIN_FENCE MPI_WIN_FENCE MPI_WIN_FENCE 1 20.3.2 MPI_WIN_POST MPI_WIN_START MPI_WIN_PUT MPI_WIN_COMPLETE MPI_WIN_WAIT 87 276
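The fence pattern above can be sketched as a complete C program. It assumes at least two processes; each exposes one int in a window, and rank 0 puts a value into rank 1's window between two fences.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, buf = 0, val;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every process exposes one int; disp_unit = sizeof(int), so
       target displacements are counted in ints. */
    MPI_Win_create(&buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);                 /* open the access epoch */
    if (rank == 0) {
        val = 99;
        MPI_Put(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win); /* into rank 1 */
    }
    MPI_Win_fence(0, win);                 /* close the epoch: RMA complete */

    if (rank == 1)
        printf("buf = %d\n", buf);         /* 99 after the second fence */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Between the two fences no process may read or write its own window buffer directly; only after the closing fence is `buf` guaranteed to hold the transferred value.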

MPI_WIN_START(group, assert, win)
IN group
IN assert
IN win
int MPI_Win_start(MPI_Group group, int assert, MPI_Win win)
MPI_WIN_START(GROUP, ASSERT, WIN, IERROR)
INTEGER GROUP, ASSERT, WIN, IERROR
MPI 148 MPI_WIN_START
MPI_WIN_START opens an access epoch toward the processes in group; assert supplies optimization hints.
MPI_WIN_COMPLETE(win)
IN win
int MPI_Win_complete(MPI_Win win)
MPI_WIN_COMPLETE(WIN, IERROR)
INTEGER WIN, IERROR
MPI 149 MPI_WIN_COMPLETE
MPI_WIN_COMPLETE ends the access epoch opened by MPI_WIN_START.
MPI_WIN_POST(group, assert, win)
IN group
IN assert
IN win
int MPI_Win_post(MPI_Group group, int assert, MPI_Win win)
MPI_WIN_POST(GROUP, ASSERT, WIN, IERROR)
INTEGER GROUP, ASSERT, WIN, IERROR
MPI 150 MPI_WIN_POST
MPI_WIN_POST opens an exposure epoch for the processes in group; it matches the MPI_WIN_START calls made by those processes.

MPI_WIN_WAIT(win) IN win int MPI_Win_wait(MPI_Win win) MPI_WIN_WAIT(WIN, IERROR) INTEGER WIN, IERROR MPI 151 MPI_WIN_WAIT MPI_WIN_WAIT MPI_WIN_POST MPI_WIN_COMPLETE MPI_WIN_TEST(win,flag) IN win OUT flag int MPI_Win_test(MPI_Win win, int * flag) MPI_WIN_TEST(WIN,FLAG,IERROR) INTEGER WIN, IERROR LOGICAL FLAG MPI 152 MPI_WIN_TEST MPI_WIN_TEST flag=true MPI_WIN_WAIT flag=false MPI_WIN_WAIT 20.3.3 278
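A C fragment sketching the generalized active-target handshake for two processes; it assumes `rank`, `win`, and `val` are already set up as in the fence example. Rank 1 exposes its window with MPI_WIN_POST/MPI_WIN_WAIT while rank 0 accesses it between MPI_WIN_START and MPI_WIN_COMPLETE.

```c
/* Build single-member groups identifying the peer on each side. */
MPI_Group world_grp, peer_grp;
int rank0[1] = {0}, rank1[1] = {1};
MPI_Comm_group(MPI_COMM_WORLD, &world_grp);

if (rank == 1) {                          /* target side */
    MPI_Group_incl(world_grp, 1, rank0, &peer_grp);
    MPI_Win_post(peer_grp, 0, win);       /* expose the window to rank 0 */
    MPI_Win_wait(win);                    /* returns once rank 0 completed */
} else if (rank == 0) {                   /* origin side */
    MPI_Group_incl(world_grp, 1, rank1, &peer_grp);
    MPI_Win_start(peer_grp, 0, win);      /* matches the post on rank 1 */
    MPI_Put(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    MPI_Win_complete(win);                /* access epoch finished */
}
MPI_Group_free(&peer_grp);
MPI_Group_free(&world_grp);
```

Unlike the fence, only the two processes involved synchronize; the rest of the communicator is undisturbed. MPI_WIN_TEST can replace MPI_WIN_WAIT when the target wants to poll instead of block.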

i j k MPI_WIN_LOCK 1 j j MPI_WIN_PUT 2 1 MPI_WIN_UNLOCK j 2 MPI_WIN_LOCK j MPI_WIN_GET MPI_WIN_UNLOCK j 88 MPI_WIN_LOCK(lock_type, rank, assert, win) IN lock_type IN rank IN assert IN win int MPI_Win_lock(int lock_type, int rank, int assert, MPI_Win win) MPI_WIN_LOCK(LOCK_TYPE, RANK, ASSERT, WIN, IERROR) INTEGER LOCK_TYPE, RANK, ASSERT, WIN, IERROR MPI 153 MPI_WIN_LOCK MPI_WIN_LOCK 279

MPI_WIN_UNLOCK(rank, win) IN rank IN win int MPI_Win_unlock(int rank, MPI_Win win) MPI_WIN_UNLOCK(RANK, WIN, IERROR) INTEGER RANK, WIN, IERROR MPI 154 MPI_WIN_UNLOCK MPI_WIN_UNLOCK rank MPI_WIN_LOCK 20.4 MPI-2 280
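The passive-target pattern of this subsection, sketched as a C fragment (again assuming `rank` and a window `win` exposing one int per process): rank 1 updates rank 0's window while rank 0 makes no matching call at all.

```c
int val = 7;
if (rank == 1) {
    /* Exclusive lock on the window copy held by rank 0. */
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
    MPI_Put(&val, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
    /* On return from unlock, the put has taken effect at rank 0. */
    MPI_Win_unlock(0, win);
}
```

MPI_LOCK_SHARED would instead allow several origins to access the same target concurrently, which is safe when they touch disjoint locations or only read.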

281 21 I/O MPI-1 I/O I/O I/O MPI-2 I/O 21.1 MPI-2 I/O 1 2 3 MPI MPI MPI_WAIT MPI-2 MPI MPI MPI

20 I/O READ_AT WRITE_AT IREAD_AT IWRITE_AT READ WRITE IREAD IWRITE READ_SHARED WRITE_SHARED IREAD_SHARED IWRITE_SHARED READ_AT_ALL WRITE_AT_ALL READ_AT_ALL_BEGIN READ_AT_ALL_END WRITE_AT_ALL_BEGIN WRITE_AT_ALL_END READ_ALL WRITE_ALL READ_ALL_BEGIN READ_ALL_END WRITE_ALL_BEGIN WRITE_ALL_END READ_ORDERED WRITE_ORDERED READ_ORDERED_BEGIN READ_ORDERED_END WRITE_ORDERED_BEGIN WRITE_ORDERED_END 21.2 MPI_FILE_OPEN(comm, filename, amode, info, fh) IN comm IN filename IN amode IN info OUT fh int MPI_File_open(MPI_Comm comm, char * filename, int amode, MPI_Info info, MPI_File * fh) MPI_FILE_OPEN(COMM,FILENAME, AMODE, INFO, FH,IERROR) CHARACTER *(*) FILENAME INTEGER COMM, AMODE, INFO, FH, IERROR MPI 155 MPI_FILE_OPEN MPI_FILE_OPEN comm 282

filename filename amode info fh fh 21 9 21 MPI_MODE_RDONLY MPI_MODE_RDWR MPI_MODE_WRONLY MPI_MODE_CREATE MPI_MODE_EXCL MPI_MODE_DELETE_ON_CLOSE MPI_MODE_UNIQUE_OPEN MPI_MODE_SEQUENTIAL MPI_MODE_APPEND MPI_FILE_CLOSE(fh) INOUT fh int MPI_File_close(MPI_File * fh) MPI_FILE_CLOSE(FH,IERROR) INTEGER FH, IERROR MPI 156 MPI_FILE_CLOSE MPI_FILE_CLOSE fh fh MPI_FILE_DELETE(filename, info) IN filename IN info int MPI_File_delete(char * filename, MPI_Info info) MPI_FILE_DELETE(FILENAME, INFO, IERROR) CHARACTER *(*) FILENAME INTEGER INFO, IERROR MPI 157 MPI_FILE_DELETE MPI_FILE_DELETE filename 283
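A small C sketch of the open/close/delete life cycle just described; the filename "demo.dat" is made up. Open and close are collective over the communicator, while delete is a local call issued here by one process only.

```c
MPI_File fh;
int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

/* Collectively create the file (if absent) and open it read-write;
   access-mode flags are combined with bitwise OR. */
MPI_File_open(MPI_COMM_WORLD, "demo.dat",
              MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);

/* ... I/O on fh ... */

MPI_File_close(&fh);                       /* collective, like the open */
if (rank == 0)
    MPI_File_delete("demo.dat", MPI_INFO_NULL);
```

Passing MPI_MODE_DELETE_ON_CLOSE in amode would make the explicit delete unnecessary.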

MPI_FILE_SET_SIZE(fh,size) INOUT fh IN size int MPI_File_set_size(MPI_File fh, MPI_Offset size) MPI_FILE_SET_SIZE(FH, SIZE, IERROR) INTEGER FH, IERROR INTEGER (KIND=MPI_OFFSET_KIND) SIZE MPI 158 MPI_FILE_SET_SIZE MPI_FILE_SET_SIZE fh size size MPI_FILE_PREALLOCATE(fh, size) INOUT fh IN size int MPI_File_preallocate(MPI_File fh, MPI_Offset size) MPI_FILE_PREALLOCATE(FH, SIZE, IERROR) INTEGER FH, IERROR INTEGER (KIND=MPI_OFFSET_KIND) SIZE MPI 159 MPI_FILE_PREALLOCATE MPI_FILE_PREALLOCATE fh size size size size size MPI_FILE_GET_SIZE(fh,size) IN fh OUT size int MPI_File_get_size(MPI_File fh, MPI_Offset * size) MPI_FILE_GET_SIZE(FH, SIZE, IERROR) INTEGER FH, IERROR INTEGER (KIND=MPI_OFFSET_KIND) SIZE MPI 160 MPI_FILE_GET_SIZE MPI_FILE_GET_SIZE 284

MPI_FILE_GET_GROUP(fh,group) IN fh OUT group int MPI_File_get_group(MPI_File fh, MPI_Group * group) MPI_FILE_GET_GROUP( FH, GROUP, IERROR) INTEGER FH, GROUP, IERROR MPI 161 MPI_FILE_GET_GROUP MPI_FILE_GET_GROUP fh group comm MPI_FILE_GET_AMODE(fh, amode) IN fh OUT amode int MPI_File_get_amode(MPI_File fh, int * amode) MPI_FILE_GET_AMODE(FH, AMODE, IERROR) INTEGER FH, AMODE, IERROR MPI 162 MPI_FILE_GET_AMODE MPI_FILE_GET_AMODE fh amode MPI_FILE_SET_INFO(fh, info) INOUT fh IN info int MPI_File_set_info(MPI_File fh, MPI_Info info) MPI_FILE_SET_INFO(FH, INFO, IERROR) INTEGER FH, INFO, IERROR MPI 163 MPI_FILE_SET_INFO MPI_FILE_SET_INFO fh fh MPI_FILE_GET_INFO(fh, info_used) IN fh OUT info_used int MPI_File_get_info(MPI_file fh, MPI_Info * info_used) MPI_FILE_GET_INFO(FH, INFO_USED,IERROR) INTEGER FH,INFO_USED, IERROR MPI 164 MPI_FILE_GET_INFO 285

MPI_FILE_GET_INFO returns in info_used the hints currently associated with the file handle fh.
21.3
21.3.1
MPI_FILE_READ_AT(fh, offset, buf, count, datatype, status)
IN fh
IN offset
OUT buf
IN count
IN datatype
OUT status
int MPI_File_read_at(MPI_File fh, MPI_Offset offset, void * buf, int count, MPI_Datatype datatype, MPI_Status * status)
MPI_FILE_READ_AT(FH, OFFSET, BUF, COUNT, DATATYPE, STATUS, IERROR)
<type> BUF(*)
INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR
INTEGER(KIND=MPI_OFFSET_KIND) OFFSET
MPI 165 MPI_FILE_READ_AT
MPI_FILE_READ_AT reads count elements of datatype from the file fh, starting at the explicit offset offset, into buf; status reports the outcome of the read.

type1 offset=100 MPI_FILE_READ_AT(fh,100,buf,5,type1,status) buf 89 MPI_FILE_READ_AT MPI_FILE_WRITE_AT(fh, offset, buf, count, datatype,status) INOUT fh IN offset IN buf IN count IN datatype OUT status int MPI_File_write_at(MPI_File fh, MPI_Offset offset, void * buf, int count, MPI_Datatype datatype, MPI_Status * status) MPI_FILE_WRITE_AT(FH, OFFSET, BUF, COUNT, DATATYPE, STATUS, IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR INTEGER (KIND=MPI_OFFSET_KIND) OFFSET MPI 166 MPI_FILE_WRITE_AT MPI_FILE_WRITE_AT MPI_FILE_READ_AT fh offset buf count datatype status 287

type1 offset=100 MPI_FILE_WRITE_AT(fh,100,buf,5,type1,status) buf 90 MPI_FILE_WRITE_AT MPI_FILE_READ_AT_ALL(fh, offset,but,count,datatype,status) IN fh IN offset OUT buf IN count IN datatype OUT status int MPI_File_read_at_all(MPI_File fh, MPI_Offset offset, void *buf, int count, MPI_Datatype datatype, MPI_Status * status) MPI_FILE_READ_AT_ALL(FH, OFFSET,BUF,COUNT,DATATYPE,STATUS, IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR INTEGER (KIND=MPI_OFFSET_KIND) OFFSET MPI 167 MPI_FILE_READ_AT_ALL MPI_FILE_READ_AT_ALL fh MPI_FILE_READ_AT 288
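A C sketch of explicit-offset access with the routines above, assuming `fh` was opened read-write on MPI_COMM_WORLD with the default view (so offsets are counted in bytes). Each rank writes 10 ints into its own disjoint region and reads them back.

```c
int out[10], in[10], i, rank;
MPI_Status st;
MPI_Offset off;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
for (i = 0; i < 10; i++)
    out[i] = rank * 10 + i;

/* Disjoint byte offsets: rank r owns bytes [r*10*sizeof(int), ...). */
off = (MPI_Offset)rank * 10 * sizeof(int);

MPI_File_write_at(fh, off, out, 10, MPI_INT, &st);
/* The collective variant would coordinate all ranks for better
   performance on striped file systems: */
MPI_File_read_at_all(fh, off, in, 10, MPI_INT, &st);
```

Because every call carries its own offset, no file-pointer state is shared between the calls, which makes this form convenient for multithreaded access.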

MPI_FILE_WRITE_AT_ALL(fh, offset, buf, count, datatype, status) INOUT fh IN offset IN buf IN count IN datatype OUT status int MPI_File_write_at_all(MPI_File fh, MPI_Offset offset, void * buf, int count, MPI_Datatype datatype, MPI_Status * status) MPI_FILE_WRITE_AT_ALL(FH, OFFSET, BUF, COUNT, DATATYPE, STATUS, IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR INTEGER (KIND=MPI_OFFSET_KIND) OFFSET MPI 168 MPI_FILE_WRITE_AT_ALL MPI_FILE_WRITE_AT_ALL MPI_FILE_READ_AT_ALL fh MPI_FILE_WRITE_AT 21.3.2 MPI_FILE_IREAD_AT MPI_FILE_READ_AT fh offset count datatype buf request request MPI_WAIT MPI_TEST 289

MPI_FILE_IREAD_AT(fh, offset,buf, count, datatype, request) IN fh IN offset OUT buf IN count IN datatype OUT request int MPI_File_iread_at(MPI_File fh, MPI_Offset offset, void * buf, int count, MPI_Datatype datatype, MPI_Request * request) MPI_FILE_IREAD_AT(FH,OFFSET,BUF,COUNT,DATATYPE,REQUEST,IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, REQUEST, IERROR INTEGER (KIND=MPI_OFFSET_KIND) OFFSET MPI 169 MPI_FILE_IREAD_AT MPI_FILE_IWRITE_AT(fh, offset,buf, count, datatype, request) INOUT fh IN offset IN buf IN count IN datatype OUT request int MPI_File_iwrite_at(MPI_File fh, MPI_Offset offset, void * buf, int count, MPI_Datatype datatype, MPI_Request * request) MPI_FILE_IWRITE_AT(FH,OFFSET,BUF,COUNT,DATATYPE,REQUEST,IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, REQUEST, IERROR INTEGER (KIND=MPI_OFFSET_KIND) OFFSET MPI 170 MPI_FILE_IWRITE_AT MPI_FILE_IWRITE_AT MPI_FILE_WRITE_AT fh offset count datatype request buf request MPI_WAIT 290
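The nonblocking variants can be sketched like this in C (assuming `fh` and `rank` as before): the write is started, useful work overlaps it, and MPI_WAIT completes the request exactly as for nonblocking communication.

```c
int data[10];
MPI_Request req;
MPI_Status st;
MPI_Offset off = (MPI_Offset)rank * sizeof(data);

/* Start the write at an explicit offset; returns immediately. */
MPI_File_iwrite_at(fh, off, data, 10, MPI_INT, &req);

/* ... computation overlapped with the file write; `data` must not be
   modified until the request completes ... */

MPI_Wait(&req, &st);   /* the write is complete after this returns */
```

MPI_Test can be used instead of MPI_Wait to poll for completion without blocking.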

21.3.3 MPI-2 MPI_WAIT MPI_WAIT 0 1 N-1 MPI_FILE_..._BEGIN MPI_FILE_..._BEGIN MPI_FILE_..._BEGIN......... MPI_FILE_..._END MPI_FILE_..._END MPI_FILE_..._END 91 MPI_FILE_READ_AT_ALL_BEGIN(fh, offset, buf, count, datatype) IN fh IN offset OUT buf IN count IN datatype int MPI_File_read_at_all_begin(MPI_File fh, MPI_Offset offset, void * buf, int count, MPI_Datatype datatype) MPI_FILE_READ_AT_ALL_BEGIN(FH, OFFSET, BUF, COUNT, DATATYPE, IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, IERROR INTEGER (KIND=MPI_OFFSET_KIND) OFFSET MPI 171 MPI_FILE_READ_AT_ALL_BEGIN MPI_FILE_READ_AT_ALL_BEGIN fh offset count datatype 291

buf MPI_FILE_READ_AT_ALL_END MPI_FILE_READ_AT_ALL_END buf MPI_FILE_READ_AT_ALL_END(fh, buf, status) IN fh OUT buf OUT status int MPI_File_read_at_all_end(MPI_File fh, void * buf, MPI_Status *status) MPI_FILE_READ_AT_ALL_END(FH, BUF, STATUS, IERROR) <type> BUF(*) INTEGER FH, STATUS(MPI_STATUS_SIZE), IERROR MPI 172 MPI_FILE_READ_AT_ALL_END MPI_FILE_READ_AT_ALL_END MPI_FILE_READ_AT_ALL_BEGIN fh buf MPI_FILE_READ_AT_ALL_BEGIN buf MPI_FILE_WRITE_AT_ALL_BEGIN(fh, offset, buf, count, datatype) INOUT fh IN offset IN buf IN count IN datatype int MPI_File_write_at_all_begin(MPI_File fh, MPI_Offset offset, void * buf, int count, MPI_Datatype datatype) MPI_FILE_WRITE_AT_ALL_BEGIN(FH, OFFSET, BUF, COUNT, DATATYPE, IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, IERROR INTEGER (KIND=MPI_OFFSET_KIND) OFFSET MPI 173 MPI_FILE_WRITE_AT_ALL_BEGIN MPI_FILE_WRITE_AT_ALL_BEGIN fh offset buf count datatype MPI_FILE_READ_AT_ALL_BEGIN MPI_FILE_WRITE_AT_ALL_END 292
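The split-collective pair just defined can be sketched in C (assuming `fh`, `rank`, and a disjoint per-rank `off` as in the earlier examples): the begin call starts the collective read, and the matching end call delivers the data.

```c
int in[10];
MPI_Status st;

/* Start the collective read at an explicit offset; all processes of
   the communicator that opened fh must participate. */
MPI_File_read_at_all_begin(fh, off, in, 10, MPI_INT);

/* ... other work; `in` must not be read or written here ... */

/* Complete the read: only now may `in` be used. */
MPI_File_read_at_all_end(fh, in, &st);
```

Only one split-collective operation per file handle may be outstanding at a time, and the buffer argument of the end call must be the same as in the begin call.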

MPI_FILE_WRITE_AT_ALL_END(fh, buf, status) INOUT fh IN buf OUT status int MPI_File_write_at_all_end(MPI_File fh, void * buf, MPI_Status *status) MPI_FILE_WRITE_AT_ALL_END(FH, BUF, STATUS, IERROR) <type> BUF(*) INTEGER FH, STATUS(MPI_STATUS_SIZE), IERROR MPI 174 MPI_FILE_WRITE_AT_ALL_END MPI_FILE_WRITE_AT_ALL_END MPI_FILE_WRITE_AT_ALL_BEGIN buf 21.4 1 2 3 92 293

294 21.4.1 < > MPI 93 MPI_FILE_SET_VIEW IO fh fh disp disp etype etype filetype etype etype filetype filetype etype filetype disp N filetype

MPI_FILE_SET_VIEW(fh, disp,etype,filetype,datarep,info) INOUT fh IN disp IN etype IN filetype IN datarep IN info int MPI_File_set_view(MPI_File fh, MPI_Offset disp, MPI_Datatype etype, MPI_Datatype filetype, char * datarep, MPI_Info info) MPI_FILE_SET_VIEW(FH, DISP, ETYPE, FILETYPE, DATAREP, INFO, IERROR) CHARACTER *(*) DATAREP INTEGER FH, ETYPE, FILETYPE,INFO,IERROR MPI 175 MPI_FILE_SET_VIEW MPI_FILE_SET_VIEW fh fh native internal external32 MPI native native native internal external32 94 internal MPI native external32 external32 295

MPI external32 MPI_FILE_GET_VIEW(fh, disp,etype,filetype,datarep) IN fh OUT disp OUT etype OUT filetype OUT datarep int MPI_File_get_view(MPI_File fh, MPI_Offset * disp, MPI_Datatype * etype, MPI_Datatype * filetype, char * datarep) MPI_FILE_GET_VIEW(FH, DISP,ETYPE,FILETYPE,DATAREP,IERROR) INTEGER FH, ETYPE,FILETYPE,IERROR CHARACTER *(*) DATAREP, INTEGER (KIND=MPI_OFFSET_KIND) DISP MPI 176 MPI_FILE_GET_VIEW MPI_FILE_GET_VIEW fh disp etype filetype datarep MPI_FILE_SET_VIEW MPI_FILE_SEEK(fh, offset, whence) INOUT fh IN offset IN whence offset int MPI_File_seek(MPI_File fh, MPI_Offset offset, int whence) MPI_FILE_SEEK(FH,OFFSET,WHENCE,IERROR) INTEGER FH, WHENCE, IERROR INTEGER (KIND=MPI_OFFSET_KIND) OFFSET MPI 177 MPI_FILE_SEEK MPI_FILE_SEEK fh offset whence whence MPI_SEEK_SET MPI_SEEK_CUR MPI_SEEK_END 22 22 MPI_SEEK_SET MPI_SEEK_CUR MPI_SEEK_END offset +offset +offset 296
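A C sketch of a file view that interleaves the processes' data, using only calls defined above (plus MPI_Type_vector/MPI_Type_commit from the datatype chapter). The etype is MPI_INT; the filetype lets each rank see only its own slot out of every round of `nprocs` ints.

```c
int rank, nprocs, buf[100];
MPI_Datatype ftype;
MPI_Status st;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

/* 100 rounds; in each round this rank owns 1 int out of nprocs. */
MPI_Type_vector(100, 1, nprocs, MPI_INT, &ftype);
MPI_Type_commit(&ftype);

/* disp shifts each rank to its own slot within the first round. */
MPI_File_set_view(fh, (MPI_Offset)rank * sizeof(int),
                  MPI_INT, ftype, "native", MPI_INFO_NULL);

/* A plain contiguous write now lands interleaved in the file. */
MPI_File_write(fh, buf, 100, MPI_INT, &st);

MPI_Type_free(&ftype);
```

With the view in place, all offsets and file pointers for `fh` are counted in etype units of the visible data, so each rank addresses its own stripe as if it were contiguous.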

MPI_FILE_GET_POSITION(fh, offset) IN fh OUT offset int MPI_File_get_position(MPI_File fh, MPI_Offset * offset) MPI_FILE_GET_POSITION(FH, OFFSET, IERROR) INTEGER FH, IERROR INTEGER (KIND=MPI_OFFSET_KIND) OFFSET MPI 178 MPI_FILE_GET_POSITION MPI_FILE_GET_POSITION etype 0 1 2 3 =3 95 MPI_FILE_GET_BYTE_OFFSET(fh, offset,disp) IN fh IN offset OUT disp int MPI_File_get_byte_offset(MPI_File fh, MPI_Offset offset, MPI_Offset * disp) MPI_FILE_GET_BYTE_OFFSET(FH, OFFSET, DISP, IERROR) INTEGER FH, IERROR INTEGER (KIND=MPI_OFFSET_KIND) OFFSET, DISP MPI 179 MPI_FILE_GET_BYTE_OFFSET MPI_FILE_GET_BYTE_OFFSET offset 297

disp offset 96 21.4.2 MPI_FILE_READ(fh, buf,count,datatype,status) INOUT fh OUT buf IN count IN datatype OUT status int MPI_File_read(MPI_file fh, void * buf, int count, MPI_Datatype datatype, MPI_Status * status) MPI_FILE_READ(FH, BUF, COUNT,DATATYPE, STATUS, IERROR) <type> BUF(*) INTEGER FH,COUNT,DATATYPE,STATUS,IERROR MPI 180 MPI_FILE_READ MPI_FILE_READ fh datatype count buf status 298

MPI_FILE_WRITE(fh, buf,count,datatype,status) INOUT fh IN buf IN count IN datatype OUT status int MPI_File_write(MPI_file fh, void * buf, int count, MPI_Datatype datatype, MPI_Status * status) MPI_FILE_WRITE(FH, BUF, COUNT,DATATYPE, STATUS, IERROR) <type> BUF(*) INTEGER FH,COUNT,DATATYPE,STATUS,IERROR MPI_FILE_WRITE status MPI 181 MPI_FILE_WRITE buf count datatype MPI_FILE_WRITE MPI_FILE_READ MPI_FILE_READ_ALL(fh, buf,count,datatype,status) INOUT fh OUT buf IN count IN datatype OUT status int MPI_File_read_all(MPI_file fh, void * buf, int count, MPI_Datatype datatype, MPI_Status * status) MPI_FILE_READ_ALL(FH, BUF, COUNT,DATATYPE, STATUS, IERROR) <type> BUF(*) INTEGER FH,COUNT,DATATYPE,STATUS,IERROR MPI 182 MPI_FILE_READ_ALL MPI_FILE_READ_ALL fh MPI_FILE_READ count datatype buf status 299

MPI_FILE_WRITE_ALL(fh, buf,count,datatype,status) INOUT fh IN buf IN count IN datatype OUT status int MPI_File_write_all(MPI_file fh, void * buf, int count, MPI_Datatype datatype, MPI_Status * status) MPI_FILE_WRITE_ALL(FH, BUF, COUNT,DATATYPE, STATUS, IERROR) <type> BUF(*) INTEGER FH,COUNT,DATATYPE,STATUS,IERROR MPI 183 MPI_FILE_WRITE_ALL MPI_FILE_WRITE_ALL MPI_FILE_READ_ALL fh MPI_FILE_WRITE buf count datatype status 21.4.3 MPI_WAIT MPI_FILE_IREAD(fh, buf, count,datatype,request) INOUT fh OUT buf IN count IN datatype OUT request int MPI_File_iread(MPI_File fh, void * buf, int count,datatype datatype, MPI_Request * request) MPI_FILE_IREAD(FH,BUF,COUNT,DATATYPE,REQUEST,IERROR) <type> BUF (*) INTEGER FH, COUNT, DATATYPE, REQUEST, IERROR MPI 184 MPI_FILE_IREAD 300

fh count datatype buf request request MPI_WAIT MPI_FILE_IWRITE(fh, buf, count,datatype,request) INOUT fh IN buf IN count IN datatype OUT request int MPI_File_iwrite(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Request * request) MPI_FILE_IWRITE(FH,BUF,COUNT,DATATYPE,REQUEST,IERROR) <type> BUF (*) INTEGER FH, COUNT, DATATYPE, REQUEST, IERROR MPI 185 MPI_FILE_IWRITE MPI_FILE_IWRITE fh buf count datatype request request MPI_WAIT 21.4.4 MPI-2 MPI_WAIT MPI_FILE_READ_ALL_BEGIN(fh, buf,count,datatype) INOUT fh OUT buf IN count IN datatype int MPI_File_read_all_begin(MPI_File fh, void * buf, int count, MPI_Datatype datatype) MPI_FILE_READ_ALL_BEGIN(FH, BUF, COUNT, DATATYPE, IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, IERROR MPI 186 MPI_FILE_READ_ALL_BEGIN MPI_FILE_READ_ALL_BEGIN fh 301

count datatype buf MPI_FILE_READ_ALL_END MPI_FILE_READ_ALL_END(fh, buf,status) INOUT fh OUT buf OUT status int MPI_File_read_all_end(MPI_File fh, void * buf, MPI_Status * status) MPI_FILE_READ_ALL_END(FH, BUF, STATUS, IERROR) <type> BUF(*) INTEGER FH, STATUS(MPI_STATUS_SIZE), IERROR MPI 187 MPI_FILE_READ_ALL_END MPI_FILE_READ_ALL_END MPI_FILE_READ_ALL_BEGIN MPI_FILE_READ_ALL_END MPI_FILE_WRITE_ALL_BEGIN(fh, buf,count,datatype) INOUT fh IN buf IN count IN datatype int MPI_File_write_all_begin(MPI_File fh, void * buf, int count, MPI_Datatype datatype) MPI_FILE_WRITE_ALL_BEGIN(FH, BUF, COUNT, DATATYPE, IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, IERROR MPI 188 MPI_FILE_WRITE_ALL_BEGIN MPI_FILE_WRITE_ALL_BEGIN MPI_FILE_READ_ALL_BEGIN fh buf count datatype fh MPI_FILE_WRITE_ALL_END 302

MPI_FILE_WRITE_ALL_END(fh, buf,status) INOUT fh IN buf OUT status int MPI_File_write_all_end(MPI_File fh, void * buf, MPI_Status * status) MPI_FILE_WRITE_ALL_END(FH, BUF, STATUS, IERROR) <type> BUF(*) INTEGER FH, STATUS(MPI_STATUS_SIZE), IERROR MPI 189 MPI_FILE_WRITE_ALL_END MPI_FILE_WRITE_ALL_END MPI_FILE_WRITE_ALL_BEGIN MPI_FILE_WRITE_ALL_END 21.5 MPI_FILE_SEEK_SHARED(fh, offset, whence) INOUT fh IN offset IN whence int MPI_File_seek_shared(MPI_File fh, MPI_Offset offset, int whence) MPI_FILE_SEEK_SHARED(FH, OFFSET, WHENCE, IERROR) INTEGER FH, WHENCE, IERROR INTEGER (KIND=MPI_OFFSET_KIND) OFFSET MPI 190 MPI_FILE_SEEK_SHARED MPI_FILE_SEEK_SHARED MPI_FILE_SEEK MPI_FILE_SEEK MPI_FILE_GET_POSITION_SHARED MPI_FILE_GET_POSITION 303

MPI_FILE_GET_POSITION_SHARED(fh, offset) IN fh OUT offset int MPI_File_get_position_shared(MPI_File fh, MPI_Offset * offset) MPI_FILE_GET_POSITION_SHARED(FH, OFFSET, IERROR) INTEGER FH, IERROR INTEGER (KIND=MPI_OFFSET_KIND) OFFSET MPI 191 MPI_FILE_GET_POSITION_SHARED 21.5.1 MPI_FILE_READ_SHARED(fh, buf,count,datatype,status) INOUT fh OUT buf IN count IN datatype OUT status int MPI_File_read_shared(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Status * status) MPI_FILE_READ_SHARED(FH, BUF, COUNT,DATATYPE, STATUS,IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR MPI 192 MPI_FILE_READ_SHARED MPI_FILE_READ_SHARED fh count datatype buf status MPI_FILE_WRITE_SHARED fh buf count datatype status MPI_FILE_WRITE_SHARED MPI_FILE_READ_SHARED 304

MPI_FILE_WRITE_SHARED(fh, buf,count,datatype,status) INOUT fh IN buf IN count IN datatype OUT status int MPI_File_write_shared(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Status * status) MPI_FILE_WRITE_SHARED(FH, BUF, COUNT,DATATYPE, STATUS,IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR MPI 193 MPI_FILE_WRITE_SHARED MPI_FILE_READ_ORDERED(fh, buf, count, datatype, status) INOUT fh OUT buf IN count IN datatype OUT status int MPI_File_read_ordered(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Status * status) MPI_FILE_READ_ORDERED(FH, BUF, COUNT, DATATYPE, STATUS, IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR MPI 194 MPI_FILE_READ_ORDERED MPI_FILE_READ_ORDERED MPI_FILE_READ_SHARED rank 0 1... N-1 count datatype buf status MPI_FILE_WRITE_ORDERED MPI_FILE_WRITE_SHARED rank 0 1... N-1 buf count datatype status 305

MPI_FILE_WRITE_ORDERED(fh, buf, count, datatype, status) INOUT fh IN buf IN count IN datatype OUT status int MPI_File_write_ordered(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Status * status) MPI_FILE_WRITE_ORDERED(FH, BUF, COUNT, DATATYPE, STATUS, IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, STATUS(MPI_STATUS_SIZE), IERROR MPI 195 MPI_FILE_WRITE_ORDERED 21.5.2 MPI_FILE_IREAD_SHARED MPI_FILE_READ_SHARED fh count datatype buf MPI_FILE_READ_SHARED request MPI_WAIT MPI_FILE_IREAD_SHARED(fh, buf,count,datatype,request) INOUT fh OUT buf IN count IN datatype OUT request int MPI_File_iread_shared(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Request * request) MPI_FILE_IREAD_SHARED(FH, BUF, COUNT,DATATYPE, REQUEST,IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, REQUEST, IERROR MPI 196 MPI_FILE_IREAD_SHARED MPI_FILE_IWRITE_SHARED fh buf count datatype request request MPI_WAIT 306

MPI_FILE_IWRITE_SHARED(fh, buf,count,datatype,request) INOUT fh IN buf IN count IN datatype OUT request int MPI_File_iwrite_shared(MPI_File fh, void * buf, int count, MPI_Datatype datatype, MPI_Request * request) MPI_FILE_IWRITE_SHARED(FH, BUF, COUNT,DATATYPE, REQUEST,IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, REQUEST, IERROR MPI 197 MPI_FILE_IWRITE_SHARED 21.5.3 MPI_FILE_READ_ORDERED_BEGIN(fh, buf, count, datatype) INOUT fh OUT buf IN count IN datatype int MPI_File_read_ordered_begin(MPI_File fh, void * buf, int count, MPI_Datatype datatype) MPI_FILE_READ_ORDERED_BEGIN(FH, BUF, COUNT, DATATYPE, IERROR) <type> BUF(*) INTEGER FH, COUNT, DATATYPE, IERROR MPI 198 MPI_FILE_READ_ORDERED_BEGIN MPI_FILE_READ_ORDERED_BEGIN fh rank count datatype buf MPI_FILE_READ_ORDERED_END MPI_FILE_READ_ORDERED_END fh buf status buf 307

MPI_FILE_READ_ORDERED_END(fh, buf, status) INOUT fh OUT buf OUT status int MPI_File_read_ordered_end(MPI_File fh, void * buf, MPI_Status * status) MPI_FILE_READ_ORDERED_END(FH, BUF, STATUS, IERROR) <type> BUF(*) INTEGER FH, STATUS(MPI_STATUS_SIZE), IERROR MPI 199 MPI_FILE_READ_ORDERED_END MPI_FILE_WRITE_ORDERED_BEGIN(fh, buf, count, datatype) INOUT fh IN buf IN count IN datatype int MPI_File_write_ordered_begin(MPI_File fh, void * buf, int count, MPI_Datatype datatype) MPI_FILE_WRITE_ORDERED_BEGIN(FH, BUF, COUNT, DATATYPE, IERROR) <type> BUF(*) INTEGER FH, COUNT,DATATYPE, IERROR MPI 200 MPI_FILE_WRITE_ORDERED_BEGIN MPI_FILE_WRITE_ORDERED_BEGIN fh rank buf count datatype MPI_FILE_WRITE_ORDERED_END MPI_FILE_WRITE_ORDERED_END(fh, buf, status) INOUT fh IN buf OUT status int MPI_File_write_ordered_end(MPI_File fh, void * buf, MPI_Status * status) MPI_FILE_WRITE_ORDERED_END(FH, BUF, STATUS, IERROR) <type> BUF(*) INTEGER FH, STATUS(MPI_STATUS_SIZE), IERROR MPI 201 MPI_FILE_WRITE_ORDERED_END 308

MPI_FILE_WRITE_ORDERED_END fh buf status buf MPI_FILE_GET_TYPE_EXTENT(fh, datatype, extent) IN fh IN datatype OUT extent int MPI_File_get_type_extent(MPI_File fh, MPI_Datatype datatype, MPI_Aint * extent) MPI_FILE_GET_TYPE_EXTENT(FH, DATATYPE,EXTENT, IERROR) INTEGER FH, DATATYPE, IERROR INTEGER (KIND=MPI_ADDRESS_KIND) EXTENT MPI 202 MPI_FILE_GET_TYPE_EXTENT MPI_FILE_GET_TYPE_EXTENT fh datatype extent dtype_file_extent_fn MPI_REGISTER_DATAREP(datarep, read_conversion_fn, write_conversion_fn, dtype_file_extent_fn,extra_state) IN datarep IN read_conversion_fn IN write_conversion_fn IN dtype_file_extent_fn IN extra_state int MPI_Register_datarep(char * datarep, MPI_Datarep_conversion_function * read_conversion_fn, MPI_Datarep_conversion_function * write_conversion_fn, MPI_Datarep_extent_function * dtype_file_extent_fn, void * extra_state) MPI_REGISTER_DATAREP(DATAREP,READ_CONVERSION_FN, WRITE_CONVERSION_FN,DTYPE_FILE_EXTENT_FN,EXTRA_STATE,IERROR) EXTERNAL READ_CONVERSION_FN, WRITE_CONVERSION_FN, DTYPE_FILE_EXTENT_FN INTEGER (KIND=MPI_ADDRESS_KIND) EXTRA_STATE INTEGER IERROR MPI 203 MPI_REGISTER_DATAREP MPI_REGISTER_DATAREP datarep MPI_FILE_SET_VIEW datarep read_conversion_fn write_conversion_fn dtype_file_extent_fn 309

MPI_FILE_SET_ATOMICITY(fh, flag) INOUT fh IN flag int MPI_File_set_atomicity(MPI_File fh, int flag) MPI_FILE_SET_ATOMICITY(FH, FLAG, IERROR) INTEGER FH, IERROR LOGICAL FLAG MPI 204 MPI_FILE_SET_ATOMICITY MPI_FILE_SET_ATOMICITY fh flag=true flag=false MPI_FILE_GET_ATOMICITY(fh, flag) IN fh OUT flag int MPI_File_get_atomicity(MPI_File fh, int * flag) MPI_FILE_GET_ATOMICITY(FH, FLAG, IERROR) INTEGER FH, IERROR LOGICAL FLAG MPI 205 MPI_FILE_GET_ATOMICITY MPI_FILE_GET_ATOMICITY fh flag MPI_FILE_SET_ATOMICITY flag=true flag=false MPI_FILE_SYNC(fh) INOUT fh int MPI_File_sync(MPI_File fh) MPI_FILE_SYNC(FH, IERROR) INTEGER FH, IERROR MPI 206 MPI_FILE_SYNC MPI_FILE_SYNC fh fh 310

21.6 MPI A1(100) 4 P1(4) A1 P1 A1(1:25) A1(26:50) A1(51:75) A1(76:100) A1 P1(1) P1(2) P1(3) P1(4) P1 A1 P1 97 A1(1:100:4) A1(2:100:4) A1(3:100:4) A1(4:100:4) A1 P1(1) P1(2) P1(3) P1(4) P1 98 A1 P1 2 /1,2/9,10/.. /3,4/11,12/.. /5,6/13,14/.. /7,8/15,16/.. A1(1:100:8) A1(3:100:8) A1(5:100:8) A1(7:100:8) A1(2:100:8) A1(4:100:8) A1(6:100:8) A1(8:100:8) A1 P1(1) P1(2) P1(3) P1(4) P1 99 311

MPI_TYPE_CREATE_DARRAY(size,rank,ndims,array_of_gsizes,array_of_distribs, array_of_dargs,array_of_psizes,order,oldtype,newtype) IN size IN rank IN ndims IN array_of_gsizes IN array_of_distribs IN array_of_dargs IN array_of_psizes IN order C FORTRAN IN oldtype OUT newtype int MPI_Type_create_darray(int size, int rank, int ndims, int array_of_gsizes[], int array_of_distribs[], int array_of_dargs[], int array_of_psizes[], int order, MPI_Datatype oldtype, MPI_Datatype * newtype) MPI_TYPE_CREATE_DARRAY(SIZE,RANK,NDIMS,ARRAY_OF_GSIZES, ARRAY_OF_DISTRIBS,ARRAY_OF_DARGS, ARRAY_OF_PSIZES, ORDER, OLDTYPE, NEWTYPE, IERROR) INTEGER SIZE,RANK,NDIMS, ARRAY_OF_GSIZES(*), ARRAY_OF_DISTRIBS(*), ARRAY_OF_DARGS(*), ARRAY_OF_PSIZES(*), ORDER, OLDTYPE, NEWTYPE, IERROR MPI 207 MPI_TYPE_CREATE_DARRAY MPI_TYPE_CREATE_DARRAY ndims array_of_gsizes array_of_distribs array_of_dargs array_of_dargs MPI_DISTRIBUTE_DFLT_DARG size array_of_psizes order FORTRAN MPI_ORDER_FORTRAN C MPI_ORDER_C oldtype newtype m*n 2 6 2*3 MPI_Type_create_darray filetype filetype
gsizes[0]=m; gsizes[1]=n;
distribs[0]=MPI_DISTRIBUTE_BLOCK; distribs[1]=MPI_DISTRIBUTE_BLOCK; 312

dargs[0]=MPI_DISTRIBUTE_DFLT_DARG; dargs[1]=MPI_DISTRIBUTE_DFLT_DARG;
psizes[0]=2; psizes[1]=3;
MPI_Comm_rank(MPI_COMM_WORLD,&rank);
MPI_Type_create_darray(6,rank,2,gsizes,distribs,dargs,psizes,MPI_ORDER_C,MPI_FLOAT, &filetype);
MPI_Type_commit(&filetype);
MPI_File_open(MPI_COMM_WORLD, "datafile",MPI_MODE_CREATE|MPI_MODE_WRONLY,MPI_INFO_NULL,&fh);
MPI_File_set_view(fh,0,MPI_FLOAT,filetype,"native",MPI_INFO_NULL);
local_array_size=num_local_rows*num_local_cols;
MPI_File_write_all(fh,local_array,local_array_size,MPI_FLOAT,&status);
MPI_File_close(&fh);
67 MPI_TYPE_CREATE_SUBARRAY(ndims,array_of_sizes, array_of_subsizes, array_of_starts, order, oldtype, newtype) IN ndims IN array_of_sizes IN array_of_subsizes IN array_of_starts IN order IN oldtype OUT newtype int MPI_Type_create_subarray(int ndims, int array_of_sizes[], int array_of_subsizes[], int array_of_starts[], int order, MPI_Datatype oldtype, MPI_Datatype * newtype) MPI_TYPE_CREATE_SUBARRAY(NDIMS, ARRAY_OF_SIZES, ARRAY_OF_SUBSIZES,ARRAY_OF_STARTS,ORDER,OLDTYPE,NEWTYPE, IERROR) INTEGER NDIMS, ARRAY_OF_SIZES(*), ARRAY_OF_SUBSIZES(*), ARRAY_OF_STARTS(*), ORDER, OLDTYPE, NEWTYPE, IERROR MPI 208 MPI_TYPE_CREATE_SUBARRAY MPI_TYPE_CREATE_SUBARRAY ndims array_of_sizes array_of_subsizes array_of_starts order oldtype newtype m*n 2 6 2*3 m/2 n/3 start_indices[0]=coords[0]*lsizes[0] 313

start_indices[1]=coords[1]*lsizes[1] MPI_Type_create_subarray filetype filetype
gsizes[0]=m; gsizes[1]=n;
psizes[0]=2; psizes[1]=3;
lsizes[0]=m/psizes[0]; lsizes[1]=n/psizes[1];
dims[0]=2; dims[1]=3;
periods[0]=periods[1]=1;
MPI_Cart_create(MPI_COMM_WORLD,2,dims,periods,0,&comm);
MPI_Comm_rank(comm,&rank);
MPI_Cart_coords(comm,rank,2,coords);
start_indices[0]=coords[0]*lsizes[0];
start_indices[1]=coords[1]*lsizes[1];
MPI_Type_create_subarray(2,gsizes,lsizes,start_indices,MPI_ORDER_C,MPI_FLOAT,&filetype);
MPI_Type_commit(&filetype);
MPI_File_open(MPI_COMM_WORLD,"datafile",MPI_MODE_CREATE|MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
MPI_File_set_view(fh,0,MPI_FLOAT,filetype,"native",MPI_INFO_NULL);
memsizes[0]=lsizes[0]+8; memsizes[1]=lsizes[1]+8; /* 4 */
start_indices[0]=start_indices[1]=4;
MPI_Type_create_subarray(2,memsizes,lsizes,start_indices,MPI_ORDER_C,MPI_FLOAT,&memtype);
MPI_Type_commit(&memtype);
MPI_File_write_all(fh,local_array,1,memtype,&status);
MPI_File_close(&fh);
68 21.7 I/O MPI I/O 314

315 MPI MPI http://www.mpi-forum.org MPIF http://www.mcs.anl.gov/mpi MPI http://www.netlib.org/mpi/index.html netlib MPI MPI http://www-unix.mcs.anl.gov/mpi/mpich/ ANL/MSU MPICH http://www.mcs.anl.gov/mpi/mpich MPICH ftp://ftp.mcs.anl.gov/pub/mpi MPICH http://www.mpi.nd.edu/mpi MPI http://www.lsc.nd.edu/mpi2 MPI http://www.erc.msstate.edu/mpi MSU MPI http://www.mpi.nd.edu/lam/ LAM MPI MPI http://www-unix.mcs.anl.gov/mpi/tutorial/ MPI http://www.erc.msstate.edu/mpi/mpi-faq.html http://www-unix.mcs.anl.gov/mpi/mpich/faq.html http://www.mpi-forum.org/docs comp.parallel.mpi MPI ftp://ftp.mpi-forum.org/pub/docs/ MPI http://www.mcs.anl.gov/mpi/usingmpi MPI http://www.mcs.anl.gov/mpi/usingmpi2 MPI ftp://ftp.mcs.anl.gov/pub/mpi/using/examples MPI ftp://ftp.mcs.anl.gov/pub/mpi/using2/examples MPI http://www-unix.mcs.anl.gov/mpi/tutorial/mpiexmpl/contents.html MPI

[Ado98] Jean-Marc Adamo. Multi-threaded object-oriented MPI-based message passing interface: the ARCH library. Boston : Kluwer Academic, 1998. ISBN 0792381653. [Ads97] Jeanne C.Adams. Fortran 95 Handbook. 1997. [Akn87] Edited by Akinori Yonezawa and Mario Tokoro. Object-oriented concurrent programming. Cambridge, Mass. : MIT Press, 1987. ISBN 0262240262. [Akl89] Selim G. Akl. The design and analysis of parallel algorithms. Englewood Cliffs, N.J. : Prentice Hall, 1989. ISBN 0132000563. [Alv98] Vassil Alexandrov, Jack Dongarra (eds.). Recent advances in parallel virtual machine and message passing interface : 5th European PVM/MPI User's Group Meeting, Liverpool, UK, September 7-9, 1998 : proceedings. Berlin ; New York : Springer, 1998. ISBN 3540650415. [Ans91] Gregory R.Andrews. Concurrent programming : principles and practice. Redwood City, Calif. : Benjamin/Cummings Pub. Co., 1991. ISBN 0805300864. [Bab88] Edited by Robert G. Babb. Programming parallel processors. Reading, Mass. : Addison- Wesley Pub. Co., 1988. ISBN 0201117215. [Bar92] Barr E. Bauer. Practical parallel programming. San Diego : Academic Press, 1992. ISBN 0120828103. [Brc93] Lester, Bruce P. The art of parallel programming.englewood Cliffs, N.J. : Prentice Hall, 1993. ISBN 0130459232. [Bus88] Alan Burns. Programming in Occam 2. Wokingham [Berkshire] England Reading, Mass. Addison-Wesley, 1988. ISBN 0201173719. [Car90] Nicholas Carriero, David Gelernter. How to write parallel programs : a first course. Cambridge, Mass. : MIT Press, 1990. ISBN 026203171X. [Chay88] K. Mani Chandy, Jayadev Misra. Parallel program design : a foundation. Reading, Mass. : Addison-Wesley Pub. Co., 1988. ISBN 0201058669. [Chs92] Cheese, A. (Andrew). Parallel execution of Parlog. Berlin : Springer-Verlag, 1992. ISBN 0387553827 (New York). [Con92] Michael H. Coffin. Parallel programming : a new approach. Summit, NJ : Silicon Press, 1992. ISBN 0929306139. [Fok95] Lloyd D.Fosdick... [et al]. 
An Introduction to High-Performance Scientific Computing. 1995. [For94] Ian Foster. Designing and building parallel programs : concepts and tools for parallel software engineering. Reading, Mass. : Addison-Wesley, 1994. ISBN 0201575949. [Gen88] Narain Gehani, Andrew McGettrick.Concurrent programming. Wokingham, England : Addison-Wesley, 1988. ISBN 0201174359. [Gen89] Narain Gehani, William D. Roome. The Concurrent C programming language. Summit, NJ, USA : Silicon Press, 1989. ISBN 0929306007. [Gej97] Robert A.van de Geijn. Using PLAPACK: Parallel Linear Algebra Package. 1997. [Get94] Al Geist... [et al]. PVM: Parallel Virtual Machine-- A Users's Guide and Tutorial for Network Parallel Computing. 1994. [Gry87] Steve Gregory. Parallel logic programming in PARLOG : the language and its 316

implementation. Wokingham, England ; Reading, Mass. : Addison-Wesley Pub. Co., 1987. ISBN 0201192411, 0201192412. [Grp99] William Gropp, Ewing Lusk, Anthony Skjellum. Using MPI : portable parallel programming with the message-passing interface. Cambridge, Mass. : MIT Press, 1999. 2nd edition. ISBN 0262571323. [Grp99] William Gropp, Ewing Lusk, Rajeev Thakur. Using MPI-2 : advanced features of the message-passing interface. Cambridge, Mass. : MIT Press, 1999. ISBN 0262571331. [Har91] Philip J. Hatcher and Michael J. Quinn. Data-parallel programming on MIMD computers. Cambridge, Mass. : MIT Press, c1991. ISBN 0262082055. [Kol94] Charles H.Koelbel, David B.Loveman...[et al]. The High Performance Fortran Handbook. 1994 [Pen96] Guy-René Perrin, Alain Darte, (eds.). The data parallel programming model : foundations, HPF realization, and scientific applications. Berlin ; New York : Springer, 1996. ISBN 3540617361 (Berlin : acid-free paper) [Pet87] R.H. Perrott. Parallel programming. Wokingham, England : Addison-Wesley Pub. Co., 1987. ISBN 0201142317. [Pos88] Constantine D. Polychronopoulos. Parallel programming and compilers. Boston : Kluwer Academic, 1988. ISBN 0898382882. [Ral91] Susann Ragsdale, editor. Parallel programming. New York : McGraw-Hill, 1991. ISBN 0070511861. [Sat88] Gary Sabot. The paralation model : architecture-independent parallel programming. Cambridge, Mass. : MIT Press, 1988. ISBN 0262192772. [Snr97] Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, Jack Dongarra. MPI: the Complete Reference. the MIT Press, 1997 [Snr98] Marc Snir... [et al.]. MPI--the complete reference. Cambridge, Mass. : MIT Press, 1998. 2nd edition. ISBN 0262692155, 0262692163. [Snw92] C.R. Snow. Concurrent programming. New York : Cambridge University Press, 1992. ISBN 0521327962. [Tik91] Evan Tick. Parallel logic programming. Cambridge, Mass. : MIT Press, 1991. ISBN 0262200872. [Wim96] William H. Press... [et al.]
Numerical recipes in Fortran 90 : the art of parallel scientific computing. Cambridge [England] ; New York : Cambridge University Press, 1996. Edition 2nd ed. ISBN 0521574390 (hardcover). [Wis90] Shirley A. Williams. Programming models for parallel systems. New York : J. Wiley, 1990. ISBN 0471923044. [Win96] Edited by Gregory V.Wilson... [et..al ]. Parallel Programming Using C++.1996. [Win95] Gregory V. Wilson. Practical Parallel Programming.1995 [Yag87] Rong Yang. P-Prolog, a parallel logic programming language. Singapore : World Scientific, 1987. ISBN 9971505088. [Yun93] C.K. Yuen... [et al.]. Parallel lisp systems : a study of languages and architectures. London : Chapman & Hall, 1993. ISBN 0442315686. 317

Aliased Argument Asynchronous Communication Attributes Bandwidth Blocking Communication Blocking Receive Blocking Send Buffer Buffered Communication Mode Caching of Attributes Cartesian Topology Collective Communication Communication Modes Communication Processor Communicator Context Contiguous Data Datatypes Deadlock Event Graph Topology Group Heterogeneous Computing InterCommunicator IntraCommunicator Latency MPI MPIF Multicomputer Nonblocking Communication Nonblocking Receive Nonblocking Send Node Persistent Requests Physical Topology Point-to-Point Communication Portability Process Processor PVM Rank Ready Ready Communication Mode Message Passing Interface MPI Parallel Virtual Machine 318

Reduce Request Object Safe Programs Standard Communication Mode Status Object Subgroup Synchronization Synchronous Communication Mode Thread Topology Type Map Type Signature User-Defined Topology Virtual Shared Memory Virtual Topology 319

MPI MPI 1 MPI_INIT...25 MPI 2 MPI_FINALIZE...25 MPI 3 MPI_COMM_RANK...25 MPI 4 MPI_COMM_SIZE...26 MPI 5 MPI_SEND...26 MPI 6 MPI_RECV...27 MPI 7 MPI_WTIME...36 MPI 8 MPI_WTICK...36 MPI 9 MPI_GET_PROCESSOR_NAME...38 MPI 10 MPI_GET_VERSION...38 MPI 11 MPI_INITIALIZED...39 MPI 12 MPI_ABORT...40 MPI 13 MPI_SENDRECV...56 MPI 14 MPI_SENDRECV_REPLACE...57 MPI 15 MPI_BSEND...70 MPI 16 MPI_BUFFER_ATTACH...71 MPI 17 MPI_BUFFER_DETACH...71 MPI 18 MPI_SSEND...74 MPI 19 MPI_RSEND...76 MPI 20 MPI_ISEND...100 MPI 21 MPI_IRECV...101 MPI 22 MPI_ISSEND...101 MPI 23 MPI_IBSEND...102 MPI 24 MPI_IRSEND...102 MPI 25 MPI_WAIT...103 MPI 26 MPI_TEST...103 MPI 27 MPI_WAITANY...104 MPI 28 MPI_WAITALL...105 MPI 29 MPI_WAITSOME...105 MPI 30 MPI_TESTANY...106 MPI 31 MPI_TESTALL...106 MPI 32 MPI_TESTSOME...107 MPI 33 MPI_CANCEL...108 MPI 34 MPI_TEST_CANCELLED...108 MPI 35 MPI_REQUEST_FREE...109 MPI 36 MPI_PROBE...110 MPI 37 MPI_IPROBE...111 MPI 38 MPI_SEND_INIT...116 MPI 39 MPI_BSEND_INIT...117 MPI 40 MPI_SSEND_INIT...117 320

MPI 41 MPI_RSEND_INIT...118 MPI 42 MPI_RECV_INIT...118 MPI 43 MPI_START...119 MPI 44 MPI_STARTALL...119 MPI 45 MPI_BCAST...126 MPI 46 MPI_GATHER...128 MPI 47 MPI_GATHERV...129 MPI 48 MPI_SCATTER...131 MPI 49 MPI_SCATTERV...131 MPI 50 MPI_ALLGATHER...133 MPI 51 MPI_ALLGATHERV...134 MPI 52 MPI_ALLTOALL...135 MPI 53 MPI_ALLTOALLV...138 MPI 54 MPI_BARRIER...138 MPI 55 MPI_REDUCE...140 MPI 56 MPI_ALLREDUCE...145 MPI 57 MPI_REDUCE_SCATTER...145 MPI 58 MPI_SCAN...147 MPI 59 MPI_OP_CREATE...153 MPI 60 MPI_OP_FREE...154 MPI 61 MPI_TYPE_CONTIGUOUS...157 MPI 62 MPI_TYPE_VECTOR...158 MPI 63 MPI_TYPE_HVECTOR...160 MPI 64 MPI_TYPE_INDEXED...161 MPI 65 MPI_TYPE_HINDEXED...162 MPI 66 MPI_TYPE_STRUCT...163 MPI 67 MPI_TYPE_COMMIT...164 MPI 68 MPI_TYPE_FREE...164 MPI 69 MPI_ADDRESS...171 MPI 70 MPI_TYPE_EXTENT...173 MPI 71 MPI_TYPE_SIZE...173 MPI 72 MPI_GET_ELEMENTS...173 MPI 73 MPI_GET_COUNT...174 MPI 74 MPI_TYPE_LB...175 MPI 75 MPI_TYPE_UB...175 MPI 76 MPI_PACK...177 MPI 77 MPI_UNPACK...178 MPI 78 MPI_PACK_SIZE...179 MPI 79 MPI_GROUP_SIZE...182 MPI 80 MPI_GROUP_RANK...183 MPI 81 MPI_GROUP_TRANSLATE_RANKS...183 MPI 82 MPI_GROUP_COMPARE...183 MPI 83 MPI_COMM_GROUP...184 MPI 84 MPI_GROUP_UNION...184 321

MPI 85 MPI_GROUP_INTERSECTION...184 MPI 86 MPI_GROUP_DIFFERENCE...185 MPI 87 MPI_GROUP_INCL...185 MPI 88 MPI_GROUP_EXCL...185 MPI 89 MPI_GROUP_RANGE_INCL...186 MPI 90 MPI_GROUP_RANGE_EXCL...186 MPI 91 MPI_GROUP_FREE...187 MPI 92 MPI_COMM_COMPARE...188 MPI 93 MPI_COMM_DUP...188 MPI 94 MPI_COMM_CREATE...188 MPI 95 MPI_COMM_SPLIT...189 MPI 96 MPI_COMM_FREE...189 MPI 97 MPI_COMM_TEST_INTER...191 MPI 98 MPI_COMM_REMOTE_SIZE...191 MPI 99 MPI_COMM_REMOTE_GROUP...191 MPI 100 MPI_INTERCOMM_CREATE...192 MPI 101 MPI_INTERCOMM_MERGE...192 MPI 102 MPI_KEYVAL_CREATE...194 MPI 103 MPI_KEYVAL_FREE...195 MPI 104 MPI_ATTR_PUT...196 MPI 105 MPI_ATTR_GET...196 MPI 106 MPI_ATTR_DELETE...196 MPI 107 MPI_CART_CREATE...200 MPI 108 MPI_DIMS_CREATE...200 MPI 109 MPI_TOPO_TEST...201 MPI 110 MPI_CART_GET...201 MPI 111 MPI_CART_RANK...201 MPI 112 MPI_CARTDIM_GET...202 MPI 113 MPI_CART_SHIFT...202 MPI 114 MPI_CART_COORDS...202 MPI 115 MPI_CART_SUB...203 MPI 116 MPI_CART_MAP...204 MPI 117 MPI_GRAPH_CREATE...206 MPI 118 MPI_GRAPHDIMS_GET...207 MPI 119 MPI_GRAPH_GET...207 MPI 120 MPI_GRAPH_NEIGHBORS_COUNT...207 MPI 121 MPI_GRAPH_NEIGHBORS...208 MPI 122 MPI_GRAPH_MAP...208 MPI 123 MPI_ERRHANDLER_CREATE...213 MPI 124 MPI_ERRHANDLER_SET...213 MPI 125 MPI_ERRHANDLER_GET...214 MPI 126 MPI_ERRHANDLER_FREE...214 MPI 127 MPI_ERROR_STRING...214 MPI 128 MPI_ERROR_CLASS...215 322

MPI 129 MPI_COMM_SPAWN...262 MPI 130 MPI_COMM_GET_PARENT...263 MPI 131 MPI_COMM_SPAWN_MULTIPLE...264 MPI 132 MPI_OPEN_PORT...265 MPI 133 MPI_COMM_ACCEPT...265 MPI 134 MPI_CLOSE_PORT...265 MPI 135 MPI_COMM_CONNECT...266 MPI 136 MPI_COMM_DISCONNECT...266 MPI 137 MPI_PUBLISH_NAME...267 MPI 138 MPI_LOOKUP_NAME...267 MPI 139 MPI_UNPUBLISH_NAME...267 MPI 140 MPI_COMM_JOIN...268 MPI 141 MPI_WIN_CREATE...270 MPI 142 MPI_WIN_FREE...270 MPI 143 MPI_PUT...271 MPI 144 MPI_GET...272 MPI 145 MPI_ACCUMULATE...274 MPI 146 MPI_WIN_GET_GROUP...275 MPI 147 MPI_WIN_FENCE...275 MPI 148 MPI_WIN_START...277 MPI 149 MPI_WIN_COMPLETE...277 MPI 150 MPI_WIN_POST...277 MPI 151 MPI_WIN_WAIT...278 MPI 152 MPI_WIN_TEST...278 MPI 153 MPI_WIN_LOCK...279 MPI 154 MPI_WIN_UNLOCK...280 MPI 155 MPI_FILE_OPEN...282 MPI 156 MPI_FILE_CLOSE...283 MPI 157 MPI_FILE_DELETE...283 MPI 158 MPI_FILE_SET_SIZE...284 MPI 159 MPI_FILE_PREALLOCATE...284 MPI 160 MPI_FILE_GET_SIZE...284 MPI 161 MPI_FILE_GET_GROUP...285 MPI 162 MPI_FILE_GET_AMODE...285 MPI 163 MPI_FILE_SET_INFO...285 MPI 164 MPI_FILE_GET_INFO...285 MPI 165 MPI_FILE_READ_AT...286 MPI 166 MPI_FILE_WRITE_AT...287 MPI 167 MPI_FILE_READ_AT_ALL...288 MPI 168 MPI_FILE_WRITE_AT_ALL...289 MPI 169 MPI_FILE_IREAD_AT...290 MPI 170 MPI_FILE_IWRITE_AT...290 MPI 171 MPI_FILE_READ_AT_ALL_BEGIN...291 MPI 172 MPI_FILE_READ_AT_ALL_END...292 323

MPI 173 MPI_FILE_WRITE_AT_ALL_BEGIN...292 MPI 174 MPI_FILE_WRITE_AT_ALL_END...293 MPI 175 MPI_FILE_SET_VIEW...295 MPI 176 MPI_FILE_GET_VIEW...296 MPI 177 MPI_FILE_SEEK...296 MPI 178 MPI_FILE_GET_POSITION...297 MPI 179 MPI_FILE_GET_BYTE_OFFSET...297 MPI 180 MPI_FILE_READ...298 MPI 181 MPI_FILE_WRITE...299 MPI 182 MPI_FILE_READ_ALL...299 MPI 183 MPI_FILE_WRITE_ALL...300 MPI 184 MPI_FILE_IREAD...300 MPI 185 MPI_FILE_IWRITE...301 MPI 186 MPI_FILE_READ_ALL_BEGIN...301 MPI 187 MPI_FILE_READ_ALL_END...302 MPI 188 MPI_FILE_WRITE_ALL_BEGIN...302 MPI 189 MPI_FILE_WRITE_ALL_END...303 MPI 190 MPI_FILE_SEEK_SHARED...303 MPI 191 MPI_FILE_GET_POSITION_SHARED...304 MPI 192 MPI_FILE_READ_SHARED...304 MPI 193 MPI_FILE_WRITE_SHARED...305 MPI 194 MPI_FILE_READ_ORDERED...305 MPI 195 MPI_FILE_WRITE_ORDERED...306 MPI 196 MPI_FILE_IREAD_SHARED...306 MPI 197 MPI_FILE_IWRITE_SHARED...307 MPI 198 MPI_FILE_READ_ORDERED_BEGIN...307 MPI 199 MPI_FILE_READ_ORDERED_END...308 MPI 200 MPI_FILE_WRITE_ORDERED_BEGIN...308 MPI 201 MPI_FILE_WRITE_ORDERED_END...308 MPI 202 MPI_FILE_GET_TYPE_EXTENT...309 MPI 203 MPI_REGISTER_DATAREP...309 MPI 204 MPI_FILE_SET_ATOMICITY...310 MPI 205 MPI_FILE_GET_ATOMICITY...310 MPI 206 MPI_FILE_SYNC...310 MPI 207 MPI_TYPE_CREATE_DARRAY...312 MPI 208 MPI_TYPE_CREATE_SUBARRAY...313 324

1 MPI 1. C MPI C MPI_CHAR MPI_BYTE MPI_SHORT MPI_INT MPI_LONG MPI_FLOAT MPI_DOUBLE MPI_UNSIGNED_CHAR MPI_UNSIGNED_SHORT MPI_UNSIGNED MPI_UNSIGNED_LONG MPI_LONG_DOUBLE C char short int long float double unsigned char unsigned short unsigned int unsigned long long double (some systems may not implement) 2. MPI_MAXLOC MPI_MINLOC C MPI C MPI_FLOAT_INT struct { float, int } MPI_LONG_INT struct { long, int } MPI_DOUBLE_INT struct { double, int } MPI_SHORT_INT struct { short, int } MPI_2INT struct { int, int } MPI_LONG_DOUBLE_INT struct { long double, int }; MPI_LONG_LONG_INT struct { long long, int }; 3. MPI MPI_PACKED MPI_UB MPI_LB For MPI_Pack and MPI_Unpack For MPI_Type_struct; an upper-bound indicator For MPI_Type_struct; a lower-bound indicator 4. Fortran MPI MPI_REAL MPI_INTEGER MPI_LOGICAL MPI_DOUBLE_PRECISION MPI_COMPLEX MPI_DOUBLE_COMPLEX Fortran REAL INTEGER LOGICAL DOUBLE PRECISION COMPLEX complex*16 complex*32 325

5. FORTRAN MPI MPI_INTEGER1 MPI_INTEGER2 MPI_INTEGER4 MPI_REAL4 MPI_REAL8 Fortran integer*1 integer*2 integer*4 real*4 real*8 6. MPI_MAXLOC MPI_MINLOC Fortran MPI MPI_2INTEGER MPI_2REAL MPI_2DOUBLE_PRECISION MPI_2COMPLEX MPI_2DOUBLE_COMPLEX 7. C MPI_Comm Fortran INTEGER,INTEGER REAL, REAL DOUBLE PRECISION, DOUBLE PRECISION COMPLEX, COMPLEX complex*16, complex*16 Fortran MPI_COMM_WORLD MPI_COMM_SELF 8. C MPI_Group Fortran INTEGER MPI_GROUP_EMPTY 9. MPI_IDENT MPI_CONGRUENT MPI_SIMILAR MPI_UNEQUAL 10. MPI_REDUCE, MPI_ALLREDUCE, MPI_REDUCE_SCATTER, and MPI_SCAN C MPI_Op Fortran INTEGER MPI_MAX MPI_MIN MPI_SUM MPI_PROD MPI_LAND 326

327 MPI_BAND MPI_LOR MPI_BOR MPI_LXOR MPI_BXOR MPI_MINLOC MPI_MAXLOC 11. C Fortran MPI_TAG_UB tag MPI_HOST MPI_IO I/O MPI_WTIME_IS_GLOBAL MPI_WTIME 1 12. MPI_COMM_NULL MPI_OP_NULL MPI_GROUP_NULL MPI_DATATYPE_NULL MPI_REQUEST_NULL MPI_ERRHANDLER_NULL 13. MPI_MAX_PROCESSOR_NAME MPI_MAX_ERROR_STRING MPI_UNDEFINED MPI_UNDEFINED_RANK MPI_KEYVAL_INVALID MPI_BSEND_OVERHEAD MPI_PROC_NULL MPI_ANY_SOURCE MPI_ANY_TAG tag MPI_BOTTOM 14. MPI_GRAPH MPI_CART

328 15. MPI MPI_Status MPI_SOURCE MPI_TAG MPI_ERROR 16. MPI_Aint C MPI_Handler_function C MPI_User_function C MPI_Copy_function MPI_NULL_COPY_FN MPI_Delete_function MPI_NULL_DELETE_FN MPI_DUP_FN MPI_ERRORS_ARE_FATAL MPI_ERRORS_RETURN 17. MPI MPI_SUCCESS MPI_ERR_BUFFER MPI_ERR_COUNT MPI_ERR_TYPE MPI_ERR_TAG tag MPI_ERR_COMM MPI_ERR_RANK MPI_ERR_ROOT ROOT MPI_ERR_GROUP MPI_ERR_OP MPI_ERR_TOPOLOGY MPI_ERR_DIMS MPI_ERR_ARG MPI_ERR_UNKNOWN MPI_ERR_TRUNCATE MPI_ERR_OTHER MPI_ERR_INTERN MPI_ERR_IN_STATUS status MPI_ERR_PENDING MPI_ERR_REQUEST MPI_ERR_LASTCODE

329 2 MPICH 1.2.1 1. MPI MPI_Abort MPI_Address MPI_Allgather MPI_Allgatherv MPI_Allreduce MPI_Alltoall MPI_Alltoallv MPI_Attr_delete MPI_Attr_get MPI_Attr_put MPI_Barrier MPI_Bcast MPI_Bsend MPI_Bsend_init MPI_Buffer_attach MPI_Buffer_detach MPI_Cancel MPI_Cart_coords MPI_Cart_create MPI_Cart_get MPI_Cart_map MPI_Cart_rank MPI_Cart_shift MPI_Cart_sub MPI_Cartdim_get MPI_CHAR MPI_Comm_compare MPI_Comm_create MPI_Comm_dup MPI_Comm_free MPI_Comm_group MPI_Comm_rank MPI_Comm_remote_group MPI_Comm_remote_size MPI_Comm_size MPI_Comm_split MPI_Comm_test_inter MPI_Dims_create MPI_DUP_FN MPI_Errhandler_create MPI_Errhandler_free MPI_Errhandler_get MPI_Errhandler_set MPI_Error_class MPI_Error_string MPI_File_c2f MPI_File_close MPI_File_delete MPI_File_f2c MPI_File_get_amode MPI_File_get_atomicity MPI_File_get_byte_offset MPI_File_get_errhandler MPI_File_get_group MPI_File_get_info MPI_File_get_position MPI_File_get_position_shared MPI_File_get_size MPI_File_get_type_extent MPI_File_get_view MPI_File_iread MPI_File_iread_at MPI_File_iread_shared MPI_File_iwrite MPI_File_iwrite_at MPI_File_iwrite_shared MPI_File_open MPI_File_preallocate MPI_File_preallocate MPI_File_read MPI_File_read_all MPI_File_read_all_begin MPI_File_read_all_end MPI_File_read_at MPI_File_read_at_all MPI_File_read_at_all_begin MPI_File_read_at_all_end MPI_File_read_ordered MPI_File_read_ordered_begin MPI_File_read_ordered_end MPI_File_read_shared MPI_File_seek MPI_File_seek_shared MPI_File_set_atomicity MPI_File_set_errhandler MPI_File_set_info MPI_File_set_size MPI_File_set_view MPI_File_sync MPI_File_write MPI_File_write_all MPI_File_write_all_begin MPI_File_write_all_end MPI_File_write_at

330 MPI_File_write_at_all MPI_File_write_at_all_begin MPI_File_write_at_all_end MPI_File_write_ordered MPI_File_write_ordered_begin MPI_File_write_ordered_end MPI_File_write_shared MPI_Finalize MPI_Finalized MPI_Gather MPI_Gatherv MPI_Get_count MPI_Get_elements MPI_Get_processor_name MPI_Get_version MPI_Graph_create MPI_Graph_get MPI_Graph_map MPI_Graph_neighbors MPI_Graph_neighbors_count MPI_Graphdims_get MPI_Group_compare MPI_Group_difference MPI_Group_excl MPI_Group_free MPI_Group_incl MPI_Group_intersection MPI_Group_range_excl MPI_Group_range_incl MPI_Group_rank MPI_Group_size MPI_Group_translate_ranks MPI_Group_union MPI_Ibsend MPI_Info_c2f MPI_Info_create MPI_Info_delete MPI_Info_dup MPI_Info_f2c MPI_Info_free MPI_Info_get MPI_Info_get_nkeys MPI_Info_get_nthkey MPI_Info_get_valuelen MPI_Info_set MPI_Init MPI_Init_thread MPI_Initialized MPI_Int2handle MPI_Intercomm_create MPI_Intercomm_merge MPI_Iprobe MPI_Irecv MPI_Irsend MPI_Isend MPI_Issend MPI_Keyval_create MPI_Keyval_free MPI_NULL_COPY_FN MPI_NULL_DELETE_FN MPI_Op_create MPI_Op_free MPI_Pack MPI_Pack_size MPI_Pcontrol MPI_Probe MPI_Recv MPI_Recv_init MPI_Reduce MPI_Reduce_scatter MPI_Request_c2f MPI_Request_free MPI_Rsend MPI_Rsend_init MPI_Scan MPI_Scatter MPI_Scatterv MPI_Send MPI_Send_init MPI_Sendrecv MPI_Sendrecv_replace MPI_Ssend MPI_Ssend_init MPI_Start MPI_Startall MPI_Status_c2f MPI_Status_set_cancelled MPI_Status_set_elements MPI_Test MPI_Test_cancelled MPI_Testall MPI_Testany MPI_Testsome MPI_Topo_test MPI_Type_commit MPI_Type_contiguous MPI_Type_create_darray MPI_Type_create_indexed_block MPI_Type_create_subarray MPI_Type_extent MPI_Type_free MPI_Type_get_contents MPI_Type_get_envelope MPI_Type_hindexed MPI_Type_hvector MPI_Type_indexed MPI_Type_lb MPI_Type_size MPI_Type_struct MPI_Type_ub

331 MPI_Type_vector MPI_Unpack MPI_Wait MPI_Waitall MPI_Waitany MPI_Waitsome MPI_Wtick MPI_Wtime MPIO_Request_c2f MPIO_Request_f2c MPIO_Test MPIO_Wait 2. MPE CLOG_commtype CLOG_cput CLOG_csync CLOG_Finalize CLOG_get_new_event CLOG_get_new_state CLOG_Init CLOG_init_buffers CLOG_mergelogs CLOG_mergend CLOG_msgtype CLOG_newbuff CLOG_nodebuffer2disk CLOG_Output CLOG_procbuf CLOG_reclen CLOG_rectype CLOG_reinit_buff CLOG_treesetup MPE MPE_Add_RGB_color MPE_CaptureFile MPE_CaptureFile MPE_Close_graphics MPE_Comm_global_rank MPE_Counter_create MPE_Counter_free MPE_Counter_nxtval MPE_Decomp1d MPE_Describe_event MPE_Describe_state MPE_Draw_circle MPE_Draw_line MPE_Draw_logic MPE_Draw_point MPE_Draw_points MPE_Draw_string MPE_Fill_circle MPE_Fill_rectangle MPE_Finish_log MPE_Get_mouse_press MPE_GetTags MPE_Iget_mouse_press MPE_Iget_mouse_press MPE_Init_log MPE_Initialized_logging MPE_IO_Stdout_to_file MPE_Line_thickness MPE_Log_event MPE_Log_get_event_number MPE_Log_receive MPE_Log_send MPE_Make_color_array MPE_Num_colors MPE_Open_graphics MPE_Print_datatype_pack_action MPE_Print_datatype_unpack_action MPE_ReturnTags MPE_Seq_begin MPE_Seq_end MPE_Start_log MPE_Stop_log MPE_TagsEnd MPE_Update