
Parallel Programming with MPI

This book is an introduction to message-passing parallel programming with MPI. The interface is presented with both FORTRAN and C bindings, starting from the basic MPI-1 calls and continuing through the MPI-2 extensions.


Preface

High-performance parallel computing underpins computational science and engineering: the Grand Challenge problems and the HPCC and ASCI programs of the 1990s all depend on it. The material in this book grew out of work on the THNPSC-1 and THNPSC-2 high-performance computing systems and out of teaching MPI parallel programming. MPI, standardized in 1994, has become the common message-passing programming interface on systems ranging from PC clusters to large parallel machines. The book presents MPI through its FORTRAN 77 and C bindings, beginning with a core subset of six calls and extending to the full MPI-1 interface and the MPI-2 additions such as parallel I/O; the Fortran90 and C++ bindings are also touched on.

February 2001


MPI 1

1.1.1 By instruction and data streams, parallel computers fall into the SIMD (Single-Instruction Multiple-Data) and MIMD (Multiple-Instruction Multiple-Data) classes. A SIMD machine applies a single instruction, such as A=A+1, to many elements of A at the same time. A MIMD machine issues different instructions to different data: an expression such as A=B+C+D-E+F*G can be rewritten as A=(B+C)+(D-E)+(F*G) so that B+C, D-E and F*G are computed concurrently. Classified by program rather than by instruction, parallel codes are SPMD (Single-Program Multiple-Data) or MPMD (Multiple-Program Multiple-Data).

SPMD MPMD MPMD D D M SIMD MIMD M SPMD MPMD S SISD MISD S SPSD MPSD S M I S M P 1 1.1.2 2 2 3

Cluster Computing 1.2 3 3 4

1.3 5

6 2 2.1 SIMD SPMD B C A A=B+C B C A 1

1 SIMD/SPMD SIMD/MIMD/SPMD/MPMD 2.2 1 2 3 MPI 4 7

4 2.3 FORTRAN C MPI FORTRAN C 8

9 3 3.1 3.2 SIMD MIMD SPMD MPMD SPMD MPMD

5 SPMD 5 6 SPMD 6 SPMD MPMD SPMD MPMD 7 MPMD 10

7 MPMD 3.3 11

MPI MPI MPI MPI MPICH Linux NT MPI MPI MPI MPI-1 MPI-2 12

4 MPI MPI MPI MPI 4.1 MPI MPI MPI 1 MPI MPI FORTRAN+MPI C+MPI MPI FORTRAN77/C/Fortran90/C++ / / 2 MPI MPI MPI MPI 3 MPI MPI MPI MPI MPI, MPI 4.2 MPI MPI 1 2 3 C Fortran 77 PVM NX Express p4 13

MPI 4.3 MPI MPI Venus (IBM) NX/2 (Intel) Express (Parasoft) Vertex (ncube) P4 (ANL) PARMACS (ANL) Zipcode (MSU) Chimp (Edinburgh University) PVM (ORNL, UTK, Emory U.) Chameleon (ANL) PICL (ANL) MPI Dongarra,Hempel,Hey Walker MPI 1.0 MPI MPI MPI MPI MPI1.1 MPI MPI I/O MPI MPI MPI MPI-2 MPI MPI-1 MPI-2 I/O MPI-1 MPI-2 4.4 MPI MPI MPI FORTRAN C FORTRAN C MPI-1 MPI FORTRAN 77 C FORTRAN 77 C MPI-1 MPI Fortran90 FORTRAN Fortran90 FORTRAN 77 Fortran90 C++ C MPI-2 FORTRAN 77 C Fortran90 C++ MPI-2 14

4.5 MPI MPICH MPI http://www-unix.mcs.anl.gov/mpi/mpich MPICH MPI-1 MPI MPICH MPICH MPICH-1.2.1 MPI-2 Argonne MSU MPICH CHIMP Edinburgh MPI EPCC Edinburgh Parallel Computing Centre ftp://ftp.epcc.ed.ac.uk/pub/packages/chimp/release/ CHIMP 1991 1994 Alasdair Bruce, James (Hamish) Mills, Gordon Smith LAM (Local Area Multicomputer) MPI Ohio State University LAM/MPI 6.3.2 http://www.mpi.nd.edu/lam/download/ 2 MPI 2 MPI Mpich Argonne and MSU http://www-unix.mcs.anl.gov/mpi/mpich Chimp Edinburgh ftp://ftp.epcc.ed.ac.uk/pub/packages/chimp/ Lam Ohio State University http://www.mpi.nd.edu/lam/ 4.6 MPI MPI MPI FORTRAN C MPI MPI 15

5 MPI Hello World MPI MPI FORTRAN C MPI 5.1 MPI Hello World! C Hello World MPI 5.1.1 FORTRAN77+MPI 1 FORTRAN77+MPI MPI FORTRAN mpif.h MPI C FORTRAN MPI FORTRAN mpif.h Fortran90+MPI MPI-2 Fortran90 C++ Fortran90 include mpif.h use mpi MPI Fortran90 4 MPI MPI_MAX_PROCESSOR_NAME MPI MPI processor_name myid numprocs namelen rc ierr MPI MPI MPI_INIT MPI_FINALIZE MPI MPI MPI FORTRAN MPI_COMM_RANK myid MPI_COMM_SIZE numprocs MPI_GET_PROCESSOR_NAME processor_name namelen write FORTRAN FORTRAN 4 tp5 4 tp5 0 1 2 3 8 MPI 10 16

      program main
      include 'mpif.h'
      character * (MPI_MAX_PROCESSOR_NAME) processor_name
      integer myid, numprocs, namelen, rc, ierr
      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
      call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
      call MPI_GET_PROCESSOR_NAME(processor_name, namelen, ierr)
      write(*,10) myid, numprocs, processor_name
10    FORMAT('Hello World! Process ',I2,' of ',I1,' on ',20A)
      call MPI_FINALIZE(rc)
      end

Program 1  FORTRAN77+MPI "Hello World!"

Run with 4 processes on the single node tp5, the program prints:

Hello World! Process 1 of 4 on tp5
Hello World! Process 0 of 4 on tp5
Hello World! Process 2 of 4 on tp5
Hello World! Process 3 of 4 on tp5

Figure 8  Output of the FORTRAN77+MPI program on one machine

Run with 4 processes on the four nodes tp1, tp3, tp4 and tp5, it prints:

Hello World! Process 0 of 4 on tp5
Hello World! Process 1 of 4 on tp1
Hello World! Process 2 of 4 on tp3
Hello World! Process 3 of 4 on tp4

Figure 9  Output of the FORTRAN77+MPI program on four machines

Figure 10  Execution of the FORTRAN77+MPI "Hello World!" program: each of the four processes 0-3 independently calls MPI_INIT, MPI_COMM_RANK (obtaining myid = 0, 1, 2 or 3) and MPI_GET_PROCESSOR_NAME (here processor_name = "tp5", namelen = 3), prints its own "Hello World!" line with write, and finally calls MPI_FINALIZE.

5.1.2 C+MPI

Program 3 is the C+MPI version of the same program. A C MPI program includes the header file mpi.h instead of the FORTRAN include file mpif.h; the constant MPI_MAX_PROCESSOR_NAME is again the MPI-defined maximum length of a processor name.

processor_name FORTRAN77 myid numprocs namelen MPI MPI_Init MPI_Finalize MPI FORTRAN77+MPI FORTRAN77 C FORTRAN77 MPI FORTRAN77 MPI FORTRAN77 C MPI_ MPI MPI C MPI_Comm_rank myid MPI_Comm_size numprocs MPI_Get_processor_name processor_name namelen fprintf C 4 tp5 tp5 0 1 2 3 11 MPI FORTRAN77+MPI Hello World! Process 0 of 4 on tp5 Hello World! Process 1 of 4 on tp5 Hello World! Process 3 of 4 on tp5 Hello World! Process 2 of 4 on tp5 11 C+MPI 1 4 tp1 tp3 tp4 tp5 12 4 4 FORTRAN77+MPI C+MPI Hello World! Process 0 of 4 on tp5 Hello World! Process 1 of 4 on tp1 Hello World! Process 2 of 4 on tp3 Hello World! Process 3 of 4 on tp4 12 C+MPI 4 19

#include "mpi.h"
#include <stdio.h>
#include <math.h>
void main(argc,argv)
int argc;
char *argv[];
{
    int  myid, numprocs;
    int  namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc,&argv);
    MPI_Comm_rank(MPI_COMM_WORLD,&myid);
    MPI_Comm_size(MPI_COMM_WORLD,&numprocs);
    MPI_Get_processor_name(processor_name,&namelen);
    fprintf(stderr,"Hello World! Process %d of %d on %s\n",
            myid, numprocs, processor_name);
    MPI_Finalize();
}

Program 3  C+MPI "Hello World!"

      program main
      use mpi
      character * (MPI_MAX_PROCESSOR_NAME) processor_name
      integer myid, numprocs, namelen, rc, ierr
      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
      call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
      call MPI_GET_PROCESSOR_NAME(processor_name, namelen, ierr)
      print *,"Hello World! Process ",myid," of ", numprocs, " on", processor_name
      call MPI_FINALIZE(rc)
      end

Program 4  Fortran90+MPI "Hello World!"

Figure 13 shows the general framework that MPI programs follow.

5.2 Conventions of MPI programs

All MPI names carry the prefix MPI_. In the FORTRAN binding the calls are conventionally written entirely in upper case; in the C binding they have the form MPI_Aaaa_aaa, with only the first letter after MPI_ capitalized. MPI routines report success or failure through an error code: a successful call yields MPI_SUCCESS, which C functions return as their result and FORTRAN subroutines place in the final IERROR argument. MPI definitions and any extensions beyond ANSI FORTRAN 77 and ANSI C are confined to the MPI_ name space; a FORTRAN 77 MPI program must include the file mpif.h, which contains these MPI definitions.
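As a small illustration of these conventions, a C caller can compare the value returned by any MPI routine with MPI_SUCCESS. The fragment below is only a sketch and not one of the book's numbered programs; the helper name send_or_warn and the message parameters are assumptions, while MPI_Send, MPI_SUCCESS and MPI_COMM_WORLD are standard MPI.

/* Sketch: checking the error code returned by an MPI call (C binding). */
#include <stdio.h>
#include "mpi.h"

void send_or_warn(int *data, int n, int dest)
{
    int rc = MPI_Send(data, n, MPI_INT, dest, 0, MPI_COMM_WORLD);
    if (rc != MPI_SUCCESS)       /* every MPI call reports an error code */
        fprintf(stderr, "MPI_Send returned error code %d\n", rc);
}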

5.3 MPI MPI MPI MPI MPI Hello World MPI Hello World SPMD Single Program Multiple Data 22

6 MPI MPI-1 128 MPI-2 287 MPI MPI 6 6 6 MPI 6.1 6 MPI MPI FORTRAN 77 C 6.1.1 MPI C FORTRAN 14 MPI 14 MPI MPI FORTRAN 77 C MPI-2 C++ MPI IN OUT INOUT IN MPI MPI OUT MPI INOUT MPI MPI OUT INOUT MPI INOUT 23

MPI IN OUT INOUT MPI MPI OUT INOUT void copyintbuffer( int *pin, int *pout, int len ) { int i; for (i=0; i<len; ++i) *pout++ = *pin++; } int a[10]; copyintbuffer( a, a+3, 7); C, MPI FORTRAN77 MPI MPI, C FORTRAN 77 MPI_INIT MPI_INIT() int MPI_Init(int *argc, char ***argv) C C argc argv argc argv MPI_INIT(IERROR) INTEGER IERROR FORTRAN77 FORTRAN77 IERROR C FORTRAN77 void*,<type> MPI C FORTRAN77 MPI MPI_SEND C FORTRAN77 void * <type> 24

6.1.2 MPI MPI_INIT() int MPI_Init(int *argc, char ***argv) MPI_INIT(IERROR) INTEGER IERROR MPI 1 MPI_INIT MPI_INIT MPI MPI 6.1.3 MPI MPI_FINALIZE() int MPI_Finalize(void) MPI_FINALIZE(IERROR) INTEGER IERROR MPI 2 MPI_FINALIZE MPI_FINALIZE MPI MPI MPI 6.1.4 MPI_COMM_RANK(comm,rank) IN comm OUT rank comm int MPI_Comm_rank(MPI_Comm comm, int *rank) MPI_COMM_RANK(COMM,RANK,IERROR) INTEGER COMM,RANK,IERROR MPI 3 MPI_COMM_RANK 25

6.1.5 MPI_COMM_SIZE(comm,size) IN comm OUT size comm int MPI_Comm_size(MPI_Comm comm, int *size) MPI_COMM_SIZE(COMM,SIZE,IERROR) INTEGER COMM,SIZE,IERROR MPI 4 MPI_COMM_SIZE 6.1.6 MPI_SEND(buf,count,datatype,dest,tag,comm) IN buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN tag ( ) IN comm ( ) int MPI_Send(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm) MPI_SEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR) <type> BUF(*) INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR MPI 5 MPI_SEND MPI_SEND count datatype dest tag MPI_SEND count datatype buf datatype MPI MPI_SEND 26

6.1.7 MPI_RECV source datatype tag count count datatype datatype buf MPI count datatype MPI MPI_RECV MPI_RECV(buf,count,datatype,source,tag,comm,status) OUT buf ( ) IN count ( ) IN datatype ( ) IN source ( ) IN tag ( ) IN comm ( ) OUT status ( ) int MPI_Recv(void* buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status) MPI_RECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS, IERROR) <type>buf(*) INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS (MPI_STATUS_SIZE) IERROR MPI 6 MPI_RECV 6.1.8 status status MPI C MPI_SOURCE MPI_TAG MPI_ERROR status.mpi_source status.mpi_tag status.mpi_error tag FORTRAN status MPI_STATUS_SIZE status(mpi_source) status(mpi_tat) status(mpi_error) tag 27
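The fields of the returned status object are easiest to see in a small sketch. The fragment below is illustrative only and not one of the numbered example programs; the function name, buffer size, source and tag are assumptions, while MPI_Recv, the status members MPI_SOURCE and MPI_TAG, and MPI_Get_count (introduced in the next paragraph) are standard MPI.

/* Sketch: examining the status returned by a receive (C binding). */
#include <stdio.h>
#include "mpi.h"

void recv_and_report(void)
{
    int        buf[100];
    int        count;
    MPI_Status status;

    MPI_Recv(buf, 100, MPI_INT, 0, 99, MPI_COMM_WORLD, &status);
    MPI_Get_count(&status, MPI_INT, &count);   /* actual number of elements received */
    printf("got %d integers from process %d with tag %d\n",
           count, status.MPI_SOURCE, status.MPI_TAG);
}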

The length of the received message is not stored directly in status; it is obtained from status by calling MPI_GET_COUNT.

6.1.9 A simple example

In the following example, process 0 sends the string "Hello, process 1" to process 1, and process 1 receives and prints it.

#include "mpi.h"
#include <stdio.h>
#include <string.h>
main( argc, argv )
int argc;
char **argv;
{
    char message[20];
    int  myrank;
    MPI_Status status;

    MPI_Init( &argc, &argv );                    /* start up MPI */
    MPI_Comm_rank( MPI_COMM_WORLD, &myrank );    /* rank of this process */
    if (myrank == 0)                             /* process 0 */
    {
        /* send the string in message (strlen(message) elements of type MPI_CHAR)
           to process 1 with tag 99 in communicator MPI_COMM_WORLD */
        strcpy(message, "Hello, process 1");
        MPI_Send(message, strlen(message), MPI_CHAR, 1, 99, MPI_COMM_WORLD);
    }
    else if (myrank == 1)                        /* process 1 */
    {
        /* receive up to 20 elements of type MPI_CHAR from process 0 with tag 99
           in MPI_COMM_WORLD; details of the message are returned in status */
        MPI_Recv(message, 20, MPI_CHAR, 0, 99, MPI_COMM_WORLD, &status);
        printf("received :%s:", message);
    }
    MPI_Finalize();

}   /* end of the MPI program */

Program 5  A simple send/receive example

6.2 MPI predefined datatypes

MPI associates a predefined datatype with each basic FORTRAN 77 and C type. Table 3 lists the correspondence for FORTRAN 77 and Table 4 the correspondence for C.

Table 3  MPI predefined datatypes and FORTRAN 77 datatypes

    MPI datatype              FORTRAN 77 datatype
    MPI_INTEGER               INTEGER
    MPI_REAL                  REAL
    MPI_DOUBLE_PRECISION      DOUBLE PRECISION
    MPI_COMPLEX               COMPLEX
    MPI_LOGICAL               LOGICAL
    MPI_CHARACTER             CHARACTER(1)
    MPI_BYTE                  (no corresponding type)
    MPI_PACKED                (no corresponding type)

Table 4  MPI predefined datatypes and C datatypes

    MPI datatype              C datatype
    MPI_CHAR                  signed char
    MPI_SHORT                 signed short int
    MPI_INT                   signed int
    MPI_LONG                  signed long int
    MPI_UNSIGNED_CHAR         unsigned char
    MPI_UNSIGNED_SHORT        unsigned short int
    MPI_UNSIGNED              unsigned int
    MPI_UNSIGNED_LONG         unsigned long int
    MPI_FLOAT                 float
    MPI_DOUBLE                double
    MPI_LONG_DOUBLE           long double
    MPI_BYTE                  (no corresponding type)
    MPI_PACKED                (no corresponding type)

The types MPI_BYTE and MPI_PACKED have no counterpart in FORTRAN 77 or C: MPI_BYTE denotes one byte (8 binary digits) of uninterpreted data. In addition, MPI allows datatypes for types that exist on a given platform but not in ANSI Fortran 77 or ANSI C; these optional datatypes are listed in Table 5.

Table 5  Optional MPI datatypes

    MPI datatype              Corresponding language datatype
    MPI_LONG_LONG_INT         long long int (C)
    MPI_DOUBLE_COMPLEX        DOUBLE COMPLEX (FORTRAN77)
    MPI_REAL2                 REAL*2 (FORTRAN77)
    MPI_REAL4                 REAL*4 (FORTRAN77)
    MPI_REAL8                 REAL*8 (FORTRAN77)
    MPI_INTEGER1              INTEGER*1 (FORTRAN77)
    MPI_INTEGER2              INTEGER*2 (FORTRAN77)
    MPI_INTEGER4              INTEGER*4 (FORTRAN77)

6.3 Datatype matching

6.3.1 Datatype matching rules

Figure 15 pictures a message-passing operation as three steps: the data are gathered from the send buffer and assembled into a message, the message is transferred, and the received message is disassembled into the receive buffer. Datatypes must match at each of these steps: (1) the type of the variable in the send buffer must match the MPI datatype named in the send call, (2) the datatype named in the send call must match the datatype named in the receive call, and (3) the datatype named in the receive call must match the type of the variable in the receive buffer. For example, a FORTRAN77 INTEGER is sent and received as MPI_INTEGER and a REAL as MPI_REAL; in C an int corresponds to MPI_INT and a float to MPI_FLOAT.

MPI_INTEGER MPI_INTEGER MPI_REAL MPI_REAL C int long MPI MPI_INT MPI_LONG MPI_INT MPI_LONG MPI_INT MPI_LONG MPI MPI_BYTE MPI_PACKED MPI_TYPE MPI_PACK MPI_UNPACK REAL a(20),b(20) CALL MPI_COMM_RANK(comm, rank, ierr) IF(rank.EQ.0) THEN CALL MPI_SEND(a(1), 10, MPI_REAL, 1, tag, comm, ierr) ELSE IF (rank.eq. 1) THEN CALL MPI_RECV(b(1), 15, MPI_REAL, 0, tag, comm, status, ierr) END IF 6 MPI_REAL REAL a(20),b(20) CALL MPI_COMM_RANK(comm, rank, ierr) IF(rank.EQ.0) THEN CALL MPI_SEND(a(1), 10, MPI_REAL, 1, tag, comm, ierr) ELSE IF (rank.eq. 1) THEN CALL MPI_RECV(b(1), 40, MPI_BYTE, 0, tag, comm, status, ierr) END IF 7 MPI_REAL MPI_BYTE REAL a(20),b(20) CALL MPI_COMM_RANK(comm, rank, ierr) IF(rank.EQ.0) THEN CALL MPI_SEND(a(1), 40, MPI_BYTE, 1, tag, comm, ierr) ELSE IF (rank.eq. 1) THEN CALL MPI_RECV(b(1), 60, MPI_BYTE, 0, tag, comm, status, ierr) END IF 31

8 MPI_BYTE MPI_BYTE MPI_BYTE, MPI_PACKED 6 7 MPI_REAL MPI_BYTE 8 MPI_BYTE MPI_CHARACTER FORTRAN 77 CHARACTER CHARACTER FORTRAN 77 CHARACTER*10 a CHARACTER*10 b CALL MPI_COMM_RANK(comm, rank, ierr) IF (rank.eq.0) THEN CALL MPI_SEND(a, 5, MPI_CHARACTER, 1, tag, comm, ierr) ELSE IF (rank.eq. 1) THEN CALL MPI_RECV(b(6), 5, MPI_CHARACTER, 0, tag, comm, status, ierr) END IF 9 MPI_CHARACTER 9 1 b 0 a Fortran CHARACTER, MPI MPI Fortran CHARACTER MPI 6.3.2 32

33 MPI_SEND( buf, count,datatype,dest,tag,comm) 32 64 MPI MPI MPI MPI MPI MPI (,MPI_BYTE ) MPI,, a b 10 10 ( ) a b 6.4 MPI 6.4.1 MPI MPI < / > < > MPI_SEND MPI_RECV 17 17

16 MPI_SEND MPI_RECV(buf,count,datatype,source,tag,comm,status) 17 MPI_RECV / tag 18 0 1 MPI_SEND( x,1,,1,tag1,comm) 1 MPI_SEND( y,1,,1,tag2,comm) 2 tag2 2 0 2 y x tag 1 MPI_RECV(x,1,,0,tag1,comm,status) tag1 1 1 1 MPI_RECV(y,1,,0,tag2,comm,status) 1 2 18 tag MPI 6.4.2 source,tag comm, source MPI_ANY_SOURCE tag tag MPI_ANY_TAG tag MPI_ANY_SOURCE MPI_ANY_TAG comm source ( source = MPI_ANY_SOURCE) tag( = MPI_ANY_TAG) MPI_ANY_SOURCE MPI_ANY_TAG MPI = Source = destination 34
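The tag scenario of Figure 18 can be written out as the following sketch. It is illustrative only: the variable names x and y, the tag values 10 and 20 and the use of MPI_DOUBLE are assumptions, while the calls themselves are the standard MPI_Send and MPI_Recv.

/* Sketch: using tags to steer two messages into the right variables (C binding). */
#include "mpi.h"

void exchange(int rank)
{
    double x = 1.0, y = 2.0;
    MPI_Status status;

    if (rank == 0) {                  /* process 0 sends two messages to process 1 */
        MPI_Send(&x, 1, MPI_DOUBLE, 1, 10, MPI_COMM_WORLD);   /* tag1 = 10 */
        MPI_Send(&y, 1, MPI_DOUBLE, 1, 20, MPI_COMM_WORLD);   /* tag2 = 20 */
    } else if (rank == 1) {           /* process 1 selects each message by its tag */
        MPI_Recv(&x, 1, MPI_DOUBLE, 0, 10, MPI_COMM_WORLD, &status);
        MPI_Recv(&y, 1, MPI_DOUBLE, 0, 20, MPI_COMM_WORLD, &status);
    }
}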

35 6.4.3 MPI MPI N 0 N-1 MPI_COMM_WORLD MPI MPI MPI_COMM_WORLD 6.5 MPI MPI MPI MPI MPI MPI MPI MPI MPI

7

7.1 Timing in MPI programs

MPI provides a portable timer:

MPI_WTIME()
    C:       double MPI_Wtime(void)
    FORTRAN: DOUBLE PRECISION MPI_WTIME()
MPI call 7  MPI_WTIME

MPI_WTIME returns the elapsed wall-clock time, in seconds, measured from some moment in the past. The typical use is to take the difference of two readings around the code to be timed:

    double starttime, endtime;
    ...
    starttime = MPI_Wtime();
    /* section of code to be timed */
    endtime = MPI_Wtime();
    printf("That took %f seconds\n", endtime - starttime);

Program 10  Timing a section of code with MPI_WTIME

MPI_WTICK()
    C:       double MPI_Wtick(void)
    FORTRAN: DOUBLE PRECISION MPI_WTICK()
MPI call 8  MPI_WTICK

MPI_WTICK returns the resolution, in seconds, of the timer used by MPI_WTIME.

MPI #include <stdio.h> #include <stdlib.h> #include "mpi.h" #include "test.h" int main( int argc, char **argv ) { int err = 0; double t1, t2; double tick; int i; MPI_Init( &argc, &argv ); t1 = MPI_Wtime();/* t1*/ t2 = MPI_Wtime();/* t2*/ if (t2 - t1 > 0.1 t2 - t1 < 0.0) { /* 0.1 */ err++; fprintf( stderr, "Two successive calls to MPI_Wtime gave strange results: (%f) (%f)\n", t1, t2 ); } /* 10 1 */ for (i = 0; i<10; i++) { t1 = MPI_Wtime();/* */ sleep(1);/* 1 */ t2 = MPI_Wtime();/* */ if (t2 - t1 >= (1.0-0.01) && t2 - t1 <= 5.0) break; /* */ if (t2 - t1 > 5.0) i = 9; /* */ } /* 10 */ if (i == 10) { /* */ fprintf( stderr, "Timer around sleep(1) did not give 1 second; gave %f\n",t2 - t1 ); err++; 37

} } tick = MPI_Wtick(); /* */ if (tick > 1.0 tick < 0.0) { /* */ err++; fprintf( stderr, "MPI_Wtick gave a strange result: (%f)\n", tick ); } MPI_Finalize( ); 11 MPI 7.2 MPI MPI rank MPI MPI_GET_PROCESSOR_NAME name, resultlen OUT name OUT resultlen int MPI_Get_processor_name ( char *name, int *resultlen) MPI_GET_PROCESSOR_NAME NAME, RESULTLEN, IERROR CHARACTER *(*) NAME INTEGER RESULTLEN, IERROR MPI 9 MPI_GET_PROCESSOR_NAME MPI_GET_PROCESSOR_NAME MPI_GET_VERSION(version, subversion) OUT version OUT subversion int MPI_Get_version(int * version, int * subversion) MPI_GET_VERSION(VERSION, SUBVERSION,IERROR) INTEGER VERSION, SUBVERSION, IERROR MPI 10 MPI_GET_VERSION MPI_GET_VERSION MPI version subversion MPI 38

program main include 'mpif.h' character*(mpi_max_processor_name) name integer resultlen, version, subversion, ierr C call MPI_Init( ierr ) name = " " C C C call MPI_Get_processor_name( name, resultlen, ierr ) name resultlen call MPI_GET_VERSION(version, subversion,ierr) MPI errs = 0 do i=resultlen+1, MPI_MAX_PROCESSOR_NAME if (name(i:i).ne. " ") then name resultlen errs = errs + 1 endif enddo if (errs.gt. 0) then print *, 'Non-blanks after name' else print *, name, " MPI version",version, ".", subversion endif call MPI_Finalize( ierr ) end 12 MPI 7.3 MPI MPI_INIT MPI MPI_INITALIZED MPI_INIT MPI_INITALIZED(flag) OUT flag MPI_INIT int MPI_Initialized(int *flag) MPI_INITALIZED(FLAG, IERROR) LOGICAL FLAG INTEGER IERROR MPI 11 MPI_INITALIZED 39

MPI_INITALIZED MPI_INIT flag=true flag=false MPI MPI MPI_ABORT(comm, errorcode) IN comm IN errorcode int MPI_Abort(MPI_Comm comm, int errorcode) MPI_ABORT(COMM, ERRORCODE, IERROR) INTEGER COMM, ERRORCODE, IERROR MPI 12 MPI_ABORT MPI_ABORT comm master #include "mpi.h" #include <stdio.h> /* masternode == 0 masternode!= 0 */ int main( int argc, char **argv ) { int node, size, i; int masternode = 0; /* */ MPI_Init(&argc, &argv); MPI_Comm_rank(MPI_COMM_WORLD, &node); MPI_Comm_size(MPI_COMM_WORLD, &size); /* */ for (i=1; i<argc; i++) { fprintf(stderr,"myid=%d,procs=%d,argv[%d]=%s\n",node,size,i,argv[i]); if (argv[i] && strcmp( "lastmaster", argv[i] ) == 0) { masternode = size-1; /* master*/ } } if(node == masternode) { /* master */ fprintf(stderr,"myid=%d is masternode Abort!\n",node); MPI_Abort(MPI_COMM_WORLD, 99); } 40

} else { /* master */ fprintf(stderr,"myid=%d is not masternode Barrier!\n",node); MPI_Barrier(MPI_COMM_WORLD); } MPI_Finalize(); 13 MPI 7.4 19 0 1 N-1 19 #include <stdio.h> #include "mpi.h" int main( argc, argv ) int argc; char **argv; { int rank, value, size; MPI_Status status; MPI_Init( &argc, &argv ); MPI_Comm_rank( MPI_COMM_WORLD, &rank ); MPI_Comm_size( MPI_COMM_WORLD, &size ); /* */ do { /* */ 41

if (rank == 0) { fprintf(stderr, "\nplease give new value="); /* 0 */ scanf( "%d", &value ); fprintf(stderr,"%d read <-<- (%d)\n",rank,value); if (size>1) { MPI_Send( &value, 1, MPI_INT, rank + 1, 0, MPI_COMM_WORLD ); fprintf(stderr,"%d send (%d)->-> %d\n", rank,value,rank+1); /* */ } } else { MPI_Recv( &value, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD, &status ); /* */ fprintf(stderr,"%d receive (%d)<-<- %d\n",rank,value,rank-1); if (rank < size - 1) { MPI_Send( &value, 1, MPI_INT, rank + 1, 0, MPI_COMM_WORLD ); fprintf(stderr,"%d send (%d)->-> %d\n", rank,value,rank+1); /* */ } } MPI_Barrier(MPI_COMM_WORLD); /* */ } while ( value>=0); MPI_Finalize( ); } 14 7 76-3 20 42

Please give new value=76 0 read <-<- (76) 0 send (76)->-> 1 1 receive (76)<-<- 0 1 send (76)->-> 2 2 receive (76)<-<- 1 2 send (76)->-> 3 3 receive (76)<-<- 2 3 send (76)->-> 4 4 receive (76)<-<- 3 4 send (76)->-> 5 5 receive (76)<-<- 4 5 send (76)->-> 6 6 receive (76)<-<- 5 Please give new value=-3 0 read <-<- (-3) 0 send (-3)->-> 1 1 receive (-3)<-<- 0 2 receive (-3)<-<- 1 3 receive (-3)<-<- 2 4 receive (-3)<-<- 3 4 send (-3)->-> 5 5 receive (-3)<-<- 4 6 receive (-3)<-<- 5 1 send (-3)->-> 2 2 send (-3)->-> 3 3 send (-3)->-> 4 5 send (-3)->-> 6 20 7.5 21 43

0 hello hello hello hello hello 1 2 hello 21 #include "mpi.h" #include <stdio.h> #include <stdlib.h> void Hello( void ); int main(int argc, char *argv[]) { int me, option, namelen, size; char processor_name[mpi_max_processor_name]; MPI_Init(&argc, &argv); MPI_Comm_rank(MPI_COMM_WORLD,&me); MPI_Comm_size(MPI_COMM_WORLD,&size); /* */ if (size < 2) { /* 2 */ fprintf(stderr, "systest requires at least 2 processes" ); MPI_Abort(MPI_COMM_WORLD,1); } MPI_Get_processor_name(processor_name,&namelen); /* */ fprintf(stderr,"process %d is alive on %s\n", me, processor_name); MPI_Barrier(MPI_COMM_WORLD); /* */ Hello(); /* */ MPI_Finalize(); } 44

void Hello( void ) /* */ { int nproc, me; int type = 1; int buffer[2], node; MPI_Status status; MPI_Comm_rank(MPI_COMM_WORLD, &me); MPI_Comm_size(MPI_COMM_WORLD, &nproc); /* */ if (me == 0) { /* 0 */ printf("\nhello test from all to all\n"); fflush(stdout); } for (node = 0; node<nproc; node++) { /* */ if (node!= me) { /* */ buffer[0] = me; /* */ buffer[1] = node; /* */ MPI_Send(buffer, 2, MPI_INT, node, type, MPI_COMM_WORLD); /* */ MPI_Recv(buffer, 2, MPI_INT, node, type, MPI_COMM_WORLD, &status); /* */ if ( (buffer[0]!= node) (buffer[1]!= me) ) { /* */ (void) fprintf(stderr, "Hello: %d!=%d or %d!=%d\n", buffer[0], node, buffer[1], me); printf("mismatch on hello process ids; node = %d\n", node); } printf("hello from %d to %d\n", me, node); /* */ fflush(stdout); } } } 15 45

7.6 tag 22 ROOT 0 1 2 i N-1 ROOT ROOT 0 22 #include "mpi.h" #include <stdio.h> int main(argc, argv) int argc; char **argv; { int rank, size, i, buf[1]; MPI_Status status; MPI_Init( &argc, &argv ); MPI_Comm_rank( MPI_COMM_WORLD, &rank ); MPI_Comm_size( MPI_COMM_WORLD, &size ); if (rank == 0) { for (i=0; i<100*(size-1); i++) { MPI_Recv( buf, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status ); printf( "Msg=%d from %d with tag %d\n", buf[0], status.mpi_source, status.mpi_tag ); } } else { for (i=0; i<100; i++) buf[0]=rank+i; MPI_Send( buf, 1, MPI_INT, 0, i, MPI_COMM_WORLD ); } MPI_Finalize(); } 16 46

7.7 MPI MPI 17 CALL MPI_COMM_RANK(comm, rank, ierr) IF (rank.eq.0) THEN CALL MPI_RECV(recvbuf, count, MPI_REAL, 1, tag, comm, status, ierr) CALL MPI_SEND(sendbuf, count, MPI_REAL, 1, tag, comm, ierr) ELSE IF( rank.eq. 1) CALL MPI_RECV(recvbuf, count, MPI_REAL, 0, tag, comm, status, ierr) CALL MPI_SEND(sendbuf, count, MPI_REAL, 0, tag, comm, ierr) END IF 17 23 47

24 CALL MPI_COMM_RANK(comm, rank, ierr) IF (rank.eq.0) THEN CALL MPI_SEND(sendbuf, count, MPI_REAL, 1, tag, comm, ierr) CALL MPI_RECV(recvbuf, count, MPI_REAL, 1, tag, comm, status, ierr) ELSE rank.eq.1 CALL MPI_SEND(sendbuf, count, MPI_REAK, 0, tag, comm, status, ierr) CALL MPI_RECV(recvbuf, count, MPI_REAL, 0, tag, comm, status, ierr) END IF 18 24 0 1 MPI 0 1 1 0 25 48

CALL MPI_COMM_RANK(comm, rank, ierr) IF (rank.eq.0) THEN CALL MPI_SEND(sendbuf, count, MPI_REAL, 1, tag, comm, ierr) CALL MPI_RECV(recvbuf, count, MPI_REAL, 1, tag, comm, status, ierr) ELSE rank.eq. 1 CALL MPI_RECV(recvbuf, count, MPI_REAL, 0, tag, comm, status, ierr) CALL MPI_SEND(sendbuf, count, MPI_REAL, 0, tag, comm, ierr) END IF 19 25 C A A D A D A D B C B C A C D B 49

7.8 MPI MPI MPI 50

51 8 MPI MPI MPI MPI MPI Jacobi MPI MPI MPI MPI SPMD MPI MPMD MPMD SPMD MPI SPMD SPMD SPMD SPMD SPMD SPMD 8.1 MPI 8.1.1 Jacobi Jacobi 20 Jacobi Jacobi

REAL A(N+1,N+1), B(N+1,N+1) DO K=1,STEP DO J=1,N DO I=1,N B(I,J)=0.25*(A(I-1,J)+A(I+1,J)+A(I,J+1)+A(I,J-1)) END DO END DO DO J=1,N DO I=1,N A(I,J)=B(I,J) END DO END DO 20 Jacobi 8.1.2 MPI Jacobi 26 4 0 1 2 3 1 26 Jacobi M M A(M,M) M=4*N 0 A(M,1:N) 1 A(M,N+1:2*N), 3 A(M,2*N+1:3*N) 3 A(M,3*N+1:M) 1 52

M*N N+2 0 1 8 0 Jacobi 27 FORTRAN 21 0 1 2 3 27 Jacobi C program main implicit none include 'mpif.h' integer totalsize,mysize,steps parameter (totalsize=16) parameter (mysize=totalsize/4,steps=10) C integer n, myid, numprocs, i, j,rc real a(totalsize,mysize+2),b(totalsize,mysize+2) integer begin_col,end_col,ierr integer status(mpi_status_size) call MPI_INIT( ierr ) call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr ) call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr ) 53

print *, "Process ", myid, " of ", numprocs, " is alive" C do j=1,mysize+2 do i=1,totalsize a(i,j)=0.0 end do end do if (myid.eq. 0) then do i=1,totalsize a(i,2)=8.0 end do end if if (myid.eq. 3) then do i=1,totalsize a(i,mysize+1)=8.0 end do end if do i=1,mysize+2 a(1,i)=8.0 a(totalsize,i)=8.0 end do C Jacobi do n=1,steps C C C C if (myid.lt. 3) then call MPI_RECV(a(1,mysize+2),totalsize,MPI_REAL,myid+1,10, * MPI_COMM_WORLD,status,ierr) end if if ((myid.gt. 0) ) then call MPI_SEND(a(1,2),totalsize,MPI_REAL,myid-1,10, * MPI_COMM_WORLD,ierr) end if if (myid.lt. 3) then call MPI_SEND(a(1,mysize+1),totalsize,MPI_REAL,myid+1,10, * MPI_COMM_WORLD,ierr) end if if (myid.gt. 0) then call MPI_RECV(a(1,1),totalsize,MPI_REAL,myid-1,10, 54

* MPI_COMM_WORLD,status,ierr) end if begin_col=2 end_col=mysize+1 if (myid.eq. 0) then begin_col=3 endif if (myid.eq. 3) then end_col=mysize endif do j=begin_col,end_col do i=2,totalsize-1 b(i,j)=(a(i,j+1)+a(i,j-1)+a(i+1,j)+a(i-1,j))*0.25 end do end do do j=begin_col,end_col do i=2,totalsize-1 a(i,j)=b(i,j) end do end do end do do i=2,totalsize-1 print *, myid,(a(i,j),j=begin_col,end_col) end do call MPI_Finalize(rc) end 21 MPI_SEND MPI_RECV Jacobi 8.1.3 Jacobi Jacobi MPI MPI 55

MPI_SENDRECV(sendbuf,sendcount,sendtype,dest,sendtag,recvbuf,recvcount, recvtype, source,recvtag,comm,status) IN sendbuf ( ) IN sendcount ( ) IN sendtype ( ) IN dest ( ) IN sendtag ( ) OUT recvbuf ( ) IN recvcount ( ) IN recvtype ( ) IN source ( ) IN recvtag ( ) IN comm ( ) OUT status (status) int MPI_Sendrecv(void *sendbuf, int sendcount,mpi_datatype sendtype, int dest, int sendtag, void *recvbuf, int recvcount, MPI_Datatype recvtype, int source, int recvtag, MPI_Comm comm, MPI_Status *status) MPI_SENDRECV(SENDBUF, SENDCOUNT, SENDTYPE, DEST, SENDTAG, RECVBUF, RECVCOUNT, RECVTYPE, SOURCE, RECVTAG, COMM, STATUS, IERROR) <type> SENDBUF(*), RECVBUF(*) INTEGER SENDCOUNT, SENDTYPE, DEST, SENDTAG, RECVCOUNT, RECVTYPE,SOURCE, RECVTAG, COMM,STATUS(MPI_STATUS_SIZE), IERROR MPI 13 MPI_SENDRECV MPI_SENDRECV MPI_SENDRECV_REPLACE MPI_SENDRECV MPI_SENDRECV_REPLACE MPI_SENDRECV 56

MPI_SENDRECV_REPLACE(buf,count,datatype,dest,sendtag,source,recvtag,comm, status) INOUT buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN sendtag ( ) IN source ( ) IN recvtag ( ) IN comm ( ) OUT status (status) int MPI_Sendrecv_replace(void *buf, int count, MPI_Datatype datatype, int dest, int sendtag, int source,int recvtag, MPI_Comm comm, MPI_Status *status) MPI_SENDRECV_REPLACE(BUF, COUNT, DATATYPE, DEST, SENDTAG, SOURCE, RECVTAG, COMM, STATUS, IERROR) BUF(*) INTEGER COUNT, DATATYPE, DEST, SENDTAG, SOURCE, RECVTAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR MPI 14 MPI_SENDRECV_REPLACE Jacobi MPI_SENDRECV 28 28 MPI_SENDRECV Jacobi 57

22 MPI_SENDRECV program main implicit none include 'mpif.h' integer totalsize,mysize,steps parameter (totalsize=16) parameter (mysize=totalsize/4,steps=10) integer n, myid, numprocs, i, j,rc real a(totalsize,mysize+2),b(totalsize,mysize+2) integer begin_col,end_col,ierr integer status(mpi_status_size) call MPI_INIT( ierr ) call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr ) call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr ) print *, "Process ", myid, " of ", numprocs, " is alive" C do j=1,mysize+2 do i=1,totalsize a(i,j)=0.0 end do end do if (myid.eq. 0) then do i=1,totalsize a(i,2)=8.0 end do end if if (myid.eq. 3) then do i=1,totalsize a(i,mysize+1)=8.0 end do end if do i=1,mysize+2 a(1,i)=8.0 a(totalsize,i)=8.0 end do C 58

do n=1,steps C C if (myid.eq. 0) then call MPI_SEND(a(1,mysize+1),totalsize,MPI_REAL,myid+1,10, * MPI_COMM_WORLD,ierr) else if (myid.eq. 3) then call MPI_RECV(a(1,1),totalsize,MPI_REAL,myid-1,10, * MPI_COMM_WORLD,status,ierr) else call MPI_SENDRECV(a(1,mysize+1),totalsize,MPI_REAL,myid+1,10, * a(1,1),totalsize,mpi_real,myid-1,10, * MPI_COMM_WORLD,status,ierr) end if if (myid.eq. 0) then call MPI_RECV(a(1,mysize+2),totalsize,MPI_REAL,myid+1,10, * MPI_COMM_WORLD,status,ierr) else if (myid.eq. 3) then call MPI_SEND(a(1,2),totalsize,MPI_REAL,myid-1,10, * MPI_COMM_WORLD,ierr) else call MPI_SENDRECV(a(1,2),totalsize,MPI_REAL,myid-1,10, * a(1,mysize+2),totalsize,mpi_real,myid+1,10, * MPI_COMM_WORLD,status,ierr) end if begin_col=2 end_col=mysize+1 if (myid.eq. 0) then begin_col=3 endif if (myid.eq. 3) then end_col=mysize endif do j=begin_col,end_col do i=2,totalsize-1 b(i,j)=(a(i,j+1)+a(i,j-1)+a(i+1,j)+a(i-1,j))*0.25 end do end do do j=begin_col,end_col do i=2,totalsize-1 a(i,j)=b(i,j) 59

end do end do end do do i=2,totalsize-1 print *, myid,(a(i,j),j=begin_col,end_col) end do call MPI_Finalize(rc) end 22 MPI_SENDRECV Jacobi 8.1.4 Jacobi MPI_PROC_NULL MPI MPI_PRC_NULL MPI_PROC_NULL Jacobi program main implicit none include 'mpif.h' integer totalsize,mysize,steps parameter (totalsize=16) parameter (mysize=totalsize/4,steps=10) integer n, myid, numprocs, i, j,rc real a(totalsize,mysize+2),b(totalsize,mysize+2) integer begin_col,end_col,ierr integer left,right,tag1,tag2 integer status(mpi_status_size) call MPI_INIT( ierr ) call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr ) call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr ) print *, "Process ", myid, " of ", numprocs, " is alive" 60

C do j=1,mysize+2 do i=1,totalsize a(i,j)=0.0 end do end do if (myid.eq. 0) then do i=1,totalsize a(i,2)=8.0 end do end if if (myid.eq. 3) then do i=1,totalsize a(i,mysize+1)=8.0 end do end if do i=1,mysize+2 a(1,i)=8.0 a(totalsize,i)=8.0 end do C C C C tag1=3 tag2=4 if (myid.gt. 0) then left=myid-1 else left=mpi_proc_null end if if (myid.lt. 3) then right=myid+1 else right=mpi_proc_null end if Jacobi do n=1,steps call MPI_SENDRECV(a(1,mysize+1),totalsize,MPI_REAL,right,tag1, * a(1,1),totalsize,mpi_real,left,tag1, * MPI_COMM_WORLD,status,ierr) call MPI_SENDRECV(a(1,2),totalsize,MPI_REAL,left,tag2, * a(1,mysize+2),totalsize,mpi_real,right,tag2, * MPI_COMM_WORLD,status,ierr) 61

begin_col=2 end_col=mysize+1 if (myid.eq. 0) then begin_col=3 endif if (myid.eq. 3) then end_col=mysize endif do j=begin_col,end_col do i=2,totalsize-1 b(i,j)=(a(i,j+1)+a(i,j-1)+a(i+1,j)+a(i-1,j))*0.25 end do end do do j=begin_col,end_col do i=2,totalsize-1 a(i,j)=b(i,j) end do end do end do do i=2,totalsize-1 print *, myid,(a(i,j),j=begin_col,end_col) end do call MPI_Finalize(rc) end 23 Jacobi 8.2 MPI 8.2.1 C=A B 29 B A B A A 62

A B 29 program main include "mpif.h" integer MAX_ROWS,MAX_COLS, rows, cols parameter (MAX_ROWS=1000, MAX_COLS=1000) double precision a(max_rows, MAX_COLS),b(MAX_COLS),c(MAX_COLS) double precision buffer (MAX_COLS), ans integer myid, master, numprocs, ierr, status(mpi_status_size) integer i,j,numsent, numrcvd, sender integer anstype, row call MPI_INIT(ierr) call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr) call MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr) master=0 rows=100 cols=100 C if (myid.eq. master) then A B do i=1,cols b(i)=1 do j=1,rows a(i,j)=i end do end do numsent=0 numrcvd=0 63

C C C C C C C C C C C C B call MPI_BCAST(b,cols,MPI_DOUBLE_PRECISION,master, $ MPI_COMM_WORLD, ierr) A numprocs-1 do i=1,min(numprocs-1,rows) do j=1,cols buffer(j)=a(i,j) end do call MPI_SEND(buffer, cols, MPI_DOUBLE_PRECISION,i, $ i,mpi_comm_world, ierr) numsent=numsent+1 end do do i=1,row call MPI_RECV(ans, 1,MPI_DOUBLE_PRECISION, MPI_ANY_SOURCE, $ MPI_ANY_TAG,MPI_COMM_WORLD, status, ierr) sender=status(mpi_source) anstype=status(mpi_tag) C c(anstype)=ans if (numsent.lt. rows) then do j=1,cols buffer(j)=a(numsent+1,j) end do call MPI_SEND(buffer,cols, MPI_DOUBLE_PRECISION, sender, $ numsent+1,mpi_comm_world, ierr) numsent=numsent+1 else 0 call MPI_SEND(1.0,0,MPI_DOUBLE_PRECISION,sender, $ 0, MPI_COMM_WORLD, ierr) end if else B call MPI_BCAST(b,cols,MPI_DOUBLE_PRECISION,master, $ MPI_COMM_WORLD, ierr) 64

C A 90 call MPI_RECV(buffer,cols, MPI_DOUBLE_PRECISION, master, $ MPI_ANY_TAG, MPI_COMM_WORLD, status,ierr) C 0 if (status(mpi_tag).ne. 0) then row=status(mpi_tag) ans=0.0 do i=1,cols ans=ans+buffer(i)*b(i) end do C call MPI_SEND(ans, 1, MPI_DOUBLE_PRECISION, master, row, $ MPI_COMM_WORLD, ierr) goto 90 end if endif call MPI_FINALIZE(ierr) end 24 8.2.2 30 0 1 2... 30 65

#include <stdio.h> #include "mpi.h" int main( argc, argv ) int argc; char **argv; { int rank, size; MPI_Init( &argc, &argv ); MPI_Comm_rank( MPI_COMM_WORLD, &rank ); if (rank == 0) master_io(); /* 0 */ else slave_io(); /* */ MPI_Finalize( ); } #define MSG_EXIT 1 #define MSG_PRINT_ORDERED 2 /* */ #define MSG_PRINT_UNORDERED 3 /* */ /* */ int master_io( void ) { int i,j, size, nslave, firstmsg; char buf[256], buf2[256]; MPI_Status status; MPI_Comm_size( MPI_COMM_WORLD, &size );/* */ nslave = size - 1;/* */ while (nslave > 0) {/* */ MPI_Recv( buf, 256, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status );/* */ switch (status.mpi_tag) { case MSG_EXIT: nslave--; break;/* 1*/ case MSG_PRINT_UNORDERED:/* */ fputs( buf, stdout ); break; case MSG_PRINT_ORDERED:/* */ firstmsg = status.mpi_source; for (i=1; i<size; i++) {/* */ if (i == firstmsg) fputs( buf, stdout );/* */ 66

else {/* */ MPI_Recv( buf2, 256, MPI_CHAR, i, MSG_PRINT_ORDERED, MPI_COMM_WORLD, &status );/* */ fputs( buf2, stdout ); } } break; } } } /* */ int slave_io( void ) { char buf[256]; int rank; MPI_Comm_rank( MPI_COMM_WORLD, &rank );/* */ sprintf( buf, "Hello from slave %d ordered print\n", rank ); MPI_Send( buf, strlen(buf) + 1, MPI_CHAR, 0, MSG_PRINT_ORDERED, MPI_COMM_WORLD );/* */ sprintf( buf, "Goodbye from slave %d, ordered print\n", rank ); MPI_Send( buf, strlen(buf) + 1, MPI_CHAR, 0, MSG_PRINT_ORDERED, MPI_COMM_WORLD );/* */ sprintf( buf, "I'm exiting (%d),unordered print\n", rank ); MPI_Send( buf, strlen(buf) + 1, MPI_CHAR, 0, MSG_PRINT_UNORDERED, MPI_COMM_WORLD );/* */ MPI_Send( buf, 0, MPI_CHAR, 0, MSG_EXIT, MPI_COMM_WORLD );/* */ } 25 31 10 1 9 Hello from slave 1,ordered print Hello from slave 2,ordered print Hello from slave 3,ordered print Hello from slave 4,ordered print Hello from slave 5,ordered print Hello from slave 6,ordered print Hello from slave 7,ordered print Hello from slave 8,ordered print Hello from slave 9,ordered print 67

Goodbye from slave 1,ordered print Goodbye from slave 2,ordered print Goodbye from slave 3,ordered print Goodbye from slave 4,ordered print Goodbye from slave 5,ordered print Goodbye from slave 6,ordered print Goodbye from slave 7,ordered print Goodbye from slave 8,ordered print Goodbye from slave 9,ordered print I'm exiting (1),unordered print I'm exiting (3),unordered print I'm exiting (4),unordered print I'm exiting (7),unordered print I'm exiting (8),unordered print I'm exiting (9),unordered print I'm exiting (2),unordered print I'm exiting (5),unordered print I'm exiting (6),unordered print 31 8.3 MPI MPI MPI MPI 68

9 MPI MPI standard mode bufferedmode synchronous-mode ready-mode MPI 1 2 3 4 MPI 6 MPI MPI_SEND MPI_RECV MPI_BSEND MPI_SSEND MPI_RSEND MPI B S R 9.1 MPI 32 MPI MPI MPI 69

32 9.2 33 MPI_BSEND(buf, count, datatype, dest, tag, comm) IN buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN tag ( ) IN comm ( ) int MPI_Bsend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm) MPI_BSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR) <type>buf(*) INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR MPI 15 MPI_BSEND MPI_BSEND MPI_SEND 70

33 MPI MPI MPI_BUFFER_ATTACH( buffer, size) IN buffer ( ) IN size ( ) int MPI_Buffer_attach( void* buffer, int size) MPI_BUFFER_ATTACH( BUFFER, SIZE, IERROR) <type>bufferr(*) INTEGER SIZE, IERROR MPI 16 MPI_BUFFER_ATTACH MPI_BUFFER_ATTACH size MPI MPI_BUFFER_DETACH( buffer, size) OUT buffer ( ) OUT size ( ) int MPI_Buffer_detach( void** buffer, int* size) MPI_BUFFER_DETACH( BUFFER, SIZE, IERROR) <type>buffer(*) INTEGER SIZE, IERROR MPI 17 MPI_BUFFER_DETACH MPI_BUFFER_DETACH size buffer 5 71

#include <stdio.h> #include <stdlib.h> #include "mpi.h" #define SIZE 6 /* */ static int src = 0; static int dest = 1; void Generate_Data ( double *, int ); /* */ void Normal_Test_Recv ( double *, int ); /* */ void Buffered_Test_Send ( double *, int ); /* */ void Generate_Data(buffer, buff_size) double *buffer; int buff_size; { int i; for (i = 0; i < buff_size; i++) buffer[i] = (double)i+1; } void Normal_Test_Recv(buffer, buff_size) double *buffer; int buff_size; { int i, j; MPI_Status Stat; double *b; b = buffer; /* buff_size - 1 */ MPI_Recv(b, (buff_size - 1), MPI_DOUBLE, src, 2000, MPI_COMM_WORLD, &Stat); fprintf(stderr,"standard receive a message of %d data\n",buff_size-1); for (j=0;j<buff_size-1;j++) fprintf(stderr," buf[%d]=%f\n",j,b[j]); b += buff_size - 1; /* */ MPI_Recv(b, 1, MPI_DOUBLE, src, 2000, MPI_COMM_WORLD, &Stat); fprintf(stderr,"standard receive a message of one data\n"); fprintf(stderr,"buf[0]=%f\n",*b); 72

} void Buffered_Test_Send(buffer, buff_size) double *buffer; int buff_size; { int i, j; void *bbuffer; int size; fprintf(stderr,"buffered send message of %d data\n",buff_size-1); for (j=0;j<buff_size-1;j++) fprintf(stderr,"buf[%d]=%f\n",j,buffer[j]); /* buff_size - 1 */ MPI_Bsend(buffer, (buff_size - 1), MPI_DOUBLE, dest, 2000, MPI_COMM_WORLD); buffer += buff_size - 1; fprintf(stderr,"buffered send message of one data\n"); fprintf(stderr,"buf[0]=%f\n",*buffer); /* 1 */ MPI_Bsend(buffer, 1, MPI_DOUBLE, dest, 2000, MPI_COMM_WORLD); /* */ MPI_Buffer_detach( &bbuffer, &size ); /* */ MPI_Buffer_attach( bbuffer, size ); } int main(int argc, char **argv) { int rank; /* My Rank (0 or 1) */ double buffer[size], *tmpbuffer, *tmpbuf; int tsize, bsize; char *Current_Test = NULL; MPI_Init(&argc, &argv); MPI_Comm_rank(MPI_COMM_WORLD, &rank); if (rank == src) /* */ Generate_Data(buffer, SIZE);/* */ MPI_Pack_size( SIZE, MPI_DOUBLE, MPI_COMM_WORLD, &bsize ); /* SIZE MPI_DOUBLE */ tmpbuffer = (double *) malloc( bsize + 2*MPI_BSEND_OVERHEAD ); /* */ 73

if (!tmpbuffer) { fprintf( stderr, "Could not allocate bsend buffer of size %d\n", bsize ); MPI_Abort( MPI_COMM_WORLD, 1 ); } MPI_Buffer_attach( tmpbuffer, bsize + MPI_BSEND_OVERHEAD ); /* MPI MPI */ Buffered_Test_Send(buffer, SIZE);/* */ MPI_Buffer_detach( &tmpbuf, &tsize );/* */ } else if (rank == dest) { /* */ Normal_Test_Recv(buffer, SIZE);/* */ } } else { fprintf(stderr, "*** This program uses exactly 2 processes! ***\n"); /* */ MPI_Abort( MPI_COMM_WORLD, 1 ); } MPI_Finalize(); 26 9.3 MPI_SSEND(buf, count, datatype, dest, tag, comm) IN buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN tag ( ) IN comm ( ) int MPI_Ssend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm) MPI_SSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR) <type> BUF(*) INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR) MPI 18 MPI_SSEND 34 74

34 1 4 tag 1 2 #include <stdio.h> #include "mpi.h" #define SIZE 10 /* Amount of time in seconds to wait for the receipt of the second Ssend message */ static int src = 0; static int dest = 1; int main( int argc, char **argv) { int rank; /* My Rank (0 or 1) */ int act_size = 0; int flag, np, rval, i; int buffer[size]; MPI_Status status, status1, status2; int count1, count2; MPI_Init(&argc, &argv); MPI_Comm_rank(MPI_COMM_WORLD, &rank); MPI_Comm_size( MPI_COMM_WORLD, &np ); if (np!= 2) { fprintf(stderr, "*** This program uses exactly 2 processes! ***\n"); MPI_Abort( MPI_COMM_WORLD, 1 ); } act_size = 5;/* */ if (rank == src) { /* */ act_size = 1; MPI_Ssend( buffer, act_size, MPI_INT, dest, 1, MPI_COMM_WORLD ); /* tag 1*/ fprintf(stderr,"mpi_ssend %d data,tag=1\n", act_size); 75

} act_size = 4; MPI_Ssend( buffer, act_size, MPI_INT, dest, 2, MPI_COMM_WORLD ); /* 4 tag 2*/ fprintf(stderr,"mpi_ssend %d data,tag=2\n", act_size); } else if (rank == dest) {/* */ MPI_Recv( buffer, act_size, MPI_INT, src, 1, MPI_COMM_WORLD, &status1 ); /* act_size tag 1*/ MPI_Recv( buffer, act_size, MPI_INT, src, 2, MPI_COMM_WORLD, &status2 ); /* act_size tag 2*/ MPI_Get_count( &status1, MPI_INT, &count1 );/* 1 */ fprintf(stderr,"receive %d data,tag=%d\n",count1,status1.mpi_tag); MPI_Get_count( &status2, MPI_INT, &count2 );/* 2 */ fprintf(stderr,"receive %d data,tag=%d\n",count2,status2.mpi_tag); } MPI_Finalize(); 27 9.4 MPI_RSEND(buf, count, datatype, dest, tag, comm) IN buf ( ) IN count ( ) IN datatype ( ) IN dest ( ) IN tag ( ) IN comm ( ) int MPI_Rsend(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm) MPI_RSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR) <type>buf(*) INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR MPI 19 MPI_RSEND 35 76

35 1 4 1 2 3 4 36 1 2 3 4 3 4 3 4 2 3 1 2 1 4 C program rsendtest include 'mpif.h' integer ierr call MPI_Init(ierr) call test_rsend 77

call MPI_Finalize(ierr) end subroutine test_rsend include 'mpif.h' integer TEST_SIZE parameter (TEST_SIZE=2000) integer ierr, prev, next, count, tag, index, i, outcount, $ requests(2), indices(2), rank, size, $ status(mpi_status_size), statuses(mpi_status_size,2) logical flag real send_buf( TEST_SIZE ), recv_buf ( TEST_SIZE ) call MPI_Comm_rank( MPI_COMM_WORLD, rank, ierr ) call MPI_Comm_size( MPI_COMM_WORLD, size, ierr ) if (size.ne. 2) then print *, 'This test requires exactly 2 processes' call MPI_Abort( 1, MPI_COMM_WORLD, ierr ) endif C C C C C C next = rank + 1 if (next.ge. size) next = 0 prev = rank - 1 if (prev.lt. 0) prev = size - 1 if (rank.eq. 0) then print *, " Rsend Test " end if tag = 1456 count = TEST_SIZE / 3 if (rank.eq. 0) then call MPI_Recv( MPI_BOTTOM, 0, MPI_INTEGER, next, tag, $ MPI_COMM_WORLD, status, ierr ) 0 0 MPI_BOTTOM MPI print *,"Process ",rank," post Ready send" call MPI_Rsend(send_buf, count, MPI_REAL, next, tag, $ MPI_COMM_WORLD, ierr) else print *, "process ",rank," post a receive call" call MPI_Irecv(recv_buf, TEST_SIZE, MPI_REAL, 78

C C C C C $ MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, $ requests(1), ierr) 1 call MPI_Send( MPI_BOTTOM, 0, MPI_INTEGER, next, tag, $ MPI_COMM_WORLD, ierr ) MPI_Irecv call MPI_Wait( requests(1), status, ierr ) print *,"Process ", rank," Receive Rsend message from ", $ status(mpi_source) end if end 28 MPI 79

10 MPICH MPI MPI MPICH MPICH Linux NT MPI MPICH MPI MPICH MPI MPICH MPI MPICH MPI MPICH Argonne National Laboratory Mississippi State University IBM MPI 10.1 Linux MPICH 10.1.1 1 MPICH mpich.tar.gz mpich.tar.z mpich.tar.gz gunzip http://www.mcs.anl.org/mpi/mpich/ ftp ftp://ftp.mcs.anl.org/pub/mpi ftp://ftp.mcs.anl.org/pub/mpisplit ftp://ftp.mcs.anl.org/pub/mpisplit cat 2 tar zxvf mpich.tar.gz gunzip c mpich.tar.gz tar xovf zcat mpich.tar.z tar xovf uncompress mpich.tar.z tar xvf mpich.tar 3 mpich cd mpich 1.1.1 1.1.2 4 Makefile./configure prefix./configure prefix=/usr/local/mpich-1.2.1 make configure MPI make MPI 80

5 cd examples/basic make cpi../../bin/mpirun np 4 cpi $(HOME)/mpich make testing 6 mpich make install prefix 10.1.2 $ HOME /mpich-1.2.1/mpi-2-c++ mpich C++ $ HOME /mpich-1.2.1/bin mpich $ HOME /mpich-1.2.1/doc mpich $ HOME /mpich-1.2.1/examples mpich $ HOME /mpich-1.2.1/f90modules mpich Fortran90 $ HOME /mpich-1.2.1/include mpich $ HOME /mpich-1.2.1/lib mpich $ HOME /mpich-1.2.1/man mpich $ HOME /mpich-1.2.1/mpe mpich $ HOME /mpich-1.2.1/mpid mpich $ HOME /mpich-1.2.1/romio mpich I/O $ HOME /mpich-1.2.1/share upshot jumpshot $ HOME /mpich-1.2.1/src mpich $ HOME /mpich-1.2.1/util mpich $ HOME /mpich-1.2.1/www mpich MPI 81

10.1.3 mpicc/mpicc/mpif77/mpif90 mpicc C++ MPI mpicc C mpif77 mpif90 FORTRAN77 Fortran90 MPI MPI mpicc C -mpilog MPE log -mpitrace MPI -mpilog -mpianim -show -help -echo C++/C/FORTRAN77/Fortran90 10.1.4 MPI SPMD Single Program Multiple Data MPI MASTER/SLAVER MPI MPI C FORTRAN MPI 1 2 N MPI 37 MPI 82

MPI 37 1 MPI MPI 2 3 mpirun MPI 10.1.5 MPI MPI /etc/hosts.equiv MPI tp5 16 MPI tp1,tp2,...,tp16 tp1,...,tp16 /etc/hosts.equiv tp5 tp5 /etc/hosts.equiv.rhosts MPI home.rhosts tp1 pact tp5 pact tp1 pact home.rhosts tp5 pact MPI 10.1.6 MPI mpirun np N program N program MPI $(HOME)/mpich/util/machines/machines.LINUX tp5.cs.tsinghua.edu.cn tp1.cs.tsinghua.edu.cn tp2.cs.tsinghua.edu.cn tp3.cs.tsinghua.edu.cn tp4.cs.tsinghua.edu.cn tp8.cs.tsinghua.edu.cn 83

6 MPI tp5.cs.tsinghua.edu.cn $(HOME)/mpich/examples/basic/ mpirun np 6 cpi {tp1,tp2,tp3,tp4,tp8} $(HOME)/mpich/examples/basic/ cpi mashines.linux mpirun machinefile hosts np 6 cpi hosts mpirun p4pg pgfile cpi pgfile 38 < > < > < > < > < > < > < > < > < > 38 39 tp5 0 /home/pact/mpich/examples/basic/cpi tp1 1 /home/pact/mpich/examples/basic/cpi tp2 1 /home/pact/mpich/examples/basic/cpi tp3 1 /home/pact/mpich/examples/basic/cpi tp4 1 /home/pact/mpich/examples/basic/cpi tp8 1 /home/pact/mpich/examples/basic/cpi 39 0 tp5 0 tp5 MPI mpirun MPI MPI MPI 84

mpirun -np <number of processes> <program name and arguments> MPI MPI chameleon ( chameleon/pvm, chameleon/p4,...) meiko ( meiko ) paragon (paragon ch_nx ) p4 ( ch_p4 ) ibmspx (IBM SP2 ch_eui) anlspx (ANLs SPx ch_eui) ksr (KSR 1 2 ch_p4) sgi_mp (SGI ch_shmem) cray_t3d (Cray T3D t3d) smp (SMPs ch_shmem) execer ( ) MPI MPI mpirun [mpirun_options...] <progname> [options...] -arch <architecture> ${MPIR_HOME}/util/machines machines.<arch> -h -machine <machine name> use startup procedure for <machine name> -machinefile <machine-file name> -np <np> -nolocal -stdin filename -t -v -dbx dbx -gdb gdb -xxgdb xxgdb -tv totalview NEC - CENJU-3 -batch -stdout filename -stderr filename Nexus -nexuspg filename -np -nolocal -leave_pg -nexusdb filename Nexus -e execer -pg p4 execer -leave_pg P4 -p4pg filename -np -nolocal 85

-leave_pg -tcppg filename tcp -np nolocal -leave_pg -p4ssport num p4 num num=0 MPI_P4SSPORT MPI_USEP4SSPORT MPI_P4SSPORT -p4ssport 0 -mvhome home -mvback files -maxtime min -nopoll -mem value -cpu time CPU IBM SP2 -cac name ANL Intel Paragon -paragontype name -paragonname name shells -paragonpn name Paragon -arch -np MPI sun4 rs6000 sun4 2 rs6000 3 mpirun -arch sun4 -np 2 -arch rs6000 -np 3 program sun4 program.sun4 rs6000 program.rs6000 %a mpirun -arch sun4 -np 2 -arch rs6000 -np 3 program.%a /tmp/me/sun4 /tmp/me/rs6000 mpirun -arch sun4 -np 2 -arch rs6000 -np 3 /tmp/me/%a/program 10.1.7 mpiman MPI UNIX man Web HTML mpiman xman, X -xmosaic xmosaic Web -mosaic mosaic Web -netscape netscape Web -xman X xman -man man program ( mpiman -man MPI_Send) mpireconfig 86

make MPICH make mpireconfig filename filename filename.in 10.2 Windows NT MPICH NT MPICH MPICH.NT.1.2.0.4 tcp/ip, VIA sockets VI MS Visual C++ 6.0 Digital Fortran 6.0 FORTRAN MPI PMPI C FORTRAN 10.2.1 ftp://ftp.mcs.anl.gov/pub/mpi/nt/mpich.nt.1.2.0.4.all.zip setup MPICH NT c:\program Files\Argonne National Lab\MPICH.NT.1.2.0.4 MPI launcher MPI sdk 10.2.2 C/C++ MPI MS Visual C++ makefile project include [MPICH Home]\include Debug - /MTd Release - /MT Debug - ws2_32.lib mpichd.lib pmpichd.lib romiod.lib Release - ws2_32.lib mpich.lib pmpich.lib romio.lib pmpich*.lib MPI PMPI_ * lib [MPICH Home]\lib MPI build 87

FORTRAN FORTRAN Visual Fortran 6+ mpif.h Visual Fortran 6+ /iface:cref /iface:nomixed_str_len_arg C/C++ NT MPICH VIA 10.2.3 NT MPICH Remote Shell Server MPIRun.exe Simple Launcher MPIRun.exe MPICH Remote Shell Server MPI DCOM server SYSTEM MPIRun Remote Shell Server MPIRun MPI Remote Shell Server MPIRun.exe MPI MPIRun -np MPIRun.exe c:\program Files\Argonne National Lab\MPICH.NT.1.2.0.4\RemoteShell\Bin MPI MPI MPIConfig MPIConfig c:\program Files\Argonne National Lab\MPICH.NT.1.2.0.4\RemoteShell\Bin MPI MPIConfig MPI Refresh: Find: Verify: DCOM server Set: "set HOSTS" MPIRun "set TEMP" remote shell service MPI C:\ 88

timeout Remote Shell Server MPIRun.exe MPI 40 MPIRun configfile [-logon] [args...] MPIRun -np #processes [-logon] [-env "var1=val1 var2=val2..."] executable [args...] MPIRun -localonly #processes [-env "var1=val1 var2=val2..."] executable [args...] 41 40 NT MPI exe c:\somepath\myapp.exe \\host\share\somepath\myapp.exe [args arg1 arg2 arg3...] [env VAR1=VAL1 VAR2=VAL2... VARn=VALn] hosts hosta #procs [path\myapp.exe] hostb #procs [\\host\share\somepath\myapp2.exe] hostc #procs... 41 NT MPI 8 NT01 NT02... NT08 MPI testmpint c:\mpint 42 mpiconf1 exe c:\mpint\testmpint.exe hosts NT01 1 NT02 1 NT03 1 NT04 1 NT05 1 NT06 1 NT07 1 NT08 1 42 NT MPI 1 mpirun mpiconf1 testmpint 8 89