@@ -718,8 +718,8 @@ Hello world from process 0 of 4 on host r38u31n1000
### Coprocessor-Only Model
There are two ways to execute an MPI code on a single coprocessor: 1) launch the program using `mpirun` from the
coprocessor; or 2) launch the task using `mpiexec.hydra` from a host.
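Both variants are sketched below in minimal form (the binary name `~/mpi-test-mic` is illustrative, and the environment setup described later in this section is omitted); the following subsections show the full procedure.

```console
# 1) on the coprocessor itself, after logging in with ssh:
$ ssh mic0
$ mpirun -np 4 ~/mpi-test-mic

# 2) from the host, targeting the coprocessor:
$ mpiexec.hydra -host r38u31n1000-mic0 -np 4 ~/mpi-test-mic
```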
#### Execution on Coprocessor
...
@@ -815,7 +815,7 @@ Hello world from process 0 of 4 on host r38u31n1000-mic0
```
!!! hint
**"mpiexec.hydra"** requires a file on the MIC filesystem. If the file is missing, contact the system administrators.
`mpiexec.hydra` requires a file on the MIC filesystem. If the file is missing, contact the system administrators.
A simple test to see if the file is present is to execute:
A simple test to see if the file is present is to execute:
...
@@ -854,7 +854,7 @@ This output means that the PBS allocated nodes cn204 and cn205, which means that
- to connect to the accelerator on the first node from the first node: `$ ssh r25u25n710-mic0` or `$ ssh mic0`
- to connect to the accelerator on the second node from the first node: `$ ssh r25u25n711-mic0`
At this point, we expect that the correct modules are loaded and the binary is compiled. For parallel execution, `mpiexec.hydra` is used. Again, the first step is to tell mpiexec that the MPI code can be executed on the MIC accelerators by setting the environment variable `I_MPI_MIC`; do not forget to define the correct FABRIC and PROVIDER.
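A minimal sketch of that environment setup is shown below; `I_MPI_MIC` is the variable named above, while the fabric and DAPL provider values are only illustrative examples and must match the configuration recommended for your cluster.

```console
$ export I_MPI_MIC=1                              # enable MPI ranks on the MIC accelerators
$ export I_MPI_FABRICS=shm:dapl                   # illustrative fabric selection
$ export I_MPI_DAPL_PROVIDER_LIST=ofa-v2-mlx4_0-1u,ofa-v2-scif0,ofa-v2-mcm-1  # illustrative, site-specific DAPL providers
```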
In a symmetric mode, MPI programs are executed on both the host computer(s) and the MIC accelerator(s). Since the MIC has a different
architecture and requires a different binary file produced by the Intel compiler, two different files have to be compiled before the MPI program is executed.
In the previous section, we compiled two binary files, one for the hosts (`mpi-test`) and one for the MIC accelerators (`mpi-test-mic`). These two binaries can be executed at once using `mpiexec.hydra`:
```console
$mpirun \
...
@@ -944,7 +944,7 @@ In addition, if a naming convention is set in a way that the name of the binary
$export I_MPI_MIC_POSTFIX=-mic
```
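With the postfix set, a mixed machine file can list hosts and accelerators together while only the host binary name is given on the command line. The `hostname:process-count` layout shown here is only an illustration of what such a file might contain (the hostnames are the nodes from the example above; the process counts are made up):

```console
$ cat hosts_file_mix
r25u25n710:2
r25u25n710-mic0:2
r25u25n711:2
r25u25n711-mic0:2
```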
To run the MPI code using `mpirun` and the machine file `hosts_file_mix`, use:
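A minimal sketch of such an invocation, assuming the machine file above, the environment variables set earlier, and an illustrative fabric value, is:

```console
$ mpirun \
 -genv I_MPI_FABRICS shm:dapl \
 -machinefile hosts_file_mix \
 ~/mpi-test
```

Because `I_MPI_MIC_POSTFIX` is set to `-mic`, the ranks placed on the accelerators execute `~/mpi-test-mic`, while the host ranks execute `~/mpi-test`.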