Commit b6974b0a authored by Jan Siwiec

Update running-mpich2.md

@@ -2,7 +2,7 @@
## MPICH2 Program Execution
The MPICH2 programs use the MPD daemon or an SSH connection to spawn processes, so no PBS support is needed. However, a PBS allocation is required to access the compute nodes. On Anselm, **Intel MPI** and **mpich2 1.9** are the MPICH2-based MPI implementations.
### Basic Usage
@@ -26,7 +26,7 @@ $ mpirun -ppn 1 -hostfile $PBS_NODEFILE ./helloworld_mpi.x
In this example, we allocate 4 nodes via the express queue interactively. We set up the Intel MPI environment and interactively run the helloworld_mpi.x program, requesting MPI to spawn 1 process per node.
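The steps above can be sketched as a console session (the queue name, module name, and core count are illustrative assumptions and may differ on your system; only the mpirun line is taken from the example itself):

```console
$ qsub -q qexp -l select=4:ncpus=16 -I                       # allocate 4 nodes interactively (queue/core count assumed)
$ module load intel                                          # set up the Intel MPI environment (module name assumed)
$ mpirun -ppn 1 -hostfile $PBS_NODEFILE ./helloworld_mpi.x   # spawn 1 MPI process per node
```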
Note that the executable helloworld_mpi.x must be available at the same path on all nodes. This is automatically fulfilled on the /home and /scratch filesystems.
You need to preload the executable if running on the local scratch /lscratch filesystem:
```console
$ pwd
@@ -39,7 +39,7 @@ $ mpirun -ppn 1 -hostfile $PBS_NODEFILE ./helloworld_mpi.x
Hello world! from rank 3 of 4 on host cn110
```
In this example, we assume the executable helloworld_mpi.x is present in the shared home directory. We run the cp command via mpirun, copying the executable from the shared home to the local scratch. The second mpirun executes the binary in the /lscratch/15210.srv11 directory on nodes cn17, cn108, cn109, and cn110, one process per node.
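A minimal sketch of the preload workflow described above (the exact paths and the use of $PBS_JOBID are assumptions; the diff elides the original commands):

```console
$ mpirun -ppn 1 -hostfile $PBS_NODEFILE cp ~/helloworld_mpi.x /lscratch/$PBS_JOBID/   # copy to local scratch on every node (paths assumed)
$ cd /lscratch/$PBS_JOBID
$ mpirun -ppn 1 -hostfile $PBS_NODEFILE ./helloworld_mpi.x                            # run the preloaded binary, 1 process per node
```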
!!! note
    MPI process mapping may be controlled by PBS parameters.
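For instance, the number of MPI processes placed on each node can be requested at allocation time via the PBS select statement (the queue name and core counts below are illustrative assumptions):

```console
$ qsub -q qexp -l select=4:ncpus=16:mpiprocs=16 -I   # request 16 MPI processes per node (values assumed)
```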