MPICH2 programs use the MPD daemon or an SSH connection to spawn processes; no PBS support is needed. However, a PBS allocation is required to access the compute nodes.
### Basic Usage
...
...
In this example, we allocate 4 nodes via the express queue interactively. We set up the Intel MPI environment and interactively run the helloworld_mpi.x program, requesting MPI to spawn one process per node.
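A session of this shape is a minimal sketch of the steps above; the queue name (`qexp`), core count per node, and module name (`impi`) are assumptions and may differ on your cluster:

```shell
# Allocate 4 nodes interactively via the express queue,
# requesting one MPI process per node (mpiprocs=1)
$ qsub -q qexp -l select=4:ncpus=16:mpiprocs=1 -I

# Set up the Intel MPI environment (module name is illustrative)
$ module load impi

# Run the program; process count and placement are taken from the PBS allocation
$ mpirun ./helloworld_mpi.x
```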
Note that the executable helloworld_mpi.x must be available at the same path on all nodes. This is automatically satisfied on the /home and /scratch filesystems.
You need to preload the executable if running on the local scratch /lscratch filesystem:
In this example, we assume the executable helloworld_mpi.x is present in the shared home directory. We run the `cp` command via `mpirun`, copying the executable from the shared home to the local scratch. The second `mpirun` then executes the binary from the /lscratch/15210.srv11 directory on nodes cn17, cn108, cn109, and cn110, one process per node.
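The preload step described above might be sketched as follows. The job ID 15210.srv11 comes from the example; the home path is a placeholder, and the `-ppn 1` (processes per node) option is the Intel MPI Hydra spelling and may differ for other MPI launchers:

```shell
# Copy the executable to local scratch on every allocated node, one copy per node
$ mpirun -ppn 1 cp /home/username/helloworld_mpi.x /lscratch/15210.srv11/

# Execute the binary from local scratch, one process per node
$ mpirun -ppn 1 /lscratch/15210.srv11/helloworld_mpi.x
```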
!!! note
MPI process mapping may be controlled by PBS parameters.
...
...
Follow this example to run one MPI process per node with 16 threads per process. Note that no options to `mpirun` are needed.
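One way this could look, assuming 16-core nodes (the queue name, module name, and program name are illustrative): the `mpiprocs` and `ompthreads` select parameters tell PBS to export one MPI rank per node and set OMP_NUM_THREADS to 16, so `mpirun` needs no extra options.

```shell
# One MPI process per node, 16 OpenMP threads per process
$ qsub -q qprod -l select=4:ncpus=16:mpiprocs=1:ompthreads=16 -I

$ module load impi

# Placement and thread count are inherited from the PBS allocation
$ mpirun ./hybrid_mpi_omp.x
```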