Commit fe08afe1 authored by Jan Siwiec

Update running_openmpi.md

parent fe4d3762
Pipeline #11118 passed with stages in 23 minutes and 12 seconds
@@ -2,7 +2,7 @@
## OpenMPI Program Execution
The OpenMPI programs may be executed only via the PBS Workload manager, by entering an appropriate queue. On Anselm, the **bullxmpi-1.2.4.1** and **OpenMPI 1.6.5** are OpenMPI based MPI implementations. On Salomon, the **OpenMPI 1.8.6** is OpenMPI based MPI implementation.
The OpenMPI programs may be executed only via the PBS Workload manager, by entering an appropriate queue. On Anselm, the **bullxmpi-1.2.4.1** and **OpenMPI 1.6.5** are OpenMPI-based MPI implementations. On Salomon, the **OpenMPI 1.8.6** is an OpenMPI-based MPI implementation.
### Basic Usage
@@ -26,12 +26,12 @@ $ mpiexec -pernode ./helloworld_mpi.x
```
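The opening of this code block lies outside the hunk shown above. For context, a minimal interactive session leading up to the mpiexec call might look as follows; the qexp queue and the select=4:ncpus=16 resource request are assumptions taken from the later examples on this page:

```console
$ qsub -q qexp -l select=4:ncpus=16 -I   # assumed resource request, as in the examples below
$ ml OpenMPI                             # load the OpenMPI module
$ mpiexec -pernode ./helloworld_mpi.x    # one task per node, see the note below
```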
!!! note
Be aware, that in this example, the directive **-pernode** is used to run only **one task per node**, which is normally an unwanted behaviour (unless you want to run hybrid code with just one MPI and 16 OpenMP tasks per node). In normal MPI programs **omit the -pernode directive** to run up to 16 MPI tasks per each node.
In this example, the directive **-pernode** is used to run only **one task per node**, which is normally an unwanted behavior (unless you want to run hybrid code with just one MPI and 16 OpenMP tasks per node). In normal MPI programs, **omit the -pernode directive** to run up to 16 MPI tasks per node.
In this example, we allocate 4 nodes via the express queue interactively. We set up the openmpi environment and interactively run the helloworld_mpi.x program. Note that the executable helloworld_mpi.x must be available within the
In this example, we allocate 4 nodes via the express queue interactively. We set up the OpenMPI environment and interactively run the helloworld_mpi.x program. Note that the executable helloworld_mpi.x must be available within the
same path on all nodes. This is automatically fulfilled on the /home and /scratch filesystems.
You need to preload the executable, if running on the local scratch /lscratch filesystem
You need to preload the executable if running on the local scratch /lscratch filesystem:
```console
$ pwd
@@ -43,7 +43,7 @@ $ mpiexec -pernode --preload-binary ./helloworld_mpi.x
Hello world! from rank 3 of 4 on host cn110
```
In this example, we assume the executable helloworld_mpi.x is present on compute node cn17 on local scratch. We call the mpiexec whith the **--preload-binary** argument (valid for openmpi). The mpiexec will copy the executable from cn17 to the /lscratch/15210.srv11 directory on cn108, cn109 and cn110 and execute the program.
In this example, we assume the executable helloworld_mpi.x is present on compute node cn17 on local scratch. We call mpiexec with the **--preload-binary** argument (valid for OpenMPI). mpiexec will copy the executable from cn17 to the /lscratch/15210.srv11 directory on cn108, cn109, and cn110 and execute the program.
!!! note
MPI process mapping may be controlled by PBS parameters.
@@ -52,7 +52,7 @@ The mpiprocs and ompthreads parameters allow for selection of number of running
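As an illustration of these parameters (the values below are hypothetical and not part of the original text), a request for 4 MPI processes per node with 4 OpenMP threads each could be written as:

```console
$ qsub -q qexp -l select=4:ncpus=16:mpiprocs=4:ompthreads=4 -I
```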
### One MPI Process Per Node
Follow this example to run one MPI process per node, 16 threads per process (**on Salomon try 24 threads in following examples**).
Follow this example to run one MPI process per node, 16 threads per process (**on Salomon, try 24 threads in the following examples**).
```console
$ qsub -q qexp -l select=4:ncpus=16:mpiprocs=1:ompthreads=16 -I
@@ -60,7 +60,7 @@ $ ml OpenMPI
$ mpiexec --bind-to-none ./helloworld_mpi.x
```
In this example, we demonstrate recommended way to run an MPI application, using 1 MPI processes per node and 16 threads per socket, on 4 nodes.
In this example, we demonstrate the recommended way to run an MPI application, using 1 MPI process and 16 threads per node, on 4 nodes.
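A minimal sketch of the corresponding thread-count setting, in case the ompthreads value does not reach your shell environment (OMP_NUM_THREADS is the standard OpenMP variable; whether PBS exports it automatically is an assumption you should verify):

```console
$ export OMP_NUM_THREADS=16    # one process per node, 16 OpenMP threads
$ mpiexec --bind-to-none ./helloworld_mpi.x
```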
### Two MPI Processes Per Node
@@ -72,11 +72,11 @@ $ ml openmpi
$ mpiexec -bysocket -bind-to-socket ./helloworld_mpi.x
```
In this example, we demonstrate recommended way to run an MPI application, using 2 MPI processes per node and 8 threads per socket, each process and its threads bound to a separate processor socket of the node, on 4 nodes
In this example, we demonstrate the recommended way to run an MPI application, using 2 MPI processes per node and 8 threads per socket, each process and its threads bound to a separate processor socket of the node, on 4 nodes.
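The resulting socket binding can be inspected with the --report-bindings option, which is also used in the rankfile example further down this page; a minimal sketch:

```console
$ mpiexec -bysocket -bind-to-socket --report-bindings ./helloworld_mpi.x
```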
### 16 MPI Processes Per Node
Follow this example to run 16 MPI processes per node, 1 thread per process. Note the options to mpiexec.
Follow this example to run 16 MPI processes per node, 1 thread per process. Note the options to mpiexec:
```console
$ qsub -q qexp -l select=4:ncpus=16:mpiprocs=16:ompthreads=1 -I
@@ -84,14 +84,14 @@ $ ml OpenMPI
$ mpiexec -bycore -bind-to-core ./helloworld_mpi.x
```
In this example, we demonstrate recommended way to run an MPI application, using 16 MPI processes per node, single threaded. Each process is bound to separate processor core, on 4 nodes.
In this example, we demonstrate the recommended way to run an MPI application, using 16 MPI processes per node, single-threaded. Each process is bound to a separate processor core, on 4 nodes.
### OpenMP Thread Affinity
!!! note
Important! Bind every OpenMP thread to a core!
In the previous two examples with one or two MPI processes per node, the operating system might still migrate OpenMP threads between cores. You might want to avoid this by setting these environment variable for GCC OpenMP:
In the previous two examples with one or two MPI processes per node, the operating system might still migrate OpenMP threads between cores. You might want to avoid this by setting this environment variable for GCC OpenMP:
```console
$ export GOMP_CPU_AFFINITY="0-15"
@@ -103,7 +103,7 @@ or this one for Intel OpenMP:
$ export KMP_AFFINITY=granularity=fine,compact,1,0
```
As of OpenMP 4.0 (supported by GCC 4.9 and later and Intel 14.0 and later) the following variables may be used for Intel or GCC:
As of OpenMP 4.0 (supported by GCC 4.9 and later and Intel 14.0 and later), the following variables may be used for Intel or GCC:
```console
$ export OMP_PROC_BIND=true
@@ -114,11 +114,11 @@ $ export OMP_PLACES=cores
The mpiexec allows for precise selection of how the MPI processes will be mapped to the computational nodes and how these processes will bind to particular processor sockets and cores.
MPI process mapping may be specified by a hostfile or rankfile input to the mpiexec program. Altough all implementations of MPI provide means for process mapping and binding, following examples are valid for the openmpi only.
MPI process mapping may be specified by a hostfile or rankfile input to the mpiexec program. Although all implementations of MPI provide means for process mapping and binding, the following examples are valid for OpenMPI only.
### Hostfile
Example hostfile
Example hostfile:
```console
cn110.bullx
@@ -127,7 +127,7 @@ Example hostfile
cn17.bullx
```
Use the hostfile to control process placement
Use the hostfile to control process placement:
```console
$ mpiexec -hostfile hostfile ./helloworld_mpi.x
@@ -137,16 +137,16 @@ $ mpiexec -hostfile hostfile ./helloworld_mpi.x
Hello world! from rank 3 of 4 on host cn17
```
In this example, we see that ranks have been mapped on nodes according to the order in which nodes show in the hostfile
In this example, we see that ranks have been mapped to nodes according to the order in which the nodes are listed in the hostfile.
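The number of launched ranks may also be limited explicitly with the standard -np option; a minimal sketch combining it with the hostfile above (the rank count of 2 is an arbitrary illustration):

```console
$ mpiexec -np 2 -hostfile hostfile ./helloworld_mpi.x
```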
### Rankfile
Exact control of MPI process placement and resource binding is provided by specifying a rankfile
Exact control of MPI process placement and resource binding is provided by specifying a rankfile.
!!! note
Appropriate binding may boost performance of your application.
Example rankfile
Example rankfile:
```console
rank 0=cn110.bullx slot=1:0,1
@@ -156,12 +156,12 @@ Example rankfile
rank 4=cn109.bullx slot=0:*,1:*
```
This rankfile assumes 5 ranks will be running on 4 nodes and provides exact mapping and binding of the processes to the processor sockets and cores
This rankfile assumes 5 ranks will be running on 4 nodes and provides exact mapping and binding of the processes to the processor sockets and cores.
Explanation:
rank 0 will be bound to cn110, socket1 core0 and core1
rank 1 will be bounded to cn109, socket0, all cores
rank 2 will be bounded to cn108, socket1, core1 and core2
rank 1 will be bound to cn109, socket0 all cores
rank 2 will be bound to cn108, socket1 core1 and core2
rank 3 will be bound to cn17, socket0 core1, socket1 core0, core1, core2
rank 4 will be bound to cn109, all cores on both sockets
@@ -179,9 +179,9 @@ $ mpiexec -n 5 -rf rankfile --report-bindings ./helloworld_mpi.x
Hello world! from rank 2 of 5 on host cn108
```
In this example we run 5 MPI processes (5 ranks) on four nodes. The rankfile defines how the processes will be mapped on the nodes, sockets and cores. The **--report-bindings** option was used to print out the actual process location and bindings. Note that ranks 1 and 4 run on the same node and their core binding overlaps.
In this example, we run 5 MPI processes (5 ranks) on four nodes. The rankfile defines how the processes will be mapped on the nodes, sockets, and cores. The **--report-bindings** option was used to print out the actual process location and bindings. Note that ranks 1 and 4 run on the same node and their core binding overlaps.
It is users responsibility to provide correct number of ranks, sockets and cores.
It is the user's responsibility to provide the correct number of ranks, sockets, and cores.
### Bindings Verification