Commit 822256fb authored by Jan Siwiec's avatar Jan Siwiec

Merge branch 'omp-mpi' into 'master'

Omp mpi

See merge request !420
@@ -17,281 +17,5 @@ However, executing a huge number of jobs via the PBS queue may strain the system
1. A user is allowed to submit at most 100 jobs. Each job may be [a job array][1].
1. The array size is at most 1,000 subjobs.

!!! note
    A huge number of jobs may easily be submitted and managed as a job array.

[1]: job-arrays.md
[3]: hyperqueue.md
# HyperQueue
HyperQueue lets you build a computation plan consisting of a large number of tasks and then execute it transparently over a system like SLURM/PBS.
It dynamically groups tasks into PBS jobs and distributes them to fully utilize allocated nodes.
You thus do not have to manually aggregate your tasks into PBS jobs.
Find more about HyperQueue in its [documentation][a].
![](../img/hq-idea-s.png)
## Features
* **Transparent task execution on top of a Slurm/PBS cluster**
* Automatic task distribution amongst jobs, nodes, and cores
* Automatic submission of PBS/Slurm jobs
* **Dynamic load balancing across jobs**
* Work-stealing scheduler
* NUMA-aware, core planning, task priorities, task arrays
* Nodes and tasks may be added/removed on the fly
* **Scalable**
* Low overhead per task (~100μs)
* Handles hundreds of nodes and millions of tasks
* Output streaming avoids creating many files on network filesystems
* **Easy deployment**
* Single binary, no installation, depends only on *libc*
* No elevated privileges required
## Installation
* On Barbora and Karolina, you can simply load the HyperQueue module:
```console
$ ml HyperQueue
```
* If you want to install/compile HyperQueue manually, follow the steps on the [official webpage][b].
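Either way, you can quickly check that the `hq` binary is available in your environment:

```console
$ hq --version
```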
## Usage
### Starting the Server
To use HyperQueue, you first have to start the HyperQueue server. It is a long-lived process that
is supposed to be running on a login node. You can start it with the following command:
```console
$ hq server start
```
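Since the server is a long-lived process, it may be convenient to run it inside a terminal multiplexer such as `tmux` or `screen`, so that it keeps running after you disconnect from the login node; for example:

```console
$ tmux new -s hq-server
$ hq server start
```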
### Submitting Computation
Once the HyperQueue server is running, you can submit jobs into it. Here are a few examples of job submissions.
You can find more information in the [documentation][1].
* Submit a simple job (command `echo 'Hello world'` in this case)
```console
$ hq submit echo 'Hello world'
```
* Submit a job with 10000 tasks
```console
$ hq submit --array 1-10000 my-script.sh
```
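A minimal sketch of what such a `my-script.sh` could look like; the script name, `myprog.x`, and the `tasklist` file are only illustrative, and the task index is taken from the `HQ_TASK_ID` environment variable that HyperQueue sets for each task:

```bash
#!/bin/bash
# Illustrative per-task script: select the input file matching this task's index.
INPUT=$(sed -n "${HQ_TASK_ID}p" tasklist)

# run the (hypothetical) solver on the selected input
./myprog.x < "$INPUT" > "output.${HQ_TASK_ID}"
```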
Once you start some jobs, you can observe their status using the following commands:
```console
# Display status of a single job
$ hq job <job-id>
# Display status of all jobs
$ hq jobs
```
!!! important
    Before the jobs can start executing, you have to provide HyperQueue with some computational resources.
### Providing Computational Resources
Before HyperQueue can execute your jobs, it needs to have access to some computational resources.
You can provide these by starting HyperQueue *workers* which connect to the server and execute your jobs.
The workers should run on computing nodes; therefore, they should be started inside PBS jobs.
There are two ways of providing computational resources.
* **Allocate PBS jobs automatically**
HyperQueue can automatically submit PBS jobs with workers on your behalf. This system is called
[automatic allocation][c]. After the server is started, you can add a new automatic allocation
queue using the `hq alloc add` command:
```console
$ hq alloc add pbs -- -qqprod -AAccount1
```
After you run this command, HQ will automatically start submitting PBS jobs on your behalf
once some HQ jobs are submitted.
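  You can inspect the allocation queues that have been created, together with their parameters, using the `hq alloc list` command:

  ```console
  $ hq alloc list
  ```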
* **Manually start PBS jobs with HQ workers**
With the following command, you can submit a PBS job that will start a single HQ worker which
will connect to a running HQ server.
```console
$ qsub <qsub-params> -- /bin/bash -l -c "$(which hq) worker start"
```
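  For example, a complete submission for one node could look as follows (the project ID, queue, and walltime are only illustrative; adjust them to your allocation):

  ```console
  $ qsub -A OPEN-00-00 -q qprod -l select=1,walltime=02:00:00 -- /bin/bash -l -c "$(which hq) worker start"
  ```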
!!! tip
    For debugging purposes, you can also start the worker, e.g., on a login node, simply by running
    `$ hq worker start`. Do not use such a worker for any long-running computations, though.
## Architecture
Here you can see the architecture of HyperQueue.
The user submits jobs to the server, which schedules them onto a set of workers running on compute nodes.
![](../img/hq-architecture.png)
[1]: https://it4innovations.github.io/hyperqueue/stable/jobs/jobs/
[a]: https://it4innovations.github.io/hyperqueue/stable/
[b]: https://it4innovations.github.io/hyperqueue/stable/installation/
[c]: https://it4innovations.github.io/hyperqueue/stable/deployment/allocation/
# Job Arrays
A job array is a compact representation of many jobs called subjobs. Subjobs share the same job script, and have the same values for all attributes and resources, with the following exceptions:
* each subjob has a unique index, `$PBS_ARRAY_INDEX`
* job identifiers of subjobs only differ by their indices
* the state of subjobs can differ (R, Q, etc.)
All subjobs within a job array have the same scheduling priority and are scheduled as independent jobs. An entire job array is submitted through a single `qsub` command and may be managed by the `qdel`, `qalter`, `qhold`, `qrls`, and `qsig` commands as a single job.
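For illustration, the whole array can be held and released again with single commands, using the array job ID (here the example ID used later on this page); this is only a sketch of the management commands mentioned above:

```console
$ qhold 12345[].dm2
$ qrls 12345[].dm2
```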
## Shared Jobscript
All subjobs in a job array use the very same single jobscript. Each subjob runs its own instance of the jobscript. The instances execute different work controlled by the `$PBS_ARRAY_INDEX` variable.
Example:
Assume we have 900 input files, each with a name beginning with "file" (e.g. file001, ..., file900). Assume we would like to run each of these input files through the myprog.x executable, each as a separate job.
First, we create a tasklist file (or subjobs list), listing all tasks (subjobs) - all input files in our example:
```console
$ find . -name 'file*' > tasklist
```
Then we create a jobscript:
```bash
#!/bin/bash
#PBS -A OPEN-00-00
#PBS -q qprod
#PBS -l select=1,walltime=02:00:00
# change to scratch directory
SCRDIR=/scratch/project/${PBS_ACCOUNT,,}/${USER}/${PBS_JOBID}
mkdir -p $SCRDIR
cd $SCRDIR || exit
# get individual tasks from tasklist with index from PBS JOB ARRAY
TASK=$(sed -n "${PBS_ARRAY_INDEX}p" $PBS_O_WORKDIR/tasklist)
# copy input file and executable to scratch
cp $PBS_O_WORKDIR/$TASK input
cp $PBS_O_WORKDIR/myprog.x .
# execute the calculation
./myprog.x < input > output
# copy output file to submit directory
cp output $PBS_O_WORKDIR/$TASK.out
```
In this example, the submit directory contains the 900 input files, the myprog.x executable, and the jobscript file. As the input for each run, we take the filename of the input file from the created tasklist file. We copy the input file to the scratch directory `$SCRDIR`, execute myprog.x, and copy the output file back to the submit directory under the `$TASK.out` name. The myprog.x executable runs on one node only and must use threads to run in parallel. Be aware that if myprog.x **is not multithreaded**, then all the **jobs are run as single-thread programs in a sequential manner**. Due to the allocation of the whole node, the accounted time is equal to the usage of the whole node, while only 1/16 of the node is used.
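If myprog.x happens to be an OpenMP program (an assumption; the example above does not require it), the number of threads can be set explicitly in the jobscript before the run, for example:

```bash
# assumes an OpenMP-enabled myprog.x; use all cores of the allocated node
export OMP_NUM_THREADS=$(nproc)
./myprog.x < input > output
```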
If you need to run a huge number of parallel multicore jobs (in the sense of multinode, multithreaded, e.g. MPI-enabled jobs), a job array approach should still be used. The main difference, compared to the previous example using one node, is that the local scratch must not be used (as it is not shared between nodes) and MPI or other techniques for parallel multinode processing have to be used properly.
## Submitting the Job Array
To submit the job array, use the `qsub -J` command. The 900 jobs of the [example above][3] may be submitted like this:
```console
$ qsub -N JOBNAME -J 1-900 jobscript
506493[].isrv5
```
In this example, we submit a job array of 900 subjobs. Each subjob will run on one full node and is assumed to take less than 2 hours (note the #PBS directives at the beginning of the jobscript file; do not forget to set your valid PROJECT_ID and the desired queue).
Sometimes for testing purposes, you may need to submit a one-element only array. This is not allowed by PBSPro, but there is a workaround:
```console
$ qsub -N JOBNAME -J 9-10:2 jobscript
```
This will only choose the lower index (9 in this example) for submitting/running your job.
## Managing the Job Array
Check the status of the job array using the `qstat` command.
```console
$ qstat -a 12345[].dm2
dm2:
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
12345[].dm2     user2    qprod    xx          13516   1  16    --  00:50 B 00:02
```
When the status is B, it means that some subjobs are already running.
Check the status of the first 100 subjobs using the `qstat` command.
```console
$ qstat -a 12345[1-100].dm2
dm2:
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
12345[1].dm2    user2    qprod    xx          13516   1  16    --  00:50 R 00:02
12345[2].dm2    user2    qprod    xx          13516   1  16    --  00:50 R 00:02
12345[3].dm2    user2    qprod    xx          13516   1  16    --  00:50 R 00:01
12345[4].dm2    user2    qprod    xx          13516   1  16    --  00:50 Q   --
     .              .       .       .            .    .   .     .    .   .   .
     .              .       .       .            .    .   .     .    .   .   .
12345[100].dm2  user2    qprod    xx          13516   1  16    --  00:50 Q   --
```
Delete the entire job array. Running subjobs will be killed, queued subjobs will be deleted.
```console
$ qdel 12345[].dm2
```
Deleting large job arrays may take a while.
Display status information for all user's jobs, job arrays, and subjobs.
```console
$ qstat -u $USER -t
```
Display status information for all user's subjobs.
```console
$ qstat -u $USER -tJ
```
For more information on job arrays, see the [PBSPro Users guide][1].
## Examples
Download the examples in [capacity.zip][2], illustrating the ways listed above to run a huge number of jobs. We recommend trying out the examples before using this approach for running production jobs.
Unzip the archive in an empty directory on the cluster and follow the instructions in the README file:
```console
$ unzip capacity.zip
$ cat README
```
[1]: ../pbspro.md
[2]: capacity.zip
[3]: #shared-jobscript
# Parallel Runs Setting on Karolina
An important aspect of every parallel application is the correct placement of MPI processes
or threads onto the available hardware resources.
Since incorrect settings can cause significant degradation of performance,
all users should be familiar with the basic principles explained below.
First, a basic [hardware overview][1] is provided,
since it influences the settings of the `mpirun` command.
Then, placement is explained for the major MPI implementations [Intel MPI][2] and [Open MPI][3].
The last section describes appropriate placement for [memory bound][4] and [compute bound][5] applications.
## Hardware Overview
[Karolina][6] contains several types of nodes.
This documentation describes the basic hardware structure of the universal and accelerated nodes.
More technical details can be found in [this presentation][a].
### Universal Nodes
- 720 x 2 x AMD 7H12, 64 cores, 2.6 GHz
<table>
<tr>
<td rowspan="8">universal<br/>node</td>
<td rowspan="4">socket 0<br/> AMD 7H12</td>
<td>NUMA 0</td>
<td>2 x ch DDR4-3200</td>
<td>4 x 16MB L3</td>
<td>16 cores (4 cores / L3)</td>
</tr>
<tr>
<td>NUMA 1</td>
<td>2 x ch DDR4-3200</td>
<td>4 x 16MB L3</td>
<td>16 cores (4 cores / L3)</td>
</tr>
<tr>
<td>NUMA 2</td>
<td>2 x ch DDR4-3200</td>
<td>4 x 16MB L3</td>
<td>16 cores (4 cores / L3)</td>
</tr>
<tr>
<td>NUMA 3</td>
<td>2 x ch DDR4-3200</td>
<td>4 x 16MB L3</td>
<td>16 cores (4 cores / L3)</td>
</tr>
<tr>
<td rowspan="4">socket 1<br/> AMD 7H12</td>
<td>NUMA 4</td>
<td>2 x ch DDR4-3200</td>
<td>4 x 16MB L3</td>
<td>16 cores (4 cores / L3)</td>
</tr>
<tr>
<td>NUMA 5</td>
<td>2 x ch DDR4-3200</td>
<td>4 x 16MB L3</td>
<td>16 cores (4 cores / L3)</td>
</tr>
<tr>
<td>NUMA 6</td>
<td>2 x ch DDR4-3200</td>
<td>4 x 16MB L3</td>
<td>16 cores (4 cores / L3)</td>
</tr>
<tr>
<td>NUMA 7</td>
<td>2 x ch DDR4-3200</td>
<td>4 x 16MB L3</td>
<td>16 cores (4 cores / L3)</td>
</tr>
</table>
### Accelerated Nodes
- 72 x 2 x AMD 7763, 64 cores, 2.45 GHz
- 72 x 8 x NVIDIA A100 GPU
<table>
<tr>
<td rowspan="8">accelerated<br/>node</td>
<td rowspan="4">socket 0<br/> AMD 7763</td>
<td>NUMA 0</td>
<td>2 x ch DDR4-3200</td>
<td>2 x 32MB L3</td>
<td>16 cores (8 cores / L3)</td>
<td></td>
</tr>
<tr>
<td>NUMA 1</td>
<td>2 x ch DDR4-3200</td>
<td>2 x 32MB L3</td>
<td>16 cores (8 cores / L3)</td>
<td>2 x A100 </td>
</tr>
<tr>
<td>NUMA 2</td>
<td>2 x ch DDR4-3200</td>
<td>2 x 32MB L3</td>
<td>16 cores (8 cores / L3)</td>
<td></td>
</tr>
<tr>
<td>NUMA 3</td>
<td>2 x ch DDR4-3200</td>
<td>2 x 32MB L3</td>
<td>16 cores (8 cores / L3)</td>
<td>2 x A100 </td>
</tr>
<tr>
<td rowspan="4">socket 1<br/> AMD 7763</td>
<td>NUMA 4</td>
<td>2 x ch DDR4-3200</td>
<td>2 x 32MB L3</td>
<td>16 cores (8 cores / L3)</td>
<td></td>
</tr>
<tr>
<td>NUMA 5</td>
<td>2 x ch DDR4-3200</td>
<td>2 x 32MB L3</td>
<td>16 cores (8 cores / L3)</td>
<td>2 x A100 </td>
</tr>
<tr>
<td>NUMA 6</td>
<td>2 x ch DDR4-3200</td>
<td>2 x 32MB L3</td>
<td>16 cores (8 cores / L3)</td>
<td></td>
</tr>
<tr>
<td>NUMA 7</td>
<td>2 x ch DDR4-3200</td>
<td>2 x 32MB L3</td>
<td>16 cores (8 cores / L3)</td>
<td>2 x A100 </td>
</tr>
</table>
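If you want to verify this layout yourself, the NUMA topology can be inspected directly on an allocated node with standard Linux tools (assuming `numactl` is installed, which is typical on HPC systems):

```console
$ lscpu | grep -i numa
$ numactl --hardware
```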
## Assigning Processes / Threads to Particular Hardware
When an application is started, the operating system maps MPI processes and threads to particular cores.
This mapping is not fixed; the system is allowed to move your application to other cores.
Inappropriate mapping or frequent migration can lead to significant degradation
of your application's performance.
Hence, a user should:
- set **mapping** according to their application needs;
- **pin** the application to particular hardware resources.
Settings can be described by environment variables that are briefly described on [HPC wiki][b].
However, mapping and pinning are highly non-portable.
They depend on the particular system and the MPI library used.
The following sections describe settings for the Karolina cluster.
The number of MPI processes per node should be set by PBS via the [`qsub`][7] command.
Mapping and pinning are set differently for [Intel MPI](#intel-mpi) and [Open MPI](#open-mpi).
## Open MPI
In the case of Open MPI, mapping can be set by the parameter `--map-by`.
Pinning can be set by the parameter `--bind-to`.
The list of all available options can be found [here](https://www-lb.open-mpi.org/doc/v4.1/man1/mpirun.1.php#sect6).
The most relevant options are:
- bind-to: core, l3cache, numa, socket
- map-by: core, l3cache, numa, socket, slot
Mapping and pinning to, for example, L3 cache can be set by the `mpirun` command in the following way:
```
mpirun -n 32 --map-by l3cache --bind-to l3cache ./app
```
Both parameters can be also set by environment variables:
```
export OMPI_MCA_rmaps_base_mapping_policy=l3cache
export OMPI_MCA_hwloc_base_binding_policy=l3cache
mpirun -n 32 ./app
```
## Intel MPI
In the case of Intel MPI, mapping and pinning can be set by environment variables
that are described [on Intel's Developer Reference][c].
The most important variable is `I_MPI_PIN_DOMAIN`.
It denotes the number of cores allocated for each MPI process
and specifies both mapping and pinning.
The default setting is `I_MPI_PIN_DOMAIN=auto:compact`.
It computes the number of cores allocated to each MPI process
from the number of available cores and the requested number of MPI processes
(total cores / requested MPI processes).
This is usually the optimal setting, and the majority of applications can be run
with a simple `mpirun -n N ./app` command, where `N` denotes the number of MPI processes.
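If you need a different domain size, `I_MPI_PIN_DOMAIN` can also be set to an explicit number of cores per MPI process. For example, the following illustrative setting pins each of 8 MPI processes to a 16-core domain, i.e., one NUMA domain of the universal node:

```
export I_MPI_PIN_DOMAIN=16
mpirun -n 8 ./app
```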
### Examples of Placement to Different Hardware
Let us have a job allocated by the following `qsub`:
```console
qsub -l select=2:ncpus=128:mpiprocs=4:ompthreads=4
```
Then the following table shows placement of `app` started
with 8 MPI processes on the universal node for various mappings and pinnings:
<table style="text-align: center">
<tr>
<th style="text-align: center" colspan="2">Open MPI</th>
<th style="text-align: center">Intel MPI</th>
<th style="text-align: center">node</th>
<th style="text-align: center" colspan="4">0</th>
<th style="text-align: center" colspan="4">1</th>
</tr>
<tr>
<th style="text-align: center">map-by</th>
<th style="text-align: center">bind-to</th>
<th style="text-align: center">I_MPI_PIN_DOMAIN</th>
<th style="text-align: center">rank</th>
<th style="text-align: center">0</th>
<th style="text-align: center">1</th>
<th style="text-align: center">2</th>
<th style="text-align: center">3</th>
<th style="text-align: center">4</th>
<th style="text-align: center">5</th>
<th style="text-align: center">6</th>
<th style="text-align: center">7</th>
</tr>
<tr>
<td rowspan="3">socket</td>
<td rowspan="3">socket</td>
<td rowspan="3">socket</td>
<td>socket</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>numa</td>
<td>0-3</td>
<td>4-7</td>
<td>0-3</td>
<td>4-7</td>
<td>0-3</td>
<td>4-7</td>
<td>0-3</td>
<td>4-7</td>
</tr>
<tr>
<td>cores</td>
<td>0-63</td>
<td>64-127</td>
<td>0-63</td>
<td>64-127</td>
<td>0-63</td>
<td>64-127</td>
<td>0-63</td>
<td>64-127</td>
</tr>
<tr>
<td rowspan="3">numa</td>
<td rowspan="3">numa</td>
<td rowspan="3">numa</td>
<td>socket</td>
<td colspan="4">0</td>
<td colspan="4">0</td>
</tr>
<tr>
<td>numa</td>
<td>0</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>0</td>
<td>1</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<td>cores</td>
<td>0-15</td>
<td>16-31</td>
<td>32-47</td>
<td>48-63</td>
<td>0-15</td>
<td>16-31</td>
<td>32-47</td>
<td>48-63</td>
</tr>
<tr>
<td rowspan="3">l3cache</td>
<td rowspan="3">l3cache</td>
<td rowspan="3"><s>cache3</s></td>
<td>socket</td>
<td colspan="4">0</td>
<td colspan="4">0</td>
</tr>
<tr>
<td>numa</td>
<td colspan="4">0</td>
<td colspan="4">0</td>
</tr>
<tr>
<td>cores</td>
<td>0-3</td>
<td>4-7</td>
<td>8-11</td>
<td>12-15</td>
<td>0-3</td>
<td>4-7</td>
<td>8-11</td>
<td>12-15</td>
</tr>
<tr>
<td rowspan="3">slot:pe=32</td>
<td rowspan="3">core</td>
<td rowspan="3">32</td>
<td>socket</td>
<td colspan="2">0</td>
<td colspan="2">1</td>
<td colspan="2">0</td>
<td colspan="2">1</td>
</tr>
<tr>
<td>numa</td>
<td>0-1</td>
<td>2-3</td>
<td>4-5</td>
<td>6-7</td>
<td>0-1</td>
<td>2-3</td>
<td>4-5</td>
<td>6-7</td>
</tr>
<tr>
<td>cores</td>
<td>0-31</td>
<td>32-63</td>
<td>64-95</td>
<td>96-127</td>
<td>0-31</td>
<td>32-63</td>
<td>64-95</td>
<td>96-127</td>
</tr>
</table>
We can see from the above table that mapping starts from the first node.
When the first node is fully occupied
(according to the number of MPI processes per node specified by `qsub`),
mapping continues to the second node, etc.
We note that in the case of `--map-by numa` and `--map-by l3cache`,
the application is not spawned across the whole node.
For utilization of a whole node, more MPI processes per node should be used.
In addition, `I_MPI_PIN_DOMAIN=cache3` maps processes incorrectly.
The last mapping (`--map-by slot:pe=32` or `I_MPI_PIN_DOMAIN=32`) is the most general one.
In this way, a user can directly specify the number of cores for each MPI process
independently of the hardware specification.
## Memory Bound Applications
The performance of memory bound applications depends on the memory throughput.
Hence, it is optimal to use the number of cores equal to the number of memory channels;
i.e., 16 cores per node (see the tables with the hardware description at the top of this document).
Running your memory bound application on more than 16 cores can cause lower performance.
Two MPI processes must be assigned to each NUMA domain in order to fully utilize the memory bandwidth.
It can be achieved by the following commands (for a single node):
- Intel MPI: `mpirun -n 16 ./app`
- Open MPI: `mpirun -n 16 --map-by slot:pe=8 ./app`
Intel MPI automatically places MPI processes on every 8th core.
In the case of Open MPI, the `--map-by` parameter must be used.
The required mapping can be achieved, for example, by `--map-by slot:pe=8`,
which maps MPI processes to every 8th core (in the same way as Intel MPI).
This mapping also ensures that each MPI process is assigned to a different L3 cache.
## Compute Bound Applications
For compute bound applications, it is optimal to use as many cores as possible, i.e., 128 cores per node.
The following command can be used:
- Intel MPI: `mpirun -n 128 ./app`
- Open MPI: `mpirun -n 128 --map-by core --bind-to core ./app`
Pinning ensures that the operating system does not migrate MPI processes among cores.
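To verify where the processes actually end up, Open MPI can print the resulting bindings via the `--report-bindings` option (with Intel MPI, similar information is printed when `I_MPI_DEBUG` is set to 4 or higher), for example:

```
mpirun -n 128 --map-by core --bind-to core --report-bindings ./app
```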
## Finding Optimal Setting for Your Application
Sometimes it is not clear what the best setting for your application is.
In that case, you should test your application with a different number of MPI processes.
A good practice is to test your application with 16-128 MPI processes per node
and measure the time required to finish the computation.
With Intel MPI, it is enough to start your application with the required number of MPI processes.
For Open MPI, you can specify mapping in the following way:
```
mpirun -n 16 --map-by slot:pe=8 --bind-to core ./app
mpirun -n 32 --map-by slot:pe=4 --bind-to core ./app
mpirun -n 64 --map-by slot:pe=2 --bind-to core ./app
mpirun -n 128 --map-by core --bind-to core ./app
```
[1]: #hardware-overview
[2]: #intel-mpi
[3]: #open-mpi
[4]: #memory-bound-applications
[5]: #compute-bound-applications
[6]: ../karolina/introduction.md
[7]: job-submission-and-execution.md
[a]: https://events.it4i.cz/event/123/attachments/417/1578/Technical%20features%20and%20the%20use%20of%20Karolina%20GPU%20accelerated%20partition.pdf
[b]: https://hpc-wiki.info/hpc/Binding/Pinning
[c]: https://www.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-linux/top/environment-variable-reference.html
@@ -80,7 +80,11 @@ nav:
  - Resource Accounting Policy: general/resource-accounting.md
  - Job Priority: general/job-priority.md
  - Job Submission and Execution: general/job-submission-and-execution.md
  - Capacity Computing:
    - Introduction: general/capacity-computing.md
    - Job Arrays: general/job-arrays.md
    - HyperQueue: general/hyperqueue.md
    - Parallel Computing and MPI: general/karolina-mpi.md
  - Vnode Allocation: general/vnode-allocation.md
  - Migrating from SLURM: general/slurmtopbs.md
  - Technical Information: