!!! warning
    This page has not been updated yet. The page does not reflect the transition from PBS to Slurm.
# Parallel Runs Setting on Karolina
An important aspect of every parallel application is the correct placement of MPI processes
or threads onto the available hardware resources.
Since incorrect settings can cause significant performance degradation,
all users should be familiar with the basic principles explained below.
First, a basic [hardware overview][1] is provided,
since it influences the settings of the `mpirun` command.
Then, process placement is explained for the major MPI implementations [Intel MPI][2] and [Open MPI][3].
The last section describes appropriate placement for [memory bound][4] and [compute bound][5] applications.
## Hardware Overview
[Karolina][6] contains several types of nodes.
This section describes the basic hardware structure of the universal and accelerated nodes.
More technical details can be found in [this presentation][a].
### Universal Nodes
- 720 x 2 x AMD 7H12, 64 cores, 2.6 GHz
<table>
<tr>
<td rowspan="8">universal<br/>node</td>
<td rowspan="4">socket 0<br/> AMD 7H12</td>
<td>NUMA 0</td>
<td>2 x ch DDR4-3200</td>
<td>4 x 16MB L3</td>
<td>16 cores (4 cores / L3)</td>
</tr>
<tr>
<td>NUMA 1</td>
<td>2 x ch DDR4-3200</td>
<td>4 x 16MB L3</td>
<td>16 cores (4 cores / L3)</td>
</tr>
<tr>
<td>NUMA 2</td>
<td>2 x ch DDR4-3200</td>
<td>4 x 16MB L3</td>
<td>16 cores (4 cores / L3)</td>
</tr>
<tr>
<td>NUMA 3</td>
<td>2 x ch DDR4-3200</td>
<td>4 x 16MB L3</td>
<td>16 cores (4 cores / L3)</td>
</tr>
<tr>
<td rowspan="4">socket 1<br/> AMD 7H12</td>
<td>NUMA 4</td>
<td>2 x ch DDR4-3200</td>
<td>4 x 16MB L3</td>
<td>16 cores (4 cores / L3)</td>
</tr>
<tr>
<td>NUMA 5</td>
<td>2 x ch DDR4-3200</td>
<td>4 x 16MB L3</td>
<td>16 cores (4 cores / L3)</td>
</tr>
<tr>
<td>NUMA 6</td>
<td>2 x ch DDR4-3200</td>
<td>4 x 16MB L3</td>
<td>16 cores (4 cores / L3)</td>
</tr>
<tr>
<td>NUMA 7</td>
<td>2 x ch DDR4-3200</td>
<td>4 x 16MB L3</td>
<td>16 cores (4 cores / L3)</td>
</tr>
</table>
### Accelerated Nodes
- 72 x 2 x AMD 7763, 64 cores, 2.45 GHz
- 72 x 8 x NVIDIA A100 GPU
<table>
<tr>
<td rowspan="8">accelerated<br/>node</td>
<td rowspan="4">socket 0<br/> AMD 7763</td>
<td>NUMA 0</td>
<td>2 x ch DDR4-3200</td>
<td>2 x 32MB L3</td>
<td>16 cores (8 cores / L3)</td>
<td></td>
</tr>
<tr>
<td>NUMA 1</td>
<td>2 x ch DDR4-3200</td>
<td>2 x 32MB L3</td>
<td>16 cores (8 cores / L3)</td>
<td>2 x A100 </td>
</tr>
<tr>
<td>NUMA 2</td>
<td>2 x ch DDR4-3200</td>
<td>2 x 32MB L3</td>
<td>16 cores (8 cores / L3)</td>
<td></td>
</tr>
<tr>
<td>NUMA 3</td>
<td>2 x ch DDR4-3200</td>
<td>2 x 32MB L3</td>
<td>16 cores (8 cores / L3)</td>
<td>2 x A100 </td>
</tr>
<tr>
<td rowspan="4">socket 1<br/> AMD 7763</td>
<td>NUMA 4</td>
<td>2 x ch DDR4-3200</td>
<td>2 x 32MB L3</td>
<td>16 cores (8 cores / L3)</td>
<td></td>
</tr>
<tr>
<td>NUMA 5</td>
<td>2 x ch DDR4-3200</td>
<td>2 x 32MB L3</td>
<td>16 cores (8 cores / L3)</td>
<td>2 x A100 </td>
</tr>
<tr>
<td>NUMA 6</td>
<td>2 x ch DDR4-3200</td>
<td>2 x 32MB L3</td>
<td>16 cores (8 cores / L3)</td>
<td></td>
</tr>
<tr>
<td>NUMA 7</td>
<td>2 x ch DDR4-3200</td>
<td>2 x 32MB L3</td>
<td>16 cores (8 cores / L3)</td>
<td>2 x A100 </td>
</tr>
</table>
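The layout summarized in the tables above can also be inspected directly on an allocated node; a minimal sketch, assuming the standard `numactl` and hwloc (`lstopo`) tools are available there:
```
numactl --hardware   # NUMA domains with their cores and memory sizes
lstopo --no-io       # full topology, including the L3 cache groups
```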
## Assigning Processes / Threads to Particular Hardware
When an application is started, the operating system maps MPI processes and threads to particular cores.
This mapping is not fixed: the system is allowed to migrate your application to other cores.
Inappropriate mapping or frequent migration can lead to significant degradation
of your application's performance.
Hence, a user should:
- set the **mapping** according to the application's needs;
- **pin** the application to particular hardware resources.
These settings are controlled by environment variables, which are briefly described on the [HPC wiki][b].
However, mapping and pinning are highly non-portable;
they depend on the particular system and the MPI library used.
The following sections describe the settings for the Karolina cluster.
The number of MPI processes per node should be set by PBS via the [`qsub`][7] command.
Mapping and pinning are set differently for [Intel MPI](#intel-mpi) and [Open MPI](#open-mpi).
## Open MPI
In the case of Open MPI, mapping can be set by the parameter `--map-by`.
Pinning can be set by the parameter `--bind-to`.
The list of all available options can be found [here](https://www-lb.open-mpi.org/doc/v4.1/man1/mpirun.1.php#sect6).
The most relevant options are:
- `--bind-to`: core, l3cache, numa, socket
- `--map-by`: core, l3cache, numa, socket, slot
For example, mapping and pinning to the L3 cache can be set on the `mpirun` command line as follows:
```
mpirun -n 32 --map-by l3cache --bind-to l3cache ./app
```
Both parameters can also be set via environment variables:
```
export OMPI_MCA_rmaps_base_mapping_policy=l3cache
export OMPI_MCA_hwloc_base_binding_policy=l3cache
mpirun -n 32 ./app
```
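To verify that the requested placement was actually applied, `mpirun` can print the binding of each rank at startup via the standard `--report-bindings` option; a minimal sketch (`./app` is a placeholder for your binary):
```
# prints one line per rank showing the cores it has been bound to
mpirun -n 32 --map-by l3cache --bind-to l3cache --report-bindings ./app
```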
## Intel MPI
In the case of Intel MPI, mapping and pinning can be set by environment variables
that are described in [Intel's Developer Reference][c].
The most important variable is `I_MPI_PIN_DOMAIN`.
It denotes the number of cores allocated to each MPI process
and specifies both mapping and pinning.
The default setting is `I_MPI_PIN_DOMAIN=auto:compact`.
It computes the number of cores allocated to each MPI process
from the number of available cores and the requested number of MPI processes
(total cores / requested MPI processes).
This is usually the optimal setting, and the majority of applications can be run
with a simple `mpirun -n N ./app` command, where `N` denotes the number of MPI processes.
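If a different layout is needed, the domain can be set explicitly. A minimal sketch, assuming one MPI process per NUMA domain is desired on a single node; setting `I_MPI_DEBUG` to 4 or higher should also make Intel MPI print the resulting pinning at startup:
```
export I_MPI_PIN_DOMAIN=numa   # one domain (and hence one MPI process) per NUMA node
export I_MPI_DEBUG=4           # report the pinning chosen by Intel MPI
mpirun -n 8 ./app
```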
### Examples of Placement to Different Hardware
Let us have a job allocated by the following `qsub` command:
```console
qsub -l select=2:ncpus=128:mpiprocs=4:ompthreads=4
```
The following table then shows the placement of `app` started
with 8 MPI processes on universal nodes for various mapping and pinning settings:
<table style="text-align: center">
<tr>
<th style="text-align: center" colspan="2">Open MPI</th>
<th style="text-align: center">Intel MPI</th>
<th style="text-align: center">node</th>
<th style="text-align: center" colspan="4">0</th>
<th style="text-align: center" colspan="4">1</th>
</tr>
<tr>
<th style="text-align: center">map-by</th>
<th style="text-align: center">bind-to</th>
<th style="text-align: center">I_MPI_PIN_DOMAIN</th>
<th style="text-align: center">rank</th>
<th style="text-align: center">0</th>
<th style="text-align: center">1</th>
<th style="text-align: center">2</th>
<th style="text-align: center">3</th>
<th style="text-align: center">4</th>
<th style="text-align: center">5</th>
<th style="text-align: center">6</th>
<th style="text-align: center">7</th>
</tr>
<tr>
<td rowspan="3">socket</td>
<td rowspan="3">socket</td>
<td rowspan="3">socket</td>
<td>socket</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>numa</td>
<td>0-3</td>
<td>4-7</td>
<td>0-3</td>
<td>4-7</td>
<td>0-3</td>
<td>4-7</td>
<td>0-3</td>
<td>4-7</td>
</tr>
<tr>
<td>cores</td>
<td>0-63</td>
<td>64-127</td>
<td>0-63</td>
<td>64-127</td>
<td>0-63</td>
<td>64-127</td>
<td>0-63</td>
<td>64-127</td>
</tr>
<tr>
<td rowspan="3">numa</td>
<td rowspan="3">numa</td>
<td rowspan="3">numa</td>
<td>socket</td>
<td colspan="4">0</td>
<td colspan="4">0</td>
</tr>
<tr>
<td>numa</td>
<td>0</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>0</td>
<td>1</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<td>cores</td>
<td>0-15</td>
<td>16-31</td>
<td>32-47</td>
<td>48-63</td>
<td>0-15</td>
<td>16-31</td>
<td>32-47</td>
<td>48-63</td>
</tr>
<tr>
<td rowspan="3">l3cache</td>
<td rowspan="3">l3cache</td>
<td rowspan="3"><s>cache3</s></td>
<td>socket</td>
<td colspan="4">0</td>
<td colspan="4">0</td>
</tr>
<tr>
<td>numa</td>
<td colspan="4">0</td>
<td colspan="4">0</td>
</tr>
<tr>
<td>cores</td>
<td>0-3</td>
<td>4-7</td>
<td>8-11</td>
<td>12-15</td>
<td>0-3</td>
<td>4-7</td>
<td>8-11</td>
<td>12-15</td>
</tr>
<tr>
<td rowspan="3">slot:pe=32</td>
<td rowspan="3">core</td>
<td rowspan="3">32</td>
<td>socket</td>
<td colspan="2">0</td>
<td colspan="2">1</td>
<td colspan="2">0</td>
<td colspan="2">1</td>
</tr>
<tr>
<td>numa</td>
<td>0-1</td>
<td>2-3</td>
<td>4-5</td>
<td>6-7</td>
<td>0-1</td>
<td>2-3</td>
<td>4-5</td>
<td>6-7</td>
</tr>
<tr>
<td>cores</td>
<td>0-31</td>
<td>32-63</td>
<td>64-95</td>
<td>96-127</td>
<td>0-31</td>
<td>32-63</td>
<td>64-95</td>
<td>96-127</td>
</tr>
</table>
We can see from the above table that mapping starts from the first node.
When the first node is fully occupied
(according to the number of MPI processes per node specified by `qsub`),
mapping continues on the second node, etc.
Note that in the case of `--map-by numa` and `--map-by l3cache`,
the application is not spread across the whole node.
To utilize a whole node, more MPI processes per node have to be used.
In addition, `I_MPI_PIN_DOMAIN=cache3` maps processes incorrectly.
The last mapping (`--map-by slot:pe=32` or `I_MPI_PIN_DOMAIN=32`) is the most general one.
In this way, a user can directly specify the number of cores for each MPI process,
independently of the hardware layout.
## Memory Bound Applications
The performance of memory bound applications depends on the achievable memory throughput.
Hence, it is optimal to use a number of cores equal to the number of memory channels,
i.e., 16 cores per node (see the tables with the hardware description at the top of this document).
Running your memory bound application on more than 16 cores per node can lower its performance.
Two MPI processes must be assigned to each NUMA domain in order to fully utilize the memory bandwidth.
This can be achieved by the following commands (for a single node):
- Intel MPI: `mpirun -n 16 ./app`
- Open MPI: `mpirun -n 16 --map-by slot:pe=8 ./app`
Intel MPI automatically places one MPI process on every 8th core.
In the case of Open MPI, the `--map-by` parameter must be used.
The required mapping can be achieved, for example, by `--map-by slot:pe=8`,
which places one MPI process on every 8th core (in the same way as Intel MPI).
This mapping also ensures that each MPI process is assigned to a different L3 cache.
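This can be checked directly with Open MPI's binding report; a minimal sketch for a single node (`--report-bindings` is a standard `mpirun` option, `./app` is a placeholder):
```
# 16 ranks, 8 cores apart; the report shows consecutive ranks landing
# in different L3 caches and NUMA domains
mpirun -n 16 --map-by slot:pe=8 --bind-to core --report-bindings ./app
```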
## Compute Bound Applications
For compute bound applications, it is optimal to use as many cores as possible, i.e., 128 cores per node.
The following commands can be used:
- Intel MPI: `mpirun -n 128 ./app`
- Open MPI: `mpirun -n 128 --map-by core --bind-to core ./app`
Pinning ensures that the operating system does not migrate MPI processes between cores.
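The same placement can also be expressed with the Open MPI environment variables shown earlier, which may be more convenient in batch scripts; a minimal sketch:
```
export OMPI_MCA_rmaps_base_mapping_policy=core
export OMPI_MCA_hwloc_base_binding_policy=core
mpirun -n 128 ./app
```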
## Finding Optimal Setting for Your Application
Sometimes it is not clear what the best settings for your application are.
In that case, you should test your application with different numbers of MPI processes.
A good practice is to test your application with 16-128 MPI processes per node
and measure the time required to finish the computation.
With Intel MPI, it is enough to start your application with the required number of MPI processes.
With Open MPI, you can specify the mapping in the following way:
```
mpirun -n 16 --map-by slot:pe=8 --bind-to core ./app
mpirun -n 32 --map-by slot:pe=4 --bind-to core ./app
mpirun -n 64 --map-by slot:pe=2 --bind-to core ./app
mpirun -n 128 --map-by core --bind-to core ./app
```
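The four runs above can also be scripted; a minimal sketch for a single 128-core node, assuming a generic `./app` binary and timing each run with `/usr/bin/time`:
```
#!/bin/bash
# Time the application with 16, 32, 64, and 128 MPI processes per node (Open MPI).
for n in 16 32 64 128; do
  pe=$((128 / n))                  # cores available per MPI process
  if [ "$pe" -gt 1 ]; then
    map="slot:pe=${pe}"
  else
    map="core"                     # 128 processes: one core per process
  fi
  echo "== ${n} MPI processes, --map-by ${map} =="
  /usr/bin/time -p mpirun -n "${n}" --map-by "${map}" --bind-to core ./app
done
```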
[1]: #hardware-overview
[2]: #intel-mpi
[3]: #open-mpi
[4]: #memory-bound-applications
[5]: #compute-bound-applications
[6]: ../karolina/introduction.md
[7]: job-submission-and-execution.md
[a]: https://events.it4i.cz/event/123/attachments/417/1578/Technical%20features%20and%20the%20use%20of%20Karolina%20GPU%20accelerated%20partition.pdf
[b]: https://hpc-wiki.info/hpc/Binding/Pinning
[c]: https://www.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-linux/top/environment-variable-reference.html