Commit af5d7f62 authored by Branislav Jansik

Edit capacity-computing.md

parent 7af70a88
@@ -8,16 +8,18 @@ achieving the best runtime, throughput, and computer utilization. This is called
However, executing a huge number of jobs via the Slurm queue may strain the system. This strain may
result in slow response to commands, inefficient scheduling, and overall degradation of performance
and user experience for all users. We **recommend** using [**Job arrays**][1] or [**HyperQueue**][2] to execute many jobs.
There are two primary scenarios:

1. Number of jobs < 1500, **and** the jobs are able to utilize one or more full nodes:
   Use [**Job arrays**][1], as sketched below.
   A job array allows you to submit and control many jobs (tasks) in one packet. Several job arrays may be submitted.
2. Number of jobs >> 1500, **or** the jobs only utilize a few cores each:
   Use [**HyperQueue**][2], as sketched after this list.
   HyperQueue can efficiently load-balance a very large number of (small) jobs among the available compute nodes.
   HyperQueue may also be used if you have dependencies among the jobs.
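
A minimal Job array sketch for scenario 1, assuming a program `myprog` with numbered input files; the job name, array range, and file names are illustrative, not part of this commit:

```bash
#!/usr/bin/env bash
#SBATCH --job-name=array_example
#SBATCH --array=1-100    # submit and control 100 tasks as one packet
#SBATCH --nodes=1        # each task utilizes one full node
#SBATCH --time=01:00:00

# Slurm sets SLURM_ARRAY_TASK_ID to this task's index (1..100);
# use it to pick this task's input, e.g. input.1 ... input.100
./myprog input.${SLURM_ARRAY_TASK_ID}
```

The whole array is submitted with a single `sbatch array_example.sh` and can be monitored or cancelled through one job ID.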
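
A minimal HyperQueue sketch for scenario 2; `task.sh` is a placeholder, and exact flags can differ between HyperQueue versions, so consult the HyperQueue documentation for your installed release:

```console
$ hq server start &                    # start the HyperQueue server, e.g. on a login node
$ sbatch --wrap "hq worker start"      # provide a worker from a Slurm compute node
$ hq submit --array 1-5000 ./task.sh   # one job with 5000 small tasks, load-balanced over workers
$ hq job list                          # monitor progress
```
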
[1]: job-arrays.md
......