Commit 39a615a7 authored by Jan Siwiec

Update file capacity-computing.md

parent 5d5008a2
Pipeline #44160 failed
@@ -4,7 +4,7 @@

 In many cases, it is useful to submit a huge number of computational jobs into the Slurm queue system.
 A huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations,
-achieving the best runtime, throughput, and computer utilization. This is called the **Capacity Computing**
+achieving the best runtime, throughput, and computer utilization. This is called **Capacity Computing**.

 However, executing a huge number of jobs via the Slurm queue may strain the system. This strain may
 result in slow response to commands, inefficient scheduling, and overall degradation of performance
@@ -15,12 +15,12 @@ There are two primary scenarios:

 1. Number of jobs < 1500, **and** the jobs are able to utilize one or more **full** nodes:
    Use [**Job arrays**][1].
-   The Job array allows to sumbmit and control up to 1500 jobs (tasks) in one packet. Several job arrays may be sumitted.
+   A job array allows you to submit and control up to 1500 jobs (tasks) in one packet. Several job arrays may be submitted (see the job-array sketch below the diff).
-2. Number of jobs >> 1500, **or** the jobs only utilze a **few cores/accelerators** each:
+2. Number of jobs >> 1500, **or** the jobs only utilize a **few cores/accelerators** each:
    Use [**HyperQueue**][2].
    HyperQueue can efficiently load balance a very large number of jobs (tasks) amongst available computing nodes.
-   HyperQueue may be also used if you have dependenices among the jobs.
+   HyperQueue may also be used if you have dependencies among the jobs (see the HyperQueue sketch below the diff).

 [1]: job-arrays.md
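To illustrate the first scenario, here is a minimal job-array sketch. It assumes a hypothetical worker program `./my_program` and a hypothetical `tasklist.txt` file with one input per line; the job name is arbitrary, and site-specific options (partition, project account) are omitted:

```bash
#!/bin/bash
#SBATCH --job-name=capacity-example
#SBATCH --nodes=1                 # scenario 1: each task occupies a full node
#SBATCH --array=1-1500            # one task per index, up to the 1500-task packet

# Each array task picks the input line matching its own index.
INPUT=$(sed -n "${SLURM_ARRAY_TASK_ID}p" tasklist.txt)

./my_program "$INPUT"
```

Saved, e.g., as `jobarray.sh`, the whole packet is submitted with a single `sbatch jobarray.sh`, and all tasks can then be held, released, or cancelled through the one array job ID.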
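For the second scenario, a HyperQueue sketch under similar assumptions: the `hq` binary is on the PATH, HyperQueue's automatic Slurm allocation is available at your site, `qcpu` stands in for your partition name, and `./my_program.sh` is a hypothetical task script:

```bash
# Start the HyperQueue server, e.g. on a login node or inside a job.
hq server start &

# Let HyperQueue spawn workers through Slurm as tasks arrive
# ("qcpu" is a placeholder for your site's partition name).
hq alloc add slurm --time-limit 1h -- --partition=qcpu

# Submit 10000 single-core tasks as one task array; each task
# finds its index in the HQ_TASK_ID environment variable.
hq submit --array 1-10000 --cpus 1 ./my_program.sh
```

Slurm only ever sees the worker allocations, not the 10000 individual tasks, which is what keeps the queue responsive; `hq job list` shows overall progress.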