In many cases, it is useful to submit a large number (>100) of computational jobs to the Slurm queue system.
Submitting many small jobs is one of the most effective ways to execute embarrassingly parallel calculations,
achieving the best runtime, throughput, and computer utilization.
However, executing a large number of jobs via the Slurm queue may strain the system. This strain may
result in slow response to commands, inefficient scheduling, and overall degradation of performance
and user experience for all users.
[//]: # (For this reason, the number of jobs is **limited to 100 jobs per user, 4,000 jobs and subjobs per user, 1,500 subjobs per job array**.)
!!! note
    If you wish to schedule more than 100 jobs at a time, follow one of the procedures below.
You can use [HyperQueue][1] when running a large number of jobs. HyperQueue helps to efficiently
load-balance a large number of tasks across the available computing nodes.
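
As an illustration only, the following Python sketch submits one HyperQueue task per input file through the `hq` command-line tool. It assumes a HyperQueue server is already running and workers have been started on allocated nodes; the `data/sample_*.dat` inputs and the `process_sample.sh` script are hypothetical and not part of this page. See the HyperQueue documentation for the exact setup commands.

```python
#!/usr/bin/env python3
"""Minimal sketch: submit many small tasks to HyperQueue instead of
creating one Slurm job per task.

Assumptions (not taken from this page): the `hq` binary is on PATH,
a HyperQueue server is running (`hq server start`), and workers have
been started on allocated nodes (e.g. `hq worker start` inside a
Slurm job). Input files and `process_sample.sh` are hypothetical.
"""

import subprocess
from pathlib import Path

# Hypothetical inputs: one small task per data file.
inputs = sorted(Path("data").glob("sample_*.dat"))

for path in inputs:
    # Each `hq submit` creates one HyperQueue task. HyperQueue
    # distributes the tasks over the connected workers, so Slurm only
    # sees the few worker jobs instead of hundreds of separate jobs.
    subprocess.run(
        ["hq", "submit", "./process_sample.sh", str(path)],
        check=True,
    )

print(f"Submitted {len(inputs)} HyperQueue tasks.")
```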