From 6c6658677f4501b7dee30bd4b979012daeb0e5d8 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Luk=C3=A1=C5=A1=20Krup=C4=8D=C3=ADk?= <lukas.krupcik@vsb.cz>
Date: Wed, 8 Apr 2020 10:14:58 +0200
Subject: [PATCH] Update capacity-computing.md

---
 docs.it4i/general/capacity-computing.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs.it4i/general/capacity-computing.md b/docs.it4i/general/capacity-computing.md
index ec6525fd6..2d2d2e7ae 100644
--- a/docs.it4i/general/capacity-computing.md
+++ b/docs.it4i/general/capacity-computing.md
@@ -4,7 +4,7 @@
 
 In many cases, it is useful to submit a large number (>100) of computational jobs to the PBS queue system. Running many (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving the best runtime, throughput, and computer utilization.
 
-However, executing a huge number of jobs via the PBS queue may strain the system. This strain may result in slow response to commands, inefficient scheduling, and overall degradation of performance and user experience for all users. For this reason, the number of jobs is **limited to 100 per user, 1000 per job array**.
+However, executing a huge number of jobs via the PBS queue may strain the system. This strain may result in slow response to commands, inefficient scheduling, and overall degradation of performance and user experience for all users. For this reason, job submission is **limited to 100 jobs per user, 4000 jobs and subjobs per user, and 1500 subjobs per job array**.
 
 !!! note
     Follow one of the procedures below if you wish to schedule more than 100 jobs at a time.
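
For context, a minimal PBS Pro job array sketch that stays within the limits above — the queue name, resource request, and program name are illustrative assumptions, not values taken from this patch:

```bash
#!/bin/bash
#PBS -N example_array
#PBS -q qprod                 # assumed queue name, adjust to your cluster
#PBS -l select=1:ncpus=1      # assumed per-subjob resources
#PBS -J 1-1500                # at most 1500 subjobs per job array

# Each subjob processes its own input file, selected by the array index.
cd "$PBS_O_WORKDIR"
./myprog input.$PBS_ARRAY_INDEX > output.$PBS_ARRAY_INDEX
```

Submitted with `qsub jobscript.sh`, the whole array should count as a single job toward the 100-jobs-per-user limit, while its subjobs count toward the 4000 jobs-and-subjobs limit.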
-- 
GitLab