diff --git a/docs.it4i/general/capacity-computing.md b/docs.it4i/general/capacity-computing.md
index b4f4759b9e6ebce5ee7115f33d67727d1029d677..e65ea43f34ebf3a01f0f7b13201741873b9de19e 100644
--- a/docs.it4i/general/capacity-computing.md
+++ b/docs.it4i/general/capacity-computing.md
@@ -8,15 +8,16 @@ achieving the best runtime, throughput, and computer utilization. This is called
 
 However, executing a huge number of jobs via the Slurm queue may strain the system. This strain may
 result in slow response to commands, inefficient scheduling, and overall degradation of performance
-and user experience for all users. We **recommend** using [**Job arrays**][1] or [**HyperQueue**][2] to execute many jobs.
+and user experience for all users.  
+We **recommend** using [**Job arrays**][1] or [**HyperQueue**][2] to execute many jobs.
 
 There are two primary scenarios:
 
-1. Numeber of jobs < 1500, **and** the jobs are able to utilize one or more full nodes:  
+1. Number of jobs < 1500, **and** the jobs are able to utilize one or more **full** nodes:  
-    Use [**Job arrays**][2].  
-    The Job array allows to sumbmit and control many jobs (tasks) in one packet. Several job arrays may be sumitted.
+    Use [**Job arrays**][1].  
+    A job array allows you to submit and control many jobs (tasks) in a single batch. Several job arrays may be submitted.
 
-2. Number of jobs >> 1500, **or** the jobs only utilze a few cores each:  
+2. Number of jobs >> 1500, **or** the jobs only utilize a **few cores/accelerators** each:  
-    Use [**HyperQueue**][1].  
-    HyperQueue can help efficiently load balance a very large number of (small) jobs amongst available computing nodes.
-    HyperQueue may be also used if you have dependenices among the jobs.
+    Use [**HyperQueue**][2].  
+    HyperQueue can efficiently load-balance a very large number of (small) jobs across the available compute nodes.
+    HyperQueue may also be used if you have dependencies among the jobs.
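+
+As a rough sketch of scenario 1, a job array can be submitted as shown below. The script name `myjob.sh`, the array range, and the `process_input` program are placeholders, not cluster defaults:
+
+```console
+$ cat myjob.sh
+#!/usr/bin/env bash
+#SBATCH --job-name=array-demo
+#SBATCH --array=1-100     # 100 tasks controlled as one packet; adjust to your workload
+#SBATCH --nodes=1         # each task should utilize one or more full nodes
+
+# Each task selects its own input file via the array index.
+./process_input input.${SLURM_ARRAY_TASK_ID}
+$ sbatch myjob.sh
+```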