diff --git a/docs.it4i/general/capacity-computing.md b/docs.it4i/general/capacity-computing.md
index 43ad1936fbbf6a6650a1d909e84086c779a949ad..b4f4759b9e6ebce5ee7115f33d67727d1029d677 100644
--- a/docs.it4i/general/capacity-computing.md
+++ b/docs.it4i/general/capacity-computing.md
@@ -8,16 +8,35 @@ achieving the best runtime, throughput, and computer utilization. This is called
 
 However, executing a huge number of jobs via the Slurm queue may strain the system. This strain may
 result in slow response to commands, inefficient scheduling, and overall degradation of performance
-and user experience for all users. We recommend using [**Job arrays**][1] or [**HyperQueue**][2] to execute many jobs.
+and user experience for all users. We **recommend** using [**Job arrays**][1] or [**HyperQueue**][2] to execute many jobs.
 
 There are two primary scenarios:
 
-1. Numeber of jobs < 1500, **and** the jobs are able to utilize one or more full nodes:  
+1. Number of jobs < 1500, **and** the jobs are able to utilize one or more full nodes:  
-    Use [**Job arrays**][2]. The Job array allows to sumbmit and control many jobs (tasks) in one packet. Several job arrays may be sumitted.
+    Use [**Job arrays**][1].  
+    A job array lets you submit and control many jobs (tasks) as a single unit. Several job arrays may be submitted; a minimal sketch follows this list.
 
-2. Number of jobs >> 1500, **or** the jobs only utilze a few cores each:  
+2. Number of jobs >> 1500, **or** the jobs only utilize a few cores each:  
-    Use [**HyperQueue**][1].  HyperQueue can help efficiently
-    load balance a very large number of (small) jobs amongst available computing nodes. HyperQueue may be also used if you have dependenices among the jobs.
+    Use [**HyperQueue**][2].  
+    HyperQueue can efficiently load balance a very large number of (small) jobs across the available compute nodes.
+    HyperQueue may also be used if you have dependencies among the jobs; a HyperQueue sketch also follows this list.
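+
+For scenario 1, a Slurm job array script might look like the sketch below; `task.sh` is only a placeholder for your per-task workload (see [Job arrays][1] for the full workflow):
+
+```bash
+#!/usr/bin/env bash
+#SBATCH --job-name=array-example
+#SBATCH --array=1-1500
+#SBATCH --nodes=1
+
+# Each task in the array receives its own index via SLURM_ARRAY_TASK_ID.
+./task.sh "$SLURM_ARRAY_TASK_ID"
+```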
 
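+For scenario 2, a comparable HyperQueue sketch is shown below, assuming the `hq` binary is available on the cluster; each task can read its index from the `HQ_TASK_ID` environment variable (see [HyperQueue][2] for details):
+
+```console
+$ hq submit --array 1-10000 ./task.sh
+```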
 
 [1]: job-arrays.md