diff --git a/docs.it4i/general/capacity-computing.md b/docs.it4i/general/capacity-computing.md
index 7f6773b6469f23c2ed452f2e97a927ea7afe7756..304876256bc5208922edc57859750d2f06919a76 100644
--- a/docs.it4i/general/capacity-computing.md
+++ b/docs.it4i/general/capacity-computing.md
@@ -4,7 +4,7 @@
 
 In many cases, it is useful to submit a huge number of computational jobs into the Slurm queue system.
 A huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations,
-achieving the best runtime, throughput, and computer utilization. This is called the **Capacity Computing**
+achieving the best runtime, throughput, and computer utilization. This is called **Capacity Computing**.
 
 However, executing a huge number of jobs via the Slurm queue may strain the system. This strain may
 result in slow response to commands, inefficient scheduling, and overall degradation of performance
@@ -15,12 +15,12 @@ There are two primary scenarios:
 
 1. Number of jobs < 1500, **and** the jobs are able to utilize one or more **full** nodes:  
     Use [**Job arrays**][1].  
-    The Job array allows to sumbmit and control up to 1500 jobs (tasks) in one packet. Several job arrays may be sumitted.
+    A job array allows you to submit and control up to 1500 jobs (tasks) in one packet. Several job arrays may be submitted (see the first sketch below).
 
-2. Number of jobs >> 1500, **or** the jobs only utilze a **few cores/accelerators** each:  
+2. Number of jobs >> 1500, **or** the jobs only utilize a **few cores/accelerators** each:  
     Use [**HyperQueue**][2].  
     HyperQueue can help efficiently load balance a very large number of jobs (tasks) amongst available computing nodes.
-    HyperQueue may be also used if you have dependenices among the jobs.
+    HyperQueue may also be used if you have dependencies among the jobs (see the second sketch below).
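+
+As a minimal sketch for scenario 1 (the job name, time limit, and the worker
+program `myprog` are hypothetical placeholders; see the [Job arrays][1] page
+for the recommended setup), a packet of tasks is submitted with a single
+`sbatch` script:
+
+```bash
+#!/usr/bin/env bash
+#SBATCH --job-name=array-example
+#SBATCH --nodes=1             # each task utilizes one full node
+#SBATCH --time=01:00:00
+#SBATCH --array=1-1500        # up to 1500 tasks in one packet
+
+# Each task selects its own input via the SLURM_ARRAY_TASK_ID variable.
+./myprog input.${SLURM_ARRAY_TASK_ID}
+```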
 
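+A minimal sketch for scenario 2 (the worker script `worker.sh`, the task
+count, and the per-task CPU count are hypothetical placeholders; see the
+[HyperQueue][2] page for how to deploy the server and workers on the cluster):
+
+```bash
+# Start the HyperQueue server, e.g. on a login node.
+hq server start &
+
+# Inside a Slurm allocation, start a worker on the compute node(s).
+srun hq worker start &
+
+# Submit many small tasks; HyperQueue balances them across the workers.
+hq submit --cpus=2 --array=1-10000 ./worker.sh
+```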
 
 [1]: job-arrays.md