diff --git a/docs.it4i/anselm-cluster-documentation/capacity-computing.md b/docs.it4i/anselm-cluster-documentation/capacity-computing.md
index 90b9934dacaba4ce370e1191cf4c356a29796f64..78350a812c442ba53f91be6ee9c36daa8174f9d1 100644
--- a/docs.it4i/anselm-cluster-documentation/capacity-computing.md
+++ b/docs.it4i/anselm-cluster-documentation/capacity-computing.md
@@ -16,11 +16,21 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 
 Policy
 ------
+
 1.  A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
 2.  The array size is at most 1000 subjobs.
 
 Job arrays
 --------------
+
 !!! Note "Note"
-	Huge number of jobs may be easily submitted and managed as a job array.
+	A huge number of jobs may be easily submitted and managed as a job array.
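+
+A minimal sketch of the idea (the jobscript name and the 1-1000 range are illustrative):
+
+```bash
+# submit one array job; PBS expands it into subjobs 1..1000,
+# each identified by its own PBS_ARRAY_INDEX
+qsub -J 1-1000 jobscript
+```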
 
@@ -221,7 +231,23 @@ In this example, we submit a job of 101 tasks. 16 input files will be processed
-Please note the #PBS directives in the beginning of the jobscript file, dont' forget to set your valid PROJECT_ID and desired queue.
+Please note the #PBS directives at the beginning of the jobscript file; don't forget to set your valid PROJECT_ID and desired queue.
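+
+For illustration, the directives to check might look like this (the queue name here is just an example):
+
+```bash
+#PBS -A PROJECT_ID
+#PBS -q qprod
+```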
 
 Job arrays and GNU parallel
--------------------------------
+---------------------------
 
 !!! Note "Note"
-	Combine the Job arrays and GNU parallel for best throughput of single core jobs
+	Combine job arrays and GNU parallel for the best throughput of single core jobs.
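+
+In rough outline (the numbers are illustrative, assuming 16 cores per node), one subjob is submitted per node and GNU parallel spreads that subjob's tasks across the cores:
+
+```bash
+# stepped array: subjob indices 1, 17, 33, ...; each subjob
+# processes the next 16 tasks via GNU parallel on its own node
+qsub -J 1-1600:16 jobscript
+```
+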
@@ -307,6 +333,14 @@ Please note the #PBS directives in the beginning of the jobscript file, dont' fo
 
 Examples
 --------
+
-Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run huge number of jobs. We recommend to try out the examples, before using this for running production jobs.
+Download the examples in [capacity.zip](capacity.zip), illustrating the approaches listed above for running a huge number of jobs. We recommend trying out the examples before using them for production jobs.
 
-Unzip the archive in an empty directory on Anselm and follow the instructions in the README file
+Unzip the archive in an empty directory on Anselm and follow the instructions in the README file.
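+
+For example (assuming the archive is in the current directory):
+
+```bash
+unzip capacity.zip
+cat README
+```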
diff --git a/docs.it4i/salomon/capacity-computing.md b/docs.it4i/salomon/capacity-computing.md
index 28c57a64e27567b5670b59a84330782fd47bd563..b405e5c3d1962d86c0bc707b11ab7a0f1810fc58 100644
--- a/docs.it4i/salomon/capacity-computing.md
+++ b/docs.it4i/salomon/capacity-computing.md
@@ -3,6 +3,7 @@ Capacity computing
 
 Introduction
 ------------
+
-In many cases, it is useful to submit huge (100+) number of computational jobs into the PBS queue system. Huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving best runtime, throughput and computer utilization.
+In many cases, it is useful to submit a huge number (100+) of computational jobs into the PBS queue system. A huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving the best runtime, throughput and computer utilization.
 
-However, executing huge number of jobs via the PBS queue may strain the system. This strain may result in slow response to commands, inefficient scheduling and overall degradation of performance and user experience, for all users. For this reason, the number of jobs is **limited to 100 per user, 1500 per job array**
+However, executing a huge number of jobs via the PBS queue may strain the system. This strain may result in slow response to commands, inefficient scheduling and an overall degradation of performance and user experience for all users. For this reason, the number of jobs is **limited to 100 per user, 1500 per job array**.
@@ -16,11 +17,21 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 
 Policy
 ------
+
 1.  A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
 2.  The array size is at most 1000 subjobs.
 
 Job arrays
 --------------
+
 !!! Note "Note"
-	Huge number of jobs may be easily submitted and managed as a job array.
+	A huge number of jobs may be easily submitted and managed as a job array.
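+
+Inside the jobscript, each subjob finds its index in PBS_ARRAY_INDEX and can use it to pick its task; a minimal sketch (the tasklist file and program name are illustrative):
+
+```bash
+# select this subjob's input as the N-th line of a task list
+TASK=$(sed -n "${PBS_ARRAY_INDEX}p" tasklist)
+./myprog "$TASK"
+```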
 
@@ -152,6 +163,14 @@ Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/)
 
 GNU parallel
 ----------------
+
 !!! Note "Note"
 	Use GNU parallel to run many single core tasks on one node.
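+
+A tiny illustration (the task script and input names are made up; Salomon nodes have 24 cores):
+
+```bash
+# run the tasks one per core, 24 at a time on the node
+parallel -j 24 ./mytask.sh {} ::: input.*
+```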
 
@@ -223,6 +242,15 @@ Please note the #PBS directives in the beginning of the jobscript file, dont' fo
 
 Job arrays and GNU parallel
--------------------------------
+---------------------------
+
 !!! Note "Note"
-	Combine the Job arrays and GNU parallel for best throughput of single core jobs
+	Combine job arrays and GNU parallel for the best throughput of single core jobs.
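+
+Schematically (file names and the step of 24 are illustrative), each subjob feeds its own slice of the task list to GNU parallel:
+
+```bash
+# with e.g. qsub -J 1-2400:24, subjobs get indices 1, 25, 49, ...;
+# each one takes the next 24 lines of the task list, one task per core
+sed -n "${PBS_ARRAY_INDEX},+23p" tasklist | parallel -j 24 ./mytask.sh {}
+```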
 
@@ -307,6 +335,7 @@ Please note the #PBS directives in the beginning of the jobscript file, dont' fo
 
 Examples
 --------
+
-Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run huge number of jobs. We recommend to try out the examples, before using this for running production jobs.
+Download the examples in [capacity.zip](capacity.zip), illustrating the approaches listed above for running a huge number of jobs. We recommend trying out the examples before using them for production jobs.
 
-Unzip the archive in an empty directory on Anselm and follow the instructions in the README file
+Unzip the archive in an empty directory on Salomon and follow the instructions in the README file.