diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index d166f892ad9a7a7d3463cc22b05a83e0e4ee4e76..993f6d08d0e76157e03e94ff5c864447a6ad360c 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -9,7 +9,7 @@ docs:
   image: davidhrbac/docker-mdcheck:latest
   allow_failure: true
   script:
-  - mdl -r ~MD013,~MD033,~MD014 *.md docs.it4i/
+  - mdl -r ~MD013,~MD033,~MD014,~MD026,~MD037 *.md docs.it4i/
 
 two spaces:
   stage: test
diff --git a/docs.it4i/anselm-cluster-documentation/capacity-computing.md b/docs.it4i/anselm-cluster-documentation/capacity-computing.md
index 59d58906020f9d73c15c259916c421a5e47b048b..6ce94ca34b77ac4b6cc24168fc36ae4e8e0839fa 100644
--- a/docs.it4i/anselm-cluster-documentation/capacity-computing.md
+++ b/docs.it4i/anselm-cluster-documentation/capacity-computing.md
@@ -9,14 +9,14 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 !!! note
     Please follow one of the procedures below, in case you wish to schedule more than 100 jobs at a time.
 
-*   Use [Job arrays](capacity-computing/#job-arrays) when running huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
-*   Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
-*   Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running huge number of single core jobs
+* Use [Job arrays](capacity-computing/#job-arrays) when running a huge number of [multithreaded](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithreaded across several nodes) jobs
+* Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
+* Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running a huge number of single core jobs
 
 ## Policy
 
-1.  A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
-2.  The array size is at most 1000 subjobs.
+1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
+1. The array size is at most 1000 subjobs.
 
 ## Job Arrays
 
@@ -25,9 +25,9 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 
 A job array is a compact representation of many jobs, called subjobs. The subjobs share the same job script, and have the same values for all attributes and resources, with the following exceptions:
 
-*   each subjob has a unique index, $PBS_ARRAY_INDEX
-*   job Identifiers of subjobs only differ by their indices
-*   the state of subjobs can differ (R,Q,...etc.)
+* each subjob has a unique index, $PBS_ARRAY_INDEX
+* job identifiers of subjobs only differ by their indices
+* the state of subjobs can differ (R, Q, etc.)
 
 All subjobs within a job array have the same scheduling priority and schedule as independent jobs. The entire job array is submitted through a single qsub command and may be managed by the qdel, qalter, qhold, qrls and qsig commands as a single job.
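+
+For illustration, each subjob may use its unique index to select its own piece of work. This is only a minimal sketch; the program and file names are hypothetical:
+
+```bash
+#!/bin/bash
+# each subjob processes the input file matching its own array index
+INPUT=input.${PBS_ARRAY_INDEX}
+./myprog.x < ${INPUT} > output.${PBS_ARRAY_INDEX}
+```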
 
@@ -101,10 +101,10 @@ Check status of the job array by the qstat command.
 $ qstat -a 12345[].dm2
 
 dm2:
-                                                            Req'd  Req'd   Elap
-Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
+                                                            Req'd Req'd   Elap
+Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time S Time
 --------------- -------- --  |---|---| ------ --- --- ------ ----- - -----
-12345[].dm2     user2    qprod    xx          13516   1  16    --  00:50 B 00:02
+12345[].dm2     user2    qprod    xx          13516   1 16    --  00:50 B 00:02
 ```
 
 The status B means that some subjobs are already running.
@@ -114,16 +114,16 @@ Check status of the first 100 subjobs by the qstat command.
 $ qstat -a 12345[1-100].dm2
 
 dm2:
-                                                            Req'd  Req'd   Elap
-Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
+                                                            Req'd Req'd   Elap
+Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time S Time
 --------------- -------- --  |---|---| ------ --- --- ------ ----- - -----
-12345[1].dm2    user2    qprod    xx          13516   1  16    --  00:50 R 00:02
-12345[2].dm2    user2    qprod    xx          13516   1  16    --  00:50 R 00:02
-12345[3].dm2    user2    qprod    xx          13516   1  16    --  00:50 R 00:01
-12345[4].dm2    user2    qprod    xx          13516   1  16    --  00:50 Q   --
+12345[1].dm2    user2    qprod    xx          13516   1 16    --  00:50 R 00:02
+12345[2].dm2    user2    qprod    xx          13516   1 16    --  00:50 R 00:02
+12345[3].dm2    user2    qprod    xx          13516   1 16    --  00:50 R 00:01
+12345[4].dm2    user2    qprod    xx          13516   1 16    --  00:50 Q   --
      .             .        .      .             .    .   .     .    .   .    .
      ,             .        .      .             .    .   .     .    .   .    .
-12345[100].dm2  user2    qprod    xx          13516   1  16    --  00:50 Q   --
+12345[100].dm2 user2    qprod    xx          13516   1 16    --  00:50 Q   --
 ```
 
 Delete the entire job array. Running subjobs will be killed, queueing subjobs will be deleted.
@@ -152,7 +152,7 @@ Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/)
 !!! note
     Use GNU parallel to run many single core tasks on one node.
 
-GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful in running single core jobs via the queue system on  Anselm.
+GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful in running single core jobs via the queue system on Anselm.
 
 For more information and examples see the parallel man page:
 
@@ -197,7 +197,7 @@ TASK=$1
 cp $PBS_O_WORKDIR/$TASK input
 
 # execute the calculation
-cat  input > output
+cat input > output
 
 # copy output file to submit directory
 cp output $PBS_O_WORKDIR/$TASK.out
@@ -214,7 +214,7 @@ $ qsub -N JOBNAME jobscript
 12345.dm2
 ```
 
-In this example, we submit a job of 101 tasks. 16 input files will be processed in  parallel. The 101 tasks on 16 cores are assumed to complete in less than 2 hours.
+In this example, we submit a job of 101 tasks. 16 input files will be processed in parallel. The 101 tasks on 16 cores are assumed to complete in less than 2 hours.
 
 !!! hint
     Use #PBS directives at the beginning of the jobscript file, and don't forget to set your valid PROJECT_ID and desired queue.
@@ -279,18 +279,18 @@ cat input > output
 cp output $PBS_O_WORKDIR/$TASK.out
 ```
 
-In this example, the jobscript executes in multiple instances in parallel, on all cores of a computing node.  Variable $TASK expands to one of the input filenames from tasklist. We copy the input file to local scratch, execute the myprog.x and copy the output file back to the submit directory, under the $TASK.out name.  The numtasks file controls how many tasks will be run per subjob. Once an task is finished, new task starts, until the number of tasks  in numtasks file is reached.
+In this example, the jobscript executes in multiple instances in parallel, on all cores of a computing node. Variable $TASK expands to one of the input filenames from tasklist. We copy the input file to local scratch, execute the myprog.x and copy the output file back to the submit directory, under the $TASK.out name. The numtasks file controls how many tasks will be run per subjob. Once a task is finished, a new task starts, until the number of tasks in the numtasks file is reached.
 
 !!! note
-    Select  subjob walltime and number of tasks per subjob  carefully
+    Select subjob walltime and number of tasks per subjob carefully
 
 When deciding these values, consider the following guiding rules:
 
-1.  Let n=N/16.  Inequality (n+1) \* T < W should hold. The N is number of tasks per subjob, T is expected single task walltime and W is subjob walltime. Short subjob walltime improves scheduling and job throughput.
-2.  Number of tasks should be modulo 16.
-3.  These rules are valid only when all tasks have similar task walltimes T.
+1. Let n=N/16. The inequality (n+1) \* T < W should hold, where N is the number of tasks per subjob, T is the expected single task walltime and W is the subjob walltime. A short subjob walltime improves scheduling and job throughput (see the worked example below).
+1. The number of tasks should be a multiple of 16.
+1. These rules are valid only when all tasks have similar task walltimes T.
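+
+For illustration of the first rule (the numbers below are made up): with N = 64 tasks per subjob and an expected single task walltime of T = 20 minutes, n = 64/16 = 4, so the subjob walltime should satisfy W > (4+1) \* 20 = 100 minutes; requesting 2 hours leaves a comfortable margin. A quick sanity check in the shell:
+
+```bash
+# illustrative only - made-up numbers for rule 1
+N=64   # tasks per subjob
+T=20   # expected single task walltime in minutes
+n=$(( N / 16 ))
+echo "subjob walltime must exceed $(( (n + 1) * T )) minutes"
+```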
 
-### Submit the Job Array
+### Submit the Job Array (-J)
 
 To submit the job array, use the qsub -J command. The 992 tasks' job of the [example above](capacity-computing/#combined_example) may be submitted like this:
 
diff --git a/docs.it4i/anselm-cluster-documentation/compute-nodes.md b/docs.it4i/anselm-cluster-documentation/compute-nodes.md
index 440201ed024b87d101ea4a28ffd9ad393df83e37..6f54035acc2c510d8bb63aa18be7f04a9aa3e0ad 100644
--- a/docs.it4i/anselm-cluster-documentation/compute-nodes.md
+++ b/docs.it4i/anselm-cluster-documentation/compute-nodes.md
@@ -6,46 +6,46 @@ Anselm is cluster of x86-64 Intel based nodes built on Bull Extreme Computing bu
 
 ### Compute Nodes Without Accelerator
 
-*   180 nodes
-*   2880 cores in total
-*   two Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
-*   64 GB of physical memory per node
-*   one 500GB SATA 2,5” 7,2 krpm HDD per node
-*   bullx B510 blade servers
-*   cn[1-180]
+* 180 nodes
+* 2880 cores in total
+* two Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
+* 64 GB of physical memory per node
+* one 500 GB SATA 2.5” 7.2 krpm HDD per node
+* bullx B510 blade servers
+* cn[1-180]
 
 ### Compute Nodes With GPU Accelerator
 
-*   23 nodes
-*   368 cores in total
-*   two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
-*   96 GB of physical memory per node
-*   one 500GB SATA 2,5” 7,2 krpm HDD per node
-*   GPU accelerator 1x NVIDIA Tesla Kepler K20 per node
-*   bullx B515 blade servers
-*   cn[181-203]
+* 23 nodes
+* 368 cores in total
+* two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
+* 96 GB of physical memory per node
+* one 500 GB SATA 2.5” 7.2 krpm HDD per node
+* GPU accelerator 1x NVIDIA Tesla Kepler K20 per node
+* bullx B515 blade servers
+* cn[181-203]
 
 ### Compute Nodes With MIC Accelerator
 
-*   4 nodes
-*   64 cores in total
-*   two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
-*   96 GB of physical memory per node
-*   one 500GB SATA 2,5” 7,2 krpm HDD per node
-*   MIC accelerator 1x Intel Phi 5110P per node
-*   bullx B515 blade servers
-*   cn[204-207]
+* 4 nodes
+* 64 cores in total
+* two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
+* 96 GB of physical memory per node
+* one 500 GB SATA 2.5” 7.2 krpm HDD per node
+* MIC accelerator 1x Intel Phi 5110P per node
+* bullx B515 blade servers
+* cn[204-207]
 
 ### Fat Compute Nodes
 
-*   2 nodes
-*   32 cores in total
-*   2 Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
-*   512 GB of physical memory per node
-*   two 300GB SAS 3,5”15krpm HDD (RAID1) per node
-*   two 100GB SLC SSD per node
-*   bullx R423-E3 servers
-*   cn[208-209]
+* 2 nodes
+* 32 cores in total
+* 2 Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
+* 512 GB of physical memory per node
+* two 300 GB SAS 3.5” 15 krpm HDDs (RAID1) per node
+* two 100GB SLC SSD per node
+* bullx R423-E3 servers
+* cn[208-209]
 
 ![](../img/bullxB510.png)
 **Figure Anselm bullx B510 servers**
@@ -65,23 +65,23 @@ Anselm is equipped with Intel Sandy Bridge processors Intel Xeon E5-2665 (nodes
 
 ### Intel Sandy Bridge E5-2665 Processor
 
-*   eight-core
-*   speed: 2.4 GHz, up to 3.1 GHz using Turbo Boost Technology
-*   peak performance:  19.2 GFLOP/s per core
-*   caches:
-   * L2: 256 KB per core
-   * L3: 20 MB per processor
-*   memory bandwidth at the level of the processor: 51.2 GB/s
+* eight-core
+* speed: 2.4 GHz, up to 3.1 GHz using Turbo Boost Technology
+* peak performance: 19.2 GFLOP/s per core
+* caches:
+  * L2: 256 KB per core
+  * L3: 20 MB per processor
+* memory bandwidth at the level of the processor: 51.2 GB/s
 
 ### Intel Sandy Bridge E5-2470 Processor
 
-*   eight-core
-*   speed: 2.3 GHz, up to 3.1 GHz using Turbo Boost Technology
-*   peak performance:  18.4 GFLOP/s per core
-*   caches:
-   * L2: 256 KB per core
-   * L3: 20 MB per processor
-*   memory bandwidth at the level of the processor: 38.4 GB/s
+* eight-core
+* speed: 2.3 GHz, up to 3.1 GHz using Turbo Boost Technology
+* peak performance: 18.4 GFLOP/s per core
+* caches:
+  * L2: 256 KB per core
+  * L3: 20 MB per processor
+* memory bandwidth at the level of the processor: 38.4 GB/s
 
 Nodes equipped with the Intel Xeon E5-2665 CPU have the PBS resource attribute cpu_freq = 24; nodes equipped with the Intel Xeon E5-2470 CPU have cpu_freq = 23.
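+
+For example, nodes with a particular clock speed may be requested by adding the cpu_freq attribute to the selection statement (the project name is a placeholder):
+
+```bash
+$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16:cpu_freq=24 -I
+```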
 
@@ -101,30 +101,30 @@ Intel Turbo Boost Technology is used by default,  you can disable it for all nod
 
 ### Compute Node Without Accelerator
 
-*   2 sockets
-*   Memory Controllers are integrated into processors.
-   * 8 DDR3 DIMMs per node
-   * 4 DDR3 DIMMs per CPU
-   * 1 DDR3 DIMMs per channel
-   * Data rate support: up to 1600MT/s
-*   Populated memory: 8 x 8 GB DDR3 DIMM 1600 MHz
+* 2 sockets
+* Memory Controllers are integrated into processors.
+  * 8 DDR3 DIMMs per node
+  * 4 DDR3 DIMMs per CPU
+  * 1 DDR3 DIMMs per channel
+  * Data rate support: up to 1600MT/s
+* Populated memory: 8 x 8 GB DDR3 DIMM 1600 MHz
 
 ### Compute Node With GPU or MIC Accelerator
 
-*   2 sockets
-*   Memory Controllers are integrated into processors.
-   * 6 DDR3 DIMMs per node
-   * 3 DDR3 DIMMs per CPU
-   * 1 DDR3 DIMMs per channel
-   * Data rate support: up to 1600MT/s
-*   Populated memory: 6 x 16 GB DDR3 DIMM 1600 MHz
+* 2 sockets
+* Memory Controllers are integrated into processors.
+  * 6 DDR3 DIMMs per node
+  * 3 DDR3 DIMMs per CPU
+  * 1 DDR3 DIMMs per channel
+  * Data rate support: up to 1600MT/s
+* Populated memory: 6 x 16 GB DDR3 DIMM 1600 MHz
 
 ### Fat Compute Node
 
-*   2 sockets
-*   Memory Controllers are integrated into processors.
-   * 16 DDR3 DIMMs per node
-   * 8 DDR3 DIMMs per CPU
-   * 2 DDR3 DIMMs per channel
-   * Data rate support: up to 1600MT/s
-*   Populated memory: 16 x 32 GB DDR3 DIMM 1600 MHz
+* 2 sockets
+* Memory Controllers are integrated into processors.
+  * 16 DDR3 DIMMs per node
+  * 8 DDR3 DIMMs per CPU
+  * 2 DDR3 DIMMs per channel
+  * Data rate support: up to 1600MT/s
+* Populated memory: 16 x 32 GB DDR3 DIMM 1600 MHz
diff --git a/docs.it4i/anselm-cluster-documentation/environment-and-modules.md b/docs.it4i/anselm-cluster-documentation/environment-and-modules.md
index 1439c6733c35f3440df643da8e83e1c6308726c7..2aae813076a8f25d8b265cccb3d856f0dc8109fe 100644
--- a/docs.it4i/anselm-cluster-documentation/environment-and-modules.md
+++ b/docs.it4i/anselm-cluster-documentation/environment-and-modules.md
@@ -1,6 +1,6 @@
 # Environment and Modules
 
-### Environment Customization
+## Environment Customization
 
 After logging in, you may want to configure the environment. Write your preferred path definitions, aliases, functions and module loads in the .bashrc file
 
@@ -26,9 +26,9 @@ fi
 !!! note
     Do not run commands outputting to standard output (echo, module list, etc.) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Consider using the SSH session interactivity check for such commands, as shown in the previous example.
 
-### Application Modules
+## Application Modules
 
-In order to configure your shell for  running particular application on Anselm we use Module package interface.
+In order to configure your shell for running a particular application on Anselm, we use the Module package interface.
 
 !!! note
     The modules set up the application paths, library paths and environment variables for running particular application.
@@ -43,7 +43,7 @@ To check available modules use
 $ module avail
 ```
 
-To load a module, for example the octave module  use
+To load a module, for example the octave module, use
 
 ```bash
 $ module load octave
@@ -75,7 +75,7 @@ PrgEnv-gnu sets up the GNU development environment in conjunction with the bullx
 
 PrgEnv-intel sets up the INTEL development environment in conjunction with the Intel MPI library
 
-### Application Modules Path Expansion
+## Application Modules Path Expansion
 
 All application modules on the Salomon cluster (and further) will be built using a tool called [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). In case you want to use applications that are already built by EasyBuild, you have to modify your MODULEPATH environment variable.
 
diff --git a/docs.it4i/anselm-cluster-documentation/hardware-overview.md b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
index b477688da52a57619221a9bcc22397e0e7769191..f130bd152f8666dd30cf9d3a7021d04f4ffa99f3 100644
--- a/docs.it4i/anselm-cluster-documentation/hardware-overview.md
+++ b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
@@ -1,6 +1,6 @@
 # Hardware Overview
 
-The Anselm cluster consists of 209 computational nodes named cn[1-209] of which 180 are regular compute nodes, 23 GPU Kepler K20 accelerated nodes, 4 MIC Xeon Phi 5110P accelerated nodes and 2 fat nodes. Each node is a  powerful x86-64 computer, equipped with 16 cores (two eight-core Intel Sandy Bridge processors), at least 64 GB RAM, and local hard drive. The user access to the Anselm cluster is provided by two login nodes login[1,2]. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 320 TB /home disk storage to store the user files. The 146 TB shared /scratch storage is available for the scratch data.
+The Anselm cluster consists of 209 computational nodes named cn[1-209] of which 180 are regular compute nodes, 23 GPU Kepler K20 accelerated nodes, 4 MIC Xeon Phi 5110P accelerated nodes and 2 fat nodes. Each node is a powerful x86-64 computer, equipped with 16 cores (two eight-core Intel Sandy Bridge processors), at least 64 GB RAM, and local hard drive. The user access to the Anselm cluster is provided by two login nodes login[1,2]. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 320 TB /home disk storage to store the user files. The 146 TB shared /scratch storage is available for the scratch data.
 
 The Fat nodes are equipped with large amount (512 GB) of memory. Virtualization infrastructure provides resources to run long term servers and services in virtual mode. Fat nodes and virtual servers may access 45 TB of dedicated block storage. Accelerated nodes, fat nodes, and virtualization infrastructure are available [upon request](https://support.it4i.cz/rt) made by a PI.
 
@@ -12,10 +12,10 @@ The cluster compute nodes cn[1-207] are organized within 13 chassis.
 
 There are four types of compute nodes:
 
-*   180 compute nodes without the accelerator
-*   23 compute nodes with GPU accelerator - equipped with NVIDIA Tesla Kepler K20
-*   4 compute nodes with MIC accelerator - equipped with Intel Xeon Phi 5110P
-*   2 fat nodes - equipped with 512 GB RAM and two 100 GB SSD drives
+* 180 compute nodes without the accelerator
+* 23 compute nodes with GPU accelerator - equipped with NVIDIA Tesla Kepler K20
+* 4 compute nodes with MIC accelerator - equipped with Intel Xeon Phi 5110P
+* 2 fat nodes - equipped with 512 GB RAM and two 100 GB SSD drives
 
 [More about Compute nodes](compute-nodes/).
 
diff --git a/docs.it4i/anselm-cluster-documentation/introduction.md b/docs.it4i/anselm-cluster-documentation/introduction.md
index 6cf377ecf0fe651ee18040170eeb29a5578f4907..73e8631574b0f2200b2efe6db084a6c0073e4d07 100644
--- a/docs.it4i/anselm-cluster-documentation/introduction.md
+++ b/docs.it4i/anselm-cluster-documentation/introduction.md
@@ -2,7 +2,7 @@
 
 Welcome to Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totaling 3344 compute cores with 15 TB RAM and giving over 94 TFLOP/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64 GB RAM, and 500 GB hard disk drive. Nodes are interconnected by fully non-blocking fat-tree InfiniBand network and equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).
 
-The cluster runs [operating system](software/operating-system/), which is compatible with the  RedHat [Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/).
+The cluster runs [operating system](software/operating-system/), which is compatible with the RedHat [Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/).
 
 User data shared file-system (HOME, 320 TB) and job data shared file-system (SCRATCH, 146 TB) are available to users.
 
diff --git a/docs.it4i/anselm-cluster-documentation/job-priority.md b/docs.it4i/anselm-cluster-documentation/job-priority.md
index 2eebe9d54719878015d18604abc33d7be16db422..06c7e921d38a35fac318acc7485dcf2c1a015ddf 100644
--- a/docs.it4i/anselm-cluster-documentation/job-priority.md
+++ b/docs.it4i/anselm-cluster-documentation/job-priority.md
@@ -6,9 +6,9 @@ Scheduler gives each job an execution priority and then uses this job execution
 
 Job execution priority on Anselm is determined by these job properties (in order of importance):
 
-1.  queue priority
-2.  fair-share priority
-3.  eligible time
+1. queue priority
+1. fair-share priority
+1. eligible time
 
 ### Queue Priority
 
diff --git a/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md b/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md
index 2f76f9280b0751802d69d11587aee54f41f611b7..e0abdcb04f1d07b06edd594c2cb49aa5f550429f 100644
--- a/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md
+++ b/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md
@@ -4,12 +4,12 @@
 
 When allocating computational resources for the job, please specify
 
-1.  suitable queue for your job (default is qprod)
-2.  number of computational nodes required
-3.  number of cores per node required
-4.  maximum wall time allocated to your calculation, note that jobs exceeding maximum wall time will be killed
-5.  Project ID
-6.  Jobscript or interactive switch
+1. suitable queue for your job (default is qprod)
+1. number of computational nodes required
+1. number of cores per node required
+1. maximum wall time allocated to your calculation, note that jobs exceeding maximum wall time will be killed
+1. Project ID
+1. Jobscript or interactive switch
 
 !!! note
     Use the **qsub** command to submit your job to a queue for allocation of the computational resources.
@@ -40,13 +40,13 @@ In this example, we allocate 4 nodes, 16 cores per node, for 1 hour. We allocate
 $ qsub -A OPEN-0-0 -q qnvidia -l select=10:ncpus=16 ./myjob
 ```
 
-In this example, we allocate 10 nvidia accelerated nodes, 16 cores per node, for  24 hours. We allocate these resources via the qnvidia queue. Jobscript myjob will be executed on the first node in the allocation.
+In this example, we allocate 10 nvidia accelerated nodes, 16 cores per node, for 24 hours. We allocate these resources via the qnvidia queue. Jobscript myjob will be executed on the first node in the allocation.
 
 ```bash
 $ qsub -A OPEN-0-0 -q qfree -l select=10:ncpus=16 ./myjob
 ```
 
-In this example, we allocate 10  nodes, 16 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. Jobscript myjob will be executed on the first node in the allocation.
+In this example, we allocate 10 nodes, 16 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. Jobscript myjob will be executed on the first node in the allocation.
 
 All qsub options may be [saved directly into the jobscript](job-submission-and-execution/#PBSsaved). In such a case, no options to qsub are needed.
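+
+A minimal sketch of such a jobscript header, carrying its qsub options as #PBS directives, is shown below; the project ID, queue, node count and walltime are placeholders:
+
+```bash
+#!/bin/bash
+#PBS -A OPEN-0-0
+#PBS -q qprod
+#PBS -l select=4:ncpus=16,walltime=03:00:00
+#PBS -N MyJob
+
+# the actual work of the jobscript follows
+cd $PBS_O_WORKDIR
+```
+
+With such a header in place, the job may be submitted simply as `qsub ./myjob`.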
 
@@ -126,7 +126,7 @@ In the following example, we select an allocation for benchmarking a very specia
     -N Benchmark ./mybenchmark
 ```
 
-The MPI processes will be distributed differently on the nodes connected to the two switches. On the isw10 nodes, we will run 1 MPI process per node 16 threads per process, on isw20  nodes we will run 16 plain MPI processes.
+The MPI processes will be distributed differently on the nodes connected to the two switches. On the isw10 nodes, we will run 1 MPI process per node with 16 threads per process; on the isw20 nodes, we will run 16 plain MPI processes.
 
 Although this example is somewhat artificial, it demonstrates the flexibility of the qsub command options.
 
@@ -148,12 +148,12 @@ Example:
 $ qstat -a
 
 srv11:
-                                                            Req'd  Req'd   Elap
-Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
+                                                            Req'd Req'd   Elap
+Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time S Time
 --------------- -------- --  |---|---| ------ --- --- ------ ----- - -----
-16287.srv11     user1    qlong    job1         6183   4  64    --  144:0 R 38:25
-16468.srv11     user1    qlong    job2         8060   4  64    --  144:0 R 17:44
-16547.srv11     user2    qprod    job3x       13516   2  32    --  48:00 R 00:58
+16287.srv11     user1    qlong    job1         6183   4 64    --  144:0 R 38:25
+16468.srv11     user1    qlong    job2         8060   4 64    --  144:0 R 17:44
+16547.srv11     user2    qprod    job3x       13516   2 32    --  48:00 R 00:58
 ```
 
 In this example user1 and user2 are running jobs named job1, job2 and job3x. The jobs job1 and job2 are using 4 nodes, 16 cores per node each. The job1 already runs for 38 hours and 25 minutes, job2 for 17 hours 44 minutes. The job1 already consumed `64 x 38.41 = 2458.6` core hours. The job3x already consumed `0.96 x 32 = 30.93` core hours. These consumed core hours will be accounted on the respective project accounts, regardless of whether the allocated cores were actually used for computations.
@@ -251,10 +251,10 @@ $ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob
 $ qstat -n -u username
 
 srv11:
-                                                            Req'd  Req'd   Elap
-Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
+                                                            Req'd Req'd   Elap
+Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time S Time
 --------------- -------- --  |---|---| ------ --- --- ------ ----- - -----
-15209.srv11     username qexp     Name0        5530   4  64    --  01:00 R 00:00
+15209.srv11     username qexp     Name0        5530   4 64    --  01:00 R 00:00
    cn17/0*16+cn108/0*16+cn109/0*16+cn110/0*16
 ```
 
diff --git a/docs.it4i/anselm-cluster-documentation/network.md b/docs.it4i/anselm-cluster-documentation/network.md
index a682f44ff119881d0f0e1ad1695afdd8046b5ec0..a2af06f97a85472d327eeffc4a743d5eb70d6bb1 100644
--- a/docs.it4i/anselm-cluster-documentation/network.md
+++ b/docs.it4i/anselm-cluster-documentation/network.md
@@ -22,10 +22,10 @@ The compute nodes may be accessed via the regular Gigabit Ethernet network inter
 ```bash
 $ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob
 $ qstat -n -u username
-                                                            Req'd  Req'd   Elap
-Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
+                                                            Req'd Req'd   Elap
+Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time S Time
 --------------- -------- --  |---|---| ------ --- --- ------ ----- - -----
-15209.srv11     username qexp     Name0        5530   4  64    --  01:00 R 00:00
+15209.srv11     username qexp     Name0        5530   4 64    --  01:00 R 00:00
    cn17/0*16+cn108/0*16+cn109/0*16+cn110/0*16
 
 $ ssh 10.2.1.110
diff --git a/docs.it4i/anselm-cluster-documentation/prace.md b/docs.it4i/anselm-cluster-documentation/prace.md
index 1754d8e28202c6c553e9607846f4ab664a600bbd..de9888a67426a519304905b56f65936ce76663ef 100644
--- a/docs.it4i/anselm-cluster-documentation/prace.md
+++ b/docs.it4i/anselm-cluster-documentation/prace.md
@@ -28,11 +28,11 @@ The user will need a valid certificate and to be present in the PRACE LDAP (plea
 
 Most of the information needed by PRACE users accessing the Anselm TIER-1 system can be found here:
 
-*   [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)
-*   [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)
-*   [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh)
-*   [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details)
-*   [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer)
+* [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)
+* [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)
+* [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh)
+* [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details)
+* [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer)
 
 Before you start to use any of the services don't forget to create a proxy certificate from your certificate:
 
@@ -48,7 +48,7 @@ To check whether your proxy certificate is still valid (by default it's valid 12
 
 To access Anselm cluster, two login nodes running GSI SSH service are available. The service is available from public Internet as well as from the internal PRACE network (accessible only from other PRACE partners).
 
-**Access from PRACE network:**
+#### Access from PRACE network:
 
 It is recommended to use the single DNS name anselm-prace.it4i.cz which is distributed between the two login nodes. If needed, user can login directly to one of the login nodes. The addresses are:
 
@@ -68,7 +68,7 @@ When logging from other PRACE system, the prace_service script can be used:
     $ gsissh `prace_service -i -s anselm`
 ```
 
-**Access from public Internet:**
+#### Access from public Internet:
 
 It is recommended to use the single DNS name anselm.it4i.cz which is distributed between the two login nodes. If needed, user can login directly to one of the login nodes. The addresses are:
 
@@ -122,7 +122,7 @@ Apart from the standard mechanisms, for PRACE users to transfer data to/from Ans
 
 There's one control server and three backend servers for striping and/or backup in case one of them would fail.
 
-**Access from PRACE network:**
+### Access from PRACE network
 
 | Login address                | Port | Node role                   |
 | ---------------------------- | ---- | --------------------------- |
@@ -137,7 +137,7 @@ Copy files **to** Anselm by running the following commands on your local machine
     $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp-prace.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
 ```
 
-Or by using  prace_service script:
+Or by using the prace_service script:
 
 ```bash
     $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -i -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
@@ -149,13 +149,13 @@ Copy files **from** Anselm:
     $ globus-url-copy gsiftp://gridftp-prace.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
 ```
 
-Or by using  prace_service script:
+Or by using the prace_service script:
 
 ```bash
     $ globus-url-copy gsiftp://`prace_service -i -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
 ```
 
-**Access from public Internet:**
+### Access from public Internet
 
 | Login address          | Port | Node role                   |
 | ---------------------- | ---- | --------------------------- |
@@ -170,7 +170,7 @@ Copy files **to** Anselm by running the following commands on your local machine
     $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
 ```
 
-Or by using  prace_service script:
+Or by using the prace_service script:
 
 ```bash
     $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -e -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
@@ -182,7 +182,7 @@ Copy files **from** Anselm:
     $ globus-url-copy gsiftp://gridftp.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
 ```
 
-Or by using  prace_service script:
+Or by using the prace_service script:
 
 ```bash
     $ globus-url-copy gsiftp://`prace_service -e -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
diff --git a/docs.it4i/anselm-cluster-documentation/remote-visualization.md b/docs.it4i/anselm-cluster-documentation/remote-visualization.md
index fc01d18790668c5a4eee13dbc0c8090f8cd3d89c..171c8e003216d4a6d3efe573e7bd9ef15d88070d 100644
--- a/docs.it4i/anselm-cluster-documentation/remote-visualization.md
+++ b/docs.it4i/anselm-cluster-documentation/remote-visualization.md
@@ -98,7 +98,7 @@ $ ssh login2.anselm.it4i.cz -L 5901:localhost:5901
 ```
 
-If you use Windows and Putty, please refer to port forwarding setup  in the documentation:
+If you use Windows and Putty, please refer to the port forwarding setup in the documentation:
 [x-window-and-vnc#section-12](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/)
 
 #### 7. If You Don't Have Turbo VNC Installed on Your Workstation
diff --git a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution.md b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution.md
index 37a8f71f127a81669472f9a5b7fa5df910193fde..b04a95ead56383feaf887c3121495c091d0d380a 100644
--- a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution.md
+++ b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution.md
@@ -6,11 +6,11 @@ To run a [job](../introduction/), [computational resources](../introduction/) fo
 
 The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. [The Fair-share](job-priority/) at Anselm ensures that individual users may consume approximately equal amount of resources per week. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. Following queues are available to Anselm users:
 
-*   **qexp**, the Express queue
-*   **qprod**, the Production queue
-*   **qlong**, the Long queue, regula
-*   **qnvidia**, **qmic**, **qfat**, the Dedicated queues
-*   **qfree**, the Free resource utilization queue
+* **qexp**, the Express queue
+* **qprod**, the Production queue
+* **qlong**, the Long queue
+* **qnvidia**, **qmic**, **qfat**, the Dedicated queues
+* **qfree**, the Free resource utilization queue
 
 !!! note
     Check the queue status at <https://extranet.it4i.cz/anselm/>
diff --git a/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md b/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
index ba4dde0614f2159082eaf6983a867af9b66d4ab1..16cb7510d63075d413a19a9a9702ebbf23a4fb78 100644
--- a/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
+++ b/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
@@ -20,11 +20,11 @@ The resources are allocated to the job in a fair-share fashion, subject to const
 
 **The qexp queue is equipped with nodes that do not all have the very same CPU clock speed.** Should you need the very same CPU speed, you have to select the proper nodes during the PBS job submission.
 
-*   **qexp**, the Express queue: This queue is dedicated for testing and running very small jobs. It is not required to specify a project to enter the qexp. There are 2 nodes always reserved for this queue (w/o accelerator), maximum 8 nodes are available via the qexp for a particular user, from a pool of nodes containing Nvidia accelerated nodes (cn181-203), MIC accelerated nodes (cn204-207) and Fat nodes with 512GB RAM (cn208-209). This enables to test and tune also accelerated code or code with higher RAM requirements. The nodes may be allocated on per core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
-*   **qprod**, the Production queue: This queue is intended for normal production runs. It is required that active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, except the reserved ones. 178 nodes without accelerator are included. Full nodes, 16 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
-*   **qlong**, the Long queue: This queue is intended for long production runs. It is required that active project with nonzero remaining resources is specified to enter the qlong. Only 60 nodes without acceleration may be accessed via the qlong queue. Full nodes, 16 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times of the standard qprod time - 3 x 48 h).
-*   **qnvidia**, qmic, qfat, the Dedicated queues: The queue qnvidia is dedicated to access the Nvidia accelerated nodes, the qmic to access MIC nodes and qfat the Fat nodes. It is required that active project with nonzero remaining resources is specified to enter these queues. 23 nvidia, 4 mic and 2 fat nodes are included. Full nodes, 16 cores per node are allocated. The queues run with very high priority, the jobs will be scheduled before the jobs coming from the qexp queue. An PI needs explicitly ask [support](https://support.it4i.cz/rt/) for authorization to enter the dedicated queues for all users associated to her/his Project.
-*   **qfree**, The Free resource queue: The queue qfree is intended for utilization of free resources, after a Project exhausted all its allocated computational resources (Does not apply to DD projects by default. DD projects have to request for persmission on qfree after exhaustion of computational resources.). It is required that active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 16 cores per node are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
+* **qexp**, the Express queue: This queue is dedicated to testing and running very small jobs. It is not required to specify a project to enter the qexp. There are 2 nodes always reserved for this queue (w/o accelerator), and a maximum of 8 nodes are available via the qexp for a particular user, from a pool of nodes containing Nvidia accelerated nodes (cn181-203), MIC accelerated nodes (cn204-207) and Fat nodes with 512 GB RAM (cn208-209). This also allows testing and tuning of accelerated code or code with higher RAM requirements. The nodes may be allocated on a per core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
+* **qprod**, the Production queue: This queue is intended for normal production runs. It is required that an active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, except the reserved ones. 178 nodes without accelerator are included. Full nodes, 16 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
+* **qlong**, the Long queue: This queue is intended for long production runs. It is required that an active project with nonzero remaining resources is specified to enter the qlong. Only 60 nodes without acceleration may be accessed via the qlong queue. Full nodes, 16 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times the standard qprod time - 3 x 48 h).
+* **qnvidia**, **qmic**, **qfat**, the Dedicated queues: The queue qnvidia is dedicated to accessing the Nvidia accelerated nodes, the qmic to accessing the MIC nodes and qfat the Fat nodes. It is required that an active project with nonzero remaining resources is specified to enter these queues. 23 Nvidia, 4 MIC and 2 fat nodes are included. Full nodes, 16 cores per node, are allocated. The queues run with very high priority, and the jobs will be scheduled before the jobs coming from the qexp queue. A PI needs to explicitly ask [support](https://support.it4i.cz/rt/) for authorization to enter the dedicated queues for all users associated with her/his Project.
+* **qfree**, the Free resource queue: The queue qfree is intended for utilization of free resources, after a Project has exhausted all of its allocated computational resources (this does not apply to DD projects by default; DD projects have to request permission to use qfree after exhaustion of their computational resources). It is required that an active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 16 cores per node, are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours. The current state of all queues can also be listed from the command line, as shown below.
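+
+The list of queues and their current state may be printed directly on the cluster using a standard PBS command:
+
+```bash
+$ qstat -q
+```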
 
 ### Notes
 
@@ -59,9 +59,9 @@ Options:
   --get-node-ncpu-chart
                         Print chart of allocated ncpus per node
   --summary             Print summary
-  --get-server-details  Print server
+  --get-server-details Print server
   --get-queues          Print queues
-  --get-queues-details  Print queues details
+  --get-queues-details Print queues details
   --get-reservations    Print reservations
   --get-reservations-details
                         Print reservations details
@@ -92,7 +92,7 @@ Options:
   --get-user-ncpus      Print number of allocated ncpus per user
   --get-qlist-nodes     Print qlist nodes
   --get-qlist-nodeset   Print qlist nodeset
-  --get-ibswitch-nodes  Print ibswitch nodes
+  --get-ibswitch-nodes Print ibswitch nodes
   --get-ibswitch-nodeset
                         Print ibswitch nodeset
   --state=STATE         Only for given job state
diff --git a/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md b/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md
index 38fbda64678ccf7c7a406163ce8f27be753ac67a..0256b6990d59cbf43a654d3d1a10019beb969db6 100644
--- a/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md
+++ b/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md
@@ -47,7 +47,7 @@ After logging in, you will see the command prompt:
 
                         http://www.it4i.cz/?lang=en
 
-Last login: Tue Jul  9 15:57:38 2013 from your-host.example.com
+Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com
 [username@login2.anselm ~]$
 ```
 
@@ -194,13 +194,13 @@ Once the proxy server is running, establish ssh port forwarding from Anselm to t
 local $ ssh -R 6000:localhost:1080 anselm.it4i.cz
 ```
 
+Now, configure the application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](#port-forwarding-from-compute-nodes) as well.
+Now, configure the applications proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](#port-forwarding-from-compute-nodes) as well.
 
 ## Graphical User Interface
 
-*   The [X Window system](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) is a principal way to get GUI access to the clusters.
-*   The [Virtual Network Computing](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer).
+* The [X Window system](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) is a principal way to get GUI access to the clusters.
+* The [Virtual Network Computing](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer).
 
 ## VPN Access
 
-*   Access to IT4Innovations internal resources via [VPN](../get-started-with-it4innovations/accessing-the-clusters/vpn-access/).
+* Access to IT4Innovations internal resources via [VPN](../get-started-with-it4innovations/accessing-the-clusters/vpn-access/).
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md
index 34069a554e8108df81a8af468f360d42f7b1c6a5..392e5dbf04414e368497dedeb3169cdfef7a879a 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md
@@ -34,7 +34,7 @@ procs_per_host=1
 hl=""
 for host in `cat $PBS_NODEFILE`
 do
  if [ "$hl" = "" ]
  then hl="$host:$procs_per_host"
  else hl="${hl}:$host:$procs_per_host"
  fi
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md
index 89fedb100bd003835dd0f8efa8f21431790c5f88..ff1f7cdd21a26283fd7522fc2cc286f00bde73a7 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md
@@ -62,11 +62,11 @@ The appropriate dimension of the problem has to be set by parameter (2d/3d).
 fluent solver_version [FLUENT_options] -i journal_file -pbs
 ```
 
-This syntax will start the ANSYS FLUENT job under PBS Professional using the  qsub command in a batch manner. When resources are available, PBS Professional will start the job and return a job ID, usually in the form of _job_ID.hostname_. This job ID can then be used to query, control, or stop the job using standard PBS Professional commands, such as  qstat or qdel. The job will be run out of the current working directory, and all output will be written to the file fluent.o _job_ID_.
+This syntax will start the ANSYS FLUENT job under PBS Professional using the qsub command in a batch manner. When resources are available, PBS Professional will start the job and return a job ID, usually in the form of _job_ID.hostname_. This job ID can then be used to query, control, or stop the job using standard PBS Professional commands, such as qstat or qdel. The job will be run out of the current working directory, and all output will be written to the file fluent.o _job_ID_.
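+
+For illustration (the journal file name and the returned job ID below are made up):
+
+```bash
+$ fluent 3d -i fluent.jou -pbs
+$ qstat 112233.dm2
+$ qdel 112233.dm2
+```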
 
 ## Running Fluent via User's Config File
 
-The sample script uses a configuration file called pbs_fluent.conf  if no command line arguments are present. This configuration file should be present in the directory from which the jobs are submitted (which is also the directory in which the jobs are executed). The following is an example of what the content of  pbs_fluent.conf can be:
+The sample script uses a configuration file called pbs_fluent.conf if no command line arguments are present. This configuration file should be present in the directory from which the jobs are submitted (which is also the directory in which the jobs are executed). The following is an example of what the content of pbs_fluent.conf can be:
 
 ```bash
 input="example_small.flin"
@@ -100,7 +100,7 @@ To run ANSYS Fluent in batch mode with user's config file you can utilize/modify
  cd $PBS_O_WORKDIR
 
  #We assume that if they didn’t specify arguments then they should use the
  #config file if [ "xx${input}${case}${mpp}${fluent_args}zz" = "xxzz" ]; then
    if [ -f pbs_fluent.conf ]; then
      . pbs_fluent.conf
    else
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md
index e033709fd65858003bbfcb8bd931b74e455ee499..18a0193bcbe0b49e2a6c30f5106fbef0c658e069 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md
@@ -1,6 +1,6 @@
 # ANSYS LS-DYNA
 
-**[ANSYSLS-DYNA](http://www.ansys.com/products/structures/ansys-ls-dyna)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern  graphical user environment.
+**[ANSYS LS-DYNA](http://www.ansys.com/products/structures/ansys-ls-dyna)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able to take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern graphical user environment.
 
 To run ANSYS LS-DYNA in batch mode you can utilize/modify the default ansysdyna.pbs script and execute it via the qsub command.
 
@@ -39,7 +39,7 @@ procs_per_host=1
 hl=""
 for host in `cat $PBS_NODEFILE`
 do
  if [ "$hl" = "" ]
  then hl="$host:$procs_per_host"
  else hl="${hl}:$host:$procs_per_host"
  fi
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
index d99fe416ada838585b509d0de20998286fa2bb32..cdaac19ff664acbcd79c8c234ff30ff54cf06cad 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
@@ -35,7 +35,7 @@ procs_per_host=1
 hl=""
 for host in `cat $PBS_NODEFILE`
 do
  if [ "$hl" = "" ]
  then hl="$host:$procs_per_host"
  else hl="${hl}:$host:$procs_per_host"
  fi
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md
index 1ab4477c2f3c477695b072bfec6340c371d37f12..16be5639d93fc6d14baaff251a5b09a1d0e31b62 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md
@@ -2,7 +2,7 @@
 
 **[SVS FEM](http://www.svsfem.cz/)**, as the **[ANSYS Channel partner](http://www.ansys.com/)** for the Czech Republic, provided all ANSYS licenses for the ANSELM cluster and provides support for all ANSYS Products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) to IT staff and ANSYS users. If you run into a problem with ANSYS functionality, please contact [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM)
 
-Anselm provides commercial as well as academic variants. Academic variants are distinguished by "**Academic...**" word in the name of  license or by two letter preposition "**aa\_**" in the license feature name. Change of license is realized on command line respectively directly in user's PBS file (see individual products). [ More  about licensing here](ansys/licensing/)
+Anselm provides commercial as well as academic variants. Academic variants are distinguished by the word "**Academic...**" in the license name or by the two letter prefix "**aa\_**" in the license feature name. The license is selected on the command line or directly in the user's PBS file (see the individual products). [More about licensing here](ansys/licensing/)
 
 To load the latest version of any ANSYS product (Mechanical, Fluent, CFX, MAPDL,...) load the module:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md b/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md
index e8827e17c777055dbf9d916d5fd55174be129bee..9b08cb6ec8d2137e936f391eae4af97789d4f229 100644
--- a/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md
+++ b/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md
@@ -33,7 +33,7 @@ Compilation parameters are default:
 Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details.
 
 !!! note
-    The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend to use MPI parallelization only. This can be achieved by passing option  mpiprocs=16:ompthreads=1 to PBS.
+    The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend to use MPI parallelization only. This can be achieved by passing option mpiprocs=16:ompthreads=1 to PBS.
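+
+A matching resource request could look like the following; the project ID, queue and jobscript name are placeholders:
+
+```bash
+$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16:mpiprocs=16:ompthreads=1 ./molpro_job.sh
+```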
 
 You are advised to use the -d option to point to a directory in [SCRATCH file system](../../storage/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch file system.
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md b/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
index 569f20771197f93a74559037809e38bb606449f0..9f09fe794a121ddc173d3a037fe0e6e3e7101163 100644
--- a/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
+++ b/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
@@ -1,7 +1,5 @@
 # NWChem
 
-**High-Performance Computational Chemistry**
-
 ## Introduction
 
 NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters.
@@ -12,10 +10,10 @@ NWChem aims to provide its users with computational chemistry tools that are sca
 
 The following versions are currently installed:
 
-*   6.1.1, not recommended, problems have been observed with this version
-*   6.3-rev2-patch1, current release with QMD patch applied. Compiled with Intel compilers, MKL and Intel MPI
-*   6.3-rev2-patch1-openmpi, same as above, but compiled with OpenMPI and NWChem provided BLAS instead of MKL. This version is expected to be slower
-*   6.3-rev2-patch1-venus, this version contains only libraries for VENUS interface linking. Does not provide standalone NWChem executable
+* 6.1.1, not recommended, problems have been observed with this version
+* 6.3-rev2-patch1, current release with QMD patch applied. Compiled with Intel compilers, MKL and Intel MPI
+* 6.3-rev2-patch1-openmpi, same as above, but compiled with OpenMPI and NWChem provided BLAS instead of MKL. This version is expected to be slower
+* 6.3-rev2-patch1-venus, this version contains only libraries for VENUS interface linking. Does not provide standalone NWChem executable
 
 For a current list of installed versions, execute:
 
@@ -40,5 +38,5 @@ NWChem is compiled for parallel MPI execution. Normal procedure for MPI jobs app
 
 Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and in the input file set the following directives :
 
-*   MEMORY : controls the amount of memory NWChem will use
-*   SCRATCH_DIR : set this to a directory in [SCRATCH file system](../../storage/storage/#scratch) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g.. "scf direct"
+* MEMORY : controls the amount of memory NWChem will use
+* SCRATCH_DIR : set this to a directory in [SCRATCH file system](../../storage/storage/#scratch) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct" (see the sketch below)
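+
+As a purely illustrative sketch, the two directives might appear near the top of an input file like the one below; the memory value, scratch path, geometry and basis are assumptions, tune them to your calculation:
+
+```bash
+$ cat > h2o.nw << EOF
+start h2o
+memory total 2000 mb
+scratch_dir /scratch/$USER/$PBS_JOBID
+geometry
+  O 0.000  0.000  0.118
+  H 0.000  0.757 -0.471
+  H 0.000 -0.757 -0.471
+end
+basis
+  * library 6-31g
+end
+scf
+  direct
+end
+task scf energy
+EOF
+```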
diff --git a/docs.it4i/anselm-cluster-documentation/software/compilers.md b/docs.it4i/anselm-cluster-documentation/software/compilers.md
index deb6c1122ee897e1eeb2a7df4a44158ac53ace58..d1e59f29fd5c7862e8ad28780c1355ba837f8da1 100644
--- a/docs.it4i/anselm-cluster-documentation/software/compilers.md
+++ b/docs.it4i/anselm-cluster-documentation/software/compilers.md
@@ -4,11 +4,11 @@
 
 Currently there are several compilers for different programming languages available on the Anselm cluster:
 
-*   C/C++
-*   Fortran 77/90/95
-*   Unified Parallel C
-*   Java
-*   NVIDIA CUDA
+* C/C++
+* Fortran 77/90/95
+* Unified Parallel C
+* Java
+* NVIDIA CUDA
 
 The C/C++ and Fortran compilers are divided into two main groups GNU and Intel.
 
@@ -18,7 +18,7 @@ For information about the usage of Intel Compilers and other Intel products, ple
 
 ## GNU C/C++ and Fortran Compilers
 
-For compatibility reasons there are still available the original (old 4.4.6-4) versions of GNU compilers as part of the OS. These are accessible in the search path  by default.
+For compatibility reasons, the original (old 4.4.6-4) versions of the GNU compilers are still available as part of the OS. These are accessible in the search path by default.
 
 It is strongly recommended to use the up to date version (4.8.1) which comes with the module gcc:
 
@@ -45,8 +45,8 @@ For more information about the possibilities of the compilers, please see the ma
 
  UPC is supported by two compiler/runtime implementations:
 
-*   GNU - SMP/multi-threading support only
-*   Berkley - multi-node support as well as SMP/multi-threading support
+* GNU - SMP/multi-threading support only
+* Berkeley - multi-node support as well as SMP/multi-threading support
 
 ### GNU UPC Compiler
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md b/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
index 5dd2b59ef6413cd27c2b47f5740c9affbadf995b..457c1aa8fc5d34a4429d5684f977da70d7683b4a 100644
--- a/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
+++ b/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
@@ -1,4 +1,4 @@
-# COMSOL Multiphysics	
+# COMSOL Multiphysics
 
 ## Introduction
 
@@ -6,11 +6,11 @@
 standard engineering problems COMSOL provides add-on products such as electrical, mechanical, fluid flow, and chemical
 applications.
 
-*   [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
-*   [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
-*   [CFD Module](http://www.comsol.com/cfd-module),
-*   [Acoustics Module](http://www.comsol.com/acoustics-module),
-*   and [many others](http://www.comsol.com/products)
+* [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
+* [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
+* [CFD Module](http://www.comsol.com/cfd-module),
+* [Acoustics Module](http://www.comsol.com/acoustics-module),
+* and [many others](http://www.comsol.com/products)
 
 COMSOL also allows an interface support for equation-based modelling of partial differential equations.
 
@@ -18,8 +18,8 @@ COMSOL also allows an interface support for equation-based modelling of partial
 
 On the Anselm cluster COMSOL is available in the latest stable version. There are two variants of the release:
 
-*   **Non commercial** or so called **EDU variant**, which can be used for research and educational purposes.
-*   **Commercial** or so called **COM variant**, which can used also for commercial activities. **COM variant** has only subset of features compared to the **EDU variant** available. More about licensing will be posted here soon.
+* **Non commercial** or so called **EDU variant**, which can be used for research and educational purposes.
+* **Commercial** or so called **COM variant**, which can be used also for commercial activities. **COM variant** has only a subset of features compared to the **EDU variant**. More about licensing will be posted here soon.
 
 To load the of COMSOL load the module
 
@@ -49,7 +49,7 @@ To run COMSOL in batch mode, without the COMSOL Desktop GUI environment, user ca
 #PBS -l select=3:ncpus=16
 #PBS -q qprod
 #PBS -N JOB_NAME
-#PBS -A  PROJECT_ID
+#PBS -A PROJECT_ID
 
 cd /scratch/$USER/ || exit
 
@@ -95,7 +95,7 @@ To run LiveLink for MATLAB in batch mode with (comsol_matlab.pbs) job script you
 #PBS -l select=3:ncpus=16
 #PBS -q qprod
 #PBS -N JOB_NAME
-#PBS -A  PROJECT_ID
+#PBS -A PROJECT_ID
 
 cd /scratch/$USER || exit
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
index bd5ece9bd34415d42838ce75defb300695738338..07d41915d3fa3e202a151abae32023c81c8cc125 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
@@ -10,13 +10,13 @@ Allinea MAP is a profiler for C/C++/Fortran HPC codes. It is designed for profil
 
 On Anselm users can debug OpenMP or MPI code that runs up to 64 parallel processes. In case of debugging GPU or Xeon Phi accelerated codes the limit is 8 accelerators. These limitation means that:
 
-*   1 user can debug up 64 processes, or
-*   32 users can debug 2 processes, etc.
+* 1 user can debug up 64 processes, or
+* 32 users can debug 2 processes, etc.
 
 In case of debugging on accelerators:
 
-*   1 user can debug on up to 8 accelerators, or
-*   8 users can debug on single accelerator.
+* 1 user can debug on up to 8 accelerators, or
+* 8 users can debug on single accelerator.
 
 ## Compiling Code to Run With DDT
 
@@ -48,12 +48,12 @@ $ mpif90 -g -O0 -o test_debug test.f
 Before debugging, you need to compile your code with theses flags:
 
 !!! note
-    - **g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
-    - **O0** : Suppress all optimizations.
+    \* **g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
+    \* **O0** : Suppress all optimizations.
 
 ## Starting a Job With DDT
 
-Be sure to log in with an  X window forwarding enabled. This could mean using the -X in the ssh:
+Be sure to log in with X window forwarding enabled. This could mean using the -X option with ssh:
 
 ```bash
     $ ssh -X username@anselm.it4i.cz
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md
index 799b10bad52639bf995ed19ae609c1ef2e42c503..a7f88955e78159f5800a37e603f91fa09e3ccdbe 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md
@@ -4,15 +4,15 @@
 
 CUBE is a graphical performance report explorer for displaying data from Score-P and Scalasca (and other compatible tools). The name comes from the fact that it displays performance data in a three-dimensions :
 
-*   **performance metric**, where a number of metrics are available, such as communication time or cache misses,
-*   **call path**, which contains the call tree of your program
-*   **system resource**, which contains system's nodes, processes and threads, depending on the parallel programming model.
+* **performance metric**, where a number of metrics are available, such as communication time or cache misses,
+* **call path**, which contains the call tree of your program
+* **system resource**, which contains system's nodes, processes and threads, depending on the parallel programming model.
 
 Each dimension is organized in a tree, for example the time performance metric is divided into Execution time and Overhead time, call path dimension is organized by files and routines in your source code etc.
 
 ![](../../../img/Snmekobrazovky20141204v12.56.36.png)
 
-_Figure 1. Screenshot of CUBE displaying data from Scalasca._
+*Figure 1. Screenshot of CUBE displaying data from Scalasca.*
 
 Each node in the tree is colored by severity (the color scheme is displayed at the bottom of the window, ranging from the least severe blue to the most severe being red). For example in Figure 1, we can see that most of the point-to-point MPI communication happens in routine exch_qbc, colored red.
 
@@ -20,8 +20,8 @@ Each node in the tree is colored by severity (the color scheme is displayed at t
 
 Currently, there are two versions of CUBE 4.2.3 available as [modules](../../environment-and-modules/):
 
-*   cube/4.2.3-gcc, compiled with GCC
-*   cube/4.2.3-icc, compiled with Intel compiler
+* cube/4.2.3-gcc, compiled with GCC
+* cube/4.2.3-icc, compiled with Intel compiler
 
 ## Usage
 
@@ -30,7 +30,7 @@ CUBE is a graphical application. Refer to Graphical User Interface documentation
 !!! note
     Analyzing large data sets can consume large amount of CPU and RAM. Do not perform large analysis on login nodes.
 
-After loading the appropriate module, simply launch cube command, or alternatively you can use  scalasca -examine command to launch the GUI. Note that for Scalasca datasets, if you do not analyze the data with scalasca -examine before to opening them with CUBE, not all performance data will be available.
+After loading the appropriate module, simply launch the cube command, or alternatively you can use the scalasca -examine command to launch the GUI. Note that for Scalasca datasets, if you do not analyze the data with scalasca -examine before opening them with CUBE, not all performance data will be available.
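+
+A short example of both ways of opening an experiment; the module is one of the two listed above, the experiment directory name is hypothetical, and scalasca -examine additionally assumes a Scalasca module is loaded:
+
+```bash
+    $ module load cube/4.2.3-gcc
+    $ cube scorep_myapp_16_sum/profile.cubex
+    $ scalasca -examine scorep_myapp_16_sum
+```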
 
 References
 1\.  <http://www.scalasca.org/software/cube-4.x/download.html>
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md
index d9b878254aab451e1ec5b8c9ffe7efb1a3dca363..f9e8e88dcaf2186ea59519f7a7b31305fd1287d6 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md
@@ -52,12 +52,12 @@ Sample output:
     --                   System Read Throughput(MB/s):      4.93                  --
     --                  System Write Throughput(MB/s):      3.43                  --
     --                 System Memory Throughput(MB/s):      8.35                  --
-    ---------------------------------------||--------------------------------------- 
+    ---------------------------------------||---------------------------------------
 ```
 
 ### Pcm-Msr
 
-Command  pcm-msr.x can be used to read/write model specific registers of the CPU.
+Command pcm-msr.x can be used to read/write model specific registers of the CPU.
 
 ### Pcm-Numa
 
@@ -276,6 +276,6 @@ Sample output:
 
 ## References
 
-1.  <https://software.intel.com/en-us/articles/intel-performance-counter-monitor-a-better-way-to-measure-cpu-utilization>
-2.  <https://software.intel.com/sites/default/files/m/3/2/2/xeon-e5-2600-uncore-guide.pdf> Intel® Xeon® Processor E5-2600 Product Family Uncore Performance Monitoring Guide.
-3.  <http://intel-pcm-api-documentation.github.io/classPCM.html> API Documentation
+1. <https://software.intel.com/en-us/articles/intel-performance-counter-monitor-a-better-way-to-measure-cpu-utilization>
+1. <https://software.intel.com/sites/default/files/m/3/2/2/xeon-e5-2600-uncore-guide.pdf> Intel® Xeon® Processor E5-2600 Product Family Uncore Performance Monitoring Guide.
+1. <http://intel-pcm-api-documentation.github.io/classPCM.html> API Documentation
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
index 3c3f1a8af340e402b2e8797a8fbb802ae2b5257d..e9921046dd13f4b3b3b345f2666b426f2bd5ca9c 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
@@ -4,11 +4,11 @@
 
 Intel VTune Amplifier, part of Intel Parallel studio, is a GUI profiling tool designed for Intel processors. It offers a graphical performance analysis of single core and multithreaded applications. A highlight of the features:
 
-*   Hotspot analysis
-*   Locks and waits analysis
-*   Low level specific counters, such as branch analysis and memory
+* Hotspot analysis
+* Locks and waits analysis
+* Low level specific counters, such as branch analysis and memory
     bandwidth
-*   Power usage analysis - frequency and sleep states.
+* Power usage analysis - frequency and sleep states.
 
 ![screenshot](../../../img/vtune-amplifier.png)
 
@@ -56,7 +56,7 @@ Application:  ssh
 
 Application parameters:  mic0 source ~/.profile && /path/to/your/bin
 
-Note that we include  source ~/.profile in the command to setup environment paths [as described here](../intel-xeon-phi/).
+Note that we include source ~/.profile in the command to set up environment paths [as described here](../intel-xeon-phi/).
 
 !!! note
     If the analysis is interrupted or aborted, further analysis on the card might be impossible and you will get errors like "ERROR connecting to MIC card". In this case please contact our support to reboot the MIC card.
@@ -70,4 +70,4 @@ You may also use remote analysis to collect data from the MIC and then analyze i
 
 ## References
 
-1.  <https://www.rcac.purdue.edu/tutorials/phi/PerformanceTuningXeonPhi-Tullos.pdf> Performance Tuning for Intel® Xeon Phi™ Coprocessors
+1. <https://www.rcac.purdue.edu/tutorials/phi/PerformanceTuningXeonPhi-Tullos.pdf> Performance Tuning for Intel® Xeon Phi™ Coprocessors
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md
index ee7d63fe69c9440d50da4200bdca431687ab9cec..87595f5ef931937f6e4af6a8e3e1d94684194cfa 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md
@@ -20,7 +20,7 @@ This will load the default version. Execute module avail papi for a list of inst
 
 ## Utilities
 
-The  bin directory of PAPI (which is automatically added to  $PATH upon loading the module) contains various utilites.
+The bin directory of PAPI (which is automatically added to $PATH upon loading the module) contains various utilities.
 
 ### Papi_avail
 
@@ -76,15 +76,15 @@ Prints information about the memory architecture of the current CPU.
 
 PAPI provides two kinds of events:
 
-*   **Preset events** is a set of predefined common CPU events, standardized across platforms.
-*   **Native events **is a set of all events supported by the current hardware. This is a larger set of features than preset. For other components than CPU, only native events are usually available.
+* **Preset events** is a set of predefined common CPU events, standardized across platforms.
+* **Native events** is a set of all events supported by the current hardware. This is a larger set of features than preset. For other components than CPU, only native events are usually available.
 
 To use PAPI in your application, you need to link the appropriate include file.
 
-*   papi.h for C
-*   f77papi.h for Fortran 77
-*   f90papi.h for Fortran 90
-*   fpapi.h for Fortran with preprocessor
+* papi.h for C
+* f77papi.h for Fortran 77
+* f90papi.h for Fortran 90
+* fpapi.h for Fortran with preprocessor
 
 The include path is automatically added by papi module to $INCLUDE.
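+
+A possible compile line is sketched below; it assumes the papi module also exposes the library directory to the compiler, and the source file name is hypothetical, so verify the exact include and link settings against the module's environment:
+
+```bash
+    $ module load papi
+    $ icc -O2 my_papi_prog.c -o my_papi_prog -lpapi
+```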
 
@@ -124,7 +124,7 @@ The following example prints MFLOPS rate of a naive matrix-matrix multiplication
      /* Initialize the Matrix arrays */
      for ( i=0; i<SIZE*SIZE; i++ ){
      mresult[0][i] = 0.0;
-     matrixa[0][i] = matrixb[0][i] = rand()*(float)1.1; 
+     matrixa[0][i] = matrixb[0][i] = rand()*(float)1.1;
      }
 
      /* Setup PAPI library and begin collecting data from the counters */
@@ -232,6 +232,6 @@ To use PAPI in offload mode, you need to provide both host and MIC versions of P
 
 ## References
 
-1.  <http://icl.cs.utk.edu/papi/> Main project page
-2.  <http://icl.cs.utk.edu/projects/papi/wiki/Main_Page> Wiki
-3.  <http://icl.cs.utk.edu/papi/docs/> API Documentation
+1. <http://icl.cs.utk.edu/papi/> Main project page
+1. <http://icl.cs.utk.edu/projects/papi/wiki/Main_Page> Wiki
+1. <http://icl.cs.utk.edu/papi/docs/> API Documentation
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md
index fe807dc4994b7934dca927fd3e4ef0a14f6249a3..e14f18f9bfdb6f21acea338fabaff2add8343588 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md
@@ -10,16 +10,16 @@ Scalasca supports profiling of MPI, OpenMP and hybrid MPI+OpenMP applications.
 
 There are currently two versions of Scalasca 2.0 [modules](../../environment-and-modules/) installed on Anselm:
 
-*   scalasca2/2.0-gcc-openmpi, for usage with [GNU Compiler](../compilers/) and [OpenMPI](../mpi/Running_OpenMPI/),
-*   scalasca2/2.0-icc-impi, for usage with [Intel Compiler](../compilers.html) and [Intel MPI](../mpi/running-mpich2/).
+* scalasca2/2.0-gcc-openmpi, for usage with [GNU Compiler](../compilers/) and [OpenMPI](../mpi/Running_OpenMPI/),
+* scalasca2/2.0-icc-impi, for usage with [Intel Compiler](../compilers.html) and [Intel MPI](../mpi/running-mpich2/).
 
 ## Usage
 
 Profiling a parallel application with Scalasca consists of three steps:
 
-1.  Instrumentation, compiling the application such way, that the profiling data can be generated.
-2.  Runtime measurement, running the application with the Scalasca profiler to collect performance data.
-3.  Analysis of reports
+1. Instrumentation, compiling the application in such a way that the profiling data can be generated.
+1. Runtime measurement, running the application with the Scalasca profiler to collect performance data.
+1. Analysis of reports
 
 ### Instrumentation
 
@@ -39,8 +39,8 @@ An example :
 
 Some notable Scalasca options are:
 
-*   **-t Enable trace data collection. By default, only summary data are collected.**
-*   **-e &lt;directory> Specify a directory to save the collected data to. By default, Scalasca saves the data to a directory with prefix scorep\_, followed by name of the executable and launch configuration.**
+* **-t Enable trace data collection. By default, only summary data are collected.**
+* **-e &lt;directory> Specify a directory to save the collected data to. By default, Scalasca saves the data to a directory with prefix scorep\_, followed by name of the executable and launch configuration.**
 
 !!! note
     Scalasca can generate a huge amount of data, especially if tracing is enabled. Please consider saving the data to a [scratch directory](../../storage/storage/).
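+
+A hedged sketch combining the -t and -e options above with a scratch directory; the experiment directory name, process count and application name are illustrative:
+
+```bash
+    $ scalasca -analyze -t -e /scratch/$USER/scorep_myapp_trace mpirun -np 16 ./myapp
+```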
@@ -67,4 +67,4 @@ Refer to [CUBE documentation](cube/) on usage of the GUI viewer.
 
 ## References
 
-1.  <http://www.scalasca.org/>
+1. <http://www.scalasca.org/>
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md
index f0d0c33b8e48afa24e51d6540d53705dfa1e477a..215e8b75069b26d7b1b3508a64afb5fb3f7966c5 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md
@@ -10,16 +10,16 @@ Score-P can be used as an instrumentation tool for [Scalasca](scalasca/).
 
 There are currently two versions of Score-P version 1.2.6 [modules](../../environment-and-modules/) installed on Anselm :
 
-*   scorep/1.2.3-gcc-openmpi, for usage     with [GNU Compiler](../compilers/) and [OpenMPI](../mpi/Running_OpenMPI/)
-*   scorep/1.2.3-icc-impi, for usage with [Intel Compiler](../compilers.html)> and [Intel MPI](../mpi/running-mpich2/)>.
+* scorep/1.2.3-gcc-openmpi, for usage with [GNU Compiler](../compilers/) and [OpenMPI](../mpi/Running_OpenMPI/)
+* scorep/1.2.3-icc-impi, for usage with [Intel Compiler](../compilers.html) and [Intel MPI](../mpi/running-mpich2/).
 
 ## Instrumentation
 
 There are three ways to instrument your parallel applications in order to enable performance data collection:
 
-1.  Automated instrumentation using compiler
-2.  Manual instrumentation using API calls
-3.  Manual instrumentation using directives
+1. Automated instrumentation using compiler
+1. Manual instrumentation using API calls
+1. Manual instrumentation using directives
 
 ### Automated Instrumentation
 
@@ -34,14 +34,14 @@ $ mpif90 -o myapp foo.o bar.o
 with:
 
 ```bash
-$ scorep  mpif90 -c foo.f90
-$ scorep  mpif90 -c bar.f90
-$ scorep  mpif90 -o myapp foo.o bar.o
+$ scorep mpif90 -c foo.f90
+$ scorep mpif90 -c bar.f90
+$ scorep mpif90 -o myapp foo.o bar.o
 ```
 
-Usually your program is compiled using a Makefile or similar script, so it advisable to add the  scorep command to your definition of variables  CC, CXX, FCC etc.
+Usually your program is compiled using a Makefile or similar script, so it is advisable to add the scorep command to your definition of the variables CC, CXX, FCC etc.
 
-It is important that  scorep is prepended also to the linking command, in order to link with Score-P instrumentation libraries.
+It is important that scorep is prepended also to the linking command, in order to link with Score-P instrumentation libraries.
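+
+One way to do this without editing every rule is to override the compiler variables on the make command line; the variable names below are illustrative assumptions, use whatever your Makefile actually defines:
+
+```bash
+    $ make CC="scorep mpicc" CXX="scorep mpicxx" FC="scorep mpif90"
+```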
 
 ### Manual Instrumentation Using API Calls
 
@@ -78,7 +78,7 @@ Please refer to the [documentation for description of the API](https://silc.zih.
 
 ### Manual Instrumentation Using Directives
 
-This method uses POMP2 directives to mark regions to be instrumented. To use this method, use command  scorep --pomp.
+This method uses POMP2 directives to mark regions to be instrumented. To use this method, use command scorep --pomp.
 
 Example directives in C/C++ :
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md
index 2265a89b6e4b51024f36fcabb0b537426931ca60..b4f710675111efe35ea5779625ac53046bc2722b 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md
@@ -1,6 +1,6 @@
 # Total View
 
-\##TotalView is a GUI-based source code multi-process, multi-thread debugger.
+TotalView is a GUI-based source code multi-process, multi-thread debugger.
 
 ## License and Limitations for Anselm Users
 
@@ -20,7 +20,7 @@ You can check the status of the licenses here:
 
     # totalview
     # -------------------------------------------------
-    # FEATURE                       TOTAL   USED  AVAIL
+    # FEATURE                       TOTAL   USED AVAIL
     # -------------------------------------------------
     TotalView_Team                     64      0     64
     Replay                             64      0     64
@@ -58,8 +58,8 @@ Compile the code:
 Before debugging, you need to compile your code with theses flags:
 
 !!! note
-    - **-g** : Generates extra debugging information usable by GDB. **-g3** includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
-    - **-O0** : Suppress all optimizations.
+    \* **-g** : Generates extra debugging information usable by GDB. **-g3** includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
+    \* **-O0** : Suppress all optimizations.
 
 ## Starting a Job With TotalView
 
@@ -121,7 +121,7 @@ The source code of this function can be also found in
 ```
 
 !!! note
-    You can also add only following line to you ~/.tvdrc file instead of the entire function: 
+    You can also add only following line to you ~/.tvdrc file instead of the entire function:
     **source /apps/mpi/openmpi/intel/1.6.5/etc/openmpi-totalview.tcl**
 
 You need to do this step only once.
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md
index bfcfc9a86aeb60b88cf8a06ce45fd741bd34768d..2602fdbf24c9bdf16503740541ed81c536628b5a 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md
@@ -10,19 +10,19 @@ Valgind is an extremely useful tool for debugging memory errors such as [off-by-
 
 The main tools available in Valgrind are :
 
-*   **Memcheck**, the original, must used and default tool. Verifies memory access in you program and can detect use of unitialized memory, out of bounds memory access, memory leaks, double free, etc.
-*   **Massif**, a heap profiler.
-*   **Hellgrind** and **DRD** can detect race conditions in multi-threaded applications.
-*   **Cachegrind**, a cache profiler.
-*   **Callgrind**, a callgraph analyzer.
-*   For a full list and detailed documentation, please refer to the [official Valgrind documentation](http://valgrind.org/docs/).
+* **Memcheck**, the original, most used and default tool. Verifies memory access in your program and can detect use of uninitialized memory, out of bounds memory access, memory leaks, double free, etc.
+* **Massif**, a heap profiler.
+* **Helgrind** and **DRD** can detect race conditions in multi-threaded applications.
+* **Cachegrind**, a cache profiler.
+* **Callgrind**, a callgraph analyzer.
+* For a full list and detailed documentation, please refer to the [official Valgrind documentation](http://valgrind.org/docs/).
 
 ## Installed Versions
 
 There are two versions of Valgrind available on Anselm.
 
-*   Version 3.6.0, installed by operating system vendor in /usr/bin/valgrind. This version is available by default, without the need to load any module. This version however does not provide additional MPI support.
-*   Version 3.9.0 with support for Intel MPI, available in [module](../../environment-and-modules/) valgrind/3.9.0-impi. After loading the module, this version replaces the default valgrind.
+* Version 3.6.0, installed by operating system vendor in /usr/bin/valgrind. This version is available by default, without the need to load any module. This version however does not provide additional MPI support.
+* Version 3.9.0 with support for Intel MPI, available in [module](../../environment-and-modules/) valgrind/3.9.0-impi. After loading the module, this version replaces the default valgrind.
 
 ## Usage
 
@@ -156,7 +156,7 @@ The default version without MPI support will however report a large number of fa
     ==30166== by 0x4008BD: main (valgrind-example-mpi.c:18)
 ```
 
-so it is better to use the MPI-enabled valgrind from module. The MPI version requires library /apps/tools/valgrind/3.9.0/impi/lib/valgrind/libmpiwrap-amd64-linux.so, which must be included in the  LD_PRELOAD environment variable.
+so it is better to use the MPI-enabled Valgrind from the module. The MPI version requires the library /apps/tools/valgrind/3.9.0/impi/lib/valgrind/libmpiwrap-amd64-linux.so, which must be included in the LD_PRELOAD environment variable.
 
 Lets look at this MPI example :
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md
index df0b0a8a124a2f70d11a2e2adc4eb3d17cf227a0..66de3b77a06d7333464336ada10d68cd3a899aa8 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md
@@ -32,5 +32,5 @@ Read more at <http://software.intel.com/sites/products/documentation/doclib/stdx
 
 Anselm nodes are currently equipped with Sandy Bridge CPUs, while Salomon will use Haswell architecture. >The new processors are backward compatible with the Sandy Bridge nodes, so all programs that ran on the Sandy Bridge processors, should also run on the new Haswell nodes. >To get optimal performance out of the Haswell processors a program should make use of the special AVX2 instructions for this processor. One can do this by recompiling codes with the compiler flags >designated to invoke these instructions. For the Intel compiler suite, there are two ways of doing this:
 
-*   Using compiler flag (both for Fortran and C): -xCORE-AVX2. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge nodes.
-*   Using compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This will generate multiple, feature specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. So this binary will run both on Sandy Bridge and Haswell processors. During runtime it will be decided which path to follow, dependent on which processor you are running on. In general this will result in larger binaries.
+* Using compiler flag (both for Fortran and C): -xCORE-AVX2. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge nodes.
+* Using compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This will generate multiple, feature specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. So this binary will run both on Sandy Bridge and Haswell processors. During runtime it will be decided which path to follow, dependent on which processor you are running on. In general this will result in larger binaries. Both approaches are sketched below.
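+
+A minimal sketch of both approaches; the module name and source file name are illustrative assumptions:
+
+```bash
+    $ module load intel
+    $ ifort -O3 -xCORE-AVX2 myprog.f90 -o myprog.avx2
+    $ ifort -O3 -xAVX -axCORE-AVX2 myprog.f90 -o myprog.dispatch
+```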
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md
index dcd4f6c7ea441ca6fed50a0da94ba9dfc974b1ae..aed92ae69da6f721f676fa5e4180945711fe5fba 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md
@@ -4,14 +4,14 @@
 
 Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, extensively threaded and optimized for maximum performance. Intel MKL provides these basic math kernels:
 
-*   BLAS (level 1, 2, and 3) and LAPACK linear algebra routines, offering vector, vector-matrix, and matrix-matrix operations.
-*   The PARDISO direct sparse solver, an iterative sparse solver, and supporting sparse BLAS (level 1, 2, and 3) routines for solving sparse systems of equations.
-*   ScaLAPACK distributed processing linear algebra routines for Linux and Windows operating systems, as well as the Basic Linear Algebra Communications Subprograms (BLACS) and the Parallel Basic Linear Algebra Subprograms (PBLAS).
-*   Fast Fourier transform (FFT) functions in one, two, or three dimensions with support for mixed radices (not limited to sizes that are powers of 2), as well as distributed versions of these functions.
-*   Vector Math Library (VML) routines for optimized mathematical operations on vectors.
-*   Vector Statistical Library (VSL) routines, which offer high-performance vectorized random number generators (RNG) for    several probability distributions, convolution and correlation routines, and summary statistics functions.
-*   Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search.
-*   Extended Eigensolver, a shared memory  version of an eigensolver based on the Feast Eigenvalue Solver.
+* BLAS (level 1, 2, and 3) and LAPACK linear algebra routines, offering vector, vector-matrix, and matrix-matrix operations.
+* The PARDISO direct sparse solver, an iterative sparse solver, and supporting sparse BLAS (level 1, 2, and 3) routines for solving sparse systems of equations.
+* ScaLAPACK distributed processing linear algebra routines for Linux and Windows operating systems, as well as the Basic Linear Algebra Communications Subprograms (BLACS) and the Parallel Basic Linear Algebra Subprograms (PBLAS).
+* Fast Fourier transform (FFT) functions in one, two, or three dimensions with support for mixed radices (not limited to sizes that are powers of 2), as well as distributed versions of these functions.
+* Vector Math Library (VML) routines for optimized mathematical operations on vectors.
+* Vector Statistical Library (VSL) routines, which offer high-performance vectorized random number generators (RNG) for several probability distributions, convolution and correlation routines, and summary statistics functions.
+* Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search.
+* Extended Eigensolver, a shared memory version of an eigensolver based on the Feast Eigenvalue Solver.
 
 For details see the [Intel MKL Reference Manual](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mklman/index.htm).
 
@@ -39,7 +39,7 @@ The MKL library provides number of interfaces. The fundamental once are the LP64
 
 Linking MKL libraries may be complex. Intel [mkl link line advisor](http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor) helps. See also [examples](intel-mkl/#examples) below.
 
-You will need the mkl module loaded to run the mkl enabled executable. This may be avoided, by compiling library search paths into the executable. Include  rpath on the compile line:
+You will need the mkl module loaded to run the mkl enabled executable. This may be avoided by compiling library search paths into the executable. Include rpath on the compile line:
 
 ```bash
     $ icc .... -Wl,-rpath=$LIBRARY_PATH ...
@@ -74,7 +74,7 @@ Number of examples, demonstrating use of the MKL library and its linking is avai
     $ make sointel64 function=cblas_dgemm
 ```
 
-In this example, we compile, link and run the cblas_dgemm  example, demonstrating use of MKL example suite installed on Anselm.
+In this example, we compile, link and run the cblas_dgemm example, demonstrating use of MKL example suite installed on Anselm.
 
 ### Example: MKL and Intel Compiler
 
@@ -88,14 +88,14 @@ In this example, we compile, link and run the cblas_dgemm  example, demonstratin
     $ ./cblas_dgemmx.x data/cblas_dgemmx.d
 ```
 
-In this example, we compile, link and run the cblas_dgemm  example, demonstrating use of MKL with icc -mkl option. Using the -mkl option is equivalent to:
+In this example, we compile, link and run the cblas_dgemm example, demonstrating use of MKL with icc -mkl option. Using the -mkl option is equivalent to:
 
 ```bash
     $ icc -w source/cblas_dgemmx.c source/common_func.c -o cblas_dgemmx.x
     -I$MKL_INC_DIR -L$MKL_LIB_DIR -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5
 ```
 
-In this example, we compile and link the cblas_dgemm  example, using LP64 interface to threaded MKL and Intel OMP threads implementation.
+In this example, we compile and link the cblas_dgemm example, using LP64 interface to threaded MKL and Intel OMP threads implementation.
 
 ### Example: MKL and GNU Compiler
 
@@ -111,7 +111,7 @@ In this example, we compile and link the cblas_dgemm  example, using LP64 interf
     $ ./cblas_dgemmx.x data/cblas_dgemmx.d
 ```
 
-In this example, we compile, link and run the cblas_dgemm  example, using LP64 interface to threaded MKL and gnu OMP threads implementation.
+In this example, we compile, link and run the cblas_dgemm example, using LP64 interface to threaded MKL and gnu OMP threads implementation.
 
 ## MKL and MIC Accelerators
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md b/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md
index 0390dff9411db764274c0e8e8caaed9293aed1c6..baa0e6b531398be43761ac5198d430ea1f032d5e 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md
@@ -245,7 +245,7 @@ Some interesting compiler flags useful not only for code debugging are:
 
 Intel MKL includes an Automatic Offload (AO) feature that enables computationally intensive MKL functions called in user code to benefit from attached Intel Xeon Phi coprocessors automatically and transparently.
 
-Behavioral of automatic offload mode is controlled by functions called within the program or by environmental variables. Complete list of controls is listed [ here](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mkl_userguide_lnx/GUID-3DC4FC7D-A1E4-423D-9C0C-06AB265FFA86.htm).
+The behavior of automatic offload mode is controlled by functions called within the program or by environmental variables. A complete list of controls is available [here](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mkl_userguide_lnx/GUID-3DC4FC7D-A1E4-423D-9C0C-06AB265FFA86.htm).
 
 The Automatic Offload may be enabled by either an MKL function call within the code:
 
@@ -259,7 +259,7 @@ or by setting environment variable
     $ export MKL_MIC_ENABLE=1
 ```
 
-To get more information about automatic offload please refer to "[Using Intel® MKL Automatic Offload on Intel ® Xeon Phi™ Coprocessors](http://software.intel.com/sites/default/files/11MIC42_How_to_Use_MKL_Automatic_Offload_0.pdf)" white paper or [ Intel MKL documentation](https://software.intel.com/en-us/articles/intel-math-kernel-library-documentation).
+To get more information about automatic offload please refer to "[Using Intel® MKL Automatic Offload on Intel ® Xeon Phi™ Coprocessors](http://software.intel.com/sites/default/files/11MIC42_How_to_Use_MKL_Automatic_Offload_0.pdf)" white paper or [Intel MKL documentation](https://software.intel.com/en-us/articles/intel-math-kernel-library-documentation).
 
 ### Automatic Offload Example
 
@@ -632,7 +632,7 @@ The output should be similar to:
 There are two ways how to execute an MPI code on a single coprocessor: 1.) lunch the program using "**mpirun**" from the
 coprocessor; or 2.) lunch the task using "**mpiexec.hydra**" from a host.
 
-**Execution on coprocessor**
+#### Execution on coprocessor
 
 Similarly to execution of OpenMP programs in native mode, since the environmental module are not supported on MIC, user has to setup paths to Intel MPI libraries and binaries manually. One time setup can be done by creating a "**.profile**" file in user's home directory. This file sets up the environment on the MIC automatically once user access to the accelerator through the SSH.
 
@@ -651,8 +651,8 @@ Similarly to execution of OpenMP programs in native mode, since the environmenta
 ```
 
 !!! note
-    - this file sets up both environmental variable for both MPI and OpenMP libraries.
-    - this file sets up the paths to a particular version of Intel MPI library and particular version of an Intel compiler. These versions have to match with loaded modules.
+    \* this file sets up environmental variables for both MPI and OpenMP libraries.
+    \* this file sets up the paths to a particular version of the Intel MPI library and a particular version of the Intel compiler. These versions have to match the loaded modules.
 
 To access a MIC accelerator located on a node that user is currently connected to, use:
 
@@ -681,7 +681,7 @@ The output should be similar to:
     Hello world from process 0 of 4 on host cn207-mic0
 ```
 
-**Execution on host**
+#### Execution on host
 
 If the MPI program is launched from host instead of the coprocessor, the environmental variables are not set using the ".profile" file. Therefore user has to specify library paths from the command line when calling "mpiexec".
 
@@ -704,8 +704,8 @@ or using mpirun
 ```
 
 !!! note
-    - the full path to the binary has to specified (here: `>~/mpi-test-mic`)
-    - the `LD_LIBRARY_PATH` has to match with Intel MPI module used to compile the MPI code
+    \* the full path to the binary has to be specified (here: `~/mpi-test-mic`)
+    \* the `LD_LIBRARY_PATH` has to match the Intel MPI module used to compile the MPI code
 
 The output should be again similar to:
 
@@ -726,7 +726,7 @@ A simple test to see if the file is present is to execute:
       /bin/pmi_proxy
 ```
 
-**Execution on host - MPI processes distributed over multiple accelerators on multiple nodes**
+#### Execution on host - MPI processes distributed over multiple accelerators on multiple nodes
 
 To get access to multiple nodes with MIC accelerator, user has to use PBS to allocate the resources. To start interactive session, that allocates 2 compute nodes = 2 MIC accelerators run qsub command with following parameters:
 
@@ -886,7 +886,7 @@ A possible output of the MPI "hello-world" example executed on two hosts and two
 !!! note
     At this point the MPI communication between MIC accelerators on different nodes uses 1Gb Ethernet only.
 
-**Using the PBS automatically generated node-files**
+### Using the PBS automatically generated node-files
 
 PBS also generates a set of node-files that can be used instead of manually creating a new one every time. Three node-files are genereated:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md b/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
index d2512ae42bd0466f4daface9d4449d2fd0177d5c..a7c898aae1fe3387d011e976500a65d22278af99 100644
--- a/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
+++ b/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
@@ -38,7 +38,7 @@ Example of the Commercial Matlab license state:
     $ cat /apps/user/licenses/matlab_features_state.txt
     # matlab
     # -------------------------------------------------
-    # FEATURE                       TOTAL   USED  AVAIL
+    # FEATURE                       TOTAL   USED AVAIL
     # -------------------------------------------------
     MATLAB                              1      1      0
     SIMULINK                            1      0      1
@@ -61,21 +61,21 @@ The general format of the name is `feature__APP__FEATURE`.
 
 Names of applications (APP):
 
-*   ansys
-*   comsol
-*   comsol-edu
-*   matlab
-*   matlab-edu
+* ansys
+* comsol
+* comsol-edu
+* matlab
+* matlab-edu
 
 To get the FEATUREs of a license take a look into the corresponding state file ([see above](isv_licenses/#Licence)), or use:
 
-**Application and List of provided features**
+### Application and List of provided features
 
-*   **ansys** $ grep -v "#" /apps/user/licenses/ansys_features_state.txt | cut -f1 -d' '
-*   **comsol** $ grep -v "#" /apps/user/licenses/comsol_features_state.txt | cut -f1 -d' '
-*   **comsol-ed** $ grep -v "#" /apps/user/licenses/comsol-edu_features_state.txt | cut -f1 -d' '
-*   **matlab** $ grep -v "#" /apps/user/licenses/matlab_features_state.txt | cut -f1 -d' '
-*   **matlab-edu** $ grep -v "#" /apps/user/licenses/matlab-edu_features_state.txt | cut -f1 -d' '
+* **ansys** $ grep -v "#" /apps/user/licenses/ansys_features_state.txt | cut -f1 -d' '
+* **comsol** $ grep -v "#" /apps/user/licenses/comsol_features_state.txt | cut -f1 -d' '
+* **comsol-ed** $ grep -v "#" /apps/user/licenses/comsol-edu_features_state.txt | cut -f1 -d' '
+* **matlab** $ grep -v "#" /apps/user/licenses/matlab_features_state.txt | cut -f1 -d' '
+* **matlab-edu** $ grep -v "#" /apps/user/licenses/matlab-edu_features_state.txt | cut -f1 -d' '
 
 Example of PBS Pro resource name, based on APP and FEATURE name:
 
@@ -92,7 +92,7 @@ Example of PBS Pro resource name, based on APP and FEATURE name:
 | matlab-edu  | MATLAB_Distrib_Comp_Engine | feature_matlab-edu_MATLAB_Distrib_Comp_Engine   |
 | matlab-edu  | Image_Acquisition_Toolbox  | feature_matlab-edu_Image_Acquisition_Toolbox\\  |
 
-**Be aware, that the resource names in PBS Pro are CASE SENSITIVE!**
+#### Be aware that the resource names in PBS Pro are CASE SENSITIVE!
 
 ### Example of qsub Statement
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md b/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md
index 8ecfd71d5e62637b80eebd54f1cc32dedb818f5e..a5e8238c7d39dc20bf542a547d583cee7df867b2 100644
--- a/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md
+++ b/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md
@@ -6,11 +6,11 @@ Running virtual machines on compute nodes
 
 There are situations when Anselm's environment is not suitable for user needs.
 
-*   Application requires different operating system (e.g Windows), application is not available for Linux
-*   Application requires different versions of base system libraries and tools
-*   Application requires specific setup (installation, configuration) of complex software stack
-*   Application requires privileged access to operating system
-*   ... and combinations of above cases
+* Application requires different operating system (e.g. Windows), application is not available for Linux
+* Application requires different versions of base system libraries and tools
+* Application requires specific setup (installation, configuration) of complex software stack
+* Application requires privileged access to operating system
+* ... and combinations of above cases
 
 We offer solution for these cases - **virtualization**. Anselm's environment gives the possibility to run virtual machines on compute nodes. Users can create their own images of operating system with specific software stack and run instances of these images as virtual machines on compute nodes. Run of virtual machines is provided by standard mechanism of [Resource Allocation and Job Execution](../../resource-allocation-and-job-execution/introduction/).
 
@@ -53,11 +53,11 @@ Our recommended solution is that job script creates distinct shared job director
 
 ### Procedure
 
-1.  Prepare image of your virtual machine
-2.  Optimize image of your virtual machine for Anselm's virtualization
-3.  Modify your image for running jobs
-4.  Create job script for executing virtual machine
-5.  Run jobs
+1. Prepare image of your virtual machine
+1. Optimize image of your virtual machine for Anselm's virtualization
+1. Modify your image for running jobs
+1. Create job script for executing virtual machine
+1. Run jobs
 
 ### Prepare Image of Your Virtual Machine
 
@@ -65,13 +65,13 @@ You can either use your existing image or create new image from scratch.
 
 QEMU currently supports these image types or formats:
 
-*   raw
-*   cloop
-*   cow
-*   qcow
-*   qcow2
-*   vmdk - VMware 3 & 4, or 6 image format, for exchanging images with that product
-*   vdi - VirtualBox 1.1 compatible image format, for exchanging images with VirtualBox.
+* raw
+* cloop
+* cow
+* qcow
+* qcow2
+* vmdk - VMware 3 & 4, or 6 image format, for exchanging images with that product
+* vdi - VirtualBox 1.1 compatible image format, for exchanging images with VirtualBox.
 
 You can convert your existing image using qemu-img convert command. Supported formats of this command are: blkdebug blkverify bochs cloop cow dmg file ftp ftps host_cdrom host_device host_floppy http https nbd parallels qcow qcow2 qed raw sheepdog tftp vdi vhdx vmdk vpc vvfat.
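+
+For example, converting a VirtualBox VDI image to qcow2 might look like this (the file names are illustrative):
+
+```bash
+    $ qemu-img convert -f vdi -O qcow2 win7.vdi win7.qcow2
+```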
 
@@ -97,10 +97,10 @@ Your image should run some kind of operating system startup script. Startup scri
 
 We recommend, that startup script
 
-*   maps Job Directory from host (from compute node)
-*   runs script (we call it "run script") from Job Directory and waits for application's exit
-   * for management purposes if run script does not exist wait for some time period (few minutes)
-*   shutdowns/quits OS
+* maps the Job Directory from the host (from the compute node)
+* runs a script (we call it the "run script") from the Job Directory and waits for the application's exit
+  * for management purposes, if the run script does not exist, wait for some time period (a few minutes)
+* shuts down/quits the OS
 
 For Windows operating systems we suggest using Local Group Policy Startup script, for Linux operating systems rc.local, runlevel init script or similar service.
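+
+A rough sketch of such a startup script for a Linux guest is below; it assumes the Job Directory is exported to the guest as a virtio-9p share with tag "job", and the share tag, mount point and run script name are illustrative assumptions, so adapt them to your image and to the sharing mechanism you configure:
+
+```bash
+#!/bin/bash
+# mount the Job Directory exported by the host (illustrative virtio-9p share tagged "job")
+mkdir -p /mnt/job
+mount -t 9p -o trans=virtio job /mnt/job
+# run the "run script" from the Job Directory and wait for it; if it does not exist,
+# wait a few minutes so the machine stays reachable for management purposes
+if [ -x /mnt/job/run.sh ]; then
+    /mnt/job/run.sh
+else
+    sleep 300
+fi
+# shut the guest down when done
+poweroff
+```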
 
@@ -198,7 +198,7 @@ Example run script (run.bat) for Windows virtual machine:
     call application.bat z:data z:output
 ```
 
-Run script runs application from shared job directory (mapped as drive z:), process input data (z:data) from job directory  and store output to job directory (z:output).
+The run script runs the application from the shared job directory (mapped as drive z:), processes input data (z:data) from the job directory and stores output to the job directory (z:output).
 
 ### Run Jobs
 
@@ -279,7 +279,7 @@ Optimized network setup with sharing and port forwarding
 
 ### Advanced Networking
 
-**Internet access**
+#### Internet access
 
 Sometime your virtual machine needs access to internet (install software, updates, software activation, etc). We suggest solution using Virtual Distributed Ethernet (VDE) enabled QEMU with SLIRP running on login node tunneled to compute node. Be aware, this setup has very low performance, the worst performance of all described solutions.
 
@@ -321,7 +321,7 @@ Optimized setup
     $ qemu-system-x86_64 ... -device virtio-net-pci,netdev=net0 -netdev vde,id=net0,sock=/tmp/sw0
 ```
 
-**TAP interconnect**
+#### TAP interconnect
 
 Both user and vde network back-end have low performance. For fast interconnect (10 Gbit/s and more) of compute node (host) and virtual machine (guest) we suggest using Linux kernel TAP device.
 
@@ -338,9 +338,9 @@ Interface tap0 has IP address 192.168.1.1 and network mask 255.255.255.0 (/24).
 
 Redirected ports:
 
-*   DNS udp/53->udp/3053, tcp/53->tcp3053
-*   DHCP udp/67->udp3067
-*   SMB tcp/139->tcp3139, tcp/445->tcp3445).
+* DNS udp/53->udp/3053, tcp/53->tcp/3053
+* DHCP udp/67->udp/3067
+* SMB tcp/139->tcp/3139, tcp/445->tcp/3445
 
 You can configure IP address of virtual machine statically or dynamically. For dynamic addressing provide your DHCP server on port 3067 of tap0 interface, you can also provide your DNS server on port 3053 of tap0 interface for example:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md
index f164792863fdfb0b5fd83b41f5a8efd9328b301a..bc60afb16ebee9968d942c0e4189f79705118276 100644
--- a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md
+++ b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md
@@ -19,7 +19,7 @@ Look up section modulefiles/mpi in module avail
 ```bash
     $ module avail
     ------------------------- /opt/modules/modulefiles/mpi -------------------------
-    bullxmpi/bullxmpi-1.2.4.1  mvapich2/1.9-icc
+    bullxmpi/bullxmpi-1.2.4.1 mvapich2/1.9-icc
     impi/4.0.3.008             openmpi/1.6.5-gcc(default)
     impi/4.1.0.024             openmpi/1.6.5-gcc46
     impi/4.1.0.030             openmpi/1.6.5-icc
@@ -108,7 +108,7 @@ Compile the above example with
 ## Running MPI Programs
 
 !!! note
-    The MPI program executable must be compatible with the loaded MPI module. 
+    The MPI program executable must be compatible with the loaded MPI module.
     Always compile and execute using the very same MPI module.
 
 It is strongly discouraged to mix mpi implementations. Linking an application with one MPI implementation and running mpirun/mpiexec form other implementation may result in unexpected errors.
diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md b/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md
index 1a8972c390a62ba9e29cb67e3993b7b8c1ea412f..64d3c620fddf82b25339d535fb984067924ef29a 100644
--- a/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md
+++ b/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md
@@ -41,7 +41,7 @@ You need to preload the executable, if running on the local scratch /lscratch fi
     Hello world! from rank 3 of 4 on host cn110
 ```
 
-In this example, we assume the executable helloworld_mpi.x is present on shared home directory. We run the cp command via mpirun, copying the executable from shared home to local scratch . Second  mpirun will execute the binary in the /lscratch/15210.srv11 directory on nodes cn17, cn108, cn109 and cn110, one process per node.
+In this example, we assume the executable helloworld_mpi.x is present on the shared home directory. We run the cp command via mpirun, copying the executable from the shared home to the local scratch. The second mpirun will execute the binary in the /lscratch/15210.srv11 directory on nodes cn17, cn108, cn109 and cn110, one process per node.
 
 !!! note
     MPI process mapping may be controlled by PBS parameters.
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
index d693a1872e3cf23badce337d4715ee679b2f00e8..10126cc5d864f5d00a8319c88dedd9ee402f226f 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
@@ -4,8 +4,8 @@
 
 Matlab is available in versions R2015a and R2015b. There are always two variants of the release:
 
-*   Non commercial or so called EDU variant, which can be used for common research and educational purposes.
-*   Commercial or so called COM variant, which can used also for commercial activities. The licenses for commercial variant are much more expensive, so usually the commercial variant has only subset of features compared to the EDU available.
+* Non commercial or so called EDU variant, which can be used for common research and educational purposes.
+* Commercial or so called COM variant, which can be used also for commercial activities. The licenses for the commercial variant are much more expensive, so usually the commercial variant has only a subset of features compared to the EDU variant.
 
 To load the latest version of Matlab load the module
 
@@ -274,7 +274,7 @@ You can use MATLAB on UV2000 in two parallel modes:
 
 ### Threaded Mode
 
-Since this is a SMP machine, you can completely avoid using Parallel Toolbox and use only MATLAB's threading. MATLAB will automatically detect the number of cores you have allocated and will set maxNumCompThreads accordingly and certain operations, such as  fft, , eig, svd, etc. will be automatically run in threads. The advantage of this mode is that you don't need to modify your existing sequential codes.
+Since this is an SMP machine, you can completely avoid using Parallel Toolbox and use only MATLAB's threading. MATLAB will automatically detect the number of cores you have allocated and will set maxNumCompThreads accordingly; certain operations, such as fft, eig, svd, etc., will be automatically run in threads. The advantage of this mode is that you don't need to modify your existing sequential codes.
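+
+As a quick sanity check (an illustrative command, not a required step), you can print the number of computational threads MATLAB has picked up for your allocation:
+
+```bash
+    $ matlab -nodesktop -nosplash -r "disp(maxNumCompThreads); exit"
+```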
 
 ### Local Cluster Mode
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md
index f9cf95feb5013a3843458fb22fd2f8eaa6e9f5e9..8c1012531c67f272907e154addb5f336e636eaf6 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md
@@ -7,8 +7,8 @@
 
 Matlab is available in the latest stable version. There are always two variants of the release:
 
-*   Non commercial or so called EDU variant, which can be used for common research and educational purposes.
-*   Commercial or so called COM variant, which can used also for commercial activities. The licenses for commercial variant are much more expensive, so usually the commercial variant has only subset of features compared to the EDU available.
+* Non-commercial, or so-called EDU variant, which can be used for common research and educational purposes.
+* Commercial, or so-called COM variant, which can also be used for commercial activities. The licenses for the commercial variant are much more expensive, so usually the commercial variant has only a subset of the features available in the EDU variant.
 
 To load the latest version of Matlab load the module
 
@@ -42,7 +42,6 @@ $ matlab -nodesktop -nosplash
 
 Plots, images, etc... will be still available.
 
-
 ## Running Parallel Matlab Using Distributed Computing Toolbox / Engine
 
 Recommended parallel mode for running parallel Matlab on Anselm is MPIEXEC mode. In this mode user allocates resources through PBS prior to starting Matlab. Once resources are granted the main Matlab instance is started on the first compute node assigned to job by PBS and workers are started on all remaining nodes. User can use both interactive and non-interactive PBS sessions. This mode guarantees that the data processing is not performed on login nodes, but all processing is on compute nodes.
@@ -54,6 +53,7 @@ For the performance reasons Matlab should use system MPI. On Anselm the supporte
 ```bash
 $ vim ~/matlab/mpiLibConf.m
 ```
+
 ```bash
 function [lib, extras] = mpiLibConf
 %MATLAB MPI Library overloading for Infiniband Networks
@@ -72,7 +72,7 @@ extras = {};
 System MPI library allows Matlab to communicate through 40 Gbit/s InfiniBand QDR interconnect instead of slower 1 Gbit Ethernet network.
 
 !!! note
-    The path to MPI library in "mpiLibConf.m" has to match with version of loaded Intel MPI module. In this example the version 4.1.1.036 of Intel MPI is used by Matlab and therefore module impi/4.1.1.036  has to be loaded prior to starting Matlab.
+    The path to the MPI library in "mpiLibConf.m" has to match the version of the loaded Intel MPI module. In this example the version 4.1.1.036 of Intel MPI is used by Matlab, and therefore the module impi/4.1.1.036 has to be loaded prior to starting Matlab.
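+
+For example, to match the version referenced in the note above (the interactive session below shows the full startup procedure):
+
+```bash
+    $ module load impi/4.1.1.036
+    $ matlab -nodesktop -nosplash
+```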
 
 ### Parallel Matlab Interactive Session
 
@@ -193,7 +193,7 @@ You can copy and paste the example in a .m file and execute. Note that the matla
 
 ### Non-Interactive Session and Licenses
 
-If you want to run batch jobs with Matlab, be sure to request appropriate license features with the PBS Pro scheduler, at least the "` -l __feature__matlab__MATLAB=1`" for EDU variant of Matlab. More information about how to check the license features states and how to request them with PBS Pro, please [look here](../isv_licenses/).
+If you want to run batch jobs with Matlab, be sure to request the appropriate license features with the PBS Pro scheduler, at least `-l __feature__matlab__MATLAB=1` for the EDU variant of Matlab. For more information about how to check the license feature states and how to request them with PBS Pro, please [look here](../isv_licenses/).
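+
+A hypothetical qsub invocation requesting the license feature together with compute resources might look like this (project ID, queue, resource selection and jobscript name are placeholders):
+
+```bash
+    $ qsub -A PROJECT_ID -q qprod -l select=1:ncpus=16 -l __feature__matlab__MATLAB=1 ./jobscript
+```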
 
 In case of non-interactive session please read the [following information](../isv_licenses/) on how to modify the qsub command to test for available licenses prior getting the resource allocation.
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
index 038de8aade954aa089d5e2878ef733861fde8fea..eb82371f76c7b6a7ea59e3b83fbec4a3dd36a083 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
@@ -90,8 +90,8 @@ In this example, the calculation was automatically divided among the CPU cores a
 
 A version of [native](../intel-xeon-phi/#section-4) Octave is compiled for Xeon Phi accelerators. Some limitations apply for this version:
 
-*   Only command line support. GUI, graph plotting etc. is not supported.
-*   Command history in interactive mode is not supported.
+* Only command line support. GUI, graph plotting etc. is not supported.
+* Command history in interactive mode is not supported.
 
 Octave is linked with parallel Intel MKL, so it best suited for batch processing of tasks that utilize BLAS, LAPACK and FFT operations. By default, number of threads is set to 120, you can control this with > OMP_NUM_THREADS environment
 variable.
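+
+For example, to lower the thread count before starting the native Octave run (the value shown is illustrative):
+
+```bash
+    $ export OMP_NUM_THREADS=60
+```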
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md
index 56426eb06591ed490d317fa14a65ddfd2bc4290f..f62cad83d6f5e29a8310cef81d10eef8df6fcb60 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md
@@ -70,7 +70,7 @@ This script may be submitted directly to the PBS workload manager via the qsub c
 
 ## Parallel R
 
-Parallel execution of R may be achieved in many ways. One approach is the implied parallelization due to linked libraries or specially enabled functions, as [described above](r/#interactive-execution). In the following sections, we focus on explicit parallelization, where  parallel constructs are directly stated within the R script.
+Parallel execution of R may be achieved in many ways. One approach is the implied parallelization due to linked libraries or specially enabled functions, as [described above](r/#interactive-execution). In the following sections, we focus on explicit parallelization, where parallel constructs are directly stated within the R script.
 
 ## Package Parallel
 
@@ -375,7 +375,7 @@ Example jobscript for [static Rmpi](r/#static-rmpi) parallel R execution, runnin
     #PBS -N Rjob
     #PBS -l select=100:ncpus=16:mpiprocs=16:ompthreads=1
 
-    # change to  scratch directory
+    # change to scratch directory
     SCRDIR=/scratch/$USER/myjob
     cd $SCRDIR || exit
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md
index 67c3fdf090a195dc3cd88d66dd920e0cd6163648..038e1223a44cde79a37f2f7fe59fab9f7e5a8e8e 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md
@@ -2,7 +2,7 @@
 
 The discrete Fourier transform in one or more dimensions, MPI parallel
 
-FFTW is a C subroutine library for computing the discrete Fourier transform  in one or more dimensions, of arbitrary input size, and of both real and complex data (as well as of even/odd data, e.g. the discrete cosine/sine transforms or DCT/DST). The FFTW library allows for MPI parallel, in-place discrete Fourier transform, with data distributed over number of nodes.
+FFTW is a C subroutine library for computing the discrete Fourier transform in one or more dimensions, of arbitrary input size, and of both real and complex data (as well as of even/odd data, e.g. the discrete cosine/sine transforms or DCT/DST). The FFTW library allows for MPI parallel, in-place discrete Fourier transform, with data distributed over number of nodes.
 
 Two versions, **3.3.3** and **2.1.5** of FFTW are available on Anselm, each compiled for **Intel MPI** and **OpenMPI** using **intel** and **gnu** compilers. These are available via modules:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md
index 0f21187f23982ea4eebba887fa786e7cef0467c6..6b5308df3dabbbfe12a8763a955562e311eff35a 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md
@@ -8,39 +8,39 @@ The GNU Scientific Library (GSL) provides a wide range of mathematical routines
 
 The library covers a wide range of topics in numerical computing. Routines are available for the following areas:
 
-                           Complex Numbers          	Roots of Polynomials
+                           Complex Numbers              Roots of Polynomials
 
-                           Special Functions        	Vectors and Matrices
+                           Special Functions            Vectors and Matrices
 
-                           Permutations            	Combinations
+                           Permutations                 Combinations
 
-                           Sorting                  	BLAS Support
+                           Sorting                      BLAS Support
 
-                           Linear Algebra           	CBLAS Library
+                           Linear Algebra               CBLAS Library
 
-                           Fast Fourier Transforms  	Eigensystems
+                           Fast Fourier Transforms      Eigensystems
 
-                           Random Numbers           	Quadrature
+                           Random Numbers               Quadrature
 
-                           Random Distributions     	Quasi-Random Sequences
+                           Random Distributions         Quasi-Random Sequences
 
-                           Histograms               	Statistics
+                           Histograms                   Statistics
 
-                           Monte Carlo Integration  	N-Tuples
+                           Monte Carlo Integration      N-Tuples
 
-                           Differential Equations   	Simulated Annealing
+                           Differential Equations       Simulated Annealing
 
                            Numerical Differentiation    Interpolation
 
-                           Series Acceleration      	Chebyshev Approximations
+                           Series Acceleration          Chebyshev Approximations
 
-                           Root-Finding             	Discrete Hankel Transforms
+                           Root-Finding                 Discrete Hankel Transforms
 
-                           Least-Squares Fitting    	Minimization
+                           Least-Squares Fitting        Minimization
 
-                           IEEE Floating-Point      	Physical Constants
+                           IEEE Floating-Point          Physical Constants
 
-                           Basis Splines            	Wavelets
+                           Basis Splines                Wavelets
 
 ## Modules
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md
index c4b1c262b007a7b34eac140fa2e31a65d9513512..8ce0b79e0ce63aff1cfea48f72e009ad111a79a1 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md
@@ -2,7 +2,7 @@
 
 Next generation dense algebra library for heterogeneous systems with accelerators
 
-### Compiling and Linking With MAGMA
+## Compiling and Linking With MAGMA
 
 To be able to compile and link code with MAGMA library user has to load following module:
 
@@ -23,7 +23,7 @@ Compilation example:
 ```bash
     $ icc -mkl -O3 -DHAVE_MIC -DADD_ -Wall $MAGMA_INC -c testing_dgetrf_mic.cpp -o testing_dgetrf_mic.o
 
-    $ icc -mkl -O3 -DHAVE_MIC -DADD_ -Wall -fPIC -Xlinker -zmuldefs -Wall -DNOCHANGE -DHOST  testing_dgetrf_mic.o  -o testing_dgetrf_mic $MAGMA_LIBS
+    $ icc -mkl -O3 -DHAVE_MIC -DADD_ -Wall -fPIC -Xlinker -zmuldefs -Wall -DNOCHANGE -DHOST testing_dgetrf_mic.o -o testing_dgetrf_mic $MAGMA_LIBS
 ```
 
 ### Running MAGMA Code
@@ -54,15 +54,15 @@ To test if the MAGMA server runs properly we can run one of examples that are pa
 
       M     N     CPU GFlop/s (sec)   MAGMA GFlop/s (sec)   ||PA-LU||/(||A||*N)
     =========================================================================
-     1088  1088     ---   (  ---  )     13.93 (   0.06)     ---
-     2112  2112     ---   (  ---  )     77.85 (   0.08)     ---
-     3136  3136     ---   (  ---  )    183.21 (   0.11)     ---
-     4160  4160     ---   (  ---  )    227.52 (   0.21)     ---
-     5184  5184     ---   (  ---  )    258.61 (   0.36)     ---
-     6208  6208     ---   (  ---  )    333.12 (   0.48)     ---
-     7232  7232     ---   (  ---  )    416.52 (   0.61)     ---
-     8256  8256     ---   (  ---  )    446.97 (   0.84)     ---
-     9280  9280     ---   (  ---  )    461.15 (   1.16)     ---
+     1088 1088     ---   (  ---  )     13.93 (   0.06)     ---
+     2112 2112     ---   (  ---  )     77.85 (   0.08)     ---
+     3136 3136     ---   (  ---  )    183.21 (   0.11)     ---
+     4160 4160     ---   (  ---  )    227.52 (   0.21)     ---
+     5184 5184     ---   (  ---  )    258.61 (   0.36)     ---
+     6208 6208     ---   (  ---  )    333.12 (   0.48)     ---
+     7232 7232     ---   (  ---  )    416.52 (   0.61)     ---
+     8256 8256     ---   (  ---  )    446.97 (   0.84)     ---
+     9280 9280     ---   (  ---  )    461.15 (   1.16)     ---
     10304 10304     ---   (  ---  )    500.70 (   1.46)     ---
 ```
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md
index 6d0b8fb58fae24e98cf4fe1f682e119890a12d67..528d13ddbcaffdc9f8b0a80bee379b05602317d7 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md
@@ -8,11 +8,11 @@ PETSc (Portable, Extensible Toolkit for Scientific Computation) is a suite of bu
 
 ## Resources
 
-*   [project webpage](http://www.mcs.anl.gov/petsc/)
-*   [documentation](http://www.mcs.anl.gov/petsc/documentation/)
-   * [PETSc Users  Manual (PDF)](http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf)
-   * [index of all manual pages](http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html)
-*   PRACE Video Tutorial [part1](http://www.youtube.com/watch?v=asVaFg1NDqY), [part2](http://www.youtube.com/watch?v=ubp_cSibb9I), [part3](http://www.youtube.com/watch?v=vJAAAQv-aaw), [part4](http://www.youtube.com/watch?v=BKVlqWNh8jY), [part5](http://www.youtube.com/watch?v=iXkbLEBFjlM)
+* [project webpage](http://www.mcs.anl.gov/petsc/)
+* [documentation](http://www.mcs.anl.gov/petsc/documentation/)
+  * [PETSc Users Manual (PDF)](http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf)
+  * [index of all manual pages](http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html)
+* PRACE Video Tutorial [part1](http://www.youtube.com/watch?v=asVaFg1NDqY), [part2](http://www.youtube.com/watch?v=ubp_cSibb9I), [part3](http://www.youtube.com/watch?v=vJAAAQv-aaw), [part4](http://www.youtube.com/watch?v=BKVlqWNh8jY), [part5](http://www.youtube.com/watch?v=iXkbLEBFjlM)
 
 ## Modules
 
@@ -36,25 +36,25 @@ All these libraries can be used also alone, without PETSc. Their static or share
 
 ### Libraries Linked to PETSc on Anselm (As of 11 April 2015)
 
-*   dense linear algebra
-   * [Elemental](http://libelemental.org/)
-*   sparse linear system solvers
-   * [Intel MKL Pardiso](https://software.intel.com/en-us/node/470282)
-   * [MUMPS](http://mumps.enseeiht.fr/)
-   * [PaStiX](http://pastix.gforge.inria.fr/)
-   * [SuiteSparse](http://faculty.cse.tamu.edu/davis/suitesparse.html)
-   * [SuperLU](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu)
-   * [SuperLU_Dist](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu_dist)
-*   input/output
-   * [ExodusII](http://sourceforge.net/projects/exodusii/)
-   * [HDF5](http://www.hdfgroup.org/HDF5/)
-   * [NetCDF](http://www.unidata.ucar.edu/software/netcdf/)
-*   partitioning
-   * [Chaco](http://www.cs.sandia.gov/CRF/chac.html)
-   * [METIS](http://glaros.dtc.umn.edu/gkhome/metis/metis/overview)
-   * [ParMETIS](http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview)
-   * [PT-Scotch](http://www.labri.fr/perso/pelegrin/scotch/)
-*   preconditioners & multigrid
-   * [Hypre](http://www.nersc.gov/users/software/programming-libraries/math-libraries/petsc/)
-   * [Trilinos ML](http://trilinos.sandia.gov/packages/ml/)
-   * [SPAI - Sparse Approximate Inverse](https://bitbucket.org/petsc/pkg-spai)
+* dense linear algebra
+  * [Elemental](http://libelemental.org/)
+* sparse linear system solvers
+  * [Intel MKL Pardiso](https://software.intel.com/en-us/node/470282)
+  * [MUMPS](http://mumps.enseeiht.fr/)
+  * [PaStiX](http://pastix.gforge.inria.fr/)
+  * [SuiteSparse](http://faculty.cse.tamu.edu/davis/suitesparse.html)
+  * [SuperLU](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu)
+  * [SuperLU_Dist](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu_dist)
+* input/output
+  * [ExodusII](http://sourceforge.net/projects/exodusii/)
+  * [HDF5](http://www.hdfgroup.org/HDF5/)
+  * [NetCDF](http://www.unidata.ucar.edu/software/netcdf/)
+* partitioning
+  * [Chaco](http://www.cs.sandia.gov/CRF/chac.html)
+  * [METIS](http://glaros.dtc.umn.edu/gkhome/metis/metis/overview)
+  * [ParMETIS](http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview)
+  * [PT-Scotch](http://www.labri.fr/perso/pelegrin/scotch/)
+* preconditioners & multigrid
+  * [Hypre](http://www.nersc.gov/users/software/programming-libraries/math-libraries/petsc/)
+  * [Trilinos ML](http://trilinos.sandia.gov/packages/ml/)
+  * [SPAI - Sparse Approximate Inverse](https://bitbucket.org/petsc/pkg-spai)
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md
index dbf7f01a5ec323eef138a163a2a89034d814065a..42f8bc0dc4ca5318cca883193e5fc61eb207b9b1 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md
@@ -2,29 +2,29 @@
 
 Packages for large scale scientific and engineering problems. Provides MPI and hybrid parallelization.
 
-### Introduction
+## Introduction
 
 Trilinos is a collection of software packages for the numerical solution of large scale scientific and engineering problems. It is based on C++ and features modern object-oriented design. Both serial as well as parallel computations based on MPI and hybrid parallelization are supported within Trilinos packages.
 
-### Installed Packages
+## Installed Packages
 
 Current Trilinos installation on ANSELM contains (among others) the following main packages
 
-*   **Epetra** - core linear algebra package containing classes for manipulation with serial and distributed vectors, matrices, and graphs. Dense linear solvers are supported via interface to BLAS and LAPACK (Intel MKL on ANSELM). Its extension **EpetraExt** contains e.g. methods for matrix-matrix multiplication.
-*   **Tpetra** - next-generation linear algebra package. Supports 64-bit indexing and arbitrary data type using C++ templates.
-*   **Belos** - library of various iterative solvers (CG, block CG, GMRES, block GMRES etc.).
-*   **Amesos** - interface to direct sparse solvers.
-*   **Anasazi** - framework for large-scale eigenvalue algorithms.
-*   **IFPACK** - distributed algebraic preconditioner (includes e.g. incomplete LU factorization)
-*   **Teuchos** - common tools packages. This package contains classes for memory management, output, performance monitoring, BLAS and LAPACK wrappers etc.
+* **Epetra** - core linear algebra package containing classes for manipulating serial and distributed vectors, matrices, and graphs. Dense linear solvers are supported via an interface to BLAS and LAPACK (Intel MKL on ANSELM). Its extension **EpetraExt** contains e.g. methods for matrix-matrix multiplication.
+* **Tpetra** - next-generation linear algebra package. Supports 64-bit indexing and arbitrary data type using C++ templates.
+* **Belos** - library of various iterative solvers (CG, block CG, GMRES, block GMRES etc.).
+* **Amesos** - interface to direct sparse solvers.
+* **Anasazi** - framework for large-scale eigenvalue algorithms.
+* **IFPACK** - distributed algebraic preconditioner (includes e.g. incomplete LU factorization)
+* **Teuchos** - common tools packages. This package contains classes for memory management, output, performance monitoring, BLAS and LAPACK wrappers etc.
 
 For the full list of Trilinos packages, descriptions of their capabilities, and user manuals see [http://trilinos.sandia.gov.](http://trilinos.sandia.gov)
 
-### Installed Version
+## Installed Version
 
 Currently, Trilinos in version 11.2.3 compiled with Intel Compiler is installed on ANSELM.
 
-### Compiling Against Trilinos
+## Compiling Against Trilinos
 
 First, load the appropriate module:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md b/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md
index 375d3732c504cd6d56d945aec1ce69711137efec..6291a4f29830f7df5f64a006d89938691a3b9893 100644
--- a/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md
+++ b/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md
@@ -1,6 +1,6 @@
 # NVIDIA CUDA
 
-## Guide to NVIDIA CUDA Programming and GPU Usage
+Guide to NVIDIA CUDA Programming and GPU Usage
 
 ## CUDA Programming on Anselm
 
@@ -198,7 +198,7 @@ To run the code use interactive PBS session to get access to one of the GPU acce
 
 The NVIDIA CUDA Basic Linear Algebra Subroutines (cuBLAS) library is a GPU-accelerated version of the complete standard BLAS library with 152 standard BLAS routines. Basic description of the library together with basic performance comparison with MKL can be found [here](https://developer.nvidia.com/cublas "Nvidia cuBLAS").
 
-**cuBLAS example: SAXPY**
+#### cuBLAS example: SAXPY
 
 SAXPY function multiplies the vector x by the scalar alpha and adds it to the vector y overwriting the latest vector with the result. The description of the cuBLAS function can be found in [NVIDIA CUDA documentation](http://docs.nvidia.com/cuda/cublas/index.html#cublas-lt-t-gt-axpy "Nvidia CUDA documentation "). Code can be pasted in the file and compiled without any modification.
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md b/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md
index c968fb56351fc78cfbcfb0576ccc0ef8063a898c..bc366f79e2aba3f752befae5627815a2d15325a6 100644
--- a/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md
+++ b/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md
@@ -11,7 +11,7 @@ The pipeline inputs the raw data produced by the sequencing machines and undergo
 ![OMICS MASTER solution overview. Data is produced in the external labs and comes to IT4I (represented by the blue dashed line). The data pre-processor converts raw data into a list of variants and annotations for each sequenced patient. These lists files together with primary and secondary (alignment) data files are stored in IT4I sequence DB and uploaded to the discovery (candidate priorization) or diagnostic component where they can be analysed directly by the user that produced
 them, depending of the experimental design carried out.](../../../img/fig1.png)
 
-** Figure 1. ** OMICS MASTER solution overview. Data is produced in the external labs and comes to IT4I (represented by the blue dashed line). The data pre-processor converts raw data into a list of variants and annotations for each sequenced patient. These lists files together with primary and secondary (alignment) data files are stored in IT4I sequence DB and uploaded to the discovery (candidate prioritization) or diagnostic component where they can be analyzed directly by the user that produced them, depending of the experimental design carried out.
+Figure 1. OMICS MASTER solution overview. Data is produced in the external labs and comes to IT4I (represented by the blue dashed line). The data pre-processor converts raw data into a list of variants and annotations for each sequenced patient. These lists files together with primary and secondary (alignment) data files are stored in IT4I sequence DB and uploaded to the discovery (candidate prioritization) or diagnostic component where they can be analyzed directly by the user that produced them, depending of the experimental design carried out.
 
 Typical genomics pipelines are composed by several components that need to be launched manually. The advantage of OMICS MASTER pipeline is that all these components are invoked sequentially in an automated way.
 
@@ -35,40 +35,40 @@ FastQC& FastQC.
 
 These steps are carried out over the original FASTQ file with optimized scripts and includes the following steps: sequence cleansing, estimation of base quality scores, elimination of duplicates and statistics.
 
-Input: ** FASTQ file **.
+Input: FASTQ file.
 
-Output: ** FASTQ file plus an HTML file containing statistics on the data **.
+Output: FASTQ file plus an HTML file containing statistics on the data.
 
 FASTQ format It represents the nucleotide sequence and its corresponding quality scores.
 
 ![FASTQ file.](../../../img/fig2.png "fig2.png")
-** Figure 2 **.FASTQ file.
+Figure 2. FASTQ file.
 
 #### Mapping
 
-Component: ** Hpg-aligner **.
+Component: Hpg-aligner.
 
 Sequence reads are mapped over the human reference genome. SOLiD reads are not covered by this solution; they should be mapped with specific software (among the few available options, SHRiMP seems to be the best one). For the rest of NGS machine outputs we use HPG Aligner. HPG-Aligner is an innovative solution, based on a combination of mapping with BWT and local alignment with Smith-Waterman (SW), that drastically increases mapping accuracy (97% versus 62-70% by current mappers, in the most common scenarios). This proposal provides a simple and fast solution that maps almost all the reads, even those containing a high number of mismatches or indels.
 
-Input: ** FASTQ file **.
+Input: FASTQ file.
 
-Output: ** Aligned file in BAM format **.
+Output: Aligned file in BAM format.
 
-** Sequence Alignment/Map (SAM) **
+#### Sequence Alignment/Map (SAM)
 
 It is a human readable tab-delimited format in which each read and its alignment is represented on a single line. The format can represent unmapped reads, reads that are mapped to unique locations, and reads that are mapped to multiple locations.
 
 The SAM format (1) consists of one header section and one alignment section. The lines in the header section start with character ‘@’, and lines in the alignment section do not. All lines are TAB delimited.
 
-In SAM, each alignment line has 11 mandatory fields and a variable number of optional fields. The mandatory fields are briefly described in Table 1. They must be present but their value can be a ‘\*’ or a zero (depending on the field) if the
+In SAM, each alignment line has 11 mandatory fields and a variable number of optional fields. The mandatory fields are briefly described in Table 1. They must be present but their value can be a ‘\*’ or a zero (depending on the field) if the
 corresponding information is unavailable.
 
-| ** No. ** | ** Name ** | ** Description **                                     |
+|  No.  |  Name  |  Description                                      |
 | --------- | ---------- | ----------------------------------------------------- |
 | 1         | QNAME      | Query NAME of the read or the read pai                |
 | 2         | FLAG       | Bitwise FLAG (pairing,strand,mate strand,etc.)        |
 | 3         | RNAME      | <p>Reference sequence NAME                            |
-| 4         | POS        | <p>1-Based  leftmost POSition of clipped alignment    |
+| 4         | POS        | <p>1-Based leftmost POSition of clipped alignment    |
 | 5         | MAPQ       | <p>MAPping Quality (Phred-scaled)                     |
 | 6         | CIGAR      | <p>Extended CIGAR string (operations:MIDNSHP)         |
 | 7         | MRNM       | <p>Mate REference NaMe ('=' if same RNAME)            |
@@ -77,92 +77,92 @@ corresponding information is unavailable.
 | 10        | SEQ        | <p>Query SEQuence on the same strand as the reference |
 | 11        | QUAL       | <p>Query QUALity (ASCII-33=Phred base quality)        |
 
-** Table 1 **. Mandatory fields in the SAM format.
+Table 1. Mandatory fields in the SAM format.
 
 The standard CIGAR description of pairwise alignment defines three operations: ‘M’ for match/mismatch, ‘I’ for insertion compared with the reference and ‘D’ for deletion. The extended CIGAR proposed in SAM added four more operations: ‘N’ for skipped bases on the reference, ‘S’ for soft clipping, ‘H’ for hard clipping and ‘P’ for padding. These support splicing, clipping, multi-part and padded alignments. Figure 3 shows examples of CIGAR strings for different types of alignments.
 
 ![SAM format file. The ‘@SQ’ line in the header section gives the order of reference sequences. Notably, r001 is the name of a read pair. According to FLAG 163 (=1+2+32+128), the read mapped to position 7 is the second read in the pair (128) and regarded as properly paired (1 + 2); its mate is mapped to 37 on the reverse strand (32). Read r002 has three soft-clipped (unaligned) bases. The coordinate shown in SAM is the position of the first aligned base. The CIGAR string for this alignment contains a P (padding) operation which correctly aligns the inserted sequences. Padding operations can be absent when an aligner does not support multiple sequence alignment. The last six bases of read r003 map to position 9, and the first five to position 29 on the reverse strand. The hard clipping operation H indicates that the clipped sequence is not present in the sequence field. The NM tag gives the number of mismatches. Read r004 is aligned across an intron, indicated by the N operation.](../../../img/fig3.png)
 
-** Figure 3 **. SAM format file. The ‘@SQ’ line in the header section gives the order of reference sequences. Notably, r001 is the name of a read pair. According to FLAG 163 (=1+2+32+128), the read mapped to position 7 is the second read in the pair (128) and regarded as properly paired (1 + 2); its mate is mapped to 37 on the reverse strand (32). Read r002 has three soft-clipped (unaligned) bases. The coordinate shown in SAM is the position of the first aligned base. The CIGAR string for this alignment contains a P (padding) operation which correctly aligns the inserted sequences. Padding operations can be absent when an aligner does not support multiple sequence alignment. The last six bases of read r003 map to position 9, and the first five to position 29 on the reverse strand. The hard clipping operation H indicates that the clipped sequence is not present in the sequence field. The NM tag gives the number of mismatches. Read r004 is aligned across an intron, indicated by the N operation.
+Figure 3. SAM format file. The ‘@SQ’ line in the header section gives the order of reference sequences. Notably, r001 is the name of a read pair. According to FLAG 163 (=1+2+32+128), the read mapped to position 7 is the second read in the pair (128) and regarded as properly paired (1 + 2); its mate is mapped to 37 on the reverse strand (32). Read r002 has three soft-clipped (unaligned) bases. The coordinate shown in SAM is the position of the first aligned base. The CIGAR string for this alignment contains a P (padding) operation which correctly aligns the inserted sequences. Padding operations can be absent when an aligner does not support multiple sequence alignment. The last six bases of read r003 map to position 9, and the first five to position 29 on the reverse strand. The hard clipping operation H indicates that the clipped sequence is not present in the sequence field. The NM tag gives the number of mismatches. Read r004 is aligned across an intron, indicated by the N operation.
 
-** Binary Alignment/Map (BAM) **
+##### Binary Alignment/Map (BAM)
 
 BAM is the binary representation of SAM and keeps exactly the same information as SAM. BAM uses lossless compression to reduce the size of the data by about 75% and provides an indexing system that allows reads that overlap a region of the genome to be retrieved and rapidly traversed.
 
 #### Quality Control, Preprocessing and Statistics for BAM
 
-** Component **: Hpg-Fastq & FastQC.
+Component: Hpg-Fastq & FastQC.
 
 Some features
 
-*   Quality control
-   * reads with N errors
-   * reads with multiple mappings
-   * strand bias
-   * paired-end insert
-*   Filtering: by number of errors, number of hits
-   * Comparator: stats, intersection, ...
+* Quality control
+  * reads with N errors
+  * reads with multiple mappings
+  * strand bias
+  * paired-end insert
+* Filtering: by number of errors, number of hits
+  * Comparator: stats, intersection, ...
 
-** Input: ** BAM file.
+Input: BAM file.
 
-** Output: ** BAM file plus an HTML file containing statistics.
+Output: BAM file plus an HTML file containing statistics.
 
 #### Variant Calling
 
-Component: ** GATK **.
+Component: GATK.
 
 Identification of single nucleotide variants and indels on the alignments is performed using the Genome Analysis Toolkit (GATK). GATK (2) is a software package developed at the Broad Institute to analyze high-throughput sequencing data. The toolkit offers a wide variety of tools, with a primary focus on variant discovery and genotyping as well as strong emphasis on data quality assurance.
 
-** Input: ** BAM
+Input: BAM
 
-** Output: ** VCF
+Output: VCF
 
-** Variant Call Format (VCF) **
+##### Variant Call Format (VCF)
 
 VCF (3) is a standardized format for storing the most prevalent types of sequence variation, including SNPs, indels and larger structural variants, together with rich annotations. The format was developed with the primary intention to represent human genetic variation, but its use is not restricted >to diploid genomes and can be used in different contexts as well. Its flexibility and user extensibility allows representation of a wide variety of genomic variation with respect to a single reference sequence.
 
-A VCF file consists of a header section and a data section. The header contains an arbitrary number of metainformation lines, each starting with characters ‘##’, and a TAB delimited field definition line, starting with a single ‘#’ character. The meta-information header lines provide a standardized description of tags and annotations used in the data section. The use of meta-information allows the information stored within a VCF file to be tailored to the dataset in question. It can be also used to provide information about the means of file creation, date of creation, version of the reference sequence, software used and any other information relevant to the history of the file. The field definition line names eight mandatory columns, corresponding to data columns representing the chromosome (CHROM), a 1-based position of the start of the variant (POS), unique identifiers of the variant (ID), the reference allele (REF), a comma separated list of  alternate non-reference alleles (ALT), a phred-scaled quality score (QUAL), site filtering information (FILTER) and a semicolon separated list of additional, user extensible annotation (INFO). In addition, if samples are present in the file, the mandatory header columns are followed by a FORMAT column and an arbitrary number of sample IDs that define the samples included in the VCF file. The FORMAT column is used to define  the information contained within each subsequent genotype column, which consists of a colon separated list of fields. For example, the FORMAT field GT:GQ:DP in the fourth data entry of Figure 1a indicates that the subsequent entries contain information regarding the genotype, genotype quality and  read depth for each sample. All data lines are TAB delimited and the number of fields in each data line must match the number of fields in the header line. It is strongly recommended that all annotation tags used are declared in the VCF header section.
+A VCF file consists of a header section and a data section. The header contains an arbitrary number of metainformation lines, each starting with characters ‘##’, and a TAB delimited field definition line, starting with a single ‘#’ character. The meta-information header lines provide a standardized description of tags and annotations used in the data section. The use of meta-information allows the information stored within a VCF file to be tailored to the dataset in question. It can be also used to provide information about the means of file creation, date of creation, version of the reference sequence, software used and any other information relevant to the history of the file. The field definition line names eight mandatory columns, corresponding to data columns representing the chromosome (CHROM), a 1-based position of the start of the variant (POS), unique identifiers of the variant (ID), the reference allele (REF), a comma separated list of alternate non-reference alleles (ALT), a phred-scaled quality score (QUAL), site filtering information (FILTER) and a semicolon separated list of additional, user extensible annotation (INFO). In addition, if samples are present in the file, the mandatory header columns are followed by a FORMAT column and an arbitrary number of sample IDs that define the samples included in the VCF file. The FORMAT column is used to define the information contained within each subsequent genotype column, which consists of a colon separated list of fields. For example, the FORMAT field GT:GQ:DP in the fourth data entry of Figure 1a indicates that the subsequent entries contain information regarding the genotype, genotype quality and read depth for each sample. All data lines are TAB delimited and the number of fields in each data line must match the number of fields in the header line. It is strongly recommended that all annotation tags used are declared in the VCF header section.
 
 ![a) Example of valid VCF. The header lines ##fileformat and #CHROM are mandatory, the rest is optional but strongly recommended. Each line of the body describes variants present in the sampled population at one genomic position or region. All alternate alleles are listed in the ALT column and referenced from the genotype fields as 1-based indexes to
 this list; the reference haplotype is designated as 0. For multiploid data, the separator indicates whether the data are phased (|) or unphased (/). Thus, the two alleles C and G at the positions 2 and 5 in this figure occur on the same chromosome in SAMPLE1. The first data line shows an example of a deletion (present in SAMPLE1) and a replacement of
 two bases by another base (SAMPLE2); the second line shows a SNP and an insertion; the third a SNP; the fourth a large structural variant described by the annotation in the INFO column, the coordinate is that of the base before the variant. (b–f ) Alignments and VCF representations of different sequence variants: SNP, insertion, deletion, replacement, and a large deletion. The REF columns shows the reference bases replaced by the haplotype in the ALT column. The coordinate refers to the first reference base. (g) Users are advised to use simplest representation possible and lowest coordinate in cases where the position is ambiguous.](../../../img/fig4.png)
 
-** Figure 4 **. (a) Example of valid VCF. The header lines ##fileformat and #CHROM are mandatory, the rest is optional but strongly recommended. Each line of the body describes variants present in the sampled population at one genomic position or region. All alternate alleles are listed in the ALT column and referenced from the genotype fields as 1-based indexes to this list; the reference haplotype is designated as 0. For multiploid data, the separator indicates whether the data are phased (|) or unphased (/). Thus, the two alleles C and G at the positions 2 and 5 in this figure occur on the same chromosome in SAMPLE1. The first data line shows an example of a deletion (present in SAMPLE1) and a replacement of two bases by another base (SAMPLE2); the second line shows a SNP and an insertion; the third a SNP; the fourth a large structural variant described by the annotation in the INFO column, the coordinate is that of the base before the variant. (b–f ) Alignments and VCF representations of different sequence variants: SNP, insertion, deletion, replacement, and a large deletion. The REF columns shows the reference bases replaced by the haplotype in the ALT column. The coordinate refers to the first reference base. (g) Users are advised to use simplest representation possible and lowest coordinate in cases where the position is ambiguous.
+Figure 4. (a) Example of valid VCF. The header lines ##fileformat and #CHROM are mandatory, the rest is optional but strongly recommended. Each line of the body describes variants present in the sampled population at one genomic position or region. All alternate alleles are listed in the ALT column and referenced from the genotype fields as 1-based indexes to this list; the reference haplotype is designated as 0. For multiploid data, the separator indicates whether the data are phased (|) or unphased (/). Thus, the two alleles C and G at the positions 2 and 5 in this figure occur on the same chromosome in SAMPLE1. The first data line shows an example of a deletion (present in SAMPLE1) and a replacement of two bases by another base (SAMPLE2); the second line shows a SNP and an insertion; the third a SNP; the fourth a large structural variant described by the annotation in the INFO column, the coordinate is that of the base before the variant. (b–f) Alignments and VCF representations of different sequence variants: SNP, insertion, deletion, replacement, and a large deletion. The REF column shows the reference bases replaced by the haplotype in the ALT column. The coordinate refers to the first reference base. (g) Users are advised to use the simplest representation possible and the lowest coordinate in cases where the position is ambiguous.
 
 ### Annotating
 
-** Component: ** HPG-Variant
+Component: HPG-Variant.
 
 The functional consequences of every variant found are then annotated using the HPG-Variant software, which extracts from CellBase, the Knowledge database, all the information relevant on the predicted pathologic effect of the variants.
 
 VARIANT (VARIant Analysis Tool) (4) reports information on the variants found that include consequence type and annotations taken from different databases and repositories (SNPs and variants from dbSNP and 1000 genomes, and disease-related variants from the Genome-Wide Association Study (GWAS) catalog, Online Mendelian Inheritance in Man (OMIM), Catalog of Somatic Mutations in Cancer (COSMIC) mutations, etc. VARIANT also produces a rich variety of annotations that include information on the regulatory (transcription factor or miRNAbinding sites, etc.) or structural roles, or on the selective pressures on the sites affected by the variation. This information allows extending the conventional reports beyond the coding regions and expands the knowledge on the contribution of non-coding or synonymous variants to the phenotype studied.
 
-** Input: ** VCF
+Input: VCF
 
-** Output: ** The output of this step is the Variant Calling Format (VCF) file, which contains changes with respect to the reference genome with the corresponding QC and functional annotations.
+Output: The output of this step is the Variant Call Format (VCF) file, which contains changes with respect to the reference genome with the corresponding QC and functional annotations.
 
 #### CellBase
 
 CellBase(5) is a relational database integrates biological information from different sources and includes:
 
-** Core features: **
+Core features
 
 We took genome sequences, genes, transcripts, exons, cytobands or cross references (xrefs) identifiers (IDs) from Ensembl (6). Protein information including sequences, xrefs or protein features (natural variants, mutagenesis sites, post-translational modifications, etc.) were imported from UniProt (7).
 
-** Regulatory: **
+Regulatory
 
 CellBase imports miRNA from miRBase (8); curated and non-curated miRNA targets from miRecords (9), miRTarBase (10),
 TargetScan(11) and microRNA.org (12) and CpG islands and conserved regions from the UCSC database (13).
 
-** Functional annotation **
+Functional annotation
 
 OBO Foundry (14) develops many biomedical ontologies that are implemented in OBO format. We designed a SQL schema to store these OBO ontologies and 30 ontologies were imported. OBO ontology term annotations were taken from Ensembl (6). InterPro (15) annotations were also imported.
 
-** Variation **
+Variation
 
 CellBase includes SNPs from dbSNP (16)^; SNP population frequencies from HapMap (17), 1000 genomes project (18) and Ensembl (6); phenotypically annotated SNPs were imported from NHRI GWAS Catalog (19),HGMD (20), Open Access GWAS Database (21), UniProt (7) and OMIM (22); mutations from COSMIC (23) and structural variations from Ensembl (6).
 
-** Systems biology **
+Systems biology
 
 We also import systems biology information like interactome information from IntAct (24). Reactome (25) stores pathway and interaction information in BioPAX (26) format. BioPAX data exchange format enables the integration of diverse pathway
 resources. We successfully solved the problem of storing data released in BioPAX format into a SQL relational schema, which allowed us importing Reactome in CellBase.
@@ -173,13 +173,13 @@ resources. We successfully solved the problem of storing data released in BioPAX
 
 ## Usage
 
-First of all, we should load  ngsPipeline module:
+First of all, we should load the ngsPipeline module:
 
 ```bash
     $ module load ngsPipeline
 ```
 
-This command will load  python/2.7.5 module and all the required modules (hpg-aligner, gatk, etc)
+This command will load the python/2.7.5 module and all the other required modules (hpg-aligner, gatk, etc.)
 
 If we launch ngsPipeline with ‘-h’, we will get the usage help:
 
@@ -192,7 +192,7 @@ If we launch ngsPipeline with ‘-h’, we will get the usage help:
     Python pipeline
 
     optional arguments:
-      -h, --help  show this help message and exit
+      -h, --help show this help message and exit
       -i INPUT, --input INPUT
       -o OUTPUT, --output OUTPUT
         Output Data directory
@@ -212,26 +212,26 @@ If we launch ngsPipeline with ‘-h’, we will get the usage help:
 Let us see a brief description of the arguments:
 
 ```bash
+      -h, --help. Show the help.
+      -h --help. Show the help.
 
-      *-i, --input.* The input data directory. This directory must to have a special structure. We have to create one folder per sample (with the same name). These folders will host the fastq files. These fastq files must have the following pattern “sampleName” + “_” + “1 or 2” + “.fq”. 1 for the first pair (in paired-end sequences), and 2 for the
+      -i, --input. The input data directory. This directory must have a special structure. We have to create one folder per sample (with the same name). These folders will host the fastq files. These fastq files must have the following pattern “sampleName” + “_” + “1 or 2” + “.fq”. 1 for the first pair (in paired-end sequences), and 2 for the
 second one.
 
-      *-o , --output.* The output folder. This folder will contain all the intermediate and final folders. When the pipeline will be executed completely, we could remove the intermediate folders and keep only the final one (with the VCF file containing all the variants)
+      -o, --output. The output folder. This folder will contain all the intermediate and final folders. When the pipeline has finished, we can remove the intermediate folders and keep only the final one (with the VCF file containing all the variants).
 
-      *-p , --ped*. The ped file with the pedigree. This file contains all the sample names. These names must coincide with the names of the input folders. If our input folder contains more samples than the .ped file, the pipeline will use only the samples from the .ped file.
+      -p, --ped. The ped file with the pedigree. This file contains all the sample names. These names must coincide with the names of the input folders. If our input folder contains more samples than the .ped file, the pipeline will use only the samples from the .ped file.
 
-      *--email.* Email for PBS notifications.
+      --email. Email for PBS notifications.
 
-      *--prefix.* Prefix for PBS Job names.
+      --prefix. Prefix for PBS Job names.
 
-      *-s, --start & -e, --end.*  Initial and final stage. If we want to launch the pipeline in a specific stage we must use -s. If we want to end the pipeline in a specific stage we must use -e.
+      -s, --start & -e, --end. Initial and final stage. If we want to start the pipeline at a specific stage we must use -s. If we want to end the pipeline at a specific stage we must use -e.
 
-      *--log*. Using log argument NGSpipeline will prompt all the logs to this file.
+      --log. Using the --log argument, ngsPipeline will write all the logs to this file.
 
-      *--project*>. Project ID of your supercomputer allocation.
+      --project. Project ID of your supercomputer allocation.
 
-      *--queue*. [Queue](../../resource-allocation-and-job-execution/introduction.html) to run the jobs in.
+      --queue. [Queue](../../resource-allocation-and-job-execution/introduction.html) to run the jobs in.
 ```
 
 Input, output and ped arguments are mandatory. If the output folder does not exist, the pipeline will create it.
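+
+A hypothetical invocation could then look like this (paths, ped file name, project ID and queue are placeholders; see the argument descriptions above):
+
+```bash
+    $ ngsPipeline -i /scratch/$USER/omics/sample_data -o /scratch/$USER/omics/results -p /scratch/$USER/omics/sample_data/file.ped --project PROJECT_ID --queue qprod
+```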
@@ -290,47 +290,47 @@ If we want to re-launch the pipeline from stage 4 until stage 20 we should use t
 
 The pipeline calls the following tools
 
-*   [fastqc](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/), quality control tool for high throughput sequence data.
-*   [gatk](https://www.broadinstitute.org/gatk/), The Genome Analysis Toolkit or GATK is a software package developed at
+* [fastqc](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/), quality control tool for high throughput sequence data.
+* [gatk](https://www.broadinstitute.org/gatk/), The Genome Analysis Toolkit or GATK is a software package developed at
       the Broad Institute to analyze high-throughput sequencing data. The toolkit offers a wide variety of tools, with a primary focus on variant discovery and genotyping as well as strong emphasis on data quality assurance. Its robust architecture, powerful processing engine and high-performance computing features make it capable of taking on projects of any size.
-*   [hpg-aligner](https://github.com/opencb-hpg/hpg-aligner), HPG Aligner has been designed to align short and long reads with high sensitivity, therefore any number of mismatches or indels are allowed. HPG Aligner implements and combines two well known algorithms: _Burrows-Wheeler Transform_ (BWT) to speed-up mapping high-quality reads, and _Smith-Waterman_> (SW) to increase sensitivity when reads cannot be mapped using BWT.
-*   [hpg-fastq](http://docs.bioinfo.cipf.es/projects/fastqhpc/wiki), a quality control tool for high throughput sequence data.
-*   [hpg-variant](http://docs.bioinfo.cipf.es/projects/hpg-variant/wiki), The HPG Variant suite is an ambitious project aimed to provide a complete suite of tools to work with genomic variation data, from VCF tools to variant profiling or genomic statistics. It is being implemented using High Performance Computing technologies to provide the best performance possible.
-*   [picard](http://picard.sourceforge.net/), Picard comprises Java-based command-line utilities that manipulate SAM files, and a Java API (HTSJDK) for creating new programs that read and write SAM files. Both SAM text format and SAM binary (BAM) format are supported.
-*   [samtools](http://samtools.sourceforge.net/samtools-c.shtml), SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.
-*   [snpEff](http://snpeff.sourceforge.net/), Genetic variant annotation and effect prediction toolbox.
+* [hpg-aligner](https://github.com/opencb-hpg/hpg-aligner), HPG Aligner has been designed to align short and long reads with high sensitivity, therefore any number of mismatches or indels are allowed. HPG Aligner implements and combines two well known algorithms: _Burrows-Wheeler Transform_ (BWT) to speed up mapping high-quality reads, and _Smith-Waterman_ (SW) to increase sensitivity when reads cannot be mapped using BWT.
+* [hpg-fastq](http://docs.bioinfo.cipf.es/projects/fastqhpc/wiki), a quality control tool for high throughput sequence data.
+* [hpg-variant](http://docs.bioinfo.cipf.es/projects/hpg-variant/wiki), The HPG Variant suite is an ambitious project aimed to provide a complete suite of tools to work with genomic variation data, from VCF tools to variant profiling or genomic statistics. It is being implemented using High Performance Computing technologies to provide the best performance possible.
+* [picard](http://picard.sourceforge.net/), Picard comprises Java-based command-line utilities that manipulate SAM files, and a Java API (HTSJDK) for creating new programs that read and write SAM files. Both SAM text format and SAM binary (BAM) format are supported.
+* [samtools](http://samtools.sourceforge.net/samtools-c.shtml), SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.
+* [snpEff](http://snpeff.sourceforge.net/), Genetic variant annotation and effect prediction toolbox.
 
 This listing show which tools are used in each step of the pipeline
 
-*   stage-00: fastqc
-*   stage-01: hpg_fastq
-*   stage-02: fastqc
-*   stage-03: hpg_aligner and samtools
-*   stage-04: samtools
-*   stage-05: samtools
-*   stage-06: fastqc
-*   stage-07: picard
-*   stage-08: fastqc
-*   stage-09: picard
-*   stage-10: gatk
-*   stage-11: gatk
-*   stage-12: gatk
-*   stage-13: gatk
-*   stage-14: gatk
-*   stage-15: gatk
-*   stage-16: samtools
-*   stage-17: samtools
-*   stage-18: fastqc
-*   stage-19: gatk
-*   stage-20: gatk
-*   stage-21: gatk
-*   stage-22: gatk
-*   stage-23: gatk
-*   stage-24: hpg-variant
-*   stage-25: hpg-variant
-*   stage-26: snpEff
-*   stage-27: snpEff
-*   stage-28: hpg-variant
+* stage-00: fastqc
+* stage-01: hpg_fastq
+* stage-02: fastqc
+* stage-03: hpg_aligner and samtools
+* stage-04: samtools
+* stage-05: samtools
+* stage-06: fastqc
+* stage-07: picard
+* stage-08: fastqc
+* stage-09: picard
+* stage-10: gatk
+* stage-11: gatk
+* stage-12: gatk
+* stage-13: gatk
+* stage-14: gatk
+* stage-15: gatk
+* stage-16: samtools
+* stage-17: samtools
+* stage-18: fastqc
+* stage-19: gatk
+* stage-20: gatk
+* stage-21: gatk
+* stage-22: gatk
+* stage-23: gatk
+* stage-24: hpg-variant
+* stage-25: hpg-variant
+* stage-26: snpEff
+* stage-27: snpEff
+* stage-28: hpg-variant
 
 ## Interpretation
 
@@ -338,54 +338,54 @@ The output folder contains all the subfolders with the intermediate data. This f
 
 ![TEAM upload panel. Once the file has been uploaded, a panel must be chosen from the Panel list. Then, pressing the Run button the diagnostic process starts.]\((../../../img/fig7.png)
 
-** Figure 7. ** _TEAM upload panel._ _Once the file has been uploaded, a panel must be chosen from the Panel_ list. Then, pressing the Run button the diagnostic process starts.
+**Figure 7.** _TEAM upload panel._ Once the file has been uploaded, a panel must be chosen from the Panel list. Then, pressing the Run button the diagnostic process starts.
 
 Once the file has been uploaded, a panel must be chosen from the Panel list. Then, pressing the Run button the diagnostic process starts. TEAM searches first for known diagnostic mutation(s) taken from four databases: HGMD-public (20), [HUMSAVAR](http://www.uniprot.org/docs/humsavar), ClinVar (29) and COSMIC (23).
 
 ![The panel manager. The elements used to define a panel are (A) disease terms, (B) diagnostic mutations and (C) genes. Arrows represent actions that can be taken in the panel manager. Panels can be defined by using the known mutations and genes of a particular disease. This can be done by dragging them to the Primary Diagnostic box (action D). This action, in addition to defining the diseases in the Primary Diagnostic box, automatically adds the corresponding genes to the Genes box. The panels can be customized by adding new genes (action F) or removing undesired genes (action G). New disease mutations can be added independently or associated to an already existing disease term (action E). Disease terms can be removed by simply dragging themback (action H).](../../../img/fig7x.png)
 
-** Figure 7. ** The panel manager. The elements used to define a panel are (** A **) disease terms, (** B **) diagnostic mutations and (** C **) genes. Arrows represent actions that can be taken in the panel manager. Panels can be defined by using the known mutations and genes of a particular disease. This can be done by dragging them to the ** Primary Diagnostic ** box (action ** D **). This action, in addition to defining the diseases in the ** Primary Diagnostic ** box, automatically adds the corresponding genes to the ** Genes ** box. The panels can be customized by adding new genes (action ** F **) or removing undesired genes (action **G**). New disease mutations can be added independently or associated to an already existing disease term (action ** E **). Disease terms can be removed by simply dragging them back (action ** H **).
+**Figure 7.** The panel manager. The elements used to define a panel are (**A**) disease terms, (**B**) diagnostic mutations and (**C**) genes. Arrows represent actions that can be taken in the panel manager. Panels can be defined by using the known mutations and genes of a particular disease. This can be done by dragging them to the **Primary Diagnostic** box (action **D**). This action, in addition to defining the diseases in the **Primary Diagnostic** box, automatically adds the corresponding genes to the **Genes** box. The panels can be customized by adding new genes (action **F**) or removing undesired genes (action **G**). New disease mutations can be added independently or associated to an already existing disease term (action **E**). Disease terms can be removed by simply dragging them back (action **H**).
 
 For variant discovering/filtering we should upload the VCF file into BierApp by using the following form:
 
-_![BierApp VCF upload panel. It is recommended to choose a name for the job as well as a description.](../../../img/fig8.png)_
+![BierApp VCF upload panel. It is recommended to choose a name for the job as well as a description.](../../../img/fig8.png)
 
-** Figure 8 **. \*BierApp VCF upload panel. It is recommended to choose a name for the job as well as a description \*\*.
+**Figure 8.** _BierApp VCF upload panel._ It is recommended to choose a name for the job as well as a description.
 
 Each prioritization (‘job’) has three associated screens that facilitate the filtering steps. The first one, the ‘Summary’ tab, displays a statistic of the data set analyzed, containing the samples analyzed, the number and types of variants found and its distribution according to consequence types. The second screen, in the ‘Variants and effect’ tab, is the actual filtering tool, and the third one, the ‘Genome view’ tab, offers a representation of the selected variants within the genomic context provided by an embedded version of the Genome Maps Tool (30).
 
 ![This picture shows all the information associated to the variants. If a variant has an associated phenotype we could see it in the last column. In this case, the variant 7:132481242 CT is associated to the phenotype: large intestine tumor.](../../../img/fig9.png)
 
-** Figure 9 **. This picture shows all the information associated to the variants. If a variant has an associated phenotype we could see it in the last column. In this case, the variant 7:132481242 CT is associated to the phenotype: large intestine tumor.
+**Figure 9.** This picture shows all the information associated with the variants. If a variant has an associated phenotype, it can be seen in the last column. In this case, the variant 7:132481242 CT is associated with the phenotype: large intestine tumor.
 
 ## References
 
-1.  Heng Li, Bob Handsaker, Alec Wysoker, Tim Fennell, Jue Ruan, Nils Homer, Gabor Marth5, Goncalo Abecasis6, Richard Durbin and 1000 Genome Project Data Processing Subgroup: The Sequence Alignment/Map format and SAMtools. Bioinformatics 2009, 25: 2078-2079.
-2.  McKenna A, Hanna M, Banks E, Sivachenko A, Cibulskis K, Kernytsky A, Garimella K, Altshuler D, Gabriel S, Daly M, DePristo MA: The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. _Genome Res_ >2010, 20:1297-1303.
-3.  Petr Danecek, Adam Auton, Goncalo Abecasis, Cornelis A. Albers, Eric Banks, Mark A. DePristo, Robert E. Handsaker, Gerton Lunter, Gabor T. Marth, Stephen T. Sherry, Gilean McVean, Richard Durbin, and 1000 Genomes Project Analysis Group. The variant call format and VCFtools. Bioinformatics 2011, 27: 2156-2158.
-4.  Medina I, De Maria A, Bleda M, Salavert F, Alonso R, Gonzalez CY, Dopazo J: VARIANT: Command Line, Web service and Web interface for fast and accurate functional characterization of variants found by Next-Generation Sequencing. Nucleic Acids Res 2012, 40:W54-58.
-5.  Bleda M, Tarraga J, de Maria A, Salavert F, Garcia-Alonso L, Celma M, Martin A, Dopazo J, Medina I: CellBase, a  comprehensive collection of RESTful web services for retrieving relevant biological information from heterogeneous sources. Nucleic Acids Res 2012, 40:W609-614.
-6.  Flicek,P., Amode,M.R., Barrell,D., Beal,K., Brent,S., Carvalho-Silva,D., Clapham,P., Coates,G., Fairley,S., Fitzgerald,S. et al. (2012) Ensembl 2012. Nucleic Acids Res., 40, D84–D90.
-7.  UniProt Consortium. (2012) Reorganizing the protein space at the Universal Protein Resource (UniProt). Nucleic   Acids Res., 40, D71–D75.
-8.  Kozomara,A. and Griffiths-Jones,S. (2011) miRBase: integrating microRNA annotation and deep-sequencing data. Nucleic Acids Res., 39, D152–D157.
-9.  Xiao,F., Zuo,Z., Cai,G., Kang,S., Gao,X. and Li,T. (2009) miRecords: an integrated resource for microRNA-target interactions. Nucleic Acids Res., 37, D105–D110.
-10. Hsu,S.D., Lin,F.M., Wu,W.Y., Liang,C., Huang,W.C., Chan,W.L., Tsai,W.T., Chen,G.Z., Lee,C.J., Chiu,C.M. et al. (2011) miRTarBase: a database curates experimentally validated microRNA-target interactions. Nucleic Acids Res., 39, D163–D169.
-11. Friedman,R.C., Farh,K.K., Burge,C.B. and Bartel,D.P. (2009) Most mammalian mRNAs are conserved targets of microRNAs. Genome Res., 19, 92–105. 12. Betel,D., Wilson,M., Gabow,A., Marks,D.S. and Sander,C. (2008) The microRNA.org resource: targets and expression. Nucleic Acids Res., 36, D149–D153.
-12. Dreszer,T.R., Karolchik,D., Zweig,A.S., Hinrichs,A.S., Raney,B.J., Kuhn,R.M., Meyer,L.R., Wong,M., Sloan,C.A., Rosenbloom,K.R. et al. (2012) The UCSC genome browser database: extensions and updates 2011. Nucleic Acids Res.,40, D918–D923.
-13. Smith,B., Ashburner,M., Rosse,C., Bard,J., Bug,W., Ceusters,W., Goldberg,L.J., Eilbeck,K., Ireland,A., Mungall,C.J. et al. (2007) The OBO Foundry: coordinated evolution of ontologies to support biomedical data integration. Nat. Biotechnol., 25, 1251–1255.
-14. Hunter,S., Jones,P., Mitchell,A., Apweiler,R., Attwood,T.K.,Bateman,A., Bernard,T., Binns,D., Bork,P., Burge,S. et al. (2012) InterPro in 2011: new developments in the family and domain prediction  database. Nucleic Acids Res.,40, D306–D312.
-15. Sherry,S.T., Ward,M.H., Kholodov,M., Baker,J., Phan,L., Smigielski,E.M. and Sirotkin,K. (2001) dbSNP: the NCBI database of genetic variation. Nucleic Acids Res., 29, 308–311.
-16. Altshuler,D.M., Gibbs,R.A., Peltonen,L., Dermitzakis,E., Schaffner,S.F., Yu,F., Bonnen,P.E., de Bakker,P.I.,  Deloukas,P., Gabriel,S.B. et al. (2010) Integrating common and rare genetic variation in diverse human populations. Nature, 467, 52–58.
-17. 1000 Genomes Project Consortium. (2010) A map of human genome variation from population-scale sequencing. Nature, 467, 1061–1073.
-18. Hindorff,L.A., Sethupathy,P., Junkins,H.A., Ramos,E.M., Mehta,J.P., Collins,F.S. and Manolio,T.A. (2009)   Potential etiologic and functional implications of genome-wide association loci for human diseases and traits. Proc. Natl Acad. Sci. USA, 106, 9362–9367.
-19. Stenson,P.D., Ball,E.V., Mort,M., Phillips,A.D., Shiel,J.A., Thomas,N.S., Abeysinghe,S., Krawczak,M. and Cooper,D.N. (2003) Human gene mutation database (HGMD): 2003 update. Hum. Mutat., 21, 577–581.
-20. Johnson,A.D. and O’Donnell,C.J. (2009) An open access database of genome-wide association results. BMC Med. Genet, 10, 6.
-21. McKusick,V. (1998) A Catalog of Human Genes and Genetic Disorders, 12th edn. John Hopkins University Press,Baltimore, MD.
-22. Forbes,S.A., Bindal,N., Bamford,S., Cole,C., Kok,C.Y., Beare,D., Jia,M., Shepherd,R., Leung,K., Menzies,A. et al. (2011) COSMIC: mining complete cancer genomes in the catalogue of somatic mutations in cancer. Nucleic Acids Res., 39, D945–D950.
-23. Kerrien,S., Aranda,B., Breuza,L., Bridge,A., Broackes-Carter,F., Chen,C., Duesbury,M., Dumousseau,M., Feuermann,M., Hinz,U. et al. (2012) The Intact molecular interaction database in 2012. Nucleic Acids Res., 40, D841–D846.
-24. Croft,D., O’Kelly,G., Wu,G., Haw,R., Gillespie,M., Matthews,L., Caudy,M., Garapati,P., Gopinath,G., Jassal,B. et al. (2011) Reactome: a database of reactions, pathways and biological processes. Nucleic Acids Res.,    39, D691–D697.
-25. Demir,E., Cary,M.P., Paley,S., Fukuda,K., Lemer,C., Vastrik,I.,Wu,G., D’Eustachio,P., Schaefer,C., Luciano,J. et al. (2010) The BioPAX community standard for pathway data sharing. Nature Biotechnol., 28, 935–942.
-26. Alemán Z, García-García F, Medina I, Dopazo J (2014): A web tool for the design and management of panels of genes for targeted enrichment and massive sequencing for clinical applications. Nucleic Acids Res 42: W83-7.
-27. [Alemán A](http://www.ncbi.nlm.nih.gov/pubmed?term=Alem%C3%A1n%20A%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Garcia-Garcia F](http://www.ncbi.nlm.nih.gov/pubmed?term=Garcia-Garcia%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Salavert F](http://www.ncbi.nlm.nih.gov/pubmed?term=Salavert%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Medina I](http://www.ncbi.nlm.nih.gov/pubmed?term=Medina%20I%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Dopazo J](http://www.ncbi.nlm.nih.gov/pubmed?term=Dopazo%20J%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)> (2014). A web-based interactive framework to assist in the prioritization of disease candidate genes in whole-exome sequencing studies. [Nucleic Acids Res.](http://www.ncbi.nlm.nih.gov/pubmed/?term=BiERapp "Nucleic acids research.")>42 :W88-93.
-28. Landrum,M.J., Lee,J.M., Riley,G.R., Jang,W., Rubinstein,W.S., Church,D.M. and Maglott,D.R. (2014) ClinVar: public archive of relationships among sequence variation and human phenotype. Nucleic Acids Res., 42, D980–D985.
-29. Medina I, Salavert F, Sanchez R, de Maria A, Alonso R, Escobar P, Bleda M, Dopazo J: Genome Maps, a new generation genome browser. Nucleic Acids Res 2013, 41:W41-46.
+1. Heng Li, Bob Handsaker, Alec Wysoker, Tim Fennell, Jue Ruan, Nils Homer, Gabor Marth, Goncalo Abecasis, Richard Durbin and 1000 Genome Project Data Processing Subgroup: The Sequence Alignment/Map format and SAMtools. Bioinformatics 2009, 25: 2078-2079.
+1. McKenna A, Hanna M, Banks E, Sivachenko A, Cibulskis K, Kernytsky A, Garimella K, Altshuler D, Gabriel S, Daly M, DePristo MA: The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. _Genome Res_ 2010, 20:1297-1303.
+1. Petr Danecek, Adam Auton, Goncalo Abecasis, Cornelis A. Albers, Eric Banks, Mark A. DePristo, Robert E. Handsaker, Gerton Lunter, Gabor T. Marth, Stephen T. Sherry, Gilean McVean, Richard Durbin, and 1000 Genomes Project Analysis Group. The variant call format and VCFtools. Bioinformatics 2011, 27: 2156-2158.
+1. Medina I, De Maria A, Bleda M, Salavert F, Alonso R, Gonzalez CY, Dopazo J: VARIANT: Command Line, Web service and Web interface for fast and accurate functional characterization of variants found by Next-Generation Sequencing. Nucleic Acids Res 2012, 40:W54-58.
+1. Bleda M, Tarraga J, de Maria A, Salavert F, Garcia-Alonso L, Celma M, Martin A, Dopazo J, Medina I: CellBase, a comprehensive collection of RESTful web services for retrieving relevant biological information from heterogeneous sources. Nucleic Acids Res 2012, 40:W609-614.
+1. Flicek,P., Amode,M.R., Barrell,D., Beal,K., Brent,S., Carvalho-Silva,D., Clapham,P., Coates,G., Fairley,S., Fitzgerald,S. et al. (2012) Ensembl 2012. Nucleic Acids Res., 40, D84–D90.
+1. UniProt Consortium. (2012) Reorganizing the protein space at the Universal Protein Resource (UniProt). Nucleic Acids Res., 40, D71–D75.
+1. Kozomara,A. and Griffiths-Jones,S. (2011) miRBase: integrating microRNA annotation and deep-sequencing data. Nucleic Acids Res., 39, D152–D157.
+1. Xiao,F., Zuo,Z., Cai,G., Kang,S., Gao,X. and Li,T. (2009) miRecords: an integrated resource for microRNA-target interactions. Nucleic Acids Res., 37, D105–D110.
+1. Hsu,S.D., Lin,F.M., Wu,W.Y., Liang,C., Huang,W.C., Chan,W.L., Tsai,W.T., Chen,G.Z., Lee,C.J., Chiu,C.M. et al. (2011) miRTarBase: a database curates experimentally validated microRNA-target interactions. Nucleic Acids Res., 39, D163–D169.
+1. Friedman,R.C., Farh,K.K., Burge,C.B. and Bartel,D.P. (2009) Most mammalian mRNAs are conserved targets of microRNAs. Genome Res., 19, 92–105.
+1. Betel,D., Wilson,M., Gabow,A., Marks,D.S. and Sander,C. (2008) The microRNA.org resource: targets and expression. Nucleic Acids Res., 36, D149–D153.
+1. Dreszer,T.R., Karolchik,D., Zweig,A.S., Hinrichs,A.S., Raney,B.J., Kuhn,R.M., Meyer,L.R., Wong,M., Sloan,C.A., Rosenbloom,K.R. et al. (2012) The UCSC genome browser database: extensions and updates 2011. Nucleic Acids Res., 40, D918–D923.
+1. Smith,B., Ashburner,M., Rosse,C., Bard,J., Bug,W., Ceusters,W., Goldberg,L.J., Eilbeck,K., Ireland,A., Mungall,C.J. et al. (2007) The OBO Foundry: coordinated evolution of ontologies to support biomedical data integration. Nat. Biotechnol., 25, 1251–1255.
+1. Hunter,S., Jones,P., Mitchell,A., Apweiler,R., Attwood,T.K., Bateman,A., Bernard,T., Binns,D., Bork,P., Burge,S. et al. (2012) InterPro in 2011: new developments in the family and domain prediction database. Nucleic Acids Res., 40, D306–D312.
+1. Sherry,S.T., Ward,M.H., Kholodov,M., Baker,J., Phan,L., Smigielski,E.M. and Sirotkin,K. (2001) dbSNP: the NCBI database of genetic variation. Nucleic Acids Res., 29, 308–311.
+1. Altshuler,D.M., Gibbs,R.A., Peltonen,L., Dermitzakis,E., Schaffner,S.F., Yu,F., Bonnen,P.E., de Bakker,P.I., Deloukas,P., Gabriel,S.B. et al. (2010) Integrating common and rare genetic variation in diverse human populations. Nature, 467, 52–58.
+1. 1000 Genomes Project Consortium. (2010) A map of human genome variation from population-scale sequencing. Nature, 467, 1061–1073.
+1. Hindorff,L.A., Sethupathy,P., Junkins,H.A., Ramos,E.M., Mehta,J.P., Collins,F.S. and Manolio,T.A. (2009) Potential etiologic and functional implications of genome-wide association loci for human diseases and traits. Proc. Natl Acad. Sci. USA, 106, 9362–9367.
+1. Stenson,P.D., Ball,E.V., Mort,M., Phillips,A.D., Shiel,J.A., Thomas,N.S., Abeysinghe,S., Krawczak,M. and Cooper,D.N. (2003) Human gene mutation database (HGMD): 2003 update. Hum. Mutat., 21, 577–581.
+1. Johnson,A.D. and O’Donnell,C.J. (2009) An open access database of genome-wide association results. BMC Med. Genet, 10, 6.
+1. McKusick,V. (1998) A Catalog of Human Genes and Genetic Disorders, 12th edn. Johns Hopkins University Press, Baltimore, MD.
+1. Forbes,S.A., Bindal,N., Bamford,S., Cole,C., Kok,C.Y., Beare,D., Jia,M., Shepherd,R., Leung,K., Menzies,A. et al. (2011) COSMIC: mining complete cancer genomes in the catalogue of somatic mutations in cancer. Nucleic Acids Res., 39, D945–D950.
+1. Kerrien,S., Aranda,B., Breuza,L., Bridge,A., Broackes-Carter,F., Chen,C., Duesbury,M., Dumousseau,M., Feuermann,M., Hinz,U. et al. (2012) The IntAct molecular interaction database in 2012. Nucleic Acids Res., 40, D841–D846.
+1. Croft,D., O’Kelly,G., Wu,G., Haw,R., Gillespie,M., Matthews,L., Caudy,M., Garapati,P., Gopinath,G., Jassal,B. et al. (2011) Reactome: a database of reactions, pathways and biological processes. Nucleic Acids Res., 39, D691–D697.
+1. Demir,E., Cary,M.P., Paley,S., Fukuda,K., Lemer,C., Vastrik,I., Wu,G., D’Eustachio,P., Schaefer,C., Luciano,J. et al. (2010) The BioPAX community standard for pathway data sharing. Nature Biotechnol., 28, 935–942.
+1. Alemán Z, García-García F, Medina I, Dopazo J (2014): A web tool for the design and management of panels of genes for targeted enrichment and massive sequencing for clinical applications. Nucleic Acids Res 42: W83-7.
+1. [Alemán A](http://www.ncbi.nlm.nih.gov/pubmed?term=Alem%C3%A1n%20A%5BAuthor%5D&cauthor=true&cauthor_uid=24803668), [Garcia-Garcia F](http://www.ncbi.nlm.nih.gov/pubmed?term=Garcia-Garcia%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668), [Salavert F](http://www.ncbi.nlm.nih.gov/pubmed?term=Salavert%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668), [Medina I](http://www.ncbi.nlm.nih.gov/pubmed?term=Medina%20I%5BAuthor%5D&cauthor=true&cauthor_uid=24803668), [Dopazo J](http://www.ncbi.nlm.nih.gov/pubmed?term=Dopazo%20J%5BAuthor%5D&cauthor=true&cauthor_uid=24803668) (2014). A web-based interactive framework to assist in the prioritization of disease candidate genes in whole-exome sequencing studies. [Nucleic Acids Res.](http://www.ncbi.nlm.nih.gov/pubmed/?term=BiERapp "Nucleic acids research.") 42:W88-93.
+1. Landrum,M.J., Lee,J.M., Riley,G.R., Jang,W., Rubinstein,W.S., Church,D.M. and Maglott,D.R. (2014) ClinVar: public archive of relationships among sequence variation and human phenotype. Nucleic Acids Res., 42, D980–D985.
+1. Medina I, Salavert F, Sanchez R, de Maria A, Alonso R, Escobar P, Bleda M, Dopazo J: Genome Maps, a new generation genome browser. Nucleic Acids Res 2013, 41:W41-46.
diff --git a/docs.it4i/anselm-cluster-documentation/software/openfoam.md b/docs.it4i/anselm-cluster-documentation/software/openfoam.md
index e3509febc4636fb0b4058f1cac6ee510db3f5f14..d7394a6e9282c42fb2ab1dc07ce3395f475dd645 100644
--- a/docs.it4i/anselm-cluster-documentation/software/openfoam.md
+++ b/docs.it4i/anselm-cluster-documentation/software/openfoam.md
@@ -22,10 +22,10 @@ Naming convection of the installed versions is following:
 
 openfoam\<VERSION\>-\<COMPILER\>\<openmpiVERSION\>-\<PRECISION\>
 
-*   \<VERSION\> - version of openfoam
-*   \<COMPILER\> - version of used compiler
-*   \<openmpiVERSION\> - version of used openmpi/impi
-*   \<PRECISION\> - DP/SP – double/single precision
+* \<VERSION\> - version of openfoam
+* \<COMPILER\> - version of used compiler
+* \<openmpiVERSION\> - version of used openmpi/impi
+* \<PRECISION\> - DP/SP – double/single precision
 
 ### Available OpenFOAM Modules
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/paraview.md b/docs.it4i/anselm-cluster-documentation/software/paraview.md
index b7a350368cac78589729f1928b9b1cce9e1dd449..8d7f0552fef1a2a6d10e0f2471a153a3b9632875 100644
--- a/docs.it4i/anselm-cluster-documentation/software/paraview.md
+++ b/docs.it4i/anselm-cluster-documentation/software/paraview.md
@@ -53,7 +53,7 @@ Because a direct connection is not allowed to compute nodes on Anselm, you must
     ssh -TN -L 12345:cn77:11111 username@anselm.it4i.cz
 ```
 
-replace  username with your login and cn77 with the name of compute node your ParaView server is running on (see previous step). If you use PuTTY on Windows, load Anselm connection configuration, t>hen go to Connection-> SSH>->Tunnels to set up the port forwarding. Click Remote radio button. Insert 12345 to Source port textbox. Insert cn77:11111. Click Add button, then Open.
+Replace username with your login and cn77 with the name of the compute node your ParaView server is running on (see previous step). If you use PuTTY on Windows, load the Anselm connection configuration, then go to Connection -> SSH -> Tunnels to set up the port forwarding. Click the Remote radio button. Insert 12345 into the Source port textbox. Insert cn77:11111 into the Destination textbox. Click the Add button, then Open.
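+
+If you prefer the command line on Windows, the plink utility from the PuTTY suite can build the same tunnel as the ssh command above. This is only a sketch; user123 is an illustrative login name:
+
+```bash
+    # user123 stands for your own login
+    plink -N -L 12345:cn77:11111 user123@anselm.it4i.cz
+```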
 
 Now launch ParaView client installed on your desktop PC. Select File->Connect..., click Add Server. Fill in the following :
 
diff --git a/docs.it4i/anselm-cluster-documentation/storage.md b/docs.it4i/anselm-cluster-documentation/storage.md
index 436e6141781c3d7d207e7a30718be58db8995316..13d24491aa0a3ae29d4bafa9ccfae9a98efe4c6e 100644
--- a/docs.it4i/anselm-cluster-documentation/storage.md
+++ b/docs.it4i/anselm-cluster-documentation/storage.md
@@ -22,18 +22,18 @@ If multiple clients try to read and write the same part of a file at the same ti
 
 There is default stripe configuration for Anselm Lustre filesystems. However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance:
 
-1.  stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; default is 1MB for all Anselm Lustre filesystems
-2.  stripe_count the number of OSTs to stripe across; default is 1 for Anselm Lustre filesystems  one can specify -1 to use all OSTs in the filesystem.
-3.  stripe_offset The index of the OST where the first stripe is to be placed; default is -1 which results in random selection; using a non-default value is NOT recommended.
+1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; default is 1MB for all Anselm Lustre filesystems
+1. stripe_count: the number of OSTs to stripe across; default is 1 for Anselm Lustre filesystems; one can specify -1 to use all OSTs in the filesystem.
+1. stripe_offset: the index of the OST where the first stripe is to be placed; default is -1 which results in random selection; using a non-default value is NOT recommended.
 
 !!! note
     Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
 
-Use the lfs getstripe for getting the stripe parameters. Use the lfs setstripe command for setting the stripe parameters to get optimal I/O performance The correct stripe setting depends on your needs and file access patterns. 
+Use the lfs getstripe command to get the stripe parameters. Use the lfs setstripe command to set the stripe parameters and obtain optimal I/O performance. The correct stripe setting depends on your needs and file access patterns.
 
 ```bash
 $ lfs getstripe dir|filename
-$ lfs setstripe -s stripe_size -c stripe_count -o stripe_offset dir|filename 
+$ lfs setstripe -s stripe_size -c stripe_count -o stripe_offset dir|filename
 ```
 
 Example:
@@ -76,27 +76,27 @@ Read more on <http://doc.lustre.org/lustre_manual.xhtml#managingstripingfreespac
 
 ### Lustre on Anselm
 
-The  architecture of Lustre on Anselm is composed of two metadata servers (MDS) and four data/object storage servers (OSS). Two object storage servers are used for file system HOME and another two object storage servers are used for file system SCRATCH.
+The architecture of Lustre on Anselm is composed of two metadata servers (MDS) and four data/object storage servers (OSS). Two object storage servers are used for file system HOME and another two object storage servers are used for file system SCRATCH.
 
  Configuration of the storages
 
-*   HOME Lustre object storage
-   * One disk array NetApp E5400
-   * 22 OSTs
-   * 227 2TB NL-SAS 7.2krpm disks
-   * 22 groups of 10 disks in RAID6 (8+2)
-   * 7 hot-spare disks
-*   SCRATCH Lustre object storage
-   * Two disk arrays NetApp E5400
-   * 10 OSTs
-   * 106 2TB NL-SAS 7.2krpm disks
-   * 10 groups of 10 disks in RAID6 (8+2)
-   * 6 hot-spare disks
-*   Lustre metadata storage
-   * One disk array NetApp E2600
-   * 12 300GB SAS 15krpm disks
-   * 2 groups of 5 disks in RAID5
-   * 2 hot-spare disks
+* HOME Lustre object storage
+  * One disk array NetApp E5400
+  * 22 OSTs
+  * 227 2TB NL-SAS 7.2krpm disks
+  * 22 groups of 10 disks in RAID6 (8+2)
+  * 7 hot-spare disks
+* SCRATCH Lustre object storage
+  * Two disk arrays NetApp E5400
+  * 10 OSTs
+  * 106 2TB NL-SAS 7.2krpm disks
+  * 10 groups of 10 disks in RAID6 (8+2)
+  * 6 hot-spare disks
+* Lustre metadata storage
+  * One disk array NetApp E2600
+  * 12 300GB SAS 15krpm disks
+  * 2 groups of 5 disks in RAID5
+  * 2 hot-spare disks
 
 \###HOME
 
@@ -132,7 +132,7 @@ Default stripe size is 1MB, stripe count is 1. There are 22 OSTs dedicated for t
 The SCRATCH filesystem is mounted in directory /scratch. Users may freely create subdirectories and files on the filesystem. Accessible capacity is 146TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 100TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and deny service to other users. If 100TB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request.
 
 !!! note
-    The Scratch filesystem is intended  for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory.
+    The Scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory.
 
     >Users are advised to save the necessary data from the SCRATCH filesystem to HOME filesystem after the calculations and clean up the scratch files.
 
@@ -166,11 +166,11 @@ Example for Lustre HOME directory:
 ```bash
 $ lfs quota /home
 Disk quotas for user user001 (uid 1234):
-    Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
-         /home  300096       0 250000000       -    2102       0  500000    -
+    Filesystem kbytes   quota   limit   grace   files   quota   limit   grace
+         /home 300096       0 250000000       -    2102       0 500000    -
 Disk quotas for group user001 (gid 1234):
-    Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
-        /home  300096       0       0       -    2102       0       0       -
+    Filesystem kbytes   quota   limit   grace   files   quota   limit   grace
+        /home 300096       0       0       -    2102       0       0       -
 ```
 
 In this example, we view current quota size limit of 250GB and 300MB currently used by user001.
@@ -180,7 +180,7 @@ Example for Lustre SCRATCH directory:
 ```bash
 $ lfs quota /scratch
 Disk quotas for user user001 (uid 1234):
-     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
+     Filesystem kbytes   quota   limit   grace   files   quota   limit   grace
           /scratch       8       0 100000000000       -       3       0       0       -
 Disk quotas for group user001 (gid 1234):
  Filesystem kbytes quota limit grace files quota limit grace
@@ -229,7 +229,7 @@ ACLs on a Lustre file system work exactly like ACLs on any Linux file system. Th
 [vop999@login1.anselm ~]$ umask 027
 [vop999@login1.anselm ~]$ mkdir test
 [vop999@login1.anselm ~]$ ls -ld test
-drwxr-x--- 2 vop999 vop999 4096 Nov  5 14:17 test
+drwxr-x--- 2 vop999 vop999 4096 Nov 5 14:17 test
 [vop999@login1.anselm ~]$ getfacl test
 # file: test
 # owner: vop999
@@ -240,7 +240,7 @@ other::---
 
 [vop999@login1.anselm ~]$ setfacl -m user:johnsm:rwx test
 [vop999@login1.anselm ~]$ ls -ld test
-drwxrwx---+ 2 vop999 vop999 4096 Nov  5 14:17 test
+drwxrwx---+ 2 vop999 vop999 4096 Nov 5 14:17 test
 [vop999@login1.anselm ~]$ getfacl test
 # file: test
 # owner: vop999
@@ -254,7 +254,7 @@ other::---
 
 Default ACL mechanism can be used to replace setuid/setgid permissions on directories. Setting a default ACL on a directory (-d flag to setfacl) will cause the ACL permissions to be inherited by any newly created file or subdirectory within the directory. Refer to this page for more information on Linux ACL:
 
-[http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html ](http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html)
+[http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html](http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html)
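+
+As a brief illustrative sketch (the group projgrp and the directory shared_dir are hypothetical names), a default ACL that lets a collaborating group work with everything created under a shared directory could be set up like this:
+
+```bash
+# give the group access to the directory itself
+[vop999@login1.anselm ~]$ setfacl -m group:projgrp:rwx shared_dir
+# -d sets the default ACL, inherited by newly created files and subdirectories
+[vop999@login1.anselm ~]$ setfacl -d -m group:projgrp:rwx shared_dir
+[vop999@login1.anselm ~]$ getfacl shared_dir
+```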
 
 ## Local Filesystems
 
@@ -267,7 +267,7 @@ Use local scratch in case you need to access large amount of small files during
 
 The local scratch disk is mounted as /lscratch and is accessible to user at /lscratch/$PBS_JOBID directory.
 
-The local scratch filesystem is intended  for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs that access large number of small files within the calculation must use the local scratch filesystem as their working directory. This is required for performance reasons, as frequent access to number of small files may overload the metadata servers (MDS) of the Lustre filesystem.
+The local scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs that access a large number of small files within the calculation must use the local scratch filesystem as their working directory. This is required for performance reasons, as frequent access to a large number of small files may overload the metadata servers (MDS) of the Lustre filesystem.
 
 !!! note
     The local scratch directory /lscratch/$PBS_JOBID will be deleted immediately after the calculation end. Users should take care to save the output data from within the jobscript.
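+
+A minimal jobscript sketch following this advice (the input file, program and output file names are purely illustrative):
+
+```bash
+SCRDIR=/lscratch/$PBS_JOBID
+cp $PBS_O_WORKDIR/input.dat $SCRDIR/        # stage the input onto the local scratch
+cd $SCRDIR
+./myprog input.dat > output.dat             # myprog stands for your own application
+cp output.dat $PBS_O_WORKDIR/               # copy results back before /lscratch/$PBS_JOBID is purged
+```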
@@ -349,7 +349,7 @@ Once registered for CESNET Storage, you may [access the storage](https://du.cesn
 !!! note
     SSHFS: The storage will be mounted like a local hard drive
 
-The SSHFS  provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard drive. Files can be than copied in and out in a usual fashion.
+The SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard drive. Files can then be copied in and out in the usual fashion.
 
 First, create the mount point
 
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md
index 7d243fc01535188dfc754a950c09ed78204146b9..ae60d1e99674ebc8970023c5cd128536c60fa2bc 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md
@@ -18,7 +18,7 @@ Verify:
 ## Start Vncserver
 
 !!! note
-    To access VNC a local vncserver must be  started first and also a tunnel using SSH port forwarding must be established.
+    To access VNC, a local vncserver must be started first and a tunnel using SSH port forwarding must be established.
 
 [See below](vnc.md#linux-example-of-creating-a-tunnel) for the details on SSH tunnels. In this example we use port 61.
 
@@ -26,8 +26,8 @@ You can find ports which are already occupied. Here you can see that ports " /us
 
 ```bash
 [username@login2 ~]$ ps aux | grep Xvnc
-username    5971  0.0  0.0 201072 92564 ?        SN   Sep22   4:19 /usr/bin/Xvnc :79 -desktop login2:79 (username) -auth /home/gre196/.Xauthority -geometry 1024x768 -rfbwait 30000 -rfbauth /home/username/.vnc/passwd -rfbport 5979 -fp catalogue:/etc/X11/fontpath.d -pn
-username    10296  0.0  0.0 131772 21076 pts/29   SN   13:01   0:01 /usr/bin/Xvnc :60 -desktop login2:61 (username) -auth /home/username/.Xauthority -geometry 1600x900 -depth 16 -rfbwait 30000 -rfbauth /home/jir13/.vnc/passwd -rfbport 5960 -fp catalogue:/etc/X11/fontpath.d -pn
+username    5971 0.0 0.0 201072 92564 ?        SN   Sep22   4:19 /usr/bin/Xvnc :79 -desktop login2:79 (username) -auth /home/gre196/.Xauthority -geometry 1024x768 -rfbwait 30000 -rfbauth /home/username/.vnc/passwd -rfbport 5979 -fp catalogue:/etc/X11/fontpath.d -pn
+username    10296 0.0 0.0 131772 21076 pts/29   SN   13:01   0:01 /usr/bin/Xvnc :60 -desktop login2:61 (username) -auth /home/username/.Xauthority -geometry 1600x900 -depth 16 -rfbwait 30000 -rfbauth /home/jir13/.vnc/passwd -rfbport 5960 -fp catalogue:/etc/X11/fontpath.d -pn
 .....
 ```
 
@@ -58,7 +58,7 @@ Another command:
 ```bash
 [username@login2 .vnc]$  ps aux | grep Xvnc
 
-username    10296  0.0  0.0 131772 21076 pts/29   SN   13:01   0:01 /usr/bin/Xvnc :61 -desktop login2:61 (username) -auth /home/jir13/.Xauthority -geometry 1600x900 -depth 16 -rfbwait 30000 -rfbauth /home/username/.vnc/passwd -rfbport 5961 -fp catalogue:/etc/X11/fontpath.d -pn
+username    10296 0.0 0.0 131772 21076 pts/29   SN   13:01   0:01 /usr/bin/Xvnc :61 -desktop login2:61 (username) -auth /home/jir13/.Xauthority -geometry 1600x900 -depth 16 -rfbwait 30000 -rfbauth /home/username/.vnc/passwd -rfbport 5961 -fp catalogue:/etc/X11/fontpath.d -pn
 ```
 
 To access the VNC server you have to create a tunnel between the login node using TCP **port 5961** and your machine using a free TCP port (for simplicity the very same, in this case).
@@ -162,8 +162,8 @@ If the screen gets locked you have to kill the screensaver. Do not to forget to
 
 ```bash
 [username@login2 .vnc]$ ps aux | grep screen
-username     1503  0.0  0.0 103244   892 pts/4    S+   14:37   0:00 grep screen
-username     24316  0.0  0.0 270564  3528 ?        Ss   14:12   0:00 gnome-screensaver
+username     1503 0.0 0.0 103244   892 pts/4    S+   14:37   0:00 grep screen
+username     24316 0.0 0.0 270564 3528 ?        Ss   14:12   0:00 gnome-screensaver
 
 [username@login2 .vnc]$ kill 24316
 ```
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
index 9c1d75b807e8c1e7fa62da076875749d695ef045..b9c6951295a6b4d96fceb53c6d383464bee6d5c1 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
@@ -99,7 +99,7 @@ xserver-xephyr, on OS X it is part of [XQuartz](http://xquartz.macosforge.org/la
 local $ Xephyr -ac -screen 1024x768 -br -reset -terminate :1 &
 ```
 
-This will open a new X window with size 1024 x 768 at DISPLAY :1. Next, ssh to the cluster with DISPLAY environment variable set and launch  gnome-session
+This will open a new X window with size 1024 x 768 at DISPLAY :1. Next, ssh to the cluster with the DISPLAY environment variable set and launch gnome-session:
 
 ```bash
 local $ DISPLAY=:1.0 ssh -XC yourname@cluster-name.it4i.cz -i ~/.ssh/path_to_your_key
@@ -141,7 +141,7 @@ PuTTY X11 proxy: unable to connect to forwarded X server: Network error: Connect
   (gnome-session:23691): WARNING **: Cannot open display:**
 ```
 
-1.  Locate and modify Cygwin shortcut that uses [startxwin](http://x.cygwin.com/docs/man1/startxwin.1.html)
+1. Locate and modify the Cygwin shortcut that uses [startxwin](http://x.cygwin.com/docs/man1/startxwin.1.html)
     locate
     C:cygwin64binXWin.exe
     change it
@@ -150,7 +150,7 @@ PuTTY X11 proxy: unable to connect to forwarded X server: Network error: Connect
 
 ![XWin-listen-tcp.png](../../../img/XWinlistentcp.png "XWin-listen-tcp.png")
 
-1.  Check Putty settings:
+1. Check PuTTY settings:
      Enable X11 forwarding
 
     ![](../../../img/cygwinX11forwarding.png)
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md
index e4faa9fc2bddc63c4d708483e5960e52c13145ec..7a4d63ed99a12aa345a37d6afbe65a1e8d1f459d 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md
@@ -15,43 +15,43 @@ We recommned you to download "**A Windows installer for everything except PuTTYt
 
 ## PuTTY - How to Connect to the IT4Innovations Cluster
 
-*   Run PuTTY
-*   Enter Host name and Save session fields with [Login address](../../../salomon/shell-and-data-access.md) and browse Connection -  SSH - Auth menu. The _Host Name_ input may be in the format **"username@clustername.it4i.cz"** so you don't have to type your login each time.In this example we will connect to the Salomon cluster using **"salomon.it4i.cz"**.
+* Run PuTTY
+* Enter the Host name and Save session fields with the [Login address](../../../salomon/shell-and-data-access.md) and browse the Connection - SSH - Auth menu. The _Host Name_ input may be in the format **"username@clustername.it4i.cz"** so you don't have to type your login each time. In this example we will connect to the Salomon cluster using **"salomon.it4i.cz"**.
 
 ![](../../../img/PuTTY_host_Salomon.png)
 
-*   Category - Connection -  SSH - Auth:
+* Category - Connection - SSH - Auth:
       Select Attempt authentication using Pageant.
       Select Allow agent forwarding.
       Browse and select your [private key](ssh-keys/) file.
 
 ![](../../../img/PuTTY_keyV.png)
 
-*   Return to Session page and Save selected configuration with _Save_ button.
+* Return to Session page and Save selected configuration with _Save_ button.
 
 ![](../../../img/PuTTY_save_Salomon.png)
 
-*   Now you can log in using _Open_ button.
+* Now you can log in using _Open_ button.
 
 ![](../../../img/PuTTY_open_Salomon.png)
 
-*   Enter your username if the _Host Name_ input is not in the format "username@salomon.it4i.cz".
-*   Enter passphrase for selected [private key](ssh-keys/) file if Pageant **SSH authentication agent is not used.**
+* Enter your username if the _Host Name_ input is not in the format "username@salomon.it4i.cz".
+* Enter passphrase for selected [private key](ssh-keys/) file if Pageant **SSH authentication agent is not used.**
 
 ## Another PuTTY Settings
 
-*   Category - Windows - Translation - Remote character set and select **UTF-8**.
-*   Category - Terminal - Features and select **Disable application keypad mode** (enable numpad)
-*   Save your configuration on Session page in to Default Settings with _Save_ button.
+* Category - Windows - Translation - Remote character set and select **UTF-8**.
+* Category - Terminal - Features and select **Disable application keypad mode** (enable numpad)
+* Save your configuration on the Session page into Default Settings with the _Save_ button.
 
 ## Pageant SSH Agent
 
 Pageant holds your private key in memory without needing to retype a passphrase on every login.
 
-*   Run Pageant.
-*   On Pageant Key List press _Add key_ and select your private key (id_rsa.ppk).
-*   Enter your passphrase.
-*   Now you have your private key in memory without needing to retype a passphrase on every login.
+* Run Pageant.
+* On Pageant Key List press _Add key_ and select your private key (id_rsa.ppk).
+* Enter your passphrase.
+* Now you have your private key in memory without needing to retype a passphrase on every login.
 
 ![](../../../img/PageantV.png)
 
@@ -63,11 +63,11 @@ PuTTYgen is the PuTTY key generator. You can load in an existing private key and
 
 You can change the password of your SSH key with "PuTTY Key Generator". Make sure to backup the key.
 
-*   Load your [private key](../shell-access-and-data-transfer/ssh-keys/) file with _Load_ button.
-*   Enter your current passphrase.
-*   Change key passphrase.
-*   Confirm key passphrase.
-*   Save your private key with _Save private key_ button.
+* Load your [private key](../shell-access-and-data-transfer/ssh-keys/) file with _Load_ button.
+* Enter your current passphrase.
+* Change key passphrase.
+* Confirm key passphrase.
+* Save your private key with _Save private key_ button.
 
 ![](../../../img/PuttyKeygeneratorV.png)
 
@@ -75,33 +75,33 @@ You can change the password of your SSH key with "PuTTY Key Generator". Make sur
 
 You can generate an additional public/private key pair and insert public key into authorized_keys file for authentication with your own private key.
 
-*   Start with _Generate_ button.
+* Start with _Generate_ button.
 
 ![](../../../img/PuttyKeygenerator_001V.png)
 
-*   Generate some randomness.
+* Generate some randomness.
 
 ![](../../../img/PuttyKeygenerator_002V.png)
 
-*   Wait.
+* Wait.
 
 ![](../../../img/PuttyKeygenerator_003V.png)
 
-*   Enter a _comment_ for your key using format 'username@organization.example.com'.
+* Enter a _comment_ for your key using format 'username@organization.example.com'.
       Enter key passphrase.
       Confirm key passphrase.
       Save your new private key in "_.ppk" format with _Save private key\* button.
 
 ![](../../../img/PuttyKeygenerator_004V.png)
 
-*   Save the public key with _Save public key_ button.
+* Save the public key with _Save public key_ button.
       You can copy public key out of the ‘Public key for pasting into authorized_keys file’ box.
 
 ![](../../../img/PuttyKeygenerator_005V.png)
 
-*   Export private key in OpenSSH format "id_rsa" using Conversion - Export OpenSSH key
+* Export private key in OpenSSH format "id_rsa" using Conversion - Export OpenSSH key
 
 ![](../../../img/PuttyKeygenerator_006V.png)
 
-*   Now you can insert additional public key into authorized_keys file for authentication with your own private key.
+* Now you can insert additional public key into authorized_keys file for authentication with your own private key.
       You must log in using ssh key received after registration. Then proceed to [How to add your own key](../shell-access-and-data-transfer/ssh-keys/).
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
index 85ef36b73ec669306fcc3753509d7a612a619813..a2a4d429fc06d4943a0ab89df247f410ccdc4bd2 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
@@ -10,10 +10,10 @@ After logging in, you can see .ssh/ directory with SSH keys and authorized_keys
     total 24
     drwx------ 2 username username 4096 May 13 15:12 .
     drwxr-x---22 username username 4096 May 13 07:22 ..
-    -rw-r--r-- 1 username username  392 May 21  2014 authorized_keys
-    -rw------- 1 username username 1675 May 21  2014 id_rsa
-    -rw------- 1 username username 1460 May 21  2014 id_rsa.ppk
-    -rw-r--r-- 1 username username  392 May 21  2014 id_rsa.pub
+    -rw-r--r-- 1 username username 392 May 21 2014 authorized_keys
+    -rw------- 1 username username 1675 May 21 2014 id_rsa
+    -rw------- 1 username username 1460 May 21 2014 id_rsa.ppk
+    -rw-r--r-- 1 username username 392 May 21 2014 id_rsa.pub
 ```
 
 !!! hint
@@ -21,9 +21,9 @@ After logging in, you can see .ssh/ directory with SSH keys and authorized_keys
 
 ## Access Privileges on .ssh Folder
 
-*   .ssh directory: 700 (drwx------)
-*   Authorized_keys, known_hosts and public key (.pub file): 644 (-rw-r--r--)
-*   Private key (id_rsa/id_rsa.ppk): 600 (-rw-------)
+* .ssh directory: 700 (drwx------)
+* Authorized_keys, known_hosts and public key (.pub file): 644 (-rw-r--r--)
+* Private key (id_rsa/id_rsa.ppk): 600 (-rw-------)
 
 ```bash
     cd /home/username/
@@ -76,7 +76,7 @@ An example of private key format:
 
 ## Public Key
 
-Public key file in "\*.pub" format is used to verify a digital signature. Public key is present on the remote side and  allows access to the owner of the matching private key.
+The public key file in "\*.pub" format is used to verify a digital signature. The public key is present on the remote side and allows access to the owner of the matching private key.
 
 An example of public key format:
 
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md
index 0de78fa9f87ddffc9f5ec75dd0e4292a0c9b3fa9..01123953847eefae3965c86e4896e6573f5514a5 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md
@@ -6,12 +6,12 @@ AnyConnect users on Windows 8.1 will receive a "Failed to initialize connection
 
 ## Workaround
 
-*   Close the Cisco AnyConnect Window and the taskbar mini-icon
-*   Right click vpnui.exe in the 'Cisco AnyConnect Secure Mobility Client' folder. (C:Program Files (x86)CiscoCisco AnyConnect Secure Mobility Client)
-*   Click on the 'Run compatibility troubleshooter' button
-*   Choose 'Try recommended settings'
-*   The wizard suggests Windows 8 compatibility.
-*   Click 'Test Program'. This will open the program.
-*   Close
+* Close the Cisco AnyConnect Window and the taskbar mini-icon
+* Right-click vpnui.exe in the 'Cisco AnyConnect Secure Mobility Client' folder. (C:\Program Files (x86)\Cisco\Cisco AnyConnect Secure Mobility Client)
+* Click on the 'Run compatibility troubleshooter' button
+* Choose 'Try recommended settings'
+* The wizard suggests Windows 8 compatibility.
+* Click 'Test Program'. This will open the program.
+* Close
 
 ![](../../../img/vpnuiV.png)
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn-access.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn-access.md
index d33b2e20b89686147f0b850c31c17748e762d32c..8f24a21f54aa37624035cf8aa42806af9d09c4a8 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn-access.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn-access.md
@@ -4,12 +4,12 @@
 
 For using resources and licenses which are located at IT4Innovations local network, it is necessary to VPN connect to this network. We use Cisco AnyConnect Secure Mobility Client, which is supported on the following operating systems:
 
-*   Windows XP
-*   Windows Vista
-*   Windows 7
-*   Windows 8
-*   Linux
-*   MacOS
+* Windows XP
+* Windows Vista
+* Windows 7
+* Windows 8
+* Linux
+* MacOS
 
 It is impossible to connect to VPN from other operating systems.
 
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn1-access.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn1-access.md
index e0a21d4700a16011afb67d8ef1b19983f6ef8db2..bbfdb211a5b4e1ffa1725fbb7772cbbdc30106ad 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn1-access.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn1-access.md
@@ -9,12 +9,12 @@ Workaround can be found at [vpn-connection-fail-in-win-8.1](../../get-started-wi
 
 For using resources and licenses which are located at IT4Innovations local network, it is necessary to VPN connect to this network. We use Cisco AnyConnect Secure Mobility Client, which is supported on the following operating systems:
 
-*   Windows XP
-*   Windows Vista
-*   Windows 7
-*   Windows 8
-*   Linux
-*   MacOS
+* Windows XP
+* Windows Vista
+* Windows 7
+* Windows 8
+* Linux
+* MacOS
 
 It is impossible to connect to VPN from other operating systems.
 
diff --git a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
index c0d9c23231a434c803f8718ab853dfa0c54b8392..bf0b5c5acc85d611237908cfecf5f8e73b07afd5 100644
--- a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
+++ b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
@@ -8,10 +8,10 @@ IT4Innovations employs X.509 certificates for secure communication (e. g. creden
 
 There are different kinds of certificates, each with a different scope of use. We mention here:
 
-*   User (Private) certificates
-*   Certificate Authority (CA) certificates
-*   Host certificates
-*   Service certificates
+* User (Private) certificates
+* Certificate Authority (CA) certificates
+* Host certificates
+* Service certificates
 
 However, users need only manage User and CA certificates. Note that your user certificate is protected by an associated private key, and this **private key must never be disclosed**.
 
diff --git a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
index 7213ec31c1fc03b7cd770cb35c4cd77070ecdbb8..1d09fb5b60a36bd98876c2588f798be87c71c509 100644
--- a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
+++ b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
@@ -2,7 +2,7 @@
 
 ## Obtaining Authorization
 
-The computational resources of IT4I  are allocated by the Allocation Committee to a [Project](/), investigated by a Primary Investigator. By allocating the computational resources, the Allocation Committee is authorizing the PI to access and use the clusters. The PI may decide to authorize a number of her/his Collaborators to access and use the clusters, to consume the resources allocated to her/his Project. These collaborators will be associated to the Project. The Figure below is depicting the authorization chain:
+The computational resources of IT4I are allocated by the Allocation Committee to a [Project](/), investigated by a Primary Investigator. By allocating the computational resources, the Allocation Committee is authorizing the PI to access and use the clusters. The PI may decide to authorize a number of her/his Collaborators to access and use the clusters, to consume the resources allocated to her/his Project. These collaborators will be associated with the Project. The Figure below depicts the authorization chain:
 
 ![](../../img/Authorization_chain.png)
 
@@ -24,16 +24,16 @@ This is a preferred way of granting access to project resources. Please, use thi
 
 Log in to the [IT4I Extranet portal](https://extranet.it4i.cz) using IT4I credentials and go to the **Projects** section.
 
-*   **Users:** Please, submit your requests for becoming a project member.
-*   **Primary Investigators:** Please, approve or deny users' requests in the same section.
+* **Users:** Please, submit your requests for becoming a project member.
+* **Primary Investigators:** Please, approve or deny users' requests in the same section.
 
 ## Authorization by E-Mail (An Alternative Approach)
 
 In order to authorize a Collaborator to utilize the allocated resources, the PI should contact the [IT4I support](https://support.it4i.cz/rt/) (E-mail: [support\[at\]it4i.cz](mailto:support@it4i.cz)) and provide following information:
 
-1.  Identify your project by project ID
-2.  Provide list of people, including himself, who are authorized to use the resources allocated to the project. The list must include full name, e-mail and affiliation. Provide usernames as well, if collaborator login access already exists on the IT4I systems.
-3.  Include "Authorization to IT4Innovations" into the subject line.
+1. Identify your project by project ID
+1. Provide a list of people, including the PI, who are authorized to use the resources allocated to the project. The list must include full name, e-mail and affiliation. Provide usernames as well, if collaborator login access already exists on the IT4I systems.
+1. Include "Authorization to IT4Innovations" into the subject line.
 
 Example (except the subject line which must be in English, you may use Czech or Slovak language for communication with us):
 
@@ -59,12 +59,12 @@ Should the above information be provided by e-mail, the e-mail **must be** digit
 
 Once authorized by PI, every person (PI or Collaborator) wishing to access the clusters, should contact the [IT4I support](https://support.it4i.cz/rt/) (E-mail: [support\[at\]it4i.cz](mailto:support@it4i.cz)) providing following information:
 
-1.  Project ID
-2.  Full name and affiliation
-3.  Statement that you have read and accepted the [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf) (AUP).
-4.  Attach the AUP file.
-5.  Your preferred username, max 8 characters long. The preferred username must associate your surname and name or be otherwise derived from it. Only alphanumeric sequences, dash and underscore signs are allowed.
-6.  In case you choose [Alternative way to personal certificate](obtaining-login-credentials/#alternative-way-of-getting-personal-certificate), a **scan of photo ID** (personal ID or passport or driver license) is required
+1. Project ID
+1. Full name and affiliation
+1. Statement that you have read and accepted the [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf) (AUP).
+1. Attach the AUP file.
+1. Your preferred username, max 8 characters long. The preferred username must combine your surname and name, or be otherwise derived from them. Only alphanumeric sequences, dash and underscore signs are allowed.
+1. In case you choose [Alternative way to personal certificate](obtaining-login-credentials/#alternative-way-of-getting-personal-certificate), a **scan of photo ID** (personal ID or passport or driver license) is required
 
 Example (except the subject line which must be in English, you may use Czech or Slovak language for communication with us):
 
@@ -94,9 +94,9 @@ For various reasons we do not accept PGP keys.** Please, use only X.509 PKI cert
 
 You will receive your personal login credentials by protected e-mail. The login credentials include:
 
-1.  username
-2.  ssh private key and private key passphrase
-3.  system password
+1. username
+1. ssh private key and private key passphrase
+1. system password
 
 The clusters are accessed by the [private key](../accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/) and username. Username and password is used for login to the [information systems](http://support.it4i.cz/).
 
@@ -120,7 +120,7 @@ We accept personal certificates issued by any widely respected certification aut
 
 Certificate generation process is well-described here:
 
-*   [How to generate a personal TCS certificate in Mozilla Firefox web browser (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-gen)
+* [How to generate a personal TCS certificate in Mozilla Firefox web browser (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-gen)
 
 A FAQ about certificates can be found here: [Certificates FAQ](certificates-faq/).
 
@@ -128,19 +128,19 @@ A FAQ about certificates can be found here: [Certificates FAQ](certificates-faq/
 
 Follow these steps **only** if you cannot obtain your certificate in a standard way. If you choose this procedure, please attach a **scan of a photo ID** (personal ID, passport or driver's license) when applying for [login credentials](obtaining-login-credentials/#the-login-credentials).
 
-*   Go to [CAcert](www.cacert.org).
-   * If there's a security warning, just acknowledge it.
-*   Click _Join_.
-*   Fill in the form and submit it by the _Next_ button.
-   * Type in the e-mail address which you use for communication with us.
-   * Don't forget your chosen _Pass Phrase_.
-*   You will receive an e-mail verification link. Follow it.
-*   After verifying, go to the CAcert's homepage and login using     _Password Login_.
-*   Go to _Client Certificates_ _New_.
-*   Tick _Add_ for your e-mail address and click the _Next_ button.
-*   Click the _Create Certificate Request_ button.
-*   You'll be redirected to a page from where you can download/install your certificate.
-   * Simultaneously you'll get an e-mail with a link to the certificate.
+* Go to [CAcert](http://www.cacert.org).
+  * If there's a security warning, just acknowledge it.
+* Click _Join_.
+* Fill in the form and submit it by the _Next_ button.
+  * Type in the e-mail address which you use for communication with us.
+  * Don't forget your chosen _Pass Phrase_.
+* You will receive an e-mail verification link. Follow it.
+* After verifying, go to the CAcert homepage and log in using _Password Login_.
+* Go to _Client Certificates_ -> _New_.
+* Tick _Add_ for your e-mail address and click the _Next_ button.
+* Click the _Create Certificate Request_ button.
+* You'll be redirected to a page from where you can download/install your certificate.
+  * Simultaneously you'll get an e-mail with a link to the certificate.
 
 ## Installation of the Certificate Into Your Mail Client
 
@@ -148,13 +148,13 @@ The procedure is similar to the following guides:
 
 MS Outlook 2010
 
-*   [How to Remove, Import, and Export Digital certificates](http://support.microsoft.com/kb/179380)
-*   [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/outl-cert-imp)
+* [How to Remove, Import, and Export Digital certificates](http://support.microsoft.com/kb/179380)
+* [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/outl-cert-imp)
 
 Mozilla Thunderbird
 
-*   [Installing an SMIME certificate](http://kb.mozillazine.org/Installing_an_SMIME_certificate)
-*   [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-imp)
+* [Installing an SMIME certificate](http://kb.mozillazine.org/Installing_an_SMIME_certificate)
+* [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-imp)
 
 ## End of User Account Lifecycle
 
@@ -162,8 +162,8 @@ User accounts are supported by membership in active Project(s) or by affiliation
 
 The user will get 3 automatically generated warning e-mail messages about the pending removal:
 
-*   First message will be sent 3 months before the removal
-*   Second message will be sent 1 month before the removal
-*   Third message will be sent 1 week before the removal.
+* The first message will be sent 3 months before the removal.
+* The second message will be sent 1 month before the removal.
+* The third message will be sent 1 week before the removal.
 
 The messages will inform the user about the projected removal date and ask her/him to migrate her/his data.
diff --git a/docs.it4i/index.md b/docs.it4i/index.md
index 958b9bc77f31faec1b88ba25ea38eeea25d28233..86072355734e682af540708c3808a33c01d9d40f 100644
--- a/docs.it4i/index.md
+++ b/docs.it4i/index.md
@@ -4,9 +4,9 @@ Welcome to IT4Innovations documentation pages. The IT4Innovations national super
 
 ## How to Read the Documentation
 
-1.  Read the list in the left column. Select the subject of interest. Alternatively, use the Search in the upper right corner.
-2.  Scan for all the notes and reminders on the page.
-3.  Read the details if still more information is needed. **Look for examples** illustrating the concepts.
+1. Read the list in the left column. Select the subject of interest. Alternatively, use the Search in the upper right corner.
+1. Scan for all the notes and reminders on the page.
+1. Read the details if still more information is needed. **Look for examples** illustrating the concepts.
 
 ## Getting Help and Support
 
@@ -29,17 +29,17 @@ In many cases, you will run your own code on the cluster. In order to fully expl
 
 ## Terminology Frequently Used on These Pages
 
-*   **node:** a computer, interconnected by network to other computers - Computational nodes are powerful computers, designed and dedicated for executing demanding scientific computations.
-*   **core:** processor core, a unit of processor, executing computations
-*   **corehours:** wall clock hours of processor core time - Each node is equipped with **X** processor cores, provides **X** corehours per 1 wall clock hour.
-*   **job:** a calculation running on the supercomputer - The job allocates and utilizes resources of the supercomputer for certain time.
-*   **HPC:** High Performance Computing
-*   **HPC (computational) resources:** corehours, storage capacity, software licences
-*   **code:** a program
-*   **primary investigator (PI):** a person responsible for execution of computational project and utilization of computational resources allocated to that project
-*   **collaborator:** a person participating on execution of computational project and utilization of computational resources allocated to that project
-*   **project:** a computational project under investigation by the PI - The project is identified by the project ID. The computational resources are allocated and charged per project.
-*   **jobscript:** a script to be executed by the PBS Professional workload manager
+* **node:** a computer, interconnected by network to other computers - Computational nodes are powerful computers, designed and dedicated for executing demanding scientific computations.
+* **core:** processor core, a unit of processor, executing computations
+* **corehours:** wall clock hours of processor core time - Each node is equipped with **X** processor cores and provides **X** corehours per wall clock hour (e.g., a 24-core node provides 24 corehours per hour).
+* **job:** a calculation running on the supercomputer - The job allocates and utilizes resources of the supercomputer for a certain time.
+* **HPC:** High Performance Computing
+* **HPC (computational) resources:** corehours, storage capacity, software licences
+* **code:** a program
+* **primary investigator (PI):** a person responsible for the execution of the computational project and the utilization of the computational resources allocated to that project
+* **collaborator:** a person participating in the execution of the computational project and the utilization of the computational resources allocated to that project
+* **project:** a computational project under investigation by the PI - The project is identified by the project ID. The computational resources are allocated and charged per project.
+* **jobscript:** a script to be executed by the PBS Professional workload manager
 
 ## Conventions
 
diff --git a/docs.it4i/modules-salomon.md b/docs.it4i/modules-salomon.md
index 7ef6398e700ca885e1e969d5fd4051f2eecb883f..3995f4314921d0ddf959fda44498d28a0f63412e 100644
--- a/docs.it4i/modules-salomon.md
+++ b/docs.it4i/modules-salomon.md
@@ -1,4 +1,4 @@
-# List of Available Modules
+# Available Modules
 
 ## Core
 
diff --git a/docs.it4i/pbspro.md b/docs.it4i/pbspro.md
index e89ddfe72d54ff6b0e3fce2ab53f47bb2c6bbac5..72f5c3dd33b2946d0399ffe3c16c7cab8613a5b8 100644
--- a/docs.it4i/pbspro.md
+++ b/docs.it4i/pbspro.md
@@ -1,4 +1,4 @@
-*   ![pdf](img/pdf.png)[PBS Pro Programmer's Guide](http://www.pbsworks.com/pdfs/PBSProgramGuide13.0.pdf)
-*   ![pdf](img/pdf.png)[PBS Pro Quick Start Guide](http://www.pbsworks.com/pdfs/PBSQuickStartGuide13.0.pdf)
-*   ![pdf](img/pdf.png)[PBS Pro Reference Guide](http://www.pbsworks.com/pdfs/PBSReferenceGuide13.0.pdf)
-*   ![pdf](img/pdf.png)[PBS Pro User's Guide](http://www.pbsworks.com/pdfs/PBSUserGuide13.0.pdf)
+* ![pdf](img/pdf.png)[PBS Pro Programmer's Guide](http://www.pbsworks.com/pdfs/PBSProgramGuide13.0.pdf)
+* ![pdf](img/pdf.png)[PBS Pro Quick Start Guide](http://www.pbsworks.com/pdfs/PBSQuickStartGuide13.0.pdf)
+* ![pdf](img/pdf.png)[PBS Pro Reference Guide](http://www.pbsworks.com/pdfs/PBSReferenceGuide13.0.pdf)
+* ![pdf](img/pdf.png)[PBS Pro User's Guide](http://www.pbsworks.com/pdfs/PBSUserGuide13.0.pdf)
diff --git a/docs.it4i/salomon/7d-enhanced-hypercube.md b/docs.it4i/salomon/7d-enhanced-hypercube.md
index 11151010082595be1cb46a98156bc86ef6c4d96c..af082502dcd86b173da87c3c804517ca44445443 100644
--- a/docs.it4i/salomon/7d-enhanced-hypercube.md
+++ b/docs.it4i/salomon/7d-enhanced-hypercube.md
@@ -1,7 +1,5 @@
 # 7D Enhanced Hypercube
 
-### 7D Enhanced Hypercube {#D-Enhanced-Hypercube}
-
 ![](../img/7D_Enhanced_hypercube.png)
 
 | Node type                            | Count | Short name       | Long name                | Rack  |
@@ -9,6 +7,6 @@
 | M-Cell compute nodes w/o accelerator | 576   | cns1 - cns576    | r1i0n0 - r4i7n17         | 1-4   |
 | compute nodes MIC accelerated        | 432   | cns577 - cns1008 | r21u01n577 - r37u31n1008 | 21-38 |
 
-### IB Topology
+## IB Topology
 
 ![](../img/Salomon_IB_topology.png)
diff --git a/docs.it4i/salomon/capacity-computing.md b/docs.it4i/salomon/capacity-computing.md
index c5ae6b385bbe260340d5e69257f0d3d0854ee40a..aa947db011b4ba820f3736b445e1bb639233ef14 100644
--- a/docs.it4i/salomon/capacity-computing.md
+++ b/docs.it4i/salomon/capacity-computing.md
@@ -9,14 +9,14 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 !!! note
     Please follow one of the procedures below, in case you wish to schedule more than 100 jobs at a time.
 
-*   Use [Job arrays](capacity-computing.md#job-arrays) when running huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
-*   Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
-*   Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running huge number of single core jobs
+* Use [Job arrays](capacity-computing.md#job-arrays) when running a huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
+* Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
+* Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running a huge number of single core jobs
 
 ## Policy
 
-1.  A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
-2.  The array size is at most 1000 subjobs.
+1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
+1. The array size is at most 1000 subjobs.
 
 ## Job Arrays
 
@@ -25,9 +25,9 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 
 A job array is a compact representation of many jobs, called subjobs. The subjobs share the same job script, and have the same values for all attributes and resources, with the following exceptions:
 
-*   each subjob has a unique index, $PBS_ARRAY_INDEX
-*   job Identifiers of subjobs only differ by their indices
-*   the state of subjobs can differ (R,Q,...etc.)
+* each subjob has a unique index, $PBS_ARRAY_INDEX
+* job identifiers of subjobs only differ by their indices
+* the state of subjobs can differ (R, Q, etc.)
 
 All subjobs within a job array have the same scheduling priority and schedule as independent jobs. The entire job array is submitted through a single qsub command and may be managed by the qdel, qalter, qhold, qrls and qsig commands as a single job, as illustrated below.
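 
 For illustration, the whole array can be held and released with single commands (the job ID below is the example ID used later on this page):
 
 ```bash
 $ qhold 12345[].isrv5
 $ qrls 12345[].isrv5
 ```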
 
@@ -101,10 +101,10 @@ Check status of the job array by the qstat command.
 $ qstat -a 506493[].isrv5
 
 isrv5:
-                                                            Req'd  Req'd   Elap
-Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
+                                                            Req'd Req'd   Elap
+Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time S Time
 --------------- -------- --  |---|---| ------ --- --- ------ ----- - -----
-12345[].dm2     user2    qprod    xx          13516   1  24    --  00:50 B 00:02
+12345[].dm2     user2    qprod    xx          13516   1 24    --  00:50 B 00:02
 ```
 
 The status B means that some subjobs are already running.
@@ -115,16 +115,16 @@ Check status of the first 100 subjobs by the qstat command.
 $ qstat -a 12345[1-100].isrv5
 
 isrv5:
-                                                            Req'd  Req'd   Elap
-Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
+                                                            Req'd Req'd   Elap
+Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time S Time
 --------------- -------- --  |---|---| ------ --- --- ------ ----- - -----
-12345[1].isrv5    user2    qprod    xx          13516   1  24    --  00:50 R 00:02
-12345[2].isrv5    user2    qprod    xx          13516   1  24    --  00:50 R 00:02
-12345[3].isrv5    user2    qprod    xx          13516   1  24    --  00:50 R 00:01
-12345[4].isrv5    user2    qprod    xx          13516   1  24    --  00:50 Q   --
+12345[1].isrv5    user2    qprod    xx          13516   1 24    --  00:50 R 00:02
+12345[2].isrv5    user2    qprod    xx          13516   1 24    --  00:50 R 00:02
+12345[3].isrv5    user2    qprod    xx          13516   1 24    --  00:50 R 00:01
+12345[4].isrv5    user2    qprod    xx          13516   1 24    --  00:50 Q   --
      .             .        .      .             .    .   .     .    .   .    .
      ,             .        .      .             .    .   .     .    .   .    .
-12345[100].isrv5  user2    qprod    xx          13516   1  24    --  00:50 Q   --
+12345[100].isrv5 user2    qprod    xx          13516   1 24    --  00:50 Q   --
 ```
 
 Delete the entire job array. Running subjobs will be killed; queued subjobs will be deleted.
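 
 A sketch of the command, using the example array ID from above:
 
 ```bash
 $ qdel 12345[].isrv5
 ```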
@@ -154,7 +154,7 @@ Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/)
 !!! note
     Use GNU parallel to run many single core tasks on one node.
 
-GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful in running single core jobs via the queue system on  Anselm.
+GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful for running single core jobs via the queue system on Salomon.
 
 For more information and examples see the parallel man page:
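 
 One way to read it on a login node, assuming GNU parallel is provided as an environment module named `parallel`:
 
 ```bash
 $ module add parallel
 $ man parallel
 ```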
 
@@ -199,7 +199,7 @@ TASK=$1
 cp $PBS_O_WORKDIR/$TASK input
 
 # execute the calculation
-cat  input > output
+cat input > output
 
 # copy output file to submit directory
 cp output $PBS_O_WORKDIR/$TASK.out
@@ -216,7 +216,7 @@ $ qsub -N JOBNAME jobscript
 12345.dm2
 ```
 
-In this example, we submit a job of 101 tasks. 24 input files will be processed in  parallel. The 101 tasks on 24 cores are assumed to complete in less than 2 hours.
+In this example, we submit a job of 101 tasks. 24 input files will be processed in parallel. The 101 tasks on 24 cores are assumed to complete in less than 2 hours.
 
 !!! note
     Use #PBS directives at the beginning of the jobscript file; don't forget to set your valid PROJECT_ID and desired queue.
@@ -281,18 +281,18 @@ cat input > output
 cp output $PBS_O_WORKDIR/$TASK.out
 ```
 
-In this example, the jobscript executes in multiple instances in parallel, on all cores of a computing node.  Variable $TASK expands to one of the input filenames from tasklist. We copy the input file to local scratch, execute the myprog.x and copy the output file back to the submit directory, under the $TASK.out name. The numtasks file controls how many tasks will be run per subjob. Once an task is finished, new task starts, until the number of tasks  in numtasks file is reached.
+In this example, the jobscript executes in multiple instances in parallel, on all cores of a compute node. The variable $TASK expands to one of the input filenames from tasklist. We copy the input file to local scratch, execute myprog.x and copy the output file back to the submit directory, under the $TASK.out name. The numtasks file controls how many tasks will be run per subjob. Once a task is finished, a new task starts, until the number of tasks in the numtasks file is reached.
 
 !!! note
-    Select  subjob walltime and number of tasks per subjob  carefully
+    Select subjob walltime and number of tasks per subjob carefully
 
 When deciding these values, consider the following guiding rules:
 
-1.  Let n = N / 24.  Inequality (n + 1) x T < W should hold. The N is number of tasks per subjob, T is expected single task walltime and W is subjob walltime. Short subjob walltime improves scheduling and job throughput.
-2.  Number of tasks should be modulo 24.
-3.  These rules are valid only when all tasks have similar task walltimes T.
+1. Let n = N / 24. The inequality (n + 1) x T < W should hold, where N is the number of tasks per subjob, T is the expected single-task walltime and W is the subjob walltime (see the worked example below the list). Short subjob walltime improves scheduling and job throughput.
+1. The number of tasks should be a multiple of 24.
+1. These rules are valid only when all tasks have similar task walltimes T.
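 
 For illustration (the numbers are hypothetical): with N = 48 tasks per subjob, n = 48 / 24 = 2; if each task takes T = 0.5 hour, the subjob walltime should satisfy W > (2 + 1) x 0.5 = 1.5 hours, so requesting a 2-hour walltime leaves a safe margin.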
 
-### Submit the Job Array
+### Submit the Job Array (-J)
 
 To submit the job array, use the qsub -J command. The 992-task job of the [example above](capacity-computing/#combined_example) may be submitted like this:
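 
 A sketch of the submission, assuming 32 tasks are processed per subjob (992 / 32 = 31 subjobs); the job name and the step value are illustrative and should match your numtasks setting:
 
 ```bash
 $ qsub -N JOBNAME -J 1-992:32 jobscript
 ```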
 
diff --git a/docs.it4i/salomon/compute-nodes.md b/docs.it4i/salomon/compute-nodes.md
index 83bca5c4045e93800fe922accb9882508f0aa0b1..c7ac66a79201f33d2fbeebeae03a07f4084640e1 100644
--- a/docs.it4i/salomon/compute-nodes.md
+++ b/docs.it4i/salomon/compute-nodes.md
@@ -9,22 +9,22 @@ Compute nodes with MIC accelerator **contains two Intel Xeon Phi 7120P accelerat
 
 ### Compute Nodes Without Accelerator
 
-*   codename "grafton"
-*   576 nodes
-*   13 824 cores in total
-*   two Intel Xeon E5-2680v3, 12-core, 2.5 GHz processors per node
-*   128 GB of physical memory per node
+* codename "grafton"
+* 576 nodes
+* 13 824 cores in total
+* two Intel Xeon E5-2680v3, 12-core, 2.5 GHz processors per node
+* 128 GB of physical memory per node
 
 ![cn_m_cell](../img/cn_m_cell)
 
 ### Compute Nodes With MIC Accelerator
 
-*   codename "perrin"
-*   432 nodes
-*   10 368 cores in total
-*   two Intel Xeon E5-2680v3, 12-core, 2.5 GHz processors per node
-*   128 GB of physical memory per node
-*   MIC accelerator 2 x Intel Xeon Phi 7120P per node, 61-cores, 16 GB per accelerator
+* codename "perrin"
+* 432 nodes
+* 10 368 cores in total
+* two Intel Xeon E5-2680v3, 12-core, 2.5 GHz processors per node
+* 128 GB of physical memory per node
+* MIC accelerators: 2 x Intel Xeon Phi 7120P per node, 61 cores, 16 GB per accelerator
 
 ![cn_mic](../img/cn_mic-1)
 
@@ -34,12 +34,12 @@ Compute nodes with MIC accelerator **contains two Intel Xeon Phi 7120P accelerat
 
 ### UV 2000
 
-*   codename "UV2000"
-*   1 node
-*   112 cores in total
-*   14 x Intel Xeon E5-4627v2, 8-core, 3.3 GHz processors, in 14 NUMA nodes
-*   3328 GB of physical memory per node
-*   1 x NVIDIA GM200 (GeForce GTX TITAN X), 12 GB RAM
+* codename "UV2000"
+* 1 node
+* 112 cores in total
+* 14 x Intel Xeon E5-4627v2, 8-core, 3.3 GHz processors, in 14 NUMA nodes
+* 3328 GB of physical memory per node
+* 1 x NVIDIA GM200 (GeForce GTX TITAN X), 12 GB RAM
 
 ![](../img/uv-2000.jpeg)
 
@@ -57,22 +57,22 @@ Salomon is equipped with Intel Xeon processors Intel Xeon E5-2680v3. Processors
 
 ### Intel Xeon E5-2680v3 Processor
 
-*   12-core
-*   speed: 2.5 GHz, up to 3.3 GHz using Turbo Boost Technology
-*   peak performance:  19.2 GFLOP/s per core
-*   caches:
-   * Intel® Smart Cache:  30 MB
-*   memory bandwidth at the level of the processor: 68 GB/s
+* 12-core
+* speed: 2.5 GHz, up to 3.3 GHz using Turbo Boost Technology
+* peak performance: 19.2 GFLOP/s per core
+* caches:
+  * Intel® Smart Cache: 30 MB
+* memory bandwidth at the level of the processor: 68 GB/s
 
 ### MIC Accelerator Intel Xeon Phi 7120P Processor
 
-*   61-core
-*   speed:  1.238
+* 61-core
+* speed: 1.238
     GHz, up to 1.333 GHz using Turbo Boost Technology
-*   peak performance:  18.4 GFLOP/s per core
-*   caches:
-   * L2:  30.5 MB
-*   memory bandwidth at the level of the processor:  352 GB/s
+* peak performance: 18.4 GFLOP/s per core
+* caches:
+  * L2: 30.5 MB
+* memory bandwidth at the level of the processor: 352 GB/s
 
 ## Memory Architecture
 
@@ -80,28 +80,28 @@ Memory is equally distributed across all CPUs and cores for optimal performance.
 
 ### Compute Node Without Accelerator
 
-*   2 sockets
-*   Memory Controllers are integrated into processors.
-   * 8 DDR4 DIMMs per node
-   * 4 DDR4 DIMMs per CPU
-   * 1 DDR4 DIMMs per channel
-*   Populated memory: 8 x 16 GB DDR4 DIMM >2133 MHz
+* 2 sockets
+* Memory Controllers are integrated into processors.
+  * 8 DDR4 DIMMs per node
+  * 4 DDR4 DIMMs per CPU
+  * 1 DDR4 DIMM per channel
+* Populated memory: 8 x 16 GB DDR4 DIMM 2133 MHz
 
 ### Compute Node With MIC Accelerator
 
 2 sockets
 Memory Controllers are integrated into processors.
 
-*   8 DDR4 DIMMs per node
-*   4 DDR4 DIMMs per CPU
-*   1 DDR4 DIMMs per channel
+* 8 DDR4 DIMMs per node
+* 4 DDR4 DIMMs per CPU
+* 1 DDR4 DIMM per channel
 
 Populated memory: 8 x 16 GB DDR4 DIMM 2133 MHz
 MIC Accelerator Intel Xeon Phi 7120P Processor
 
-*   2 sockets
-*   Memory Controllers are are connected via an
+* 2 sockets
+* Memory Controllers are connected via an
     Interprocessor Network (IPN) ring.
-   * 16 GDDR5 DIMMs per node
-   * 8 GDDR5 DIMMs per CPU
-   * 2 GDDR5 DIMMs per channel
+  * 16 GDDR5 DIMMs per node
+  * 8 GDDR5 DIMMs per CPU
+  * 2 GDDR5 DIMMs per channel
diff --git a/docs.it4i/salomon/environment-and-modules.md b/docs.it4i/salomon/environment-and-modules.md
index a9a6def4dfeb499e8daf6ad3cd8fc8bd707d7d91..9671013566e7621e42b2d0cdf693eed783f13197 100644
--- a/docs.it4i/salomon/environment-and-modules.md
+++ b/docs.it4i/salomon/environment-and-modules.md
@@ -1,6 +1,6 @@
 # Environment and Modules
 
-### Environment Customization
+## Environment Customization
 
 After logging in, you may want to configure the environment. Write your preferred path definitions, aliases, functions and module loads in the .bashrc file
 
@@ -24,11 +24,11 @@ fi
 ```
 
 !!! note
-    Do not run commands outputting to standard output (echo, module list, etc) in .bashrc  for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Take care for SSH session interactivity for such commands as stated in the previous example.
+    Do not run commands that write to standard output (echo, module list, etc.) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Guard such commands with a check for SSH session interactivity, as in the previous example and in the sketch below.
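 
 A minimal sketch of such a guard, using a common interactive-shell test (any equivalent check of SSH session interactivity works):
 
 ```bash
 # print loaded modules only when the shell is interactive,
 # so that scp and PBS sessions get clean output
 if [ -n "$PS1" ]; then
     module list
 fi
 ```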
 
 ### Application Modules
 
-In order to configure your shell for  running particular application on Salomon we use Module package interface.
+In order to configure your shell for running a particular application on Salomon, we use the Module package interface.
 
 Application modules on Salomon cluster are built using [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). The modules are divided into the following structure:
 
@@ -67,7 +67,7 @@ To check available modules use
 $ module avail
 ```
 
-To load a module, for example the Open MPI module  use
+To load a module, for example the Open MPI module, use
 
 ```bash
 $ module load OpenMPI
@@ -107,9 +107,9 @@ The EasyBuild framework prepares the build environment for the different toolcha
 
 Recent releases of EasyBuild include out-of-the-box toolchain support for:
 
-*   various compilers, including GCC, Intel, Clang, CUDA
-*   common MPI libraries, such as Intel MPI, MPICH, MVAPICH2, Open MPI
-*   various numerical libraries, including ATLAS, Intel MKL, OpenBLAS, ScaLAPACK, FFTW
+* various compilers, including GCC, Intel, Clang, CUDA
+* common MPI libraries, such as Intel MPI, MPICH, MVAPICH2, Open MPI
+* various numerical libraries, including ATLAS, Intel MKL, OpenBLAS, ScaLAPACK, FFTW
 
 On Salomon, we currently have the following toolchains installed:
 
diff --git a/docs.it4i/salomon/hardware-overview.md b/docs.it4i/salomon/hardware-overview.md
index c20234030c70a4a6dc9e8ba36f86b1097437313d..d84bd6293e02f4db183bd9ac32dc77597f70ecfc 100644
--- a/docs.it4i/salomon/hardware-overview.md
+++ b/docs.it4i/salomon/hardware-overview.md
@@ -2,7 +2,7 @@
 
 ## Introduction
 
-The Salomon cluster consists of 1008 computational nodes of which 576 are regular compute nodes and 432 accelerated nodes. Each node is a  powerful x86-64 computer, equipped with 24 cores (two twelve-core Intel Xeon processors) and 128 GB RAM. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 0.5 PB /home NFS disk storage to store the user files. Users may use a DDN Lustre shared storage with capacity of 1.69 PB which is available for the scratch project data. The user access to the Salomon cluster is provided by four login nodes.
+The Salomon cluster consists of 1008 computational nodes, of which 576 are regular compute nodes and 432 are accelerated nodes. Each node is a powerful x86-64 computer, equipped with 24 cores (two twelve-core Intel Xeon processors) and 128 GB RAM. The nodes are interlinked by high-speed InfiniBand and Ethernet networks. All nodes share 0.5 PB /home NFS disk storage to store the user files. Users may use a DDN Lustre shared storage with a capacity of 1.69 PB, which is available for scratch project data. User access to the Salomon cluster is provided by four login nodes.
 
 [More about schematic representation of the Salomon cluster compute nodes IB topology](ib-single-plane-topology/).
 
diff --git a/docs.it4i/salomon/ib-single-plane-topology.md b/docs.it4i/salomon/ib-single-plane-topology.md
index 9456b83b37aaa5ae54b9a97b49e3019eabc09bda..b1a5381e6280b9c1ace1c84c90b12ef2d4641650 100644
--- a/docs.it4i/salomon/ib-single-plane-topology.md
+++ b/docs.it4i/salomon/ib-single-plane-topology.md
@@ -4,11 +4,11 @@ A complete M-Cell assembly consists of four compute racks. Each rack contains 4
 
 The SGI ICE X IB Premium Blade provides the first level of interconnection via dual 36-port Mellanox FDR InfiniBand ASIC switch with connections as follows:
 
-*   9 ports from each switch chip connect to the unified backplane, to connect the 18 compute node slots
-*   3 ports on each chip provide connectivity between the chips
-*   24 ports from each switch chip connect to the external bulkhead, for a total of 48
+* 9 ports from each switch chip connect to the unified backplane, to connect the 18 compute node slots
+* 3 ports on each chip provide connectivity between the chips
+* 24 ports from each switch chip connect to the external bulkhead, for a total of 48
 
-### IB Single-Plane Topology - ICEX M-Cell
+## IB Single-Plane Topology - ICEX M-Cell
 
 Each color in each physical IRU represents one dual-switch ASIC switch.
 
@@ -16,15 +16,15 @@ Each color in each physical IRU represents one dual-switch ASIC switch.
 
 ![../src/IB single-plane topology - ICEX Mcell.pdf](../img/IBsingleplanetopologyICEXMcellsmall.png)
 
-### IB Single-Plane Topology - Accelerated Nodes
+## IB Single-Plane Topology - Accelerated Nodes
 
 Each of the 3 interconnected D racks is equivalent to one half of an M-Cell rack. The 18 D racks with MIC-accelerated nodes [r21-r38] are equivalent to 3 M-Cell racks, as shown in the [7D Enhanced Hypercube](7d-enhanced-hypercube/) diagram.
 
 As shown in the diagram ![IB Topology](../img/Salomon_IB_topology.png):
 
-*   Racks 21, 22, 23, 24, 25, 26 are equivalent to one M-Cell rack.
-*   Racks 27, 28, 29, 30, 31, 32 are equivalent to one M-Cell rack.
-*   Racks 33, 34, 35, 36, 37, 38 are equivalent to one M-Cell rack.
+* Racks 21, 22, 23, 24, 25, 26 are equivalent to one M-Cell rack.
+* Racks 27, 28, 29, 30, 31, 32 are equivalent to one M-Cell rack.
+* Racks 33, 34, 35, 36, 37, 38 are equivalent to one M-Cell rack.
 
 [IB single-plane topology - Accelerated nodes.pdf](<../src/IB single-plane topology - Accelerated nodes.pdf>)
 
diff --git a/docs.it4i/salomon/introduction.md b/docs.it4i/salomon/introduction.md
index 1563518906866ae72937039dd6dc2ddb29126c15..e9c133dbc70500c0791101aa67d492176e71c4e4 100644
--- a/docs.it4i/salomon/introduction.md
+++ b/docs.it4i/salomon/introduction.md
@@ -2,15 +2,15 @@
 
 Welcome to the Salomon supercomputer cluster. The Salomon cluster consists of 1008 compute nodes, totaling 24192 compute cores with 129 TB RAM and giving over 2 Pflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 24 cores and at least 128 GB RAM. Nodes are interconnected by a 7D Enhanced hypercube InfiniBand network and equipped with Intel Xeon E5-2680v3 processors. The Salomon cluster consists of 576 nodes without accelerators and 432 nodes equipped with Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).
 
-The cluster runs [CentOS Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html) operating system, which is compatible with the  RedHat [ Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg)
+The cluster runs the [CentOS Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html) operating system, which is compatible with the RedHat [Linux family](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg).
 
-**Water-cooled Compute Nodes With MIC Accelerator**
+## Water-cooled Compute Nodes With MIC Accelerator
 
 ![](../img/salomon)
 
 ![](../img/salomon-1.jpeg)
 
-**Tape Library T950B**
+## Tape Library T950B
 
 ![](../img/salomon-3.jpeg)
 
diff --git a/docs.it4i/salomon/job-priority.md b/docs.it4i/salomon/job-priority.md
index bb762398529c1a919a5f4b36442ed06522616b44..265afe5441ba0ad549348c7de9edc07c3fb078fb 100644
--- a/docs.it4i/salomon/job-priority.md
+++ b/docs.it4i/salomon/job-priority.md
@@ -6,9 +6,9 @@ Scheduler gives each job an execution priority and then uses this job execution
 
 Job execution priority is determined by these job properties (in order of importance):
 
-1.  queue priority
-2.  fair-share priority
-3.  eligible time
+1. queue priority
+1. fair-share priority
+1. eligible time
 
 ### Queue Priority
 
@@ -34,7 +34,7 @@ usage<sub>Total</sub> is total usage by all users, by all projects.
 
 Usage counts allocated core-hours (`ncpus x walltime`). Usage is decayed, i.e. cut in half periodically, at an interval of 168 hours (one week).
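 For example, 1000 core-hours consumed today count as 500 core-hours of usage after one week and as 250 core-hours after two weeks.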
 
-# Jobs Queued in Queue qexp Are Not Calculated to Project's Usage.
+## Jobs Queued in Queue qexp Are Not Counted Toward the Project's Usage
 
 !!! note
     Calculated usage and fair-share priority can be seen at <https://extranet.it4i.cz/rsweb/salomon/projects>.
diff --git a/docs.it4i/salomon/job-submission-and-execution.md b/docs.it4i/salomon/job-submission-and-execution.md
index 0865e9c21b44c7b755e810d9a81da902de452183..e7a4c4ff0039815504804e9f5fcb30959e8713e6 100644
--- a/docs.it4i/salomon/job-submission-and-execution.md
+++ b/docs.it4i/salomon/job-submission-and-execution.md
@@ -4,12 +4,12 @@
 
 When allocating computational resources for the job, please specify
 
-1.  suitable queue for your job (default is qprod)
-2.  number of computational nodes required
-3.  number of cores per node required
-4.  maximum wall time allocated to your calculation, note that jobs exceeding maximum wall time will be killed
-5.  Project ID
-6.  Jobscript or interactive switch
+1. suitable queue for your job (default is qprod)
+1. number of computational nodes required
+1. number of cores per node required
+1. maximum wall time allocated to your calculation; note that jobs exceeding the maximum wall time will be killed
+1. Project ID
+1. Jobscript or interactive switch
 
 !!! note
     Use the **qsub** command to submit your job to a queue for allocation of the computational resources.
@@ -43,13 +43,13 @@ In this example, we allocate 4 nodes, 24 cores per node, for 1 hour. We allocate
 $ qsub -A OPEN-0-0 -q qlong -l select=10:ncpus=24 ./myjob
 ```
 
-In this example, we allocate 10 nodes, 24 cores per node, for  72 hours. We allocate these resources via the qlong queue. Jobscript myjob will be executed on the first node in the allocation.
+In this example, we allocate 10 nodes, 24 cores per node, for 72 hours. We allocate these resources via the qlong queue. Jobscript myjob will be executed on the first node in the allocation.
 
 ```bash
 $ qsub -A OPEN-0-0 -q qfree -l select=10:ncpus=24 ./myjob
 ```
 
-In this example, we allocate 10  nodes, 24 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. Jobscript myjob will be executed on the first node in the allocation.
+In this example, we allocate 10 nodes, 24 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. Jobscript myjob will be executed on the first node in the allocation.
 
 ### Intel Xeon Phi Co-Processors
 
@@ -76,13 +76,13 @@ In this example, we allocate 4 nodes, with 24 cores per node (totalling 96 cores
     Per NUMA node allocation.
     Jobs are isolated by cpusets.
 
-The UV2000 (node uv1) offers 3328GB of RAM and 112 cores, distributed in 14 NUMA nodes. A NUMA node packs 8 cores and approx. 236GB RAM. In the PBS  the UV2000 provides 14 chunks, a chunk per NUMA node (see [Resource allocation policy](resources-allocation-policy/)). The jobs on UV2000 are isolated from each other by cpusets, so that a job by one user may not utilize CPU or memory allocated to a job by other user. Always, full chunks are allocated, a job may only use resources of  the NUMA nodes allocated to itself.
+The UV2000 (node uv1) offers 3328 GB of RAM and 112 cores, distributed over 14 NUMA nodes. A NUMA node packs 8 cores and approx. 236 GB RAM. In PBS, the UV2000 provides 14 chunks, one chunk per NUMA node (see [Resource allocation policy](resources-allocation-policy/)). The jobs on UV2000 are isolated from each other by cpusets, so that a job by one user may not utilize CPU or memory allocated to a job by another user. Full chunks are always allocated; a job may only use resources of the NUMA nodes allocated to it.
 
 ```bash
  $ qsub -A OPEN-0-0 -q qfat -l select=14 ./myjob
 ```
 
-In this example, we allocate all 14 NUMA nodes (corresponds to 14 chunks), 112 cores of the SGI UV2000 node  for 72 hours. Jobscript myjob will be executed on the node uv1.
+In this example, we allocate all 14 NUMA nodes (corresponds to 14 chunks), 112 cores of the SGI UV2000 node for 72 hours. Jobscript myjob will be executed on the node uv1.
 
 ```bash
 $ qsub -A OPEN-0-0 -q qfat -l select=1:mem=2000GB ./myjob
@@ -138,7 +138,7 @@ Nodes directly connected to the same InifiBand switch can communicate most effic
 !!! note
     We recommend allocating compute nodes of a single switch when the best possible computational network performance is required to run job efficiently.
 
-Nodes directly connected to the one InifiBand switch can be allocated using node grouping on PBS resource attribute switch. 
+Nodes directly connected to one InfiniBand switch can be allocated using node grouping on the PBS resource attribute switch.
 
 In this example, we request all 9 nodes directly connected to the same switch using node grouping placement.
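 
 A sketch of such a request (the project ID and queue are illustrative):
 
 ```bash
 $ qsub -A OPEN-0-0 -q qprod -l select=9:ncpus=24 -l place=group=switch ./myjob
 ```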
 
@@ -249,12 +249,12 @@ Example:
 $ qstat -a
 
 srv11:
-                                                            Req'd  Req'd   Elap
-Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
+                                                            Req'd Req'd   Elap
+Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time S Time
 --------------- -------- --  |---|---| ------ --- --- ------ ----- - -----
-16287.isrv5     user1    qlong    job1         6183   4  64    --  144:0 R 38:25
-16468.isrv5     user1    qlong    job2         8060   4  64    --  144:0 R 17:44
-16547.isrv5     user2    qprod    job3x       13516   2  32    --  48:00 R 00:58
+16287.isrv5     user1    qlong    job1         6183   4 64    --  144:0 R 38:25
+16468.isrv5     user1    qlong    job2         8060   4 64    --  144:0 R 17:44
+16547.isrv5     user2    qprod    job3x       13516   2 32    --  48:00 R 00:58
 ```
 
 In this example, user1 and user2 are running jobs named job1, job2 and job3x. The jobs job1 and job2 are using 4 nodes, 16 cores per node each. The job1 has already been running for 38 hours and 25 minutes, job2 for 17 hours 44 minutes. The job1 has already consumed 64 x 38.41 = 2458.6 core hours. The job3x has already consumed 0.96 x 32 = 30.93 core hours. These consumed core hours will be accounted to the respective project accounts, regardless of whether the allocated cores were actually used for computations.
@@ -350,10 +350,10 @@ $ qsub -q qexp -l select=4:ncpus=24 -N Name0 ./myjob
 $ qstat -n -u username
 
 isrv5:
-                                                            Req'd  Req'd   Elap
-Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
+                                                            Req'd Req'd   Elap
+Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time S Time
 --------------- -------- --  |---|---| ------ --- --- ------ ----- - -----
-15209.isrv5     username qexp     Name0        5530   4  96    --  01:00 R 00:00
+15209.isrv5     username qexp     Name0        5530   4 96    --  01:00 R 00:00
    r21u01n577/0*24+r21u02n578/0*24+r21u03n579/0*24+r21u04n580/0*24
 ```
 
diff --git a/docs.it4i/salomon/network.md b/docs.it4i/salomon/network.md
index 005e6d6600147d5b7ff344017c377368338d295f..2f3f8a09f474c12ffe961781c39ea6fbea260a46 100644
--- a/docs.it4i/salomon/network.md
+++ b/docs.it4i/salomon/network.md
@@ -19,10 +19,10 @@ The network provides **2170MB/s** transfer rates via the TCP connection (single
 ```bash
 $ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob
 $ qstat -n -u username
-                                                            Req'd  Req'd   Elap
-Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
+                                                            Req'd Req'd   Elap
+Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time S Time
 --------------- -------- --  |---|---| ------ --- --- ------ ----- - -----
-15209.isrv5     username qexp     Name0        5530   4  96    --  01:00 R 00:00
+15209.isrv5     username qexp     Name0        5530   4 96    --  01:00 R 00:00
    r4i1n0/0*24+r4i1n1/0*24+r4i1n2/0*24+r4i1n3/0*24
 ```
 
@@ -32,7 +32,7 @@ In this example, we access the node r4i1n0 by Infiniband network via the ib0 int
 $ ssh 10.17.35.19
 ```
 
-In this example, we  get
+In this example, we get
 information about the InfiniBand network.
 
 ```bash
diff --git a/docs.it4i/salomon/prace.md b/docs.it4i/salomon/prace.md
index 76b9186711bd7477e714a2621d60eb46d4ef290d..8406726ba38f1ccf662f4e7f654f66107b2442b2 100644
--- a/docs.it4i/salomon/prace.md
+++ b/docs.it4i/salomon/prace.md
@@ -28,11 +28,11 @@ The user will need a valid certificate and to be present in the PRACE LDAP (plea
 
 Most of the information needed by PRACE users accessing the Salomon TIER-1 system can be found here:
 
-*   [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)
-*   [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)
-*   [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh)
-*   [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details)
-*   [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer)
+* [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)
+* [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)
+* [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh)
+* [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details)
+* [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer)
 
 Before you start to use any of the services, don't forget to create a proxy certificate from your certificate:
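 
 A sketch using the standard Globus tooling (assuming the Grid proxy utilities are available in your environment):
 
 ```bash
     $ grid-proxy-init
 ```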
 
@@ -48,7 +48,7 @@ To check whether your proxy certificate is still valid (by default it's valid 12
 
 To access the Salomon cluster, two login nodes running the GSI SSH service are available. The service is available from the public Internet as well as from the internal PRACE network (accessible only from other PRACE partners).
 
-**Access from PRACE network:**
+#### Access from PRACE network:
 
 It is recommended to use the single DNS name salomon-prace.it4i.cz, which is distributed between the two login nodes. If needed, the user can log in directly to one of the login nodes. The addresses are:
 
 When logging in from another PRACE system, the prace_service script can be used:
     $ gsissh `prace_service -i -s salomon`
 ```
 
-**Access from public Internet:**
+#### Access from public Internet:
 
 It is recommended to use the single DNS name salomon.it4i.cz, which is distributed between the two login nodes. If needed, the user can log in directly to one of the login nodes. The addresses are:
 
@@ -127,7 +127,7 @@ Apart from the standard mechanisms, for PRACE users to transfer data to/from Sal
 
 There's one control server and three backend servers for striping and/or backup in case one of them fails.
 
-**Access from PRACE network:**
+### Access from PRACE network
 
 | Login address                 | Port | Node role                   |
 | ----------------------------- | ---- | --------------------------- |
@@ -142,7 +142,7 @@ Copy files **to** Salomon by running the following commands on your local machin
     $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp-prace.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_
 ```
 
-Or by using  prace_service script:
+Or by using the prace_service script:
 
 ```bash
     $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -i -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_
@@ -154,13 +154,13 @@ Copy files **from** Salomon:
     $ globus-url-copy gsiftp://gridftp-prace.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
 ```
 
-Or by using  prace_service script:
+Or by using the prace_service script:
 
 ```bash
     $ globus-url-copy gsiftp://`prace_service -i -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
 ```
 
-**Access from public Internet:**
+### Access from public Internet
 
 | Login address           | Port | Node role                   |
 | ----------------------- | ---- | --------------------------- |
@@ -175,7 +175,7 @@ Copy files **to** Salomon by running the following commands on your local machin
     $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_
 ```
 
-Or by using  prace_service script:
+Or by using the prace_service script:
 
 ```bash
     $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -e -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_
@@ -187,7 +187,7 @@ Copy files **from** Salomon:
     $ globus-url-copy gsiftp://gridftp.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
 ```
 
-Or by using  prace_service script:
+Or by using the prace_service script:
 
 ```bash
     $ globus-url-copy gsiftp://`prace_service -e -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
diff --git a/docs.it4i/salomon/resource-allocation-and-job-execution.md b/docs.it4i/salomon/resource-allocation-and-job-execution.md
index 940e43a91b0e389a2758eae8ac3d51ff1e9f2f08..a28c2a63a19b0de082214d7e2a2e93da91b0d0e8 100644
--- a/docs.it4i/salomon/resource-allocation-and-job-execution.md
+++ b/docs.it4i/salomon/resource-allocation-and-job-execution.md
@@ -6,12 +6,12 @@ To run a [job](job-submission-and-execution/), [computational resources](resourc
 
 The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. [The Fair-share](job-priority/) at Salomon ensures that individual users may consume an approximately equal amount of resources per week. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following queues are available to Salomon users:
 
-*   **qexp**, the Express queue
-*   **qprod**, the Production queue
-*   **qlong**, the Long queue
-*   **qmpp**, the Massively parallel queue
-*   **qfat**, the queue to access SMP UV2000 machine
-*   **qfree**, the Free resource utilization queue
+* **qexp**, the Express queue
+* **qprod**, the Production queue
+* **qlong**, the Long queue
+* **qmpp**, the Massively parallel queue
+* **qfat**, the queue to access SMP UV2000 machine
+* **qfree**, the Free resource utilization queue
 
 !!! note
     Check the queue status at <https://extranet.it4i.cz/rsweb/salomon/>
diff --git a/docs.it4i/salomon/resources-allocation-policy.md b/docs.it4i/salomon/resources-allocation-policy.md
index ab5f32a4f3a6327d7cb262ac06deff646783a8db..d705a527d4ed1e0988a4c76575687c23239e41de 100644
--- a/docs.it4i/salomon/resources-allocation-policy.md
+++ b/docs.it4i/salomon/resources-allocation-policy.md
@@ -1,7 +1,5 @@
 # Resources Allocation Policy
 
-## Resources Allocation Policy
-
 The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. The fair-share at Salomon ensures that individual users may consume an approximately equal amount of resources per week. Detailed information can be found in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:
 
 !!! note
@@ -20,18 +18,18 @@ The resources are allocated to the job in a fair-share fashion, subject to const
 !!! note
     **The qfree queue is not free of charge**. [Normal accounting](resources-allocation-policy/#resources-accounting-policy) applies. However, it allows for utilization of free resources, once a Project has exhausted all its allocated computational resources. This does not apply to Director's Discretion projects (DD projects) by default. Usage of qfree after exhaustion of a DD project's computational resources is allowed upon request for this queue.
 
-*   **qexp**, the Express queue: This queue is dedicated for testing and running very small jobs. It is not required to specify a project to enter the qexp. There are 2 nodes always reserved for this queue (w/o accelerator), maximum 8 nodes are available via the qexp for a particular user. The nodes may be allocated on per core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
-*   **qprod**, the Production queue: This queue is intended for normal production runs. It is required that active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, however only 86 per job. Full nodes, 24 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
-*   **qlong**, the Long queue: This queue is intended for long production runs. It is required that active project with nonzero remaining resources is specified to enter the qlong. Only 336 nodes without acceleration may be accessed via the qlong queue. Full nodes, 24 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times of the standard qprod time - 3 \* 48 h)
-*   **qmpp**, the massively parallel queue. This queue is intended for massively parallel runs. It is required that active project with nonzero remaining resources is specified to enter the qmpp. All nodes may be accessed via the qmpp queue. Full nodes, 24 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it.  The maximum runtime in qmpp is 4 hours. An PI needs explicitly ask support for authorization to enter the queue for all users associated to her/his Project.
-*   **qfat**, the UV2000 queue. This queue is dedicated to access the fat SGI UV2000 SMP machine. The machine (uv1) has 112 Intel IvyBridge cores at 3.3GHz and 3.25TB RAM. An PI needs explicitly ask support for authorization to enter the queue for all users associated to her/his Project.
-*   **qfree**, the Free resource queue: The queue qfree is intended for utilization of free resources, after a Project exhausted all its allocated computational resources (Does not apply to DD projects by default. DD projects have to request for persmission on qfree after exhaustion of computational resources.). It is required that active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 24 cores per node are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
-*   **qviz**, the Visualization queue: Intended for pre-/post-processing using OpenGL accelerated graphics. Currently when accessing the node, each user gets 4 cores of a CPU allocated, thus approximately 73 GB of RAM and 1/7 of the GPU capacity (default "chunk"). If more GPU power or RAM is required, it is recommended to allocate more chunks (with 4 cores each) up to one whole node per user, so that all 28 cores, 512 GB RAM and whole GPU is exclusive. This is currently also the maximum allowed allocation per one user. One hour of work is allocated by default, the user may ask for 2 hours maximum.
+* **qexp**, the Express queue: This queue is dedicated to testing and running very small jobs. It is not required to specify a project to enter the qexp. There are 2 nodes always reserved for this queue (w/o accelerator); a maximum of 8 nodes are available via the qexp for a particular user. The nodes may be allocated on a per-core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
+* **qprod**, the Production queue: This queue is intended for normal production runs. It is required that an active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, however only 86 per job. Full nodes, 24 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
+* **qlong**, the Long queue: This queue is intended for long production runs. It is required that an active project with nonzero remaining resources is specified to enter the qlong. Only 336 nodes without acceleration may be accessed via the qlong queue. Full nodes, 24 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times the standard qprod time, 3 \* 48 h).
+* **qmpp**, the massively parallel queue. This queue is intended for massively parallel runs. It is required that an active project with nonzero remaining resources is specified to enter the qmpp. All nodes may be accessed via the qmpp queue. Full nodes, 24 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qmpp is 4 hours. A PI needs to explicitly ask support for authorization to enter the queue for all users associated with her/his Project.
+* **qfat**, the UV2000 queue. This queue is dedicated to accessing the fat SGI UV2000 SMP machine. The machine (uv1) has 112 Intel IvyBridge cores at 3.3 GHz and 3.25 TB RAM. A PI needs to explicitly ask support for authorization to enter the queue for all users associated with her/his Project.
+* **qfree**, the Free resource queue: The queue qfree is intended for utilization of free resources, after a Project has exhausted all its allocated computational resources (this does not apply to DD projects by default; DD projects have to request permission to use qfree after exhaustion of their computational resources). It is required that an active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 24 cores per node, are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
+* **qviz**, the Visualization queue: Intended for pre-/post-processing using OpenGL accelerated graphics. Currently when accessing the node, each user gets 4 cores of a CPU allocated, thus approximately 73 GB of RAM and 1/7 of the GPU capacity (default "chunk"). If more GPU power or RAM is required, it is recommended to allocate more chunks (with 4 cores each) up to one whole node per user, so that all 28 cores, 512 GB RAM and whole GPU is exclusive. This is currently also the maximum allowed allocation per one user. One hour of work is allocated by default, the user may ask for 2 hours maximum.
 
 !!! note
     To access a node with a Xeon Phi co-processor, the user needs to specify that in the [job submission select statement](job-submission-and-execution/).
 
-### Notes
+## Notes
 
 The job wall clock time defaults to **half the maximum time**, see the table above. Longer wall time limits can be [set manually, see examples](job-submission-and-execution/).
 
@@ -39,7 +37,7 @@ Jobs that exceed the reserved wall clock time (Req'd Time) get killed automatica
 
 Salomon users may check current queue configuration at <https://extranet.it4i.cz/rsweb/salomon/queues>.
 
-### Queue Status
+## Queue Status
 
 !!! note
     Check the status of jobs, queues and compute nodes at [https://extranet.it4i.cz/rsweb/salomon/](https://extranet.it4i.cz/rsweb/salomon)
@@ -61,9 +59,9 @@ Usage: rspbs [options]
 Options:
   --version             show program's version number and exit
   -h, --help            show this help message and exit
-  --get-server-details  Print server
+  --get-server-details Print server
   --get-queues          Print queues
-  --get-queues-details  Print queues details
+  --get-queues-details Print queues details
   --get-reservations    Print reservations
   --get-reservations-details
                         Print reservations details
@@ -94,7 +92,7 @@ Options:
   --get-user-ncpus      Print number of allocated ncpus per user
   --get-qlist-nodes     Print qlist nodes
   --get-qlist-nodeset   Print qlist nodeset
-  --get-ibswitch-nodes  Print ibswitch nodes
+  --get-ibswitch-nodes Print ibswitch nodes
   --get-ibswitch-nodeset
                         Print ibswitch nodeset
   --summary             Print summary
diff --git a/docs.it4i/salomon/shell-and-data-access.md b/docs.it4i/salomon/shell-and-data-access.md
index fef1df3bca4bf1da985055af4bb386667c7a076e..93b68abaaf23e38eeff31e220f66a39d22456a8b 100644
--- a/docs.it4i/salomon/shell-and-data-access.md
+++ b/docs.it4i/salomon/shell-and-data-access.md
@@ -41,18 +41,18 @@ On **Windows**, use [PuTTY ssh client](../get-started-with-it4innovations/access
 After logging in, you will see the command prompt:
 
 ```bash
-                    _____       _                             
-                   / ____|     | |                            
-                  | (___   __ _| | ___  _ __ ___   ___  _ __  
-                   \___ \ / _` | |/ _ \| '_ ` _ \ / _ \| '_ \ 
+                    _____       _
+                   / ____|     | |
+                  | (___   __ _| | ___  _ __ ___   ___  _ __
+                   \___ \ / _` | |/ _ \| '_ ` _ \ / _ \| '_ \
                    ____) | (_| | | (_) | | | | | | (_) | | | |
                   |_____/ \__,_|_|\___/|_| |_| |_|\___/|_| |_|
-                                                              
+
 
                         http://www.it4i.cz/?lang=en
 
 
-Last login: Tue Jul  9 15:57:38 2013 from your-host.example.com
+Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com
 [username@login2.salomon ~]$
 ```
 
@@ -140,7 +140,7 @@ Pick some unused port on Salomon login node  (for example 6000) and establish th
 local $ ssh -R 6000:remote.host.com:1234 salomon.it4i.cz
 ```
 
-In this example, we establish port forwarding between port 6000 on Salomon and  port 1234 on the remote.host.com. By accessing localhost:6000 on Salomon, an application will see response of remote.host.com:1234. The traffic will run via users local workstation.
+In this example, we establish port forwarding between port 6000 on Salomon and port 1234 on remote.host.com. By accessing localhost:6000 on Salomon, an application will see the response of remote.host.com:1234. The traffic will run via the user's local workstation.
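+
+Assuming the forwarding above is active and the nc utility is available, the tunnel can be checked from the Salomon side, for example (illustrative only):
+
+```bash
+$ nc -vz localhost 6000
+```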
 
 Port forwarding may be done **using PuTTY** as well. On the PuTTY Configuration screen, load your Salomon configuration first. Then go to Connection->SSH->Tunnels to set up the port forwarding. Click Remote radio button. Insert 6000 to Source port textbox. Insert remote.host.com:1234. Click Add button, then Open.
 
@@ -187,13 +187,13 @@ Once the proxy server is running, establish ssh port forwarding from Salomon to
 local $ ssh -R 6000:localhost:1080 salomon.it4i.cz
 ```
 
-Now, configure the applications proxy settings to **localhost:6000**. Use port forwarding  to access the [proxy server from compute nodes](#port-forwarding-from-compute-nodes) as well.
+Now, configure the application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](#port-forwarding-from-compute-nodes) as well.
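+
+As an illustration only (assuming curl is available), an application can be pointed at the SOCKS proxy like this:
+
+```bash
+$ curl --socks5 localhost:6000 http://www.it4i.cz/
+```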
 
 ## Graphical User Interface
 
-*   The [X Window system](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) is a principal way to get GUI access to the clusters.
-*   The [Virtual Network Computing](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer).
+* The [X Window system](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) is a principal way to get GUI access to the clusters.
+* The [Virtual Network Computing](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer).
 
 ## VPN Access
 
-*   Access to IT4Innovations internal resources via [VPN](../get-started-with-it4innovations/accessing-the-clusters/vpn-access/).
+* Access to IT4Innovations internal resources via [VPN](../get-started-with-it4innovations/accessing-the-clusters/vpn-access/).
diff --git a/docs.it4i/salomon/software/ansys/ansys-cfx.md b/docs.it4i/salomon/software/ansys/ansys-cfx.md
index 741e565503fa19b82b6ef41c36eec69ed9c21128..0eb52d3e6f29fcf4e8ab2e37c27a7166faea2247 100644
--- a/docs.it4i/salomon/software/ansys/ansys-cfx.md
+++ b/docs.it4i/salomon/software/ansys/ansys-cfx.md
@@ -34,7 +34,7 @@ procs_per_host=1
 hl=""
 for host in `cat $PBS_NODEFILE`
 do
- if [ "$hl" = "" ]
+ if ["$hl" = "" ]
  then hl="$host:$procs_per_host"
  else hl="${hl}:$host:$procs_per_host"
  fi
diff --git a/docs.it4i/salomon/software/ansys/ansys-fluent.md b/docs.it4i/salomon/software/ansys/ansys-fluent.md
index aefcfbf77da974a6ec14110ef571e7dcf1f8ba36..33e711b285cc8066604c43ebb7c943dcb1294fb6 100644
--- a/docs.it4i/salomon/software/ansys/ansys-fluent.md
+++ b/docs.it4i/salomon/software/ansys/ansys-fluent.md
@@ -3,7 +3,7 @@
 [ANSYS Fluent](http://www.ansys.com/products/fluids/ansys-fluent)
 software contains the broad physical modeling capabilities needed to model flow, turbulence, heat transfer, and reactions for industrial applications ranging from air flow over an aircraft wing to combustion in a furnace, from bubble columns to oil platforms, from blood flow to semiconductor manufacturing, and from clean room design to wastewater treatment plants. Special models that give the software the ability to model in-cylinder combustion, aeroacoustics, turbomachinery, and multiphase systems have served to broaden its reach.
 
-1.  Common way to run Fluent over pbs file
+1. Common way to run Fluent via a PBS file
 
 To run ANSYS Fluent in batch mode you can utilize/modify the default fluent.pbs script and execute it via the qsub command.
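+
+For example, assuming the script is saved as fluent.pbs in the working directory:
+
+```bash
+$ qsub fluent.pbs
+```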
 
@@ -56,17 +56,17 @@ Journal file with definition of the input geometry and boundary conditions and d
 
 The appropriate dimension of the problem has to be set by parameter (2d/3d).
 
-2.  Fast way to run Fluent from command line
+1. Fast way to run Fluent from the command line
 
 ```bash
 fluent solver_version [FLUENT_options] -i journal_file -pbs
 ```
 
-This syntax will start the ANSYS FLUENT job under PBS Professional using the  qsub command in a batch manner. When resources are available, PBS Professional will start the job and return a job ID, usually in the form of _job_ID.hostname_. This job ID can then be used to query, control, or stop the job using standard PBS Professional commands, such as  qstat or qdel. The job will be run out of the current working directory, and all output will be written to the file fluent.o _job_ID_.
+This syntax will start the ANSYS FLUENT job under PBS Professional using the qsub command in a batch manner. When resources are available, PBS Professional will start the job and return a job ID, usually in the form of _job_ID.hostname_. This job ID can then be used to query, control, or stop the job using standard PBS Professional commands, such as qstat or qdel. The job will be run out of the current working directory, and all output will be written to the file fluent.o _job_ID_.
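+
+An illustrative instance of the syntax above (the solver version and journal file name are placeholders):
+
+```bash
+$ fluent 3d -i journal.jou -pbs
+```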
 
-3.  Running Fluent via user's config file
+1. Running Fluent via user's config file
 
-The sample script uses a configuration file called pbs_fluent.conf  if no command line arguments are present. This configuration file should be present in the directory from which the jobs are submitted (which is also the directory in which the jobs are executed). The following is an example of what the content of  pbs_fluent.conf can be:
+The sample script uses a configuration file called pbs_fluent.conf if no command line arguments are present. This configuration file should be present in the directory from which the jobs are submitted (which is also the directory in which the jobs are executed). The following is an example of what the content of pbs_fluent.conf can be:
 
 ```bash
 input="example_small.flin"
@@ -100,7 +100,7 @@ To run ANSYS Fluent in batch mode with user's config file you can utilize/modify
  cd $PBS_O_WORKDIR
 
  #We assume that if they didn’t specify arguments then they should use the
- #config file if [ "xx${input}${case}${mpp}${fluent_args}zz" = "xxzz" ]; then
+ #config file
+ if [ "xx${input}${case}${mpp}${fluent_args}zz" = "xxzz" ]; then
    if [ -f pbs_fluent.conf ]; then
      . pbs_fluent.conf
    else
@@ -141,7 +141,7 @@ To run ANSYS Fluent in batch mode with user's config file you can utilize/modify
 
 It runs the jobs out of the directory from which they are submitted (PBS_O_WORKDIR).
 
-4.  Running Fluent in parralel
+1. Running Fluent in parallel
 
 Fluent could be run in parallel only under Academic Research license. To do so this ANSYS Academic Research license must be placed before ANSYS CFD license in user preferences. To make this change anslic_admin utility should be run
 
diff --git a/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md b/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md
index 65423033146af8c27fe47a513aad0e51543128c3..5d49022a7dd18dc85af28e501cfa08b31be28272 100644
--- a/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md
+++ b/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md
@@ -1,6 +1,6 @@
 # ANSYS LS-DYNA
 
-**[ANSYSLS-DYNA](http://www.ansys.com/products/structures/ansys-ls-dyna)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern  graphical user environment.
+**[ANSYS LS-DYNA](http://www.ansys.com/products/structures/ansys-ls-dyna)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able to take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern graphical user environment.
 
 To run ANSYS LS-DYNA in batch mode you can utilize/modify the default ansysdyna.pbs script and execute it via the qsub command.
 
@@ -39,7 +39,7 @@ procs_per_host=1
 hl=""
 for host in `cat $PBS_NODEFILE`
 do
- if [ "$hl" = "" ]
+ if ["$hl" = "" ]
  then hl="$host:$procs_per_host"
  else hl="${hl}:$host:$procs_per_host"
  fi
diff --git a/docs.it4i/salomon/software/ansys/ansys-mechanical-apdl.md b/docs.it4i/salomon/software/ansys/ansys-mechanical-apdl.md
index 70448b8903b0ab067691d3d7d4442f141b7a084e..c1562c1c23ca09fe308536c45f1c903ab8384b3e 100644
--- a/docs.it4i/salomon/software/ansys/ansys-mechanical-apdl.md
+++ b/docs.it4i/salomon/software/ansys/ansys-mechanical-apdl.md
@@ -35,7 +35,7 @@ procs_per_host=1
 hl=""
 for host in `cat $PBS_NODEFILE`
 do
- if [ "$hl" = "" ]
+ if ["$hl" = "" ]
  then hl="$host:$procs_per_host"
  else hl="${hl}:$host:$procs_per_host"
  fi
diff --git a/docs.it4i/salomon/software/ansys/ansys.md b/docs.it4i/salomon/software/ansys/ansys.md
index c72e964629ed098f6413c5ccc491ef5920220363..f93524a3e580f8a5c83302f8d1cd9997bb68c2be 100644
--- a/docs.it4i/salomon/software/ansys/ansys.md
+++ b/docs.it4i/salomon/software/ansys/ansys.md
@@ -2,7 +2,7 @@
 
 **[SVS FEM](http://www.svsfem.cz/)** as **[ANSYS Channel partner](http://www.ansys.com/)** for Czech Republic provided all ANSYS licenses for ANSELM cluster and supports of all ANSYS Products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) to IT staff and ANSYS users. If you are challenging to problem of ANSYS functionality contact please [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM)
 
-Anselm provides as commercial as academic variants. Academic variants are distinguished by "**Academic...**" word in the name of  license or by two letter preposition "**aa\_**" in the license feature name. Change of license is realized on command line respectively directly in user's pbs file (see individual products). [ More  about licensing here](licensing/)
+Anselm provides both commercial and academic variants. Academic variants are distinguished by the word "**Academic...**" in the name of the license or by the two-letter prefix "**aa\_**" in the license feature name. The license can be changed on the command line or directly in the user's PBS file (see the individual products). [More about licensing here](licensing/)
 
 To load the latest version of any ANSYS product (Mechanical, Fluent, CFX, MAPDL,...) load the module:
 
diff --git a/docs.it4i/salomon/software/ansys/licensing.md b/docs.it4i/salomon/software/ansys/licensing.md
index 8709ba86478c7bb933fb2cd586cd9eaa3c8729bc..04ff6513349ccede25a0846dd21227251e954732 100644
--- a/docs.it4i/salomon/software/ansys/licensing.md
+++ b/docs.it4i/salomon/software/ansys/licensing.md
@@ -2,9 +2,9 @@
 
 ## ANSYS Licence Can Be Used By:
 
-*   all persons in the carrying out of the CE IT4Innovations Project (In addition to the primary licensee, which is VSB - Technical University of Ostrava, users are CE IT4Innovations third parties - CE IT4Innovations project partners, particularly the University of Ostrava, the Brno University of Technology - Faculty of Informatics, the Silesian University in Opava, Institute of Geonics AS CR.)
-*   all persons who have a valid license
-*   students of the Technical University
+* all persons in the carrying out of the CE IT4Innovations Project (In addition to the primary licensee, which is VSB - Technical University of Ostrava, users are CE IT4Innovations third parties - CE IT4Innovations project partners, particularly the University of Ostrava, the Brno University of Technology - Faculty of Informatics, the Silesian University in Opava, Institute of Geonics AS CR.)
+* all persons who have a valid license
+* students of the Technical University
 
 ## ANSYS Academic Research
 
@@ -16,8 +16,8 @@ The licence intended to be used for science and research, publications, students
 
 ## Available Versions
 
-*   16.1
-*   17.0
+* 16.1
+* 17.0
 
 ## License Preferences
 
diff --git a/docs.it4i/salomon/software/ansys/setting-license-preferences.md b/docs.it4i/salomon/software/ansys/setting-license-preferences.md
index f3dfad8e2daa5212d4620b5b17155e60b5aadad7..fe14541d46b1fe4cab38eb7b883c58e40e03dd32 100644
--- a/docs.it4i/salomon/software/ansys/setting-license-preferences.md
+++ b/docs.it4i/salomon/software/ansys/setting-license-preferences.md
@@ -2,7 +2,7 @@
 
 Some ANSYS tools allow you to explicitly specify usage of academic or commercial licenses in the command line (eg. ansys161 -p aa_r to select Academic Research license). However, we have observed that not all tools obey this option and choose commercial license.
 
-Thus you need to configure preferred license order with ANSLIC_ADMIN. Please follow these steps and move Academic Research license to the  top or bottom of the list accordingly.
+Thus you need to configure preferred license order with ANSLIC_ADMIN. Please follow these steps and move Academic Research license to the top or bottom of the list accordingly.
 
 Launch the ANSLIC_ADMIN utility in a graphical environment:
 
diff --git a/docs.it4i/salomon/software/chemistry/molpro.md b/docs.it4i/salomon/software/chemistry/molpro.md
index eb0ffb2db199699a64c6aa853417068fc0d773d9..ab53760cda8c5efa186e93d7ab9d4b4032979f53 100644
--- a/docs.it4i/salomon/software/chemistry/molpro.md
+++ b/docs.it4i/salomon/software/chemistry/molpro.md
@@ -33,7 +33,7 @@ Compilation parameters are default:
 Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details.
 
 !!! note
-    The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend to use MPI parallelization only. This can be achieved by passing option  mpiprocs=16:ompthreads=1 to PBS.
+    The OpenMP parallelization in Molpro is limited and has been observed to scale poorly. We therefore recommend using MPI parallelization only. This can be achieved by passing the option mpiprocs=16:ompthreads=1 to PBS.
 
 You are advised to use the -d option to point to a directory in [SCRATCH filesystem](../../storage/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch filesystem.
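+
+A minimal sketch of such an invocation (the scratch path and input file name are placeholders):
+
+```bash
+$ molpro -d /scratch/work/user/$USER/molpro_tmp input.com
+```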
 
diff --git a/docs.it4i/salomon/software/chemistry/nwchem.md b/docs.it4i/salomon/software/chemistry/nwchem.md
index be4e95f060601b302b8a4a9a677672256001ba5e..a26fc701ee44585dbab1f942685b92d9190adfa5 100644
--- a/docs.it4i/salomon/software/chemistry/nwchem.md
+++ b/docs.it4i/salomon/software/chemistry/nwchem.md
@@ -1,7 +1,5 @@
 # NWChem
 
-**High-Performance Computational Chemistry**
-
 ## Introduction
 
 NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters.
@@ -12,8 +10,8 @@ NWChem aims to provide its users with computational chemistry tools that are sca
 
 The following versions are currently installed:
 
-*   NWChem/6.3.revision2-2013-10-17-Python-2.7.8, current release. Compiled with Intel compilers, MKL and Intel MPI
-*   NWChem/6.5.revision26243-intel-2015b-2014-09-10-Python-2.7.8
+* NWChem/6.3.revision2-2013-10-17-Python-2.7.8, current release. Compiled with Intel compilers, MKL and Intel MPI
+* NWChem/6.5.revision26243-intel-2015b-2014-09-10-Python-2.7.8
 
 For a current list of installed versions, execute:
 
@@ -41,5 +39,5 @@ The recommend to use version 6.5. Version 6.3 fails on Salomon nodes with accele
 
 Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and in the input file set the following directives :
 
-*   MEMORY : controls the amount of memory NWChem will use
-*   SCRATCH_DIR : set this to a directory in [SCRATCH filesystem](../../storage/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, eg. "scf direct"
+* MEMORY : controls the amount of memory NWChem will use
+* SCRATCH_DIR : set this to a directory in [SCRATCH filesystem](../../storage/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct" (an illustrative input fragment follows below)
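+
+A purely illustrative input fragment combining these directives (the memory amount and scratch path are placeholders):
+
+```bash
+memory total 2000 mb
+scratch_dir /scratch/work/user/username/nwchem_tmp
+scf
+  direct
+end
+```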
diff --git a/docs.it4i/salomon/software/chemistry/phono3py.md b/docs.it4i/salomon/software/chemistry/phono3py.md
index b1f575f6758847682894d66598b48a04d725a237..3f747d23bc9775f80137c0d6e4f1b4821d97439b 100644
--- a/docs.it4i/salomon/software/chemistry/phono3py.md
+++ b/docs.it4i/salomon/software/chemistry/phono3py.md
@@ -27,14 +27,14 @@ $ cat POSCAR
  Si
    8
 Direct
-   0.8750000000000000  0.8750000000000000  0.8750000000000000
-   0.8750000000000000  0.3750000000000000  0.3750000000000000
-   0.3750000000000000  0.8750000000000000  0.3750000000000000
-   0.3750000000000000  0.3750000000000000  0.8750000000000000
-   0.1250000000000000  0.1250000000000000  0.1250000000000000
-   0.1250000000000000  0.6250000000000000  0.6250000000000000
-   0.6250000000000000  0.1250000000000000  0.6250000000000000
-   0.6250000000000000  0.6250000000000000  0.1250000000000000
+   0.8750000000000000 0.8750000000000000 0.8750000000000000
+   0.8750000000000000 0.3750000000000000 0.3750000000000000
+   0.3750000000000000 0.8750000000000000 0.3750000000000000
+   0.3750000000000000 0.3750000000000000 0.8750000000000000
+   0.1250000000000000 0.1250000000000000 0.1250000000000000
+   0.1250000000000000 0.6250000000000000 0.6250000000000000
+   0.6250000000000000 0.1250000000000000 0.6250000000000000
+   0.6250000000000000 0.6250000000000000 0.1250000000000000
 ```
 
 ### Generating Displacement Using 2 by 2 by 2 Supercell for Both Second and Third Order Force Constants
@@ -47,15 +47,15 @@ $ phono3py -d --dim="2 2 2" -c POSCAR
 disp_fc3.yaml, and the structure input files with this displacements are POSCAR-00XXX, where the XXX=111.
 
 ```bash
-disp_fc3.yaml  POSCAR-00008  POSCAR-00017  POSCAR-00026  POSCAR-00035  POSCAR-00044  POSCAR-00053  POSCAR-00062  POSCAR-00071  POSCAR-00080  POSCAR-00089  POSCAR-00098  POSCAR-00107
-POSCAR         POSCAR-00009  POSCAR-00018  POSCAR-00027  POSCAR-00036  POSCAR-00045  POSCAR-00054  POSCAR-00063  POSCAR-00072  POSCAR-00081  POSCAR-00090  POSCAR-00099  POSCAR-00108
-POSCAR-00001   POSCAR-00010  POSCAR-00019  POSCAR-00028  POSCAR-00037  POSCAR-00046  POSCAR-00055  POSCAR-00064  POSCAR-00073  POSCAR-00082  POSCAR-00091  POSCAR-00100  POSCAR-00109
-POSCAR-00002   POSCAR-00011  POSCAR-00020  POSCAR-00029  POSCAR-00038  POSCAR-00047  POSCAR-00056  POSCAR-00065  POSCAR-00074  POSCAR-00083  POSCAR-00092  POSCAR-00101  POSCAR-00110
-POSCAR-00003   POSCAR-00012  POSCAR-00021  POSCAR-00030  POSCAR-00039  POSCAR-00048  POSCAR-00057  POSCAR-00066  POSCAR-00075  POSCAR-00084  POSCAR-00093  POSCAR-00102  POSCAR-00111
-POSCAR-00004   POSCAR-00013  POSCAR-00022  POSCAR-00031  POSCAR-00040  POSCAR-00049  POSCAR-00058  POSCAR-00067  POSCAR-00076  POSCAR-00085  POSCAR-00094  POSCAR-00103
-POSCAR-00005   POSCAR-00014  POSCAR-00023  POSCAR-00032  POSCAR-00041  POSCAR-00050  POSCAR-00059  POSCAR-00068  POSCAR-00077  POSCAR-00086  POSCAR-00095  POSCAR-00104
-POSCAR-00006   POSCAR-00015  POSCAR-00024  POSCAR-00033  POSCAR-00042  POSCAR-00051  POSCAR-00060  POSCAR-00069  POSCAR-00078  POSCAR-00087  POSCAR-00096  POSCAR-00105
-POSCAR-00007   POSCAR-00016  POSCAR-00025  POSCAR-00034  POSCAR-00043  POSCAR-00052  POSCAR-00061  POSCAR-00070  POSCAR-00079  POSCAR-00088  POSCAR-00097  POSCAR-00106
+disp_fc3.yaml POSCAR-00008 POSCAR-00017 POSCAR-00026 POSCAR-00035 POSCAR-00044 POSCAR-00053 POSCAR-00062 POSCAR-00071 POSCAR-00080 POSCAR-00089 POSCAR-00098 POSCAR-00107
+POSCAR         POSCAR-00009 POSCAR-00018 POSCAR-00027 POSCAR-00036 POSCAR-00045 POSCAR-00054 POSCAR-00063 POSCAR-00072 POSCAR-00081 POSCAR-00090 POSCAR-00099 POSCAR-00108
+POSCAR-00001   POSCAR-00010 POSCAR-00019 POSCAR-00028 POSCAR-00037 POSCAR-00046 POSCAR-00055 POSCAR-00064 POSCAR-00073 POSCAR-00082 POSCAR-00091 POSCAR-00100 POSCAR-00109
+POSCAR-00002   POSCAR-00011 POSCAR-00020 POSCAR-00029 POSCAR-00038 POSCAR-00047 POSCAR-00056 POSCAR-00065 POSCAR-00074 POSCAR-00083 POSCAR-00092 POSCAR-00101 POSCAR-00110
+POSCAR-00003   POSCAR-00012 POSCAR-00021 POSCAR-00030 POSCAR-00039 POSCAR-00048 POSCAR-00057 POSCAR-00066 POSCAR-00075 POSCAR-00084 POSCAR-00093 POSCAR-00102 POSCAR-00111
+POSCAR-00004   POSCAR-00013 POSCAR-00022 POSCAR-00031 POSCAR-00040 POSCAR-00049 POSCAR-00058 POSCAR-00067 POSCAR-00076 POSCAR-00085 POSCAR-00094 POSCAR-00103
+POSCAR-00005   POSCAR-00014 POSCAR-00023 POSCAR-00032 POSCAR-00041 POSCAR-00050 POSCAR-00059 POSCAR-00068 POSCAR-00077 POSCAR-00086 POSCAR-00095 POSCAR-00104
+POSCAR-00006   POSCAR-00015 POSCAR-00024 POSCAR-00033 POSCAR-00042 POSCAR-00051 POSCAR-00060 POSCAR-00069 POSCAR-00078 POSCAR-00087 POSCAR-00096 POSCAR-00105
+POSCAR-00007   POSCAR-00016 POSCAR-00025 POSCAR-00034 POSCAR-00043 POSCAR-00052 POSCAR-00061 POSCAR-00070 POSCAR-00079 POSCAR-00088 POSCAR-00097 POSCAR-00106
 ```
 
 For each displacement the forces needs to be calculated, i.e. in form of the output file of VASP (vasprun.xml). For a single VASP calculations one needs [KPOINTS](KPOINTS), [POTCAR](POTCAR), [INCAR](INCAR) in your case directory (where you have POSCARS) and those 111 displacements calculations can be generated by [prepare.sh](prepare.sh) script. Then each of the single 111 calculations is submitted [run.sh](run.sh) by [submit.sh](submit.sh).
@@ -63,14 +63,14 @@ For each displacement the forces needs to be calculated, i.e. in form of the out
 ```bash
 $./prepare.sh
 $ls
-disp-00001  disp-00009  disp-00017  disp-00025  disp-00033  disp-00041  disp-00049  disp-00057  disp-00065  disp-00073  disp-00081  disp-00089  disp-00097  disp-00105     INCAR
-disp-00002  disp-00010  disp-00018  disp-00026  disp-00034  disp-00042  disp-00050  disp-00058  disp-00066  disp-00074  disp-00082  disp-00090  disp-00098  disp-00106     KPOINTS
-disp-00003  disp-00011  disp-00019  disp-00027  disp-00035  disp-00043  disp-00051  disp-00059  disp-00067  disp-00075  disp-00083  disp-00091  disp-00099  disp-00107     POSCAR
-disp-00004  disp-00012  disp-00020  disp-00028  disp-00036  disp-00044  disp-00052  disp-00060  disp-00068  disp-00076  disp-00084  disp-00092  disp-00100  disp-00108     POTCAR
-disp-00005  disp-00013  disp-00021  disp-00029  disp-00037  disp-00045  disp-00053  disp-00061  disp-00069  disp-00077  disp-00085  disp-00093  disp-00101  disp-00109     prepare.sh
-disp-00006  disp-00014  disp-00022  disp-00030  disp-00038  disp-00046  disp-00054  disp-00062  disp-00070  disp-00078  disp-00086  disp-00094  disp-00102  disp-00110     run.sh
-disp-00007  disp-00015  disp-00023  disp-00031  disp-00039  disp-00047  disp-00055  disp-00063  disp-00071  disp-00079  disp-00087  disp-00095  disp-00103  disp-00111     submit.sh
-disp-00008  disp-00016  disp-00024  disp-00032  disp-00040  disp-00048  disp-00056  disp-00064  disp-00072  disp-00080  disp-00088  disp-00096  disp-00104  disp_fc3.yaml
+disp-00001 disp-00009 disp-00017 disp-00025 disp-00033 disp-00041 disp-00049 disp-00057 disp-00065 disp-00073 disp-00081 disp-00089 disp-00097 disp-00105     INCAR
+disp-00002 disp-00010 disp-00018 disp-00026 disp-00034 disp-00042 disp-00050 disp-00058 disp-00066 disp-00074 disp-00082 disp-00090 disp-00098 disp-00106     KPOINTS
+disp-00003 disp-00011 disp-00019 disp-00027 disp-00035 disp-00043 disp-00051 disp-00059 disp-00067 disp-00075 disp-00083 disp-00091 disp-00099 disp-00107     POSCAR
+disp-00004 disp-00012 disp-00020 disp-00028 disp-00036 disp-00044 disp-00052 disp-00060 disp-00068 disp-00076 disp-00084 disp-00092 disp-00100 disp-00108     POTCAR
+disp-00005 disp-00013 disp-00021 disp-00029 disp-00037 disp-00045 disp-00053 disp-00061 disp-00069 disp-00077 disp-00085 disp-00093 disp-00101 disp-00109     prepare.sh
+disp-00006 disp-00014 disp-00022 disp-00030 disp-00038 disp-00046 disp-00054 disp-00062 disp-00070 disp-00078 disp-00086 disp-00094 disp-00102 disp-00110     run.sh
+disp-00007 disp-00015 disp-00023 disp-00031 disp-00039 disp-00047 disp-00055 disp-00063 disp-00071 disp-00079 disp-00087 disp-00095 disp-00103 disp-00111     submit.sh
+disp-00008 disp-00016 disp-00024 disp-00032 disp-00040 disp-00048 disp-00056 disp-00064 disp-00072 disp-00080 disp-00088 disp-00096 disp-00104 disp_fc3.yaml
 ```
 
 Taylor your run.sh script to fit into your project and other needs and submit all 111 calculations using submit.sh script
diff --git a/docs.it4i/salomon/software/compilers.md b/docs.it4i/salomon/software/compilers.md
index da785d173bb9cf361c2f5d062b3d269b0e530293..2c5b545c6ed7e7a85041ba18c7467d5530609803 100644
--- a/docs.it4i/salomon/software/compilers.md
+++ b/docs.it4i/salomon/software/compilers.md
@@ -4,22 +4,22 @@ Available compilers, including GNU, INTEL and UPC compilers
 
 There are several compilers for different programming languages available on the cluster:
 
-*   C/C++
-*   Fortran 77/90/95/HPF
-*   Unified Parallel C
-*   Java
+* C/C++
+* Fortran 77/90/95/HPF
+* Unified Parallel C
+* Java
 
 The C/C++ and Fortran compilers are provided by:
 
 Opensource:
 
-*   GNU GCC
-*   Clang/LLVM
+* GNU GCC
+* Clang/LLVM
 
 Commercial licenses:
 
-*   Intel
-*   PGI
+* Intel
+* PGI
 
 ## Intel Compilers
 
@@ -81,8 +81,8 @@ For more information about the possibilities of the compilers, please see the ma
 
 UPC is supported by two compiler/runtime implementations:
 
-*   GNU - SMP/multi-threading support only
-*   Berkley - multi-node support as well as SMP/multi-threading support
+* GNU - SMP/multi-threading support only
+* Berkeley - multi-node support as well as SMP/multi-threading support
 
 ### GNU UPC Compiler
 
diff --git a/docs.it4i/salomon/software/comsol/comsol-multiphysics.md b/docs.it4i/salomon/software/comsol/comsol-multiphysics.md
index fd40c1e4aefe6acfc79aff06425ebf5ee7594fe5..6b2d725c102af9f6b08940087344b96f9d5d7633 100644
--- a/docs.it4i/salomon/software/comsol/comsol-multiphysics.md
+++ b/docs.it4i/salomon/software/comsol/comsol-multiphysics.md
@@ -4,11 +4,11 @@
 
 [COMSOL](http://www.comsol.com) is a powerful environment for modelling and solving various engineering and scientific problems based on partial differential equations. COMSOL is designed to solve coupled or multiphysics phenomena. For many standard engineering problems COMSOL provides add-on products such as electrical, mechanical, fluid flow, and chemical applications.
 
-*   [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
-*   [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
-*   [CFD Module](http://www.comsol.com/cfd-module),
-*   [Acoustics Module](http://www.comsol.com/acoustics-module),
-*   and [many others](http://www.comsol.com/products)
+* [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
+* [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
+* [CFD Module](http://www.comsol.com/cfd-module),
+* [Acoustics Module](http://www.comsol.com/acoustics-module),
+* and [many others](http://www.comsol.com/products)
 
 COMSOL also allows an interface support for equation-based modelling of partial differential equations.
 
@@ -16,9 +16,9 @@ COMSOL also allows an interface support for equation-based modelling of partial
 
 On the clusters COMSOL is available in the latest stable version. There are two variants of the release:
 
-*   **Non commercial** or so called >**EDU variant**>, which can be used for research and educational purposes.
+* **Non commercial** or so-called **EDU variant**, which can be used for research and educational purposes.
 
-*   **Commercial** or so called **COM variant**, which can used also for commercial activities. **COM variant** has only subset of features compared to the **EDU  variant** available. More about licensing will be posted here soon.
+* **Commercial** or so-called **COM variant**, which can also be used for commercial activities. The **COM variant** has only a subset of the features available in the **EDU variant**. More about licensing will be posted here soon.
 
 To load the of COMSOL load the module
 
diff --git a/docs.it4i/salomon/software/comsol/licensing-and-available-versions.md b/docs.it4i/salomon/software/comsol/licensing-and-available-versions.md
index 6e9d290c73257580869df79364c7cca8d6ae72e5..4358b930fedbfcdf3ea9277d2fa5c89e8a74ca37 100644
--- a/docs.it4i/salomon/software/comsol/licensing-and-available-versions.md
+++ b/docs.it4i/salomon/software/comsol/licensing-and-available-versions.md
@@ -2,9 +2,9 @@
 
 ## Comsol Licence Can Be Used By:
 
-*   all persons in the carrying out of the CE IT4Innovations Project (In addition to the primary licensee, which is VSB - Technical University of Ostrava, users are CE IT4Innovations third parties - CE IT4Innovations project partners, particularly the University of Ostrava, the Brno University of Technology - Faculty of Informatics, the Silesian University in Opava, Institute of Geonics AS CR.)
-*   all persons who have a valid license
-*   students of the Technical University
+* all persons in the carrying out of the CE IT4Innovations Project (In addition to the primary licensee, which is VSB - Technical University of Ostrava, users are CE IT4Innovations third parties - CE IT4Innovations project partners, particularly the University of Ostrava, the Brno University of Technology - Faculty of Informatics, the Silesian University in Opava, Institute of Geonics AS CR.)
+* all persons who have a valid license
+* students of the Technical University
 
 ## Comsol EDU Network Licence
 
@@ -12,8 +12,8 @@ The licence intended to be used for science and research, publications, students
 
 ## Comsol COM Network Licence
 
-The licence intended to be used for science and research, publications, students’ projects, commercial research with no commercial use restrictions.  Enables  the solution of at least  one job by one user  in one program start.
+The licence is intended to be used for science and research, publications, students’ projects, and commercial research with no commercial use restrictions. It enables the solution of at least one job by one user per program start.
 
 ## Available Versions
 
-*   ver. 51
+* ver. 51
diff --git a/docs.it4i/salomon/software/debuggers/aislinn.md b/docs.it4i/salomon/software/debuggers/aislinn.md
index 7db8ebc34343b03af0449b4790f0cd880ced5c6a..e1dee28b8d6d78ef7be2371afb2f8884f2b5f364 100644
--- a/docs.it4i/salomon/software/debuggers/aislinn.md
+++ b/docs.it4i/salomon/software/debuggers/aislinn.md
@@ -1,14 +1,14 @@
 # Aislinn
 
-*   Aislinn is a dynamic verifier for MPI programs. For a fixed input it covers all possible runs with respect to nondeterminism introduced by MPI. It allows to detect bugs (for sure) that occurs very rare in normal runs.
-*   Aislinn detects problems like invalid memory accesses, deadlocks, misuse of MPI, and resource leaks.
-*   Aislinn is open-source software; you can use it without any licensing limitations.
-*   Web page of the project: <http://verif.cs.vsb.cz/aislinn/>
+* Aislinn is a dynamic verifier for MPI programs. For a fixed input it covers all possible runs with respect to nondeterminism introduced by MPI. It allows detecting, with certainty, bugs that occur very rarely in normal runs.
+* Aislinn detects problems like invalid memory accesses, deadlocks, misuse of MPI, and resource leaks.
+* Aislinn is open-source software; you can use it without any licensing limitations.
+* Web page of the project: <http://verif.cs.vsb.cz/aislinn/>
 
 !!! note
     Aislinn is software developed at IT4Innovations and some parts are still considered experimental. If you have any questions or experienced any problems, please contact the author: <mailto:stanislav.bohm@vsb.cz>.
 
-### Usage
+## Usage
 
 Let us have the following program that contains a bug that is not manifested in all runs:
 
@@ -83,20 +83,20 @@ At the beginning of the report there are some basic summaries of the verificatio
 
 It shows us:
 
-*   Error occurs in process 0 in test.cpp on line 16.
-*   Stdout and stderr streams are empty. (The program does not write anything).
-*   The last part shows MPI calls for each process that occurs in the invalid run. The more detailed information about each call can be obtained by mouse cursor.
+* Error occurs in process 0 in test.cpp on line 16.
+* Stdout and stderr streams are empty. (The program does not write anything).
+* The last part shows the MPI calls for each process that occur in the invalid run. More detailed information about each call can be obtained by hovering the mouse cursor over it.
 
 ### Limitations
 
 Since the verification is a non-trivial process there are some of limitations.
 
-*   The verified process has to terminate in all runs, i.e. we cannot answer the halting problem.
-*   The verification is a computationally and memory demanding process. We put an effort to make it efficient and it is an important point for further research. However covering all runs will be always more demanding than techniques that examines only a single run. The good practise is to start with small instances and when it is feasible, make them bigger. The Aislinn is good to find bugs that are hard to find because they occur very rarely (only in a rare scheduling). Such bugs often do not need big instances.
-*   Aislinn expects that your program is a "standard MPI" program, i.e. processes communicate only through MPI, the verified program does not interacts with the system in some unusual ways (e.g. opening sockets).
+* The verified process has to terminate in all runs, i.e. we cannot answer the halting problem.
+* The verification is a computationally and memory demanding process. We have put effort into making it efficient, and this remains an important point for further research. However, covering all runs will always be more demanding than techniques that examine only a single run. Good practice is to start with small instances and, when feasible, make them bigger. Aislinn is well suited to finding bugs that are hard to find because they occur very rarely (only under a rare scheduling). Such bugs often do not need big instances.
+* Aislinn expects that your program is a "standard MPI" program, i.e. processes communicate only through MPI and the verified program does not interact with the system in unusual ways (e.g. opening sockets).
 
 There are also some limitations bounded to the current version and they will be removed in the future:
 
-*   All files containing MPI calls have to be recompiled by MPI implementation provided by Aislinn. The files that does not contain MPI calls, they do not have to recompiled. Aislinn MPI implementation supports many commonly used calls from MPI-2 and MPI-3 related to point-to-point communication, collective communication, and communicator management. Unfortunately, MPI-IO and one-side communication is not implemented yet.
-*   Each MPI can use only one thread (if you use OpenMP, set OMP_NUM_THREADS to 1).
-*   There are some limitations for using files, but if the program just reads inputs and writes results, it is ok.
+* All files containing MPI calls have to be recompiled with the MPI implementation provided by Aislinn. Files that do not contain MPI calls do not have to be recompiled. The Aislinn MPI implementation supports many commonly used calls from MPI-2 and MPI-3 related to point-to-point communication, collective communication, and communicator management. Unfortunately, MPI-IO and one-sided communication are not implemented yet.
+* Each MPI process can use only one thread (if you use OpenMP, set OMP_NUM_THREADS to 1).
+* There are some limitations for using files, but if the program just reads inputs and writes results, it is OK.
diff --git a/docs.it4i/salomon/software/debuggers/allinea-ddt.md b/docs.it4i/salomon/software/debuggers/allinea-ddt.md
index 3315d6deecb54892b0dfa059ca86f949a7385ca5..0c8128afed0c36c58e3477d31350e3976d363cbd 100644
--- a/docs.it4i/salomon/software/debuggers/allinea-ddt.md
+++ b/docs.it4i/salomon/software/debuggers/allinea-ddt.md
@@ -10,13 +10,13 @@ Allinea MAP is a profiler for C/C++/Fortran HPC codes. It is designed for profil
 
 On Anselm users can debug OpenMP or MPI code that runs up to 64 parallel processes. In case of debugging GPU or Xeon Phi accelerated codes the limit is 8 accelerators. These limitation means that:
 
-*   1 user can debug up 64 processes, or
-*   32 users can debug 2 processes, etc.
+* 1 user can debug up to 64 processes, or
+* 32 users can debug 2 processes, etc.
 
 In case of debugging on accelerators:
 
-*   1 user can debug on up to 8 accelerators, or
-*   8 users can debug on single accelerator.
+* 1 user can debug on up to 8 accelerators, or
+* 8 users can debug on a single accelerator.
 
 ## Compiling Code to Run With DDT
 
@@ -54,7 +54,7 @@ Before debugging, you need to compile your code with theses flags:
 
 ## Starting a Job With DDT
 
-Be sure to log in with an  X window forwarding enabled. This could mean using the -X in the ssh:
+Be sure to log in with X window forwarding enabled. This could mean using the -X option of ssh:
 
 ```bash
     $ ssh -X username@anselm.it4i.cz
diff --git a/docs.it4i/salomon/software/debuggers/allinea-performance-reports.md b/docs.it4i/salomon/software/debuggers/allinea-performance-reports.md
index 526f28cc89d5fd41c2ef5cd220a3475d8bc58f54..f79935222903cd0f98e90e9f1e923ae55c910eb0 100644
--- a/docs.it4i/salomon/software/debuggers/allinea-performance-reports.md
+++ b/docs.it4i/salomon/software/debuggers/allinea-performance-reports.md
@@ -28,7 +28,7 @@ Instead of [running your MPI program the usual way](../mpi/mpi/), use the the pe
     $ perf-report mpirun ./mympiprog.x
 ```
 
-The mpi program will run as usual. The perf-report creates two additional files, in \*.txt and \*.html format, containing the performance report. Note that  demanding MPI codes should be run within [ the queue system](../../resource-allocation-and-job-execution/job-submission-and-execution/).
+The MPI program will run as usual. The perf-report creates two additional files, in \*.txt and \*.html format, containing the performance report. Note that demanding MPI codes should be run within [the queue system](../../resource-allocation-and-job-execution/job-submission-and-execution/).
 
 ## Example
 
diff --git a/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md b/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md
index 224dca9612556aeb83c14fe58554782b55af297b..2fdbd18e166d3e553a8ad5719f7945f902cbd73c 100644
--- a/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md
+++ b/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md
@@ -4,10 +4,10 @@
 
 Intel *®* VTune™ Amplifier, part of Intel Parallel studio, is a GUI profiling tool designed for Intel processors. It offers a graphical performance analysis of single core and multithreaded applications. A highlight of the features:
 
-*   Hotspot analysis
-*   Locks and waits analysis
-*   Low level specific counters, such as branch analysis and memory bandwidth
-*   Power usage analysis - frequency and sleep states.
+* Hotspot analysis
+* Locks and waits analysis
+* Low level specific counters, such as branch analysis and memory bandwidth
+* Power usage analysis - frequency and sleep states.
 
 ![](../../../img/vtune-amplifier.png)
 
@@ -90,5 +90,5 @@ You can obtain this command line by pressing the "Command line..." button on Ana
 ## References
 
 1. [Performance Tuning for Intel® Xeon Phi™ Coprocessors](https://www.rcac.purdue.edu/tutorials/phi/PerformanceTuningXeonPhi-Tullos.pdf)
-2. [Intel® VTune™ Amplifier Support](https://software.intel.com/en-us/intel-vtune-amplifier-xe-support/documentation)
-3. [https://software.intel.com/en-us/amplifier_help_linux](https://software.intel.com/en-us/amplifier_help_linux)
+1. [Intel® VTune™ Amplifier Support](https://software.intel.com/en-us/intel-vtune-amplifier-xe-support/documentation)
+1. [https://software.intel.com/en-us/amplifier_help_linux](https://software.intel.com/en-us/amplifier_help_linux)
diff --git a/docs.it4i/salomon/software/debuggers/total-view.md b/docs.it4i/salomon/software/debuggers/total-view.md
index 17a2d42344ffa0ccc1c34ec4c369bfcca8341e79..f4f69278ff59e8f2cd35aad8b5c79bf78a4a0171 100644
--- a/docs.it4i/salomon/software/debuggers/total-view.md
+++ b/docs.it4i/salomon/software/debuggers/total-view.md
@@ -80,7 +80,7 @@ To debug a serial code use:
 
 To debug a parallel code compiled with **OpenMPI** you need to setup your TotalView environment:
 
-!!! hint 
+!!! hint
     To be able to run parallel debugging procedure from the command line without stopping the debugger in the mpiexec source code you have to add the following function to your **~/.tvdrc** file.
 
 ```bash
@@ -112,7 +112,9 @@ The source code of this function can be also found in
 You can also add only following line to you ~/.tvdrc file instead of
 the entire function:
 
-**source /apps/all/OpenMPI/1.10.1-GNU-4.9.3-2.25/etc/openmpi-totalview.tcl**
+```bash
+source /apps/all/OpenMPI/1.10.1-GNU-4.9.3-2.25/etc/openmpi-totalview.tcl
+```
 
 You need to do this step only once. See also [OpenMPI FAQ entry](https://www.open-mpi.org/faq/?category=running#run-with-tv)
 
diff --git a/docs.it4i/salomon/software/debuggers/valgrind.md b/docs.it4i/salomon/software/debuggers/valgrind.md
index af97d2b617e4af5f9b1db30fbfbcad4650575289..430118785a08bc43e67a4711396f9ac6b63c4afb 100644
--- a/docs.it4i/salomon/software/debuggers/valgrind.md
+++ b/docs.it4i/salomon/software/debuggers/valgrind.md
@@ -8,20 +8,20 @@ Valgind is an extremely useful tool for debugging memory errors such as [off-by-
 
 The main tools available in Valgrind are :
 
-*   **Memcheck**, the original, must used and default tool. Verifies memory access in you program and can detect use of unitialized memory, out of bounds memory access, memory leaks, double free, etc.
-*   **Massif**, a heap profiler.
-*   **Hellgrind** and **DRD** can detect race conditions in multi-threaded applications.
-*   **Cachegrind**, a cache profiler.
-*   **Callgrind**, a callgraph analyzer.
-*   For a full list and detailed documentation, please refer to the [official Valgrind documentation](http://valgrind.org/docs/).
+* **Memcheck**, the original, most used and default tool. Verifies memory access in your program and can detect use of uninitialized memory, out-of-bounds memory access, memory leaks, double free, etc.
+* **Massif**, a heap profiler.
+* **Helgrind** and **DRD** can detect race conditions in multi-threaded applications.
+* **Cachegrind**, a cache profiler.
+* **Callgrind**, a callgraph analyzer.
+* For a full list and detailed documentation, please refer to the [official Valgrind documentation](http://valgrind.org/docs/).
 
 ## Installed Versions
 
 There are two versions of Valgrind available on the cluster.
 
-*   Version 3.8.1, installed by operating system vendor in /usr/bin/valgrind. This version is available by default, without the need to load any module. This version however does not provide additional MPI support. Also, it does not support AVX2 instructions, debugging of an AVX2-enabled executable with this version will fail
-*   Version 3.11.0 built by ICC with support for Intel MPI, available in module Valgrind/3.11.0-intel-2015b. After loading the module, this version replaces the default valgrind.
-*   Version 3.11.0 built by GCC with support for Open MPI, module Valgrind/3.11.0-foss-2015b
+* Version 3.8.1, installed by the operating system vendor in /usr/bin/valgrind. This version is available by default, without the need to load any module. However, this version does not provide additional MPI support. Also, it does not support AVX2 instructions; debugging of an AVX2-enabled executable with this version will fail
+* Version 3.11.0 built by ICC with support for Intel MPI, available in module Valgrind/3.11.0-intel-2015b. After loading the module, this version replaces the default valgrind.
+* Version 3.11.0 built by GCC with support for Open MPI, module Valgrind/3.11.0-foss-2015b
 
 ## Usage
 
@@ -159,7 +159,7 @@ so it is better to use the MPI-enabled valgrind from module. The MPI versions re
 
 $EBROOTVALGRIND/lib/valgrind/libmpiwrap-amd64-linux.so
 
-which must be included in the  LD_PRELOAD environment variable.
+which must be included in the LD_PRELOAD environment variable.
 
 Lets look at this MPI example:
 
diff --git a/docs.it4i/salomon/software/debuggers/vampir.md b/docs.it4i/salomon/software/debuggers/vampir.md
index 864a384b0352f0060e66abf895d3800013a59897..99053546c14b43c51d5ab7728dfa3824f2016170 100644
--- a/docs.it4i/salomon/software/debuggers/vampir.md
+++ b/docs.it4i/salomon/software/debuggers/vampir.md
@@ -19,4 +19,4 @@ You can find the detailed user manual in PDF format in $EBROOTVAMPIR/doc/vampir-
 
 ## References
 
-1.  <https://www.vampir.eu>
+1. <https://www.vampir.eu>
diff --git a/docs.it4i/salomon/software/intel-suite/intel-advisor.md b/docs.it4i/salomon/software/intel-suite/intel-advisor.md
index 77975c016e750cd816ca9821acbe57bb5d18ccae..427f5c98cfccf29de4870043c08074ac1a246135 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-advisor.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-advisor.md
@@ -26,6 +26,6 @@ In the left pane, you can switch between Vectorization and Threading workflows.
 
 ## References
 
-1.  [Intel® Advisor 2015 Tutorial: Find Where to Add Parallelism - C++ Sample](https://software.intel.com/en-us/intel-advisor-tutorial-vectorization-windows-cplusplus)
-2.  [Product page](https://software.intel.com/en-us/intel-advisor-xe)
-3.  [Documentation](https://software.intel.com/en-us/intel-advisor-2016-user-guide-linux)
+1. [Intel® Advisor 2015 Tutorial: Find Where to Add Parallelism - C++ Sample](https://software.intel.com/en-us/intel-advisor-tutorial-vectorization-windows-cplusplus)
+1. [Product page](https://software.intel.com/en-us/intel-advisor-xe)
+1. [Documentation](https://software.intel.com/en-us/intel-advisor-2016-user-guide-linux)
diff --git a/docs.it4i/salomon/software/intel-suite/intel-compilers.md b/docs.it4i/salomon/software/intel-suite/intel-compilers.md
index 1a122dbae163406643dc6c3d5fde62a397cff3af..63a05bd91e15c04afa6a3cc8d21231ba030437bc 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-compilers.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-compilers.md
@@ -32,5 +32,5 @@ Read more at <https://software.intel.com/en-us/intel-cplusplus-compiler-16.0-use
 
  Anselm nodes are currently equipped with Sandy Bridge CPUs, while Salomon compute nodes are equipped with Haswell based architecture. The UV1 SMP compute server has Ivy Bridge CPUs, which are equivalent to Sandy Bridge (only smaller manufacturing technology). The new processors are backward compatible with the Sandy Bridge nodes, so all programs that ran on the Sandy Bridge processors, should also run on the new Haswell nodes. To get optimal performance out of the Haswell processors a program should make use of the special AVX2 instructions for this processor. One can do this by recompiling codes with the compiler flags designated to invoke these instructions. For the Intel compiler suite, there are two ways of doing this:
 
-*   Using compiler flag (both for Fortran and C): -xCORE-AVX2. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge/Ivy Bridge nodes.
-*   Using compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This   will generate multiple, feature specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. So this binary will run both on Sandy Bridge/Ivy Bridge and Haswell processors. During runtime it will be decided which path to follow, dependent on which processor you are running on. In general this    will result in larger binaries.
+* Using compiler flag (both for Fortran and C): -xCORE-AVX2. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge/Ivy Bridge nodes.
+* Using compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This will generate multiple, feature-specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. So this binary will run both on Sandy Bridge/Ivy Bridge and Haswell processors. At runtime it will be decided which path to follow, depending on which processor you are running on. In general this will result in larger binaries (see the sketch below).
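+
+As a sketch only (source and output file names are placeholders), the two approaches could be invoked as:
+
+```bash
+$ icc -xCORE-AVX2 -O3 -o myprog.haswell myprog.c
+$ icc -xAVX -axCORE-AVX2 -O3 -o myprog.dispatch myprog.c
+```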
diff --git a/docs.it4i/salomon/software/intel-suite/intel-debugger.md b/docs.it4i/salomon/software/intel-suite/intel-debugger.md
index 1fbb569d179f6f95aa9918748075c67b92458bf1..c4f25ac328d0712cc9a136fd6f96c3eddc27778e 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-debugger.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-debugger.md
@@ -4,7 +4,7 @@ IDB is no longer available since Intel Parallel Studio 2015
 
 ## Debugging Serial Applications
 
-The intel debugger version  13.0 is available, via module intel. The debugger works for applications compiled with C and C++ compiler and the ifort fortran 77/90/95 compiler. The debugger provides java GUI environment. Use [X display](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) for running the GUI.
+The Intel debugger version 13.0 is available via the intel module. The debugger works for applications compiled with the C and C++ compilers and the ifort Fortran 77/90/95 compiler. The debugger provides a Java GUI environment. Use [X display](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) for running the GUI.
 
 ```bash
     $ module load intel/2014.06
diff --git a/docs.it4i/salomon/software/intel-suite/intel-inspector.md b/docs.it4i/salomon/software/intel-suite/intel-inspector.md
index 3ff7762b131f56dff0fa6385f90003e5f78d8812..6231a65347abc13d442aea0586d6003ac7d3c798 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-inspector.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-inspector.md
@@ -28,12 +28,12 @@ In the main pane, you can start a predefined analysis type or define your own. C
 
 ### Batch Mode
 
-Analysis can be also run from command line in batch mode. Batch mode analysis is run with command  inspxe-cl. To obtain the required parameters, either consult the documentation or you can configure the analysis in the GUI and then click "Command Line" button in the lower right corner to the respective command line.
+Analysis can also be run from the command line in batch mode. Batch mode analysis is run with the inspxe-cl command. To obtain the required parameters, either consult the documentation or configure the analysis in the GUI and then click the "Command Line" button in the lower right corner to obtain the respective command line.
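+
+For illustration only (the analysis type and application name are placeholders; mi3 is assumed here to request a memory error analysis):
+
+```bash
+$ inspxe-cl -collect mi3 -- ./myapp
+```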
 
 Results obtained from batch mode can be then viewed in the GUI by selecting File -> Open -> Result...
 
 ## References
 
-1.  [Product page](https://software.intel.com/en-us/intel-inspector-xe)
-2.  [Documentation and Release Notes](https://software.intel.com/en-us/intel-inspector-xe-support/documentation)
-3.  [Tutorials](https://software.intel.com/en-us/articles/inspectorxe-tutorials)
+1. [Product page](https://software.intel.com/en-us/intel-inspector-xe)
+1. [Documentation and Release Notes](https://software.intel.com/en-us/intel-inspector-xe-support/documentation)
+1. [Tutorials](https://software.intel.com/en-us/articles/inspectorxe-tutorials)
diff --git a/docs.it4i/salomon/software/intel-suite/intel-mkl.md b/docs.it4i/salomon/software/intel-suite/intel-mkl.md
index ca0cdcc619554869f74f13d9d25895b28530c330..322492010827e5dc2cc63d6ccd7cb3452f1a4214 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-mkl.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-mkl.md
@@ -4,14 +4,14 @@
 
 Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, extensively threaded and optimized for maximum performance. Intel MKL provides these basic math kernels:
 
-*   BLAS (level 1, 2, and 3) and LAPACK linear algebra routines, offering vector, vector-matrix, and matrix-matrix operations.
-*   The PARDISO direct sparse solver, an iterative sparse solver, and supporting sparse BLAS (level 1, 2, and 3) routines for solving sparse systems of equations.
-*   ScaLAPACK distributed processing linear algebra routines for Linux and Windows operating systems, as well as the Basic Linear Algebra Communications Subprograms (BLACS) and the Parallel Basic Linear Algebra Subprograms (PBLAS).
-*   Fast Fourier transform (FFT) functions in one, two, or three dimensions with support for mixed radices (not limited to sizes that are powers of 2), as well as distributed versions of these functions.
-*   Vector Math Library (VML) routines for optimized mathematical operations on vectors.
-*   Vector Statistical Library (VSL) routines, which offer high-performance vectorized random number generators (RNG) for several probability distributions, convolution and correlation routines, and summary statistics functions.
-*   Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search.
-*   Extended Eigensolver, a shared memory  version of an eigensolver based on the Feast Eigenvalue Solver.
+* BLAS (level 1, 2, and 3) and LAPACK linear algebra routines, offering vector, vector-matrix, and matrix-matrix operations.
+* The PARDISO direct sparse solver, an iterative sparse solver, and supporting sparse BLAS (level 1, 2, and 3) routines for solving sparse systems of equations.
+* ScaLAPACK distributed processing linear algebra routines for Linux and Windows operating systems, as well as the Basic Linear Algebra Communications Subprograms (BLACS) and the Parallel Basic Linear Algebra Subprograms (PBLAS).
+* Fast Fourier transform (FFT) functions in one, two, or three dimensions with support for mixed radices (not limited to sizes that are powers of 2), as well as distributed versions of these functions.
+* Vector Math Library (VML) routines for optimized mathematical operations on vectors.
+* Vector Statistical Library (VSL) routines, which offer high-performance vectorized random number generators (RNG) for several probability distributions, convolution and correlation routines, and summary statistics functions.
+* Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search.
+* Extended Eigensolver, a shared memory version of an eigensolver based on the Feast Eigenvalue Solver.
 
 For details see the [Intel MKL Reference Manual](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mklman/index.htm).
 
@@ -38,7 +38,7 @@ Intel MKL library provides number of interfaces. The fundamental once are the LP
 
 Linking Intel MKL libraries may be complex. Intel [mkl link line advisor](http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor) helps. See also [examples](intel-mkl/#examples) below.
 
-You will need the mkl module loaded to run the mkl enabled executable. This may be avoided, by compiling library search paths into the executable. Include  rpath on the compile line:
+You will need the mkl module loaded to run the MKL-enabled executable. This may be avoided by compiling the library search paths into the executable. Include rpath on the compile line:
 
 ```bash
     $ icc .... -Wl,-rpath=$LIBRARY_PATH ...
@@ -72,7 +72,7 @@ Number of examples, demonstrating use of the Intel MKL library and its linking i
     $ make sointel64 function=cblas_dgemm
 ```
 
-In this example, we compile, link and run the cblas_dgemm  example, demonstrating use of MKL example suite installed on clusters.
+In this example, we compile, link and run the cblas_dgemm example, demonstrating the use of the MKL example suite installed on the clusters.
 
 ### Example: MKL and Intel Compiler
 
@@ -86,14 +86,14 @@ In this example, we compile, link and run the cblas_dgemm  example, demonstratin
     $ ./cblas_dgemmx.x data/cblas_dgemmx.d
 ```
 
-In this example, we compile, link and run the cblas_dgemm  example, demonstrating use of MKL with icc -mkl option. Using the -mkl option is equivalent to:
+In this example, we compile, link and run the cblas_dgemm example, demonstrating the use of MKL with the icc -mkl option. Using the -mkl option is equivalent to:
 
 ```bash
     $ icc -w source/cblas_dgemmx.c source/common_func.c -o cblas_dgemmx.x
     -I$MKL_INC_DIR -L$MKL_LIB_DIR -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5
 ```
 
-In this example, we compile and link the cblas_dgemm  example, using LP64 interface to threaded MKL and Intel OMP threads implementation.
+In this example, we compile and link the cblas_dgemm example, using the LP64 interface to threaded MKL and the Intel OMP threads implementation.
 
 ### Example: Intel MKL and GNU Compiler
 
@@ -109,7 +109,7 @@ In this example, we compile and link the cblas_dgemm  example, using LP64 interf
     $ ./cblas_dgemmx.x data/cblas_dgemmx.d
 ```
 
-In this example, we compile, link and run the cblas_dgemm  example, using LP64 interface to threaded MKL and gnu OMP threads implementation.
+In this example, we compile, link and run the cblas_dgemm example, using the LP64 interface to threaded MKL and the GNU OMP threads implementation.
 
 ## MKL and MIC Accelerators
 
diff --git a/docs.it4i/salomon/software/intel-suite/intel-parallel-studio-introduction.md b/docs.it4i/salomon/software/intel-suite/intel-parallel-studio-introduction.md
index 219ebaa772b5ea1d0963515647adc3c68e26ff83..4b1c9308957a43fafafb8f5c1280c11ba2bf81a1 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-parallel-studio-introduction.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-parallel-studio-introduction.md
@@ -3,6 +3,7 @@
 The Salomon cluster provides following elements of the Intel Parallel Studio XE
 
 Intel Parallel Studio XE
+
 * Intel Compilers
 * Intel Debugger
 * Intel MKL Library
diff --git a/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md b/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md
index cf170231eec28314236eb262ae6893abf71852a9..5b52cabe9367dec1c3995369e275d2020745e3a4 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md
@@ -6,7 +6,7 @@ ITAC is a offline analysis tool - first you run your application to collect a tr
 
 ## Installed Version
 
-Currently on Salomon is version 9.1.2.024 available as module  itac/9.1.2.024
+Currently, version 9.1.2.024 is available on Salomon as the module itac/9.1.2.024.
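+
+The module can be loaded in the usual way, for example:
+
+```bash
+    $ module load itac/9.1.2.024
+```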
 
 ## Collecting Traces
 
@@ -36,5 +36,5 @@ Please refer to Intel documenation about usage of the GUI tool.
 
 ## References
 
-1.  [Getting Started with Intel® Trace Analyzer and Collector](https://software.intel.com/en-us/get-started-with-itac-for-linux)
-2.  [Intel® Trace Analyzer and Collector - Documentation](https://software.intel.com/en-us/intel-trace-analyzer)
+1. [Getting Started with Intel® Trace Analyzer and Collector](https://software.intel.com/en-us/get-started-with-itac-for-linux)
+1. [Intel® Trace Analyzer and Collector - Documentation](https://software.intel.com/en-us/intel-trace-analyzer)
diff --git a/docs.it4i/salomon/software/intel-xeon-phi.md b/docs.it4i/salomon/software/intel-xeon-phi.md
index 65457058ce668c1ee7a92f1d19bbceec427dc051..634cb913c30f56f10b3a6d864d9d28e16f50deee 100644
--- a/docs.it4i/salomon/software/intel-xeon-phi.md
+++ b/docs.it4i/salomon/software/intel-xeon-phi.md
@@ -258,7 +258,7 @@ or by setting environment variable
     $ export MKL_MIC_ENABLE=1
 ```
 
-To get more information about automatic offload please refer to "[Using Intel® MKL Automatic Offload on Intel ® Xeon Phi™ Coprocessors](http://software.intel.com/sites/default/files/11MIC42_How_to_Use_MKL_Automatic_Offload_0.pdf)" white paper or [ Intel MKL documentation](https://software.intel.com/en-us/articles/intel-math-kernel-library-documentation).
+To get more information about automatic offload, please refer to the "[Using Intel® MKL Automatic Offload on Intel® Xeon Phi™ Coprocessors](http://software.intel.com/sites/default/files/11MIC42_How_to_Use_MKL_Automatic_Offload_0.pdf)" white paper or the [Intel MKL documentation](https://software.intel.com/en-us/articles/intel-math-kernel-library-documentation).
 
 ### Automatic Offload Example
 
@@ -518,7 +518,7 @@ The compilation command for this example is:
     $ g++ cmdoptions.cpp gemm.cpp ../common/basic.cpp ../common/cmdparser.cpp ../common/oclobject.cpp -I../common -lOpenCL -o gemm -I/apps/intel/opencl/include/
 ```
 
-To see the performance of Intel Xeon Phi performing the DGEMM run the example as follows: 
+To see the performance of the Intel Xeon Phi when performing the DGEMM, run the example as follows:
 
 ```bash
     ./gemm -d 1
@@ -631,7 +631,7 @@ The output should be similar to:
 There are two ways to execute an MPI code on a single coprocessor: 1.) launch the program using "**mpirun**" from the
 coprocessor; or 2.) launch the task using "**mpiexec.hydra**" from a host.
 
-**Execution on coprocessor**
+#### Execution on coprocessor
 
 Similarly to the execution of OpenMP programs in native mode, since environment modules are not supported on the MIC, the user has to set up the paths to the Intel MPI libraries and binaries manually. A one-time setup can be done by creating a "**.profile**" file in the user's home directory. This file sets up the environment on the MIC automatically once the user accesses the accelerator through SSH.
 
@@ -650,8 +650,8 @@ Similarly to execution of OpenMP programs in native mode, since the environmenta
 ```
 
 !!! note
-    - this file sets up both environmental variable for both MPI and OpenMP libraries.
-    - this file sets up the paths to a particular version of Intel MPI library and particular version of an Intel compiler. These versions have to match with loaded modules.
+    \* this file sets up the environment variables for both the MPI and OpenMP libraries.
+    \* this file sets up the paths to a particular version of the Intel MPI library and a particular version of the Intel compiler. These versions have to match the loaded modules.
 
 To access a MIC accelerator located on a node that user is currently connected to, use:
 
@@ -680,7 +680,7 @@ The output should be similar to:
     Hello world from process 0 of 4 on host cn207-mic0
 ```
 
-**Execution on host**
+#### Execution on host
 
 If the MPI program is launched from host instead of the coprocessor, the environmental variables are not set using the ".profile" file. Therefore user has to specify library paths from the command line when calling "mpiexec".
 
@@ -703,8 +703,8 @@ or using mpirun
 ```
 
 !!! note
-    - the full path to the binary has to specified (here: "**>~/mpi-test-mic**")
-    - the LD_LIBRARY_PATH has to match with Intel MPI module used to compile the MPI code
+    \* the full path to the binary has to be specified (here: "**~/mpi-test-mic**")
+    \* the LD_LIBRARY_PATH has to match the Intel MPI module used to compile the MPI code
 
 The output should be again similar to:
 
@@ -715,7 +715,7 @@ The output should be again similar to:
     Hello world from process 0 of 4 on host cn207-mic0
 ```
 
-!!! hint 
+!!! hint
     **"mpiexec.hydra"** requires a file on the MIC filesystem. If the file is missing, please contact the system administrators.
 
 A simple test to see if the file is present is to execute:
@@ -725,7 +725,7 @@ A simple test to see if the file is present is to execute:
       /bin/pmi_proxy
 ```
 
-**Execution on host - MPI processes distributed over multiple accelerators on multiple nodes**
+#### Execution on host - MPI processes distributed over multiple accelerators on multiple nodes
 
 To get access to multiple nodes with MIC accelerator, user has to use PBS to allocate the resources. To start interactive session, that allocates 2 compute nodes = 2 MIC accelerators run qsub command with following parameters:
 
@@ -885,7 +885,7 @@ A possible output of the MPI "hello-world" example executed on two hosts and two
 !!! note
     At this point the MPI communication between MIC accelerators on different nodes uses 1Gb Ethernet only.
 
-**Using the PBS automatically generated node-files**
+#### Using the PBS automatically generated node-files
 
 PBS also generates a set of node-files that can be used instead of manually creating a new one every time. Three node-files are generated:
 
diff --git a/docs.it4i/salomon/software/java.md b/docs.it4i/salomon/software/java.md
index f607960cd793a6a84315cf11651416157d6bd015..703e53fc1093cf28aeb5c80b985174784e54ad90 100644
--- a/docs.it4i/salomon/software/java.md
+++ b/docs.it4i/salomon/software/java.md
@@ -1,7 +1,5 @@
 # Java
 
-**Java on the cluster**
-
 Java is available on the cluster. Activate java by loading the Java module
 
 ```bash
diff --git a/docs.it4i/salomon/software/mpi/Running_OpenMPI.md b/docs.it4i/salomon/software/mpi/Running_OpenMPI.md
index 0af557ecf054b258b83ff3b0f1046a2e7d932e54..9aa54f09aa07ccde2daa1bfc5c6ff4daeab2b78b 100644
--- a/docs.it4i/salomon/software/mpi/Running_OpenMPI.md
+++ b/docs.it4i/salomon/software/mpi/Running_OpenMPI.md
@@ -29,7 +29,7 @@ Example:
 Please be aware, that in this example, the directive **-pernode** is used to run only **one task per node**, which is normally an unwanted behaviour (unless you want to run hybrid code with just one MPI and 24 OpenMP tasks per node). In normal MPI programs **omit the -pernode directive** to run up to 24 MPI tasks per each node.
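+
+As a quick illustration of the difference (a sketch only; helloworld_mpi.x stands for your own MPI binary):
+
+```bash
+    $ mpirun -pernode ./helloworld_mpi.x   # one MPI task per node, for hybrid MPI+OpenMP codes
+    $ mpirun ./helloworld_mpi.x            # up to 24 MPI tasks per node
+```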
 
 In this example, we allocate 4 nodes via the express queue interactively. We set up the openmpi environment and interactively run the helloworld_mpi.x program.
-Note that the executable  helloworld_mpi.x must be available within the same path on all nodes. This is automatically fulfilled on the /home and /scratch filesystem.
+Note that the executable helloworld_mpi.x must be available within the same path on all nodes. This is automatically fulfilled on the /home and /scratch filesystem.
 
 You need to preload the executable, if running on the local ramdisk /tmp filesystem
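+
+A minimal sketch of such an invocation (the --preload-binary option is Open MPI's; check that the loaded OpenMPI module supports it):
+
+```bash
+    $ mpirun --preload-binary ./helloworld_mpi.x
+```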
 
diff --git a/docs.it4i/salomon/software/numerical-languages/matlab.md b/docs.it4i/salomon/software/numerical-languages/matlab.md
index 8cfbdf31afc0155eee6b84a64f43eb2bf2f35fef..59bf293f253eaf189dd3bb95ac08697f7491aa08 100644
--- a/docs.it4i/salomon/software/numerical-languages/matlab.md
+++ b/docs.it4i/salomon/software/numerical-languages/matlab.md
@@ -4,8 +4,8 @@
 
 Matlab is available in versions R2015a and R2015b. There are always two variants of the release:
 
-*   Non commercial or so called EDU variant, which can be used for common research and educational purposes.
-*   Commercial or so called COM variant, which can used also for commercial activities. The licenses for commercial variant are much more expensive, so usually the commercial variant has only subset of features compared to the EDU available.
+* Non-commercial or so-called EDU variant, which can be used for common research and educational purposes.
+* Commercial or so-called COM variant, which can also be used for commercial activities. The licenses for the commercial variant are much more expensive, so usually the commercial variant has only a subset of features compared to the EDU variant.
 
 To load the latest version of Matlab load the module
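+
+For example (the exact module names and versions may differ, check the output of ml av MATLAB first):
+
+```bash
+    $ ml av MATLAB
+    $ ml MATLAB
+```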
 
@@ -272,7 +272,7 @@ You can use MATLAB on UV2000 in two parallel modes:
 
 ### Threaded Mode
 
-Since this is a SMP machine, you can completely avoid using Parallel Toolbox and use only MATLAB's threading. MATLAB will automatically detect the number of cores you have allocated and will set  maxNumCompThreads accordingly and certain operations, such as  fft, , eig, svd, etc. will be automatically run in threads. The advantage of this mode is that you don't need to modify your existing sequential codes.
+Since this is an SMP machine, you can completely avoid using the Parallel Toolbox and use only MATLAB's threading. MATLAB will automatically detect the number of cores you have allocated and will set maxNumCompThreads accordingly, and certain operations, such as fft, eig, svd, etc., will automatically run in threads. The advantage of this mode is that you don't need to modify your existing sequential codes.
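+
+A quick way to see the threading in action is to time a threaded operation and compare the reported thread count with the number of allocated cores (a minimal sketch; assumes the MATLAB module is already loaded):
+
+```bash
+    $ matlab -nodisplay -r "maxNumCompThreads, tic; fft(rand(4000)); toc, exit"
+```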
 
 ### Local Cluster Mode
 
diff --git a/docs.it4i/salomon/software/numerical-languages/octave.md b/docs.it4i/salomon/software/numerical-languages/octave.md
index eda9196ae8972e946d32361067e3bd43c4762721..1f96d0849dd4c0ffc1928bcb2406607600f06fd9 100644
--- a/docs.it4i/salomon/software/numerical-languages/octave.md
+++ b/docs.it4i/salomon/software/numerical-languages/octave.md
@@ -29,7 +29,7 @@ To run octave in batch mode, write an octave script, then write a bash jobscript
     mkdir -p /scratch/work/user/$USER/$PBS_JOBID
     cd /scratch/work/user/$USER/$PBS_JOBID || exit
 
-    # copy input file to scratch 
+    # copy input file to scratch
     cp $PBS_O_WORKDIR/octcode.m .
 
     # load octave module
diff --git a/docs.it4i/salomon/software/numerical-languages/r.md b/docs.it4i/salomon/software/numerical-languages/r.md
index 138e4da07151f4e9e802ef447c8ad7bdad7ec190..e6f9a69b4d27fd0b4b844703759b7dc15d109d8c 100644
--- a/docs.it4i/salomon/software/numerical-languages/r.md
+++ b/docs.it4i/salomon/software/numerical-languages/r.md
@@ -14,7 +14,7 @@ Read more on <http://www.r-project.org/>, <http://cran.r-project.org/doc/manuals
 
 ## Modules
 
-**The R version 3.1.1 is available on the cluster, along with GUI interface Rstudio**
+The R version 3.1.1 is available on the cluster, along with the GUI interface RStudio.
 
 | Application | Version           | module              |
 | ----------- | ----------------- | ------------------- |
@@ -70,7 +70,7 @@ This script may be submitted directly to the PBS workload manager via the qsub c
 
 ## Parallel R
 
-Parallel execution of R may be achieved in many ways. One approach is the implied parallelization due to linked libraries or specially enabled functions, as [described above](r/#interactive-execution). In the following sections, we focus on explicit parallelization, where  parallel constructs are directly stated within the R script.
+Parallel execution of R may be achieved in many ways. One approach is the implied parallelization due to linked libraries or specially enabled functions, as [described above](r/#interactive-execution). In the following sections, we focus on explicit parallelization, where parallel constructs are directly stated within the R script.
 
 ## Package Parallel
 
@@ -347,7 +347,7 @@ mpi.apply Rmpi example:
     mpi.quit()
 ```
 
-The above is the mpi.apply MPI example for calculating the number π. Only the slave processes carry out the calculation. Note the **mpi.parSapply(), ** function call. The package parallel [example](r/#package-parallel)[above](r/#package-parallel) may be trivially adapted (for much better performance) to this structure using the mclapply() in place of mpi.parSapply().
+The above is the mpi.apply MPI example for calculating the number π. Only the slave processes carry out the calculation. Note the **mpi.parSapply()** function call. The package parallel [example](r/#package-parallel) [above](r/#package-parallel) may be trivially adapted (for much better performance) to this structure using the mclapply() in place of mpi.parSapply().
 
 Execute the example as:
 
@@ -371,7 +371,7 @@ Example jobscript for [static Rmpi](r/#static-rmpi) parallel R execution, runnin
     #PBS -N Rjob
     #PBS -l select=100:ncpus=24:mpiprocs=24:ompthreads=1
 
-    # change to  scratch directory
+    # change to scratch directory
     SCRDIR=/scratch/work/user/$USER/myjob
     cd $SCRDIR || exit
 
diff --git a/docs.it4i/salomon/storage.md b/docs.it4i/salomon/storage.md
index 0edbcd5db799a8517bbe8ffff128f9c552d70832..c7ed3ffc3cd13bc1737a23675f319502c5fb6504 100644
--- a/docs.it4i/salomon/storage.md
+++ b/docs.it4i/salomon/storage.md
@@ -9,8 +9,8 @@ All login and compute nodes may access same data on shared file systems. Compute
 ## Policy (In a Nutshell)
 
 !!! note
-    _ Use [HOME](#home) for your most valuable data and programs.
-    _ Use [WORK](#work) for your large project files.
+    \* Use [HOME](#home) for your most valuable data and programs.
+    \* Use [WORK](#work) for your large project files.
     \* Use [TEMP](#temp) for large scratch data.
 
 !!! warning
@@ -30,19 +30,19 @@ The HOME file system is realized as a Tiered file system, exported via NFS. The
 
 ### SCRATCH File System
 
-The  architecture of Lustre on Salomon is composed of two metadata servers (MDS) and six data/object storage servers (OSS). Accessible capacity is 1.69 PB, shared among all users. The SCRATCH file system hosts the [WORK and TEMP workspaces](#shared-workspaces).
+The architecture of Lustre on Salomon is composed of two metadata servers (MDS) and six data/object storage servers (OSS). Accessible capacity is 1.69 PB, shared among all users. The SCRATCH file system hosts the [WORK and TEMP workspaces](#shared-workspaces).
 
 Configuration of the SCRATCH Lustre storage
 
-*   SCRATCH Lustre object storage
-   * Disk array SFA12KX
-   * 540 x 4 TB SAS 7.2krpm disk
-   * 54 x OST of 10 disks in RAID6 (8+2)
-   * 15 x hot-spare disk
-   * 4 x 400 GB SSD cache
-*   SCRATCH Lustre metadata storage
-   * Disk array EF3015
-   * 12 x 600 GB SAS 15 krpm disk
+* SCRATCH Lustre object storage
+  * Disk array SFA12KX
+  * 540 x 4 TB SAS 7.2krpm disk
+  * 54 x OST of 10 disks in RAID6 (8+2)
+  * 15 x hot-spare disk
+  * 4 x 400 GB SSD cache
+* SCRATCH Lustre metadata storage
+  * Disk array EF3015
+  * 12 x 600 GB SAS 15 krpm disk
 
 ### Understanding the Lustre File Systems
 
@@ -50,15 +50,15 @@ Configuration of the SCRATCH Lustre storage
 
 A user file on the Lustre file system can be divided into multiple chunks (stripes) and stored across a subset of the object storage targets (OSTs) (disks). The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing.
 
-When a client (a  compute  node from your job) needs to create or access a file, the client queries the metadata server ( MDS) and the metadata target ( MDT) for the layout and location of the [file's stripes](http://www.nas.nasa.gov/hecc/support/kb/Lustre_Basics_224.html#striping). Once the file is opened and the client obtains the striping information, the  MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval.
+When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server (MDS) and the metadata target (MDT) for the layout and location of the [file's stripes](http://www.nas.nasa.gov/hecc/support/kb/Lustre_Basics_224.html#striping). Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval.
 
 If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency so that all clients see consistent results.
 
 There is default stripe configuration for Salomon Lustre file systems. However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance:
 
-1.  stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; default is 1MB for all Salomon Lustre file systems
-2.  stripe_count the number of OSTs to stripe across; default is 1 for Salomon Lustre file systems  one can specify -1 to use all OSTs in the file system.
-3.  stripe_offset The index of the OST where the first stripe is to be placed; default is -1 which results in random selection; using a non-default value is NOT recommended.
+1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; default is 1MB for all Salomon Lustre file systems
+1. stripe_count: the number of OSTs to stripe across; default is 1 for Salomon Lustre file systems; one can specify -1 to use all OSTs in the file system.
+1. stripe_offset: the index of the OST where the first stripe is to be placed; default is -1, which results in random selection; using a non-default value is NOT recommended.
 
 !!! note
     Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
@@ -66,8 +66,8 @@ There is default stripe configuration for Salomon Lustre file systems. However,
 Use the lfs getstripe for getting the stripe parameters. Use the lfs setstripe command for setting the stripe parameters to get optimal I/O performance The correct stripe setting depends on your needs and file access patterns.
 
 ```bash
-$ lfs getstripe dir|filename
-$ lfs setstripe -s stripe_size -c stripe_count -o stripe_offset dir|filename 
+$ lfs getstripe <dir|filename>
+$ lfs setstripe -s stripe_size -c stripe_count -o stripe_offset <dir|filename>
 ```
 
 Example:
@@ -121,7 +121,7 @@ Example for Lustre SCRATCH directory:
 ```bash
 $ lfs quota /scratch
 Disk quotas for user user001 (uid 1234):
-     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
+     Filesystem kbytes   quota   limit   grace   files   quota   limit   grace
           /scratch       8       0 100000000000       *    3       0       0       -
 Disk quotas for group user001 (gid 1234):
  Filesystem kbytes quota limit grace files quota limit grace
@@ -141,9 +141,9 @@ Example output:
 ```bash
     $ quota
     Disk quotas for user vop999 (uid 1025):
-         Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
+         Filesystem blocks   quota   limit   grace   files   quota   limit   grace
     home-nfs-ib.salomon.it4i.cz:/home
-                         28       0 250000000              10     0  500000
+                         28       0 250000000              10     0 500000
 ```
 
 To have a better understanding of where the space is exactly used, you can use following command to find out.
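+
+A generic sketch of such a command (the exact invocation may differ from the one given in the full text):
+
+```bash
+$ du -hs * .[!.]* 2>/dev/null | sort -hr | head
+```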
@@ -186,7 +186,7 @@ ACLs on a Lustre file system work exactly like ACLs on any Linux file system. Th
 [vop999@login1.salomon ~]$ umask 027
 [vop999@login1.salomon ~]$ mkdir test
 [vop999@login1.salomon ~]$ ls -ld test
-drwxr-x--- 2 vop999 vop999 4096 Nov  5 14:17 test
+drwxr-x--- 2 vop999 vop999 4096 Nov 5 14:17 test
 [vop999@login1.salomon ~]$ getfacl test
 # file: test
 # owner: vop999
@@ -197,7 +197,7 @@ other::---
 
 [vop999@login1.salomon ~]$ setfacl -m user:johnsm:rwx test
 [vop999@login1.salomon ~]$ ls -ld test
-drwxrwx---+ 2 vop999 vop999 4096 Nov  5 14:17 test
+drwxrwx---+ 2 vop999 vop999 4096 Nov 5 14:17 test
 [vop999@login1.salomon ~]$ getfacl test
 # file: test
 # owner: vop999
@@ -211,7 +211,7 @@ other::---
 
 Default ACL mechanism can be used to replace setuid/setgid permissions on directories. Setting a default ACL on a directory (-d flag to setfacl) will cause the ACL permissions to be inherited by any newly created file or subdirectory within the directory. Refer to this page for more information on Linux ACL:
 
-[http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html ](http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html)
+[http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html](http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html)
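+
+For example, a default ACL granting the same collaborator rwx on everything created inside the directory could be set like this (a sketch reusing the names from the example above):
+
+```bash
+[vop999@login1.salomon ~]$ setfacl -d -m user:johnsm:rwx test
+[vop999@login1.salomon ~]$ getfacl -d test
+```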
 
 ## Shared Workspaces
 
@@ -222,7 +222,7 @@ Users home directories /home/username reside on HOME file system. Accessible cap
 !!! note
     The HOME file system is intended for preparation, evaluation, processing and storage of data generated by active Projects.
 
-The HOME  should not be used to archive data of past Projects or other unrelated data.
+The HOME should not be used to archive data of past Projects or other unrelated data.
 
 The files on HOME will not be deleted until end of the [users lifecycle](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/).
 
@@ -241,7 +241,7 @@ The workspace is backed up, such that it can be restored in case of catasthropic
 The WORK workspace resides on SCRATCH file system.  Users may create subdirectories and files in directories **/scratch/work/user/username** and **/scratch/work/project/projectid. **The /scratch/work/user/username is private to user, much like the home directory. The /scratch/work/project/projectid is accessible to all users involved in project projectid.
 
 !!! note
-    The WORK workspace is intended  to store users project data as well as for high performance access to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up.
+    The WORK workspace is intended to store users' project data as well as for high performance access to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up.
 
     Files on the WORK file system are **persistent** (not automatically deleted) throughout duration of the project.
 
@@ -266,7 +266,7 @@ The WORK workspace is hosted on SCRATCH file system. The SCRATCH is realized as
 The TEMP workspace resides on SCRATCH file system. The TEMP workspace accesspoint is  /scratch/temp.  Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire file system and deny service to other users. >If 100 TB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request.
 
 !!! note
-    The TEMP workspace is intended  for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory.
+    The TEMP workspace is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory.
 
     Users are advised to save the necessary data from the TEMP workspace to HOME or WORK after the calculations and clean up the scratch files.
 
@@ -352,7 +352,7 @@ Once registered for CESNET Storage, you may [access the storage](https://du.cesn
 !!! note
     SSHFS: The storage will be mounted like a local hard drive
 
-The SSHFS  provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard drive. Files can be than copied in and out in a usual fashion.
+SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it were a local removable hard drive. Files can then be copied in and out in the usual fashion.
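+
+In outline, the whole procedure looks like this (a minimal sketch; the server name and remote path are illustrative, the exact steps follow below):
+
+```bash
+$ mkdir cesnet
+$ sshfs username@ssh.du1.cesnet.cz:. cesnet/
+$ cp myfile.dat cesnet/    # copy a file onto the mounted storage
+$ fusermount -u cesnet     # unmount when done
+```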
 
 First, create the mount point
 
diff --git a/docs.it4i/software/lmod.md b/docs.it4i/software/lmod.md
index 00e70819ce8c8bac76f0d14d42b39c10afcd0a67..9c7f97739fbd5e92f77796bb2d7bfe5b7993089c 100644
--- a/docs.it4i/software/lmod.md
+++ b/docs.it4i/software/lmod.md
@@ -48,13 +48,13 @@ To get an overview of all available modules, you can use module avail or simply
 ```bash
 $ ml av
 ---------------------------------------- /apps/modules/compiler ----------------------------------------------
-     GCC/5.2.0	GCCcore/6.2.0 (D)    icc/2013.5.192		ifort/2013.5.192    LLVM/3.9.0-intel-2017.00 (D)
-   			...                                  ...
+   GCC/5.2.0    GCCcore/6.2.0 (D)    icc/2013.5.192     ifort/2013.5.192    LLVM/3.9.0-intel-2017.00 (D)
+                                 ...                                  ...
 
 ---------------------------------------- /apps/modules/devel -------------------------------------------------
-   Autoconf/2.69-foss-2015g                       CMake/3.0.0-intel-2016.01                           M4/1.4.17-intel-2016.01                     pkg-config/0.27.1-foss-2015g
-   Autoconf/2.69-foss-2016a                       CMake/3.3.1-foss-2015g                              M4/1.4.17-intel-2017.00                     pkg-config/0.27.1-intel-2015b
-   			...                                  ...
+   Autoconf/2.69-foss-2015g    CMake/3.0.0-intel-2016.01   M4/1.4.17-intel-2016.01   pkg-config/0.27.1-foss-2015g
+   Autoconf/2.69-foss-2016a    CMake/3.3.1-foss-2015g      M4/1.4.17-intel-2017.00   pkg-config/0.27.1-intel-2015b
+                                 ...                                  ...
 ```
 
 In the current module naming scheme, each module name consists of two parts:
@@ -75,7 +75,7 @@ $ ml spider gcc
   GCC:
 ---------------------------------------------------------------------------------
     Description:
-      The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...). - Homepage: http://gcc.gnu.org/ 
+      The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...). - Homepage: http://gcc.gnu.org/
 
      Versions:
         GCC/4.4.7-system
@@ -118,7 +118,7 @@ $ module spider GCC/6.2.0-2.27
   GCC: GCC/6.2.0-2.27
 --------------------------------------------------------------------------------------------------------------
     Description:
-      The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...). - Homepage: http://gcc.gnu.org/ 
+      The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...). - Homepage: http://gcc.gnu.org/
 
     This module can be loaded directly: module load GCC/6.2.0-2.27
 
@@ -156,7 +156,7 @@ Lmod does a partial match on the module name, so sometimes you need to use / to
 $ ml av GCC/
 
 ------------------------------------------ /apps/modules/compiler -------------------------------------------
-GCC/4.4.7-system    GCC/4.8.3   GCC/4.9.2   GCC/4.9.3   GCC/5.1.0-binutils-2.25  GCC/5.3.0-binutils-2.25   GCC/5.3.0-2.26   GCC/5.4.0-2.26   GCC/4.7.4   GCC/4.9.2-binutils-2.25   GCC/4.9.3-binutils-2.25   GCC/4.9.3-2.25   GCC/5.2.0   GCC/5.3.0-2.25  GCC/6.2.0-2.27 (D)
+GCC/4.4.7-system    GCC/4.8.3   GCC/4.9.2   GCC/4.9.3   GCC/5.1.0-binutils-2.25 GCC/5.3.0-binutils-2.25   GCC/5.3.0-2.26   GCC/5.4.0-2.26   GCC/4.7.4   GCC/4.9.2-binutils-2.25   GCC/4.9.3-binutils-2.25   GCC/4.9.3-2.25   GCC/5.2.0   GCC/5.3.0-2.25 GCC/6.2.0-2.27 (D)
 
   Where:
    D:  Default Module
diff --git a/docs.it4i/software/orca.md b/docs.it4i/software/orca.md
index 6a8769c65c1033f1b55fecf26a1e7855bc3a9da6..8fcfd69bfb44f9f978b18d8b8ac4e82a71653f36 100644
--- a/docs.it4i/software/orca.md
+++ b/docs.it4i/software/orca.md
@@ -10,7 +10,7 @@ The following module command makes the latest version of orca available to your
 $ module load ORCA/3_0_3-linux_x86-64
 ```
 
-**Dependency**
+### Dependency
 
 ```bash
 $ module list