diff --git a/docs.it4i/anselm-cluster-documentation/capacity-computing.md b/docs.it4i/anselm-cluster-documentation/capacity-computing.md
index 7bb22c748266a9c9df3207c2273c4a36eb08a57f..306474076df83ea3255390348fe63be8640ce4bf 100644
--- a/docs.it4i/anselm-cluster-documentation/capacity-computing.md
+++ b/docs.it4i/anselm-cluster-documentation/capacity-computing.md
@@ -7,11 +7,11 @@ In many cases, it is useful to submit huge (>100+) number of computational jobs
 However, executing a huge number of jobs via the PBS queue may strain the system. This strain may result in slow response to commands, inefficient scheduling, and overall degradation of performance and user experience for all users. For this reason, the number of jobs is **limited to 100 per user, 1000 per job array**.
 
 !!! Note
-     Please follow one of the procedures below, in case you wish to schedule more than 100 jobs at a time.
+	Please follow one of the procedures below in case you wish to schedule more than 100 jobs at a time.
 
-*   Use [Job arrays](capacity-computing/#job-arrays) when running huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
-*   Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
-*   Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running huge number of single core jobs
+-   Use [Job arrays](capacity-computing/#job-arrays) when running a huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
+-   Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
+-   Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running a huge number of single core jobs
 
 ## Policy
 
@@ -21,13 +21,13 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 ## Job Arrays
 
 !!! Note
-     Huge number of jobs may be easily submitted and managed as a job array.
+	A huge number of jobs may be easily submitted and managed as a job array.
 
 A job array is a compact representation of many jobs, called subjobs. The subjobs share the same job script, and have the same values for all attributes and resources, with the following exceptions:
 
-*   each subjob has a unique index, $PBS_ARRAY_INDEX
-*   job Identifiers of subjobs only differ by their indices
-*   the state of subjobs can differ (R,Q,...etc.)
+-   each subjob has a unique index, $PBS_ARRAY_INDEX
+-   job identifiers of subjobs only differ by their indices
+-   the states of subjobs can differ (R, Q, etc.)
 
 All subjobs within a job array have the same scheduling priority and schedule as independent jobs. The entire job array is submitted through a single qsub command and may be managed by the qdel, qalter, qhold, qrls and qsig commands as a single job.
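+
+For example, the entire array may be held and later released as one unit (the job ID is illustrative, matching the listings below):
+
+```bash
+$ qhold 12345[].dm2
+$ qrls 12345[].dm2
+```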
 
@@ -39,7 +39,7 @@ Example:
 
 Assume we have 900 input files with names beginning with "file" (e.g. file001, ..., file900). Assume we would like to use each of these input files with the program executable myprog.x, each as a separate job.
 
-First, we create a tasklist file (or subjobs list), listing all tasks (subjobs) * all input files in our example:
+First, we create a tasklist file (or subjobs list), listing all tasks (subjobs) - all input files in our example:
 
 ```bash
 $ find . -name 'file*' > tasklist
@@ -103,8 +103,8 @@ $ qstat -a 12345[].dm2
 dm2:
                                                             Req'd  Req'd   Elap
 Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
---------------* -------* -*  |---|---| -----* --* --* -----* ----* * -----
-12345[].dm2     user2    qprod    xx          13516   1  16    -*  00:50 B 00:02
+--------------- -------- --  |---|---| ------ --- --- ------ ----- - -----
+12345[].dm2     user2    qprod    xx          13516   1  16    --  00:50 B 00:02
 ```
 
 The status B means that some subjobs are already running.
@@ -116,14 +116,14 @@ $ qstat -a 12345[1-100].dm2
 dm2:
                                                             Req'd  Req'd   Elap
 Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
---------------* -------* -*  |---|---| -----* --* --* -----* ----* * -----
-12345[1].dm2    user2    qprod    xx          13516   1  16    -*  00:50 R 00:02
-12345[2].dm2    user2    qprod    xx          13516   1  16    -*  00:50 R 00:02
-12345[3].dm2    user2    qprod    xx          13516   1  16    -*  00:50 R 00:01
-12345[4].dm2    user2    qprod    xx          13516   1  16    -*  00:50 Q   --
+--------------- -------- --  |---|---| ------ --- --- ------ ----- - -----
+12345[1].dm2    user2    qprod    xx          13516   1  16    --  00:50 R 00:02
+12345[2].dm2    user2    qprod    xx          13516   1  16    --  00:50 R 00:02
+12345[3].dm2    user2    qprod    xx          13516   1  16    --  00:50 R 00:01
+12345[4].dm2    user2    qprod    xx          13516   1  16    --  00:50 Q   --
      .             .        .      .             .    .   .     .    .   .    .
      .             .        .      .             .    .   .     .    .   .    .
-12345[100].dm2  user2    qprod    xx          13516   1  16    -*  00:50 Q   --
+12345[100].dm2  user2    qprod    xx          13516   1  16    --  00:50 Q   --
 ```
 
 Delete the entire job array. Running subjobs will be killed, queued subjobs will be deleted.
@@ -150,7 +150,7 @@ Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/)
 ## GNU Parallel
 
 !!! Note
-     Use GNU parallel to run many single core tasks on one node.
+	Use GNU parallel to run many single core tasks on one node.
 
 GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful in running single core jobs via the queue system on Anselm.
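+
+As a minimal illustration of the tool itself (the module name is an assumption based on the cluster's module conventions):
+
+```bash
+$ module add parallel
+$ seq 1 3 | parallel echo "processing task {}"
+```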
 
@@ -169,7 +169,7 @@ Example:
 
 Assume we have 101 input files with names beginning with "file" (e.g. file001, ..., file101). Assume we would like to use each of these input files with the program executable myprog.x, each as a separate single core job. We call these single core jobs tasks.
 
-First, we create a tasklist file, listing all tasks * all input files in our example:
+First, we create a tasklist file, listing all tasks - all input files in our example:
 
 ```bash
 $ find . -name 'file*' > tasklist
@@ -237,7 +237,7 @@ Example:
 
 Assume we have 992 input files with names beginning with "file" (e.g. file001, ..., file992). Assume we would like to use each of these input files with the program executable myprog.x, each as a separate single core job. We call these single core jobs tasks.
 
-First, we create a tasklist file, listing all tasks * all input files in our example:
+First, we create a tasklist file, listing all tasks - all input files in our example:
 
 ```bash
 $ find . -name 'file*' > tasklist
@@ -265,7 +265,7 @@ SCR=/lscratch/$PBS_JOBID/$PARALLEL_SEQ
 mkdir -p $SCR ; cd $SCR || exit
 
 # get individual task from tasklist with index from PBS JOB ARRAY and index from Parallel
-IDX=$(($PBS_ARRAY_INDEX + $PARALLEL_SEQ * 1))
+IDX=$(($PBS_ARRAY_INDEX + $PARALLEL_SEQ - 1))
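+# (if the array index steps by N and PARALLEL_SEQ runs 1..N, this covers the
+# tasklist contiguously - no task is skipped or processed twice)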
 TASK=$(sed -n "${IDX}p" $PBS_O_WORKDIR/tasklist)
 [ -z "$TASK" ] && exit
 
diff --git a/docs.it4i/anselm-cluster-documentation/compute-nodes.md b/docs.it4i/anselm-cluster-documentation/compute-nodes.md
index f3b3694942de113f0f66e76216eb62ef9820ada8..6cd3c18f75ce9886bf315073d0ec6adeba51768c 100644
--- a/docs.it4i/anselm-cluster-documentation/compute-nodes.md
+++ b/docs.it4i/anselm-cluster-documentation/compute-nodes.md
@@ -6,46 +6,46 @@ Anselm is cluster of x86-64 Intel based nodes built on Bull Extreme Computing bu
 
 ### Compute Nodes Without Accelerator
 
-*   180 nodes
-*   2880 cores in total
-*   two Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
-*   64 GB of physical memory per node
-*   one 500GB SATA 2,5” 7,2 krpm HDD per node
-*   bullx B510 blade servers
-*   cn[1-180]
+-   180 nodes
+-   2880 cores in total
+-   two Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
+-   64 GB of physical memory per node
+-   one 500 GB SATA 2.5” 7.2 krpm HDD per node
+-   bullx B510 blade servers
+-   cn[1-180]
 
 ### Compute Nodes With GPU Accelerator
 
-*   23 nodes
-*   368 cores in total
-*   two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
-*   96 GB of physical memory per node
-*   one 500GB SATA 2,5” 7,2 krpm HDD per node
-*   GPU accelerator 1x NVIDIA Tesla Kepler K20 per node
-*   bullx B515 blade servers
-*   cn[181-203]
+-   23 nodes
+-   368 cores in total
+-   two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
+-   96 GB of physical memory per node
+-   one 500 GB SATA 2.5” 7.2 krpm HDD per node
+-   GPU accelerator 1x NVIDIA Tesla Kepler K20 per node
+-   bullx B515 blade servers
+-   cn[181-203]
 
 ### Compute Nodes With MIC Accelerator
 
-*   4 nodes
-*   64 cores in total
-*   two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
-*   96 GB of physical memory per node
-*   one 500GB SATA 2,5” 7,2 krpm HDD per node
-*   MIC accelerator 1x Intel Phi 5110P per node
-*   bullx B515 blade servers
-*   cn[204-207]
+-   4 nodes
+-   64 cores in total
+-   two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
+-   96 GB of physical memory per node
+-   one 500 GB SATA 2.5” 7.2 krpm HDD per node
+-   MIC accelerator 1x Intel Phi 5110P per node
+-   bullx B515 blade servers
+-   cn[204-207]
 
 ### Fat Compute Nodes
 
-*   2 nodes
-*   32 cores in total
-*   2 Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
-*   512 GB of physical memory per node
-*   two 300GB SAS 3,5”15krpm HDD (RAID1) per node
-*   two 100GB SLC SSD per node
-*   bullx R423-E3 servers
-*   cn[208-209]
+-   2 nodes
+-   32 cores in total
+-   2 Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
+-   512 GB of physical memory per node
+-   two 300 GB SAS 3.5” 15 krpm HDDs (RAID1) per node
+-   two 100 GB SLC SSDs per node
+-   bullx R423-E3 servers
+-   cn[208-209]
 
 ![](../img/bullxB510.png)
 **Figure Anselm bullx B510 servers**
@@ -53,7 +53,7 @@ Anselm is cluster of x86-64 Intel based nodes built on Bull Extreme Computing bu
 ### Compute Nodes Summary
 
 | Node type                  | Count | Range       | Memory | Cores       | [Access](resources-allocation-policy/) |
-| -------------------------* | ----* | ----------* | -----* | ----------* | -------------------------------------* |
+| -------------------------- | ----- | ----------- | ------ | ----------- | -------------------------------------- |
 | Nodes without accelerator  | 180   | cn[1-180]   | 64GB   | 16 @ 2.4GHz | qexp, qprod, qlong, qfree              |
 | Nodes with GPU accelerator | 23    | cn[181-203] | 96GB   | 16 @ 2.3GHz | qgpu, qprod                            |
 | Nodes with MIC accelerator | 4     | cn[204-207] | 96GB   | 16 @ 2.3GHz | qmic, qprod                            |
@@ -65,23 +65,23 @@ Anselm is equipped with Intel Sandy Bridge processors Intel Xeon E5-2665 (nodes
 
 ### Intel Sandy Bridge E5-2665 Processor
 
-*   eight-core
-*   speed: 2.4 GHz, up to 3.1 GHz using Turbo Boost Technology
-*   peak performance:  19.2 GFLOP/s per core
-*   caches:
-    *   L2: 256 KB per core
-    *   L3: 20 MB per processor
-*   memory bandwidth at the level of the processor: 51.2 GB/s
+-   eight-core
+-   speed: 2.4 GHz, up to 3.1 GHz using Turbo Boost Technology
+-   peak performance: 19.2 GFLOP/s per core
+-   caches:
+    -   L2: 256 KB per core
+    -   L3: 20 MB per processor
+-   memory bandwidth at the level of the processor: 51.2 GB/s
 
 ### Intel Sandy Bridge E5-2470 Processor
 
-*   eight-core
-*   speed: 2.3 GHz, up to 3.1 GHz using Turbo Boost Technology
-*   peak performance:  18.4 GFLOP/s per core
-*   caches:
-    *   L2: 256 KB per core
-    *   L3: 20 MB per processor
-*   memory bandwidth at the level of the processor: 38.4 GB/s
+-   eight-core
+-   speed: 2.3 GHz, up to 3.1 GHz using Turbo Boost Technology
+-   peak performance: 18.4 GFLOP/s per core
+-   caches:
+    -   L2: 256 KB per core
+    -   L3: 20 MB per processor
+-   memory bandwidth at the level of the processor: 38.4 GB/s
 
 Nodes equipped with the Intel Xeon E5-2665 CPU have the PBS resource attribute cpu_freq set to 24; nodes equipped with the Intel Xeon E5-2470 CPU have cpu_freq set to 23.
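+
+For example, nodes with the 2.4 GHz processors may be requested through this attribute (the project name is illustrative):
+
+```bash
+$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16:cpu_freq=24 ./myjob
+```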
 
@@ -101,30 +101,30 @@ Intel Turbo Boost Technology is used by default,  you can disable it for all nod
 
 ### Compute Node Without Accelerator
 
-*   2 sockets
-*   Memory Controllers are integrated into processors.
-    *   8 DDR3 DIMMs per node
-    *   4 DDR3 DIMMs per CPU
-    *   1 DDR3 DIMMs per channel
-    *   Data rate support: up to 1600MT/s
-*   Populated memory: 8 x 8 GB DDR3 DIMM 1600 MHz
+-   2 sockets
+-   Memory Controllers are integrated into processors.
+    -   8 DDR3 DIMMs per node
+    -   4 DDR3 DIMMs per CPU
+    -   1 DDR3 DIMM per channel
+    -   Data rate support: up to 1600MT/s
+-   Populated memory: 8 x 8 GB DDR3 DIMM 1600 MHz
 
 ### Compute Node With GPU or MIC Accelerator
 
-*   2 sockets
-*   Memory Controllers are integrated into processors.
-    *   6 DDR3 DIMMs per node
-    *   3 DDR3 DIMMs per CPU
-    *   1 DDR3 DIMMs per channel
-    *   Data rate support: up to 1600MT/s
-*   Populated memory: 6 x 16 GB DDR3 DIMM 1600 MHz
+-   2 sockets
+-   Memory Controllers are integrated into processors.
+    -   6 DDR3 DIMMs per node
+    -   3 DDR3 DIMMs per CPU
+    -   1 DDR3 DIMM per channel
+    -   Data rate support: up to 1600MT/s
+-   Populated memory: 6 x 16 GB DDR3 DIMM 1600 MHz
 
 ### Fat Compute Node
 
-*   2 sockets
-*   Memory Controllers are integrated into processors.
-    *   16 DDR3 DIMMs per node
-    *   8 DDR3 DIMMs per CPU
-    *   2 DDR3 DIMMs per channel
-    *   Data rate support: up to 1600MT/s
-*   Populated memory: 16 x 32 GB DDR3 DIMM 1600 MHz
+-   2 sockets
+-   Memory Controllers are integrated into processors.
+    -   16 DDR3 DIMMs per node
+    -   8 DDR3 DIMMs per CPU
+    -   2 DDR3 DIMMs per channel
+    -   Data rate support: up to 1600MT/s
+-   Populated memory: 16 x 32 GB DDR3 DIMM 1600 MHz
diff --git a/docs.it4i/anselm-cluster-documentation/environment-and-modules.md b/docs.it4i/anselm-cluster-documentation/environment-and-modules.md
index 1acfc79c8a15e11a5a7a9b49f458d7130578ea48..c674a36cb635e380612074899cd8cbc8c44d6f2d 100644
--- a/docs.it4i/anselm-cluster-documentation/environment-and-modules.md
+++ b/docs.it4i/anselm-cluster-documentation/environment-and-modules.md
@@ -16,7 +16,7 @@ fi
 alias qs='qstat -a'
 module load PrgEnv-gnu
 
-# Display information to standard output * only in interactive ssh session
+# Display information to standard output - only in interactive ssh session
 if [ -n "$SSH_TTY" ]
 then
  module list # Display loaded modules
@@ -24,14 +24,14 @@ fi
 ```
 
 !!! Note
-     Do not run commands outputting to standard output (echo, module list, etc) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Conside utilization of SSH session interactivity for such commands as stated in the previous example.
+	Do not run commands outputting to standard output (echo, module list, etc.) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Consider utilizing SSH session interactivity for such commands, as shown in the previous example.
 
 ### Application Modules
 
 In order to configure your shell for running a particular application on Anselm, we use the Module package interface.
 
 !!! Note
-     The modules set up the application paths, library paths and environment variables for running particular application.
+	The modules set up the application paths, library paths and environment variables for running a particular application.
 
     We also have a second modules repository, created using a tool called EasyBuild. On the Salomon cluster, all modules will be built by this tool. If you want to use software from this repository, please follow the instructions in the section [Application Modules Path Expansion](environment-and-modules/#EasyBuild).
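+
+A typical sequence might look like this (reusing the module from the profile example above):
+
+```bash
+$ module avail            # list modules available for loading
+$ module load PrgEnv-gnu  # set up paths and variables for the GNU environment
+$ module list             # show currently loaded modules
+```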
 
diff --git a/docs.it4i/anselm-cluster-documentation/hardware-overview.md b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
index 7c56e77144d146e137f2f1a31d75fe1a5454e159..84e05272bd3765b79b12fb66abf793992b148819 100644
--- a/docs.it4i/anselm-cluster-documentation/hardware-overview.md
+++ b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
@@ -12,10 +12,10 @@ The cluster compute nodes cn[1-207] are organized within 13 chassis.
 
 There are four types of compute nodes:
 
-*   180 compute nodes without the accelerator
-*   23 compute nodes with GPU accelerator * equipped with NVIDIA Tesla Kepler K20
-*   4 compute nodes with MIC accelerator * equipped with Intel Xeon Phi 5110P
-*   2 fat nodes * equipped with 512 GB RAM and two 100 GB SSD drives
+-   180 compute nodes without the accelerator
+-   23 compute nodes with GPU accelerator - equipped with NVIDIA Tesla Kepler K20
+-   4 compute nodes with MIC accelerator - equipped with Intel Xeon Phi 5110P
+-   2 fat nodes - equipped with 512 GB RAM and two 100 GB SSD drives
 
 [More about Compute nodes](compute-nodes/).
 
@@ -31,7 +31,7 @@ The user access to the Anselm cluster is provided by two login nodes login1, log
 The parameters are summarized in the following tables:
 
 | **In general**                              |                                              |
-| ------------------------------------------* | -------------------------------------------* |
+| ------------------------------------------- | -------------------------------------------- |
 | Primary purpose                             | High Performance Computing                   |
 | Architecture of compute nodes               | x86-64                                       |
 | Operating system                            | Linux                                        |
@@ -39,7 +39,7 @@ The parameters are summarized in the following tables:
 | Total                                       | 209                                          |
 | Processor cores                             | 16 (2 x 8 cores)                             |
 | RAM                                         | min. 64 GB, min. 4 GB per core               |
-| Local disk drive                            | yes * usually 500 GB                         |
+| Local disk drive                            | yes - usually 500 GB                         |
 | Compute network                             | InfiniBand QDR, fully non-blocking, fat-tree |
 | w/o accelerator                             | 180, cn[1-180]                               |
 | GPU accelerated                             | 23, cn[181-203]                              |
@@ -51,10 +51,10 @@ The parameters are summarized in the following tables:
 | Total amount of RAM                         | 15.136 TB                                    |
 
 | Node             | Processor                               | Memory | Accelerator          |
-| ---------------* | --------------------------------------* | -----* | -------------------* |
-| w/o accelerator  | 2 x Intel Sandy Bridge E5-2665, 2.4 GHz | 64 GB  | *                    |
+| ---------------- | --------------------------------------- | ------ | -------------------- |
+| w/o accelerator  | 2 x Intel Sandy Bridge E5-2665, 2.4 GHz | 64 GB  | -                    |
 | GPU accelerated  | 2 x Intel Sandy Bridge E5-2470, 2.3 GHz | 96 GB  | NVIDIA Kepler K20    |
 | MIC accelerated  | 2 x Intel Sandy Bridge E5-2470, 2.3 GHz | 96 GB  | Intel Xeon Phi 5110P |
-| Fat compute node | 2 x Intel Sandy Bridge E5-2665, 2.4 GHz | 512 GB | *                    |
+| Fat compute node | 2 x Intel Sandy Bridge E5-2665, 2.4 GHz | 512 GB | -                    |
 
 For more details please refer to the [Compute nodes](compute-nodes/), [Storage](storage/), and [Network](network/).
diff --git a/docs.it4i/anselm-cluster-documentation/job-priority.md b/docs.it4i/anselm-cluster-documentation/job-priority.md
index 6b49c5acb7811b5e2873a1ac764b609f2e2550e1..02e86ada55d938d04a09779282da5b22acbeb757 100644
--- a/docs.it4i/anselm-cluster-documentation/job-priority.md
+++ b/docs.it4i/anselm-cluster-documentation/job-priority.md
@@ -36,7 +36,7 @@ Usage counts allocated core-hours (`ncpus x walltime`). Usage is decayed, or cut
 Jobs queued in the queue qexp are not counted toward the project's usage.
 
 !!! Note
-     Calculated usage and fair-share priority can be seen at <https://extranet.it4i.cz/anselm/projects>.
+	Calculated usage and fair-share priority can be seen at <https://extranet.it4i.cz/anselm/projects>.
 
 The calculated fair-share priority can also be seen as the Resource_List.fairshare attribute of a job.
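+
+One way to inspect it, assuming a job ID (the attribute is reported by PBS Professional; the exact output format may differ):
+
+```bash
+$ qstat -f 12345.srv11 | grep fairshare
+```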
 
@@ -65,6 +65,6 @@ The scheduler makes a list of jobs to run in order of execution priority. Schedu
 This means that jobs with lower execution priority can run before jobs with higher execution priority.
 
 !!! Note
-     It is **very beneficial to specify the walltime** when submitting jobs.
+	It is **very beneficial to specify the walltime** when submitting jobs.
 
-Specifying more accurate walltime enables better scheduling, better execution times and better resource usage. Jobs with suitable (small) walltime could be backfilled * and overtake job(s) with higher priority.
+Specifying a more accurate walltime enables better scheduling, better execution times, and better resource usage. Jobs with a suitably small walltime may be backfilled - and overtake jobs with higher priority.
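+
+For example, a job expected to finish within four hours might be submitted as follows (the project name and resource counts are illustrative):
+
+```bash
+$ qsub -A OPEN-0-0 -q qprod -l select=2:ncpus=16,walltime=04:00:00 ./myjob
+```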
diff --git a/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md b/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md
index a4b1791ad67c2efc6551bb97c071cd3c483c07a8..ebb2b7cd5f81761e597bbd25d1018286c9cb453a 100644
--- a/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md
+++ b/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md
@@ -77,7 +77,7 @@ In this example, we allocate nodes cn171 and cn172, all 16 cores per node, for 2
 Nodes equipped with the Intel Xeon E5-2665 CPU have a base clock frequency of 2.4 GHz; nodes equipped with the Intel Xeon E5-2470 CPU have a base frequency of 2.3 GHz (see section Compute Nodes for details). Nodes may be selected via the PBS resource attribute cpu_freq.
 
 | CPU Type           | base freq. | Nodes                  | cpu_freq attribute |
-| -----------------* | ---------* | ---------------------* | -----------------* |
+| ------------------ | ---------- | ---------------------- | ------------------ |
 | Intel Xeon E5-2665 | 2.4GHz     | cn[1-180], cn[208-209] | 24                 |
 | Intel Xeon E5-2470 | 2.3GHz     | cn[181-207]            | 23                 |
 
@@ -150,10 +150,10 @@ $ qstat -a
 srv11:
                                                             Req'd  Req'd   Elap
 Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
---------------* -------* -*  |---|---| -----* --* --* -----* ----* * -----
-16287.srv11     user1    qlong    job1         6183   4  64    -*  144:0 R 38:25
-16468.srv11     user1    qlong    job2         8060   4  64    -*  144:0 R 17:44
-16547.srv11     user2    qprod    job3x       13516   2  32    -*  48:00 R 00:58
+--------------- -------- --  |---|---| ------ --- --- ------ ----- - -----
+16287.srv11     user1    qlong    job1         6183   4  64    --  144:0 R 38:25
+16468.srv11     user1    qlong    job2         8060   4  64    --  144:0 R 17:44
+16547.srv11     user2    qprod    job3x       13516   2  32    --  48:00 R 00:58
 ```
 
 In this example, user1 and user2 are running jobs named job1, job2 and job3x. The jobs job1 and job2 are using 4 nodes, 16 cores per node each. The job1 has been running for 38 hours and 25 minutes, job2 for 17 hours 44 minutes. The job1 has already consumed about `64 x 38.42 = 2459` core-hours, job3x about `32 x 0.97 = 31` core-hours. These consumed core-hours will be accounted to the respective project accounts, regardless of whether the allocated cores were actually used for computations.
@@ -253,8 +253,8 @@ $ qstat -n -u username
 srv11:
                                                             Req'd  Req'd   Elap
 Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
---------------* -------* -*  |---|---| -----* --* --* -----* ----* * -----
-15209.srv11     username qexp     Name0        5530   4  64    -*  01:00 R 00:00
+--------------- -------- --  |---|---| ------ --- --- ------ ----- - -----
+15209.srv11     username qexp     Name0        5530   4  64    --  01:00 R 00:00
    cn17/0*16+cn108/0*16+cn109/0*16+cn110/0*16
 ```
 
diff --git a/docs.it4i/anselm-cluster-documentation/network.md b/docs.it4i/anselm-cluster-documentation/network.md
index 4760c322f7d7c4c978f8b2dfed65248730f56135..c0226db15c4d3823e761b507cd647065a612bb98 100644
--- a/docs.it4i/anselm-cluster-documentation/network.md
+++ b/docs.it4i/anselm-cluster-documentation/network.md
@@ -9,7 +9,7 @@ All compute and login nodes of Anselm are interconnected by a high-bandwidth, lo
 The compute nodes may be accessed via the InfiniBand network using the ib0 network interface, in the address range 10.2.1.1-209. MPI may be used to establish a native InfiniBand connection among the nodes.
 
 !!! Note
-     The network provides **2170 MB/s** transfer rates via the TCP connection (single stream) and up to **3600 MB/s** via native InfiniBand protocol.
+	The network provides **2170 MB/s** transfer rates via the TCP connection (single stream) and up to **3600 MB/s** via native InfiniBand protocol.
 
 The fat-tree topology ensures that peak transfer rates are achieved between any two nodes, independent of network traffic exchanged among other nodes concurrently.
 
@@ -24,8 +24,8 @@ $ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob
 $ qstat -n -u username
                                                             Req'd  Req'd   Elap
 Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
---------------* -------* -*  |---|---| -----* --* --* -----* ----* * -----
-15209.srv11     username qexp     Name0        5530   4  64    -*  01:00 R 00:00
+--------------- -------- --  |---|---| ------ --- --- ------ ----- - -----
+15209.srv11     username qexp     Name0        5530   4  64    --  01:00 R 00:00
    cn17/0*16+cn108/0*16+cn109/0*16+cn110/0*16
 
 $ ssh 10.2.1.110
diff --git a/docs.it4i/anselm-cluster-documentation/prace.md b/docs.it4i/anselm-cluster-documentation/prace.md
index 5768aefeac9449d232d276f7c84fc034b1401ceb..9904b34c928ea574f3f473913033cfa08699c7be 100644
--- a/docs.it4i/anselm-cluster-documentation/prace.md
+++ b/docs.it4i/anselm-cluster-documentation/prace.md
@@ -28,11 +28,11 @@ The user will need a valid certificate and to be present in the PRACE LDAP (plea
 
 Most of the information needed by PRACE users accessing the Anselm TIER-1 system can be found here:
 
-*   [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)
-*   [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)
-*   [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh)
-*   [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details)
-*   [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer)
+-   [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)
+-   [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)
+-   [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh)
+-   [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details)
+-   [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer)
 
 Before you start to use any of the services, don't forget to create a proxy certificate from your certificate:
 
@@ -53,7 +53,7 @@ To access Anselm cluster, two login nodes running GSI SSH service are available.
 It is recommended to use the single DNS name anselm-prace.it4i.cz, which is distributed between the two login nodes. If needed, users can log in directly to one of the login nodes. The addresses are:
 
 | Login address               | Port | Protocol | Login node       |
-| --------------------------* | ---* | -------* | ---------------* |
+| --------------------------- | ---- | -------- | ---------------- |
 | anselm-prace.it4i.cz        | 2222 | gsissh   | login1 or login2 |
 | login1-prace.anselm.it4i.cz | 2222 | gsissh   | login1           |
 | login2-prace.anselm.it4i.cz | 2222 | gsissh   | login2           |
@@ -73,7 +73,7 @@ When logging from other PRACE system, the prace_service script can be used:
 It is recommended to use the single DNS name anselm.it4i.cz, which is distributed between the two login nodes. If needed, users can log in directly to one of the login nodes. The addresses are:
 
 | Login address         | Port | Protocol | Login node       |
-| --------------------* | ---* | -------* | ---------------* |
+| --------------------- | ---- | -------- | ---------------- |
 | anselm.it4i.cz        | 2222 | gsissh   | login1 or login2 |
 | login1.anselm.it4i.cz | 2222 | gsissh   | login1           |
 | login2.anselm.it4i.cz | 2222 | gsissh   | login2           |
@@ -125,7 +125,7 @@ There's one control server and three backend servers for striping and/or backup
 **Access from PRACE network:**
 
 | Login address                | Port | Node role                   |
-| ---------------------------* | ---* | --------------------------* |
+| ---------------------------- | ---- | --------------------------- |
 | gridftp-prace.anselm.it4i.cz | 2812 | Front end /control server   |
 | login1-prace.anselm.it4i.cz  | 2813 | Backend / data mover server |
 | login2-prace.anselm.it4i.cz  | 2813 | Backend / data mover server |
@@ -158,7 +158,7 @@ Or by using  prace_service script:
 **Access from public Internet:**
 
 | Login address          | Port | Node role                   |
-| ---------------------* | ---* | --------------------------* |
+| ---------------------- | ---- | --------------------------- |
 | gridftp.anselm.it4i.cz | 2812 | Front end /control server   |
 | login1.anselm.it4i.cz  | 2813 | Backend / data mover server |
 | login2.anselm.it4i.cz  | 2813 | Backend / data mover server |
@@ -191,7 +191,7 @@ Or by using  prace_service script:
 Generally both shared file systems are available through GridFTP:
 
 | File system mount point | Filesystem | Comment                                                        |
-| ----------------------* | ---------* | -------------------------------------------------------------* |
+| ----------------------- | ---------- | -------------------------------------------------------------- |
 | /home                   | Lustre     | Default HOME directories of users in format /home/prace/login/ |
 | /scratch                | Lustre     | Shared SCRATCH mounted on the whole cluster                    |
 
@@ -220,7 +220,7 @@ General information about the resource allocation, job queuing and job execution
 For PRACE users, the default production run queue is "qprace". PRACE users can also use two other queues "qexp" and "qfree".
 
 | queue                         | Active project | Project resources | Nodes               | priority | authorization | walltime  |
-| ----------------------------* | -------------* | ----------------* | ------------------* | -------* | ------------* | --------* |
+| ----------------------------- | -------------- | ----------------- | ------------------- | -------- | ------------- | --------- |
 | **qexp** Express queue        | no             | none required     | 2 reserved, 8 total | high     | no            | 1 / 1h    |
 | **qprace** Production queue   | yes            | > 0               | 178 w/o accelerator | medium   | no            | 24 / 48 h |
 | **qfree** Free resource queue | yes            | none required     | 178 w/o accelerator | very low | no            | 12 / 12 h |
@@ -245,7 +245,7 @@ Users who have undergone the full local registration procedure (including signin
     $ it4ifree
     Password:
          PID    Total   Used   ...by me Free
-       -------* ------* -----* -------* -------
+       -------- ------- ------ -------- -------
        OPEN-0-0 1500000 400644   225265 1099356
        DD-13-1    10000   2606     2606    7394
 ```
diff --git a/docs.it4i/anselm-cluster-documentation/remote-visualization.md b/docs.it4i/anselm-cluster-documentation/remote-visualization.md
index 7c6adc6ac8941c5866c273c7096091a29d1a7195..b448ef6823d976142535a2a3ae121bfeb704deca 100644
--- a/docs.it4i/anselm-cluster-documentation/remote-visualization.md
+++ b/docs.it4i/anselm-cluster-documentation/remote-visualization.md
@@ -2,19 +2,19 @@
 
 ## Introduction
 
-The goal of this service is to provide the users a GPU accelerated use of OpenGL applications, especially for pre* and post* processing work, where not only the GPU performance is needed but also fast access to the shared file systems of the cluster and a reasonable amount of RAM.
+The goal of this service is to provide users with GPU-accelerated use of OpenGL applications, especially for pre- and post-processing work, where not only GPU performance is needed but also fast access to the shared file systems of the cluster and a reasonable amount of RAM.
 
 The service is based on the integration of the open source tools VirtualGL and TurboVNC with the cluster's job scheduler, PBS Professional.
 
 Currently, two compute nodes are dedicated to this service, with the following configuration for each node:
 
 | [**Visualization node configuration**](compute-nodes/) |                                         |
-| -----------------------------------------------------* | --------------------------------------* |
+| ------------------------------------------------------ | --------------------------------------- |
 | CPU                                                    | 2 x Intel Sandy Bridge E5-2670, 2.6 GHz |
 | Processor cores                                        | 16 (2 x 8 cores)                        |
 | RAM                                                    | 64 GB, min. 4 GB per core               |
 | GPU                                                    | NVIDIA Quadro 4000, 2 GB RAM            |
-| Local disk drive                                       | yes * 500 GB                            |
+| Local disk drive                                       | yes - 500 GB                            |
 | Compute network                                        | InfiniBand QDR                          |
 
 ## Schematic Overview
@@ -133,7 +133,7 @@ $ vncserver -kill :1
 qviz**. The queue has the following properties:
 
 | queue                        | active project | project resources | nodes | min ncpus | priority | authorization | walltime         |
-| ---------------------------* | -------------* | ----------------* | ----* | --------* | -------* | ------------* | ---------------* |
+| ---------------------------- | -------------- | ----------------- | ----- | --------- | -------- | ------------- | ---------------- |
 | **qviz** Visualization queue | yes            | none required     | 2     | 4         | 150      | no            | 1 hour / 8 hours |
 
 Currently, when accessing the node, each user gets 4 CPU cores allocated, and thus approximately 16 GB of RAM and 1/4 of the GPU capacity.
diff --git a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution.md b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution.md
index dd472bdcc451351284dda80f3ab88ee17a1369a3..ae02e4b34a8fa6c8ffc31989d6116803dae6e1fd 100644
--- a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution.md
+++ b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution.md
@@ -6,21 +6,21 @@ To run a [job](../introduction/), [computational resources](../introduction/) fo
 
 The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. [The Fair-share](job-priority/) at Anselm ensures that individual users may consume an approximately equal amount of resources per week. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following queues are available to Anselm users:
 
-*   **qexp**, the Express queue
-*   **qprod**, the Production queue
-*   **qlong**, the Long queue, regula
-*   **qnvidia**, **qmic**, **qfat**, the Dedicated queues
-*   **qfree**, the Free resource utilization queue
+-   **qexp**, the Express queue
+-   **qprod**, the Production queue
+-   **qlong**, the Long queue
+-   **qnvidia**, **qmic**, **qfat**, the Dedicated queues
+-   **qfree**, the Free resource utilization queue
 
 !!! Note
-     Check the queue status at <https://extranet.it4i.cz/anselm/>
+	Check the queue status at <https://extranet.it4i.cz/anselm/>
 
 Read more on the [Resource Allocation Policy](resources-allocation-policy/) page.
 
 ## Job Submission and Execution
 
 !!! Note
-     Use the **qsub** command to submit your jobs.
+	Use the **qsub** command to submit your jobs.
 
 The qsub command submits the job into the queue and creates a request to the PBS Job manager for the allocation of the specified resources. The **smallest allocation unit is an entire node, 16 cores**, with the exception of the qexp queue. The resources will be allocated when available, subject to allocation policies and constraints. **After the resources are allocated, the jobscript or interactive shell is executed on the first of the allocated nodes.**
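+
+For example (the project name and node count are illustrative):
+
+```bash
+$ qsub -A OPEN-0-0 -q qprod -l select=2:ncpus=16 ./myjob
+```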
 
@@ -29,7 +29,7 @@ Read more on the [Job submission and execution](job-submission-and-execution/) p
 ## Capacity Computing
 
 !!! Note
-     Use Job arrays when running huge number of jobs.
+	Use Job arrays when running huge number of jobs.
 
 Use GNU Parallel and/or Job arrays when running (many) single core jobs.
 
diff --git a/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md b/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
index a2bf177be27e295f38755c5f0532627fac4b0e57..eab7a56adef16c4cf23f463aaa82d2c40fae53f7 100644
--- a/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
+++ b/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
@@ -8,7 +8,7 @@ The resources are allocated to the job in a fair-share fashion, subject to const
     Check the queue status at <https://extranet.it4i.cz/anselm/>
 
 | queue               | active project | project resources | nodes                                                | min ncpus | priority | authorization | walltime |
-| ------------------* | -------------* | ----------------* | ---------------------------------------------------* | --------* | -------* | ------------* | -------* |
+| ------------------- | -------------- | ----------------- | ---------------------------------------------------- | --------- | -------- | ------------- | -------- |
 | qexp                | no             | none required     | 2 reserved, 31 total including MIC, GPU and FAT nodes | 1         | 150      | no            | 1 h      |
 | qprod               | yes            | 0                 | 178 nodes w/o accelerator                            | 16        | 0        | no            | 24/48 h  |
 | qlong               | yes            | 0                 | 60 nodes w/o accelerator                             | 16        | 0        | no            | 72/144 h |
@@ -20,11 +20,11 @@ The resources are allocated to the job in a fair-share fashion, subject to const
 
 **The qexp queue is equipped with nodes that do not all have the very same CPU clock speed.** Should you need the very same CPU speed, you have to select the proper nodes during PBS job submission.
 
-*   **qexp**, the Express queue: This queue is dedicated for testing and running very small jobs. It is not required to specify a project to enter the qexp. There are 2 nodes always reserved for this queue (w/o accelerator), maximum 8 nodes are available via the qexp for a particular user, from a pool of nodes containing Nvidia accelerated nodes (cn181-203), MIC accelerated nodes (cn204-207) and Fat nodes with 512GB RAM (cn208-209). This enables to test and tune also accelerated code or code with higher RAM requirements. The nodes may be allocated on per core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
-*   **qprod**, the Production queue: This queue is intended for normal production runs. It is required that active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, except the reserved ones. 178 nodes without accelerator are included. Full nodes, 16 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
-*   **qlong**, the Long queue: This queue is intended for long production runs. It is required that active project with nonzero remaining resources is specified to enter the qlong. Only 60 nodes without acceleration may be accessed via the qlong queue. Full nodes, 16 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times of the standard qprod time * 3 x 48 h).
-*   **qnvidia**, qmic, qfat, the Dedicated queues: The queue qnvidia is dedicated to access the Nvidia accelerated nodes, the qmic to access MIC nodes and qfat the Fat nodes. It is required that active project with nonzero remaining resources is specified to enter these queues. 23 nvidia, 4 mic and 2 fat nodes are included. Full nodes, 16 cores per node are allocated. The queues run with very high priority, the jobs will be scheduled before the jobs coming from the qexp queue. An PI needs explicitly ask [support](https://support.it4i.cz/rt/) for authorization to enter the dedicated queues for all users associated to her/his Project.
-*   **qfree**, The Free resource queue: The queue qfree is intended for utilization of free resources, after a Project exhausted all its allocated computational resources (Does not apply to DD projects by default. DD projects have to request for persmission on qfree after exhaustion of computational resources.). It is required that active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 16 cores per node are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
+-   **qexp**, the Express queue: This queue is dedicated to testing and running very small jobs. It is not required to specify a project to enter the qexp. There are always 2 nodes reserved for this queue (w/o accelerator), and a maximum of 8 nodes are available via the qexp for a particular user, from a pool of nodes containing NVIDIA accelerated nodes (cn181-203), MIC accelerated nodes (cn204-207) and Fat nodes with 512 GB RAM (cn208-209). This also enables testing and tuning of accelerated code or code with higher RAM requirements. The nodes may be allocated on a per core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
+-   **qprod**, the Production queue: This queue is intended for normal production runs. It is required that an active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, except the reserved ones. 178 nodes without accelerator are included. Full nodes, 16 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
+-   **qlong**, the Long queue: This queue is intended for long production runs. It is required that an active project with nonzero remaining resources is specified to enter the qlong. Only 60 nodes without acceleration may be accessed via the qlong queue. Full nodes, 16 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times the standard qprod time - 3 x 48 h).
+-   **qnvidia**, **qmic**, **qfat**, the Dedicated queues: The queue qnvidia is dedicated to accessing the NVIDIA accelerated nodes, the qmic to accessing MIC nodes, and the qfat to the Fat nodes. It is required that an active project with nonzero remaining resources is specified to enter these queues. 23 NVIDIA, 4 MIC and 2 fat nodes are included. Full nodes, 16 cores per node, are allocated. The queues run with very high priority; the jobs will be scheduled before the jobs coming from the qexp queue. A PI needs to explicitly ask [support](https://support.it4i.cz/rt/) for authorization to enter the dedicated queues for all users associated with her/his Project.
+-   **qfree**, the Free resource queue: The queue qfree is intended for utilization of free resources, after a Project has exhausted all of its allocated computational resources (this does not apply to DD projects by default; DD projects have to request permission to use qfree after exhaustion of computational resources). It is required that an active project is specified to enter the queue; however, no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 16 cores per node, are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
 
 ### Notes
 
@@ -122,7 +122,7 @@ User may check at any time, how many core-hours have been consumed by himself/he
 $ it4ifree
 Password:
      PID    Total   Used   ...by me Free
-   -------* ------* -----* -------* -------
+   -------- ------- ------ -------- -------
    OPEN-0-0 1500000 400644   225265 1099356
    DD-13-1    10000   2606     2606    7394
 ```
diff --git a/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md b/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md
index c1b177e848618bb2af7c688b1e76032d0517fba2..d830fa79d1e8f9044058733423d6da2ca1e596a0 100644
--- a/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md
+++ b/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md
@@ -5,7 +5,7 @@
 The Anselm cluster is accessed by SSH protocol via login nodes login1 and login2 at address anselm.it4i.cz. The login nodes may be addressed specifically, by prepending the login node name to the address.
 
 | Login address         | Port | Protocol | Login node                                   |
-| --------------------* | ---* | -------* | -------------------------------------------* |
+| --------------------- | ---- | -------- | -------------------------------------------- |
 | anselm.it4i.cz        | 22   | ssh      | round-robin DNS record for login1 and login2 |
 | login1.anselm.it4i.cz | 22   | ssh      | login1                                       |
 | login2.anselm.it4i.cz | 22   | ssh      | login2                                       |
@@ -61,7 +61,7 @@ Example to the cluster login:
 Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp protocols. (Not available yet.) In case large volumes of data are transferred, use the dedicated data mover node dm1.anselm.it4i.cz for increased performance.
 
 | Address               | Port | Protocol  |
-| --------------------* | ---* | --------* |
+| --------------------- | ---- | --------- |
 | anselm.it4i.cz        | 22   | scp, sftp |
 | login1.anselm.it4i.cz | 22   | scp, sftp |
 | login2.anselm.it4i.cz | 22   | scp, sftp |
@@ -120,7 +120,7 @@ More information about the shared file systems is available [here](storage/).
 Outgoing connections from Anselm cluster login nodes to the outside world are restricted to the following ports:
 
 | Port | Protocol |
-| ---* | -------* |
+| ---- | -------- |
 | 22   | ssh      |
 | 80   | http     |
 | 443  | https    |
@@ -198,9 +198,9 @@ Now, configure the applications proxy settings to **localhost:6000**. Use port f
 
 ## Graphical User Interface
 
-*   The [X Window system](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) is a principal way to get GUI access to the clusters.
-*   The [Virtual Network Computing](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer).
+-   The [X Window system](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) is a principal way to get GUI access to the clusters.
+-   The [Virtual Network Computing](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer).
 
 ## VPN Access
 
-*   Access to IT4Innovations internal resources via [VPN](../get-started-with-it4innovations/accessing-the-clusters/vpn-access/).
+-   Access to IT4Innovations internal resources via [VPN](../get-started-with-it4innovations/accessing-the-clusters/vpn-access/).
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md
index 88065885a60092fc2e73cfc7f2fed957b8fe1651..89fedb100bd003835dd0f8efa8f21431790c5f88 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md
@@ -129,7 +129,7 @@ To run ANSYS Fluent in batch mode with user's config file you can utilize/modify
  #Default arguments for all jobs
  fluent_args="-ssh -g -i $input $fluent_args"
 
- echo "---------* Going to start a fluent job with the following settings:
+ echo "---------- Going to start a fluent job with the following settings:
  Input: $input
  Case: $case
  Output: $outfile
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
index 72d5b40af19bdcaced01719a8b31441abd1df191..d99fe416ada838585b509d0de20998286fa2bb32 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
@@ -1,7 +1,7 @@
 # ANSYS MAPDL
 
 **[ANSYS Multiphysics](http://www.ansys.com/products/multiphysics)**
-software offers a comprehensive product solution for both multiphysics and single-physics analysis. The product includes structural, thermal, fluid and both high* and low-frequency electromagnetic analysis. The product also contains solutions for both direct and sequentially coupled physics problems including direct coupled-field elements and the ANSYS multi-field solver.
+software offers a comprehensive product solution for both multiphysics and single-physics analysis. The product includes structural, thermal, fluid and both high- and low-frequency electromagnetic analysis. The product also contains solutions for both direct and sequentially coupled physics problems including direct coupled-field elements and the ANSYS multi-field solver.
 
 To run ANSYS MAPDL in batch mode you can utilize/modify the default mapdl.pbs script and execute it via the qsub command.
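+
+For instance, with the default script in the current working directory:
+
+```bash
+$ qsub mapdl.pbs
+```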
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md b/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md
index f9df70d834ce91a52ae6734ea62281375c076c64..0918ca1926ccdb0df6e5b4a743ba2afffa109a6f 100644
--- a/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md
+++ b/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md
@@ -19,7 +19,7 @@ Currently on Anselm is installed version 2010.1, patch level 45, parallel versio
 Compilation parameters are default:
 
 | Parameter                          | Value        |
-| ---------------------------------* | -----------* |
+| ---------------------------------- | ------------ |
 | max number of atoms                | 200          |
 | max number of valence orbitals     | 300          |
 | max number of basis functions      | 4095         |
@@ -33,7 +33,7 @@ Compilation parameters are default:
 Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details.
 
 !!! Note
-     The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend to use MPI parallelization only. This can be achieved by passing option  mpiprocs=16:ompthreads=1 to PBS.
+	The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend using MPI parallelization only. This can be achieved by passing the option mpiprocs=16:ompthreads=1 to PBS.
 
 You are advised to use the -d option to point to a directory in the [SCRATCH file system](../../storage/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that they are placed in the fast scratch file system.
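+
+A sketch combining both recommendations (the jobscript name, input file and scratch path are illustrative):
+
+```bash
+# request one full node for pure MPI execution - 16 processes, 1 thread each
+$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16:mpiprocs=16:ompthreads=1 ./run_molpro.sh
+# inside the jobscript, keep Molpro temporary data on the scratch file system
+molpro -d /scratch/$USER/$PBS_JOBID my_input.com
+```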
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md b/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
index 569f20771197f93a74559037809e38bb606449f0..d22c987f2f4e794def52c33eb6950b825eac6708 100644
--- a/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
+++ b/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
@@ -12,10 +12,10 @@ NWChem aims to provide its users with computational chemistry tools that are sca
 
 The following versions are currently installed:
 
-*   6.1.1, not recommended, problems have been observed with this version
-*   6.3-rev2-patch1, current release with QMD patch applied. Compiled with Intel compilers, MKL and Intel MPI
-*   6.3-rev2-patch1-openmpi, same as above, but compiled with OpenMPI and NWChem provided BLAS instead of MKL. This version is expected to be slower
-*   6.3-rev2-patch1-venus, this version contains only libraries for VENUS interface linking. Does not provide standalone NWChem executable
+-   6.1.1, not recommended, problems have been observed with this version
+-   6.3-rev2-patch1, current release with QMD patch applied. Compiled with Intel compilers, MKL and Intel MPI
+-   6.3-rev2-patch1-openmpi, same as above, but compiled with OpenMPI and NWChem-provided BLAS instead of MKL. This version is expected to be slower
+-   6.3-rev2-patch1-venus, this version contains only libraries for VENUS interface linking. It does not provide a standalone NWChem executable
 
 For a current list of installed versions, execute:
 
@@ -40,5 +40,5 @@ NWChem is compiled for parallel MPI execution. Normal procedure for MPI jobs app
 
 Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and set the following directives in the input file:
 
-*   MEMORY : controls the amount of memory NWChem will use
-*   SCRATCH_DIR : set this to a directory in [SCRATCH file system](../../storage/storage/#scratch) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g.. "scf direct"
+-   MEMORY: controls the amount of memory NWChem will use
+-   SCRATCH_DIR: set this to a directory in the [SCRATCH file system](../../storage/storage/#scratch) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct" (see the sketch below)
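+
+A minimal sketch of these directives in an input file (the memory size and path are placeholders):
+
+```bash
+memory 1000 mb
+scratch_dir /scratch/work/myjob
+scf
+  direct
+end
+```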
diff --git a/docs.it4i/anselm-cluster-documentation/software/compilers.md b/docs.it4i/anselm-cluster-documentation/software/compilers.md
index 969c34cf49e4bf42f5a32eb26d13048458da52c4..86f354ba1fee2daa9116035e7b70673adab12aa2 100644
--- a/docs.it4i/anselm-cluster-documentation/software/compilers.md
+++ b/docs.it4i/anselm-cluster-documentation/software/compilers.md
@@ -4,11 +4,11 @@
 
 Currently there are several compilers for different programming languages available on the Anselm cluster:
 
-*   C/C++
-*   Fortran 77/90/95
-*   Unified Parallel C
-*   Java
-*   NVIDIA CUDA
+-   C/C++
+-   Fortran 77/90/95
+-   Unified Parallel C
+-   Java
+-   NVIDIA CUDA
 
 The C/C++ and Fortran compilers are divided into two main groups: GNU and Intel.
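+
+For example, the two groups are typically selected via the module system; the module names below follow the usual convention, verify them with module avail:
+
+```bash
+$ module load gcc      # GNU compilers: gcc, g++, gfortran
+$ module load intel    # Intel compilers: icc, icpc, ifort
+```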
 
@@ -45,8 +45,8 @@ For more information about the possibilities of the compilers, please see the ma
 
 UPC is supported by two compiler/runtime implementations:
 
-*   GNU * SMP/multi-threading support only
-*   Berkley * multi-node support as well as SMP/multi-threading support
+-   GNU - SMP/multi-threading support only
+-   Berkeley - multi-node support as well as SMP/multi-threading support
 
 ### GNU UPC Compiler
 
@@ -63,7 +63,7 @@ Simple program to test the compiler
 ```bash
     $ cat count.upc
 
-    /* hello.upc * a simple UPC example */
+    /* count.upc - a simple UPC example */
     #include <upc.h>
     #include <stdio.h>
 
@@ -72,7 +72,7 @@ Simple program to test the compiler
         printf("Welcome to GNU UPC!!!n");
       }
       upc_barrier;
-      printf(" * Hello from thread %in", MYTHREAD);
+      printf(" - Hello from thread %in", MYTHREAD);
       return 0;
     }
 ```
@@ -112,7 +112,7 @@ Example UPC code:
 ```bash
     $ cat hello.upc
 
-    /* hello.upc * a simple UPC example */
+    /* hello.upc - a simple UPC example */
     #include <upc.h>
     #include <stdio.h>
 
@@ -121,7 +121,7 @@ Example UPC code:
         printf("Welcome to Berkeley UPC!!!n");
       }
       upc_barrier;
-      printf(" * Hello from thread %in", MYTHREAD);
+      printf(" - Hello from thread %in", MYTHREAD);
       return 0;
     }
 ```
diff --git a/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md b/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
index bebd2735559de74936dd8c8799735c5efc66f379..befce6a433d0f0f35a7429deb1b7e6b11311b335 100644
--- a/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
+++ b/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
@@ -1,4 +1,4 @@
-# COMSOL Multiphysics     
+# COMSOL Multiphysics
 
 ## Introduction
 
@@ -6,11 +6,11 @@
 standard engineering problems COMSOL provides add-on products such as electrical, mechanical, fluid flow, and chemical
 applications.
 
-*   [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
-*   [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
-*   [CFD Module](http://www.comsol.com/cfd-module),
-*   [Acoustics Module](http://www.comsol.com/acoustics-module),
-*   and [many others](http://www.comsol.com/products)
+-   [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
+-   [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
+-   [CFD Module](http://www.comsol.com/cfd-module),
+-   [Acoustics Module](http://www.comsol.com/acoustics-module),
+-   and [many others](http://www.comsol.com/products)
 
 COMSOL also provides an interface for equation-based modelling of partial differential equations.
 
@@ -18,19 +18,19 @@ COMSOL also allows an interface support for equation-based modelling of partial
 
 On the Anselm cluster COMSOL is available in the latest stable version. There are two variants of the release:
 
-*   **Non commercial** or so called **EDU variant**, which can be used for research and educational purposes.
-*   **Commercial** or so called **COM variant**, which can used also for commercial activities. **COM variant** has only subset of features compared to the **EDU variant** available. More about licensing will be posted here soon.
+-   **Non-commercial** or so-called **EDU variant**, which can be used for research and educational purposes.
+-   **Commercial** or so-called **COM variant**, which can also be used for commercial activities. The **COM variant** has only a subset of the features available in the **EDU variant**. More about licensing will be posted here soon.
 
 To load COMSOL, load the module:
 
 ```bash
-     $ module load comsol
+	$ module load comsol
 ```
 
 By default the **EDU variant** will be loaded. If a user needs another version or variant, load that particular version. To obtain the list of available versions, use
 
 ```bash
-     $ module avail comsol
+	$ module avail comsol
 ```
 
 If a user needs to prepare COMSOL jobs in interactive mode, it is recommended to run COMSOL on the compute nodes via the PBS Pro scheduler. To run the COMSOL Desktop GUI on Windows, it is recommended to use Virtual Network Computing (VNC).
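+
+For instance, an interactive session for GUI work (inside a VNC session) might be requested as follows; the queue name, resource string and walltime are assumptions:
+
+```bash
+$ qsub -I -q qprod -l select=1:ncpus=16 -l walltime=02:00:00
+$ module load comsol
+$ comsol &
+```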
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
index abd882adb8f6c3e002ac5b9efbe802a2d54f31c4..7d2fe37aaba8aab17a4086d6c657faaf6d5d65a3 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
@@ -1,8 +1,8 @@
 # Allinea Forge (DDT,MAP)
 
-Allinea Forge consist of two tools * debugger DDT and profiler MAP.
+Allinea Forge consists of two tools - the debugger DDT and the profiler MAP.
 
-Allinea DDT, is a commercial debugger primarily for debugging parallel MPI or OpenMP programs. It also has a support for GPU (CUDA) and Intel Xeon Phi accelerators. DDT provides all the standard debugging features (stack trace, breakpoints, watches, view variables, threads etc.) for every thread running as part of your program, or for every process * even if these processes are distributed across a cluster using an MPI implementation.
+Allinea DDT is a commercial debugger primarily for debugging parallel MPI or OpenMP programs. It also has support for GPU (CUDA) and Intel Xeon Phi accelerators. DDT provides all the standard debugging features (stack trace, breakpoints, watches, view variables, threads etc.) for every thread running as part of your program, or for every process - even if these processes are distributed across a cluster using an MPI implementation.
 
 Allinea MAP is a profiler for C/C++/Fortran HPC codes. It is designed for profiling parallel code, which uses pthreads, OpenMP or MPI.
 
@@ -10,13 +10,13 @@ Allinea MAP is a profiler for C/C++/Fortran HPC codes. It is designed for profil
 
 On Anselm users can debug OpenMP or MPI code that runs up to 64 parallel processes. When debugging GPU or Xeon Phi accelerated codes, the limit is 8 accelerators. These limitations mean that:
 
-*   1 user can debug up 64 processes, or
-*   32 users can debug 2 processes, etc.
+-   1 user can debug up to 64 processes, or
+-   32 users can debug 2 processes, etc.
 
 In case of debugging on accelerators:
 
-*   1 user can debug on up to 8 accelerators, or
-*   8 users can debug on single accelerator.
+-   1 user can debug on up to 8 accelerators, or
+-   8 users can debug on a single accelerator.
 
 ## Compiling Code to Run With DDT
 
@@ -48,8 +48,8 @@ $ mpif90 -g -O0 -o test_debug test.f
 Before debugging, you need to compile your code with these flags:
 
 !!! Note
-    * **g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
-    * **O0** : Suppress all optimizations.
+    - **-g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
+    - **-O0** : Suppress all optimizations.
 
 ## Starting a Job With DDT
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md
index ba2f8ebf827708e62625e3901ef856189c7ab43f..1a5df6b08568883969a5f8f61f3e73afb1632312 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md
@@ -21,7 +21,7 @@ The module sets up environment variables, required for using the Allinea Perform
 ## Usage
 
 !!! Note
-     Use the the perf-report wrapper on your (MPI) program.
+	Use the perf-report wrapper on your (MPI) program.
 
 Instead of [running your MPI program the usual way](../mpi/), use the perf-report wrapper:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md
index 91561c20138e1da232e3a31dc0cc1a1553d98d55..23849f609b3b96db56c1f93da53f0f350cc9b1e9 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md
@@ -4,9 +4,9 @@
 
 CUBE is a graphical performance report explorer for displaying data from Score-P and Scalasca (and other compatible tools). The name comes from the fact that it displays performance data in three dimensions:
 
-*   **performance metric**, where a number of metrics are available, such as communication time or cache misses,
-*   **call path**, which contains the call tree of your program
-*   **system resource**, which contains system's nodes, processes and threads, depending on the parallel programming model.
+-   **performance metric**, where a number of metrics are available, such as communication time or cache misses,
+-   **call path**, which contains the call tree of your program
+-   **system resource**, which contains system's nodes, processes and threads, depending on the parallel programming model.
 
 Each dimension is organized in a tree. For example, the time performance metric is divided into Execution time and Overhead time, the call path dimension is organized by files and routines in your source code, etc.
 
@@ -20,15 +20,15 @@ Each node in the tree is colored by severity (the color scheme is displayed at t
 
 Currently, there are two versions of CUBE 4.2.3 available as [modules](../../environment-and-modules/):
 
-*   cube/4.2.3-gcc, compiled with GCC
-*   cube/4.2.3-icc, compiled with Intel compiler
+-   cube/4.2.3-gcc, compiled with GCC
+-   cube/4.2.3-icc, compiled with Intel compiler
 
 ## Usage
 
 CUBE is a graphical application. Refer to Graphical User Interface documentation for a list of methods to launch graphical applications on Anselm.
 
 !!! Note
-     Analyzing large data sets can consume large amount of CPU and RAM. Do not perform large analysis on login nodes.
+	Analyzing large data sets can consume a large amount of CPU and RAM. Do not perform large analyses on login nodes.
 
 After loading the appropriate module, simply launch the cube command, or alternatively use the scalasca -examine command to launch the GUI. Note that for Scalasca datasets, if you do not analyze the data with scalasca -examine before opening them with CUBE, not all performance data will be available.
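+
+For example (a sketch; the Score-P result directory name is illustrative):
+
+```bash
+$ module load cube/4.2.3-icc
+$ cube scorep_myapp_16_sum/profile.cubex    # open a Score-P/Scalasca result in the GUI
+```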
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/debuggers.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/debuggers.md
index 28b872c46d27a4dc13bd1fbba4679d8f45c81c20..68e804f48069479a127e82659d8fbba928f57fe1 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/debuggers.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/debuggers.md
@@ -39,7 +39,7 @@ Read more at the [Allinea Performance Reports](debuggers/allinea-performance-rep
 
 ## Rogue Wave TotalView
 
-TotalView is a source* and machine-level debugger for multi-process, multi-threaded programs. Its wide range of tools provides ways to analyze, organize, and test programs, making it easy to isolate and identify problems in individual threads and processes in programs of great complexity.
+TotalView is a source- and machine-level debugger for multi-process, multi-threaded programs. Its wide range of tools provides ways to analyze, organize, and test programs, making it easy to isolate and identify problems in individual threads and processes in programs of great complexity.
 
 ```bash
     $ module load totalview
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md
index 68446f02319893e49ca704486c70ea21aeddea73..c408fd4fe08815c59c5e5beafecd99fe4a1d6b9a 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md
@@ -30,29 +30,29 @@ Sample output:
 
 ```bash
     ---------------------------------------||---------------------------------------
-    -*             Socket 0              --||-*             Socket 1              --
+    --             Socket 0              --||--             Socket 1              --
     ---------------------------------------||---------------------------------------
     ---------------------------------------||---------------------------------------
     ---------------------------------------||---------------------------------------
-    -*   Memory Performance Monitoring   --||-*   Memory Performance Monitoring   --
+    --   Memory Performance Monitoring   --||--   Memory Performance Monitoring   --
     ---------------------------------------||---------------------------------------
-    -*  Mem Ch 0: Reads (MB/s):    2.44  --||-*  Mem Ch 0: Reads (MB/s):    0.26  --
-    -*            Writes(MB/s):    2.16  --||-*            Writes(MB/s):    0.08  --
-    -*  Mem Ch 1: Reads (MB/s):    0.35  --||-*  Mem Ch 1: Reads (MB/s):    0.78  --
-    -*            Writes(MB/s):    0.13  --||-*            Writes(MB/s):    0.65  --
-    -*  Mem Ch 2: Reads (MB/s):    0.32  --||-*  Mem Ch 2: Reads (MB/s):    0.21  --
-    -*            Writes(MB/s):    0.12  --||-*            Writes(MB/s):    0.07  --
-    -*  Mem Ch 3: Reads (MB/s):    0.36  --||-*  Mem Ch 3: Reads (MB/s):    0.20  --
-    -*            Writes(MB/s):    0.13  --||-*            Writes(MB/s):    0.07  --
-    -* NODE0 Mem Read (MB/s):      3.47  --||-* NODE1 Mem Read (MB/s):      1.45  --
-    -* NODE0 Mem Write (MB/s):     2.55  --||-* NODE1 Mem Write (MB/s):     0.88  --
-    -* NODE0 P. Write (T/s) :     31506  --||-* NODE1 P. Write (T/s):       9099  --
-    -* NODE0 Memory (MB/s):        6.02  --||-* NODE1 Memory (MB/s):        2.33  --
+    --  Mem Ch 0: Reads (MB/s):    2.44  --||--  Mem Ch 0: Reads (MB/s):    0.26  --
+    --            Writes(MB/s):    2.16  --||--            Writes(MB/s):    0.08  --
+    --  Mem Ch 1: Reads (MB/s):    0.35  --||--  Mem Ch 1: Reads (MB/s):    0.78  --
+    --            Writes(MB/s):    0.13  --||--            Writes(MB/s):    0.65  --
+    --  Mem Ch 2: Reads (MB/s):    0.32  --||--  Mem Ch 2: Reads (MB/s):    0.21  --
+    --            Writes(MB/s):    0.12  --||--            Writes(MB/s):    0.07  --
+    --  Mem Ch 3: Reads (MB/s):    0.36  --||--  Mem Ch 3: Reads (MB/s):    0.20  --
+    --            Writes(MB/s):    0.13  --||--            Writes(MB/s):    0.07  --
+    -- NODE0 Mem Read (MB/s):      3.47  --||-- NODE1 Mem Read (MB/s):      1.45  --
+    -- NODE0 Mem Write (MB/s):     2.55  --||-- NODE1 Mem Write (MB/s):     0.88  --
+    -- NODE0 P. Write (T/s) :     31506  --||-- NODE1 P. Write (T/s):       9099  --
+    -- NODE0 Memory (MB/s):        6.02  --||-- NODE1 Memory (MB/s):        2.33  --
     ---------------------------------------||---------------------------------------
-    -*                   System Read Throughput(MB/s):      4.93                  --
-    -*                  System Write Throughput(MB/s):      3.43                  --
-    -*                 System Memory Throughput(MB/s):      8.35                  --
-    ---------------------------------------||--------------------------------------* 
+    --                   System Read Throughput(MB/s):      4.93                  --
+    --                  System Write Throughput(MB/s):      3.43                  --
+    --                 System Memory Throughput(MB/s):      8.35                  --
+    ---------------------------------------||--------------------------------------- 
 ```
 
 ### Pcm-Msr
@@ -193,7 +193,7 @@ Can be used as a sensor for ksysguard GUI, which is currently not installed on A
 In a similar fashion to PAPI, PCM provides a C++ API to access the performance counters from within your application. Refer to the [Doxygen documentation](http://intel-pcm-api-documentation.github.io/classPCM.html) for details of the API.
 
 !!! Note
-     Due to security limitations, using PCM API to monitor your applications is currently not possible on Anselm. (The application must be run as root user)
+	Due to security limitations, using PCM API to monitor your applications is currently not possible on Anselm. (The application must be run as root user)
 
 Sample program using the API :
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
index b7395572ee85d34a6c1ec7cf57392bd983a0417d..e9bae568d427dcda6f11dd0a728533e13d194d17 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
@@ -4,11 +4,11 @@
 
 Intel VTune Amplifier, part of Intel Parallel Studio, is a GUI profiling tool designed for Intel processors. It offers a graphical performance analysis of single core and multithreaded applications. A highlight of the features:
 
-*   Hotspot analysis
-*   Locks and waits analysis
-*   Low level specific counters, such as branch analysis and memory
+-   Hotspot analysis
+-   Locks and waits analysis
+-   Low level specific counters, such as branch analysis and memory
     bandwidth
-*   Power usage analysis * frequency and sleep states.
+-   Power usage analysis - frequency and sleep states.
 
 ![screenshot](../../../img/vtune-amplifier.png)
 
@@ -27,7 +27,7 @@ and launch the GUI :
 ```
 
 !!! Note
-     To profile an application with VTune Amplifier, special kernel modules need to be loaded. The modules are not loaded on Anselm login nodes, thus direct profiling on login nodes is not possible. Use VTune on compute nodes and refer to the documentation on using GUI applications.
+	To profile an application with VTune Amplifier, special kernel modules need to be loaded. The modules are not loaded on Anselm login nodes, thus direct profiling on login nodes is not possible. Use VTune on compute nodes and refer to the documentation on using GUI applications.
 
 The GUI will open in a new window. Click on "_New Project..._" to create a new project. After clicking _OK_, a new window with project properties will appear. At "_Application:_", select the path to the binary you want to profile (the binary should be compiled with the -g flag). Some additional options such as command line arguments can be selected. At "_Managed code profiling mode:_" select "_Native_" (unless you want to profile managed mode .NET/Mono applications). After clicking _OK_, your project is created.
 
@@ -40,7 +40,7 @@ VTune Amplifier also allows a form of remote analysis. In this mode, data for an
 The command line will look like this:
 
 ```bash
-    /apps/all/VTune/2016_update1/vtune_amplifier_xe_2016.1.1.434111/bin64/amplxe-cl -collect advanced-hotspots -knob collection-detail=stack-and-callcount -mrte-mode=native -target-duration-type=veryshort -app-working-dir /home/sta545/test -* /home/sta545/test_pgsesv
+    /apps/all/VTune/2016_update1/vtune_amplifier_xe_2016.1.1.434111/bin64/amplxe-cl -collect advanced-hotspots -knob collection-detail=stack-and-callcount -mrte-mode=native -target-duration-type=veryshort -app-working-dir /home/sta545/test -- /home/sta545/test_pgsesv
 ```
 
 Copy the line to the clipboard; you can then paste it into your jobscript or the command line. After the collection is run, open the GUI once again, click the menu button in the upper right corner, and select "_Open > Result..._". The GUI will load the results from the run.
@@ -48,7 +48,7 @@ Copy the line to clipboard and then you can paste it in your jobscript or in com
 ## Xeon Phi
 
 !!! Note
-     This section is outdated. It will be updated with new information soon.
+	This section is outdated. It will be updated with new information soon.
 
 It is possible to analyze both native and offload Xeon Phi applications. For offload mode, just specify the path to the binary. For native mode, you need to specify in project properties:
 
@@ -59,12 +59,12 @@ Application parameters:  mic0 source ~/.profile && /path/to/your/bin
 Note that we include source ~/.profile in the command to set up environment paths [as described here](../intel-xeon-phi/).
 
 !!! Note
-     If the analysis is interrupted or aborted, further analysis on the card might be impossible and you will get errors like "ERROR connecting to MIC card". In this case please contact our support to reboot the MIC card.
+	If the analysis is interrupted or aborted, further analysis on the card might be impossible and you will get errors like "ERROR connecting to MIC card". In this case please contact our support to reboot the MIC card.
 
 You may also use remote analysis to collect data from the MIC and then analyze it in the GUI later :
 
 ```bash
-    $ amplxe-cl -collect knc-hotspots -no-auto-finalize -* ssh mic0
+    $ amplxe-cl -collect knc-hotspots -no-auto-finalize -- ssh mic0
     "export LD_LIBRARY_PATH=/apps/intel/composer_xe_2015.2.164/compiler/lib/mic/:/apps/intel/composer_xe_2015.2.164/mkl/lib/mic/; export KMP_AFFINITY=compact; /tmp/app.mic"
 ```
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md
index 16102506ab9a06861edd2949ddfb284b0a71fc01..0671850ee5d14d3442abfef77d965a564128a17d 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md
@@ -4,7 +4,7 @@
 
 Performance Application Programming Interface (PAPI) is a portable interface to access hardware performance counters (such as instruction counts and cache misses) found in most modern architectures. With the new component framework, PAPI is not limited only to CPU counters, but also offers components for CUDA, network, InfiniBand, etc.
 
-PAPI provides two levels of interface * a simpler, high level interface and more detailed low level interface.
+PAPI provides two levels of interface - a simpler, high-level interface and a more detailed low-level interface.
 
 PAPI can be used with parallel as well as serial programs.
 
@@ -76,15 +76,15 @@ Prints information about the memory architecture of the current CPU.
 
 PAPI provides two kinds of events:
 
-*   **Preset events** is a set of predefined common CPU events, standardized across platforms.
-*   **Native events **is a set of all events supported by the current hardware. This is a larger set of features than preset. For other components than CPU, only native events are usually available.
+-   **Preset events** is a set of predefined common CPU events, standardized across platforms.
+-   **Native events** is a set of all events supported by the current hardware. This is a larger set of features than preset. For components other than the CPU, only native events are usually available.
 
 To use PAPI in your application, you need to include the appropriate header file.
 
-*   papi.h for C
-*   f77papi.h for Fortran 77
-*   f90papi.h for Fortran 90
-*   fpapi.h for Fortran with preprocessor
+-   papi.h for C
+-   f77papi.h for Fortran 77
+-   f90papi.h for Fortran 90
+-   fpapi.h for Fortran with preprocessor
 
 The include path is automatically added by the papi module to $INCLUDE.
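+
+For instance, a C program calling PAPI might be compiled and linked as follows; this is a sketch and the $PAPI_HOME variable is an assumption, check the environment exported by the module:
+
+```bash
+$ module load papi
+$ gcc myprog.c -o myprog -I$PAPI_HOME/include -L$PAPI_HOME/lib -lpapi
+```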
 
@@ -169,7 +169,7 @@ Let's try with optimizations enabled :
     MFLOPS: inf
 ```
 
-Now we see a seemingly strange result * the multiplication took no time and only 6 floating point instructions were issued. This is because the compiler optimizations have completely removed the multiplication loop, as the result is actually not used anywhere in the program. We can fix this by adding some "dummy" code at the end of the Matrix-Matrix multiplication routine :
+Now we see a seemingly strange result - the multiplication took no time and only 6 floating point instructions were issued. This is because the compiler optimizations have completely removed the multiplication loop, as the result is actually not used anywhere in the program. We can fix this by adding some "dummy" code at the end of the Matrix-Matrix multiplication routine :
 
 ```cpp
     for (i=0; i<SIZE;i++)
@@ -191,7 +191,7 @@ Now the compiler won't remove the multiplication loop. (However it is still not
 ### Intel Xeon Phi
 
 !!! Note
-     PAPI currently supports only a subset of counters on the Intel Xeon Phi processor compared to Intel Xeon, for example the floating point operations counter is missing.
+	PAPI currently supports only a subset of counters on the Intel Xeon Phi processor compared to Intel Xeon, for example the floating point operations counter is missing.
 
 To use PAPI in [Intel Xeon Phi](../intel-xeon-phi/) native applications, you need to load the module with the "-mic" suffix, for example "papi/5.3.2-mic":
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md
index 8708e2297478b7de36f4ab27c507662878c67e36..45c0768e7cae1ed4e5256e461b2b29f40aa86bb5 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md
@@ -10,8 +10,8 @@ Scalasca supports profiling of MPI, OpenMP and hybrid MPI+OpenMP applications.
 
 There are currently two versions of Scalasca 2.0 [modules](../../environment-and-modules/) installed on Anselm:
 
-*   scalasca2/2.0-gcc-openmpi, for usage with [GNU Compiler](../compilers/) and [OpenMPI](../mpi/Running_OpenMPI/),
-*   scalasca2/2.0-icc-impi, for usage with [Intel Compiler](../compilers.html) and [Intel MPI](../mpi/running-mpich2/).
+-   scalasca2/2.0-gcc-openmpi, for usage with [GNU Compiler](../compilers/) and [OpenMPI](../mpi/Running_OpenMPI/),
+-   scalasca2/2.0-icc-impi, for usage with [Intel Compiler](../compilers.html) and [Intel MPI](../mpi/running-mpich2/).
 
 ## Usage
 
@@ -39,11 +39,11 @@ An example :
 
 Some notable Scalasca options are:
 
-*   **-t Enable trace data collection. By default, only summary data are collected.**
-*   **-e &lt;directory> Specify a directory to save the collected data to. By default, Scalasca saves the data to a directory with prefix scorep\_, followed by name of the executable and launch configuration.**
+-   **-t Enable trace data collection. By default, only summary data are collected.**
+-   **-e &lt;directory&gt; Specify a directory to save the collected data to. By default, Scalasca saves the data to a directory with prefix scorep\_, followed by the name of the executable and launch configuration.**
 
 !!! Note
-     Scalasca can generate a huge amount of data, especially if tracing is enabled. Please consider saving the data to a [scratch directory](../../storage/storage/).
+	Scalasca can generate a huge amount of data, especially if tracing is enabled. Please consider saving the data to a [scratch directory](../../storage/storage/).
 
 ### Analysis of Reports
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md
index f0d0c33b8e48afa24e51d6540d53705dfa1e477a..4f1296679c56b6d65a0edb873196c5c0bb537519 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md
@@ -10,8 +10,8 @@ Score-P can be used as an instrumentation tool for [Scalasca](scalasca/).
 
 There are currently two versions of Score-P version 1.2.6 [modules](../../environment-and-modules/) installed on Anselm :
 
-*   scorep/1.2.3-gcc-openmpi, for usage     with [GNU Compiler](../compilers/) and [OpenMPI](../mpi/Running_OpenMPI/)
-*   scorep/1.2.3-icc-impi, for usage with [Intel Compiler](../compilers.html)> and [Intel MPI](../mpi/running-mpich2/)>.
+-   scorep/1.2.3-gcc-openmpi, for usage with [GNU Compiler](../compilers/) and [OpenMPI](../mpi/Running_OpenMPI/)
+-   scorep/1.2.3-icc-impi, for usage with [Intel Compiler](../compilers.html) and [Intel MPI](../mpi/running-mpich2/).
 
 ## Instrumentation
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md
index 53d3a6032baded8cc2169bdfa0a2a55edbe7cd48..1389d347704845c9fbcb5d5f9a479b29790275df 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md
@@ -58,8 +58,8 @@ Compile the code:
 Before debugging, you need to compile your code with these flags:
 
 !!! Note
-    * **-g** : Generates extra debugging information usable by GDB. **-g3** includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
-    * **-O0** : Suppress all optimizations.
+    - **-g** : Generates extra debugging information usable by GDB. **-g3** includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
+    - **-O0** : Suppress all optimizations.
 
 ## Starting a Job With TotalView
 
@@ -87,7 +87,7 @@ To debug a serial code use:
     totalview test_debug
 ```
 
-### Debugging a Parallel Code * Option 1
+### Debugging a Parallel Code - Option 1
 
 To debug a parallel code compiled with **OpenMPI** you need to setup your TotalView environment:
 
 The source code of this function can also be found in
 ```
 
 !!! Note
-     You can also add only following line to you ~/.tvdrc file instead of the entire function: 
+	You can also add only the following line to your ~/.tvdrc file instead of the entire function:
     **source /apps/mpi/openmpi/intel/1.6.5/etc/openmpi-totalview.tcl**
 
 You need to do this step only once.
@@ -140,7 +140,7 @@ At this point the main TotalView GUI window will appear and you can insert the b
 
 ![](../../../img/totalview2.png)
 
-### Debugging a Parallel Code * Option 2
+### Debugging a Parallel Code - Option 2
 
 Another option to start a new parallel debugging session from the command line is to let TotalView execute mpirun by itself. In this case the user has to specify the MPI implementation used to compile the source code.
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md
index ff65cbc7c2248641b641327f97c6e13e6171d386..8332377a295ac21a6175918d893043143dd6c669 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md
@@ -10,19 +10,19 @@ Valgind is an extremely useful tool for debugging memory errors such as [off-by-
 
 The main tools available in Valgrind are :
 
-*   **Memcheck**, the original, must used and default tool. Verifies memory access in you program and can detect use of unitialized memory, out of bounds memory access, memory leaks, double free, etc.
-*   **Massif**, a heap profiler.
-*   **Hellgrind** and **DRD** can detect race conditions in multi-threaded applications.
-*   **Cachegrind**, a cache profiler.
-*   **Callgrind**, a callgraph analyzer.
-*   For a full list and detailed documentation, please refer to the [official Valgrind documentation](http://valgrind.org/docs/).
+-   **Memcheck**, the original, most used and default tool. Verifies memory access in your program and can detect use of uninitialized memory, out of bounds memory access, memory leaks, double free, etc.
+-   **Massif**, a heap profiler.
+-   **Helgrind** and **DRD** can detect race conditions in multi-threaded applications.
+-   **Cachegrind**, a cache profiler.
+-   **Callgrind**, a callgraph analyzer.
+-   For a full list and detailed documentation, please refer to the [official Valgrind documentation](http://valgrind.org/docs/).
 
 ## Installed Versions
 
 There are two versions of Valgrind available on Anselm.
 
-*   Version 3.6.0, installed by operating system vendor in /usr/bin/valgrind. This version is available by default, without the need to load any module. This version however does not provide additional MPI support.
-*   Version 3.9.0 with support for Intel MPI, available in [module](../../environment-and-modules/) valgrind/3.9.0-impi. After loading the module, this version replaces the default valgrind.
+-   Version 3.6.0, installed by operating system vendor in /usr/bin/valgrind. This version is available by default, without the need to load any module. This version however does not provide additional MPI support.
+-   Version 3.9.0 with support for Intel MPI, available in [module](../../environment-and-modules/) valgrind/3.9.0-impi. After loading the module, this version replaces the default valgrind.
 
 ## Usage
 
 For example, let's look at this C code, which has two problems:
     {
        int* x = malloc(10 * sizeof(int));
        x[10] = 0; // problem 1: heap block overrun
-    }             // problem 2: memory leak -* x not freed
+    }             // problem 2: memory leak -- x not freed
 
     int main(void)
     {
@@ -91,7 +91,7 @@ If no Valgrind options are specified, Valgrind defaults to running Memcheck tool
     ==12652== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 6 from 6)
 ```
 
-In the output we can see that Valgrind has detected both errors * the off-by-one memory access at line 5 and a memory leak of 40 bytes. If we want a detailed analysis of the memory leak, we need to run Valgrind with  --leak-check=full option :
+In the output we can see that Valgrind has detected both errors - the off-by-one memory access at line 5 and a memory leak of 40 bytes. If we want a detailed analysis of the memory leak, we need to run Valgrind with the --leak-check=full option:
 
 ```bash
     $ valgrind --leak-check=full ./valgrind-example
@@ -176,7 +176,7 @@ Lets look at this MPI example :
     }
 ```
 
-There are two errors * use of uninitialized memory and invalid length of the buffer. Lets debug it with valgrind :
+There are two errors - use of uninitialized memory and an invalid length of the buffer. Let's debug it with valgrind:
 
 ```bash
     $ module add intel impi
diff --git a/docs.it4i/anselm-cluster-documentation/software/gpi2.md b/docs.it4i/anselm-cluster-documentation/software/gpi2.md
index f7cc23324ca5da65290622d02a845d7543c95d28..ec96e2653a3bfeb9614be13b969ff3273b3ee255 100644
--- a/docs.it4i/anselm-cluster-documentation/software/gpi2.md
+++ b/docs.it4i/anselm-cluster-documentation/software/gpi2.md
@@ -92,7 +92,7 @@ This example will produce $PBS_NODEFILE with 16 entries per node.
 !!! note
     gaspi_logger views the output from GPI-2 application ranks
 
-The gaspi_logger utility is used to view the output from all nodes except the master node (rank 0). The gaspi_logger is started, on another session, on the master node * the node where the gaspi_run is executed. The output of the application, when called with gaspi_printf(), will be redirected to the gaspi_logger. Other I/O routines (e.g. printf) will not.
+The gaspi_logger utility is used to view the output from all nodes except the master node (rank 0). The gaspi_logger is started in another session on the master node - the node where the gaspi_run is executed. The output of the application, when called with gaspi_printf(), will be redirected to the gaspi_logger. Other I/O routines (e.g. printf) will not.
 
 ## Example
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md
index df0b0a8a124a2f70d11a2e2adc4eb3d17cf227a0..50b8b005e4f65c2ba9eb51cd8bd21fc398979f76 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md
@@ -32,5 +32,5 @@ Read more at <http://software.intel.com/sites/products/documentation/doclib/stdx
 
 Anselm nodes are currently equipped with Sandy Bridge CPUs, while Salomon will use the Haswell architecture. The new processors are backward compatible with the Sandy Bridge nodes, so all programs that ran on the Sandy Bridge processors should also run on the new Haswell nodes. To get optimal performance out of the Haswell processors, a program should make use of the special AVX2 instructions for this processor. One can do this by recompiling codes with the compiler flags designated to invoke these instructions. For the Intel compiler suite, there are two ways of doing this:
 
-*   Using compiler flag (both for Fortran and C): -xCORE-AVX2. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge nodes.
-*   Using compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This will generate multiple, feature specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. So this binary will run both on Sandy Bridge and Haswell processors. During runtime it will be decided which path to follow, dependent on which processor you are running on. In general this will result in larger binaries.
+-   Using compiler flag (both for Fortran and C): -xCORE-AVX2. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge nodes.
+-   Using compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This will generate multiple, feature specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. So this binary will run both on Sandy Bridge and Haswell processors. During runtime it will be decided which path to follow, dependent on which processor you are running on. In general this will result in larger binaries.
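+
+For illustration, the two approaches could look like this (the compiler invocations are sketches):
+
+```bash
+# Haswell-only binary (will not run on Sandy Bridge nodes)
+$ icc -O2 -xCORE-AVX2 -o prog prog.c
+
+# multi-path binary, runs on both Sandy Bridge and Haswell
+$ icc -O2 -xAVX -axCORE-AVX2 -o prog prog.c
+```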
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md
index 154ef140b22e576337dc47a7fbe525a8e7581035..5fef3b2c2d9428eb6e3f193824c6357d942c9644 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md
@@ -5,7 +5,7 @@
 Intel Integrated Performance Primitives, version 7.1.1, compiled for AVX vector instructions, is available via module ipp. The IPP is a very rich library of highly optimized algorithmic building blocks for media and data applications. This includes signal, image and frame processing algorithms, such as FFT, FIR, Convolution, Optical Flow, Hough transform, Sum, MinMax, as well as cryptographic functions, linear algebra functions and many more.
 
 !!! Note
-     Check out IPP before implementing own math functions for data processing, it is likely already there.
+	Check out IPP before implementing your own math functions for data processing; they are likely already there.
 
 ```bash
     $ module load ipp
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md
index 8668d63669a7ea68cc91b75e6e7d91a3fb0b3c48..d8faf804e84a2c7eca28817e6d583db3369a6e25 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md
@@ -4,14 +4,14 @@
 
 Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, extensively threaded and optimized for maximum performance. Intel MKL provides these basic math kernels:
 
-*   BLAS (level 1, 2, and 3) and LAPACK linear algebra routines, offering vector, vector-matrix, and matrix-matrix operations.
-*   The PARDISO direct sparse solver, an iterative sparse solver, and supporting sparse BLAS (level 1, 2, and 3) routines for solving sparse systems of equations.
-*   ScaLAPACK distributed processing linear algebra routines for Linux and Windows operating systems, as well as the Basic Linear Algebra Communications Subprograms (BLACS) and the Parallel Basic Linear Algebra Subprograms (PBLAS).
-*   Fast Fourier transform (FFT) functions in one, two, or three dimensions with support for mixed radices (not limited to sizes that are powers of 2), as well as distributed versions of these functions.
-*   Vector Math Library (VML) routines for optimized mathematical operations on vectors.
-*   Vector Statistical Library (VSL) routines, which offer high-performance vectorized random number generators (RNG) for    several probability distributions, convolution and correlation routines, and summary statistics functions.
-*   Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search.
-*   Extended Eigensolver, a shared memory  version of an eigensolver based on the Feast Eigenvalue Solver.
+-   BLAS (level 1, 2, and 3) and LAPACK linear algebra routines, offering vector, vector-matrix, and matrix-matrix operations.
+-   The PARDISO direct sparse solver, an iterative sparse solver, and supporting sparse BLAS (level 1, 2, and 3) routines for solving sparse systems of equations.
+-   ScaLAPACK distributed processing linear algebra routines for Linux and Windows operating systems, as well as the Basic Linear Algebra Communications Subprograms (BLACS) and the Parallel Basic Linear Algebra Subprograms (PBLAS).
+-   Fast Fourier transform (FFT) functions in one, two, or three dimensions with support for mixed radices (not limited to sizes that are powers of 2), as well as distributed versions of these functions.
+-   Vector Math Library (VML) routines for optimized mathematical operations on vectors.
+-   Vector Statistical Library (VSL) routines, which offer high-performance vectorized random number generators (RNG) for    several probability distributions, convolution and correlation routines, and summary statistics functions.
+-   Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search.
+-   Extended Eigensolver, a shared memory  version of an eigensolver based on the Feast Eigenvalue Solver.
 
 For details see the [Intel MKL Reference Manual](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mklman/index.htm).
 
@@ -24,14 +24,14 @@ Intel MKL version 13.5.192 is available on Anselm
 The module sets up environment variables required for linking and running MKL-enabled applications. The most important variables are $MKLROOT, $MKL_INC_DIR, $MKL_LIB_DIR and $MKL_EXAMPLES.
 
 !!! Note
-     The MKL library may be linked using any compiler. With intel compiler use -mkl option to link default threaded MKL.
+	The MKL library may be linked using any compiler. With the Intel compiler, use the -mkl option to link the default threaded MKL.
 
 ### Interfaces
 
 The MKL library provides a number of interfaces. The fundamental ones are LP64 and ILP64. The Intel MKL ILP64 libraries use the 64-bit integer type (necessary for indexing large arrays, with more than 2^31-1 elements), whereas the LP64 libraries index arrays with the 32-bit integer type.
 
 | Interface | Integer type                                 |
-| --------* | -------------------------------------------* |
+| --------- | -------------------------------------------- |
 | LP64      | 32-bit, int, integer(kind=4), MPI_INT        |
 | ILP64     | 64-bit, long int, integer(kind=8), MPI_INT64 |
 
@@ -48,7 +48,7 @@ You will need the mkl module loaded to run the mkl enabled executable. This may
 ### Threading
 
 !!! Note
-     Advantage in using the MKL library is that it brings threaded parallelization to applications that are otherwise not parallel.
+	An advantage of using the MKL library is that it brings threaded parallelization to applications that are otherwise not parallel.
 
 For this to work, the application must link the threaded MKL library (default). The number and behaviour of MKL threads may be controlled via the OpenMP environment variables, such as OMP_NUM_THREADS and KMP_AFFINITY. MKL_NUM_THREADS takes precedence over OMP_NUM_THREADS.
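+
+For example (a sketch; the module names and the thread count are illustrative):
+
+```bash
+$ module load intel mkl
+$ icc -mkl -o mkl_app mkl_app.c    # links the threaded MKL by default
+$ export OMP_NUM_THREADS=16        # one thread per core on a 16-core Anselm node
+$ ./mkl_app
+```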
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md
index 1387fe7cb2671d2047222de031ceb11bb9c3b6f9..4546ac077d4031f552c9b973e00536179b34e4f4 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md
@@ -14,7 +14,7 @@ Intel TBB version 4.1 is available on Anselm
 The module sets up environment variables required for linking and running TBB-enabled applications.
 
 !!! Note
-     Link the tbb library, using -ltbb
+	Link the tbb library, using -ltbb
 
 ## Examples
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md b/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md
index 1fcb7e6fd36b8a689722e4d1e04e23d7eef9c2b5..7f478b558b62699e8eece30865070558b9d0af7c 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md
@@ -233,13 +233,13 @@ During the compilation Intel compiler shows which loops have been vectorized in
 Some interesting compiler flags useful not only for code debugging are:
 
 !!! Note
-     Debugging
+	Debugging
 
-    openmp_report[0|1|2] * controls the compiler based vectorization diagnostic level
-    vec-report[0|1|2] * controls the OpenMP parallelizer diagnostic level
+    openmp_report[0|1|2] - controls the OpenMP parallelizer diagnostic level
+    vec-report[0|1|2] - controls the compiler based vectorization diagnostic level
 
     Performance optimization
-    xhost * FOR HOST ONLY * to generate AVX (Advanced Vector Extensions) instructions.
+    xhost - FOR HOST ONLY - to generate AVX (Advanced Vector Extensions) instructions.
 
 ## Automatic Offload Using Intel MKL Library
 
@@ -270,7 +270,7 @@ At first get an interactive PBS session on a node with MIC accelerator and load
     $ module load intel
 ```
 
-Following example show how to automatically offload an SGEMM (single precision * g dir="auto">eneral matrix multiply) function to MIC coprocessor. The code can be copied to a file and compiled without any necessary modification.
+The following example shows how to automatically offload an SGEMM (single precision general matrix multiply) function to the MIC coprocessor. The code can be copied to a file and compiled without any necessary modification.
 
 ```bash
     $ vim sgemm-ao-short.c
@@ -421,13 +421,13 @@ If the code is parallelized using OpenMP a set of additional libraries is requir
 For your information, the list of libraries and their locations required for execution of an OpenMP parallel code on Intel Xeon Phi is:
 
 !!! Note
-     /apps/intel/composer_xe_2013.5.192/compiler/lib/mic
+	/apps/intel/composer_xe_2013.5.192/compiler/lib/mic
 
-    * libiomp5.so
-    * libimf.so
-    * libsvml.so
-    * libirng.so
-    * libintlc.so.5
+    - libiomp5.so
+    - libimf.so
+    - libsvml.so
+    - libirng.so
+    - libintlc.so.5
 
 Finally, to run the compiled code use:
 
@@ -502,7 +502,7 @@ After executing the complied binary file, following output should be displayed.
 ```
 
 !!! Note
-     More information about this example can be found on Intel website: <http://software.intel.com/en-us/vcsource/samples/caps-basic/>
+	More information about this example can be found on Intel website: <http://software.intel.com/en-us/vcsource/samples/caps-basic/>
 
 The second example that can be found in the "/apps/intel/opencl-examples" directory is General Matrix Multiply. You can follow the same procedure to download the example to your directory and compile it.
 
@@ -604,11 +604,11 @@ An example of basic MPI version of "hello-world" example in C language, that can
 Intel MPI for the Xeon Phi coprocessors offers different MPI programming models:
 
 !!! Note
-     **Host-only model** * all MPI ranks reside on the host. The coprocessors can be used by using offload pragmas. (Using MPI calls inside offloaded code is not supported.)
+	**Host-only model** - all MPI ranks reside on the host. The coprocessors can be used by using offload pragmas. (Using MPI calls inside offloaded code is not supported.)
 
-    **Coprocessor-only model** * all MPI ranks reside only on the coprocessors.
+    **Coprocessor-only model** - all MPI ranks reside only on the coprocessors.
 
-    **Symmetric model** * the MPI ranks reside on both the host and the coprocessor. Most general MPI case.
+    **Symmetric model** - the MPI ranks reside on both the host and the coprocessor. Most general MPI case.
 
 ### Host-Only Model
 
@@ -651,8 +651,8 @@ Similarly to execution of OpenMP programs in native mode, since the environmenta
 ```
 
 !!! Note
-    * this file sets up both environmental variable for both MPI and OpenMP libraries.
-    * this file sets up the paths to a particular version of Intel MPI library and particular version of an Intel compiler. These versions have to match with loaded modules.
+    - this file sets up the environment variables for both MPI and OpenMP libraries.
+    - this file sets up the paths to a particular version of Intel MPI library and particular version of an Intel compiler. These versions have to match with loaded modules.
 
 To access a MIC accelerator located on a node that the user is currently connected to, use:
 
@@ -704,8 +704,8 @@ or using mpirun
 ```
 
 !!! Note
-    * the full path to the binary has to specified (here: `>~/mpi-test-mic`)
-    * the `LD_LIBRARY_PATH` has to match with Intel MPI module used to compile the MPI code
+    - the full path to the binary has to be specified (here: `~/mpi-test-mic`)
+    - the `LD_LIBRARY_PATH` has to match the Intel MPI module used to compile the MPI code
 
 The output should be again similar to:
 
@@ -726,7 +726,7 @@ A simple test to see if the file is present is to execute:
       /bin/pmi_proxy
 ```
 
-**Execution on host * MPI processes distributed over multiple accelerators on multiple nodes**
+**Execution on host - MPI processes distributed over multiple accelerators on multiple nodes**
 
 To get access to multiple nodes with MIC accelerators, the user has to use PBS to allocate the resources. To start an interactive session that allocates 2 compute nodes (= 2 MIC accelerators), run the qsub command with the following parameters:
 
@@ -753,9 +753,9 @@ This output means that the PBS allocated nodes cn204 and cn205, which means that
 
 !!! Note
     At this point the user can connect to any of the allocated nodes or any of the allocated MIC accelerators using ssh:
-    * to connect to the second node : `$ ssh cn205`
-    * to connect to the accelerator on the first node from the first node: `$ ssh cn204-mic0` or `$ ssh mic0`
-    * to connect to the accelerator on the second node from the first node: `$ ssh cn205-mic0`
+    - to connect to the second node : `$ ssh cn205`
+    - to connect to the accelerator on the first node from the first node: `$ ssh cn204-mic0` or `$ ssh mic0`
+    - to connect to the accelerator on the second node from the first node: `$ ssh cn205-mic0`
 
 At this point we expect that the correct modules are loaded and the binary is compiled. For parallel execution the mpiexec.hydra is used. Again, the first step is to tell mpiexec that the MPI can be executed on MIC accelerators by setting up the environment variable "I_MPI_MIC":
 
@@ -873,7 +873,7 @@ To run the MPI code using mpirun and the machine file "hosts_file_mix" use:
 A possible output of the MPI "hello-world" example executed on two hosts and two accelerators is:
 
 ```bash
-     Hello world from process 0 of 8 on host cn204
+    Hello world from process 0 of 8 on host cn204
     Hello world from process 1 of 8 on host cn204
     Hello world from process 2 of 8 on host cn204-mic0
     Hello world from process 3 of 8 on host cn204-mic0
@@ -891,11 +891,11 @@ A possible output of the MPI "hello-world" example executed on two hosts and two
 PBS also generates a set of node-files that can be used instead of manually creating a new one every time. Three node-files are generated:
 
 !!! Note
-     **Host only node-file:**
+	**Generated node-files:**
 
-     * /lscratch/${PBS_JOBID}/nodefile-cn MIC only node-file:
-     * /lscratch/${PBS_JOBID}/nodefile-mic Host and MIC node-file:
-     * /lscratch/${PBS_JOBID}/nodefile-mix
+     - Host only node-file: /lscratch/${PBS_JOBID}/nodefile-cn
+     - MIC only node-file: /lscratch/${PBS_JOBID}/nodefile-mic
+     - Host and MIC node-file: /lscratch/${PBS_JOBID}/nodefile-mix
 
 Each host or accelerator is listed only once per file. The user has to specify how many processes should be executed per node using the `-n` parameter of the mpirun command.
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md b/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
index fe35c4ba66bbea0d91bec95527cf6196f96289d7..7d7dc89b89b39c346664feef868528a9f9bc4217 100644
--- a/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
+++ b/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
@@ -11,7 +11,7 @@ If an ISV application was purchased for educational (research) purposes and also
 ## Overview of the Licenses Usage
 
 !!! Note
-     The overview is generated every minute and is accessible from web or command line interface.
+	The overview is generated every minute and is accessible from the web or the command line interface.
 
 ### Web Interface
 
@@ -23,7 +23,7 @@ For each license there is a table, which provides the information about the name
 For each license there is a unique text file, which provides the information about the name, number of available (purchased/licensed), number of used and number of free license features. The text files are accessible from the Anselm command prompt.
 
 | Product    | File with license state                           | Note                |
-| ---------* | ------------------------------------------------* | ------------------* |
+| ---------- | ------------------------------------------------- | ------------------- |
 | ansys      | /apps/user/licenses/ansys_features_state.txt      | Commercial          |
 | comsol     | /apps/user/licenses/comsol_features_state.txt     | Commercial          |
 | comsol-edu | /apps/user/licenses/comsol-edu_features_state.txt | Non-commercial only |
@@ -61,26 +61,26 @@ The general format of the name is `feature__APP__FEATURE`.
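+
+For example, a job that needs one MATLAB license might request the corresponding resource at submission time (a sketch; the project name, resource counts and the MATLAB feature name are illustrative):
+
+```bash
+$ qsub -A PROJECT -q qprod -l select=1:ncpus=16 -l feature__matlab__MATLAB=1 ./jobscript
+```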
 
 Names of applications (APP):
 
-*   ansys
-*   comsol
-*   comsol-edu
-*   matlab
-*   matlab-edu
+-   ansys
+-   comsol
+-   comsol-edu
+-   matlab
+-   matlab-edu
 
 To get the FEATUREs of a license, take a look into the corresponding state file ([see above](isv_licenses/#Licence)), or use:
 
 **Application and List of provided features**
 
-*   **ansys** $ grep -v "#" /apps/user/licenses/ansys_features_state.txt | cut -f1 -d' '
-*   **comsol** $ grep -v "#" /apps/user/licenses/comsol_features_state.txt | cut -f1 -d' '
-*   **comsol-ed** $ grep -v "#" /apps/user/licenses/comsol-edu_features_state.txt | cut -f1 -d' '
-*   **matlab** $ grep -v "#" /apps/user/licenses/matlab_features_state.txt | cut -f1 -d' '
-*   **matlab-edu** $ grep -v "#" /apps/user/licenses/matlab-edu_features_state.txt | cut -f1 -d' '
+-   **ansys** $ grep -v "#" /apps/user/licenses/ansys_features_state.txt | cut -f1 -d' '
+-   **comsol** $ grep -v "#" /apps/user/licenses/comsol_features_state.txt | cut -f1 -d' '
+-   **comsol-edu** $ grep -v "#" /apps/user/licenses/comsol-edu_features_state.txt | cut -f1 -d' '
+-   **matlab** $ grep -v "#" /apps/user/licenses/matlab_features_state.txt | cut -f1 -d' '
+-   **matlab-edu** $ grep -v "#" /apps/user/licenses/matlab-edu_features_state.txt | cut -f1 -d' '
 
 Example of PBS Pro resource name, based on APP and FEATURE name:
 
 | Application | Feature                    | PBS Pro resource name                           |
-| ----------* | -------------------------* | ----------------------------------------------* |
+| ----------- | -------------------------- | ----------------------------------------------- |
 | ansys       | acfd                       | feature_ansys_acfd                              |
 | ansys       | aa_r                       | feature_ansys_aa_r                              |
 | comsol      | COMSOL                     | feature_comsol_COMSOL                           |
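+
+As a sketch, such a resource might be requested at job submission like this (the queue, node count and matlab-edu feature are illustrative assumptions):
+
+```bash
+    $ qsub -I -q qprod -l select=1:ncpus=16 -l feature__matlab-edu__MATLAB=1
+```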
diff --git a/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md b/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md
index d174981aa137eec2c9fa7dae7ee309d3ed97e979..31e371f292832d03640595a0fb922d5763e79ed1 100644
--- a/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md
+++ b/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md
@@ -6,13 +6,13 @@ Running virtual machines on compute nodes
 
 There are situations when Anselm's environment is not suitable for user needs.
 
-*   Application requires different operating system (e.g Windows), application is not available for Linux
-*   Application requires different versions of base system libraries and tools
-*   Application requires specific setup (installation, configuration) of complex software stack
-*   Application requires privileged access to operating system
-*   ... and combinations of above cases
+-   Application requires a different operating system (e.g. Windows), or the application is not available for Linux
+-   Application requires different versions of base system libraries and tools
+-   Application requires specific setup (installation, configuration) of complex software stack
+-   Application requires privileged access to operating system
+-   ... and combinations of above cases
 
-We offer solution for these cases * **virtualization**. Anselm's environment gives the possibility to run virtual machines on compute nodes. Users can create their own images of operating system with specific software stack and run instances of these images as virtual machines on compute nodes. Run of virtual machines is provided by standard mechanism of [Resource Allocation and Job Execution](../../resource-allocation-and-job-execution/introduction/).
+We offer a solution for these cases - **virtualization**. Anselm's environment makes it possible to run virtual machines on compute nodes. Users can create their own operating system images with a specific software stack and run instances of these images as virtual machines on compute nodes. Running virtual machines is provided by the standard mechanism of [Resource Allocation and Job Execution](../../resource-allocation-and-job-execution/introduction/).
 
 The solution is based on the QEMU-KVM software stack and provides hardware-assisted x86 virtualization.
 
@@ -27,7 +27,7 @@ Virtualization has also some drawbacks, it is not so easy to setup efficient sol
 The solution described in chapter [HOWTO](virtualization/#howto) is suitable for single node tasks; it does not introduce virtual machine clustering.
 
 !!! Note
-     Please consider virtualization as last resort solution for your needs.
+	Please consider virtualization as a last resort solution for your needs.
 
 !!! Warning
     Please consult use of virtualization with IT4Innovation's support.
@@ -39,7 +39,7 @@ For running Windows application (when source code and Linux native application a
 IT4Innovations does not provide any licenses for operating systems and software of virtual machines. Users are (in accordance with the [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf)) fully responsible for licensing all software running in virtual machines on Anselm. Be aware of the complex conditions of licensing software in virtual environments.
 
 !!! Note
-     Users are responsible for licensing OS e.g. MS Windows and all software running in their virtual machines.
+	Users are responsible for licensing the OS, e.g. MS Windows, and all software running in their virtual machines.
 
 ## Howto
 
@@ -49,7 +49,7 @@ We propose this job workflow:
 
 ![Workflow](../../img/virtualization-job-workflow "Virtualization Job Workflow")
 
-Our recommended solution is that job script creates distinct shared job directory, which makes a central point for data exchange between Anselm's environment, compute node (host) (e.g. HOME, SCRATCH, local scratch and other local or cluster file systems) and virtual machine (guest). Job script links or copies input data and instructions what to do (run script) for virtual machine to job directory and virtual machine process input data according instructions in job directory and store output back to job directory. We recommend, that virtual machine is running in so called [snapshot mode](virtualization/#snapshot-mode), image is immutable * image does not change, so one image can be used for many concurrent jobs.
+Our recommended solution is that the job script creates a distinct shared job directory, which serves as a central point for data exchange between Anselm's environment, the compute node (host) (e.g. HOME, SCRATCH, local scratch and other local or cluster file systems) and the virtual machine (guest). The job script links or copies input data and instructions on what to do (the run script) for the virtual machine into the job directory; the virtual machine processes the input data according to the instructions in the job directory and stores the output back to the job directory. We recommend running the virtual machine in so called [snapshot mode](virtualization/#snapshot-mode): the image is immutable - it does not change, so one image can be used for many concurrent jobs.
 
 ### Procedure
 
@@ -65,13 +65,13 @@ You can either use your existing image or create new image from scratch.
 
 QEMU currently supports these image types or formats:
 
-*   raw
-*   cloop
-*   cow
-*   qcow
-*   qcow2
-*   vmdk * VMware 3 & 4, or 6 image format, for exchanging images with that product
-*   vdi * VirtualBox 1.1 compatible image format, for exchanging images with VirtualBox.
+-   raw
+-   cloop
+-   cow
+-   qcow
+-   qcow2
+-   vmdk - VMware 3 & 4, or 6 image format, for exchanging images with that product
+-   vdi - VirtualBox 1.1 compatible image format, for exchanging images with VirtualBox.
 
 You can convert your existing image using the qemu-img convert command. Supported formats of this command are: blkdebug blkverify bochs cloop cow dmg file ftp ftps host_cdrom host_device host_floppy http https nbd parallels qcow qcow2 qed raw sheepdog tftp vdi vhdx vmdk vpc vvfat.
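+
+For example, converting a VirtualBox image to qcow2 might look like this (the file names are placeholders):
+
+```bash
+    $ qemu-img convert -f vdi -O qcow2 image.vdi image.qcow2
+```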
 
@@ -97,10 +97,10 @@ Your image should run some kind of operating system startup script. Startup scri
 
 We recommend that the startup script
 
-*   maps Job Directory from host (from compute node)
-*   runs script (we call it "run script") from Job Directory and waits for application's exit
-    *   for management purposes if run script does not exist wait for some time period (few minutes)
-*   shutdowns/quits OS
+-   maps Job Directory from host (from compute node)
+-   runs script (we call it "run script") from Job Directory and waits for application's exit
+    -   for management purposes if run script does not exist wait for some time period (few minutes)
+-   shuts down/quits the OS
 
 For Windows operating systems, we suggest using a Local Group Policy Startup script; for Linux operating systems, rc.local, a runlevel init script or a similar service. A minimal sketch for Linux follows below.
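+
+This is only an illustration, assuming the host exports the Job Directory as a CIFS share at the QEMU user-network address 10.0.2.4 and that the run script is named run.sh; the actual mapping mechanism and paths depend on your setup:
+
+```bash
+#!/bin/bash
+# map the Job Directory from the host (share name and address are assumptions)
+mount -t cifs //10.0.2.4/jobdir /mnt/jobdir || exit 1
+if [ -x /mnt/jobdir/run.sh ]; then
+    /mnt/jobdir/run.sh          # run the run script and wait for it to exit
+else
+    sleep 300                   # no run script: wait a few minutes for management access
+fi
+poweroff                        # shut down the guest OS
+```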
 
@@ -249,7 +249,7 @@ Run virtual machine using optimized devices, user network back-end with sharing
 Thanks to port forwarding you can access the virtual machine via SSH (Linux) or RDP (Windows) by connecting to the IP address of the compute node (port 2222 for SSH). You must use the VPN network.
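+
+For instance, reaching a Linux guest from a VPN-connected machine might look like this (the node name and guest account are placeholders):
+
+```bash
+    $ ssh -p 2222 user@cn204
+```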
 
 !!! Note
-     Keep in mind, that if you use virtio devices, you must have virtio drivers installed on your virtual machine.
+	Keep in mind that if you use virtio devices, you must have virtio drivers installed on your virtual machine.
 
 ### Networking and Data Sharing
 
@@ -298,7 +298,7 @@ Create virtual network switch.
 Run SLIRP daemon over SSH tunnel on login node and connect it to virtual network switch.
 
 ```bash
-    $ dpipe vde_plug /tmp/sw0 = ssh login1 $VDE2_DIR/bin/slirpvde -s * --dhcp &
+    $ dpipe vde_plug /tmp/sw0 = ssh login1 $VDE2_DIR/bin/slirpvde -s - --dhcp &
 ```
 
 Run qemu using vde network back-end, connect to created virtual switch.
@@ -338,9 +338,9 @@ Interface tap0 has IP address 192.168.1.1 and network mask 255.255.255.0 (/24).
 
 Redirected ports:
 
-*   DNS udp/53->udp/3053, tcp/53->tcp3053
-*   DHCP udp/67->udp3067
-*   SMB tcp/139->tcp3139, tcp/445->tcp3445).
+-   DNS udp/53->udp/3053, tcp/53->tcp/3053
+-   DHCP udp/67->udp/3067
+-   SMB tcp/139->tcp/3139, tcp/445->tcp/3445
 
 You can configure the IP address of the virtual machine statically or dynamically. For dynamic addressing, provide your DHCP server on port 3067 of the tap0 interface; you can also provide your DNS server on port 3053 of the tap0 interface, for example:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md b/docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md
index ae5c685364f24d143cdd976a134849d1fe62b531..7d954569b14c633272ee4ee793f0a62703f7827c 100644
--- a/docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md
+++ b/docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md
@@ -7,7 +7,7 @@ The OpenMPI programs may be executed only via the PBS Workload manager, by enter
 ### Basic Usage
 
 !!! Note
-     Use the mpiexec to run the OpenMPI code.
+	Use the mpiexec to run the OpenMPI code.
 
 Example:
 
@@ -28,7 +28,7 @@ Example:
 ```
 
 !!! Note
-     Please be aware, that in this example, the directive **-pernode** is used to run only **one task per node**, which is normally an unwanted behaviour (unless you want to run hybrid code with just one MPI and 16 OpenMP tasks per node). In normal MPI programs **omit the -pernode directive** to run up to 16 MPI tasks per each node.
+	Please be aware that in this example the directive **-pernode** is used to run only **one task per node**, which is normally an unwanted behaviour (unless you want to run hybrid code with just one MPI and 16 OpenMP tasks per node). In normal MPI programs, **omit the -pernode directive** to run up to 16 MPI tasks per node.
 
 In this example, we allocate 4 nodes via the express queue interactively. We set up the openmpi environment and interactively run the helloworld_mpi.x program. Note that the executable helloworld_mpi.x must be available within the
 same path on all nodes. This is automatically fulfilled on the /home and /scratch filesystem.
@@ -49,7 +49,7 @@ You need to preload the executable, if running on the local scratch /lscratch fi
 In this example, we assume the executable helloworld_mpi.x is present on compute node cn17 on local scratch. We call the mpiexec with the **--preload-binary** argument (valid for openmpi). The mpiexec will copy the executable from cn17 to the /lscratch/15210.srv11 directory on cn108, cn109 and cn110 and execute the program.
 
 !!! Note
-     MPI process mapping may be controlled by PBS parameters.
+	MPI process mapping may be controlled by PBS parameters.
 
 The mpiprocs and ompthreads parameters allow for the selection of the number of running MPI processes per node as well as the number of OpenMP threads per MPI process.
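+
+For example, a hybrid run with one MPI process and 16 OpenMP threads per node might be allocated like this (the queue and node count are illustrative):
+
+```bash
+    $ qsub -I -q qexp -l select=4:ncpus=16:mpiprocs=1:ompthreads=16
+```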
 
@@ -98,7 +98,7 @@ In this example, we demonstrate recommended way to run an MPI application, using
 ### OpenMP Thread Affinity
 
 !!! Note
-     Important!  Bind every OpenMP thread to a core!
+	Important!  Bind every OpenMP thread to a core!
 
 In the previous two examples with one or two MPI processes per node, the operating system might still migrate OpenMP threads between cores. You might want to avoid this by setting these environment variables for GCC OpenMP:
 
@@ -153,7 +153,7 @@ In this example, we see that ranks have been mapped on nodes according to the or
 Exact control of MPI process placement and resource binding is provided by specifying a rankfile.
 
 !!! Note
-     Appropriate binding may boost performance of your application.
+	Appropriate binding may boost performance of your application.
 
 Example rankfile
 
@@ -207,7 +207,7 @@ In all cases, binding and threading may be verified by executing for example:
 Some options have changed in OpenMPI version 1.8.
 
 | version 1.6.5    | version 1.8.1       |
-| ---------------* | ------------------* |
+| ---------------- | ------------------- |
 | --bind-to-none   | --bind-to none      |
 | --bind-to-core   | --bind-to core      |
 | --bind-to-socket | --bind-to socket    |
diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md
index 46397a9f60c0f50ffc2ec9e7871361a55c002290..b2290555f209d8b65a7d6a3e04bb802bf2ae8ff2 100644
--- a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md
+++ b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md
@@ -5,7 +5,7 @@
 The Anselm cluster provides several implementations of the MPI library:
 
 | MPI Library                                          | Thread support                                                  |
-| ---------------------------------------------------* | --------------------------------------------------------------* |
+| ---------------------------------------------------- | --------------------------------------------------------------- |
 | The highly optimized and stable **bullxmpi 1.2.4.1** | Partial thread support up to MPI_THREAD_SERIALIZED              |
 | The **Intel MPI 4.1**                                | Full thread support up to MPI_THREAD_MULTIPLE                   |
 | The [OpenMPI 1.6.5](http://www.open-mpi.org)         | Full thread support up to MPI_THREAD_MULTIPLE, BLCR c/r support |
@@ -18,7 +18,7 @@ Look up section modulefiles/mpi in module avail
 
 ```bash
     $ module avail
-    ------------------------* /opt/modules/modulefiles/mpi -------------------------
+    ------------------------- /opt/modules/modulefiles/mpi -------------------------
     bullxmpi/bullxmpi-1.2.4.1  mvapich2/1.9-icc
     impi/4.0.3.008             openmpi/1.6.5-gcc(default)
     impi/4.1.0.024             openmpi/1.6.5-gcc46
@@ -32,7 +32,7 @@ Look up section modulefiles/mpi in module avail
 There are default compilers associated with any particular MPI implementation. The defaults may be changed, the MPI libraries may be used in conjunction with any compiler. The defaults are selected via the modules in following way
 
 | Module       | MPI              | Compiler suite                                                                 |
-| -----------* | ---------------* | -----------------------------------------------------------------------------* |
+| ------------ | ---------------- | ------------------------------------------------------------------------------ |
 | PrgEnv-gnu   | bullxmpi-1.2.4.1 | bullx GNU 4.4.6                                                                |
 | PrgEnv-intel | Intel MPI 4.1.1  | Intel 13.1.1                                                                   |
 | bullxmpi     | bullxmpi-1.2.4.1 | none, select via module                                                        |
@@ -61,7 +61,7 @@ In this example, the openmpi 1.6.5 using intel compilers is activated
 ## Compiling MPI Programs
 
 !!! Note
-     After setting up your MPI environment, compile your program using one of the mpi wrappers
+	After setting up your MPI environment, compile your program using one of the mpi wrappers
 
 ```bash
     $ mpicc -v
@@ -108,7 +108,7 @@ Compile the above example with
 ## Running MPI Programs
 
 !!! Note
-     The MPI program executable must be compatible with the loaded MPI module. 
+	The MPI program executable must be compatible with the loaded MPI module. 
     Always compile and execute using the very same MPI module.
 
 It is strongly discouraged to mix MPI implementations. Linking an application with one MPI implementation and running mpirun/mpiexec from another implementation may result in unexpected errors.
@@ -120,7 +120,7 @@ The MPI program executable must be available within the same path on all nodes.
 The optimal way to run an MPI program depends on its memory requirements, memory access pattern and communication pattern.
 
 !!! Note
-     Consider these ways to run an MPI program:
+	Consider these ways to run an MPI program:
 
     1. One MPI process per node, 16 threads per process
     2. Two MPI processes per node, 8 threads per process
@@ -131,7 +131,7 @@ Optimal way to run an MPI program depends on its memory requirements, memory acc
 **Two MPI** processes per node, using 8 threads each, bound to processor socket is most useful for memory bandwidth bound applications such as BLAS1 or FFT, with scalable memory demand. However, note that the two processes will share access to the network interface. The 8 threads and socket binding should ensure maximum memory access bandwidth and minimize communication, migration and NUMA effect overheads.
 
 !!! Note
-     Important!  Bind every OpenMP thread to a core!
+	Important!  Bind every OpenMP thread to a core!
 
 In the previous two cases with one or two MPI processes per node, the operating system might still migrate OpenMP threads between cores. You might want to avoid this by setting the KMP_AFFINITY or GOMP_CPU_AFFINITY environment variables.
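+
+A sketch of such settings (the values are common choices for 16-core nodes; verify them against your thread layout):
+
+```bash
+    $ export GOMP_CPU_AFFINITY="0-15"                    # GCC OpenMP: pin threads to cores 0-15
+    $ export KMP_AFFINITY=granularity=fine,compact,1,0   # Intel OpenMP counterpart
+```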
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md b/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md
index b9032bc54da9ebedb63080e9548627089276d154..9fe89641117e55704c029a26043311b6aec3a96a 100644
--- a/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md
+++ b/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md
@@ -7,7 +7,7 @@ The MPICH2 programs use mpd daemon or ssh connection to spawn processes, no PBS
 ### Basic Usage
 
 !!! Note
-     Use the mpirun to execute the MPICH2 code.
+	Use the mpirun to execute the MPICH2 code.
 
 Example:
 
@@ -44,7 +44,7 @@ You need to preload the executable, if running on the local scratch /lscratch fi
 In this example, we assume the executable helloworld_mpi.x is present on the shared home directory. We run the cp command via mpirun, copying the executable from shared home to local scratch. The second mpirun will execute the binary in the /lscratch/15210.srv11 directory on nodes cn17, cn108, cn109 and cn110, one process per node.
 
 !!! Note
-     MPI process mapping may be controlled by PBS parameters.
+	MPI process mapping may be controlled by PBS parameters.
 
 The mpiprocs and ompthreads parameters allow for the selection of the number of running MPI processes per node as well as the number of OpenMP threads per MPI process.
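+
+For instance, sixteen single-threaded MPI processes per node might be requested like this (the queue and node count are illustrative):
+
+```bash
+    $ qsub -I -q qexp -l select=4:ncpus=16:mpiprocs=16:ompthreads=1
+```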
 
@@ -93,7 +93,7 @@ In this example, we demonstrate recommended way to run an MPI application, using
 ### OpenMP Thread Affinity
 
 !!! Note
-     Important!  Bind every OpenMP thread to a core!
+	Important!  Bind every OpenMP thread to a core!
 
 In the previous two examples with one or two MPI processes per node, the operating system might still migrate OpenMP threads between cores. You might want to avoid this by setting these environment variables for GCC OpenMP:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
index f60f45258e5c5ca442e56e61092793d54d8207ec..1f933d82f633a98cf16394bdf10e608d2f2c1b8f 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
@@ -4,8 +4,8 @@
 
 Matlab is available in versions R2015a and R2015b. There are always two variants of the release:
 
-*   Non commercial or so called EDU variant, which can be used for common research and educational purposes.
-*   Commercial or so called COM variant, which can used also for commercial activities. The licenses for commercial variant are much more expensive, so usually the commercial variant has only subset of features compared to the EDU available.
+-   Non-commercial or so called EDU variant, which can be used for common research and educational purposes.
+-   Commercial or so called COM variant, which can also be used for commercial activities. The licenses for the commercial variant are much more expensive, so usually the commercial variant has only a subset of features compared to the EDU variant.
 
 To load the latest version of Matlab load the module
 
@@ -42,7 +42,7 @@ plots, images, etc... will be still available.
 ## Running Parallel Matlab Using Distributed Computing Toolbox / Engine
 
 !!! Note
-     Distributed toolbox is available only for the EDU variant
+	The Distributed Computing Toolbox is available only for the EDU variant.
 
 The MPIEXEC mode available in previous versions is no longer available in MATLAB 2015. Also, the programming interface has changed. Refer to [Release Notes](http://www.mathworks.com/help/distcomp/release-notes.html#buanp9e-1).
 
@@ -65,7 +65,7 @@ Or in the GUI, go to tab HOME -> Parallel -> Manage Cluster Profiles..., click I
 With the new mode, MATLAB itself launches the workers via PBS, so you can either use interactive mode or a batch mode on one node, but the actual parallel processing will be done in a separate job started by MATLAB itself. Alternatively, you can use "local" mode to run parallel code on just a single node.
 
 !!! Note
-     The profile is confusingly named Salomon, but you can use it also on Anselm.
+	The profile is confusingly named Salomon, but you can use it also on Anselm.
 
 ### Parallel Matlab Interactive Session
 
@@ -179,7 +179,7 @@ You can copy and paste the example in a .m file and execute. Note that the parpo
 
 ### Parallel Matlab Batch Job Using PBS Mode (Workers Spawned in a Separate Job)
 
-This mode uses PBS scheduler to launch the parallel pool. It uses the SalomonPBSPro profile that needs to be imported to Cluster Manager, as mentioned before. This methodod uses MATLAB's PBS Scheduler interface * it spawns the workers in a separate job submitted by MATLAB using qsub.
+This mode uses the PBS scheduler to launch the parallel pool. It uses the SalomonPBSPro profile that needs to be imported to Cluster Manager, as mentioned before. This method uses MATLAB's PBS Scheduler interface - it spawns the workers in a separate job submitted by MATLAB using qsub.
 
 This is an example of m-script using PBS mode:
 
@@ -260,7 +260,7 @@ In case of non-interactive session please read the [following information](../is
 Starting Matlab workers is an expensive process that requires a certain amount of time. For your information, please see the following table:
 
 | compute nodes | number of workers | start-up time[s] |
-| ------------* | ----------------* | ---------------* |
+| ------------- | ----------------- | ---------------- |
 | 16            | 384               | 831              |
 | 8             | 192               | 807              |
 | 4             | 96                | 483              |
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md
index e98fa4a236ebcea2bd0a27349bbd93699770469b..c69a9eea9debc508b84dbe0b502e205272451eea 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md
@@ -3,12 +3,12 @@
 ## Introduction
 
 !!! Note
-     This document relates to the old versions R2013 and R2014. For MATLAB 2015, please use [this documentation instead](matlab/).
+	This document relates to the old versions R2013 and R2014. For MATLAB 2015, please use [this documentation instead](matlab/).
 
 Matlab is available in the latest stable version. There are always two variants of the release:
 
-*   Non commercial or so called EDU variant, which can be used for common research and educational purposes.
-*   Commercial or so called COM variant, which can used also for commercial activities. The licenses for commercial variant are much more expensive, so usually the commercial variant has only subset of features compared to the EDU available.
+-   Non-commercial or so called EDU variant, which can be used for common research and educational purposes.
+-   Commercial or so called COM variant, which can also be used for commercial activities. The licenses for the commercial variant are much more expensive, so usually the commercial variant has only a subset of features compared to the EDU variant.
 
 To load the latest version of Matlab load the module
 
@@ -202,7 +202,7 @@ In case of non-interactive session please read the [following information](../is
 Starting Matlab workers is an expensive process that requires a certain amount of time. For your information, please see the following table:
 
 | compute nodes | number of workers | start-up time[s] |
-| ------------* | ----------------* | ---------------* |
+| ------------- | ----------------- | ---------------- |
 | 16            | 256               | 1008             |
 | 8             | 128               | 534              |
 | 4             | 64                | 333              |
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
index 6f8c807ba09c0a9f1157a8eff78087926b613e7a..fa6a0378ddc36a1b085d0e24f625fd40030679cb 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
@@ -7,7 +7,7 @@ GNU Octave is a high-level interpreted language, primarily intended for numerica
 Two versions of octave are available on Anselm, via module
 
 | Version                                               | module                    |
-| ----------------------------------------------------* | ------------------------* |
+| ----------------------------------------------------- | ------------------------- |
 | Octave 3.8.2, compiled with GCC and Multithreaded MKL | Octave/3.8.2-gimkl-2.11.5 |
 | Octave 4.0.1, compiled with GCC and Multithreaded MKL | Octave/4.0.1-gimkl-2.11.5 |
 | Octave 4.0.0, compiled with GCC and OpenBLAS          | Octave/4.0.0-foss-2015g   |
@@ -90,14 +90,14 @@ In this example, the calculation was automatically divided among the CPU cores a
 
 A version of [native](../intel-xeon-phi/#section-4) Octave is compiled for Xeon Phi accelerators. Some limitations apply for this version:
 
-*   Only command line support. GUI, graph plotting etc. is not supported.
-*   Command history in interactive mode is not supported.
+-   Only command line support. GUI, graph plotting etc. is not supported.
+-   Command history in interactive mode is not supported.
 
 Octave is linked with parallel Intel MKL, so it is best suited for batch processing of tasks that utilize BLAS, LAPACK and FFT operations. By default, the number of threads is set to 120; you can control this with the OMP_NUM_THREADS environment variable.
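+
+For example, to lower the thread count before starting Octave on the accelerator (the value is illustrative):
+
+```bash
+    $ export OMP_NUM_THREADS=60
+```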
 
 !!! Note
-     Calculations that do not employ parallelism (either by using parallel MKL e.g. via matrix operations,  fork() function, [parallel package](http://octave.sourceforge.net/parallel/) or other mechanism) will actually run slower than on host CPU.
+	Calculations that do not employ parallelism (either by using parallel MKL, e.g. via matrix operations, the fork() function, the [parallel package](http://octave.sourceforge.net/parallel/) or another mechanism) will actually run slower than on the host CPU.
 
 To use Octave on a node with Xeon Phi:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md
index b73683e146a0f99fdb7bea41b73214ac1b9b738f..c99930167290cd2b2707a6d07a0f61a1ef89ad85 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md
@@ -17,7 +17,7 @@ Read more on <http://www.r-project.org/>, <http://cran.r-project.org/doc/manuals
 The R version 3.0.1 is available on Anselm, along with GUI interface Rstudio
 
 | Application | Version      | module  |
-| ----------* | -----------* | ------* |
+| ----------- | ------------ | ------- |
 | **R**       | R 3.0.1      | R       |
 | **Rstudio** | Rstudio 0.97 | Rstudio |
 
@@ -96,7 +96,7 @@ Download the package [parallell](package-parallel-vignette.pdf) vignette.
 Forking is the simplest to use. The forking family of functions provides a parallelized, drop-in replacement for the serial apply() family of functions.
 
 !!! Note
-     Forking via package parallel provides functionality similar to OpenMP construct
+	Forking via package parallel provides functionality similar to OpenMP construct
 
     omp parallel for
 
@@ -108,13 +108,13 @@ Forking example:
     library(parallel)
 
     #integrand function
-    f <* function(i,h) {
-    x <* h*(i-0.5)
+    f <- function(i,h) {
+    x <- h*(i-0.5)
     return (4/(1 + x*x))
     }
 
     #initialize
-    size <* detectCores()
+    size <- detectCores()
 
     while (TRUE)
     {
@@ -125,11 +125,11 @@ Forking example:
       if(n<=0) break
 
       #run the calculation
-      n <* max(n,size)
-      h <*   1.0/n
+      n <- max(n,size)
+      h <-   1.0/n
 
-      i <* seq(1,n);
-      pi3 <* h*sum(simplify2array(mclapply(i,f,h,mc.cores=size)));
+      i <- seq(1,n);
+      pi3 <- h*sum(simplify2array(mclapply(i,f,h,mc.cores=size)));
 
       #print results
       cat(sprintf("Value of PI %16.14f, diff= %16.14f\n",pi3,pi3-pi))
@@ -147,7 +147,7 @@ Every evaluation of the integrad function runs in parallel on different process.
 ## Package Rmpi
 
 !!! Note
-     package Rmpi provides an interface (wrapper) to MPI APIs.
+	package Rmpi provides an interface (wrapper) to MPI APIs.
 
 It also provides an interactive R slave environment. On Anselm, Rmpi provides an interface to the [OpenMPI](../mpi-1/Running_OpenMPI/).
 
@@ -164,7 +164,7 @@ Rmpi may be used in three basic ways. The static approach is identical to execut
 
 ### Static Rmpi
 
-Static Rmpi programs are executed via mpiexec, as any other MPI programs. Number of processes is static * given at the launch time.
+Static Rmpi programs are executed via mpiexec, as any other MPI programs. The number of processes is static - given at launch time.
 
 Static Rmpi example:
 
@@ -172,15 +172,15 @@ Static Rmpi example:
     library(Rmpi)
 
     #integrand function
-    f <* function(i,h) {
-    x <* h*(i-0.5)
+    f <- function(i,h) {
+    x <- h*(i-0.5)
     return (4/(1 + x*x))
     }
 
     #initialize
     invisible(mpi.comm.dup(0,1))
-    rank <* mpi.comm.rank()
-    size <* mpi.comm.size()
+    rank <- mpi.comm.rank()
+    size <- mpi.comm.size()
     n<-0
 
     while (TRUE)
@@ -192,18 +192,18 @@ Static Rmpi example:
       }
 
       #broadcast the intervals
-      n <* mpi.bcast(as.integer(n),type=1)
+      n <- mpi.bcast(as.integer(n),type=1)
 
       if(n<=0) break
 
       #run the calculation
-      n <* max(n,size)
-      h <*   1.0/n
+      n <- max(n,size)
+      h <-   1.0/n
 
-      i <* seq(rank+1,n,size);
-      mypi <* h*sum(sapply(i,f,h));
+      i <- seq(rank+1,n,size);
+      mypi <- h*sum(sapply(i,f,h));
 
-      pi3 <* mpi.reduce(mypi)
+      pi3 <- mpi.reduce(mypi)
 
       #print results
       if (rank==0) cat(sprintf("Value of PI %16.14f, diff= %16.14f\n",pi3,pi3-pi))
@@ -228,17 +228,17 @@ Dynamic Rmpi example:
 
 ```r
     #integrand function
-    f <* function(i,h) {
-    x <* h*(i-0.5)
+    f <- function(i,h) {
+    x <- h*(i-0.5)
     return (4/(1 + x*x))
     }
 
     #the worker function
-    workerpi <* function()
+    workerpi <- function()
     {
     #initialize
-    rank <* mpi.comm.rank()
-    size <* mpi.comm.size()
+    rank <- mpi.comm.rank()
+    size <- mpi.comm.size()
     n<-0
 
     while (TRUE)
@@ -250,18 +250,18 @@ Dynamic Rmpi example:
       }
 
       #broadcast the intervals
-      n <* mpi.bcast(as.integer(n),type=1)
+      n <- mpi.bcast(as.integer(n),type=1)
 
       if(n<=0) break
 
       #run the calculation
-      n <* max(n,size)
-      h <*   1.0/n
+      n <- max(n,size)
+      h <-   1.0/n
 
-      i <* seq(rank+1,n,size);
-      mypi <* h*sum(sapply(i,f,h));
+      i <- seq(rank+1,n,size);
+      mypi <- h*sum(sapply(i,f,h));
 
-      pi3 <* mpi.reduce(mypi)
+      pi3 <- mpi.reduce(mypi)
 
       #print results
       if (rank==0) cat(sprintf("Value of PI %16.14f, diff= %16.14f\n",pi3,pi3-pi))
@@ -297,7 +297,7 @@ Execute the example as:
 mpi.apply is a specific way of executing Dynamic Rmpi programs.
 
 !!! Note
-     mpi.apply() family of functions provide MPI parallelized, drop in replacement for the serial apply() family of functions.
+	The mpi.apply() family of functions provides an MPI parallelized, drop-in replacement for the serial apply() family of functions.
 
 Execution is identical to other dynamic Rmpi programs.
 
@@ -305,20 +305,20 @@ mpi.apply Rmpi example:
 
 ```r
     #integrand function
-    f <* function(i,h) {
-    x <* h*(i-0.5)
+    f <- function(i,h) {
+    x <- h*(i-0.5)
     return (4/(1 + x*x))
     }
 
     #the worker function
-    workerpi <* function(rank,size,n)
+    workerpi <- function(rank,size,n)
     {
       #run the calculation
-      n <* max(n,size)
-      h <* 1.0/n
+      n <- max(n,size)
+      h <- 1.0/n
 
-      i <* seq(rank,n,size);
-      mypi <* h*sum(sapply(i,f,h));
+      i <- seq(rank,n,size);
+      mypi <- h*sum(sapply(i,f,h));
 
       return(mypi)
     }
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md
index 8148d5daa75d7cb4c3c321e21f647bf5a8639828..67c3fdf090a195dc3cd88d66dd920e0cd6163648 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md
@@ -7,7 +7,7 @@ FFTW is a C subroutine library for computing the discrete Fourier transform  in
 Two versions, **3.3.3** and **2.1.5** of FFTW are available on Anselm, each compiled for **Intel MPI** and **OpenMPI** using **intel** and **gnu** compilers. These are available via modules:
 
 | Version        | Parallelization | module              | linker options                      |
-| -------------* | --------------* | ------------------* | ----------------------------------* |
+| -------------- | --------------- | ------------------- | ----------------------------------- |
 | FFTW3 gcc3.3.3 | pthread, OpenMP | fftw3/3.3.3-gcc     | -lfftw3, -lfftw3_threads, -lfftw3_omp |
 | FFTW3 icc3.3.3 | pthread, OpenMP | fftw3               | -lfftw3, -lfftw3_threads, -lfftw3_omp |
 | FFTW2 gcc2.1.5 | pthread         | fftw2/2.1.5-gcc     | -lfftw, -lfftw_threads              |
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md
index 8160b4af6dfaba05e2c48b901f79a0fdfcee0929..0f21187f23982ea4eebba887fa786e7cef0467c6 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md
@@ -8,46 +8,46 @@ The GNU Scientific Library (GSL) provides a wide range of mathematical routines
 
 The library covers a wide range of topics in numerical computing. Routines are available for the following areas:
 
+-   Complex Numbers
+-   Roots of Polynomials
+-   Special Functions
+-   Vectors and Matrices
+-   Permutations
+-   Combinations
+-   Sorting
+-   BLAS Support
+-   Linear Algebra
+-   CBLAS Library
+-   Fast Fourier Transforms
+-   Eigensystems
+-   Random Numbers
+-   Quadrature
+-   Random Distributions
+-   Quasi-Random Sequences
+-   Histograms
+-   Statistics
+-   Monte Carlo Integration
+-   N-Tuples
+-   Differential Equations
+-   Simulated Annealing
+-   Numerical Differentiation
+-   Interpolation
+-   Series Acceleration
+-   Chebyshev Approximations
+-   Root-Finding
+-   Discrete Hankel Transforms
+-   Least-Squares Fitting
+-   Minimization
+-   IEEE Floating-Point
+-   Physical Constants
+-   Basis Splines
+-   Wavelets
 
 ## Modules
 
 The GSL 1.16 is available on Anselm, compiled for GNU and Intel compiler. These variants are available via modules:
 
 | Module                | Compiler  |
-| --------------------* | --------* |
+| --------------------- | --------- |
 | gsl/1.16-gcc          | gcc 4.8.6 |
 | gsl/1.16-icc(default) | icc       |
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/hdf5.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/hdf5.md
index 3ad0501ce064f6d945d1fd8a61e178423d574abe..42e05e01e70a551f1eb7f60ef7d66203320d8c81 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/hdf5.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/hdf5.md
@@ -7,7 +7,7 @@ Hierarchical Data Format library. Serial and MPI parallel version.
 Versions **1.8.11** and **1.8.13** of HDF5 library are available on Anselm, compiled for **Intel MPI** and **OpenMPI** using **intel** and **gnu** compilers. These are available via modules:
 
 | Version               | Parallelization                   | module                     | C linker options      | C++ linker options      | Fortran linker options  |
-| --------------------* | --------------------------------* | -------------------------* | --------------------* | ----------------------* | ----------------------* |
+| --------------------- | --------------------------------- | -------------------------- | --------------------- | ----------------------- | ----------------------- |
 | HDF5 icc serial       | pthread                           | hdf5/1.8.11                | $HDF5_INC $HDF5_SHLIB | $HDF5_INC $HDF5_CPP_LIB | $HDF5_INC $HDF5_F90_LIB |
 | HDF5 icc parallel MPI | pthread, IntelMPI                 | hdf5-parallel/1.8.11       | $HDF5_INC $HDF5_SHLIB | Not supported           | $HDF5_INC $HDF5_F90_LIB |
 | HDF5 icc serial       | pthread                           | hdf5/1.8.13                | $HDF5_INC $HDF5_SHLIB | $HDF5_INC $HDF5_CPP_LIB | $HDF5_INC $HDF5_F90_LIB |
@@ -23,7 +23,7 @@ Versions **1.8.11** and **1.8.13** of HDF5 library are available on Anselm, comp
 The module sets up environment variables required for linking and running HDF5 enabled applications. Make sure that the choice of HDF5 module is consistent with your choice of MPI library. Mixing different MPI implementations may have unpredictable results.
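+
+A compile-and-link sketch using these variables (the module and file names are illustrative):
+
+```bash
+    $ module load hdf5-parallel/1.8.11
+    $ mpicc myapp.c -o myapp $HDF5_INC $HDF5_SHLIB
+```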
 
 !!! Note
-     Be aware, that GCC version of **HDF5 1.8.11** has serious performance issues, since it's compiled with -O0 optimization flag. This version is provided only for testing of code compiled only by GCC and IS NOT recommended for production computations. For more information, please see: <http://www.hdfgroup.org/ftp/HDF5/prev-releases/ReleaseFiles/release5-1811>
+	Be aware that the GCC version of **HDF5 1.8.11** has serious performance issues, since it is compiled with the -O0 optimization flag. This version is provided only for testing of code compiled only by GCC and IS NOT recommended for production computations. For more information, please see: <http://www.hdfgroup.org/ftp/HDF5/prev-releases/ReleaseFiles/release5-1811>
 
     GCC versions of **HDF5 1.8.13** are not affected by the bug; they are compiled with -O3 optimizations and are recommended for production computations.
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md
index 283d17a8c9648e4e1f565066e9e3bee77b8b3f01..ef2ee580914212b99eda25128ba0dc4063994b5b 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md
@@ -13,10 +13,10 @@ To be able to compile and link code with MAGMA library user has to load followin
 To make compilation more user friendly, the module also sets these two environment variables:
 
 !!! Note
-     MAGMA_INC * contains paths to the MAGMA header files (to be used for compilation step)
+	MAGMA_INC - contains paths to the MAGMA header files (to be used for compilation step)
 
 !!! Note
-     MAGMA_LIBS * contains paths to MAGMA libraries (to be used for linking step).
+	MAGMA_LIBS - contains paths to MAGMA libraries (to be used for linking step).
 
 Compilation example:
 
@@ -31,16 +31,16 @@ Compilation example:
 The MAGMA implementation for Intel MIC requires a MAGMA server running on the accelerator prior to executing the user application. The server can be started and stopped using the following scripts:
 
 !!! Note
-     To start MAGMA server use:
-     **$MAGMAROOT/start_magma_server**
+	To start MAGMA server use:
+	**$MAGMAROOT/start_magma_server**
 
 !!! Note
-     To stop the server use:
-     **$MAGMAROOT/stop_magma_server**
+	To stop the server use:
+	**$MAGMAROOT/stop_magma_server**
 
 !!! Note
-     For deeper understanding how the MAGMA server is started, see the following script:
-     **$MAGMAROOT/launch_anselm_from_mic.sh**
+	For deeper understanding how the MAGMA server is started, see the following script:
+	**$MAGMAROOT/launch_anselm_from_mic.sh**
 
 To test if the MAGMA server runs properly, we can run one of the examples that are part of the MAGMA installation:
 
@@ -54,16 +54,16 @@ To test if the MAGMA server runs properly we can run one of examples that are pa
 
       M     N     CPU GFlop/s (sec)   MAGMA GFlop/s (sec)   ||PA-LU||/(||A||*N)
     =========================================================================
-     1088  1088     --*   (  --*  )     13.93 (   0.06)     ---
-     2112  2112     --*   (  --*  )     77.85 (   0.08)     ---
-     3136  3136     --*   (  --*  )    183.21 (   0.11)     ---
-     4160  4160     --*   (  --*  )    227.52 (   0.21)     ---
-     5184  5184     --*   (  --*  )    258.61 (   0.36)     ---
-     6208  6208     --*   (  --*  )    333.12 (   0.48)     ---
-     7232  7232     --*   (  --*  )    416.52 (   0.61)     ---
-     8256  8256     --*   (  --*  )    446.97 (   0.84)     ---
-     9280  9280     --*   (  --*  )    461.15 (   1.16)     ---
-    10304 10304     --*   (  --*  )    500.70 (   1.46)     ---
+     1088  1088     ---   (  ---  )     13.93 (   0.06)     ---
+     2112  2112     ---   (  ---  )     77.85 (   0.08)     ---
+     3136  3136     ---   (  ---  )    183.21 (   0.11)     ---
+     4160  4160     ---   (  ---  )    227.52 (   0.21)     ---
+     5184  5184     ---   (  ---  )    258.61 (   0.36)     ---
+     6208  6208     ---   (  ---  )    333.12 (   0.48)     ---
+     7232  7232     ---   (  ---  )    416.52 (   0.61)     ---
+     8256  8256     ---   (  ---  )    446.97 (   0.84)     ---
+     9280  9280     ---   (  ---  )    461.15 (   1.16)     ---
+    10304 10304     ---   (  ---  )    500.70 (   1.46)     ---
 ```
 
 !!! Hint
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md
index 2e3fc0b916a432ea8a2ca358cca38b07c51e78d0..5ea0936f7140691ade232c63c65a752018f96537 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md
@@ -8,11 +8,11 @@ PETSc (Portable, Extensible Toolkit for Scientific Computation) is a suite of bu
 
 ## Resources
 
-*   [project webpage](http://www.mcs.anl.gov/petsc/)
-*   [documentation](http://www.mcs.anl.gov/petsc/documentation/)
-    *   [PETSc Users  Manual (PDF)](http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf)
-    *   [index of all manual pages](http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html)
-*   PRACE Video Tutorial [part1](http://www.youtube.com/watch?v=asVaFg1NDqY), [part2](http://www.youtube.com/watch?v=ubp_cSibb9I), [part3](http://www.youtube.com/watch?v=vJAAAQv-aaw), [part4](http://www.youtube.com/watch?v=BKVlqWNh8jY), [part5](http://www.youtube.com/watch?v=iXkbLEBFjlM)
+-   [project webpage](http://www.mcs.anl.gov/petsc/)
+-   [documentation](http://www.mcs.anl.gov/petsc/documentation/)
+    -   [PETSc Users  Manual (PDF)](http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf)
+    -   [index of all manual pages](http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html)
+-   PRACE Video Tutorial [part1](http://www.youtube.com/watch?v=asVaFg1NDqY), [part2](http://www.youtube.com/watch?v=ubp_cSibb9I), [part3](http://www.youtube.com/watch?v=vJAAAQv-aaw), [part4](http://www.youtube.com/watch?v=BKVlqWNh8jY), [part5](http://www.youtube.com/watch?v=iXkbLEBFjlM)
 
 ## Modules
 
@@ -36,25 +36,25 @@ All these libraries can be used also alone, without PETSc. Their static or share
 
 ### Libraries Linked to PETSc on Anselm (As of 11 April 2015)
 
-*   dense linear algebra
-    *   [Elemental](http://libelemental.org/)
-*   sparse linear system solvers
-    *   [Intel MKL Pardiso](https://software.intel.com/en-us/node/470282)
-    *   [MUMPS](http://mumps.enseeiht.fr/)
-    *   [PaStiX](http://pastix.gforge.inria.fr/)
-    *   [SuiteSparse](http://faculty.cse.tamu.edu/davis/suitesparse.html)
-    *   [SuperLU](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu)
-    *   [SuperLU_Dist](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu_dist)
-*   input/output
-    *   [ExodusII](http://sourceforge.net/projects/exodusii/)
-    *   [HDF5](http://www.hdfgroup.org/HDF5/)
-    *   [NetCDF](http://www.unidata.ucar.edu/software/netcdf/)
-*   partitioning
-    *   [Chaco](http://www.cs.sandia.gov/CRF/chac.html)
-    *   [METIS](http://glaros.dtc.umn.edu/gkhome/metis/metis/overview)
-    *   [ParMETIS](http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview)
-    *   [PT-Scotch](http://www.labri.fr/perso/pelegrin/scotch/)
-*   preconditioners & multigrid
-    *   [Hypre](http://www.nersc.gov/users/software/programming-libraries/math-libraries/petsc/)
-    *   [Trilinos ML](http://trilinos.sandia.gov/packages/ml/)
-    *   [SPAI * Sparse Approximate Inverse](https://bitbucket.org/petsc/pkg-spai)
+-   dense linear algebra
+    -   [Elemental](http://libelemental.org/)
+-   sparse linear system solvers
+    -   [Intel MKL Pardiso](https://software.intel.com/en-us/node/470282)
+    -   [MUMPS](http://mumps.enseeiht.fr/)
+    -   [PaStiX](http://pastix.gforge.inria.fr/)
+    -   [SuiteSparse](http://faculty.cse.tamu.edu/davis/suitesparse.html)
+    -   [SuperLU](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu)
+    -   [SuperLU_Dist](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu_dist)
+-   input/output
+    -   [ExodusII](http://sourceforge.net/projects/exodusii/)
+    -   [HDF5](http://www.hdfgroup.org/HDF5/)
+    -   [NetCDF](http://www.unidata.ucar.edu/software/netcdf/)
+-   partitioning
+    -   [Chaco](http://www.cs.sandia.gov/CRF/chac.html)
+    -   [METIS](http://glaros.dtc.umn.edu/gkhome/metis/metis/overview)
+    -   [ParMETIS](http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview)
+    -   [PT-Scotch](http://www.labri.fr/perso/pelegrin/scotch/)
+-   preconditioners & multigrid
+    -   [Hypre](http://www.nersc.gov/users/software/programming-libraries/math-libraries/petsc/)
+    -   [Trilinos ML](http://trilinos.sandia.gov/packages/ml/)
+    -   [SPAI - Sparse Approximate Inverse](https://bitbucket.org/petsc/pkg-spai)
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md
index c86f0646273f33d39d23d62c5f9cb93e91a31fae..0fc553cd6e44ae92774deb9515fb216b9b79abc3 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md
@@ -10,13 +10,13 @@ Trilinos is a collection of software packages for the numerical solution of larg
 
 Current Trilinos installation on ANSELM contains (among others) the following main packages
 
-*   **Epetra** * core linear algebra package containing classes for manipulation with serial and distributed vectors, matrices, and graphs. Dense linear solvers are supported via interface to BLAS and LAPACK (Intel MKL on ANSELM). Its extension **EpetraExt** contains e.g. methods for matrix-matrix multiplication.
-*   **Tpetra** * next-generation linear algebra package. Supports 64-bit indexing and arbitrary data type using C++ templates.
-*   **Belos** * library of various iterative solvers (CG, block CG, GMRES, block GMRES etc.).
-*   **Amesos** * interface to direct sparse solvers.
-*   **Anasazi** * framework for large-scale eigenvalue algorithms.
-*   **IFPACK** * distributed algebraic preconditioner (includes e.g. incomplete LU factorization)
-*   **Teuchos** * common tools packages. This package contains classes for memory management, output, performance monitoring, BLAS and LAPACK wrappers etc.
+-   **Epetra** - core linear algebra package containing classes for manipulation with serial and distributed vectors, matrices, and graphs. Dense linear solvers are supported via interface to BLAS and LAPACK (Intel MKL on ANSELM). Its extension **EpetraExt** contains e.g. methods for matrix-matrix multiplication.
+-   **Tpetra** - next-generation linear algebra package. Supports 64-bit indexing and arbitrary data type using C++ templates.
+-   **Belos** - library of various iterative solvers (CG, block CG, GMRES, block GMRES etc.).
+-   **Amesos** - interface to direct sparse solvers.
+-   **Anasazi** - framework for large-scale eigenvalue algorithms.
+-   **IFPACK** - distributed algebraic preconditioner (includes e.g. incomplete LU factorization)
+-   **Teuchos** - common tools packages. This package contains classes for memory management, output, performance monitoring, BLAS and LAPACK wrappers etc.
 
 For the full list of Trilinos packages, descriptions of their capabilities, and user manuals see [http://trilinos.sandia.gov](http://trilinos.sandia.gov).
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md b/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md
index cb0e4554b5c1b8ae18dd866586bfc4a46f2f7868..062f0a69b253a7f6645b4e822e6dd017527a6d90 100644
--- a/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md
+++ b/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md
@@ -269,7 +269,7 @@ SAXPY function multiplies the vector x by the scalar alpha and adds it to the ve
 
         /* Check result against reference */
         for (i = 0; i < N; ++i)
-            printf("CPU res = %f t GPU res = %f t diff = %f n", h_Y_ref[i], h_Y[i], h_Y_ref[i] * h_Y[i]);
+            printf("CPU res = %f t GPU res = %f t diff = %f n", h_Y_ref[i], h_Y[i], h_Y_ref[i] - h_Y[i]);
 
         /* Memory clean up */
         free(h_X); free(h_Y); free(h_Y_ref);
@@ -282,8 +282,8 @@ SAXPY function multiplies the vector x by the scalar alpha and adds it to the ve
 
 !!! Note
     cuBLAS has its own function for data transfers between CPU and GPU memory:
-    * [cublasSetVector](http://docs.nvidia.com/cuda/cublas/index.html#cublassetvector) * transfers data from CPU to GPU memory
-    * [cublasGetVector](http://docs.nvidia.com/cuda/cublas/index.html#cublasgetvector) * transfers data from GPU to CPU memory
+    - [cublasSetVector](http://docs.nvidia.com/cuda/cublas/index.html#cublassetvector) - transfers data from CPU to GPU memory
+    - [cublasGetVector](http://docs.nvidia.com/cuda/cublas/index.html#cublasgetvector) - transfers data from GPU to CPU memory
 
 To compile the code using the NVCC compiler, the "-lcublas" compiler flag has to be specified:
 
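+For instance (the source and output names are placeholders):
+
+```bash
+    $ nvcc saxpy.cu -lcublas -o saxpy
+```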
diff --git a/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md b/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md
index 0ee597ab8aac8099e25934db552f758821fec5a3..4db4854d169586a1850f0c28b858babc721e5fa9 100644
--- a/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md
+++ b/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md
@@ -64,7 +64,7 @@ In SAM, each alignment line has 11 mandatory fields and a variable number of opt
 corresponding information is unavailable.
 
 | **No.**   | **Name**   | **Description**                                       |
-| --------* | ---------* | ----------------------------------------------------* |
+| --------- | ---------- | ----------------------------------------------------- |
 | 1         | QNAME      | Query NAME of the read or the read pair               |
 | 2         | FLAG       | Bitwise FLAG (pairing,strand,mate strand,etc.)        |
 | 3         | RNAME      | Reference sequence NAME                               |
@@ -95,13 +95,13 @@ BAM is the binary representation of SAM and keeps exactly the same information a
 
 Some features:
 
-*   Quality control
-    *   reads with N errors
-    *   reads with multiple mappings
-    *   strand bias
-    *   paired-end insert
-*   Filtering: by number of errors, number of hits
-    *   Comparator: stats, intersection, ...
+-   Quality control
+    -   reads with N errors
+    -   reads with multiple mappings
+    -   strand bias
+    -   paired-end insert
+-   Filtering: by number of errors, number of hits
+    -   Comparator: stats, intersection, ...
 
 **Input:** BAM file.
 
@@ -290,47 +290,47 @@ If we want to re-launch the pipeline from stage 4 until stage 20 we should use t
 
 The pipeline calls the following tools
 
-*   [fastqc](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/), quality control tool for high throughput sequence data.
-*   [gatk](https://www.broadinstitute.org/gatk/), The Genome Analysis Toolkit or GATK is a software package developed at
+-   [fastqc](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/), quality control tool for high throughput sequence data.
+-   [gatk](https://www.broadinstitute.org/gatk/), The Genome Analysis Toolkit or GATK is a software package developed at
       the Broad Institute to analyze high-throughput sequencing data. The toolkit offers a wide variety of tools, with a primary focus on variant discovery and genotyping as well as strong emphasis on data quality assurance. Its robust architecture, powerful processing engine and high-performance computing features make it capable of taking on projects of any size.
-*   [hpg-aligner](https://github.com/opencb-hpg/hpg-aligner), HPG Aligner has been designed to align short and long reads with high sensitivity, therefore any number of mismatches or indels are allowed. HPG Aligner implements and combines two well known algorithms: _Burrows-Wheeler Transform_ (BWT) to speed-up mapping high-quality reads, and _Smith-Waterman_> (SW) to increase sensitivity when reads cannot be mapped using BWT.
-*   [hpg-fastq](http://docs.bioinfo.cipf.es/projects/fastqhpc/wiki), a quality control tool for high throughput sequence data.
-*   [hpg-variant](http://docs.bioinfo.cipf.es/projects/hpg-variant/wiki), The HPG Variant suite is an ambitious project aimed to provide a complete suite of tools to work with genomic variation data, from VCF tools to variant profiling or genomic statistics. It is being implemented using High Performance Computing technologies to provide the best performance possible.
-*   [picard](http://picard.sourceforge.net/), Picard comprises Java-based command-line utilities that manipulate SAM files, and a Java API (HTSJDK) for creating new programs that read and write SAM files. Both SAM text format and SAM binary (BAM) format are supported.
-*   [samtools](http://samtools.sourceforge.net/samtools-c.shtml), SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.
-*   [snpEff](http://snpeff.sourceforge.net/), Genetic variant annotation and effect prediction toolbox.
+-   [hpg-aligner](https://github.com/opencb-hpg/hpg-aligner), HPG Aligner has been designed to align short and long reads with high sensitivity, therefore any number of mismatches or indels are allowed. HPG Aligner implements and combines two well-known algorithms: _Burrows-Wheeler Transform_ (BWT) to speed up mapping high-quality reads, and _Smith-Waterman_ (SW) to increase sensitivity when reads cannot be mapped using BWT.
+-   [hpg-fastq](http://docs.bioinfo.cipf.es/projects/fastqhpc/wiki), a quality control tool for high throughput sequence data.
+-   [hpg-variant](http://docs.bioinfo.cipf.es/projects/hpg-variant/wiki), The HPG Variant suite is an ambitious project aimed at providing a complete suite of tools to work with genomic variation data, from VCF tools to variant profiling or genomic statistics. It is being implemented using High Performance Computing technologies to provide the best performance possible.
+-   [picard](http://picard.sourceforge.net/), Picard comprises Java-based command-line utilities that manipulate SAM files, and a Java API (HTSJDK) for creating new programs that read and write SAM files. Both SAM text format and SAM binary (BAM) format are supported.
+-   [samtools](http://samtools.sourceforge.net/samtools-c.shtml), SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.
+-   [snpEff](http://snpeff.sourceforge.net/), Genetic variant annotation and effect prediction toolbox.
 
 This listing shows which tools are used in each step of the pipeline:
 
-*   stage-00: fastqc
-*   stage-01: hpg_fastq
-*   stage-02: fastqc
-*   stage-03: hpg_aligner and samtools
-*   stage-04: samtools
-*   stage-05: samtools
-*   stage-06: fastqc
-*   stage-07: picard
-*   stage-08: fastqc
-*   stage-09: picard
-*   stage-10: gatk
-*   stage-11: gatk
-*   stage-12: gatk
-*   stage-13: gatk
-*   stage-14: gatk
-*   stage-15: gatk
-*   stage-16: samtools
-*   stage-17: samtools
-*   stage-18: fastqc
-*   stage-19: gatk
-*   stage-20: gatk
-*   stage-21: gatk
-*   stage-22: gatk
-*   stage-23: gatk
-*   stage-24: hpg-variant
-*   stage-25: hpg-variant
-*   stage-26: snpEff
-*   stage-27: snpEff
-*   stage-28: hpg-variant
+-   stage-00: fastqc
+-   stage-01: hpg_fastq
+-   stage-02: fastqc
+-   stage-03: hpg_aligner and samtools
+-   stage-04: samtools
+-   stage-05: samtools
+-   stage-06: fastqc
+-   stage-07: picard
+-   stage-08: fastqc
+-   stage-09: picard
+-   stage-10: gatk
+-   stage-11: gatk
+-   stage-12: gatk
+-   stage-13: gatk
+-   stage-14: gatk
+-   stage-15: gatk
+-   stage-16: samtools
+-   stage-17: samtools
+-   stage-18: fastqc
+-   stage-19: gatk
+-   stage-20: gatk
+-   stage-21: gatk
+-   stage-22: gatk
+-   stage-23: gatk
+-   stage-24: hpg-variant
+-   stage-25: hpg-variant
+-   stage-26: snpEff
+-   stage-27: snpEff
+-   stage-28: hpg-variant
 
 ## Interpretation
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/openfoam.md b/docs.it4i/anselm-cluster-documentation/software/openfoam.md
index 12d416f3e25e80e0646d53fde318da87123ab2b3..350340fbc31d1099001f44e5b69fb1de1a64bd4d 100644
--- a/docs.it4i/anselm-cluster-documentation/software/openfoam.md
+++ b/docs.it4i/anselm-cluster-documentation/software/openfoam.md
@@ -22,10 +22,10 @@ Naming convection of the installed versions is following:
 
 openfoam\<VERSION\>-\<COMPILER\>\<openmpiVERSION\>-\<PRECISION\>
 
-*   \<VERSION\> * version of openfoam
-*   \<COMPILER\> * version of used compiler
-*   \<openmpiVERSION\> * version of used openmpi/impi
-*   \<PRECISION\> * DP/SP – double/single precision
+-   \<VERSION\> - version of openfoam
+-   \<COMPILER\> - version of used compiler
+-   \<openmpiVERSION\> - version of used openmpi/impi
+-   \<PRECISION\> - DP/SP – double/single precision
 
 ### Available OpenFOAM Modules
 
@@ -38,7 +38,7 @@ To check available modules use
 In /opt/modules/modulefiles/engineering you can see the installed engineering software:
 
 ```bash
-    -----------------------------------* /opt/modules/modulefiles/engineering -------------------------------------------------------------
+    ------------------------------------ /opt/modules/modulefiles/engineering -------------------------------------------------------------
     ansys/14.5.x               matlab/R2013a-COM                                openfoam/2.2.1-icc-impi4.1.1.036-DP
     comsol/43b-COM             matlab/R2013a-EDU                                openfoam/2.2.1-icc-openmpi1.6.5-DP
     comsol/43b-EDU             openfoam/2.2.1-gcc481-openmpi1.6.5-DP            paraview/4.0.1-gcc481-bullxmpi1.2.4.1-osmesa10.0
@@ -58,7 +58,7 @@ To create OpenFOAM environment on ANSELM give the commands:
 ```
 
 !!! Note
-     Please load correct module with your requirements “compiler * GCC/ICC, precision * DP/SP”.
+	Please load the correct module matching your requirements: “compiler - GCC/ICC, precision - DP/SP”.
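 
 For example, a module built with the Intel compiler, OpenMPI and double precision (one of the variants listed above) might be loaded like this:
 
 ```bash
     $ module load openfoam/2.2.1-icc-openmpi1.6.5-DP
 ```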
 
 Create a project directory within the $HOME/OpenFOAM directory named \<USER\>-\<OFversion\> and create a directory named run within it, e.g. by typing:
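 
 For example (a sketch; substitute \<USER\> with your login and \<OFversion\> with the loaded version):
 
 ```bash
     $ mkdir -p /home/<USER>/OpenFOAM/<USER>-<OFversion>/run
 ```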
 
@@ -72,7 +72,7 @@ Project directory is now available by typing:
     $ cd /home/<USER>/OpenFOAM/<USER>-<OFversion>/run
 ```
 
-\<OFversion\> * for example \<2.2.1\>
+\<OFversion\> - for example \<2.2.1\>
 
 or
 
@@ -116,12 +116,12 @@ For information about job submission please [look here](../resource-allocation-a
 
 ## Running Applications in Parallel
 
-Run the second case for example external incompressible turbulent flow * case * motorBike.
+Run the second case, for example the external incompressible turbulent flow case - motorBike.
 
 First we must run the serial applications blockMesh and decomposePar to prepare the parallel computation.
 
 !!! Note
-     Create a Bash scrip test.sh:
+	Create a Bash script test.sh:
 
 ```bash
     #!/bin/bash
@@ -146,7 +146,7 @@ Job submission
 This job creates a simple block mesh and domain decomposition. Check your decomposition, and submit the parallel computation:
 
 !!! Note
-     Create a PBS script testParallel.pbs:
+	Create a PBS script testParallel.pbs:
 
 ```bash
     #!/bin/bash
diff --git a/docs.it4i/anselm-cluster-documentation/software/operating-system.md b/docs.it4i/anselm-cluster-documentation/software/operating-system.md
index 7459897a9db4a5c2e09beb8ba48c2e2ab040cd48..6ecbcdabd665b51018b453e7fd043c161dfef3fe 100644
--- a/docs.it4i/anselm-cluster-documentation/software/operating-system.md
+++ b/docs.it4i/anselm-cluster-documentation/software/operating-system.md
@@ -1,5 +1,5 @@
 # Operating System
 
-The operating system on Anselm is Linux * **bullx Linux Server release 6.x**
+The operating system on Anselm is Linux - **bullx Linux Server release 6.x**.
 
 bullx Linux is based on Red Hat Enterprise Linux. bullx Linux is a Linux distribution provided by Bull and dedicated to HPC applications.
diff --git a/docs.it4i/anselm-cluster-documentation/storage.md b/docs.it4i/anselm-cluster-documentation/storage.md
index 76d088648845dc3ed2d064178f61f24d88155b95..67a08d875a9c13e397413bfc37b5e277be69178d 100644
--- a/docs.it4i/anselm-cluster-documentation/storage.md
+++ b/docs.it4i/anselm-cluster-documentation/storage.md
@@ -27,7 +27,7 @@ There is default stripe configuration for Anselm Lustre filesystems. However, us
 3.  stripe_offset - The index of the OST where the first stripe is to be placed; default is -1 which results in random selection; using a non-default value is NOT recommended.
 
 !!! Note
-     Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
+	Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
 
 Use the lfs getstripe command to get the stripe parameters and the lfs setstripe command to set them for optimal I/O performance. The correct stripe setting depends on your needs and file access patterns.
 
@@ -61,14 +61,14 @@ $ man lfs
 ### Hints on Lustre Striping
 
 !!! Note
-     Increase the stripe_count for parallel I/O to the same file.
+	Increase the stripe_count for parallel I/O to the same file.
 
 When multiple processes are writing blocks of data to the same file in parallel, the I/O performance for large files will improve when the stripe_count is set to a larger value. The stripe count sets the number of OSTs the file will be written to. By default, the stripe count is set to 1. While this default setting provides for efficient access of metadata (for example to support the ls -l command), large files should use stripe counts of greater than 1. This will increase the aggregate I/O bandwidth by using multiple OSTs in parallel instead of just one. A rule of thumb is to use a stripe count approximately equal to the number of gigabytes in the file.
 
 Another good practice is to make the stripe count an integral factor of the number of processes performing the write in parallel, so that you achieve load balance among the OSTs. For example, set the stripe count to 16 instead of 15 when you have 64 processes performing the writes.
 
 !!! Note
-     Using a large stripe size can improve performance when accessing very large files
+	Using a large stripe size can improve performance when accessing very large files
 
 Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file.
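 
 As a sketch, the stripe parameters might be set and verified like this (directory path and values are illustrative; note that older Lustre releases use -s instead of -S for the stripe size):
 
 ```bash
 $ lfs setstripe -c 16 -S 4M /scratch/$USER/mydata
 $ lfs getstripe /scratch/$USER/mydata
 ```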
 
@@ -80,30 +80,30 @@ The  architecture of Lustre on Anselm is composed of two metadata servers (MDS)
 
  Configuration of the storages:
 
-*   HOME Lustre object storage
-    *   One disk array NetApp E5400
-    *   22 OSTs
-    *   227 2TB NL-SAS 7.2krpm disks
-    *   22 groups of 10 disks in RAID6 (8+2)
-    *   7 hot-spare disks
-*   SCRATCH Lustre object storage
-    *   Two disk arrays NetApp E5400
-    *   10 OSTs
-    *   106 2TB NL-SAS 7.2krpm disks
-    *   10 groups of 10 disks in RAID6 (8+2)
-    *   6 hot-spare disks
-*   Lustre metadata storage
-    *   One disk array NetApp E2600
-    *   12 300GB SAS 15krpm disks
-    *   2 groups of 5 disks in RAID5
-    *   2 hot-spare disks
+-   HOME Lustre object storage
+    -   One disk array NetApp E5400
+    -   22 OSTs
+    -   227 2TB NL-SAS 7.2krpm disks
+    -   22 groups of 10 disks in RAID6 (8+2)
+    -   7 hot-spare disks
+-   SCRATCH Lustre object storage
+    -   Two disk arrays NetApp E5400
+    -   10 OSTs
+    -   106 2TB NL-SAS 7.2krpm disks
+    -   10 groups of 10 disks in RAID6 (8+2)
+    -   6 hot-spare disks
+-   Lustre metadata storage
+    -   One disk array NetApp E2600
+    -   12 300GB SAS 15krpm disks
+    -   2 groups of 5 disks in RAID5
+    -   2 hot-spare disks
 
 ### HOME
 
 The HOME filesystem is mounted in directory /home. Users' home directories /home/username reside on this filesystem. Accessible capacity is 320TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250GB per user. If 250GB should prove insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.
 
 !!! Note
-     The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active Projects.
+	The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active Projects.
 
 The HOME filesystem should not be used to archive data of past Projects or other unrelated data.
 
@@ -115,10 +115,10 @@ The HOME filesystem is realized as Lustre parallel filesystem and is available o
 Default stripe size is 1MB, stripe count is 1. There are 22 OSTs dedicated for the HOME filesystem.
 
 !!! Note
-     Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
+	Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
 
 | HOME filesystem      |        |
-| -------------------* | -----* |
+| -------------------- | ------ |
 | Mountpoint           | /home  |
 | Capacity             | 320 TB |
 | Throughput           | 2 GB/s |
@@ -132,7 +132,7 @@ Default stripe size is 1MB, stripe count is 1. There are 22 OSTs dedicated for t
 The SCRATCH filesystem is mounted in directory /scratch. Users may freely create subdirectories and files on the filesystem. Accessible capacity is 146TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 100TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and denying service to other users. If 100TB should prove insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.
 
 !!! Note
-     The Scratch filesystem is intended  for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory.
+	The SCRATCH filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory.
 
     Users are advised to save the necessary data from the SCRATCH filesystem to the HOME filesystem after the calculations and clean up the scratch files.
 
@@ -141,10 +141,10 @@ The SCRATCH filesystem is mounted in directory /scratch. Users may freely create
 The SCRATCH filesystem is realized as a Lustre parallel filesystem and is available from all login and computational nodes. Default stripe size is 1MB, stripe count is 1. There are 10 OSTs dedicated for the SCRATCH filesystem.
 
 !!! Note
-     Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
+	Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
 
 | SCRATCH filesystem   |          |
-| -------------------* | -------* |
+| -------------------- | -------- |
 | Mountpoint           | /scratch |
 | Capacity             | 146TB    |
 | Throughput           | 6GB/s    |
@@ -167,10 +167,10 @@ Example for Lustre HOME directory:
 $ lfs quota /home
 Disk quotas for user user001 (uid 1234):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
-         /home  300096       0 250000000       *    2102       0  500000    -
+         /home  300096       0 250000000       -    2102       0  500000    -
 Disk quotas for group user001 (gid 1234):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
-        /home  300096       0       0       *    2102       0       0       -
+        /home  300096       0       0       -    2102       0       0       -
 ```
 
 In this example, we see the current quota limit of 250GB, with 300MB currently used by user001.
@@ -181,10 +181,10 @@ Example for Lustre SCRATCH directory:
 $ lfs quota /scratch
 Disk quotas for user user001 (uid 1234):
      Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
-          /scratch       8       0 100000000000       *       3       0       0       -
+          /scratch       8       0 100000000000       -       3       0       0       -
 Disk quotas for group user001 (gid 1234):
  Filesystem kbytes quota limit grace files quota limit grace
- /scratch       8       0       0       *       3       0       0       -
+ /scratch       8       0       0       -       3       0       0       -
 ```
 
 In this example, we see the current quota limit of 100TB, with 8KB currently used by user001.
@@ -229,7 +229,7 @@ ACLs on a Lustre file system work exactly like ACLs on any Linux file system. Th
 [vop999@login1.anselm ~]$ umask 027
 [vop999@login1.anselm ~]$ mkdir test
 [vop999@login1.anselm ~]$ ls -ld test
-drwxr-x--* 2 vop999 vop999 4096 Nov  5 14:17 test
+drwxr-x--- 2 vop999 vop999 4096 Nov  5 14:17 test
 [vop999@login1.anselm ~]$ getfacl test
 # file: test
 # owner: vop999
@@ -261,7 +261,7 @@ Default ACL mechanism can be used to replace setuid/setgid permissions on direct
 ### Local Scratch
 
 !!! Note
-     Every computational node is equipped with 330GB local scratch disk.
+	Every computational node is equipped with 330GB local scratch disk.
 
 Use the local scratch in case you need to access a large number of small files during your calculation.
 
@@ -270,10 +270,10 @@ The local scratch disk is mounted as /lscratch and is accessible to user at /lsc
 The local scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs that access a large number of small files within the calculation must use the local scratch filesystem as their working directory. This is required for performance reasons, as frequent access to a large number of small files may overload the metadata servers (MDS) of the Lustre filesystem.
 
 !!! Note
-     The local scratch directory /lscratch/$PBS_JOBID will be deleted immediately after the calculation end. Users should take care to save the output data from within the jobscript.
+	The local scratch directory /lscratch/$PBS_JOBID will be deleted immediately after the calculation end. Users should take care to save the output data from within the jobscript.
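 
 A minimal jobscript sketch (myprog.x and the file names are placeholders):
 
 ```bash
 #!/bin/bash
 SCRDIR=/lscratch/$PBS_JOBID
 cp $PBS_O_WORKDIR/input.dat $SCRDIR        # stage input to the node-local scratch
 cd $SCRDIR
 ./myprog.x input.dat > output.dat          # compute on the fast local disk
 cp output.dat $PBS_O_WORKDIR               # save results - /lscratch/$PBS_JOBID is deleted when the job ends
 ```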
 
 | local SCRATCH filesystem |                      |
-| -----------------------* | -------------------* |
+| ------------------------ | -------------------- |
 | Mountpoint               | /lscratch            |
 | Accesspoint              | /lscratch/$PBS_JOBID |
 | Capacity                 | 330GB                |
@@ -285,17 +285,17 @@ The local scratch filesystem is intended  for temporary scratch data generated d
 Every computational node is equipped with a filesystem realized in memory, the so-called RAM disk.
 
 !!! Note
-     Use RAM disk in case you need really fast access to your data of limited size during your calculation. Be very careful, use of RAM disk filesystem is at the expense of operational memory.
+	Use the RAM disk in case you need really fast access to data of limited size during your calculation. Be very careful, use of the RAM disk filesystem is at the expense of operational memory.
 
 The local RAM disk is mounted as /ramdisk and is accessible to the user at the /ramdisk/$PBS_JOBID directory.
 
 The local RAM disk filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. The size of the RAM disk filesystem is limited; be very careful, as its use is at the expense of operational memory. It is not recommended to allocate a large amount of memory and use a large amount of data in the RAM disk filesystem at the same time.
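 
 As a sketch, an application that honours the common TMPDIR convention might be pointed at the RAM disk from within the jobscript:
 
 ```bash
 export TMPDIR=/ramdisk/$PBS_JOBID    # temporary files then land on the RAM disk
 ```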
 
 !!! Note
-     The local RAM disk directory /ramdisk/$PBS_JOBID will be deleted immediately after the calculation end. Users should take care to save the output data from within the jobscript.
+	The local RAM disk directory /ramdisk/$PBS_JOBID will be deleted immediately after the calculation end. Users should take care to save the output data from within the jobscript.
 
 | RAM disk    |                                                                                                         |
-| ----------* | ------------------------------------------------------------------------------------------------------* |
+| ----------- | ------------------------------------------------------------------------------------------------------- |
 | Mountpoint  | /ramdisk                                                                                                |
 | Accesspoint | /ramdisk/$PBS_JOBID                                                                                     |
 | Capacity    | 60GB at compute nodes without accelerator, 90GB at compute nodes with accelerator, 500GB at fat nodes   |
@@ -309,7 +309,7 @@ Each node is equipped with local /tmp directory of few GB capacity. The /tmp dir
 ## Summary
 
 | Mountpoint | Usage                     | Protocol | Net Capacity   | Throughput | Limitations | Access                  | Services                    |        |
-| ---------* | ------------------------* | -------* | -------------* | ---------* | ----------* | ----------------------* | --------------------------* | -----* |
+| ---------- | ------------------------- | -------- | -------------- | ---------- | ----------- | ----------------------- | --------------------------- | ------ |
 | /home      | home directory            | Lustre   | 320 TiB        | 2 GB/s     | Quota 250GB | Compute and login nodes | backed up                   |        |
 | /scratch   | cluster shared jobs' data | Lustre   | 146 TiB        | 6 GB/s     | Quota 100TB | Compute and login nodes | files older than 90 days removed |        |
 | /lscratch  | node local jobs' data     | local    | 330 GB         | 100 MB/s   | none        | Compute nodes           | purged after job ends       |        |
@@ -321,7 +321,7 @@ Each node is equipped with local /tmp directory of few GB capacity. The /tmp dir
 Do not use the shared filesystems at IT4Innovations as a backup for large amounts of data or for long-term archiving purposes.
 
 !!! Note
-     The IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use [CESNET Storage service](https://du.cesnet.cz/).
+	The IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use [CESNET Storage service](https://du.cesnet.cz/).
 
 The CESNET Storage service can be used for research purposes, mainly by academic staff and students of research institutions in the Czech Republic.
 
@@ -340,14 +340,14 @@ The procedure to obtain the CESNET access is quick and trouble-free.
 ### Understanding CESNET Storage
 
 !!! Note
-     It is very important to understand the CESNET storage before uploading data. Please read <https://du.cesnet.cz/en/navody/home-migrace-plzen/start> first.
+	It is very important to understand the CESNET storage before uploading data. Please read <https://du.cesnet.cz/en/navody/home-migrace-plzen/start> first.
 
 Once registered for CESNET Storage, you may [access the storage](https://du.cesnet.cz/en/navody/faq/start) in a number of ways. We recommend the SSHFS and RSYNC methods.
 
 ### SSHFS Access
 
 !!! Note
-     SSHFS: The storage will be mounted like a local hard drive
+	SSHFS: The storage will be mounted like a local hard drive
 
 The SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it were a local removable hard drive. Files can then be copied in and out in the usual fashion.
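 
 A minimal sketch of mounting and unmounting (the CESNET host name is illustrative - use the one assigned to you):
 
 ```bash
 $ mkdir cesnet
 $ sshfs username@ssh.du1.cesnet.cz:. cesnet/    # mount your remote home onto ./cesnet
 $ cp myfile cesnet/                             # copy files as usual
 $ fusermount -u cesnet                          # unmount when done
 ```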
 
@@ -392,7 +392,7 @@ Once done, please remember to unmount the storage
 ### Rsync Access
 
 !!! Note
-     Rsync provides delta transfer for best performance, can resume interrupted transfers
+	Rsync provides delta transfer for best performance and can resume interrupted transfers
 
 Rsync is a fast and extraordinarily versatile file copying tool. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination.  Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.
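 
 A minimal transfer sketch (host name and paths are illustrative):
 
 ```bash
 $ rsync -av --progress mydata/ username@ssh.du1.cesnet.cz:./mydata/    # resumable; sends only differences
 ```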
 
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
index 300bb091ab7cf8d28cde217fa427214ac75aa30f..1882a7515ee49fcff3e51e38266f7d2248e6cc87 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
@@ -45,7 +45,7 @@ There are variety of X servers available for Windows environment. The commercial
 [XWin](http://x.cygwin.com/) X server by Cygwin
 
 | How to use Xwin                                                                                                                                                                                                         | How to use Xming                                                                                     |
-| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* | ---------------------------------------------------------------------------------------------------* |
+| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
 | [Install Cygwin](http://x.cygwin.com/) Find and execute XWin.exe to start the X server on the Windows desktop computer. [If not able to forward X11 using PuTTY to CygwinX](#if-no-able-to-forward-x11-using-putty-to-cygwinx) | Use Xlaunch to configure Xming. Run Xming to start the X server on the Windows desktop computer. |
 
 Read more on [http://www.math.umn.edu/systems_guide/putty_xwin32.shtml](http://www.math.umn.edu/systems_guide/putty_xwin32.shtml)
@@ -110,7 +110,7 @@ yourname@login1.cluster-namen.it4i.cz $ gnome-session &
 On older systems where Xephyr is not available, you may also try Xnest instead of Xephyr. Another option is to launch a new X server in a separate console, via:
 
 ```bash
-xinit /usr/bin/ssh -XT -i .ssh/path_to_your_key yourname@cluster-namen.it4i.cz gnome-session -* :1 vt12
+xinit /usr/bin/ssh -XT -i .ssh/path_to_your_key yourname@cluster-name.it4i.cz gnome-session -- :1 vt12
 ```
 
 However, this method does not seem to work with recent Linux distributions and you will need to manually source
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md
index 6db6df34457f6aed1f2c51a72055036e04da5f5e..517076b07d5894e4b120e09900bd936d57cc7d96 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md
@@ -13,45 +13,45 @@ We recommned you to download "**A Windows installer for everything except PuTTYt
 
     "Pageant" is optional.
 
-## PuTTY * How to Connect to the IT4Innovations Cluster
+## PuTTY - How to Connect to the IT4Innovations Cluster
 
-*   Run PuTTY
-*   Enter Host name and Save session fields with [Login address](../../../salomon/shell-and-data-access.md) and browse Connection *  SSH * Auth menu. The _Host Name_ input may be in the format **"username@clustername.it4i.cz"** so you don't have to type your login each time.In this example we will connect to the Salomon cluster using **"salomon.it4i.cz"**.
+-   Run PuTTY
+-   Enter the Host Name and Saved Sessions fields with the [Login address](../../../salomon/shell-and-data-access.md) and browse the Connection - SSH - Auth menu. The _Host Name_ input may be in the format **"username@clustername.it4i.cz"** so you don't have to type your login each time. In this example we will connect to the Salomon cluster using **"salomon.it4i.cz"**.
 
 ![](../../../img/PuTTY_host_Salomon.png)
 
-*   Category * Connection *  SSH * Auth:
+-   Category - Connection - SSH - Auth:
       Select Attempt authentication using Pageant.
       Select Allow agent forwarding.
       Browse and select your [private key](ssh-keys/) file.
 
 ![](../../../img/PuTTY_keyV.png)
 
-*   Return to Session page and Save selected configuration with _Save_ button.
+-   Return to Session page and Save selected configuration with _Save_ button.
 
 ![](../../../img/PuTTY_save_Salomon.png)
 
-*   Now you can log in using _Open_ button.
+-   Now you can log in using _Open_ button.
 
 ![](../../../img/PuTTY_open_Salomon.png)
 
-*   Enter your username if the _Host Name_ input is not in the format "username@salomon.it4i.cz".
-*   Enter passphrase for selected [private key](ssh-keys/) file if Pageant **SSH authentication agent is not used.**
+-   Enter your username if the _Host Name_ input is not in the format "username@salomon.it4i.cz".
+-   Enter passphrase for selected [private key](ssh-keys/) file if Pageant **SSH authentication agent is not used.**
 
 ## Other PuTTY Settings
 
-*   Category * Windows * Translation * Remote character set and select **UTF-8**.
-*   Category * Terminal * Features and select **Disable application keypad mode** (enable numpad)
-*   Save your configuration on Session page in to Default Settings with _Save_ button.
+-   Category - Window - Translation - Remote character set and select **UTF-8**.
+-   Category - Terminal - Features and select **Disable application keypad mode** (enable numpad).
+-   Save your configuration on the Session page into Default Settings with the _Save_ button.
 
 ## Pageant SSH Agent
 
 Pageant holds your private key in memory without needing to retype a passphrase on every login.
 
-*   Run Pageant.
-*   On Pageant Key List press _Add key_ and select your private key (id_rsa.ppk).
-*   Enter your passphrase.
-*   Now you have your private key in memory without needing to retype a passphrase on every login.
+-   Run Pageant.
+-   On Pageant Key List press _Add key_ and select your private key (id_rsa.ppk).
+-   Enter your passphrase.
+-   Now you have your private key in memory without needing to retype a passphrase on every login.
 
 ![](../../../img/PageantV.png)
 
@@ -63,11 +63,11 @@ PuTTYgen is the PuTTY key generator. You can load in an existing private key and
 
 You can change the password of your SSH key with the "PuTTY Key Generator". Make sure to back up the key.
 
-*   Load your [private key](../shell-access-and-data-transfer/ssh-keys/) file with _Load_ button.
-*   Enter your current passphrase.
-*   Change key passphrase.
-*   Confirm key passphrase.
-*   Save your private key with _Save private key_ button.
+-   Load your [private key](../shell-access-and-data-transfer/ssh-keys/) file with _Load_ button.
+-   Enter your current passphrase.
+-   Change key passphrase.
+-   Confirm key passphrase.
+-   Save your private key with _Save private key_ button.
 
 ![](../../../img/PuttyKeygeneratorV.png)
 
@@ -75,33 +75,33 @@ You can change the password of your SSH key with "PuTTY Key Generator". Make sur
 
 You can generate an additional public/private key pair and insert the public key into the authorized_keys file for authentication with your own private key.
 
-*   Start with _Generate_ button.
+-   Start with _Generate_ button.
 
 ![](../../../img/PuttyKeygenerator_001V.png)
 
-*   Generate some randomness.
+-   Generate some randomness.
 
 ![](../../../img/PuttyKeygenerator_002V.png)
 
-*   Wait.
+-   Wait.
 
 ![](../../../img/PuttyKeygenerator_003V.png)
 
-*   Enter a _comment_ for your key using format 'username@organization.example.com'.
+-   Enter a _comment_ for your key using the format 'username@organization.example.com'.
      Enter the key passphrase.
      Confirm the key passphrase.
      Save your new private key in ".ppk" format with the _Save private key_ button.
 
 ![](../../../img/PuttyKeygenerator_004V.png)
 
-*   Save the public key with _Save public key_ button.
+-   Save the public key with _Save public key_ button.
       You can copy the public key out of the ‘Public key for pasting into authorized_keys file’ box.
 
 ![](../../../img/PuttyKeygenerator_005V.png)
 
-*   Export private key in OpenSSH format "id_rsa" using Conversion * Export OpenSSH key
+-   Export the private key in OpenSSH format "id_rsa" using Conversions - Export OpenSSH key.
 
 ![](../../../img/PuttyKeygenerator_006V.png)
 
-*   Now you can insert additional public key into authorized_keys file for authentication with your own private key.
+-   Now you can insert the additional public key into the authorized_keys file for authentication with your own private key.
      You must log in using the ssh key received after registration. Then proceed to [How to add your own key](../shell-access-and-data-transfer/ssh-keys/).
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
index 52495740ea3bbedea27509864dc2ddb2ef04125c..4fbb8aabfb59ea2c3d1f0b64c303ab0c70e247e9 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
@@ -8,12 +8,12 @@ After logging in, you can see .ssh/ directory with SSH keys and authorized_keys
     $ cd /home/username/
     $ ls -la .ssh/
     total 24
-    drwx-----* 2 username username 4096 May 13 15:12 .
+    drwx------ 2 username username 4096 May 13 15:12 .
     drwxr-x--- 22 username username 4096 May 13 07:22 ..
-    -rw-r--r-* 1 username username  392 May 21  2014 authorized_keys
-    -rw------* 1 username username 1675 May 21  2014 id_rsa
-    -rw------* 1 username username 1460 May 21  2014 id_rsa.ppk
-    -rw-r--r-* 1 username username  392 May 21  2014 id_rsa.pub
+    -rw-r--r-- 1 username username  392 May 21  2014 authorized_keys
+    -rw------- 1 username username 1675 May 21  2014 id_rsa
+    -rw------- 1 username username 1460 May 21  2014 id_rsa.ppk
+    -rw-r--r-- 1 username username  392 May 21  2014 id_rsa.pub
 ```
 
 !!! Hint
@@ -21,9 +21,9 @@ After logging in, you can see .ssh/ directory with SSH keys and authorized_keys
 
 ## Access Privileges on .ssh Folder
 
-*   .ssh directory: 700 (drwx------)
-*   Authorized_keys, known_hosts and public key (.pub file): 644 (-rw-r--r--)
-*   Private key (id_rsa/id_rsa.ppk): 600 (-rw-------)
+-   .ssh directory: 700 (drwx------)
+-   authorized_keys, known_hosts and public key (.pub file): 644 (-rw-r--r--)
+-   Private key (id_rsa/id_rsa.ppk): 600 (-rw-------)
 
 ```bash
     cd /home/username/
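     # for example, apply the permissions listed above (illustrative commands):
     chmod 700 .ssh/
     chmod 644 .ssh/authorized_keys .ssh/id_rsa.pub
     chmod 600 .ssh/id_rsa .ssh/id_rsa.ppk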
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md
index 3605df31d6cde89f19d1e02a0196ea4c437bc03a..f6c526e40b8dce0f55b6e44264476658734f8a22 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md
@@ -1,17 +1,17 @@
-# VPN * Connection fail in Win 8.1
+# VPN - Connection fail in Win 8.1
 
-## Failed to Initialize Connection Subsystem Win 8.1 * 02-10-15 MS Patch
+## Failed to Initialize Connection Subsystem Win 8.1 - 02-10-15 MS Patch
 
 AnyConnect users on Windows 8.1 will receive a "Failed to initialize connection subsystem" error after installing the Windows 8.1 02/10/15 security patch. This OS defect, introduced with the 02/10/15 patch update, will also impact Windows 7 users with IE11. Windows Server 2008/2012 are also impacted by this defect, but neither is a supported OS for AnyConnect.
 
 ## Workaround
 
-*   Close the Cisco AnyConnect Window and the taskbar mini-icon
-*   Right click vpnui.exe in the 'Cisco AnyConnect Secure Mobility Client' folder. (C:Program Files (x86)CiscoCisco AnyConnect Secure Mobility Client)
-*   Click on the 'Run compatibility troubleshooter' button
-*   Choose 'Try recommended settings'
-*   The wizard suggests Windows 8 compatibility.
-*   Click 'Test Program'. This will open the program.
-*   Close
+-   Close the Cisco AnyConnect Window and the taskbar mini-icon
+-   Right-click vpnui.exe in the 'Cisco AnyConnect Secure Mobility Client' folder (C:\Program Files (x86)\Cisco\Cisco AnyConnect Secure Mobility Client)
+-   Click on the 'Run compatibility troubleshooter' button
+-   Choose 'Try recommended settings'
+-   The wizard suggests Windows 8 compatibility.
+-   Click 'Test Program'. This will open the program.
+-   Close
 
 ![](../../../img/vpnuiV.png)
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn-access.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn-access.md
index d33b2e20b89686147f0b850c31c17748e762d32c..6c1e908d908e3124248255831e85f105b203a76d 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn-access.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn-access.md
@@ -4,12 +4,12 @@
 
 To use resources and licenses located on the IT4Innovations local network, it is necessary to connect to this network via VPN. We use the Cisco AnyConnect Secure Mobility Client, which is supported on the following operating systems:
 
-*   Windows XP
-*   Windows Vista
-*   Windows 7
-*   Windows 8
-*   Linux
-*   MacOS
+-   Windows XP
+-   Windows Vista
+-   Windows 7
+-   Windows 8
+-   Linux
+-   MacOS
 
 It is impossible to connect to VPN from other operating systems.
 
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn1-access.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn1-access.md
index ad112d4518b18f184b18674322624dfa074d11f3..376f97240b1a9bc0bc970348f4f7eb8c7123b7e4 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn1-access.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn1-access.md
@@ -3,18 +3,18 @@
 ## Accessing IT4Innovations Internal Resources via VPN
 
 !!! Note
-    **Failed to initialize connection subsystem Win 8.1 * 02-10-15 MS patch**
+    **Failed to initialize connection subsystem Win 8.1 - 02-10-15 MS patch**
 
 A workaround can be found at [vpn-connection-fail-in-win-8.1](../../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.html)
 
 To use resources and licenses located on the IT4Innovations local network, it is necessary to connect to this network via VPN. We use the Cisco AnyConnect Secure Mobility Client, which is supported on the following operating systems:
 
-*   Windows XP
-*   Windows Vista
-*   Windows 7
-*   Windows 8
-*   Linux
-*   MacOS
+-   Windows XP
+-   Windows Vista
+-   Windows 7
+-   Windows 8
+-   Linux
+-   MacOS
 
 It is impossible to connect to VPN from other operating systems.
 
diff --git a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
index 3b55f2e11fe8f1604a9ae97d5a0c5a9ebbdbd66d..b4d4ceda0b0265fef832d8e5298f462fb4074acb 100644
--- a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
+++ b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
@@ -8,10 +8,10 @@ IT4Innovations employs X.509 certificates for secure communication (e. g. creden
 
 There are different kinds of certificates, each with a different scope of use. We mention here:
 
-*   User (Private) certificates
-*   Certificate Authority (CA) certificates
-*   Host certificates
-*   Service certificates
+-   User (Private) certificates
+-   Certificate Authority (CA) certificates
+-   Host certificates
+-   Service certificates
 
 However, users need only manage User and CA certificates. Note that your user certificate is protected by an associated private key, and this **private key must never be disclosed**.
 
@@ -31,7 +31,7 @@ Yes, provided that the CA which provides this service is also a member of IGTF.
 
 ## Q: Does IT4Innovations Support the TERENA Certificate Service?
 
- Yes, ITInnovations supports TERENA eScience personal certificates. For more information, please visit [TCS * Trusted Certificate Service](https://tcs-escience-portal.terena.org/), where you also can find if your organisation/country can use this service
+ Yes, IT4Innovations supports TERENA eScience personal certificates. For more information, please visit [TCS - Trusted Certificate Service](https://tcs-escience-portal.terena.org/), where you can also find out whether your organisation/country can use this service.
 
 ## Q: What Format Should My Certificate Take?
 
diff --git a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
index ddbb2d792fc7e0ab11ee0e647fe5a26496978244..bc3a4c49b270d9c9f979d2c043dbd68025a05fff 100644
--- a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
+++ b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
@@ -24,8 +24,8 @@ This is a preferred way of granting access to project resources. Please, use thi
 
 Log in to the [IT4I Extranet portal](https://extranet.it4i.cz) using IT4I credentials and go to the **Projects** section.
 
-*   **Users:** Please, submit your requests for becoming a project member.
-*   **Primary Investigators:** Please, approve or deny users' requests in the same section.
+-   **Users:** Please, submit your requests for becoming a project member.
+-   **Primary Investigators:** Please, approve or deny users' requests in the same section.
 
 ## Authorization by E-Mail (An Alternative Approach)
 
@@ -120,7 +120,7 @@ We accept personal certificates issued by any widely respected certification aut
 
 The certificate generation process is well described here:
 
-*   [How to generate a personal TCS certificate in Mozilla Firefox web browser (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-gen)
+-   [How to generate a personal TCS certificate in Mozilla Firefox web browser (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-gen)
 
 A FAQ about certificates can be found here: [Certificates FAQ](certificates-faq/).
 
@@ -128,19 +128,19 @@ A FAQ about certificates can be found here: [Certificates FAQ](certificates-faq/
 
 Follow these steps **only** if you cannot obtain your certificate in a standard way. In case you choose this procedure, please attach a **scan of photo ID** (personal ID, passport, or driver's license) when applying for [login credentials](obtaining-login-credentials/#the-login-credentials).
 
-*   Go to [CAcert](www.cacert.org).
-    *   If there's a security warning, just acknowledge it.
-*   Click _Join_.
-*   Fill in the form and submit it by the _Next_ button.
-    *   Type in the e-mail address which you use for communication with us.
-    *   Don't forget your chosen _Pass Phrase_.
-*   You will receive an e-mail verification link. Follow it.
-*   After verifying, go to the CAcert's homepage and login using     _Password Login_.
-*   Go to _Client Certificates_ _New_.
-*   Tick _Add_ for your e-mail address and click the _Next_ button.
-*   Click the _Create Certificate Request_ button.
-*   You'll be redirected to a page from where you can download/install your certificate.
-    *   Simultaneously you'll get an e-mail with a link to the certificate.
+-   Go to [CAcert](http://www.cacert.org).
+    -   If there's a security warning, just acknowledge it.
+-   Click _Join_.
+-   Fill in the form and submit it with the _Next_ button.
+    -   Type in the e-mail address which you use for communication with us.
+    -   Don't forget your chosen _Pass Phrase_.
+-   You will receive an e-mail verification link. Follow it.
+-   After verifying, go to the CAcert homepage and log in using _Password Login_.
+-   Go to _Client Certificates_ - _New_.
+-   Tick _Add_ for your e-mail address and click the _Next_ button.
+-   Click the _Create Certificate Request_ button.
+-   You'll be redirected to a page from where you can download/install your certificate.
+    -   Simultaneously you'll get an e-mail with a link to the certificate.
 
 ## Installation of the Certificate Into Your Mail Client
 
@@ -148,13 +148,13 @@ The procedure is similar to the following guides:
 
 MS Outlook 2010
 
-*   [How to Remove, Import, and Export Digital certificates](http://support.microsoft.com/kb/179380)
-*   [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/outl-cert-imp)
+-   [How to Remove, Import, and Export Digital certificates](http://support.microsoft.com/kb/179380)
+-   [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/outl-cert-imp)
 
 Mozilla Thunderbird
 
-*   [Installing an SMIME certificate](http://kb.mozillazine.org/Installing_an_SMIME_certificate)
-*   [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-imp)
+-   [Installing an SMIME certificate](http://kb.mozillazine.org/Installing_an_SMIME_certificate)
+-   [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-imp)
 
 ## End of User Account Lifecycle
 
@@ -162,8 +162,8 @@ User accounts are supported by membership in active Project(s) or by affiliation
 
 The user will get 3 automatically generated warning e-mail messages about the pending removal:
 
-*   First message will be sent 3 months before the removal
-*   Second message will be sent 1 month before the removal
-*   Third message will be sent 1 week before the removal.
+-   First message will be sent 3 months before the removal
+-   Second message will be sent 1 month before the removal
+-   Third message will be sent 1 week before the removal
 
 The messages will inform the user about the projected removal date and challenge her/him to migrate the data.
diff --git a/docs.it4i/index.md b/docs.it4i/index.md
index b5a5024ca64c0dbee0b00858dec3ed4349f591d5..1ed5efa206f87cc48a5e2c4bd25d1a3e513478f5 100644
--- a/docs.it4i/index.md
+++ b/docs.it4i/index.md
@@ -29,17 +29,17 @@ In many cases, you will run your own code on the cluster. In order to fully expl
 
 ## Terminology Frequently Used on These Pages
 
-*   **node:** a computer, interconnected by network to other computers * Computational nodes are powerful computers, designed and dedicated for executing demanding scientific computations.
-*   **core:** processor core, a unit of processor, executing computations
-*   **corehours:** wall clock hours of processor core time * Each node is equipped with **X** processor cores, provides **X** corehours per 1 wall clock hour.
-*   **job:** a calculation running on the supercomputer * The job allocates and utilizes resources of the supercomputer for certain time.
-*   **HPC:** High Performance Computing
-*   **HPC (computational) resources:** corehours, storage capacity, software licences
-*   **code:** a program
-*   **primary investigator (PI):** a person responsible for execution of computational project and utilization of computational resources allocated to that project
-*   **collaborator:** a person participating on execution of computational project and utilization of computational resources allocated to that project
-*   **project:** a computational project under investigation by the PI * The project is identified by the project ID. The computational resources are allocated and charged per project.
-*   **jobscript:** a script to be executed by the PBS Professional workload manager
+-   **node:** a computer, interconnected by a network to other computers - Computational nodes are powerful computers, designed and dedicated for executing demanding scientific computations.
+-   **core:** processor core, a unit of the processor, executing computations
+-   **corehours:** wall clock hours of processor core time - Each node is equipped with **X** processor cores and provides **X** corehours per 1 wall clock hour (see the worked example below this list).
+-   **job:** a calculation running on the supercomputer - The job allocates and utilizes resources of the supercomputer for a certain time.
+-   **HPC:** High Performance Computing
+-   **HPC (computational) resources:** corehours, storage capacity, software licences
+-   **code:** a program
+-   **primary investigator (PI):** a person responsible for the execution of the computational project and the utilization of the computational resources allocated to that project
+-   **collaborator:** a person participating in the execution of the computational project and the utilization of the computational resources allocated to that project
+-   **project:** a computational project under investigation by the PI - The project is identified by the project ID. The computational resources are allocated and charged per project.
+-   **jobscript:** a script to be executed by the PBS Professional workload manager
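 
 As a worked example (assuming a 16-core node, as on Anselm): a job occupying one such node for 10 wall clock hours consumes 16 x 10 = 160 corehours, charged to the project.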
 
 ## Conventions
 
diff --git a/docs.it4i/modules-anselm.md b/docs.it4i/modules-anselm.md
index f716d7112c83a5b4a79a81da5ab28209fcf41ac4..83d973c8615c952aeeccee0c497581777d9bf97f 100644
--- a/docs.it4i/modules-anselm.md
+++ b/docs.it4i/modules-anselm.md
@@ -3,15 +3,15 @@
 ## Core
 
 | Module      | Description | Available versions     |
-| ----------* | ----------* | ---------------------* |
+| ----------- | ----------- | ---------------------- |
 | **lmod**    |             | <nobr>7.2.2.lua</nobr> |
 | **settarg** |             | <nobr>7.2.2.lua</nobr> |
 
 ## Bio
 
 | Module                                            | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  | Available versions                                                                                                                                                                                      |
-| ------------------------------------------------* | -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
-| **[almost](http://www-almost.ch.cam.ac.uk/site)** | all atom molecular simulation toolkit * is a fast and flexible molecular modeling environment that provides powerful and efficient algorithms for molecular simulation, homology modeling, de novo design and ab-initio calculations.                                                                                                                                                                                                                                                                        | <nobr>2.1.0-foss-2015g</br>2.1.0-foss-2016a</nobr>                                                                                                                                                      |
+| ------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| **[almost](http://www-almost.ch.cam.ac.uk/site)** | all atom molecular simulation toolkit - is a fast and flexible molecular modeling environment that provides powerful and efficient algorithms for molecular simulation, homology modeling, de novo design and ab-initio calculations.                                                                                                                                                                                                                                                                        | <nobr>2.1.0-foss-2015g</br>2.1.0-foss-2016a</nobr>                                                                                                                                                      |
 | **bowtie2**                                       |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              | <nobr>2.2.3</nobr>                                                                                                                                                                                      |
 | **[GROMACS](http://www.gromacs.org)**             | GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.                                                                                                                                                                                                                                                                                                                                            | <nobr>5.1.2-intel-2015b-hybrid-single-cuda</br>5.1.2-intel-2016a-hybrid</br>5.1.2-intel-2015b-hybrid-single-CUDA-7.5-PLUMED-2.2.1</br>5.1.2-intel-2015b-hybrid-single-CUDA-7.5-PLUMED-2.2.1-test</nobr> |
 | **[PLUMED](http://www.plumed-code.org)**          | PLUMED is an open source library for free energy calculations in molecular systems which works together with some of the most popular molecular dynamics engines. Free energy calculations can be performed as a function of many order parameters with a particular focus on biological problems, using state of the art methods such as metadynamics, umbrella sampling and Jarzynski-equation based steered MD. The software, written in C++, can be easily interfaced with both fortran and C/C++ codes. | <nobr>2.3b-foss-2016a</nobr>                                                                                                                                                                            |
@@ -19,26 +19,26 @@
 ## Bullxde
 
 | Module      | Description | Available versions |
-| ----------* | ----------* | -----------------* |
+| ----------- | ----------- | ------------------ |
 | **bullxde** |             | <nobr>2.0</nobr>   |
 
 ## Bullxmpi
 
 | Module       | Description | Available versions            |
-| -----------* | ----------* | ----------------------------* |
+| ------------ | ----------- | ----------------------------- |
 | **bullxmpi** |             | <nobr>bullxmpi-1.2.4.3</nobr> |
 
 ## Chem
 
 | Module                                                                             | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  | Available versions                                                                           |
-| ---------------------------------------------------------------------------------* | -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* | -------------------------------------------------------------------------------------------* |
+| ---------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------- |
 | **abinit**                                                                         |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              | <nobr>7.10.1-icc-impi</br>7.10.1-gcc-openmpi</br>7.6.2</nobr>                                |
 | **cp2k-mpi**                                                                       |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              | <nobr>2.5.1-gcc</nobr>                                                                       |
 | **lammps**                                                                         |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              | <nobr>28Jun14</nobr>                                                                         |
 | **molpro**                                                                         |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              | <nobr>2010.1-p45-intel</nobr>                                                                |
 | **namd**                                                                           |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              | <nobr>2.8</nobr>                                                                             |
 | **nwchem**                                                                         |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              | <nobr>6.1.1</br>6.3-rev2-patch1-openmpi</br>6.3-rev2-patch1-venus</br>6.3-rev2-patch1</nobr> |
-| **[ORCA](http://cec.mpg.de/forum/)**                                               | ORCA is a flexible, efficient and easy-to-use general purpose tool for quantum chemistry with specific emphasis on spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods ranging from semiempirical methods to DFT to single* and multireference correlated ab initio methods. It can also treat environmental and relativistic effects.                                                                                                         | <nobr>3_0_3-linux_x86-64</nobr>                                                              |
+| **[ORCA](http://cec.mpg.de/forum/)**                                               | ORCA is a flexible, efficient and easy-to-use general purpose tool for quantum chemistry with specific emphasis on spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods ranging from semiempirical methods to DFT to single- and multireference correlated ab initio methods. It can also treat environmental and relativistic effects.                                                                                                         | <nobr>3_0_3-linux_x86-64</nobr>                                                              |
 | **[PLUMED](http://www.plumed-code.org)**                                           | PLUMED is an open source library for free energy calculations in molecular systems which works together with some of the most popular molecular dynamics engines. Free energy calculations can be performed as a function of many order parameters with a particular focus on biological problems, using state of the art methods such as metadynamics, umbrella sampling and Jarzynski-equation based steered MD. The software, written in C++, can be easily interfaced with both fortran and C/C++ codes. | <nobr>2.2.1-intel-2015b</nobr>                                                               |
 | **[QuantumESPRESSO](http://www.pwscf.org/)**                                       | Quantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials (both norm-conserving and ultrasoft).                                                                                                                                                                                                                                                   | <nobr>5.4.0-intel-2017.00</nobr>                                                             |
 | **[xdrfile](http://www.gromacs.org/Developer_Zone/Programming_Guide/XTC_Library)** | XTC library                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  | <nobr>1.1.4-foss-2016a</br>1.1.4-foss-2015g</br>1.1.4-intel-2015b</nobr>                     |
@@ -46,7 +46,7 @@
 ## Compilers
 
 | Module                                                        | Description                                                                                                                                                        | Available versions                                                                                                               |
-| ------------------------------------------------------------* | -----------------------------------------------------------------------------------------------------------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------* |
+| ------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------- |
 | **bupc**                                                      |                                                                                                                                                                    | <nobr>2.16.2</nobr>                                                                                                              |
 | **chicken**                                                   |                                                                                                                                                                    | <nobr>4.8.0.6</nobr>                                                                                                             |
 | **gcc**                                                       |                                                                                                                                                                    | <nobr>4.9.0</br>5.4.0</br>4.8.1</nobr>                                                                                           |
@@ -61,21 +61,21 @@
 ## Data
 
 | Module                                    | Description                                                                                                                                                                                                                                                                                                                                                                        | Available versions                                                             |
-| ----------------------------------------* | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* | -----------------------------------------------------------------------------* |
+| ----------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------ |
 | **[GDAL](http://www.gdal.org/)**          | GDAL is a translator library for raster geospatial data formats that is released under an X/MIT style Open Source license by the Open Source Geospatial Foundation. As a library, it presents a single abstract data model to the calling application for all supported formats. It also comes with a variety of useful commandline utilities for data translation and processing. | <nobr>2.1.0-foss-2015g</br>1.9.2-foss-2015g</nobr>                             |
 | **[HDF5](http://www.hdfgroup.org/HDF5/)** | HDF5 is a unique technology suite that makes possible the management of extremely large and complex data collections.                                                                                                                                                                                                                                                              | <nobr>1.8.16-foss-2016a</br>1.8.16-intel-2016.01</br>1.8.16-intel-2015b</nobr> |
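 
 The strings in the "Available versions" column, joined to the module name with a slash, are the exact arguments that `module load` expects. A minimal sketch using the GDAL module from the table above (`gdalinfo` is one of the command-line utilities the description mentions; `module avail GDAL` shows what is currently installed):
 
 ```bash
 $ module avail GDAL
 $ module load GDAL/2.1.0-foss-2015g
 $ gdalinfo --version
 ```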
 
 ## Debugger
 
 | Module                                                                                | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              | Available versions                                               |
-| ------------------------------------------------------------------------------------* | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* | ---------------------------------------------------------------* |
-| **[Forge](http://www.allinea.com/products/develop-allinea-forge)**                    | Allinea Forge is the complete toolsuite for software development * with everything needed to debug, profile, optimize, edit and build C, C++ and FORTRAN applications on Linux for high performance * from single threads through to complex parallel HPC codes with MPI, OpenMP, threads or CUDA.                                                                                                                                                                                                                                                                                       | <nobr>6.0.6</br>5.7</br>6.1.2.lua</br>6.0.5</br>5.1-43967</nobr> |
-| **[PerformanceReports](http://www.allinea.com/products/allinea-performance-reports)** | Allinea Performance Reports are the most effective way to characterize and understand the performance of HPC application runs. One single-page HTML report elegantly answers a range of vital questions for any HPC site. * Is this application well-optimized for the system and the processors it is running on? * Does it benefit from running at this scale? * Are there I/O, networking or threading bottlenecks affecting performance? * Which hardware, software or configuration changes can we make to improve performance further. * How much energy did this application use? | <nobr>6.0.6</nobr>                                               |
+| ------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------- |
+| **[Forge](http://www.allinea.com/products/develop-allinea-forge)**                    | Allinea Forge is the complete toolsuite for software development - with everything needed to debug, profile, optimize, edit and build C, C++ and FORTRAN applications on Linux for high performance - from single threads through to complex parallel HPC codes with MPI, OpenMP, threads or CUDA.                                                                                                                                                                                                                                                                                       | <nobr>6.0.6</br>5.7</br>6.1.2.lua</br>6.0.5</br>5.1-43967</nobr> |
+| **[PerformanceReports](http://www.allinea.com/products/allinea-performance-reports)** | Allinea Performance Reports are the most effective way to characterize and understand the performance of HPC application runs. One single-page HTML report elegantly answers a range of vital questions for any HPC site. - Is this application well-optimized for the system and the processors it is running on? - Does it benefit from running at this scale? - Are there I/O, networking or threading bottlenecks affecting performance? - Which hardware, software or configuration changes can we make to improve performance further? - How much energy did this application use? | <nobr>6.0.6</nobr>                                               |
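 
 Performance Reports are collected by wrapping the usual MPI launch line with the `perf-report` command. A minimal sketch, assuming the module puts `perf-report` on the PATH, with `./mympiprog.x` standing in for your own MPI binary:
 
 ```bash
 $ module load PerformanceReports/6.0.6
 $ perf-report mpirun -n 24 ./mympiprog.x
 ```
 
 The run should leave the single-page HTML report described above (plus a plain-text twin) in the working directory.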
 
 ## Devel
 
 | Module                                                                            | Description                                                                                                                                                                                                                                                                                                                                                                                                                | Available versions                                                                                                                                                                                                                                                                                                                         |
-| --------------------------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* | -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| --------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
 | **[Autoconf](http://www.gnu.org/software/autoconf/)**                             | Autoconf is an extensible package of M4 macros that produce shell scripts to automatically configure software source code packages. These scripts can adapt the packages to many kinds of UNIX-like systems without manual user intervention. Autoconf creates a configuration script for a package from a template file that lists the operating system features that the package can use, in the form of M4 macro calls. | <nobr>2.69</br>2.69-GNU-4.9.3-2.25</br>2.69-intel-2015b</br>2.69-intel-2017.00</br>2.69-foss-2016a</br>2.69-GNU-5.1.0-2.25</nobr>                                                                                                                                                                                                          |
 | **[Automake](http://www.gnu.org/software/automake/automake.html)**                | Automake: GNU Standards-compliant Makefile generator                                                                                                                                                                                                                                                                                                                                                                       | <nobr>1.15-GNU-5.1.0-2.25</br>1.15-foss-2016a</br>1.15-GNU-4.9.3-2.25</br>1.15-intel-2015b</br>1.15</br>1.15-intel-2017.00</nobr>                                                                                                                                                                                                          |
 | **[Autotools](http://autotools.io)**                                              | This bundle collects the standard GNU build tools: Autoconf, Automake and libtool                                                                                                                                                                                                                                                                                                                          | <nobr>20150215-GNU-4.9.3-2.25</br>20150215-intel-2017.00</br>20150215-GNU-5.1.0-2.25</br>20150215-intel-2015b</br>20150215-foss-2016a</br>20150215</nobr>                                                                                                                                                                                  |
@@ -107,7 +107,7 @@
 ## Engineering
 
 | Module               | Description | Available versions                                                                                                                                                    |
-| -------------------* | ----------* | --------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| -------------------- | ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **adams**            |             | <nobr>2013.2</nobr>                                                                                                                                                   |
 | **ansys**            |             | <nobr>15.0.x</br>16.0.x</br>14.5.x</nobr>                                                                                                                             |
 | **beopest**          |             | <nobr>12.0.1</br>13.3</br>12.2</nobr>                                                                                                                                 |
@@ -137,7 +137,7 @@
 ## Environments
 
 | Module           | Description | Available versions                          |
-| ---------------* | ----------* | ------------------------------------------* |
+| ---------------- | ----------- | ------------------------------------------- |
 | **bullxde**      |             | <nobr>2.0</nobr>                            |
 | **PrgEnv-gnu**   |             | <nobr>4.4.6</br>4.4.6-test</br>4.8.1</nobr> |
 | **PrgEnv-intel** |             | <nobr>15.0.3</br>13.5.192</br>14.0.1</nobr> |
@@ -145,7 +145,7 @@
 ## Lang
 
 | Module                                                    | Description                                                                                                                                                                                                                                                                                                                                                                                                                                            | Available versions                                                                                                                                                                                                                                                                          |
-| --------------------------------------------------------* | -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| --------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[Bison](http://www.gnu.org/software/bison)**            | Bison is a general-purpose parser generator that converts an annotated context-free grammar into a deterministic LR or generalized LR (GLR) parser employing LALR(1) parser tables.                                                                                                                                                                                                                                                                    | <nobr>3.0.4-foss-2016a</br>3.0.4-GCC-4.9.3</br>3.0.4</br>2.7-foss-2015g</br>3.0.2</br>3.0.4-GCCcore-5.4.0</br>2.7</br>3.0.4-intel-2015b</br>2.5-intel-2015b</br>3.0.4-GCC-4.9.3-binutils-2.25</br>3.0.4-GCC-5.1.0-binutils-2.25</br>3.0.4-GCCcore-4.9.3</br>3.0.4-GCCcore-5.3.0</nobr>      |
 | **[byacc](http://invisible-island.net/byacc/byacc.html)** | Berkeley Yacc (byacc) is generally conceded to be the best yacc variant available. In contrast to bison, it is written to avoid dependencies upon a particular compiler.                                                                                                                                                                                                                                                                               | <nobr>20150711-intel-2015b</br>20120526</br>20120526-foss-2016a</br>20120526-foss-2015g</br>20120526-intel-2015b</nobr>                                                                                                                                                                     |
 | **[flex](http://flex.sourceforge.net/)**                  | Flex (Fast Lexical Analyzer) is a tool for generating scanners. A scanner, sometimes called a tokenizer, is a program which recognizes lexical patterns in text.                                                                                                                                                                                                                                                                                       | <nobr>2.5.39-foss-2015g</br>2.5.39-GCC-5.1.0-binutils-2.25</br>2.6.0</br>2.5.39-GCC-4.9.3-binutils-2.25</br>2.5.39-GCC-4.9.3</br>2.6.0-GCCcore-5.3.0</br>2.5.35-intel-2015b</br>2.6.0-GCCcore-5.4.0</br>2.5.39</br>2.5.39-foss-2016a</br>2.5.39-GCCcore-4.9.3</br>2.5.39-intel-2015b</nobr> |
@@ -161,7 +161,7 @@
 ## Lib
 
 | Module                                                                                           | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  | Available versions                                                                                                                                                                                                                                                                                                                                 |
-| -----------------------------------------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[libdrm](http://dri.freedesktop.org)**                                                         | Direct Rendering Manager runtime library.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    | <nobr>2.4.27</nobr>                                                                                                                                                                                                                                                                                                                                |
 | **[libffi](http://sourceware.org/libffi/)**                                                      | The libffi library provides a portable, high level programming interface to various calling conventions. This allows a programmer to call any function specified by a call interface description at run-time.                                                                                                                                                                                                                                                                                                                                                                                | <nobr>3.2.1-foss-2016a</br>3.0.13</br>3.1-intel-2015b</br>3.0.13-intel-2015b</br>3.1-intel-2016.01</nobr>                                                                                                                                                                                                                                          |
 | **[libfontenc](http://www.freedesktop.org/wiki/Software/xlibs/)**                                | X11 font encoding library                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    | <nobr>1.1.3</nobr>                                                                                                                                                                                                                                                                                                                                 |
@@ -183,7 +183,7 @@
 ## Libraries
 
 | Module              | Description | Available versions                                                                                                                                                                                                                                                                           |
-| ------------------* | ----------* | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ------------------- | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **adios**           |             | <nobr>1.8.0</nobr>                                                                                                                                                                                                                                                                           |
 | **boost**           |             | <nobr>1.56-icc-impi</br>1.56-gcc-openmpi</nobr>                                                                                                                                                                                                                                              |
 | **dataspaces**      |             | <nobr>1.4.0</nobr>                                                                                                                                                                                                                                                                           |
@@ -217,20 +217,20 @@
 ## Math
 
 | Module                                                                   | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          | Available versions                                                                                                                           |
-| -----------------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------* |
+| ------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[GMP](http://gmplib.org/)**                                            | GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers.                                                                                                                                                                                                                                                                                                                                                                                                                                | <nobr>6.1.0-intel-2015b</br>5.0.5</br>6.1.0-foss-2016a</br>6.0.0a</br>6.0.0a-intel-2015b</br>6.1.0-intel-2017.00</br>5.0.5-foss-2015g</nobr> |
 | **[ISL](http://isl.gforge.inria.fr/)**                                   | isl is a library for manipulating sets and relations of integer points bounded by linear constraints.                                                                                                                                                                                                                                                                                                                                                                                                                                                                | <nobr>0.15</nobr>                                                                                                                            |
 | **[MLD2P4](http://www.mld2p4.it)**                                       | MLD2P4 (Multi-Level Domain Decomposition Parallel Preconditioners Package based on PSBLAS) is a package of parallel algebraic multi-level preconditioners. It implements various versions of one-level additive and of multi-level additive and hybrid Schwarz algorithms. In the multi-level case, a purely algebraic approach is applied to generate coarse-level corrections, so that no geometric background is needed concerning the matrix to be preconditioned. The matrix is assumed to be square, real or complex, with a symmetric sparsity pattern.       | <nobr>2.0-rc4-GCC-4.9.3-2.25</nobr>                                                                                                          |
 | **[numpy](http://www.numpy.org)**                                        | NumPy is the fundamental package for scientific computing with Python. It contains among other things: a powerful N-dimensional array object, sophisticated (broadcasting) functions, tools for integrating C/C++ and Fortran code, useful linear algebra, Fourier transform, and random number capabilities. Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases. | <nobr>1.8.2-intel-2015b-Python-2.7.9</br>1.8.2-intel-2015b-Python-2.7.11</br>1.8.2-intel-2016.01-Python-2.7.9</nobr>                         |
 | **[Octave](http://www.gnu.org/software/octave/)**                        | GNU Octave is a high-level interpreted language, primarily intended for numerical computations.                                                                                                                                                                                                                                                                                                                                                                                                                                                                      | <nobr>3.8.2-gimkl-2.11.5</br>4.0.0-foss-2015g</br>4.0.1-gimkl-2.11.5</nobr>                                                                  |
 | **[PSBLAS](http://people.uniroma2.it/salvatore.filippone/psblas/)**      | Most computationally intensive applications work on irregular and sparse domains that complicate their implementation on parallel machines. The major goal of the Parallel Sparse Basic Linear Algebra Subroutines (PSBLAS) project is to provide a framework to enable easy, efficient and portable implementations of iterative solvers for linear systems, while shielding the user from most details of their parallelization. The interface is designed keeping in view a Single Program Multiple Data programming model on distributed memory machines.        | <nobr>3.3.4-3-GCC-4.9.3-2.25</nobr>                                                                                                          |
-| **[PSBLAS-ext](http://people.uniroma2.it/salvatore.filippone/psblas/)**  | PSBLAS * Extended formats and NVIDIA GPU support                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     | <nobr>1.0-4-GCC-4.9.3-2.25</nobr>                                                                                                            |
+| **[PSBLAS-ext](http://people.uniroma2.it/salvatore.filippone/psblas/)**  | PSBLAS - Extended formats and NVIDIA GPU support                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     | <nobr>1.0-4-GCC-4.9.3-2.25</nobr>                                                                                                            |
 | **[ScientificPython](https://sourcesup.cru.fr/projects/scientific-py/)** | ScientificPython is a collection of Python modules for scientific computing. It contains support for geometry, mathematical functions, statistics, physical units, IO, visualization, and parallelization.                                                                                                                                                                                                                                                                                                                                                           | <nobr>2.9.4-intel-2016.01-Python-2.7.9</br>2.9.4-intel-2015b-Python-2.7.9</br>2.9.4-intel-2015b-Python-2.7.11</nobr>                         |
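 
 A quick way to confirm that a loaded module provides the advertised library is to query it directly. A minimal sketch using the numpy module from the table above (the Python it targets is pinned by the module's suffix):
 
 ```bash
 $ module load numpy/1.8.2-intel-2015b-Python-2.7.9
 $ python -c "import numpy; print(numpy.__version__)"
 1.8.2
 ```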
 
 ## Mpi
 
 | Module                                                         | Description                                                                                                                                                                                                                           | Available versions                                                                                                                                                                                               |
-| -------------------------------------------------------------* | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| -------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **bullxmpi**                                                   |                                                                                                                                                                                                                                       | <nobr>bullxmpi_1.2.4.1</nobr>                                                                                                                                                                                    |
 | **[impi](http://software.intel.com/en-us/intel-mpi-library/)** | The Intel(R) MPI Library for Linux\* OS is a multi-fabric message passing library based on ANL MPICH2 and OSU MVAPICH2. The Intel MPI Library for Linux OS implements the Message Passing Interface, version 2 (MPI-2) specification. | <nobr>2017.0.098-iccifort-2017.0.098-GCC-5.4.0-2.26</br>5.1.2.150-iccifort-2016.1.150-GCC-4.9.3-2.25</br>5.0.3.048-iccifort-2015.3.187-GNU-5.1.0-2.25</br>5.0.3.048</br>4.1.1.036</br>5.0.3.048-GCC-4.9.3</nobr> |
 | **lam**                                                        |                                                                                                                                                                                                                                       | <nobr>7.1.4-icc</nobr>                                                                                                                                                                                           |
@@ -242,7 +242,7 @@
 ## Numlib
 
 | Module                                                          | Description                                                                                                                                                                                                                                                                                                | Available versions                                                                                                                                                                                                                                                               |
-| --------------------------------------------------------------* | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| --------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[Armadillo](http://arma.sourceforge.net/)**                   | Armadillo is an open-source C++ linear algebra library (matrix maths) aiming towards a good balance between speed and ease of use. Integer, floating point and complex numbers are supported, as well as a subset of trigonometric and statistics functions.                                               | <nobr>7.500.0-foss-2016a-Python-3.5.2</nobr>                                                                                                                                                                                                                                     |
 | **[arpack-ng](http://forge.scilab.org/index.php/p/arpack-ng/)** | ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems.                                                                                                                                                                                                         | <nobr>3.3.0-foss-2016a</nobr>                                                                                                                                                                                                                                                    |
 | **[ATLAS](http://math-atlas.sourceforge.net)**                  | ATLAS (Automatically Tuned Linear Algebra Software) is the application of the AEOS (Automated Empirical Optimization of Software) paradigm, with the present emphasis on the Basic Linear Algebra Subprograms (BLAS), a widely used, performance-critical, linear algebra kernel library.                  | <nobr>3.10.1-GCC-4.9.3-2.25-LAPACK-3.4.2</nobr>                                                                                                                                                                                                                                  |
@@ -255,13 +255,13 @@
 ## Nvidia
 
 | Module   | Description | Available versions                     |
-| -------* | ----------* | -------------------------------------* |
+| -------- | ----------- | -------------------------------------- |
 | **cuda** |             | <nobr>7.5</br>6.5.14</br>6.0.37</nobr> |
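 
 The cuda modules follow the same pattern; a minimal sketch, assuming the module prepends the matching CUDA toolkit (including `nvcc`) to the PATH:
 
 ```bash
 $ module load cuda/7.5
 $ nvcc --version
 ```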
 
 ## Omics
 
 | Module          | Description | Available versions                |
-| --------------* | ----------* | --------------------------------* |
+| --------------- | ----------- | --------------------------------- |
 | **fastqc**      |             | <nobr>0.11.2</nobr>               |
 | **gatk**        |             | <nobr>2.6-4</nobr>                |
 | **hpg-aligner** |             | <nobr>1.0.0</nobr>                |
@@ -275,18 +275,18 @@
 ## Oscar-Modulefiles
 
 | Module | Description | Available versions |
-| -----* | ----------* | -----------------* |
+| ------ | ----------- | ------------------ |
 
 ## Oscar-Modules
 
 | Module            | Description | Available versions |
-| ----------------* | ----------* | -----------------* |
+| ----------------- | ----------- | ------------------ |
 | **oscar-modules** |             | <nobr>1.0.3</nobr> |
 
 ## Perf
 
 | Module                                           | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                        | Available versions |
-| -----------------------------------------------* | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* | -----------------* |
+| ------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ |
 | **[OPARI2](http://www.score-p.org)**             | OPARI2, the successor of Forschungszentrum Juelich's OPARI, is a source-to-source instrumentation tool for OpenMP and hybrid codes. It surrounds OpenMP directives and runtime library calls with calls to the POMP2 measurement interface.                                                                                                                                                                                                                                        | <nobr>2.0</nobr>   |
 | **[OTF2](http://www.score-p.org)**               | The Open Trace Format 2 is a highly scalable, memory efficient event trace data format plus support library. It is the new standard trace format for Scalasca, Vampir, and TAU and is open for other tools.                                                                                                                                                                                                                                                                        | <nobr>2.0</nobr>   |
 | **[PAPI](http://icl.cs.utk.edu/projects/papi/)** | PAPI provides the tool designer and application engineer with a consistent interface and methodology for use of the performance counter hardware found in most major microprocessors. PAPI enables software engineers to see, in near real time, the relation between software performance and processor events. In addition, Component PAPI provides access to a collection of components that expose performance measurement opportunities across the hardware and software stack. | <nobr>5.4.3</nobr> |
@@ -295,7 +295,7 @@
 ## Phys
 
 | Module                             | Description                                                                                                   | Available versions                                                     |
-| ---------------------------------* | ------------------------------------------------------------------------------------------------------------* | ---------------------------------------------------------------------* |
+| ---------------------------------- | ------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------- |
 | **[phono3py](http://python.org/)** | Phono3py is an open source package for calculating phonon-phonon interaction and lattice thermal conductivity from first principles.             | <nobr>1.11.7.8-intel-2015b-Python-2.7.11</nobr>                        |
 | **[phonopy](http://python.org/)**  | Phonopy is an open source package for phonon calculations at harmonic and quasi-harmonic levels.              | <nobr>1.11.6.7-intel-2015b-Python-2.7.11</nobr>                        |
 | **VASP**                           |                                                                                                               | <nobr>5.4.1-intel-2015b-24Jun15<br>5.4.1-intel-2017.00-24Jun15</nobr>  |
@@ -303,14 +303,14 @@
 ## Prace
 
 | Module     | Description | Available versions  |
-| ---------* | ----------* | ------------------* |
+| ---------- | ----------- | ------------------- |
 | **GLOBUS** |             | <nobr>globus</nobr> |
 | **PRACE**  |             | <nobr>prace</nobr>  |
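+
+Whether a module is installed, and in which versions, can be checked directly on the cluster before loading it. A minimal sketch using the standard module commands; the lowercase module name is an assumption based on the version string listed above:
+
+```bash
+# "prace" is assumed from the version string in the table above
+$ module avail prace
+$ module load prace
+```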
 
 ## System
 
 | Module                                                                 | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          | Available versions                                                                                                                                                        |
-| ---------------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ---------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[CUDA](https://developer.nvidia.com/cuda-toolkit)**                  | CUDA (formerly Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce. CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs.                                                                                                                                                                                                                                                                                  | <nobr>7.5.18</nobr>                                                                                                                                                       |
 | **[hwloc](http://www.open-mpi.org/projects/hwloc/)**                   | The Portable Hardware Locality (hwloc) software package provides a portable abstraction (across OS, versions, architectures, ...) of the hierarchical topology of modern architectures, including NUMA memory nodes, sockets, shared caches, cores and simultaneous multithreading. It also gathers various system attributes such as cache and memory information as well as the locality of I/O devices such as network interfaces, InfiniBand HCAs or GPUs. It primarily aims at helping applications with gathering information about modern computing hardware so as to exploit it accordingly and efficiently. | <nobr>1.11.0<br>1.11.3-GCC-5.3.0-2.26<br>1.11.1-iccifort-2015.3.187-GNU-4.9.3-2.25<br>1.11.0-GNU-4.9.3-2.25<br>1.11.2-GCC-4.9.3-2.25<br>1.11.0-GNU-5.1.0-2.25</nobr>      |
 | **[libpciaccess](http://cgit.freedesktop.org/xorg/lib/libpciaccess/)** | Generic PCI access library.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          | <nobr>0.13.1</nobr>                                                                                                                                                       |
@@ -318,7 +318,7 @@
 ## Toolchain
 
 | Module                                                                          | Description                                                                                                                                        | Available versions                                                                                                               |
-| ------------------------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------* |
+| ------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
 | **foss**                                                                        | GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK. | <nobr>2016a<br>2016.04<br>2015g</nobr>                                                                                           |
 | **gimkl**                                                                       | GNU Compiler Collection (GCC) based compiler toolchain, next to Intel MPI and Intel MKL (BLAS, (Sca)LAPACK, FFTW).                                 | <nobr>2.11.5</nobr>                                                                                                              |
 | **gimpi**                                                                       | GNU Compiler Collection (GCC) based compiler toolchain, next to Intel MPI.                                                                         | <nobr>2.11.5</nobr>                                                                                                              |
@@ -331,7 +331,7 @@
 ## Tools
 
 | Module                                                                | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                           | Available versions                                                                                                                                                                             |
-| --------------------------------------------------------------------* | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| --------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **advisor_xe**                                                        |                                                                                                                                                                                                                                                                                                                                                                                                                                                                       | <nobr>2013.5<br>2015.1.10.380555</nobr>                                                                                                                                                        |
 | **[APR](http://apr.apache.org/)**                                     | Apache Portable Runtime (APR) libraries.                                                                                                                                                                                                                                                                                                                                                                                                                              | <nobr>1.5.2-foss-2015g<br>1.5.2</nobr>                                                                                                                                                         |
 | **[APR-util](http://apr.apache.org/)**                                | Apache Portable Runtime (APR) util libraries.                                                                                                                                                                                                                                                                                                                                                                                                                         | <nobr>1.5.4<br>1.5.4-foss-2015g</nobr>                                                                                                                                                         |
@@ -399,7 +399,7 @@
 ## Virtualization
 
 | Module   | Description | Available versions                                         |
-| -------* | ----------* | ---------------------------------------------------------* |
+| -------- | ----------- | ---------------------------------------------------------- |
 | **qemu** |             | <nobr>2.1.2-vde2<br>2.1.2<br>2.1.0<br>2.1.0-vde2</nobr>   |
 | **vde2** |             | <nobr>2.3.2</nobr>                                         |
 | **wine** |             | <nobr>1.7.29</nobr>                                        |
@@ -407,7 +407,7 @@
 ## Vis
 
 | Module                                                           | Description                                                                                                                                                                                                                                                                                                                | Available versions                                                                                                           |
-| ---------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* | ---------------------------------------------------------------------------------------------------------------------------* |
+| ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- |
 | **[cairo](http://cairographics.org)**                            | Cairo is a 2D graphics library with support for multiple output devices. Currently supported output targets include the X Window System (via both Xlib and XCB), Quartz, Win32, image buffers, PostScript, PDF, and SVG file output. Experimental backends include OpenGL, BeOS, OS/2, and DirectFB                        | <nobr>1.12.18</nobr>                                                                                                         |
 | **[ffmpeg](https://www.ffmpeg.org/)**                            | A complete, cross-platform solution to record, convert and stream audio and video.                                                                                                                                                                                                                                         | <nobr>2.4</nobr>                                                                                                             |
 | **[fixesproto](http://www.freedesktop.org/wiki/Software/xlibs)** | X.org FixesProto protocol headers.                                                                                                                                                                                                                                                                                         | <nobr>5.0</nobr>                                                                                                             |
diff --git a/docs.it4i/modules-matrix.md b/docs.it4i/modules-matrix.md
index 239a9fbd34114bf13da2f1c7cc4c6d805bccbbdb..661e647953377f70a01c785e57622a79ebbc1baa 100644
--- a/docs.it4i/modules-matrix.md
+++ b/docs.it4i/modules-matrix.md
@@ -1,5 +1,5 @@
 | Module             | Versions                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           | Clusters                                                                                                                                                                                                                                                                  |
-| -----------------* | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | abinit             | 7.10.1-gcc-openmpi<br>7.10.1-icc-impi<br>7.6.2                                                                                                                                                                                                                                                                                                                                                                                     | `--A`<br>`--A`<br>`--A`                                                                                                                                                                                                                                                   |
 | ABINIT             | 7.10.1-foss-2015b<br>7.10.1-intel-2015b                                                                                                                                                                                                                                                                                                                                                                                            | `US-`<br>`US-`                                                                                                                                                                                                                                                            |
 | adams              | 2013.2                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             | `--A`                                                                                                                                                                                                                                                                     |
diff --git a/docs.it4i/modules-salomon-uv.md b/docs.it4i/modules-salomon-uv.md
index 042e1518c3264c519f3488cb446935b84d5d5bbf..1ce81e07568cd84cf29d319647d9d25d10316f8a 100644
--- a/docs.it4i/modules-salomon-uv.md
+++ b/docs.it4i/modules-salomon-uv.md
@@ -3,7 +3,7 @@
 ## Bio
 
 | Module                                                                        | Description                                                                                                                                                                                                                                                                                                                                                                                                                                        |
-| ----------------------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ----------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[FastQC](http://www.bioinformatics.babraham.ac.uk/projects/download.html)** | A quality control application for high throughput sequence data                                                                                                                                                                                                                                                                                                                                                                                    |
 | **[GATK](http://www.broadinstitute.org/gatk/)**                               | The Genome Analysis Toolkit or GATK is a software package developed at the Broad Institute to analyse next-generation resequencing data. The toolkit offers a wide variety of tools, with a primary focus on variant discovery and genotyping as well as strong emphasis on data quality assurance. Its robust architecture, powerful processing engine and high-performance computing features make it capable of taking on projects of any size. |
 | **[SnpEff](http://snpeff.sourceforge.net/)**                                  | Genetic variant annotation and effect prediction toolbox.                                                                                                                                                                                                                                                                                                                                                                                          |
@@ -11,14 +11,14 @@
 ## Cae
 
 | Module                                   | Description                                                                                                                                                                                                                                      |
-| ---------------------------------------* | -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ---------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
 | **ANSYS**                                |                                                                                                                                                                                                                                                  |
 | **[OpenFOAM](http://www.openfoam.com/)** | OpenFOAM is a free, open source CFD software package. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. |
 
 ## Chem
 
 | Module                                                                  | Description                                                                                                                                                                                                  |
-| ----------------------------------------------------------------------* | -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ----------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
 | **[ABINIT](http://www.abinit.org/)**                                    | Abinit is a plane wave pseudopotential code for doing condensed phase electronic structure calculations using DFT.                                                                                           |
 | **[Libint](https://sourceforge.net/p/libint/)**                         | Libint library is used to evaluate the traditional (electron repulsion) and certain novel two-body matrix elements (integrals) over Cartesian Gaussian functions used in modern atomic and molecular theory. |
 | **[libxc](http://www.tddft.org/programs/octopus/wiki/index.php/Libxc)** | Libxc is a library of exchange-correlation functionals for density-functional theory. The aim is to provide a portable, well tested and reliable set of exchange and correlation functionals.                |
@@ -26,7 +26,7 @@
 ## Compiler
 
 | Module                                                        | Description                                                                                                                                                        |
-| ------------------------------------------------------------* | -----------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
 | **[GCC](http://gcc.gnu.org/)**                                | The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...). |
 | **GCCcore**                                                   |                                                                                                                                                                    |
 | **[icc](http://software.intel.com/en-us/intel-compilers/)**   | C and C++ compiler from Intel                                                                                                                                      |
@@ -36,7 +36,7 @@
 ## Data
 
 | Module                                                     | Description                                                                                                                                                                                                                                                                                                                                                                        |
-| ---------------------------------------------------------* | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ---------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[GDAL](http://www.gdal.org/)**                           | GDAL is a translator library for raster geospatial data formats that is released under an X/MIT style Open Source license by the Open Source Geospatial Foundation. As a library, it presents a single abstract data model to the calling application for all supported formats. It also comes with a variety of useful commandline utilities for data translation and processing. |
 | **[HDF5](http://www.hdfgroup.org/HDF5/)**                  | HDF5 is a unique technology suite that makes possible the management of extremely large and complex data collections.                                                                                                                                                                                                                                                              |
 | **[netCDF](http://www.unidata.ucar.edu/software/netcdf/)** | NetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.                                                                                                                                                                                            |
@@ -44,7 +44,7 @@
 ## Devel
 
 | Module                                                             | Description                                                                                                                                                                                                                                                                                                                                                                                                                |
-| -----------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[Autoconf](http://www.gnu.org/software/autoconf/)**              | Autoconf is an extensible package of M4 macros that produce shell scripts to automatically configure software source code packages. These scripts can adapt the packages to many kinds of UNIX-like systems without manual user intervention. Autoconf creates a configuration script for a package from a template file that lists the operating system features that the package can use, in the form of M4 macro calls. |
 | **[Automake](http://www.gnu.org/software/automake/automake.html)** | Automake: GNU Standards-compliant Makefile generator                                                                                                                                                                                                                                                                                                                                                                       |
 | **[Autotools](http://autotools.io)**                               | This bundle collects the standard GNU build tools: Autoconf, Automake and libtool                                                                                                                                                                                                                                                                                                                                          |
@@ -60,7 +60,7 @@
 ## Lang
 
 | Module                                         | Description                                                                                                                                                                                                                                                                                                                                                                                                                                            |
-| ---------------------------------------------* | -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ---------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
 | **[Bison](http://www.gnu.org/software/bison)** | Bison is a general-purpose parser generator that converts an annotated context-free grammar into a deterministic LR or generalized LR (GLR) parser employing LALR(1) parser tables.                                                                                                                                                                                                                                                                    |
 | **[flex](http://flex.sourceforge.net/)**       | Flex (Fast Lexical Analyzer) is a tool for generating scanners. A scanner, sometimes called a tokenizer, is a program which recognizes lexical patterns in text.                                                                                                                                                                                                                                                                                       |
 | **[Java](http://java.com/)**                   | Java Platform, Standard Edition (Java SE) lets you develop and deploy Java applications on desktops and servers.                                                                                                                                                                                                                                                                                                                                       |
@@ -74,7 +74,7 @@
 ## Lib
 
 | Module                                                                     | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
-| -------------------------------------------------------------------------* | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| -------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[libffi](http://sourceware.org/libffi/)**                                | The libffi library provides a portable, high level programming interface to various calling conventions. This allows a programmer to call any function specified by a call interface description at run-time.                                                                                                                                                                                                                                                                                                                                                       |
 | **[libjpeg-turbo](http://sourceforge.net/libjpeg-turbo/)**                 | libjpeg-turbo is a fork of the original IJG libjpeg which uses SIMD to accelerate baseline JPEG compression and decompression. libjpeg is a library that implements JPEG image encoding, decoding and transcoding.                                                                                                                                                                                                                                                                                                                                                  |
 | **[libpng](http://www.libpng.org/pub/png/libpng.html)**                    | libpng is the official PNG reference library                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
@@ -83,26 +83,26 @@
 | **[libxml2](http://xmlsoft.org/)**                                         | Libxml2 is the XML C parser and toolchain developed for the Gnome project (but usable outside of the Gnome platform).                                                                                                                                                                                                                                                                                                                                                                                                                                               |
 | **[PROJ](http://trac.osgeo.org/proj/)**                                    | Program proj is a standard Unix filter function which converts geographic longitude and latitude coordinates into Cartesian coordinates                                                                                                                                                                                                                                                                                                                             |
 | **[tbb](http://software.intel.com/en-us/articles/intel-tbb/)**             | Intel Threading Building Blocks 4.0 (Intel TBB) is a widely used, award-winning C++ template library for creating reliable, portable, and scalable parallel applications. Use Intel TBB for a simple and rapid way of developing robust task-based parallel applications that scale to available processor cores, are compatible with multiple environments, and are easier to maintain. Intel TBB is the most proficient way to implement future-proof parallel applications that tap into the power and performance of multicore and manycore hardware platforms. |
-| **[zlib](http://www.zlib.net/)**                                           | zlib is designed to be a free, general-purpose, legally unencumbered -* that is, not covered by any patents -* lossless data-compression library for use on virtually any computer hardware and operating system.                                                                                                                                                                                                                                                                                                                                                   |
+| **[zlib](http://www.zlib.net/)**                                           | zlib is designed to be a free, general-purpose, legally unencumbered -- that is, not covered by any patents -- lossless data-compression library for use on virtually any computer hardware and operating system.                                                                                                                                                                                                                                                                                                                                                   |
 
 ## Math
 
 | Module                                                | Description                                                                                                                                                                       |
-| ----------------------------------------------------* | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ----------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **GMP**                                               |                                                                                                                                                                                   |
 | **[SCOTCH](http://gforge.inria.fr/projects/scotch/)** | Software package and libraries for sequential and parallel graph partitioning, static mapping, and sparse matrix block ordering, and sequential mesh and hypergraph partitioning. |
 
 ## Mpi
 
 | Module                                                         | Description                                                                                                                                                                                                                           |
-| -------------------------------------------------------------* | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| -------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[impi](http://software.intel.com/en-us/intel-mpi-library/)** | The Intel(R) MPI Library for Linux\* OS is a multi-fabric message passing library based on ANL MPICH2 and OSU MVAPICH2. The Intel MPI Library for Linux OS implements the Message Passing Interface, version 2 (MPI-2) specification. |
 | **[OpenMPI](http://www.open-mpi.org/)**                        | The Open MPI Project is an open source MPI-2 implementation.                                                                                                                                                                          |
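+
+With one of these MPI modules loaded, programs are built through the usual wrapper compilers. A minimal sketch, assuming the listed OpenMPI module provides the standard wrappers; hello.c is a hypothetical source file:
+
+```bash
+$ module load OpenMPI
+# hello.c is a hypothetical MPI "hello world" source file
+$ mpicc -o hello hello.c
+$ mpirun -np 4 ./hello
+```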
 
 ## Numlib
 
 | Module                                                 | Description                                                                                                                                                                                                                                                                                                |
-| -----------------------------------------------------* | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[FFTW](http://www.fftw.org)**                        | FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data.                                                                                                                                   |
 | **[imkl](http://software.intel.com/en-us/intel-mkl/)** | Intel Math Kernel Library is a library of highly optimized, extensively threaded math routines for science, engineering, and financial applications that require maximum performance. Core math functions include BLAS, LAPACK, ScaLAPACK, Sparse Solvers, Fast Fourier Transforms, Vector Math, and more. |
 | **[OpenBLAS](http://xianyi.github.com/OpenBLAS/)**     | OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.                                                                                                                                                                                                                                 |
@@ -111,19 +111,19 @@
 ## Phys
 
 | Module                         | Description                                                                                                                                                                                                         |
-| -----------------------------* | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[VASP](http://www.vasp.at)** | The Vienna Ab initio Simulation Package (VASP) is a computer program for atomic scale materials modelling, e.g. electronic structure calculations and quantum-mechanical molecular dynamics, from first principles. |
 
 ## System
 
 | Module                                               | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
-| ---------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ---------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[hwloc](http://www.open-mpi.org/projects/hwloc/)** | The Portable Hardware Locality (hwloc) software package provides a portable abstraction (across OS, versions, architectures, ...) of the hierarchical topology of modern architectures, including NUMA memory nodes, sockets, shared caches, cores and simultaneous multithreading. It also gathers various system attributes such as cache and memory information as well as the locality of I/O devices such as network interfaces, InfiniBand HCAs or GPUs. It primarily aims at helping applications with gathering information about modern computing hardware so as to exploit it accordingly and efficiently. |
 
 ## Toolchain
 
 | Module                                                                          | Description                                                                                                                                                                                                                                                                                  |
-| ------------------------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **foss**                                                                        | GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.                                                                                                                                           |
 | **[GNU](http://www.gnu.org/software/)**                                         | Compiler-only toolchain with GCC and binutils.                                                                                                                                                                                                                                               |
 | **gompi**                                                                       | GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.                                                                                                                                                                                                   |
@@ -135,7 +135,7 @@
 ## Tools
 
 | Module                                                                | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
-| --------------------------------------------------------------------* | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| --------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[Bash](http://www.gnu.org/software/bash)**                          | Bash is an sh-compatible command language interpreter that executes commands read from the standard input or from a file. Bash also incorporates useful features from the Korn and C shells (ksh and csh).                                                                                                                                                                                                                                                            |
 | **[binutils](http://directory.fsf.org/project/binutils/)**            | binutils: GNU binary utilities                                                                                                                                                                                                                                                                                                                                                                                                                                        |
 | **[bzip2](http://www.bzip.org/)**                                     | bzip2 is a freely available, patent free, high-quality data compressor. It typically compresses files to within 10% to 15% of the best available techniques (the PPM family of statistical compressors), whilst being around twice as fast at compression and six times faster at decompression.                                                                                                                                                                      |
@@ -158,7 +158,7 @@
 ## Vis
 
 | Module                                                            | Description                                                                                                                                                                                                                                      |
-| ----------------------------------------------------------------* | -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ----------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
 | **[gettext](http://www.gnu.org/software/gettext/)**               | GNU \`gettext' is an important step for the GNU Translation Project, as it is an asset on which we may build many other steps. This package offers to programmers, translators, and even users, a well integrated set of tools and documentation |
 | **[GLib](http://www.gtk.org/)**                                   | GLib is one of the base libraries of the GTK+ project                                                                                                                                                                                            |
 | **[Tk](http://www.tcl.tk/)**                                      | Tk is an open source, cross-platform widget toolkit that provides a library of basic elements for building a graphical user interface (GUI) in many different programming languages.                                                             |
diff --git a/docs.it4i/modules-salomon.md b/docs.it4i/modules-salomon.md
index 80d053b3d9347251cfb046cbf5a78b73a3ff2b59..7ef6398e700ca885e1e969d5fd4051f2eecb883f 100644
--- a/docs.it4i/modules-salomon.md
+++ b/docs.it4i/modules-salomon.md
@@ -3,17 +3,17 @@
 ## Core
 
 | Module      | Description |
-| ----------* | ----------* |
+| ----------- | ----------- |
 | **lmod**    |             |
 | **settarg** |             |
 
 ## Bio
 
 | Module                                                                  | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
-| ----------------------------------------------------------------------* | -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
-| **[almost](http://www-almost.ch.cam.ac.uk/site)**                       | all atom molecular simulation toolkit * is a fast and flexible molecular modeling environment that provides powerful and efficient algorithms for molecular simulation, homology modeling, de novo design and ab-initio calculations.                                                                                                                                                                                                                                                                        |
+| ----------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| **[almost](http://www-almost.ch.cam.ac.uk/site)**                       | all atom molecular simulation toolkit: a fast and flexible molecular modeling environment that provides powerful and efficient algorithms for molecular simulation, homology modeling, de novo design and ab-initio calculations.                                                                                                                                                                                                                                                                            |
 | **[Amber](http://ambermd.org)**                                         | A set of molecular mechanical force fields for the simulation of biomolecules                                                                                                                                                                                                                                                                                                                                                                                                                                |
-| **[BCFtools](http://www.htslib.org/)**                                  | Samtools is a suite of programs for interacting with high-throughput sequencing data. BCFtools * Reading/writing BCF2/VCF/gVCF files and calling/filtering/summarising SNP and short indel sequence variants                                                                                                                                                                                                                                                                                                 |
+| **[BCFtools](http://www.htslib.org/)**                                  | Samtools is a suite of programs for interacting with high-throughput sequencing data. BCFtools - Reading/writing BCF2/VCF/gVCF files and calling/filtering/summarising SNP and short indel sequence variants                                                                                                                                                                                                                                                                                                 |
 | **[BWA](http://bio-bwa.sourceforge.net/)**                              | Burrows-Wheeler Aligner (BWA) is an efficient program that aligns relatively short nucleotide sequences against a long reference sequence such as the human genome.                                                                                                                                                                                                                                                                                                                                          |
 | **[FastQC](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/)** | FastQC is a quality control application for high throughput sequence data. It reads in sequence data in a variety of formats and can either provide an interactive application to review the results of several different QC checks, or create an HTML based report which can be integrated into a pipeline.                                                                                                                                                                                                 |
 | **[GATK](http://www.broadinstitute.org/gatk/)**                         | The Genome Analysis Toolkit or GATK is a software package developed at the Broad Institute to analyse next-generation resequencing data. The toolkit offers a wide variety of tools, with a primary focus on variant discovery and genotyping as well as strong emphasis on data quality assurance. Its robust architecture, powerful processing engine and high-performance computing features make it capable of taking on projects of any size.                                                           |
@@ -29,7 +29,7 @@
 ## Cae
 
 | Module                                   | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
-| ---------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ---------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **Adams**                                |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
 | **ANSYS**                                |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
 | **COMSOL**                               |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
@@ -42,7 +42,7 @@
 ## Chem
 
 | Module                                                                             | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
-| ---------------------------------------------------------------------------------* | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ---------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[ABINIT](http://www.abinit.org/)**                                               | Abinit is a plane wave pseudopotential code for doing condensed phase electronic structure calculations using DFT.                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
 | **[CP2K](http://www.cp2k.org/)**                                                   | CP2K is a freely available (GPL) program, written in Fortran 95, to perform atomistic and molecular simulations of solid state, liquid, molecular and biological systems. It provides a general framework for different methods such as e.g. density functional theory (DFT) using a mixed Gaussian and plane waves approach (GPW), and classical pair and many-body potentials.                                                                                                                                                                                                |
 | **[LAMMPS](http://lammps.sandia.gov)**                                             | LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Has potentials for solid-state materials (metals, semiconductors) and soft matter (biomolecules, polymers) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.                                                                                                                                                                 |
@@ -52,7 +52,7 @@
 | **Molpro**                                                                         |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
 | **[NAMD](http://www.ks.uiuc.edu/Research/namd/)**                                  | NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems.                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
 | **[NWChem](http://www.nwchem-sw.org)**                                             | NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters. NWChem software can handle: biomolecules, nanostructures, and solid-state; from quantum to classical, and all combinations; Gaussian basis functions or plane-waves; scaling from one to thousands of processors; properties and relativity. |
-| **[ORCA](http://cec.mpg.de/forum/)**                                               | ORCA is a flexible, efficient and easy-to-use general purpose tool for quantum chemistry with specific emphasis on spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods ranging from semiempirical methods to DFT to single* and multireference correlated ab initio methods. It can also treat environmental and relativistic effects.                                                                                                                                                                            |
+| **[ORCA](http://cec.mpg.de/forum/)**                                               | ORCA is a flexible, efficient and easy-to-use general purpose tool for quantum chemistry with specific emphasis on spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods ranging from semiempirical methods to DFT to single- and multireference correlated ab initio methods. It can also treat environmental and relativistic effects.                                                                                                                                                                            |
 | **[QuantumESPRESSO](http://www.pwscf.org/)**                                       | Quantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials (both norm-conserving and ultrasoft).                                                                                                                                                                                                                                                                                                                      |
 | **[S4MPLE](http://infochim.u-strasbg.fr/spip.php?rubrique152)**                    | S4MPLE (Sampler For Multiple Protein-Ligand Entities) is a flexible molecular modeling tool, supporting empirical force field-driven conformational sampling and geometry optimization heuristics using a hybrid genetic algorithm (GA).                                                                                                                                                                                                                                                                                                                                        |
 | **Scipion**                                                                        |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
@@ -61,21 +61,21 @@
 ## Compiler
 
 | Module                                                        | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
-| ------------------------------------------------------------* | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[BerkeleyUPC](http://upc.lbl.gov)**                         | The goal of the Berkeley UPC compiler group is to develop a portable, high performance implementation of UPC for large-scale multiprocessors, PC clusters, and clusters of shared memory multiprocessors.                                                                                                                                                                                                                                                                                            |
-| **[Clang](http://clang.llvm.org/)**                           | C, C++, Objective-C compiler, based on LLVM. Does not include C++ standard library -* use libstdc++ from GCC.                                                                                                                                                                                                                                                                                                                                                                                        |
+| **[Clang](http://clang.llvm.org/)**                           | C, C++, Objective-C compiler, based on LLVM. Does not include C++ standard library -- use libstdc++ from GCC.                                                                                                                                                                                                                                                                                                                                                                                        |
 | **[GCC](http://gcc.gnu.org/)**                                | The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).                                                                                                                                                                                                                                                                                                                                   |
 | **[GCCcore](http://gcc.gnu.org/)**                            | The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).                                                                                                                                                                                                                                                                                                                                   |
 | **[icc](http://software.intel.com/en-us/intel-compilers/)**   | C and C++ compiler from Intel                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
 | **[ifort](http://software.intel.com/en-us/intel-compilers/)** | Fortran compiler from Intel                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
-| **[LLVM](http://llvm.org/)**                                  | The LLVM Core libraries provide a modern source* and target-independent optimizer, along with code generation support for many popular CPUs (as well as some less common ones!) These libraries are built around a well specified code representation known as the LLVM intermediate representation ("LLVM IR"). The LLVM Core libraries are well documented, and it is particularly easy to invent your own language (or port an existing compiler) to use LLVM as an optimizer and code generator. |
+| **[LLVM](http://llvm.org/)**                                  | The LLVM Core libraries provide a modern source- and target-independent optimizer, along with code generation support for many popular CPUs (as well as some less common ones!). These libraries are built around a well specified code representation known as the LLVM intermediate representation ("LLVM IR"). The LLVM Core libraries are well documented, and it is particularly easy to invent your own language (or port an existing compiler) to use LLVM as an optimizer and code generator. |
 | **[OpenCoarrays](http://www.opencoarrays.org/)**              | A transport layer for coarray Fortran compilers.                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
 | **PGI**                                                       |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
 
 ## Data
 
 | Module                                                             | Description                                                                                                                                                                                                                                                                                                                                                                        |
-| -----------------------------------------------------------------* | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[GDAL](http://www.gdal.org/)**                                   | GDAL is a translator library for raster geospatial data formats that is released under an X/MIT style Open Source license by the Open Source Geospatial Foundation. As a library, it presents a single abstract data model to the calling application for all supported formats. It also comes with a variety of useful commandline utilities for data translation and processing. |
 | **[h5py](http://www.h5py.org/)**                                   | HDF5 for Python (h5py) is a general-purpose Python interface to the Hierarchical Data Format library, version 5. HDF5 is a versatile, mature scientific software library designed for the fast, flexible storage of enormous amounts of data.                                                                                                                                      |
 | **[HDF5](http://www.hdfgroup.org/HDF5/)**                          | HDF5 is a unique technology suite that makes possible the management of extremely large and complex data collections.                                                                                                                                                                                                                                                              |
@@ -85,18 +85,18 @@
 ## Debugger
 
 | Module                                                                                | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
-| ------------------------------------------------------------------------------------* | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **aislinn**                                                                           |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
 | **DDT**                                                                               |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
-| **[Forge](http://www.allinea.com/products/develop-allinea-forge)**                    | Allinea Forge is the complete toolsuite for software development * with everything needed to debug, profile, optimize, edit and build C, C++ and FORTRAN applications on Linux for high performance * from single threads through to complex parallel HPC codes with MPI, OpenMP, threads or CUDA.                                                                                                                                                                                                                                                                                       |
-| **[PerformanceReports](http://www.allinea.com/products/allinea-performance-reports)** | Allinea Performance Reports are the most effective way to characterize and understand the performance of HPC application runs. One single-page HTML report elegantly answers a range of vital questions for any HPC site. * Is this application well-optimized for the system and the processors it is running on? * Does it benefit from running at this scale? * Are there I/O, networking or threading bottlenecks affecting performance? * Which hardware, software or configuration changes can we make to improve performance further. * How much energy did this application use? |
+| **[Forge](http://www.allinea.com/products/develop-allinea-forge)**                    | Allinea Forge is the complete toolsuite for software development - with everything needed to debug, profile, optimize, edit and build C, C++ and FORTRAN applications on Linux for high performance - from single threads through to complex parallel HPC codes with MPI, OpenMP, threads or CUDA.                                                                                                                                                                                                                                                                                       |
+| **[PerformanceReports](http://www.allinea.com/products/allinea-performance-reports)** | Allinea Performance Reports are the most effective way to characterize and understand the performance of HPC application runs. One single-page HTML report elegantly answers a range of vital questions for any HPC site. - Is this application well-optimized for the system and the processors it is running on? - Does it benefit from running at this scale? - Are there I/O, networking or threading bottlenecks affecting performance? - Which hardware, software or configuration changes can we make to improve performance further? - How much energy did this application use? |
 | **TotalView**                                                                         |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
 | **[Valgrind](http://valgrind.org/downloads/)**                                        | Valgrind: Debugging and profiling tools                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
 
 ## Devel
 
 | Module                                                                            | Description                                                                                                                                                                                                                                                                                                                                                                                                                |
-| --------------------------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| --------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[ant](http://ant.apache.org/)**                                                 | Apache Ant is a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other. The main known usage of Ant is the build of Java applications.                                                                                                                                                                                  |
 | **[Autoconf](http://www.gnu.org/software/autoconf/)**                             | Autoconf is an extensible package of M4 macros that produce shell scripts to automatically configure software source code packages. These scripts can adapt the packages to many kinds of UNIX-like systems without manual user intervention. Autoconf creates a configuration script for a package from a template file that lists the operating system features that the package can use, in the form of M4 macro calls. |
 | **[Automake](http://www.gnu.org/software/automake/automake.html)**                | Automake: GNU Standards-compliant Makefile generator                                                                                                                                                                                                                                                                                                                                                                       |
@@ -136,15 +136,15 @@
 ## Geo
 
 | Module                                              | Description                                                                                                                                                                                                                                                                                                                                                                                                                  |
-| --------------------------------------------------* | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| --------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[DCW](http://gmt.soest.hawaii.edu/projects/gmt)** | country polygons for GMT                                                                                                                                                                                                                                                                                                                                                                                                     |
 | **[GMT](http://gmt.soest.hawaii.edu/)**             | GMT is an open source collection of about 80 command-line tools for manipulating geographic and Cartesian data sets (including filtering, trend fitting, gridding, projecting, etc.) and producing PostScript illustrations ranging from simple x-y plots via contour maps to artificially illuminated surfaces and 3D perspective views; the GMT supplements add another 40 more specialized and discipline-specific tools. |
-| **[PROJ_4](http://proj.osgeo.org)**                 | PROJ.4 * Cartographic Projections Library originally written by Gerald Evenden then of the USGS.                                                                                                                                                                                                                                                                                                                             |
+| **[PROJ_4](http://proj.osgeo.org)**                 | PROJ.4 - Cartographic Projections Library originally written by Gerald Evenden then of the USGS.                                                                                                                                                                                                                                                                                                                             |
 
 ## Lang
 
 | Module                                                              | Description                                                                                                                                                                                                                                                                                                                                                                                                                                            |
-| ------------------------------------------------------------------* | -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
 | **[Bison](http://www.gnu.org/software/bison)**                      | Bison is a general-purpose parser generator that converts an annotated context-free grammar into a deterministic LR or generalized LR (GLR) parser employing LALR(1) parser tables.                                                                                                                                                                                                                                                                    |
 | **[byacc](http://invisible-island.net/byacc/byacc.html)**           | Berkeley Yacc (byacc) is generally conceded to be the best yacc variant available. In contrast to bison, it is written to avoid dependencies upon a particular compiler.                                                                                                                                                                                                                                                                               |
 | **[flex](http://flex.sourceforge.net/)**                            | Flex (Fast Lexical Analyzer) is a tool for generating scanners. A scanner, sometimes called a tokenizer, is a program which recognizes lexical patterns in text.                                                                                                                                                                                                                                                                                       |
@@ -167,7 +167,7 @@
 ## Lib
 
 | Module                                                                                           | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
-| -----------------------------------------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[FOX](http://fox-toolkit.org)**                                                                | FOX is a C++ based Toolkit for developing Graphical User Interfaces easily and effectively. It offers a wide, and growing, collection of Controls, and provides state of the art facilities such as drag and drop, selection, as well as OpenGL widgets for 3D graphical manipulation.                                                                                                                                                                                                                                                                                                       |
 | **[libdrm](http://dri.freedesktop.org)**                                                         | Direct Rendering Manager runtime library.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
 | **[libffi](http://sourceware.org/libffi/)**                                                      | The libffi library provides a portable, high level programming interface to various calling conventions. This allows a programmer to call any function specified by a call interface description at run-time.                                                                                                                                                                                                                                                                                                                                                                                |
@@ -197,16 +197,16 @@
 | **[spatialindex](https://libspatialindex.github.io/index.html)**                                 | The purpose of this library is to provide: an extensible framework that will support robust spatial indexing methods; support for sophisticated spatial queries (range, point location, nearest neighbor and k-nearest neighbor as well as parametric queries defined by spatial constraints should be easy to deploy and run); and easy to use interfaces for inserting, deleting and updating information.                                                                                                 |
 | **[SpatiaLite](https://www.gaia-gis.it/fossil/libspatialite/index)**                             | SpatiaLite is an open source library intended to extend the SQLite core to support fully fledged Spatial SQL capabilities.                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
 | **[tbb](http://software.intel.com/en-us/articles/intel-tbb/)**                                   | Intel Threading Building Blocks 4.0 (Intel TBB) is a widely used, award-winning C++ template library for creating reliable, portable, and scalable parallel applications. Use Intel TBB for a simple and rapid way of developing robust task-based parallel applications that scale to available processor cores, are compatible with multiple environments, and are easier to maintain. Intel TBB is the most proficient way to implement future-proof parallel applications that tap into the power and performance of multicore and manycore hardware platforms.                          |
-| **[zlib](http://www.zlib.net/)**                                                                 | zlib is designed to be a free, general-purpose, legally unencumbered -* that is, not covered by any patents -* lossless data-compression library for use on virtually any computer hardware and operating system.                                                                                                                                                                                                                                                                                                                                                                            |
+| **[zlib](http://www.zlib.net/)**                                                                 | zlib is designed to be a free, general-purpose, legally unencumbered -- that is, not covered by any patents -- lossless data-compression library for use on virtually any computer hardware and operating system.                                                                                                                                                                                                                                                                                                                                                                            |
 
 ## Math
 
 | Module                                                                   | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
-| -----------------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[FIAT](https://bitbucket.org/fenics-project/fiat)**                    | The FInite element Automatic Tabulator FIAT supports generation of arbitrary order instances of the Lagrange elements on lines, triangles, and tetrahedra. It is also capable of generating arbitrary order instances of Jacobi-type quadrature rules on the same element shapes.                                                                                                                                                                                                                                                                                    |
-| **[GEOS](http://trac.osgeo.org/geos)**                                   | GEOS (Geometry Engine * Open Source) is a C++ port of the Java Topology Suite (JTS)                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
+| **[GEOS](http://trac.osgeo.org/geos)**                                   | GEOS (Geometry Engine - Open Source) is a C++ port of the Java Topology Suite (JTS)                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
 | **[GMP](http://gmplib.org/)**                                            | GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers.                                                                                                                                                                                                                                                                                                                                                                                                                                |
-| **[Harminv](http://ab-initio.mit.edu/wiki/index.php/Harminv)**           | Harminv is a free program (and accompanying library) to solve the problem of harmonic inversion * given a discrete-time, finite-length signal that consists of a sum of finitely-many sinusoids (possibly exponentially decaying) in a given bandwidth, it determines the frequencies, decay constants, amplitudes, and phases of those sinusoids.                                                                                                                                                                                                                   |
+| **[Harminv](http://ab-initio.mit.edu/wiki/index.php/Harminv)**           | Harminv is a free program (and accompanying library) to solve the problem of harmonic inversion - given a discrete-time, finite-length signal that consists of a sum of finitely-many sinusoids (possibly exponentially decaying) in a given bandwidth, it determines the frequencies, decay constants, amplitudes, and phases of those sinusoids.                                                                                                                                                                                                                   |
 | **[ISL](http://isl.gforge.inria.fr/)**                                   | isl is a library for manipulating sets and relations of integer points bounded by linear constraints.                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
 | **[METIS](http://glaros.dtc.umn.edu/gkhome/metis/metis/overview)**       | METIS is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes.                                                                                                                                                                                                                                                          |
 | **MPC**                                                                  |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
@@ -221,7 +221,7 @@
 ## Mpi
 
 | Module                                                               | Description                                                                                                                                                                                                                           |
-| -------------------------------------------------------------------* | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| -------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[impi](http://software.intel.com/en-us/intel-mpi-library/)**       | The Intel(R) MPI Library for Linux\* OS is a multi-fabric message passing library based on ANL MPICH2 and OSU MVAPICH2. The Intel MPI Library for Linux OS implements the Message Passing Interface, version 2 (MPI-2) specification. |
 | **[MPI_NET](http://www.osl.iu.edu/research/mpi.net/)**               | MPI.NET is a high-performance, easy-to-use implementation of the Message Passing Interface (MPI) for Microsoft's .NET environment                                                                                                     |
 | **[MPICH](http://www.mpich.org/)**                                   | MPICH v3.x is an open source high-performance MPI 3.0 implementation. It does not support InfiniBand (use MVAPICH2 with InfiniBand devices).                                                                                          |
@@ -232,7 +232,7 @@
 ## Numlib
 
 | Module                                                                       | Description                                                                                                                                                                                                                                                                                                |
-| ---------------------------------------------------------------------------* | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ---------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[Armadillo](http://arma.sourceforge.net/)**                                | Armadillo is an open-source C++ linear algebra library (matrix maths) aiming towards a good balance between speed and ease of use. Integer, floating point and complex numbers are supported, as well as a subset of trigonometric and statistics functions.                                               |
 | **[arpack-ng](http://forge.scilab.org/index.php/p/arpack-ng/)**              | ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems.                                                                                                                                                                                                         |
 | **[FFTW](http://www.fftw.org)**                                              | FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data.                                                                                                                                   |
@@ -248,8 +248,8 @@
 ## Perf
 
 | Module                                                              | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
-| ------------------------------------------------------------------* | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
-| **[Advisor](https://software.intel.com/intel-advisor-xe)**          | Vectorization Optimization and Thread Prototyping * Vectorize & thread code or performance “dies” * Easy workflow + data + tips = faster code faster * Prioritize, Prototype & Predict performance gain                                                                                                                                                                                                                                                                            |
+| ------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| **[Advisor](https://software.intel.com/intel-advisor-xe)**          | Vectorization Optimization and Thread Prototyping - Vectorize & thread code or performance “dies” - Easy workflow + data + tips = faster code faster - Prioritize, Prototype & Predict performance gain                                                                                                                                                                                                                                                                            |
 | **[Cube](http://www.scalasca.org/software/cube-4.x/download.html)** | Cube, which is used as performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity.                                                                        |
 | **[ipp](http://software.intel.com/en-us/articles/intel-ipp/)**      | Intel Integrated Performance Primitives (Intel IPP) is an extensive library of multicore-ready, highly optimized software functions for multimedia, data processing, and communications applications. Intel IPP offers thousands of optimized functions covering frequently used fundamental algorithms.                                                                                                                                                                           |
 | **MAP**                                                             |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
@@ -266,7 +266,7 @@
 ## Phys
 
 | Module                                                   | Description                                                                                                                                  |
-| -------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------* |
+| -------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[Meep](http://ab-initio.mit.edu/wiki/index.php/Meep)** | Meep (or MEEP) is a free finite-difference time-domain (FDTD) simulation software package developed at MIT to model electromagnetic systems. |
 | **[phono3py](http://python.org/)**                       | phono3py is a Python package for phonon-phonon interaction calculations, such as lattice thermal conductivity, using the supercell approach.                 |
 | **[phonopy](http://python.org/)**                        | Phonopy is a Python package for phonon calculations at harmonic and quasi-harmonic levels.                                                                   |
@@ -276,7 +276,7 @@
 ## System
 
 | Module                                                                 | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
-| ---------------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ---------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[eudev](https://wiki.gentoo.org/wiki/Project:Eudev)**                | eudev is a fork of systemd-udev with the goal of obtaining better compatibility with existing software such as OpenRC and Upstart, older kernels, various toolchains and anything else required by users and various distributions.                                                                                                                                                                                                                                                                                                                                                                                  |
 | **[hwloc](http://www.open-mpi.org/projects/hwloc/)**                   | The Portable Hardware Locality (hwloc) software package provides a portable abstraction (across OS, versions, architectures, ...) of the hierarchical topology of modern architectures, including NUMA memory nodes, sockets, shared caches, cores and simultaneous multithreading. It also gathers various system attributes such as cache and memory information as well as the locality of I/O devices such as network interfaces, InfiniBand HCAs or GPUs. It primarily aims at helping applications with gathering information about modern computing hardware so as to exploit it accordingly and efficiently. |
 | **[libpciaccess](http://cgit.freedesktop.org/xorg/lib/libpciaccess/)** | Generic PCI access library.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
@@ -284,7 +284,7 @@
 ## Toolchain
 
 | Module                                                                          | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
-| ------------------------------------------------------------------------------* | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[foss]((none))**                                                              | GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
 | **[GNU](http://www.gnu.org/software/)**                                         | Compiler-only toolchain with GCC and binutils.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
 | **[gompi]((none))**                                                             | GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
@@ -293,12 +293,12 @@
 | **[iimpi](http://software.intel.com/en-us/intel-cluster-toolkit-compiler/)**    | Intel C/C++ and Fortran compilers, alongside Intel MPI.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
 | **[intel](http://software.intel.com/en-us/intel-cluster-toolkit-compiler/)**    | Intel Cluster Toolkit Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MPI & Intel MKL.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
 | **[PRACE](http://www.prace-ri.eu/PRACE-Common-Production)**                     | The PRACE Common Production Environment (PCPE) is a set of software tools and libraries that are planned to be available on all PRACE execution sites. The PCPE also defines a set of environment variables that try to make compilation on all sites as homogeneous and simple as possible.                                                                                                                                                                                                                                                                                                                                                                                      |
-| **[prace](http://www.prace-ri.eu/PRACE-Common-Production)**                     | \***\* PRACE Common Production Environment (PCPE) \*\*** Initialisation of the PRACE common production environment. This allows you to assume that the following tools/libraries are available by default in your PATH/environment. _ Fortran, C, C++ Compilers _ MPI _ BLAS, LAPACK, BLACS, ScaLAPACK _ FFTW _ HDF5, NetCDF The compiler commands on are: _ mpif90 * Fortran compiler _ mpicc * C compiler _ mpicxx * C++ compiler For more information on the PCPE please see the documentation at: <http://www.prace-ri.eu/PRACE-Common-Production> For help using this system, please see Local User Guide available at: <http://prace-ri.eu/Best-Practice-Guide-Anselm-HTML> |
+| **[prace](http://www.prace-ri.eu/PRACE-Common-Production)**                     | **PRACE Common Production Environment (PCPE)** Initialisation of the PRACE common production environment. This allows you to assume that the following tools/libraries are available by default in your PATH/environment: Fortran, C, C++ compilers; MPI; BLAS, LAPACK, BLACS, ScaLAPACK; FFTW; HDF5, NetCDF. The compiler commands are: mpif90 - Fortran compiler, mpicc - C compiler, mpicxx - C++ compiler. For more information on the PCPE please see the documentation at: <http://www.prace-ri.eu/PRACE-Common-Production> For help using this system, please see the Local User Guide available at: <http://prace-ri.eu/Best-Practice-Guide-Anselm-HTML> |
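+
+As a quick check of the PCPE compiler wrappers listed above (a hypothetical session; the exact module name/version and the source files are illustrative):
+
+```bash
+$ module load prace
+$ mpicc  -o hello.x hello.c    # C compiler wrapper
+$ mpif90 -o hello.x hello.f90  # Fortran compiler wrapper
+```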
 
 ## Tools
 
 | Module                                                                                   | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
-| ---------------------------------------------------------------------------------------* | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ---------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[APR](http://apr.apache.org/)**                                                        | Apache Portable Runtime (APR) libraries.                                                                                                                                                                                                                                                                                                                                                                                                                                           |
 | **[APR-util](http://apr.apache.org/)**                                                   | Apache Portable Runtime (APR) util libraries.                                                                                                                                                                                                                                                                                                                                                                                                                                      |
 | **[Bash](http://www.gnu.org/software/bash)**                                             | Bash is an sh-compatible command language interpreter that executes commands read from the standard input or from a file. Bash also incorporates useful features from the Korn and C shells (ksh and csh).                                                                                                                                                                                                                                                                         |
@@ -343,7 +343,7 @@
 ## Vis
 
 | Module                                                            | Description                                                                                                                                                                                                                                                                                                                |
-| ----------------------------------------------------------------* | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------* |
+| ----------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | **[cairo](http://cairographics.org)**                             | Cairo is a 2D graphics library with support for multiple output devices. Currently supported output targets include the X Window System (via both Xlib and XCB), Quartz, Win32, image buffers, PostScript, PDF, and SVG file output. Experimental backends include OpenGL, BeOS, OS/2, and DirectFB                        |
 | **[ffmpeg](https://www.ffmpeg.org/)**                             | A complete, cross-platform solution to record, convert and stream audio and video.                                                                                                                                                                                                                                         |
 | **[fixesproto](http://www.freedesktop.org/wiki/Software/xlibs)**  | X.org FixesProto protocol headers.                                                                                                                                                                                                                                                                                         |
@@ -371,7 +371,7 @@
 | **[libXrender](http://www.freedesktop.org/wiki/Software/xlibs)**  | X11 client-side library                                                                                                                                                                                                                                                                                                    |
 | **[libXt](http://www.freedesktop.org/wiki/Software/xlibs)**       | libXt provides the X Toolkit Intrinsics, an abstract widget library upon which other toolkits are based. Xt is the basis for many toolkits, including the Athena widgets (Xaw), and LessTif (a Motif implementation).                                                                                                      |
 | **matplotlib**                                                    |                                                                                                                                                                                                                                                                                                                            |
-| **[Mesa](http://www.mesa3d.org/)**                                | Mesa is an open-source implementation of the OpenGL specification * a system for rendering interactive 3D graphics.                                                                                                                                                                                                        |
+| **[Mesa](http://www.mesa3d.org/)**                                | Mesa is an open-source implementation of the OpenGL specification - a system for rendering interactive 3D graphics.                                                                                                                                                                                                        |
 | **[motif](http://motif.ics.com/)**                                | Motif refers to both a graphical user interface (GUI) specification and the widget toolkit for building applications that follow that specification under the X Window System on Unix and other POSIX-compliant systems. It was the standard toolkit for the Common Desktop Environment and thus for Unix.                 |
 | **[OpenCV](http://opencv.org/)**                                  | OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products.                                     |
 | **[OpenDX](http://www.opendx.org)**                               | Open source visualization software package based on IBM's Visualization Data Explorer.                                                                                                                                                                                                                                     |
diff --git a/docs.it4i/pbspro.md b/docs.it4i/pbspro.md
index e89ddfe72d54ff6b0e3fce2ab53f47bb2c6bbac5..9dd4ccdab63753ad2fe475a90a857e9c50cdd4df 100644
--- a/docs.it4i/pbspro.md
+++ b/docs.it4i/pbspro.md
@@ -1,4 +1,4 @@
-*   ![pdf](img/pdf.png)[PBS Pro Programmer's Guide](http://www.pbsworks.com/pdfs/PBSProgramGuide13.0.pdf)
-*   ![pdf](img/pdf.png)[PBS Pro Quick Start Guide](http://www.pbsworks.com/pdfs/PBSQuickStartGuide13.0.pdf)
-*   ![pdf](img/pdf.png)[PBS Pro Reference Guide](http://www.pbsworks.com/pdfs/PBSReferenceGuide13.0.pdf)
-*   ![pdf](img/pdf.png)[PBS Pro User's Guide](http://www.pbsworks.com/pdfs/PBSUserGuide13.0.pdf)
+-   ![pdf](img/pdf.png)[PBS Pro Programmer's Guide](http://www.pbsworks.com/pdfs/PBSProgramGuide13.0.pdf)
+-   ![pdf](img/pdf.png)[PBS Pro Quick Start Guide](http://www.pbsworks.com/pdfs/PBSQuickStartGuide13.0.pdf)
+-   ![pdf](img/pdf.png)[PBS Pro Reference Guide](http://www.pbsworks.com/pdfs/PBSReferenceGuide13.0.pdf)
+-   ![pdf](img/pdf.png)[PBS Pro User's Guide](http://www.pbsworks.com/pdfs/PBSUserGuide13.0.pdf)
diff --git a/docs.it4i/salomon/7d-enhanced-hypercube.md b/docs.it4i/salomon/7d-enhanced-hypercube.md
index 158a2bab6f743f718627a8378f8ce1d64acf6651..11151010082595be1cb46a98156bc86ef6c4d96c 100644
--- a/docs.it4i/salomon/7d-enhanced-hypercube.md
+++ b/docs.it4i/salomon/7d-enhanced-hypercube.md
@@ -5,9 +5,9 @@
 ![](../img/7D_Enhanced_hypercube.png)
 
 | Node type                            | Count | Short name       | Long name                | Rack  |
-| -----------------------------------* | ----* | ---------------* | -----------------------* | ----* |
-| M-Cell compute nodes w/o accelerator | 576   | cns1 -cns576     | r1i0n0 * r4i7n17         | 1-4   |
-| compute nodes MIC accelerated        | 432   | cns577 * cns1008 | r21u01n577 * r37u31n1008 | 21-38 |
+| ------------------------------------ | ----- | ---------------- | ------------------------ | ----- |
+| M-Cell compute nodes w/o accelerator | 576   | cns1 - cns576    | r1i0n0 - r4i7n17         | 1-4   |
+| compute nodes MIC accelerated        | 432   | cns577 - cns1008 | r21u01n577 - r37u31n1008 | 21-38 |
 
 ### IB Topology
 
diff --git a/docs.it4i/salomon/capacity-computing.md b/docs.it4i/salomon/capacity-computing.md
index d7137ef12c948c37651cd9c1099ae9c97330d669..b8c0cd66e7739a73a5fd5332f54eab694b4bb6ea 100644
--- a/docs.it4i/salomon/capacity-computing.md
+++ b/docs.it4i/salomon/capacity-computing.md
@@ -7,11 +7,11 @@ In many cases, it is useful to submit huge (100+) number of computational jobs i
 However, executing huge number of jobs via the PBS queue may strain the system. This strain may result in slow response to commands, inefficient scheduling and overall degradation of performance and user experience, for all users. For this reason, the number of jobs is **limited to 100 per user, 1500 per job array**
 
 !!! Note
-     Please follow one of the procedures below, in case you wish to schedule more than 100 jobs at a time.
+	Please follow one of the procedures below, in case you wish to schedule more than 100 jobs at a time.
 
-*   Use [Job arrays](capacity-computing.md#job-arrays) when running huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
-*   Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
-*   Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running huge number of single core jobs
+-   Use [Job arrays](capacity-computing/#job-arrays) when running huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
+-   Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
+-   Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running huge number of single core jobs
 
 ## Policy
 
@@ -21,13 +21,13 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 ## Job Arrays
 
 !!! Note
-     Huge number of jobs may be easily submitted and managed as a job array.
+	Huge number of jobs may be easily submitted and managed as a job array.
 
 A job array is a compact representation of many jobs, called subjobs. The subjobs share the same job script, and have the same values for all attributes and resources, with the following exceptions:
 
-*   each subjob has a unique index, $PBS_ARRAY_INDEX
-*   job Identifiers of subjobs only differ by their indices
-*   the state of subjobs can differ (R,Q,...etc.)
+-   each subjob has a unique index, $PBS_ARRAY_INDEX
+-   job Identifiers of subjobs only differ by their indices
+-   the state of subjobs can differ (R,Q,...etc.)
 
 All subjobs within a job array have the same scheduling priority and schedule as independent jobs. Entire job array is submitted through a single qsub command and may be managed by qdel, qalter, qhold, qrls and qsig commands as a single job.
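+
+For example (a minimal sketch; the job name, index range and the 12345[].isrv5 job ID are illustrative):
+
+```bash
+$ qsub -N JOBNAME -J 1-900 jobscript   # submit 900 subjobs as a single array
+$ qhold 12345[].isrv5                  # hold the entire array as one job
+$ qrls 12345[].isrv5                   # release it again
+```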
 
@@ -39,7 +39,7 @@ Example:
 
 Assume we have 900 input files with name beginning with "file" (e. g. file001, ..., file900). Assume we would like to use each of these input files with program executable myprog.x, each as a separate job.
 
-First, we create a tasklist file (or subjobs list), listing all tasks (subjobs) * all input files in our example:
+First, we create a tasklist file (or subjobs list), listing all tasks (subjobs) - all input files in our example:
 
 ```bash
 $ find . -name 'file*' > tasklist
@@ -103,8 +103,8 @@ $ qstat -a 506493[].isrv5
 isrv5:
                                                             Req'd  Req'd   Elap
 Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
---------------* -------* -*  |---|---| -----* --* --* -----* ----* * -----
-12345[].dm2     user2    qprod    xx          13516   1  24    -*  00:50 B 00:02
+--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
+12345[].isrv5   user2    qprod    xx          13516   1  24    --  00:50 B 00:02
 ```
 
 The status B means that some subjobs are already running.
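+
+To list the state of each individual subjob rather than the array summary, the qstat -t option expands job arrays (standard PBS Pro behaviour):
+
+```bash
+$ qstat -u $USER -t
+```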
@@ -117,14 +117,14 @@ $ qstat -a 12345[1-100].isrv5
 isrv5:
                                                             Req'd  Req'd   Elap
 Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
---------------* -------* -*  |---|---| -----* --* --* -----* ----* * -----
-12345[1].isrv5    user2    qprod    xx          13516   1  24    -*  00:50 R 00:02
-12345[2].isrv5    user2    qprod    xx          13516   1  24    -*  00:50 R 00:02
-12345[3].isrv5    user2    qprod    xx          13516   1  24    -*  00:50 R 00:01
-12345[4].isrv5    user2    qprod    xx          13516   1  24    -*  00:50 Q   --
+--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
+12345[1].isrv5    user2    qprod    xx          13516   1  24    --  00:50 R 00:02
+12345[2].isrv5    user2    qprod    xx          13516   1  24    --  00:50 R 00:02
+12345[3].isrv5    user2    qprod    xx          13516   1  24    --  00:50 R 00:01
+12345[4].isrv5    user2    qprod    xx          13516   1  24    --  00:50 Q   --
      .             .        .      .             .    .   .     .    .   .    .
     .             .        .      .             .    .   .     .    .   .    .
-12345[100].isrv5  user2    qprod    xx          13516   1  24    -*  00:50 Q   --
+12345[100].isrv5  user2    qprod    xx          13516   1  24    --  00:50 Q   --
 ```
 
 Delete the entire job array. Running subjobs will be killed, queueing subjobs will be deleted.
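+
+For instance, using the array job ID from the examples above:
+
+```bash
+$ qdel 12345[].isrv5
+```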
@@ -152,7 +152,7 @@ Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/)
 ## GNU Parallel
 
 !!! Note
-     Use GNU parallel to run many single core tasks on one node.
+	Use GNU parallel to run many single core tasks on one node.
 
 GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful for running single core jobs via the queue system on Salomon.
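+
+A minimal smoke test of GNU parallel itself (the module name follows the clusters' convention; the echo command is a stand-in for real work):
+
+```bash
+$ module add parallel
+$ seq 1 5 | parallel echo "processing task {}"   # {} expands to each input line
+```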
 
@@ -171,7 +171,7 @@ Example:
 
 Assume we have 101 input files with name beginning with "file" (e. g. file001, ..., file101). Assume we would like to use each of these input files with program executable myprog.x, each as a separate single core job. We call these single core jobs tasks.
 
-First, we create a tasklist file, listing all tasks * all input files in our example:
+First, we create a tasklist file, listing all tasks - all input files in our example:
 
 ```bash
 $ find . -name 'file*' > tasklist
@@ -224,12 +224,12 @@ In this example, we submit a job of 101 tasks. 24 input files will be processed
 ## Job Arrays and GNU Parallel
 
 !!! Note
-     Combine the Job arrays and GNU parallel for best throughput of single core jobs
+	Combine Job arrays and GNU parallel for the best throughput of single core jobs
 
 While job arrays are able to utilize all available computational nodes, GNU parallel can be used to efficiently run multiple single-core jobs on a single node. The two approaches may be combined to utilize all available (current and future) resources to execute single core jobs.
 
 !!! Note
-     Every subjob in an array runs GNU parallel to utilize all cores on the node
+	Every subjob in an array runs GNU parallel to utilize all cores on the node
 
 ### GNU Parallel, Shared jobscript
 
@@ -239,7 +239,7 @@ Example:
 
 Assume we have 992 input files with name beginning with "file" (e. g. file001, ..., file992). Assume we would like to use each of these input files with program executable myprog.x, each as a separate single core job. We call these single core jobs tasks.
 
-First, we create a tasklist file, listing all tasks * all input files in our example:
+First, we create a tasklist file, listing all tasks - all input files in our example:
 
 ```bash
 $ find . -name 'file*' > tasklist
@@ -267,7 +267,7 @@ SCR=/scratch/work/user/$USER/$PBS_JOBID/$PARALLEL_SEQ
 mkdir -p $SCR ; cd $SCR || exit
 
 # get individual task from tasklist with index from PBS JOB ARRAY and index from Parallel
-IDX=$(($PBS_ARRAY_INDEX + $PARALLEL_SEQ * 1))
+IDX=$(($PBS_ARRAY_INDEX + $PARALLEL_SEQ - 1))
 TASK=$(sed -n "${IDX}p" $PBS_O_WORKDIR/tasklist)
 [ -z "$TASK" ] && exit
 
@@ -284,7 +284,7 @@ cp output $PBS_O_WORKDIR/$TASK.out
 In this example, the jobscript executes in multiple instances in parallel, on all cores of a computing node. Variable $TASK expands to one of the input filenames from tasklist. We copy the input file to local scratch, execute myprog.x and copy the output file back to the submit directory, under the $TASK.out name. The numtasks file controls how many tasks will be run per subjob. Once a task is finished, a new task starts, until the number of tasks in the numtasks file is reached.
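+
+As a worked illustration of the index arithmetic above (hypothetical numbers: an array submitted with index step 24 and 24 tasks per subjob):
+
+```bash
+PBS_ARRAY_INDEX=25                  # the second subjob in such an array
+for PARALLEL_SEQ in 1 24; do        # the first and the last parallel instance
+    echo $(($PBS_ARRAY_INDEX + $PARALLEL_SEQ - 1))   # prints 25 and 48
+done
+# consecutive subjobs thus process disjoint 24-line slices of the tasklist
+```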
 
 !!! Note
-     Select  subjob walltime and number of tasks per subjob  carefully
+	Select subjob walltime and number of tasks per subjob carefully
 
 When deciding on these values, think about the following guiding rules:
 
diff --git a/docs.it4i/salomon/compute-nodes.md b/docs.it4i/salomon/compute-nodes.md
index f572c33ebbd0cd520adcc9edfc5a225fe3e02c92..ddcc4dc559013716f1e6eb30e85a011fb6940ebe 100644
--- a/docs.it4i/salomon/compute-nodes.md
+++ b/docs.it4i/salomon/compute-nodes.md
@@ -9,22 +9,22 @@ Compute nodes with MIC accelerator **contains two Intel Xeon Phi 7120P accelerat
 
 ### Compute Nodes Without Accelerator
 
-*   codename "grafton"
-*   576 nodes
-*   13 824 cores in total
-*   two Intel Xeon E5-2680v3, 12-core, 2.5 GHz processors per node
-*   128 GB of physical memory per node
+-   codename "grafton"
+-   576 nodes
+-   13 824 cores in total
+-   two Intel Xeon E5-2680v3, 12-core, 2.5 GHz processors per node
+-   128 GB of physical memory per node
 
 ![cn_m_cell](../img/cn_m_cell)
 
 ### Compute Nodes With MIC Accelerator
 
-*   codename "perrin"
-*   432 nodes
-*   10 368 cores in total
-*   two Intel Xeon E5-2680v3, 12-core, 2.5 GHz processors per node
-*   128 GB of physical memory per node
-*   MIC accelerator 2 x Intel Xeon Phi 7120P per node, 61-cores, 16 GB per accelerator
+-   codename "perrin"
+-   432 nodes
+-   10 368 cores in total
+-   two Intel Xeon E5-2680v3, 12-core, 2.5 GHz processors per node
+-   128 GB of physical memory per node
+-   MIC accelerator 2 x Intel Xeon Phi 7120P per node, 61-cores, 16 GB per accelerator
 
 ![cn_mic](../img/cn_mic-1)
 
@@ -34,19 +34,19 @@ Compute nodes with MIC accelerator **contains two Intel Xeon Phi 7120P accelerat
 
 ### UV 2000
 
-*   codename "UV2000"
-*   1 node
-*   112 cores in total
-*   14 x Intel Xeon E5-4627v2, 8-core, 3.3 GHz processors, in 14 NUMA nodes
-*   3328 GB of physical memory per node
-*   1 x NVIDIA GM200 (GeForce GTX TITAN X), 12 GB RAM
+-   codename "UV2000"
+-   1 node
+-   112 cores in total
+-   14 x Intel Xeon E5-4627v2, 8-core, 3.3 GHz processors, in 14 NUMA nodes
+-   3328 GB of physical memory per node
+-   1 x NVIDIA GM200 (GeForce GTX TITAN X), 12 GB RAM
 
 ![](../img/uv-2000.jpeg)
 
 ### Compute Nodes Summary
 
 | Node type                  | Count | Memory           | Cores                               |
-| -------------------------* | ----* | ---------------* | ----------------------------------* |
+| -------------------------- | ----- | ---------------- | ----------------------------------- |
 | Nodes without accelerator  | 576   | 128 GB            | 24 @ 2.5 GHz                     |
 | Nodes with MIC accelerator | 432   | 128 GB, 32 GB MIC | 24 @ 2.5 GHz, 61 @ 1.238 GHz MIC |
 | UV2000 SMP node            | 1     | 3328 GB           | 112 @ 3.3 GHz                    |
@@ -57,22 +57,22 @@ Salomon is equipped with Intel Xeon processors Intel Xeon E5-2680v3. Processors
 
 ### Intel Xeon E5-2680v3 Processor
 
-*   12-core
-*   speed: 2.5 GHz, up to 3.3 GHz using Turbo Boost Technology
-*   peak performance:  19.2 GFLOP/s per core
-*   caches:
-    *   Intel® Smart Cache:  30 MB
-*   memory bandwidth at the level of the processor: 68 GB/s
+-   12-core
+-   speed: 2.5 GHz, up to 3.3 GHz using Turbo Boost Technology
+-   peak performance:  19.2 GFLOP/s per core
+-   caches:
+    -   Intel® Smart Cache:  30 MB
+-   memory bandwidth at the level of the processor: 68 GB/s
 
 ### MIC Accelerator Intel Xeon Phi 7120P Processor
 
-*   61-core
-*   speed:  1.238 GHz, up to 1.333 GHz using Turbo Boost Technology
+-   61-core
+-   speed: 1.238 GHz, up to 1.333 GHz using Turbo Boost Technology
-*   peak performance:  18.4 GFLOP/s per core
-*   caches:
-    *   L2:  30.5 MB
-*   memory bandwidth at the level of the processor:  352 GB/s
+-   peak performance:  18.4 GFLOP/s per core
+-   caches:
+    -   L2:  30.5 MB
+-   memory bandwidth at the level of the processor:  352 GB/s
 
 ## Memory Architecture
 
@@ -80,28 +80,28 @@ Memory is equally distributed across all CPUs and cores for optimal performance.
 
 ### Compute Node Without Accelerator
 
-*   2 sockets
-*   Memory Controllers are integrated into processors.
-    *   8 DDR4 DIMMs per node
-    *   4 DDR4 DIMMs per CPU
-    *   1 DDR4 DIMMs per channel
-*   Populated memory: 8 x 16 GB DDR4 DIMM >2133 MHz
+-   2 sockets
+-   Memory Controllers are integrated into processors.
+    -   8 DDR4 DIMMs per node
+    -   4 DDR4 DIMMs per CPU
+    -   1 DDR4 DIMM per channel
+-   Populated memory: 8 x 16 GB DDR4 DIMM 2133 MHz
 
 ### Compute Node With MIC Accelerator
 
 2 sockets
 Memory Controllers are integrated into processors.
 
-*   8 DDR4 DIMMs per node
-*   4 DDR4 DIMMs per CPU
-*   1 DDR4 DIMMs per channel
+-   8 DDR4 DIMMs per node
+-   4 DDR4 DIMMs per CPU
+-   1 DDR4 DIMM per channel
 
 Populated memory: 8 x 16 GB DDR4 DIMM 2133 MHz
 MIC Accelerator Intel Xeon Phi 7120P Processor
 
-*   2 sockets
-*   Memory Controllers are are connected via an
+-   2 sockets
+-   Memory Controllers are connected via an
     Interprocessor Network (IPN) ring.
-    *   16 GDDR5 DIMMs per node
-    *   8 GDDR5 DIMMs per CPU
-    *   2 GDDR5 DIMMs per channel
+    -   16 GDDR5 DIMMs per node
+    -   8 GDDR5 DIMMs per CPU
+    -   2 GDDR5 DIMMs per channel
diff --git a/docs.it4i/salomon/environment-and-modules.md b/docs.it4i/salomon/environment-and-modules.md
index 19704e02b315c8c7a338be1638060a4fc95d2aa2..c1adc49d2af400553a84bc2df9b4c6de625dee06 100644
--- a/docs.it4i/salomon/environment-and-modules.md
+++ b/docs.it4i/salomon/environment-and-modules.md
@@ -16,7 +16,7 @@ fi
 alias qs='qstat -a'
 module load intel/2015b
 
-# Display information to standard output * only in interactive ssh session
+# Display information to standard output - only in interactive ssh session
 if [ -n "$SSH_TTY" ]
 then
  module list # Display loaded modules
@@ -24,7 +24,7 @@ fi
 ```
 
 !!! Note
-     Do not run commands outputting to standard output (echo, module list, etc) in .bashrc  for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Take care for SSH session interactivity for such commands as stated in the previous example.
+	Do not run commands outputting to standard output (echo, module list, etc) in .bashrc  for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Take care for SSH session interactivity for such commands as stated in the previous example.
 
 ### Application Modules
 
@@ -57,7 +57,7 @@ Application modules on Salomon cluster are built using [EasyBuild](http://hpcuge
 ```
 
 !!! Note
-     The modules set up the application paths, library paths and environment variables for running particular application.
+	The modules set up the application paths, library paths and environment variables for running particular application.
 
 The modules may be loaded, unloaded and switched, according to momentary needs.
 
@@ -107,14 +107,14 @@ The EasyBuild framework prepares the build environment for the different toolcha
 
 Recent releases of EasyBuild include out-of-the-box toolchain support for:
 
-*   various compilers, including GCC, Intel, Clang, CUDA
-*   common MPI libraries, such as Intel MPI, MPICH, MVAPICH2, Open MPI
-*   various numerical libraries, including ATLAS, Intel MKL, OpenBLAS, ScaLAPACK, FFTW
+-   various compilers, including GCC, Intel, Clang, CUDA
+-   common MPI libraries, such as Intel MPI, MPICH, MVAPICH2, Open MPI
+-   various numerical libraries, including ATLAS, Intel MKL, OpenBLAS, ScaLAPACK, FFTW
 
 On Salomon, we have currently following toolchains installed:
 
 | Toolchain | Module(s)                                      |
-| --------* | ---------------------------------------------* |
+| --------- | ---------------------------------------------- |
 | GCC       | GCC                                            |
 | ictce     | icc, ifort, imkl, impi                         |
 | intel     | GCC, icc, ifort, imkl, impi                    |
diff --git a/docs.it4i/salomon/hardware-overview.md b/docs.it4i/salomon/hardware-overview.md
index e1e3c0118da72bf5fc986bebf493a0ac8234ab74..c20234030c70a4a6dc9e8ba36f86b1097437313d 100644
--- a/docs.it4i/salomon/hardware-overview.md
+++ b/docs.it4i/salomon/hardware-overview.md
@@ -13,7 +13,7 @@ The parameters are summarized in the following tables:
 ## General Information
 
 | **In general**                              |                                             |
-| ------------------------------------------* | ------------------------------------------* |
+| ------------------------------------------- | ------------------------------------------- |
 | Primary purpose                             | High Performance Computing                  |
 | Architecture of compute nodes               | x86-64                                      |
 | Operating system                            | CentOS 6.x Linux                            |
@@ -32,8 +32,8 @@ The parameters are summarized in the following tables:
 ## Compute Nodes
 
 | Node            | Count | Processor                         | Cores | Memory | Accelerator                                   |
-| --------------* | ----* | --------------------------------* | ----* | -----* | --------------------------------------------* |
-| w/o accelerator | 576   | 2 x Intel Xeon E5-2680v3, 2.5 GHz | 24    | 128 GB | *                                             |
+| --------------- | ----- | --------------------------------- | ----- | ------ | --------------------------------------------- |
+| w/o accelerator | 576   | 2 x Intel Xeon E5-2680v3, 2.5 GHz | 24    | 128 GB | -                                             |
 | MIC accelerated | 432   | 2 x Intel Xeon E5-2680v3, 2.5 GHz | 24    | 128 GB | 2 x Intel Xeon Phi 7120P, 61 cores, 16 GB RAM |
 
 For more details please refer to the [Compute nodes](compute-nodes/).
@@ -43,7 +43,7 @@ For more details please refer to the [Compute nodes](compute-nodes/).
 For remote visualization, two nodes with NICE DCV software are available, each configured as follows:
 
 | Node          | Count | Processor                         | Cores | Memory | GPU Accelerator               |
-| ------------* | ----* | --------------------------------* | ----* | -----* | ----------------------------* |
+| ------------- | ----- | --------------------------------- | ----- | ------ | ----------------------------- |
 | visualization | 2     | 2 x Intel Xeon E5-2695v3, 2.3 GHz | 28    | 512 GB | NVIDIA QUADRO K5000, 4 GB RAM |
 
 ## SGI UV 2000
@@ -51,7 +51,7 @@ For remote visualization two nodes with NICE DCV software are available each con
 For large memory computations a special SMP/NUMA SGI UV 2000 server is available:
 
 | Node   | Count | Processor                                   | Cores | Memory                | Extra HW                                                                 |
-| -----* | ----* | ------------------------------------------* | ----* | --------------------* | -----------------------------------------------------------------------* |
+| ------ | ----- | ------------------------------------------- | ----- | --------------------- | ------------------------------------------------------------------------ |
 | UV2000 | 1     | 14 x Intel Xeon E5-4627v2, 3.3 GHz, 8 cores | 112   | 3328 GB DDR3@1866 MHz | 2 x 400GB local SSD<br/>1x NVIDIA GM200 (GeForce GTX TITAN X), 12 GB RAM |
 
 ![](../img/uv-2000.jpeg)
diff --git a/docs.it4i/salomon/ib-single-plane-topology.md b/docs.it4i/salomon/ib-single-plane-topology.md
index 196b13dac16b0eb6dd62152b019193d64151ad41..7ba5d80bc336e41ee5823fde72ea5807f0bd40b2 100644
--- a/docs.it4i/salomon/ib-single-plane-topology.md
+++ b/docs.it4i/salomon/ib-single-plane-topology.md
@@ -1,31 +1,31 @@
 # IB single-plane topology
 
-A complete M-Cell assembly consists of four compute racks. Each rack contains 4 x physical IRUs * Independent rack units. Using one dual socket node per one blade slot leads to 8 logical IRUs. Each rack contains 4 x 2 SGI ICE X IB Premium Blades.
+A complete M-Cell assembly consists of four compute racks. Each rack contains 4 x physical IRUs - Independent rack units. Using one dual socket node per one blade slot leads to 8 logical IRUs. Each rack contains 4 x 2 SGI ICE X IB Premium Blades.
 
 The SGI ICE X IB Premium Blade provides the first level of interconnection via dual 36-port Mellanox FDR InfiniBand ASIC switch with connections as follows:
 
-*   9 ports from each switch chip connect to the unified backplane, to connect the 18 compute node slots
-*   3 ports on each chip provide connectivity between the chips
-*   24 ports from each switch chip connect to the external bulkhead, for a total of 48
+-   9 ports from each switch chip connect to the unified backplane, to connect the 18 compute node slots
+-   3 ports on each chip provide connectivity between the chips
+-   24 ports from each switch chip connect to the external bulkhead, for a total of 48
 
-### IB Single-Plane Topology * ICEX M-Cell
+### IB Single-Plane Topology - ICEX M-Cell
 
 Each color in each physical IRU represents one dual-switch ASIC switch.
 
-[IB single-plane topology * ICEX Mcell.pdf](<../src/IB single-plane topology * ICEX Mcell.pdf>)
+[IB single-plane topology - ICEX Mcell.pdf](<../src/IB single-plane topology - ICEX Mcell.pdf>)
 
-![../src/IB single-plane topology * ICEX Mcell.pdf](../img/IBsingleplanetopologyICEXMcellsmall.png)
+![../src/IB single-plane topology - ICEX Mcell.pdf](../img/IBsingleplanetopologyICEXMcellsmall.png)
 
-### IB Single-Plane Topology * Accelerated Nodes
+### IB Single-Plane Topology - Accelerated Nodes
 
 Each of the 3 inter-connected D racks is equivalent to one half of an M-Cell rack. The 18 D racks with MIC accelerated nodes [r21-r38] are equivalent to 3 M-Cell racks, as shown in the diagram [7D Enhanced Hypercube](7d-enhanced-hypercube/).
 
 As shown in a diagram ![IB Topology](../img/Salomon_IB_topology.png)
 
-*   Racks 21, 22, 23, 24, 25, 26 are equivalent to one M-Cell rack.
-*   Racks 27, 28, 29, 30, 31, 32 are equivalent to one M-Cell rack.
-*   Racks 33, 34, 35, 36, 37, 38 are equivalent to one M-Cell rack.
+-   Racks 21, 22, 23, 24, 25, 26 are equivalent to one M-Cell rack.
+-   Racks 27, 28, 29, 30, 31, 32 are equivalent to one M-Cell rack.
+-   Racks 33, 34, 35, 36, 37, 38 are equivalent to one M-Cell rack.
 
-[IB single-plane topology * Accelerated nodes.pdf](<../src/IB single-plane topology * Accelerated nodes.pdf>)
+[IB single-plane topology - Accelerated nodes.pdf](<../src/IB single-plane topology - Accelerated nodes.pdf>)
 
-![../src/IB single-plane topology * Accelerated nodes.pdf](../img/IBsingleplanetopologyAcceleratednodessmall.png)
+![../src/IB single-plane topology - Accelerated nodes.pdf](../img/IBsingleplanetopologyAcceleratednodessmall.png)
diff --git a/docs.it4i/salomon/job-priority.md b/docs.it4i/salomon/job-priority.md
index 5e2fc6868892a283d73969682f7ad4033f4df674..090d6ff31be97364b89139fbce9f6de3b52e0914 100644
--- a/docs.it4i/salomon/job-priority.md
+++ b/docs.it4i/salomon/job-priority.md
@@ -37,7 +37,7 @@ Usage counts allocated core-hours (`ncpus x walltime`). Usage is decayed, or cut
 # Jobs Queued in Queue qexp Are Not Calculated to Project's Usage.
 
 !!! Note
-     Calculated usage and fair-share priority can be seen at <https://extranet.it4i.cz/rsweb/salomon/projects>.
+	Calculated usage and fair-share priority can be seen at <https://extranet.it4i.cz/rsweb/salomon/projects>.
 
 Calculated fair-share priority can also be seen as the Resource_List.fairshare attribute of a job.
 
@@ -66,9 +66,9 @@ The scheduler makes a list of jobs to run in order of execution priority. Schedu
 This means that jobs with lower execution priority can be run before jobs with higher execution priority.
 
 !!! Note
-     It is **very beneficial to specify the walltime** when submitting jobs.
+	It is **very beneficial to specify the walltime** when submitting jobs.
 
-Specifying more accurate walltime enables better scheduling, better execution times and better resource usage. Jobs with suitable (small) walltime could be backfilled * and overtake job(s) with higher priority.
+Specifying a more accurate walltime enables better scheduling, better execution times and better resource usage. Jobs with a suitable (small) walltime may be backfilled - and overtake job(s) with higher priority.
 
 ### Job Placement
 
diff --git a/docs.it4i/salomon/job-submission-and-execution.md b/docs.it4i/salomon/job-submission-and-execution.md
index 8a0ec3b22affa427af3327c90520e5aba5de4e5c..b5d83ad1460c4c1493133640ef6fad314628dc32 100644
--- a/docs.it4i/salomon/job-submission-and-execution.md
+++ b/docs.it4i/salomon/job-submission-and-execution.md
@@ -12,7 +12,7 @@ When allocating computational resources for the job, please specify
 6.  Jobscript or interactive switch
 
 !!! Note
-     Use the **qsub** command to submit your job to a queue for allocation of the computational resources.
+	Use the **qsub** command to submit your job to a queue for allocation of the computational resources.
 
 Submit the job using the qsub command:
 
@@ -23,7 +23,7 @@ $ qsub -A Project_ID -q queue -l select=x:ncpus=y,walltime=[[hh:]mm:]ss[.ms] job
 The qsub command submits the job into the queue; in other words, it creates a request to the PBS Job manager for allocation of the specified resources. The resources will be allocated when available, subject to the above described policies and constraints. **After the resources are allocated, the jobscript or interactive shell is executed on the first of the allocated nodes.**
 
 !!! Note
-     PBS statement nodes (qsub -l nodes=nodespec) is not supported on Salomon cluster.
+	The PBS statement nodes (qsub -l nodes=nodespec) is not supported on the Salomon cluster.
 
 ### Job Submission Examples
 
@@ -72,7 +72,7 @@ In this example, we allocate 4 nodes, with 24 cores per node (totalling 96 cores
 ### UV2000 SMP
 
 !!! Note
-     14 NUMA nodes available on UV2000
+	14 NUMA nodes available on UV2000
     Per NUMA node allocation.
     Jobs are isolated by cpusets.
 
@@ -109,7 +109,7 @@ $ qsub -m n
 ### Placement by Name
 
 !!! Note
+	Not useful for ordinary computing, suitable for node testing/benchmarking and management tasks.
+	Not useful for ordinary computing, suitable for node testing/bechmarking and management tasks.
 
 Specific nodes may be selected using PBS resource attribute host (for hostnames):
 
@@ -127,16 +127,16 @@ In this example, we allocate nodes r24u35n680 and r24u36n681, all 24 cores per n
 
 ### Placement by Network Location
 
-Network location of allocated nodes in the [InifiBand network](network/) influences efficiency of network communication between nodes of job. Nodes on the same InifiBand switch communicate faster with lower latency than distant nodes. To improve communication efficiency of jobs, PBS scheduler on Salomon is configured to allocate nodes * from currently available resources * which are as close as possible in the network topology.
+Network location of the allocated nodes in the [InfiniBand network](network/) influences the efficiency of network communication between the nodes of a job. Nodes on the same InfiniBand switch communicate faster, with lower latency, than distant nodes. To improve the communication efficiency of jobs, the PBS scheduler on Salomon is configured to allocate nodes - from currently available resources - which are as close as possible in the network topology.
 
-For communication intensive jobs it is possible to set stricter requirement * to require nodes directly connected to the same InifiBand switch or to require nodes located in the same dimension group of the InifiBand network.
+For communication intensive jobs it is possible to set a stricter requirement - to require nodes directly connected to the same InfiniBand switch or to require nodes located in the same dimension group of the InfiniBand network.
 
 ### Placement by InfiniBand Switch
 
 Nodes directly connected to the same InfiniBand switch can communicate most efficiently. Using the same switch prevents hops in the network and provides unbiased, maximally efficient network communication. There are 9 nodes directly connected to every InfiniBand switch.
 
 !!! Note
-     We recommend allocating compute nodes of a single switch when the best possible computational network performance is required to run job efficiently.
+	We recommend allocating compute nodes of a single switch when the best possible computational network performance is required to run job efficiently.
 
 Nodes directly connected to one InfiniBand switch can be allocated using node grouping on the PBS resource attribute switch.
 
@@ -149,7 +149,7 @@ $ qsub -A OPEN-0-0 -q qprod -l select=9:ncpus=24 -l place=group=switch ./myjob
 ### Placement by Specific InfiniBand Switch
 
 !!! Note
-     Not useful for ordinary computing, suitable for testing and management tasks.
+	Not useful for ordinary computing, suitable for testing and management tasks.
 
 Nodes directly connected to a specific InfiniBand switch can be selected using the PBS resource attribute _switch_.
 
@@ -192,7 +192,7 @@ r37u34n972
 Nodes located in the same dimension group may be allocated using node grouping on the PBS resource attribute ehc\_[1-7]d.
 
 | Hypercube dimension | node_group_key | #nodes per group |
-| ------------------* | -------------* | ---------------* |
+| ------------------- | -------------- | ---------------- |
 | 1D                  | ehc_1d         | 18               |
 | 2D                  | ehc_2d         | 36               |
 | 3D                  | ehc_3d         | 72               |
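
 For instance, nodes from a single 1D hypercube group might be requested as follows (a sketch modelled on the switch grouping example above, with the group key and node count taken from the table):

 ```bash
 $ qsub -A OPEN-0-0 -q qprod -l select=18:ncpus=24 -l place=group=ehc_1d ./myjob
 ```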
@@ -234,7 +234,7 @@ r1i0n11
 ## Job Management
 
 !!! Note
-     Check status of your jobs using the **qstat** and **check-pbs-jobs** commands
+	Check status of your jobs using the **qstat** and **check-pbs-jobs** commands
 
 ```bash
 $ qstat -a
@@ -251,10 +251,10 @@ $ qstat -a
 srv11:
                                                             Req'd  Req'd   Elap
 Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
---------------* -------* -*  |---|---| -----* --* --* -----* ----* * -----
-16287.isrv5     user1    qlong    job1         6183   4  64    -*  144:0 R 38:25
-16468.isrv5     user1    qlong    job2         8060   4  64    -*  144:0 R 17:44
-16547.isrv5     user2    qprod    job3x       13516   2  32    -*  48:00 R 00:58
+--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
+16287.isrv5     user1    qlong    job1         6183   4  64    --  144:0 R 38:25
+16468.isrv5     user1    qlong    job2         8060   4  64    --  144:0 R 17:44
+16547.isrv5     user2    qprod    job3x       13516   2  32    --  48:00 R 00:58
 ```
 
 In this example user1 and user2 are running jobs named job1, job2 and job3x. The jobs job1 and job2 are using 4 nodes, 16 cores per node each. The job1 already runs for 38 hours and 25 minutes, job2 for 17 hours 44 minutes. The job1 already consumed 64 x 38.41 = 2458.6 core hours. The job3x already consumed 0.96 x 32 = 30.93 core hours. These consumed core hours will be accounted on the respective project accounts, regardless of whether the allocated cores were actually used for computations.
@@ -313,7 +313,7 @@ Run loop 3
 In this example, we see actual output (some iteration loops) of the job 35141.dm2
 
 !!! Note
-     Manage your queued or running jobs, using the **qhold**, **qrls**, **qdel,** **qsig** or **qalter** commands
+	Manage your queued or running jobs, using the **qhold**, **qrls**, **qdel,** **qsig** or **qalter** commands
 
 You may release your allocation at any time, using the qdel command.
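
 For instance (the job ID is illustrative):

 ```bash
 $ qdel 12345.isrv5
 ```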
 
@@ -338,12 +338,12 @@ $ man pbs_professional
 ### Jobscript
 
 !!! Note
-     Prepare the jobscript to run batch jobs in the PBS queue system
+	Prepare the jobscript to run batch jobs in the PBS queue system
 
 The jobscript is a user-made script controlling the sequence of commands for executing the calculation. It is often written in bash, though other scripting languages may be used as well. The jobscript is supplied to the PBS **qsub** command as an argument and executed by the PBS Professional workload manager.
 
 !!! Note
-     The jobscript or interactive shell is executed on first of the allocated nodes.
+	The jobscript or interactive shell is executed on the first of the allocated nodes.
 
 ```bash
 $ qsub -q qexp -l select=4:ncpus=24 -N Name0 ./myjob
@@ -352,15 +352,15 @@ $ qstat -n -u username
 isrv5:
                                                             Req'd  Req'd   Elap
 Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
---------------* -------* -*  |---|---| -----* --* --* -----* ----* * -----
-15209.isrv5     username qexp     Name0        5530   4  96    -*  01:00 R 00:00
+--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
+15209.isrv5     username qexp     Name0        5530   4  96    --  01:00 R 00:00
    r21u01n577/0*24+r21u02n578/0*24+r21u03n579/0*24+r21u04n580/0*24
 ```
 
 In this example, the nodes r21u01n577, r21u02n578, r21u03n579, r21u04n580 were allocated for 1 hour via the qexp queue. The jobscript myjob will be executed on the node r21u01n577, while the nodes r21u02n578, r21u03n579, r21u04n580 are available for use as well.
 
 !!! Note
-     The jobscript or interactive shell is by default executed in home directory
+	The jobscript or interactive shell is by default executed in the home directory
 
 ```bash
 $ qsub -q qexp -l select=4:ncpus=24 -I
@@ -374,7 +374,7 @@ $ pwd
 In this example, 4 nodes were allocated interactively for 1 hour via the qexp queue. The interactive shell is executed in the home directory.
 
 !!! Note
+	All nodes within the allocation may be accessed via ssh. Unallocated nodes are not accessible to the user.
+	All nodes within the allocation may be accessed via ssh.  Unallocated nodes are not accessible to user.
 
 The allocated nodes are accessible via ssh from login nodes. The nodes may access each other via ssh as well.
 
@@ -406,7 +406,7 @@ In this example, the hostname program is executed via pdsh from the interactive
 ### Example Jobscript for MPI Calculation
 
 !!! Note
-     Production jobs must use the /scratch directory for I/O
+	Production jobs must use the /scratch directory for I/O
 
 The recommended way to run production jobs is to change to the /scratch directory early in the jobscript, copy all inputs to /scratch, execute the calculations and copy the outputs to the home directory.
 
@@ -438,12 +438,12 @@ exit
 In this example, a directory on /home holds the input file input and the executable mympiprog.x. We create the directory myjob on the /scratch filesystem, copy the input and executable files from the /home directory where qsub was invoked ($PBS_O_WORKDIR) to /scratch, execute the MPI program mympiprog.x and copy the output file back to the /home directory. The mympiprog.x is executed as one process per node, on all allocated nodes.
 
 !!! Note
-     Consider preloading inputs and executables onto [shared scratch](storage/) before the calculation starts.
+	Consider preloading inputs and executables onto [shared scratch](storage/) before the calculation starts.
 
 In some cases, it may be impractical to copy the inputs to scratch and the outputs to home. This is especially true when very large input and output files are expected, or when the files should be reused by a subsequent calculation. In such a case, it is the user's responsibility to preload the input files on the shared /scratch before job submission and to retrieve the outputs manually after all calculations are finished.
 
 !!! Note
-     Store the qsub options within the jobscript. Use **mpiprocs** and **ompthreads** qsub options to control the MPI job execution.
+	Store the qsub options within the jobscript. Use **mpiprocs** and **ompthreads** qsub options to control the MPI job execution.
 
 ### Example Jobscript for MPI Calculation With Preloaded Inputs
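
 A minimal sketch, assuming the inputs and the mympiprog.x executable were staged into a /scratch work directory before submission (the path and module name are illustrative):

 ```bash
 #!/bin/bash

 # change to the preloaded scratch directory
 SCRDIR=/scratch/work/user/$USER/myjob
 cd $SCRDIR || exit

 # load an MPI module and execute, one process per node
 module load OpenMPI
 mpirun -pernode ./mympiprog.x

 # outputs stay on /scratch; retrieve them manually once all calculations finish
 exit
 ```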
 
@@ -477,7 +477,7 @@ HTML commented section #2 (examples need to be reworked)
 ### Example Jobscript for Single Node Calculation
 
 !!! Note
-     Local scratch directory is often useful for single node jobs. Local scratch will be deleted immediately after the job ends. Be very careful, use of RAM disk filesystem is at the expense of operational memory.
+	Local scratch directory is often useful for single node jobs. Local scratch will be deleted immediately after the job ends. Be very careful, use of RAM disk filesystem is at the expense of operational memory.
 
 Example jobscript for single node calculation, using [local scratch](storage/) on the node:
 
diff --git a/docs.it4i/salomon/network.md b/docs.it4i/salomon/network.md
index 933bc79b27d9408a152d6cc6c86854595b8c4ee4..005e6d6600147d5b7ff344017c377368338d295f 100644
--- a/docs.it4i/salomon/network.md
+++ b/docs.it4i/salomon/network.md
@@ -21,8 +21,8 @@ $ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob
 $ qstat -n -u username
                                                             Req'd  Req'd   Elap
 Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
---------------* -------* -*  |---|---| -----* --* --* -----* ----* * -----
-15209.isrv5     username qexp     Name0        5530   4  96    -*  01:00 R 00:00
+--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
+15209.isrv5     username qexp     Name0        5530   4  96    --  01:00 R 00:00
    r4i1n0/0*24+r4i1n1/0*24+r4i1n2/0*24+r4i1n3/0*24
 ```
 
diff --git a/docs.it4i/salomon/prace.md b/docs.it4i/salomon/prace.md
index c6793030083a49cce63a1b9893cba4653bb07a37..4c0f22f746830d77a1fc1a05ac599ac868f20c52 100644
--- a/docs.it4i/salomon/prace.md
+++ b/docs.it4i/salomon/prace.md
@@ -28,11 +28,11 @@ The user will need a valid certificate and to be present in the PRACE LDAP (plea
 
 Most of the information needed by PRACE users accessing the Salomon TIER-1 system can be found here:
 
-*   [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)
-*   [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)
-*   [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh)
-*   [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details)
-*   [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer)
+-   [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)
+-   [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)
+-   [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh)
+-   [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details)
+-   [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer)
 
 Before you start to use any of the services, don't forget to create a proxy certificate from your certificate:
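
 A sketch using the standard Globus command:

 ```bash
 $ grid-proxy-init
 ```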
 
@@ -53,7 +53,7 @@ To access Salomon cluster, two login nodes running GSI SSH service are available
 It is recommended to use the single DNS name salomon-prace.it4i.cz which is distributed between the four login nodes. If needed, the user can log in directly to one of the login nodes. The addresses are:
 
 | Login address                | Port | Protocol | Login node                       |
-| ---------------------------* | ---* | -------* | -------------------------------* |
+| ---------------------------- | ---- | -------- | -------------------------------- |
 | salomon-prace.it4i.cz        | 2222 | gsissh   | login1, login2, login3 or login4 |
 | login1-prace.salomon.it4i.cz | 2222 | gsissh   | login1                           |
 | login2-prace.salomon.it4i.cz | 2222 | gsissh   | login2                           |
@@ -75,7 +75,7 @@ When logging from other PRACE system, the prace_service script can be used:
 It is recommended to use the single DNS name salomon.it4i.cz which is distributed between the four login nodes. If needed, the user can log in directly to one of the login nodes. The addresses are:
 
 | Login address                | Port | Protocol | Login node                       |
-| ---------------------------* | ---* | -------* | -------------------------------* |
+| ---------------------------- | ---- | -------- | -------------------------------- |
 | salomon.it4i.cz              | 2222 | gsissh   | login1, login2, login3 or login4 |
 | login1.salomon.it4i.cz       | 2222 | gsissh   | login1                           |
 | login2.salomon.it4i.cz       | 2222 | gsissh   | login2                           |
@@ -130,7 +130,7 @@ There's one control server and three backend servers for striping and/or backup
 **Access from PRACE network:**
 
 | Login address                 | Port | Node role                   |
-| ----------------------------* | ---* | --------------------------* |
+| ----------------------------- | ---- | --------------------------- |
 | gridftp-prace.salomon.it4i.cz | 2812 | Front end /control server   |
 | lgw1-prace.salomon.it4i.cz    | 2813 | Backend / data mover server |
 | lgw2-prace.salomon.it4i.cz    | 2813 | Backend / data mover server |
@@ -163,7 +163,7 @@ Or by using  prace_service script:
 **Access from public Internet:**
 
 | Login address           | Port | Node role                   |
-| ----------------------* | ---* | --------------------------* |
+| ----------------------- | ---- | --------------------------- |
 | gridftp.salomon.it4i.cz | 2812 | Front end /control server   |
 | lgw1.salomon.it4i.cz    | 2813 | Backend / data mover server |
 | lgw2.salomon.it4i.cz    | 2813 | Backend / data mover server |
@@ -196,7 +196,7 @@ Or by using  prace_service script:
 Generally both shared file systems are available through GridFTP:
 
 | File system mount point | Filesystem | Comment                                                        |
-| ----------------------* | ---------* | -------------------------------------------------------------* |
+| ----------------------- | ---------- | -------------------------------------------------------------- |
 | /home                   | Lustre     | Default HOME directories of users in format /home/prace/login/ |
 | /scratch                | Lustre     | Shared SCRATCH mounted on the whole cluster                    |
 
@@ -206,7 +206,7 @@ More information about the shared file systems is available [here](storage/).
     `prace` directory is used for PRACE users on the SCRATCH file system.
 
 | Data type                    | Default path                    |
-| ---------------------------* | ------------------------------* |
+| ---------------------------- | ------------------------------- |
 | large project files          | /scratch/work/user/prace/login/ |
 | large scratch/temporary data | /scratch/temp/                  |
 
@@ -233,7 +233,7 @@ General information about the resource allocation, job queuing and job execution
 For PRACE users, the default production run queue is "qprace". PRACE users can also use two other queues "qexp" and "qfree".
 
 | queue                         | Active project | Project resources | Nodes                      | priority | authorization | walltime  |
-| ----------------------------* | -------------* | ----------------* | -------------------------* | -------* | ------------* | --------* |
+| ----------------------------- | -------------- | ----------------- | -------------------------- | -------- | ------------- | --------- |
 | **qexp** Express queue        | no             | none required     | 32 nodes, max 8 per user   | 150      | no            | 1 / 1 h   |
 | **qprace** Production queue   | yes            | >0                | 1006 nodes, max 86 per job | 0        | no            | 24 / 48 h |
 | **qfree** Free resource queue | yes            | none required     | 752 nodes, max 86 per job  | -1024    | no            | 12 / 12 h |
@@ -249,13 +249,13 @@ PRACE users should check their project accounting using the [PRACE Accounting To
 Users who have undergone the full local registration procedure (including signing the IT4Innovations Acceptable Use Policy) and who have received a local password may check at any time how many core-hours they and their projects have consumed, using the command "it4ifree". You need to know your user password to use the command. Note that the displayed core-hours are "system core-hours", which differ from PRACE "standardized core-hours".
 
 !!! Note
-     The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients>
+	The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients>
 
 ```bash
     $ it4ifree
     Password:
          PID    Total   Used   ...by me Free
-       -------* ------* -----* -------* -------
+       -------- ------- ------ -------- -------
        OPEN-0-0 1500000 400644   225265 1099356
        DD-13-1    10000   2606     2606    7394
 ```
diff --git a/docs.it4i/salomon/resource-allocation-and-job-execution.md b/docs.it4i/salomon/resource-allocation-and-job-execution.md
index 4e12656d1a35674c440a6fddeffc95c0afe1887c..489e9de698de0eec0b036684254d1269448d1c75 100644
--- a/docs.it4i/salomon/resource-allocation-and-job-execution.md
+++ b/docs.it4i/salomon/resource-allocation-and-job-execution.md
@@ -6,22 +6,22 @@ To run a [job](job-submission-and-execution/), [computational resources](resourc
 
 The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. [The Fair-share](job-priority/) at Salomon ensures that individual users may consume approximately equal amounts of resources per week. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following queues are available to Salomon users:
 
-*   **qexp**, the Express queue
-*   **qprod**, the Production queue
-*   **qlong**, the Long queue
-*   **qmpp**, the Massively parallel queue
-*   **qfat**, the queue to access SMP UV2000 machine
-*   **qfree**, the Free resource utilization queue
+-   **qexp**, the Express queue
+-   **qprod**, the Production queue
+-   **qlong**, the Long queue
+-   **qmpp**, the Massively parallel queue
+-   **qfat**, the queue to access SMP UV2000 machine
+-   **qfree**, the Free resource utilization queue
 
 !!! Note
-     Check the queue status at <https://extranet.it4i.cz/rsweb/salomon/>
+	Check the queue status at <https://extranet.it4i.cz/rsweb/salomon/>
 
 Read more on the [Resource Allocation Policy](resources-allocation-policy/) page.
 
 ## Job Submission and Execution
 
 !!! Note
-     Use the **qsub** command to submit your jobs.
+	Use the **qsub** command to submit your jobs.
 
 The qsub command submits the job into the queue and creates a request to the PBS Job manager for allocation of the specified resources. The **smallest allocation unit is an entire node, 24 cores**, with the exception of the qexp queue. The resources will be allocated when available, subject to allocation policies and constraints. **After the resources are allocated, the jobscript or interactive shell is executed on the first of the allocated nodes.**
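
 For illustration, a two node production job might be submitted like this (the project ID and walltime are placeholders):

 ```bash
 $ qsub -A OPEN-0-0 -q qprod -l select=2:ncpus=24,walltime=03:00:00 ./myjob
 ```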
 
diff --git a/docs.it4i/salomon/resources-allocation-policy.md b/docs.it4i/salomon/resources-allocation-policy.md
index 019655c8d4eaae28b964dcfd621c6ee4edce8daa..5d97c4bdf47074f50a074d08c46cf1db620656fb 100644
--- a/docs.it4i/salomon/resources-allocation-policy.md
+++ b/docs.it4i/salomon/resources-allocation-policy.md
@@ -5,10 +5,10 @@
 The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. The fair-share at Salomon ensures that individual users may consume approximately equal amounts of resources per week. Detailed information can be found in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:
 
 !!! Note
-     Check the queue status at <https://extranet.it4i.cz/rsweb/salomon/>
+	Check the queue status at <https://extranet.it4i.cz/rsweb/salomon/>
 
 | queue                           | active project | project resources | nodes                                                         | min ncpus | priority | authorization | walltime  |
-| ------------------------------* | -------------* | ----------------* | ------------------------------------------------------------* | --------* | -------* | ------------* | --------* |
+| ------------------------------- | -------------- | ----------------- | ------------------------------------------------------------- | --------- | -------- | ------------- | --------- |
 | **qexp** Express queue          | no             | none required     | 32 nodes, max 8 per user                                      | 24        | 150      | no            | 1 / 1h    |
 | **qprod** Production queue      | yes            | > 0               | 1006 nodes, max 86 per job                                    | 24        | 0        | no            | 24 / 48h  |
 | **qlong** Long queue            | yes            | > 0               | 256 nodes, max 40 per job, only non-accelerated nodes allowed | 24        | 0        | no            | 72 / 144h |
@@ -18,18 +18,18 @@ The resources are allocated to the job in a fair-share fashion, subject to const
 | **qviz** Visualization queue    | yes            | none required     | 2 (with NVIDIA Quadro K5000)                                  | 4         | 150      | no            | 1 / 8h    |
 
 !!! Note
-     **The qfree queue is not free of charge**. [Normal accounting](resources-allocation-policy/#resources-accounting-policy) applies. However, it allows for utilization of free resources, once a Project exhausted all its allocated computational resources. This does not apply for Directors Discreation's projects (DD projects) by default. Usage of qfree after exhaustion of DD projects computational resources is allowed after request for this queue.
+	**The qfree queue is not free of charge**. [Normal accounting](resources-allocation-policy/#resources-accounting-policy) applies. However, it allows for utilization of free resources, once a Project has exhausted all its allocated computational resources. This does not apply to Director's Discretion projects (DD projects) by default. Usage of qfree after exhaustion of a DD project's computational resources is allowed upon request for this queue.
 
-*   **qexp**, the Express queue: This queue is dedicated for testing and running very small jobs. It is not required to specify a project to enter the qexp. There are 2 nodes always reserved for this queue (w/o accelerator), maximum 8 nodes are available via the qexp for a particular user. The nodes may be allocated on per core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
-*   **qprod**, the Production queue: This queue is intended for normal production runs. It is required that active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, however only 86 per job. Full nodes, 24 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
-*   **qlong**, the Long queue: This queue is intended for long production runs. It is required that active project with nonzero remaining resources is specified to enter the qlong. Only 336 nodes without acceleration may be accessed via the qlong queue. Full nodes, 24 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times of the standard qprod time * 3 \* 48 h)
-*   **qmpp**, the massively parallel queue. This queue is intended for massively parallel runs. It is required that active project with nonzero remaining resources is specified to enter the qmpp. All nodes may be accessed via the qmpp queue. Full nodes, 24 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it.  The maximum runtime in qmpp is 4 hours. An PI needs explicitly ask support for authorization to enter the queue for all users associated to her/his Project.
-*   **qfat**, the UV2000 queue. This queue is dedicated to access the fat SGI UV2000 SMP machine. The machine (uv1) has 112 Intel IvyBridge cores at 3.3GHz and 3.25TB RAM. An PI needs explicitly ask support for authorization to enter the queue for all users associated to her/his Project.
-*   **qfree**, the Free resource queue: The queue qfree is intended for utilization of free resources, after a Project exhausted all its allocated computational resources (Does not apply to DD projects by default. DD projects have to request for persmission on qfree after exhaustion of computational resources.). It is required that active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 24 cores per node are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
-*   **qviz**, the Visualization queue: Intended for pre-/post-processing using OpenGL accelerated graphics. Currently when accessing the node, each user gets 4 cores of a CPU allocated, thus approximately 73 GB of RAM and 1/7 of the GPU capacity (default "chunk"). If more GPU power or RAM is required, it is recommended to allocate more chunks (with 4 cores each) up to one whole node per user, so that all 28 cores, 512 GB RAM and whole GPU is exclusive. This is currently also the maximum allowed allocation per one user. One hour of work is allocated by default, the user may ask for 2 hours maximum.
+-   **qexp**, the Express queue: This queue is dedicated to testing and running very small jobs. It is not required to specify a project to enter the qexp. There are always 2 nodes reserved for this queue (w/o accelerator), and a maximum of 8 nodes are available via the qexp for a particular user. The nodes may be allocated on a per core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
+-   **qprod**, the Production queue: This queue is intended for normal production runs. It is required that an active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, however only 86 per job. Full nodes, 24 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
+-   **qlong**, the Long queue: This queue is intended for long production runs. It is required that an active project with nonzero remaining resources is specified to enter the qlong. Only 336 nodes without acceleration may be accessed via the qlong queue. Full nodes, 24 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times the standard qprod time - 3 \* 48 h).
+-   **qmpp**, the massively parallel queue. This queue is intended for massively parallel runs. It is required that an active project with nonzero remaining resources is specified to enter the qmpp. All nodes may be accessed via the qmpp queue. Full nodes, 24 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qmpp is 4 hours. A PI needs to explicitly ask support for authorization to enter the queue for all users associated with his/her Project.
+-   **qfat**, the UV2000 queue. This queue is dedicated to accessing the fat SGI UV2000 SMP machine. The machine (uv1) has 112 Intel IvyBridge cores at 3.3 GHz and 3.25 TB RAM. A PI needs to explicitly ask support for authorization to enter the queue for all users associated with his/her Project.
+-   **qfree**, the Free resource queue: The queue qfree is intended for utilization of free resources, after a Project has exhausted all its allocated computational resources (does not apply to DD projects by default; DD projects have to request permission to use qfree after exhaustion of computational resources). It is required that an active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 24 cores per node, are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
+-   **qviz**, the Visualization queue: Intended for pre-/post-processing using OpenGL accelerated graphics. Currently, when accessing the node, each user gets 4 cores of a CPU allocated, thus approximately 73 GB of RAM and 1/7 of the GPU capacity (default "chunk"). If more GPU power or RAM is required, it is recommended to allocate more chunks (with 4 cores each), up to one whole node per user, so that all 28 cores, 512 GB RAM and the whole GPU are exclusive. This is currently also the maximum allowed allocation per one user. One hour of work is allocated by default; the user may ask for up to 2 hours.
 
 !!! Note
-     To access node with Xeon Phi co-processor user needs to specify that in [job submission select statement](job-submission-and-execution/).
+	To access a node with the Xeon Phi co-processor, the user needs to specify this in the [job submission select statement](job-submission-and-execution/).
 
 ### Notes
 
@@ -42,7 +42,7 @@ Salomon users may check current queue configuration at <https://extranet.it4i.cz
 ### Queue Status
 
 !!! Note
-     Check the status of jobs, queues and compute nodes at [https://extranet.it4i.cz/rsweb/salomon/](https://extranet.it4i.cz/rsweb/salomon)
+	Check the status of jobs, queues and compute nodes at [https://extranet.it4i.cz/rsweb/salomon/](https://extranet.it4i.cz/rsweb/salomon)
 
 ![RSWEB Salomon](../img/rswebsalomon.png "RSWEB Salomon")
 
@@ -120,7 +120,7 @@ The resources that are currently subject to accounting are the core-hours. The c
 ### Check Consumed Resources
 
 !!! Note
-     The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients>
+	The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients>
 
 Users may check at any time how many core-hours they and their projects have consumed. The command is available on the clusters' login nodes.
 
@@ -128,7 +128,7 @@ User may check at any time, how many core-hours have been consumed by himself/he
 $ it4ifree
 Password:
      PID    Total   Used   ...by me Free
-   -------* ------* -----* -------* -------
+   -------- ------- ------ -------- -------
    OPEN-0-0 1500000 400644   225265 1099356
    DD-13-1    10000   2606     2606    7394
 ```
diff --git a/docs.it4i/salomon/shell-and-data-access.md b/docs.it4i/salomon/shell-and-data-access.md
index 7bc90a965e8a95a324f5be96d2e749bc9d9f6a2e..4b01e65ddc3c048291cee2007db2d3836e2f055b 100644
--- a/docs.it4i/salomon/shell-and-data-access.md
+++ b/docs.it4i/salomon/shell-and-data-access.md
@@ -5,10 +5,10 @@
 The Salomon cluster is accessed by SSH protocol via login nodes login1, login2, login3 and login4 at address salomon.it4i.cz. The login nodes may be addressed specifically, by prepending the login node name to the address.
 
 !!! Note
-     The alias salomon.it4i.cz is currently not available through VPN connection. Please use loginX.salomon.it4i.cz when connected to VPN.
+	The alias salomon.it4i.cz is currently not available through a VPN connection. Please use loginX.salomon.it4i.cz when connected to the VPN.
 
 | Login address          | Port | Protocol | Login node                            |
-| ---------------------* | ---* | -------* | ------------------------------------* |
+| ---------------------- | ---- | -------- | ------------------------------------- |
 | salomon.it4i.cz        | 22   | ssh      | round-robin DNS record for login[1-4] |
 | login1.salomon.it4i.cz | 22   | ssh      | login1                                |
@@ -18,9 +18,9 @@ The Salomon cluster is accessed by SSH protocol via login nodes login1, login2,
 The authentication is by the [private key](../get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/)
 
 !!! Note
-     Please verify SSH fingerprints during the first logon. They are identical on all login nodes:
-     f6:28:98:e4:f9:b2:a6:8f:f2:f4:2d:0a:09:67:69:80 (DSA)
-     70:01:c9:9a:5d:88:91:c7:1b:c0:84:d1:fa:4e:83:5c (RSA)
+	Please verify SSH fingerprints during the first logon. They are identical on all login nodes:
+	f6:28:98:e4:f9:b2:a6:8f:f2:f4:2d:0a:09:67:69:80 (DSA)
+	70:01:c9:9a:5d:88:91:c7:1b:c0:84:d1:fa:4e:83:5c (RSA)
 
 Private key authentication:
 
@@ -57,14 +57,14 @@ Last login: Tue Jul  9 15:57:38 2013 from your-host.example.com
 ```
 
 !!! Note
-     The environment is **not** shared between login nodes, except for [shared filesystems](storage/).
+	The environment is **not** shared between login nodes, except for [shared filesystems](storage/).
 
 ## Data Transfer
 
 Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp protocols.
 
 | Address                | Port | Protocol  |
-| ---------------------* | ---* | --------* |
+| ---------------------- | ---- | --------- |
 | salomon.it4i.cz        | 22   | scp, sftp |
 | login1.salomon.it4i.cz | 22   | scp, sftp |
 | login2.salomon.it4i.cz | 22   | scp, sftp |
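
 For example, a local file may be copied to the cluster home directory like this (the file name is illustrative):

 ```bash
 $ scp ./mydata.tar.gz username@salomon.it4i.cz:~/
 ```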
@@ -114,14 +114,14 @@ More information about the shared file systems is available [here](storage/).
 Outgoing connections from Salomon Cluster login nodes to the outside world are restricted to the following ports:
 
 | Port | Protocol |
-| ---* | -------* |
+| ---- | -------- |
 | 22   | ssh      |
 | 80   | http     |
 | 443  | https    |
 | 9418 | git      |
 
 !!! Note
-     Please use **ssh port forwarding** and proxy servers to connect from Salomon to all other remote ports.
+	Please use **ssh port forwarding** and proxy servers to connect from Salomon to all other remote ports.
 
 Outgoing connections from Salomon Cluster compute nodes are restricted to the internal network. Direct connections from compute nodes to the outside world are cut.
 
@@ -130,7 +130,7 @@ Outgoing connections, from Salomon Cluster compute nodes are restricted to the i
 ### Port Forwarding From Login Nodes
 
 !!! Note
-     Port forwarding allows an application running on Salomon to connect to arbitrary remote host and port.
+	Port forwarding allows an application running on Salomon to connect to an arbitrary remote host and port.
 
 It works by tunneling the connection from Salomon back to the user's workstation and forwarding from the workstation to the remote host.
 
@@ -171,7 +171,7 @@ In this example, we assume that port forwarding from login1:6000 to remote.host.
 Port forwarding is static: each single port is mapped to a particular port on the remote host. Connecting to another remote host requires a new forward.
 
 !!! Note
-     Applications with inbuilt proxy support, experience unlimited access to remote hosts, via single proxy server.
+	Applications with built-in proxy support have unlimited access to remote hosts via a single proxy server.
 
 To establish a local proxy server on your workstation, install and run SOCKS proxy server software. On Linux, the sshd daemon provides this functionality. To establish a SOCKS proxy server listening on port 1080, run:
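
 A minimal sketch of the standard invocation, binding the SOCKS proxy to port 1080 via the local sshd:

 ```bash
 $ ssh -D 1080 localhost
 ```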
 
@@ -191,9 +191,9 @@ Now, configure the applications proxy settings to **localhost:6000**. Use port f
 
 ## Graphical User Interface
 
-*   The [X Window system](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) is a principal way to get GUI access to the clusters.
-*   The [Virtual Network Computing](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer).
+-   The [X Window system](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) is a principal way to get GUI access to the clusters.
+-   The [Virtual Network Computing](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer).
 
 ## VPN Access
 
-*   Access to IT4Innovations internal resources via [VPN](../get-started-with-it4innovations/accessing-the-clusters/vpn-access/).
+-   Access to IT4Innovations internal resources via [VPN](../get-started-with-it4innovations/accessing-the-clusters/vpn-access/).
diff --git a/docs.it4i/salomon/software/ansys/ansys-fluent.md b/docs.it4i/salomon/software/ansys/ansys-fluent.md
index b45d9cbe07ca506edc929c4e2bf4f4552a91bfd5..aefcfbf77da974a6ec14110ef571e7dcf1f8ba36 100644
--- a/docs.it4i/salomon/software/ansys/ansys-fluent.md
+++ b/docs.it4i/salomon/software/ansys/ansys-fluent.md
@@ -129,7 +129,7 @@ To run ANSYS Fluent in batch mode with user's config file you can utilize/modify
  #Default arguments for all jobs
  fluent_args="-ssh -g -i $input $fluent_args"
 
- echo "---------* Going to start a fluent job with the following settings:
+ echo "---------- Going to start a fluent job with the following settings:
  Input: $input
  Case: $case
  Output: $outfile
diff --git a/docs.it4i/salomon/software/ansys/ansys-mechanical-apdl.md b/docs.it4i/salomon/software/ansys/ansys-mechanical-apdl.md
index 7f9fe2db8faeafdb01a2a0837e0d640b2f4defdd..70448b8903b0ab067691d3d7d4442f141b7a084e 100644
--- a/docs.it4i/salomon/software/ansys/ansys-mechanical-apdl.md
+++ b/docs.it4i/salomon/software/ansys/ansys-mechanical-apdl.md
@@ -1,7 +1,7 @@
 # ANSYS MAPDL
 
 **[ANSYS Multiphysics](http://www.ansys.com/products/multiphysics)**
-software offers a comprehensive product solution for both multiphysics and single-physics analysis. The product includes structural, thermal, fluid and both high* and low-frequency electromagnetic analysis. The product also contains solutions for both direct and sequentially coupled physics problems including direct coupled-field elements and the ANSYS multi-field solver.
+software offers a comprehensive product solution for both multiphysics and single-physics analysis. The product includes structural, thermal, fluid and both high- and low-frequency electromagnetic analysis. The product also contains solutions for both direct and sequentially coupled physics problems including direct coupled-field elements and the ANSYS multi-field solver.
 
 To run ANSYS MAPDL in batch mode you can utilize/modify the default mapdl.pbs script and execute it via the qsub command.
 
diff --git a/docs.it4i/salomon/software/ansys/licensing.md b/docs.it4i/salomon/software/ansys/licensing.md
index 5642f34af84c07c1f366f3b3bf002e105908b4b1..ba4405f1a2ca525a338f2aadc09fc893a6ff1958 100644
--- a/docs.it4i/salomon/software/ansys/licensing.md
+++ b/docs.it4i/salomon/software/ansys/licensing.md
@@ -2,9 +2,9 @@
 
 ## ANSYS Licence Can Be Used By:
 
-*   all persons in the carrying out of the CE IT4Innovations Project (In addition to the primary licensee, which is VSB * Technical University of Ostrava, users are CE IT4Innovations third parties * CE IT4Innovations project partners, particularly the University of Ostrava, the Brno University of Technology * Faculty of Informatics, the Silesian University in Opava, Institute of Geonics AS CR.)
-*   all persons who have a valid license
-*   students of the Technical University
+-   all persons involved in carrying out the CE IT4Innovations Project (in addition to the primary licensee, which is VSB - Technical University of Ostrava, users are CE IT4Innovations third parties - CE IT4Innovations project partners, particularly the University of Ostrava, the Brno University of Technology - Faculty of Informatics, the Silesian University in Opava, and the Institute of Geonics AS CR).
+-   all persons who have a valid license
+-   students of the Technical University
 
 ## ANSYS Academic Research
 
@@ -16,8 +16,8 @@ The licence intended to be used for science and research, publications, students
 
 ## Available Versions
 
-*   16.1
-*   17.0
+-   16.1
+-   17.0
 
 ## License Preferences
 
diff --git a/docs.it4i/salomon/software/ansys/workbench.md b/docs.it4i/salomon/software/ansys/workbench.md
index 2308df2c332420ac254467602a9a4a3d605c480d..8ed07d789dea69798e68c177ac1612a3e391ec88 100644
--- a/docs.it4i/salomon/software/ansys/workbench.md
+++ b/docs.it4i/salomon/software/ansys/workbench.md
@@ -2,7 +2,7 @@
 
 ## Workbench Batch Mode
 
-It is possible to run Workbench scripts in batch mode. You need to configure solvers of individual components to run in parallel mode. Open your project in Workbench. Then, for example, in Mechanical, go to Tools * Solve Process Settings ...
+It is possible to run Workbench scripts in batch mode. You need to configure solvers of individual components to run in parallel mode. Open your project in Workbench. Then, for example, in Mechanical, go to Tools - Solve Process Settings ...
 
 ![](../../../img/AMsetPar1.png)
 
diff --git a/docs.it4i/salomon/software/chemistry/molpro.md b/docs.it4i/salomon/software/chemistry/molpro.md
index 1c88f44e2255c7b4e057aea27c1ae911a6254bb5..ca9258766d6ab0e51c31ef50b02717c976ae4cc7 100644
--- a/docs.it4i/salomon/software/chemistry/molpro.md
+++ b/docs.it4i/salomon/software/chemistry/molpro.md
@@ -19,7 +19,7 @@ Currently on Anselm is installed version 2010.1, patch level 45, parallel versio
 Compilation parameters are default:
 
 | Parameter                          | Value        |
-| ---------------------------------* | -----------* |
+| ---------------------------------- | ------------ |
 | max number of atoms                | 200          |
 | max number of valence orbitals     | 300          |
 | max number of basis functions      | 4095         |
@@ -33,7 +33,7 @@ Compilation parameters are default:
 Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details.
 
 !!! Note
-     The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend to use MPI parallelization only. This can be achieved by passing option  mpiprocs=16:ompthreads=1 to PBS.
+	The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend using MPI parallelization only. This can be achieved by passing the option mpiprocs=16:ompthreads=1 to PBS.
 
 You are advised to use the -d option to point to a directory in the [SCRATCH filesystem](../../storage/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these data are placed in the fast scratch filesystem.
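+
+As a sketch (the scratch path and input file name are illustrative), an MPI-only run with redirected temporary data might look like:
+
+```bash
+# 16 MPI processes, temporary data on the SCRATCH filesystem
+$ molpro -n 16 -d /scratch/work/user/$USER input.inp
+```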
 
diff --git a/docs.it4i/salomon/software/chemistry/nwchem.md b/docs.it4i/salomon/software/chemistry/nwchem.md
index be4e95f060601b302b8a4a9a677672256001ba5e..5ed6e3ccf9041476adaf12a722b91076950b7967 100644
--- a/docs.it4i/salomon/software/chemistry/nwchem.md
+++ b/docs.it4i/salomon/software/chemistry/nwchem.md
@@ -12,8 +12,8 @@ NWChem aims to provide its users with computational chemistry tools that are sca
 
 The following versions are currently installed:
 
-*   NWChem/6.3.revision2-2013-10-17-Python-2.7.8, current release. Compiled with Intel compilers, MKL and Intel MPI
-*   NWChem/6.5.revision26243-intel-2015b-2014-09-10-Python-2.7.8
+-   NWChem/6.3.revision2-2013-10-17-Python-2.7.8, current release. Compiled with Intel compilers, MKL and Intel MPI
+-   NWChem/6.5.revision26243-intel-2015b-2014-09-10-Python-2.7.8
 
 For a current list of installed versions, execute:
 
@@ -41,5 +41,5 @@ The recommend to use version 6.5. Version 6.3 fails on Salomon nodes with accele
 
 Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and in the input file set the following directives:
 
-*   MEMORY : controls the amount of memory NWChem will use
-*   SCRATCH_DIR : set this to a directory in [SCRATCH filesystem](../../storage/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, eg. "scf direct"
+-   MEMORY : controls the amount of memory NWChem will use
+-   SCRATCH_DIR : set this to a directory in the [SCRATCH filesystem](../../storage/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct"
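+
+A minimal input fragment using these directives might look as follows (the memory size and scratch path are illustrative only):
+
+```bash
+memory 1000 mb
+scratch_dir /scratch/work/user/$USER/nwchem
+scf
+  direct
+end
+```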
diff --git a/docs.it4i/salomon/software/chemistry/phono3py.md b/docs.it4i/salomon/software/chemistry/phono3py.md
index 3c017a38ee62abc2c4356815f91ec6eeea20ba66..d680731167baaa9ccd70d6cafa63a781101f2681 100644
--- a/docs.it4i/salomon/software/chemistry/phono3py.md
+++ b/docs.it4i/salomon/software/chemistry/phono3py.md
@@ -5,7 +5,7 @@
 This GPL software calculates phonon-phonon interactions via the third order force constants. It allows one to obtain the lattice thermal conductivity, phonon lifetime/linewidth, imaginary part of self energy at the lowest order, joint density of states (JDOS) and weighted-JDOS. For details see Phys. Rev. B 91, 094306 (2015) and <http://atztogo.github.io/phono3py/index.html>
 
 !!! Note
-     Load the phono3py/0.9.14-ictce-7.3.5-Python-2.7.9 module
+	Load the phono3py/0.9.14-ictce-7.3.5-Python-2.7.9 module
 
 ```bash
 $ module load phono3py/0.9.14-ictce-7.3.5-Python-2.7.9
@@ -109,41 +109,41 @@ $ phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" --sigma 0.1 --wgp
 $ grep grid_point ir_grid_points.yaml
 num_reduced_ir_grid_points: 35
 ir_grid_points:  # [address, weight]
-* grid_point: 0
-* grid_point: 1
-* grid_point: 2
-* grid_point: 3
-* grid_point: 4
-* grid_point: 10
-* grid_point: 11
-* grid_point: 12
-* grid_point: 13
-* grid_point: 20
-* grid_point: 21
-* grid_point: 22
-* grid_point: 30
-* grid_point: 31
-* grid_point: 40
-* grid_point: 91
-* grid_point: 92
-* grid_point: 93
-* grid_point: 94
-* grid_point: 101
-* grid_point: 102
-* grid_point: 103
-* grid_point: 111
-* grid_point: 112
-* grid_point: 121
-* grid_point: 182
-* grid_point: 183
-* grid_point: 184
-* grid_point: 192
-* grid_point: 193
-* grid_point: 202
-* grid_point: 273
-* grid_point: 274
-* grid_point: 283
-* grid_point: 364
+- grid_point: 0
+- grid_point: 1
+- grid_point: 2
+- grid_point: 3
+- grid_point: 4
+- grid_point: 10
+- grid_point: 11
+- grid_point: 12
+- grid_point: 13
+- grid_point: 20
+- grid_point: 21
+- grid_point: 22
+- grid_point: 30
+- grid_point: 31
+- grid_point: 40
+- grid_point: 91
+- grid_point: 92
+- grid_point: 93
+- grid_point: 94
+- grid_point: 101
+- grid_point: 102
+- grid_point: 103
+- grid_point: 111
+- grid_point: 112
+- grid_point: 121
+- grid_point: 182
+- grid_point: 183
+- grid_point: 184
+- grid_point: 192
+- grid_point: 193
+- grid_point: 202
+- grid_point: 273
+- grid_point: 274
+- grid_point: 283
+- grid_point: 364
 ```
 
 one finds which grid points need to be calculated, for instance using the following
diff --git a/docs.it4i/salomon/software/compilers.md b/docs.it4i/salomon/software/compilers.md
index aa5303b40232c8b4657ff33a95f3fe20cd20bf97..d493d62f006a7bf81af8aff93f3637471dff8be9 100644
--- a/docs.it4i/salomon/software/compilers.md
+++ b/docs.it4i/salomon/software/compilers.md
@@ -4,22 +4,22 @@ Available compilers, including GNU, INTEL and UPC compilers
 
 There are several compilers for different programming languages available on the cluster:
 
-*   C/C++
-*   Fortran 77/90/95/HPF
-*   Unified Parallel C
-*   Java
+-   C/C++
+-   Fortran 77/90/95/HPF
+-   Unified Parallel C
+-   Java
 
 The C/C++ and Fortran compilers are provided by:
 
 Opensource:
 
-*   GNU GCC
-*   Clang/LLVM
+-   GNU GCC
+-   Clang/LLVM
 
 Commercial licenses:
 
-*   Intel
-*   PGI
+-   Intel
+-   PGI
 
 ## Intel Compilers
 
@@ -81,8 +81,8 @@ For more information about the possibilities of the compilers, please see the ma
 
 UPC is supported by two compiler/runtime implementations:
 
-*   GNU * SMP/multi-threading support only
-*   Berkley * multi-node support as well as SMP/multi-threading support
+-   GNU - SMP/multi-threading support only
+-   Berkeley - multi-node support as well as SMP/multi-threading support
 
 ### GNU UPC Compiler
 
@@ -99,7 +99,7 @@ Simple program to test the compiler
 ```bash
     $ cat count.upc
 
-    /* hello.upc * a simple UPC example */
+    /* hello.upc - a simple UPC example */
     #include <upc.h>
     #include <stdio.h>
 
@@ -108,7 +108,7 @@ Simple program to test the compiler
         printf("Welcome to GNU UPC!!!n");
       }
       upc_barrier;
-      printf(" * Hello from thread %in", MYTHREAD);
+      printf(" - Hello from thread %in", MYTHREAD);
       return 0;
     }
 ```
@@ -148,7 +148,7 @@ Example UPC code:
 ```bash
     $ cat hello.upc
 
-    /* hello.upc * a simple UPC example */
+    /* hello.upc - a simple UPC example */
     #include <upc.h>
     #include <stdio.h>
 
@@ -157,7 +157,7 @@ Example UPC code:
         printf("Welcome to Berkeley UPC!!!n");
       }
       upc_barrier;
-      printf(" * Hello from thread %in", MYTHREAD);
+      printf(" - Hello from thread %in", MYTHREAD);
       return 0;
     }
 ```
diff --git a/docs.it4i/salomon/software/comsol/comsol-multiphysics.md b/docs.it4i/salomon/software/comsol/comsol-multiphysics.md
index fd40c1e4aefe6acfc79aff06425ebf5ee7594fe5..d3c84a193a723d9042ba788ef687cde5290992be 100644
--- a/docs.it4i/salomon/software/comsol/comsol-multiphysics.md
+++ b/docs.it4i/salomon/software/comsol/comsol-multiphysics.md
@@ -4,11 +4,11 @@
 
 [COMSOL](http://www.comsol.com) is a powerful environment for modelling and solving various engineering and scientific problems based on partial differential equations. COMSOL is designed to solve coupled or multiphysics phenomena. For many standard engineering problems COMSOL provides add-on products such as electrical, mechanical, fluid flow, and chemical applications.
 
-*   [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
-*   [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
-*   [CFD Module](http://www.comsol.com/cfd-module),
-*   [Acoustics Module](http://www.comsol.com/acoustics-module),
-*   and [many others](http://www.comsol.com/products)
+-   [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
+-   [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
+-   [CFD Module](http://www.comsol.com/cfd-module),
+-   [Acoustics Module](http://www.comsol.com/acoustics-module),
+-   and [many others](http://www.comsol.com/products)
 
 COMSOL also provides interface support for equation-based modelling of partial differential equations.
 
@@ -16,9 +16,9 @@ COMSOL also allows an interface support for equation-based modelling of partial
 
 On the clusters COMSOL is available in the latest stable version. There are two variants of the release:
 
-*   **Non commercial** or so called >**EDU variant**>, which can be used for research and educational purposes.
+-   **Non commercial** or so-called **EDU variant**, which can be used for research and educational purposes.
 
-*   **Commercial** or so called **COM variant**, which can used also for commercial activities. **COM variant** has only subset of features compared to the **EDU  variant** available. More about licensing will be posted here soon.
+-   **Commercial** or so-called **COM variant**, which can also be used for commercial activities. The **COM variant** has only a subset of the features available in the **EDU variant**. More about licensing will be posted here soon.
 
 To load COMSOL, load the module
 
diff --git a/docs.it4i/salomon/software/comsol/licensing-and-available-versions.md b/docs.it4i/salomon/software/comsol/licensing-and-available-versions.md
index a774bb8cf1d877cfc851b4b9b7f4697319a5ea5e..41972882c7d3154e6474953e91aaf250a3b2b91b 100644
--- a/docs.it4i/salomon/software/comsol/licensing-and-available-versions.md
+++ b/docs.it4i/salomon/software/comsol/licensing-and-available-versions.md
@@ -2,9 +2,9 @@
 
 ## Comsol Licence Can Be Used By:
 
-*   all persons in the carrying out of the CE IT4Innovations Project (In addition to the primary licensee, which is VSB * Technical University of Ostrava, users are CE IT4Innovations third parties * CE IT4Innovations project partners, particularly the University of Ostrava, the Brno University of Technology * Faculty of Informatics, the Silesian University in Opava, Institute of Geonics AS CR.)
-*   all persons who have a valid license
-*   students of the Technical University
+-   all persons involved in carrying out the CE IT4Innovations Project (in addition to the primary licensee, which is VSB - Technical University of Ostrava, users are CE IT4Innovations third parties - CE IT4Innovations project partners, particularly the University of Ostrava, the Brno University of Technology - Faculty of Informatics, the Silesian University in Opava, and the Institute of Geonics AS CR).
+-   all persons who have a valid license
+-   students of the Technical University
 
 ## Comsol EDU Network Licence
 
@@ -16,4 +16,4 @@ The licence intended to be used for science and research, publications, students
 
 ## Available Versions
 
-*   ver. 51
+-   ver. 51
diff --git a/docs.it4i/salomon/software/debuggers/Introduction.md b/docs.it4i/salomon/software/debuggers/Introduction.md
index 7e5dd61558cb4750fd45df1190c4903de2b38e89..a5c9cfb60154fbaf13faebaf15a508597b40703f 100644
--- a/docs.it4i/salomon/software/debuggers/Introduction.md
+++ b/docs.it4i/salomon/software/debuggers/Introduction.md
@@ -19,7 +19,7 @@ Read more at the [Intel Debugger](../intel-suite/intel-debugger/) page.
 
 ## Allinea Forge (DDT/MAP)
 
-Allinea DDT, is a commercial debugger primarily for debugging parallel MPI or OpenMP programs. It also has a support for GPU (CUDA) and Intel Xeon Phi accelerators. DDT provides all the standard debugging features (stack trace, breakpoints, watches, view variables, threads etc.) for every thread running as part of your program, or for every process * even if these processes are distributed across a cluster using an MPI implementation.
+Allinea DDT is a commercial debugger primarily for debugging parallel MPI or OpenMP programs. It also has support for GPU (CUDA) and Intel Xeon Phi accelerators. DDT provides all the standard debugging features (stack trace, breakpoints, watches, view variables, threads etc.) for every thread running as part of your program, or for every process - even if these processes are distributed across a cluster using an MPI implementation.
 
 ```bash
     $ module load Forge
@@ -41,7 +41,7 @@ Read more at the [Allinea Performance Reports](allinea-performance-reports/) pag
 
 ## Rogue Wave TotalView
 
-TotalView is a source* and machine-level debugger for multi-process, multi-threaded programs. Its wide range of tools provides ways to analyze, organize, and test programs, making it easy to isolate and identify problems in individual threads and processes in programs of great complexity.
+TotalView is a source- and machine-level debugger for multi-process, multi-threaded programs. Its wide range of tools provides ways to analyze, organize, and test programs, making it easy to isolate and identify problems in individual threads and processes in programs of great complexity.
 
 ```bash
     $ module load TotalView/8.15.4-6-linux-x86-64
diff --git a/docs.it4i/salomon/software/debuggers/aislinn.md b/docs.it4i/salomon/software/debuggers/aislinn.md
index 32549717e8eeae7a764c8fd0b20fa0f7a5b0f070..c881c078c7b351badd3c06512eb9f2fdf846d739 100644
--- a/docs.it4i/salomon/software/debuggers/aislinn.md
+++ b/docs.it4i/salomon/software/debuggers/aislinn.md
@@ -1,12 +1,12 @@
 # Aislinn
 
-*   Aislinn is a dynamic verifier for MPI programs. For a fixed input it covers all possible runs with respect to nondeterminism introduced by MPI. It allows to detect bugs (for sure) that occurs very rare in normal runs.
-*   Aislinn detects problems like invalid memory accesses, deadlocks, misuse of MPI, and resource leaks.
-*   Aislinn is open-source software; you can use it without any licensing limitations.
-*   Web page of the project: <http://verif.cs.vsb.cz/aislinn/>
+-   Aislinn is a dynamic verifier for MPI programs. For a fixed input it covers all possible runs with respect to nondeterminism introduced by MPI. This allows it to reliably detect bugs that occur only very rarely in normal runs.
+-   Aislinn detects problems like invalid memory accesses, deadlocks, misuse of MPI, and resource leaks.
+-   Aislinn is open-source software; you can use it without any licensing limitations.
+-   Web page of the project: <http://verif.cs.vsb.cz/aislinn/>
 
 !!! Note
-     Aislinn is software developed at IT4Innovations and some parts are still considered experimental. If you have any questions or experienced any problems, please contact the author: <mailto:stanislav.bohm@vsb.cz>.
+	Aislinn is software developed at IT4Innovations and some parts are still considered experimental. If you have any questions or experience any problems, please contact the author: <mailto:stanislav.bohm@vsb.cz>.
 
 ### Usage
 
@@ -28,7 +28,7 @@ int main(int argc, char **argv) {
               int data;
               MPI_Recv(&data, 1, MPI_INT, MPI_ANY_SOURCE, 1,
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);
-              mem1[data] = 10; //                <---------* Possible invalid memory write
+              mem1[data] = 10; //                <---------- Possible invalid memory write
               MPI_Recv(&data, 1, MPI_INT, MPI_ANY_SOURCE, 1,
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);
               mem2[data] = 10;
@@ -83,20 +83,20 @@ At the beginning of the report there are some basic summaries of the verificatio
 
 It shows us:
 
-*   Error occurs in process 0 in test.cpp on line 16.
-*   Stdout and stderr streams are empty. (The program does not write anything).
-*   The last part shows MPI calls for each process that occurs in the invalid run. The more detailed information about each call can be obtained by mouse cursor.
+-   The error occurs in process 0 in test.cpp on line 16.
+-   The stdout and stderr streams are empty. (The program does not write anything.)
+-   The last part shows the MPI calls of each process that occur in the invalid run. More detailed information about each call can be obtained by hovering the mouse cursor over it.
 
 ### Limitations
 
 Since the verification is a non-trivial process, there are some limitations.
 
-*   The verified process has to terminate in all runs, i.e. we cannot answer the halting problem.
-*   The verification is a computationally and memory demanding process. We put an effort to make it efficient and it is an important point for further research. However covering all runs will be always more demanding than techniques that examines only a single run. The good practise is to start with small instances and when it is feasible, make them bigger. The Aislinn is good to find bugs that are hard to find because they occur very rarely (only in a rare scheduling). Such bugs often do not need big instances.
-*   Aislinn expects that your program is a "standard MPI" program, i.e. processes communicate only through MPI, the verified program does not interacts with the system in some unusual ways (e.g. opening sockets).
+-   The verified process has to terminate in all runs, i.e. we cannot answer the halting problem.
+-   The verification is a computationally and memory demanding process. We put effort into making it efficient, and this remains an important point for further research. However, covering all runs will always be more demanding than techniques that examine only a single run. Good practice is to start with small instances and, when feasible, make them bigger. Aislinn is good at finding bugs that are hard to find because they occur very rarely (only under a rare scheduling). Such bugs often do not need big instances.
+-   Aislinn expects that your program is a "standard MPI" program, i.e. processes communicate only through MPI and the verified program does not interact with the system in unusual ways (e.g. opening sockets).
 
 There are also some limitations bound to the current version; they will be removed in the future:
 
-*   All files containing MPI calls have to be recompiled by MPI implementation provided by Aislinn. The files that does not contain MPI calls, they do not have to recompiled. Aislinn MPI implementation supports many commonly used calls from MPI-2 and MPI-3 related to point-to-point communication, collective communication, and communicator management. Unfortunately, MPI-IO and one-side communication is not implemented yet.
-*   Each MPI can use only one thread (if you use OpenMP, set OMP_NUM_THREADS to 1).
-*   There are some limitations for using files, but if the program just reads inputs and writes results, it is ok.
+-   All files containing MPI calls have to be recompiled by the MPI implementation provided by Aislinn. Files that do not contain MPI calls do not have to be recompiled. The Aislinn MPI implementation supports many commonly used calls from MPI-2 and MPI-3 related to point-to-point communication, collective communication, and communicator management. Unfortunately, MPI-IO and one-sided communication are not implemented yet.
+-   Each MPI process can use only one thread (if you use OpenMP, set OMP_NUM_THREADS to 1, as shown below).
+-   There are some limitations on using files, but if the program just reads inputs and writes results, it is ok.
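+
+For example, to pin an OpenMP-enabled program to a single thread before verification:
+
+```bash
+$ export OMP_NUM_THREADS=1
+```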
diff --git a/docs.it4i/salomon/software/debuggers/allinea-ddt.md b/docs.it4i/salomon/software/debuggers/allinea-ddt.md
index 54b3b41fc02dea606cdc299d0c06df9d1b870923..bde30948226667d5723067710523566ccd6538ca 100644
--- a/docs.it4i/salomon/software/debuggers/allinea-ddt.md
+++ b/docs.it4i/salomon/software/debuggers/allinea-ddt.md
@@ -1,8 +1,8 @@
 # Allinea Forge (DDT,MAP)
 
-Allinea Forge consist of two tools * debugger DDT and profiler MAP.
+Allinea Forge consists of two tools - the debugger DDT and the profiler MAP.
 
-Allinea DDT, is a commercial debugger primarily for debugging parallel MPI or OpenMP programs. It also has a support for GPU (CUDA) and Intel Xeon Phi accelerators. DDT provides all the standard debugging features (stack trace, breakpoints, watches, view variables, threads etc.) for every thread running as part of your program, or for every process * even if these processes are distributed across a cluster using an MPI implementation.
+Allinea DDT is a commercial debugger primarily for debugging parallel MPI or OpenMP programs. It also has support for GPU (CUDA) and Intel Xeon Phi accelerators. DDT provides all the standard debugging features (stack trace, breakpoints, watches, view variables, threads etc.) for every thread running as part of your program, or for every process - even if these processes are distributed across a cluster using an MPI implementation.
 
 Allinea MAP is a profiler for C/C++/Fortran HPC codes. It is designed for profiling parallel code that uses pthreads, OpenMP or MPI.
 
@@ -10,13 +10,13 @@ Allinea MAP is a profiler for C/C++/Fortran HPC codes. It is designed for profil
 
 On Anselm users can debug OpenMP or MPI code that runs in up to 64 parallel processes. When debugging GPU or Xeon Phi accelerated codes, the limit is 8 accelerators. These limitations mean that:
 
-*   1 user can debug up 64 processes, or
-*   32 users can debug 2 processes, etc.
+-   1 user can debug up to 64 processes, or
+-   32 users can debug 2 processes, etc.
 
 In case of debugging on accelerators:
 
-*   1 user can debug on up to 8 accelerators, or
-*   8 users can debug on single accelerator.
+-   1 user can debug on up to 8 accelerators, or
+-   8 users can debug on a single accelerator.
 
 ## Compiling Code to Run With DDT
 
@@ -48,9 +48,9 @@ $ mpif90 -g -O0 -o test_debug test.f
 Before debugging, you need to compile your code with these flags:
 
 !!! Note
-     \* **g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
+	**-g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
 
-    * * **O0** : Suppress all optimizations.
+    **-O0** : Suppress all optimizations.
 
 ## Starting a Job With DDT
 
diff --git a/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md b/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md
index bb6f08fc46323f87fcf0b394141d8f7ce3c2739d..14f54e72e7a0cce19406f236a647d89cc1246d52 100644
--- a/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md
+++ b/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md
@@ -4,10 +4,10 @@
 
 Intel® VTune™ Amplifier, part of Intel Parallel Studio, is a GUI profiling tool designed for Intel processors. It offers a graphical performance analysis of single core and multithreaded applications. A highlight of the features:
 
-*   Hotspot analysis
-*   Locks and waits analysis
-*   Low level specific counters, such as branch analysis and memory bandwidth
-*   Power usage analysis * frequency and sleep states.
+-   Hotspot analysis
+-   Locks and waits analysis
+-   Low level specific counters, such as branch analysis and memory bandwidth
+-   Power usage analysis - frequency and sleep states.
 
 ![](../../../img/vtune-amplifier.png)
 
@@ -51,7 +51,7 @@ VTune Amplifier also allows a form of remote analysis. In this mode, data for an
 The command line will look like this:
 
 ```bash
-    /apps/all/VTune/2016_update1/vtune_amplifier_xe_2016.1.1.434111/bin64/amplxe-cl -collect advanced-hotspots -app-working-dir /home/sta545/tmp -* /home/sta545/tmp/sgemm
+    /apps/all/VTune/2016_update1/vtune_amplifier_xe_2016.1.1.434111/bin64/amplxe-cl -collect advanced-hotspots -app-working-dir /home/sta545/tmp -- /home/sta545/tmp/sgemm
 ```
 
 Copy the line to the clipboard; you can then paste it into your jobscript or on the command line. After the collection has run, open the GUI once again, click the menu button in the upper right corner, and select "Open > Result...". The GUI will load the results from the run.
@@ -69,20 +69,20 @@ This mode is useful for native Xeon Phi applications launched directly on the ca
 This mode is useful for applications that are launched from the host and use offload, OpenCL or mpirun. In the *Analysis Target* window, select *Intel Xeon Phi coprocessor (native)*, then choose the path to the binary and the MIC card to run on.
 
 !!! Note
-     If the analysis is interrupted or aborted, further analysis on the card might be impossible and you will get errors like "ERROR connecting to MIC card". In this case please contact our support to reboot the MIC card.
+	If the analysis is interrupted or aborted, further analysis on the card might be impossible and you will get errors like "ERROR connecting to MIC card". In this case please contact our support to reboot the MIC card.
 
 You may also use remote analysis to collect data from the MIC and then analyze it in the GUI later:
 
 Native launch:
 
 ```bash
-    $ /apps/all/VTune/2016_update1/vtune_amplifier_xe_2016.1.1.434111/bin64/amplxe-cl -target-system mic-native:0 -collect advanced-hotspots -* /home/sta545/tmp/vect-add-mic
+    $ /apps/all/VTune/2016_update1/vtune_amplifier_xe_2016.1.1.434111/bin64/amplxe-cl -target-system mic-native:0 -collect advanced-hotspots -- /home/sta545/tmp/vect-add-mic
 ```
 
 Host launch:
 
 ```bash
-    $ /apps/all/VTune/2016_update1/vtune_amplifier_xe_2016.1.1.434111/bin64/amplxe-cl -target-system mic-host-launch:0 -collect advanced-hotspots -* /home/sta545/tmp/sgemm
+    $ /apps/all/VTune/2016_update1/vtune_amplifier_xe_2016.1.1.434111/bin64/amplxe-cl -target-system mic-host-launch:0 -collect advanced-hotspots -- /home/sta545/tmp/sgemm
 ```
 
 You can obtain this command line by pressing the "Command line..." button on the Analysis Type screen.
diff --git a/docs.it4i/salomon/software/debuggers/total-view.md b/docs.it4i/salomon/software/debuggers/total-view.md
index f93d51ae6435da263fe40f88eca49e19828929ac..508350571a558fc7b564a6800574d85b9447a917 100644
--- a/docs.it4i/salomon/software/debuggers/total-view.md
+++ b/docs.it4i/salomon/software/debuggers/total-view.md
@@ -46,7 +46,7 @@ Compile the code:
 Before debugging, you need to compile your code with these flags:
 
 !!! Note
-     **-g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
+	**-g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
 
     **-O0** : Suppress all optimizations.
 
@@ -76,7 +76,7 @@ To debug a serial code use:
     totalview test_debug
 ```
 
-### Debugging a Parallel Code * Option 1
+### Debugging a Parallel Code - Option 1
 
 To debug a parallel code compiled with **OpenMPI** you need to set up your TotalView environment:
 
@@ -130,7 +130,7 @@ At this point the main TotalView GUI window will appear and you can insert the b
 
 ![](../../../img/TightVNC_login.png)
 
-### Debugging a Parallel Code * Option 2
+### Debugging a Parallel Code - Option 2
 
 Another option to start a new parallel debugging session from the command line is to let TotalView execute mpirun by itself. In this case the user has to specify the MPI implementation used to compile the source code.
 
diff --git a/docs.it4i/salomon/software/debuggers/valgrind.md b/docs.it4i/salomon/software/debuggers/valgrind.md
index fe10695594d73b838b95da12735bdfac160d9d5a..a5d52269cc0e835f77752fc0ed8be3d3afe40b24 100644
--- a/docs.it4i/salomon/software/debuggers/valgrind.md
+++ b/docs.it4i/salomon/software/debuggers/valgrind.md
@@ -8,20 +8,20 @@ Valgind is an extremely useful tool for debugging memory errors such as [off-by-
 
 The main tools available in Valgrind are:
 
-*   **Memcheck**, the original, must used and default tool. Verifies memory access in you program and can detect use of unitialized memory, out of bounds memory access, memory leaks, double free, etc.
-*   **Massif**, a heap profiler.
-*   **Hellgrind** and **DRD** can detect race conditions in multi-threaded applications.
-*   **Cachegrind**, a cache profiler.
-*   **Callgrind**, a callgraph analyzer.
-*   For a full list and detailed documentation, please refer to the [official Valgrind documentation](http://valgrind.org/docs/).
+-   **Memcheck**, the original, most used and default tool. Verifies memory access in your program and can detect use of uninitialized memory, out-of-bounds memory access, memory leaks, double free, etc.
+-   **Massif**, a heap profiler.
+-   **Helgrind** and **DRD** can detect race conditions in multi-threaded applications.
+-   **Cachegrind**, a cache profiler.
+-   **Callgrind**, a callgraph analyzer.
+-   For a full list and detailed documentation, please refer to the [official Valgrind documentation](http://valgrind.org/docs/).
 
 ## Installed Versions
 
 There are two versions of Valgrind available on the cluster.
 
-*   Version 3.8.1, installed by operating system vendor in /usr/bin/valgrind. This version is available by default, without the need to load any module. This version however does not provide additional MPI support. Also, it does not support AVX2 instructions, debugging of an AVX2-enabled executable with this version will fail
-*   Version 3.11.0 built by ICC with support for Intel MPI, available in module Valgrind/3.11.0-intel-2015b. After loading the module, this version replaces the default valgrind.
-*   Version 3.11.0 built by GCC with support for Open MPI, module Valgrind/3.11.0-foss-2015b
+-   Version 3.8.1, installed by the operating system vendor in /usr/bin/valgrind. This version is available by default, without the need to load any module. However, this version does not provide additional MPI support. Also, it does not support AVX2 instructions; debugging an AVX2-enabled executable with this version will fail.
+-   Version 3.11.0 built by ICC with support for Intel MPI, available in module Valgrind/3.11.0-intel-2015b. After loading the module, this version replaces the default valgrind.
+-   Version 3.11.0 built by GCC with support for Open MPI, module Valgrind/3.11.0-foss-2015b
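+
+For instance, to switch to the Intel build and verify which binary is picked up:
+
+```bash
+$ module load Valgrind/3.11.0-intel-2015b
+$ valgrind --version   # should now report valgrind-3.11.0
+```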
 
 ## Usage
 
 For example, let's look at this C code, which has two problems:
     {
        int* x = malloc(10 * sizeof(int));
        x[10] = 0; // problem 1: heap block overrun
-    }             // problem 2: memory leak -* x not freed
+    }             // problem 2: memory leak -- x not freed
 
     int main(void)
     {
@@ -90,7 +90,7 @@ If no Valgrind options are specified, Valgrind defaults to running Memcheck tool
     ==12652== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 6 from 6)
 ```
 
-In the output we can see that Valgrind has detected both errors * the off-by-one memory access at line 5 and a memory leak of 40 bytes. If we want a detailed analysis of the memory leak, we need to run Valgrind with  --leak-check=full option:
+In the output we can see that Valgrind has detected both errors - the off-by-one memory access at line 5 and a memory leak of 40 bytes. If we want a detailed analysis of the memory leak, we need to run Valgrind with the --leak-check=full option:
 
 ```bash
     $ valgrind --leak-check=full ./valgrind-example
@@ -179,7 +179,7 @@ Lets look at this MPI example:
     }
 ```
 
-There are two errors * use of uninitialized memory and invalid length of the buffer. Lets debug it with valgrind :
+There are two errors - use of uninitialized memory and an invalid length of the buffer. Let's debug it with valgrind:
 
 ```bash
     $ module add intel impi
diff --git a/docs.it4i/salomon/software/intel-suite/intel-advisor.md b/docs.it4i/salomon/software/intel-suite/intel-advisor.md
index 66da99cedfdf991dc208c9ea4c4160dbcbc44c0f..77975c016e750cd816ca9821acbe57bb5d18ccae 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-advisor.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-advisor.md
@@ -6,7 +6,7 @@ is tool aiming to assist you in vectorization and threading of your code. You ca
 
 The following versions are currently available on Salomon as modules:
 
-2016 Update 2 * Advisor/2016_update2
+2016 Update 2 - Advisor/2016_update2
 
 ## Usage
 
@@ -26,6 +26,6 @@ In the left pane, you can switch between Vectorization and Threading workflows.
 
 ## References
 
-1.  [Intel® Advisor 2015 Tutorial: Find Where to Add Parallelism * C++ Sample](https://software.intel.com/en-us/intel-advisor-tutorial-vectorization-windows-cplusplus)
+1.  [Intel® Advisor 2015 Tutorial: Find Where to Add Parallelism - C++ Sample](https://software.intel.com/en-us/intel-advisor-tutorial-vectorization-windows-cplusplus)
 2.  [Product page](https://software.intel.com/en-us/intel-advisor-xe)
 3.  [Documentation](https://software.intel.com/en-us/intel-advisor-2016-user-guide-linux)
diff --git a/docs.it4i/salomon/software/intel-suite/intel-compilers.md b/docs.it4i/salomon/software/intel-suite/intel-compilers.md
index 1a122dbae163406643dc6c3d5fde62a397cff3af..43b816102543988f8b462d2ab0e3d0c408b971f5 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-compilers.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-compilers.md
@@ -32,5 +32,5 @@ Read more at <https://software.intel.com/en-us/intel-cplusplus-compiler-16.0-use
 
  Anselm nodes are currently equipped with Sandy Bridge CPUs, while Salomon compute nodes are equipped with CPUs based on the Haswell architecture. The UV1 SMP compute server has Ivy Bridge CPUs, which are equivalent to Sandy Bridge (only a smaller manufacturing process). The new processors are backward compatible with the Sandy Bridge nodes, so all programs that ran on the Sandy Bridge processors should also run on the new Haswell nodes. To get optimal performance out of the Haswell processors, a program should make use of the special AVX2 instructions for this processor. One can do this by recompiling codes with the compiler flags designated to invoke these instructions. For the Intel compiler suite, there are two ways of doing this:
 
-*   Using compiler flag (both for Fortran and C): -xCORE-AVX2. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge/Ivy Bridge nodes.
-*   Using compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This   will generate multiple, feature specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. So this binary will run both on Sandy Bridge/Ivy Bridge and Haswell processors. During runtime it will be decided which path to follow, dependent on which processor you are running on. In general this    will result in larger binaries.
+-   Using compiler flag (both for Fortran and C): -xCORE-AVX2. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge/Ivy Bridge nodes.
+-   Using compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This will generate multiple, feature-specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. So this binary will run both on Sandy Bridge/Ivy Bridge and Haswell processors. During runtime it will be decided which path to follow, dependent on which processor you are running on. In general this will result in larger binaries. A sketch of both builds follows below.
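+
+For illustration, the two strategies might look like this on the command line (icc shown; ifort takes the same flags, and the source file name is a placeholder):
+
+```bash
+# Haswell-only binary - will not run on Sandy Bridge/Ivy Bridge nodes
+$ icc -O2 -xCORE-AVX2 program.c -o program.avx2
+
+# multi-path binary - selects the best code path at runtime
+$ icc -O2 -xAVX -axCORE-AVX2 program.c -o program.dispatch
+```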
diff --git a/docs.it4i/salomon/software/intel-suite/intel-inspector.md b/docs.it4i/salomon/software/intel-suite/intel-inspector.md
index ca9a4eecaf1c5d93baf33d7e20e364c77cbaba4e..3ff7762b131f56dff0fa6385f90003e5f78d8812 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-inspector.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-inspector.md
@@ -6,7 +6,7 @@ Intel Inspector is a dynamic memory and threading error checking tool for C/C++/
 
 The following versions are currently available on Salomon as modules:
 
-2016 Update 1 * Inspector/2016_update1
+2016 Update 1 - Inspector/2016_update1
 
 ## Usage
 
diff --git a/docs.it4i/salomon/software/intel-suite/intel-mkl.md b/docs.it4i/salomon/software/intel-suite/intel-mkl.md
index e47d22568cbd245d7164ef79e22eefa4dedc1b5c..83f109e7d25d1eeb3bc3876fcf79750db6897cbc 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-mkl.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-mkl.md
@@ -4,14 +4,14 @@
 
 Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, extensively threaded and optimized for maximum performance. Intel MKL provides these basic math kernels:
 
-*   BLAS (level 1, 2, and 3) and LAPACK linear algebra routines, offering vector, vector-matrix, and matrix-matrix operations.
-*   The PARDISO direct sparse solver, an iterative sparse solver, and supporting sparse BLAS (level 1, 2, and 3) routines for solving sparse systems of equations.
-*   ScaLAPACK distributed processing linear algebra routines for Linux and Windows operating systems, as well as the Basic Linear Algebra Communications Subprograms (BLACS) and the Parallel Basic Linear Algebra Subprograms (PBLAS).
-*   Fast Fourier transform (FFT) functions in one, two, or three dimensions with support for mixed radices (not limited to sizes that are powers of 2), as well as distributed versions of these functions.
-*   Vector Math Library (VML) routines for optimized mathematical operations on vectors.
-*   Vector Statistical Library (VSL) routines, which offer high-performance vectorized random number generators (RNG) for several probability distributions, convolution and correlation routines, and summary statistics functions.
-*   Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search.
-*   Extended Eigensolver, a shared memory  version of an eigensolver based on the Feast Eigenvalue Solver.
+-   BLAS (level 1, 2, and 3) and LAPACK linear algebra routines, offering vector, vector-matrix, and matrix-matrix operations.
+-   The PARDISO direct sparse solver, an iterative sparse solver, and supporting sparse BLAS (level 1, 2, and 3) routines for solving sparse systems of equations.
+-   ScaLAPACK distributed processing linear algebra routines for Linux and Windows operating systems, as well as the Basic Linear Algebra Communications Subprograms (BLACS) and the Parallel Basic Linear Algebra Subprograms (PBLAS).
+-   Fast Fourier transform (FFT) functions in one, two, or three dimensions with support for mixed radices (not limited to sizes that are powers of 2), as well as distributed versions of these functions.
+-   Vector Math Library (VML) routines for optimized mathematical operations on vectors.
+-   Vector Statistical Library (VSL) routines, which offer high-performance vectorized random number generators (RNG) for several probability distributions, convolution and correlation routines, and summary statistics functions.
+-   Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search.
+-   Extended Eigensolver, a shared memory version of an eigensolver based on the Feast Eigenvalue Solver.
 
 For details see the [Intel MKL Reference Manual](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mklman/index.htm).
 
@@ -30,7 +30,7 @@ Intel MKL library may be linked using any compiler. With intel compiler use -mkl
 Intel MKL provides a number of interfaces. The fundamental ones are LP64 and ILP64. The Intel MKL ILP64 libraries use the 64-bit integer type (necessary for indexing large arrays, with more than 2^31 - 1 elements), whereas the LP64 libraries index arrays with the 32-bit integer type.
 
 | Interface | Integer type                                 |
-| --------* | -------------------------------------------* |
+| --------- | -------------------------------------------- |
 | LP64      | 32-bit, int, integer(kind=4), MPI_INT        |
 | ILP64     | 64-bit, long int, integer(kind=8), MPI_INT64 |
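+
+A sketch of how the two interfaces are typically selected at build time (the program name is a placeholder; the library names are the standard MKL ones):
+
+```bash
+# LP64 (default) via the compiler shortcut
+$ icc -mkl program.c -o program_lp64
+
+# ILP64: define MKL_ILP64 and link the ilp64 interface library explicitly
+$ icc -DMKL_ILP64 program.c -o program_ilp64 \
+      -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm
+```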
 
diff --git a/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md b/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md
index 11b085618801252dee95409c377a7ca2d943fa65..cf170231eec28314236eb262ae6893abf71852a9 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md
@@ -2,7 +2,7 @@
 
 Intel Trace Analyzer and Collector (ITAC) is a tool to collect and graphically analyze the behaviour of MPI applications. It helps you to analyze communication patterns of your application, identify hotspots, perform correctness checking (identify deadlocks, data corruption, etc.), and simulate how your application would run on a different interconnect.
 
-ITAC is a offline analysis tool * first you run your application to collect a trace file, then you can open the trace in a GUI analyzer to view it.
+ITAC is an offline analysis tool - first you run your application to collect a trace file, then you can open the trace in a GUI analyzer to view it.
 
 ## Installed Version
 
@@ -37,4 +37,4 @@ Please refer to Intel documenation about usage of the GUI tool.
 ## References
 
 1.  [Getting Started with Intel® Trace Analyzer and Collector](https://software.intel.com/en-us/get-started-with-itac-for-linux)
-2.  [Intel® Trace Analyzer and Collector * Documentation](https://software.intel.com/en-us/intel-trace-analyzer)
+2.  [Intel® Trace Analyzer and Collector - Documentation](https://software.intel.com/en-us/intel-trace-analyzer)
diff --git a/docs.it4i/salomon/software/intel-xeon-phi.md b/docs.it4i/salomon/software/intel-xeon-phi.md
index cb573aebc21282d8841bdcf38be41b5bee0c85f7..9fbdb31eb1736420a6ea254928c8db9676941fea 100644
--- a/docs.it4i/salomon/software/intel-xeon-phi.md
+++ b/docs.it4i/salomon/software/intel-xeon-phi.md
@@ -233,12 +233,12 @@ During the compilation Intel compiler shows which loops have been vectorized in
 Some interesting compiler flags useful not only for code debugging are:
 
 !!! Note
-     Debugging
-    openmp_report[0|1|2] * controls the compiler based vectorization diagnostic level
-    vec-report[0|1|2] * controls the OpenMP parallelizer diagnostic level
+	Debugging
+    openmp_report[0|1|2] - controls the OpenMP parallelizer diagnostic level
+    vec-report[0|1|2] - controls the compiler-based vectorization diagnostic level
 
     Performance optimization
-    xhost * FOR HOST ONLY * to generate AVX (Advanced Vector Extensions) instructions.
+    xhost - FOR HOST ONLY - to generate AVX (Advanced Vector Extensions) instructions.
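+
+An illustrative compile line combining these diagnostics (a sketch; the flag spellings assume the classic Intel compiler of that generation, and the file names are placeholders):
+
+```bash
+# report OpenMP parallelization and vectorization decisions; tune for the host CPU
+$ icc -openmp -openmp-report2 -vec-report2 -xHost source.c -o binary
+```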
 
 ## Automatic Offload Using Intel MKL Library
 
@@ -269,7 +269,7 @@ At first get an interactive PBS session on a node with MIC accelerator and load
     $ module load intel
 ```
 
-Following example show how to automatically offload an SGEMM (single precision * g dir="auto">eneral matrix multiply) function to MIC coprocessor. The code can be copied to a file and compiled without any necessary modification.
+The following example shows how to automatically offload an SGEMM (single precision general matrix multiply) function to the MIC coprocessor. The code can be copied to a file and compiled without any necessary modification.
 
 ```bash
     $ vim sgemm-ao-short.c
@@ -420,13 +420,13 @@ If the code is parallelized using OpenMP a set of additional libraries is requir
 For your information, the libraries required for execution of an OpenMP parallel code on Intel Xeon Phi, and their location, are:
 
 !!! Note
-     /apps/intel/composer_xe_2013.5.192/compiler/lib/mic
+	/apps/intel/composer_xe_2013.5.192/compiler/lib/mic
 
-    * libiomp5.so
-    * libimf.so
-    * libsvml.so
-    * libirng.so
-    * libintlc.so.5
+    - libiomp5.so
+    - libimf.so
+    - libsvml.so
+    - libirng.so
+    - libintlc.so.5
 
 Finally, to run the compiled code use:
 
@@ -501,7 +501,7 @@ After executing the complied binary file, following output should be displayed.
 ```
 
 !!! Note
-     More information about this example can be found on Intel website: <http://software.intel.com/en-us/vcsource/samples/caps-basic/>
+	More information about this example can be found on the Intel website: <http://software.intel.com/en-us/vcsource/samples/caps-basic/>
 
 The second example, which can be found in the "/apps/intel/opencl-examples" directory, is General Matrix Multiply. You can follow the same procedure to download the example to your directory and compile it.
 
@@ -603,11 +603,11 @@ An example of basic MPI version of "hello-world" example in C language, that can
 Intel MPI for the Xeon Phi coprocessors offers different MPI programming models:
 
 !!! Note
-    **Host-only model** * all MPI ranks reside on the host. The coprocessors can be used by using offload pragmas. (Using MPI calls inside offloaded code is not supported.)
+    **Host-only model** - all MPI ranks reside on the host. The coprocessors can be used by using offload pragmas. (Using MPI calls inside offloaded code is not supported.)
 
-    **Coprocessor-only model** * all MPI ranks reside only on the coprocessors.
+    **Coprocessor-only model** - all MPI ranks reside only on the coprocessors.
 
-    **Symmetric model** * the MPI ranks reside on both the host and the coprocessor. Most general MPI case.
+    **Symmetric model** - the MPI ranks reside on both the host and the coprocessor. Most general MPI case.
 
 ### Host-Only Model
 
@@ -650,8 +650,8 @@ Similarly to execution of OpenMP programs in native mode, since the environmenta
 ```
 
 !!! Note
-    * this file sets up both environmental variable for both MPI and OpenMP libraries.
-    * this file sets up the paths to a particular version of Intel MPI library and particular version of an Intel compiler. These versions have to match with loaded modules.
+    - this file sets up environment variables for both the MPI and OpenMP libraries.
+    - this file sets up the paths to a particular version of the Intel MPI library and a particular version of the Intel compiler. These versions have to match the loaded modules.
 
 To access a MIC accelerator located on the node that the user is currently connected to, use:
 
@@ -703,8 +703,8 @@ or using mpirun
 ```
 
 !!! Note
-    * the full path to the binary has to specified (here: "**>~/mpi-test-mic**")
-    * the LD_LIBRARY_PATH has to match with Intel MPI module used to compile the MPI code
+    - the full path to the binary has to be specified (here: "**~/mpi-test-mic**")
+    - the LD_LIBRARY_PATH has to match the Intel MPI module used to compile the MPI code
 
 The output should be again similar to:
 
@@ -725,7 +725,7 @@ A simple test to see if the file is present is to execute:
       /bin/pmi_proxy
 ```
 
-**Execution on host * MPI processes distributed over multiple accelerators on multiple nodes**
+**Execution on host - MPI processes distributed over multiple accelerators on multiple nodes**
 
 To get access to multiple nodes with MIC accelerators, the user has to use PBS to allocate the resources. To start an interactive session that allocates 2 compute nodes (= 2 MIC accelerators), run the qsub command with the following parameters:
 
@@ -752,9 +752,9 @@ This output means that the PBS allocated nodes cn204 and cn205, which means that
 
 !!! Note
     At this point the user can connect to any of the allocated nodes or any of the allocated MIC accelerators using ssh:
-    * to connect to the second node : `$ ssh cn205`
-    * to connect to the accelerator on the first node from the first node:  `$ ssh cn204-mic0` or `$ ssh mic0`
-    * to connect to the accelerator on the second node from the first node: `$ ssh cn205-mic0`
+    - to connect to the second node : `$ ssh cn205`
+    - to connect to the accelerator on the first node from the first node:  `$ ssh cn204-mic0` or `$ ssh mic0`
+    - to connect to the accelerator on the second node from the first node: `$ ssh cn205-mic0`
 
 At this point we expect that the correct modules are loaded and the binary is compiled. For parallel execution, mpiexec.hydra is used. Again, the first step is to tell mpiexec that MPI can be executed on MIC accelerators by setting the environment variable "I_MPI_MIC".
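+
+For example (the value 1 enables MPI execution on the MIC; this is the standard Intel MPI usage):
+
+```bash
+$ export I_MPI_MIC=1
+```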
 
@@ -872,7 +872,7 @@ To run the MPI code using mpirun and the machine file "hosts_file_mix" use:
 A possible output of the MPI "hello-world" example executed on two hosts and two accelerators is:
 
 ```bash
-     Hello world from process 0 of 8 on host cn204
+    Hello world from process 0 of 8 on host cn204
     Hello world from process 1 of 8 on host cn204
     Hello world from process 2 of 8 on host cn204-mic0
     Hello world from process 3 of 8 on host cn204-mic0
@@ -890,11 +890,11 @@ A possible output of the MPI "hello-world" example executed on two hosts and two
 PBS also generates a set of node-files that can be used instead of manually creating a new one every time. Three node-files are generated:
 
 !!! Note
-     **Host only node-file:**
+	**Host only node-file:**
 
-     * /lscratch/${PBS_JOBID}/nodefile-cn MIC only node-file:
-     * /lscratch/${PBS_JOBID}/nodefile-mic Host and MIC node-file:
-     * /lscratch/${PBS_JOBID}/nodefile-mix
+     - /lscratch/${PBS_JOBID}/nodefile-cn
+
+	**MIC only node-file:**
+
+     - /lscratch/${PBS_JOBID}/nodefile-mic
+
+	**Host and MIC node-file:**
+
+     - /lscratch/${PBS_JOBID}/nodefile-mix
 
 Each host or accelerator is listed only once per file. The user has to specify how many processes should be executed per node using the "-n" parameter of the mpirun command, as sketched below.
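+
+A hedged sketch (the binary name is illustrative), launching over the PBS-generated mixed node-file:
+
+```bash
+$ mpirun -n 4 -machinefile /lscratch/${PBS_JOBID}/nodefile-mix ~/mpi-test-mic
+```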
 
diff --git a/docs.it4i/salomon/software/mpi/Running_OpenMPI.md b/docs.it4i/salomon/software/mpi/Running_OpenMPI.md
index a05758d90ba3bc30527179794b4981f2b0dc375d..4c742a6191de85dba2003ec6d02594e895fc225b 100644
--- a/docs.it4i/salomon/software/mpi/Running_OpenMPI.md
+++ b/docs.it4i/salomon/software/mpi/Running_OpenMPI.md
@@ -95,7 +95,7 @@ In this example, we demonstrate recommended way to run an MPI application, using
 ### OpenMP Thread Affinity
 
 !!! Note
-     Important!  Bind every OpenMP thread to a core!
+	Important! Bind every OpenMP thread to a core!
 
 In the previous two examples with one or two MPI processes per node, the operating system might still migrate OpenMP threads between cores. You might want to avoid this by setting these environment variables for GCC OpenMP:
 
@@ -203,7 +203,7 @@ In all cases, binding and threading may be verified by executing for example:
 Some options have changed in OpenMPI version 1.8.
 
 | version 1.6.5    | version 1.8.1       |
-| ---------------* | ------------------* |
+| ---------------- | ------------------- |
 | --bind-to-none   | --bind-to none      |
 | --bind-to-core   | --bind-to core      |
 | --bind-to-socket | --bind-to socket    |
diff --git a/docs.it4i/salomon/software/mpi/mpi.md b/docs.it4i/salomon/software/mpi/mpi.md
index 7f121fafb2f1a23a44a6b6ad60b721f03dd22de1..2428b60d24fd31d9e160c671be9926204fe339f8 100644
--- a/docs.it4i/salomon/software/mpi/mpi.md
+++ b/docs.it4i/salomon/software/mpi/mpi.md
@@ -5,7 +5,7 @@
 The Salomon cluster provides several implementations of the MPI library:
 
 | MPI Library       | Thread support                                                                                         |
-| ----------------* | -----------------------------------------------------------------------------------------------------* |
+| ----------------- | ------------------------------------------------------------------------------------------------------ |
 | **Intel MPI 4.1** | Full thread support up to MPI_THREAD_MULTIPLE                  |
 | **Intel MPI 5.0** | Full thread support up to MPI_THREAD_MULTIPLE                  |
 | OpenMPI 1.8.6     | Full thread support up to MPI_THREAD_MULTIPLE, MPI-3.0 support |
@@ -17,7 +17,7 @@ Look up section modulefiles/mpi in module avail
 
 ```bash
     $ module avail
-    -----------------------------* /apps/modules/mpi -------------------------------
+    ------------------------------ /apps/modules/mpi -------------------------------
     impi/4.1.1.036-iccifort-2013.5.192
     impi/4.1.1.036-iccifort-2013.5.192-GCC-4.8.3
     impi/5.0.3.048-iccifort-2015.3.187
@@ -29,8 +29,8 @@ Look up section modulefiles/mpi in module avail
 There are default compilers associated with any particular MPI implementation. The defaults may be changed; the MPI libraries may be used in conjunction with any compiler. The defaults are selected via the modules in the following way
 
 | Module                                   | MPI        | Compiler suite |
-| ---------------------------------------* | ---------* | -------------* |
-| impi-5.0.3.048-iccifort* Intel MPI 5.0.3 | 2015.3.187 |                |
+| ---------------------------------------- | ---------- | -------------- |
+| impi-5.0.3.048-iccifort-2015.3.187       | Intel MPI 5.0.3 |                |
 | OpenMPI-1.8.6-GNU-5.1.0-2.25             | OpenMPI 1.8.6   |                |
 
 Examples:
@@ -127,7 +127,7 @@ Consider these ways to run an MPI program:
 **Two MPI** processes per node, using 12 threads each, bound to a processor socket, is most useful for memory bandwidth-bound applications such as BLAS1 or FFT, with scalable memory demand. However, note that the two processes will share access to the network interface. The 12 threads and socket binding should ensure maximum memory access bandwidth and minimize communication, migration and NUMA effect overheads.
 
 !!! Note
-     Important!  Bind every OpenMP thread to a core!
+	Important!  Bind every OpenMP thread to a core!
 
 In the previous two cases with one or two MPI processes per node, the operating system might still migrate OpenMP threads between cores. You might want to avoid this by setting the KMP_AFFINITY or GOMP_CPU_AFFINITY environment variables.
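+
+For instance, a common choice for the Intel OpenMP runtime (a sketch, not the only valid setting):
+
+```bash
+    $ export KMP_AFFINITY=granularity=fine,compact,1,0
+```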
 
diff --git a/docs.it4i/salomon/software/numerical-languages/matlab.md b/docs.it4i/salomon/software/numerical-languages/matlab.md
index 5d59c90474680dee9032e84cda0642b30db98c3d..b9f7bc5a3c5c829e86aea964395c9df3443e15f6 100644
--- a/docs.it4i/salomon/software/numerical-languages/matlab.md
+++ b/docs.it4i/salomon/software/numerical-languages/matlab.md
@@ -4,8 +4,8 @@
 
 Matlab is available in versions R2015a and R2015b. There are always two variants of the release:
 
-*   Non commercial or so called EDU variant, which can be used for common research and educational purposes.
-*   Commercial or so called COM variant, which can used also for commercial activities. The licenses for commercial variant are much more expensive, so usually the commercial variant has only subset of features compared to the EDU available.
+-   Non-commercial or so-called EDU variant, which can be used for common research and educational purposes.
+-   Commercial or so-called COM variant, which can also be used for commercial activities. The licenses for the commercial variant are much more expensive, so usually the commercial variant has only a subset of the features available in the EDU variant.
 
 To load the latest version of Matlab load the module
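+
+For example (illustrative; the exact module name may differ):
+
+```bash
+    $ module load MATLAB
+```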
 
@@ -175,7 +175,7 @@ You can copy and paste the example in a .m file and execute. Note that the parpo
 
 ### Parallel Matlab Batch Job Using PBS Mode (Workers Spawned in a Separate Job)
 
-This mode uses PBS scheduler to launch the parallel pool. It uses the SalomonPBSPro profile that needs to be imported to Cluster Manager, as mentioned before. This methodod uses MATLAB's PBS Scheduler interface * it spawns the workers in a separate job submitted by MATLAB using qsub.
+This mode uses the PBS scheduler to launch the parallel pool. It uses the SalomonPBSPro profile that needs to be imported to Cluster Manager, as mentioned before. This method uses MATLAB's PBS Scheduler interface - it spawns the workers in a separate job submitted by MATLAB using qsub.
 
 This is an example of m-script using PBS mode:
 
@@ -258,7 +258,7 @@ In case of non-interactive session please read the [following information](../..
 Starting Matlab workers is an expensive process that requires a certain amount of time. For your information, please see the following table:
 
 | compute nodes | number of workers | start-up time [s] |
-| ------------* | ----------------* | ---------------* |
+| ------------- | ----------------- | ---------------- |
 | 16            | 384               | 831              |
 | 8             | 192               | 807              |
 | 4             | 96                | 483              |
diff --git a/docs.it4i/salomon/software/numerical-languages/octave.md b/docs.it4i/salomon/software/numerical-languages/octave.md
index ebf09d39890f31d74f31563476719136feb010cf..a9a82dfc0e88d777754465e602ec9a18cf40b188 100644
--- a/docs.it4i/salomon/software/numerical-languages/octave.md
+++ b/docs.it4i/salomon/software/numerical-languages/octave.md
@@ -5,11 +5,11 @@ GNU Octave is a high-level interpreted language, primarily intended for numerica
 Two versions of Octave are available on the cluster, via modules
 
 | Status     | Version      | module |
-| ---------* | -----------* | -----* |
+| ---------- | ------------ | ------ |
 | **Stable** | Octave 3.8.2 | Octave |
 
 ```bash
-     $ module load Octave
+	$ module load Octave
 ```
 
 Octave on the cluster is linked to the highly optimized MKL mathematical library. This provides threaded parallelization to many Octave kernels, notably the linear algebra subroutines. Octave runs these heavy calculation kernels without any penalty. By default, Octave parallelizes to 24 threads. You may control the number of threads by setting the OMP_NUM_THREADS environment variable.
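+
+For example, to restrict Octave to 8 threads (a sketch):
+
+```bash
+    $ export OMP_NUM_THREADS=8
+    $ octave
+```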
@@ -50,7 +50,7 @@ This script may be submitted directly to the PBS workload manager via the qsub c
 The Octave C compiler mkoctfile calls GNU gcc 4.8.1 to compile native C code. This is very useful for running native C subroutines in the Octave environment.
 
 ```bash
-     $ mkoctfile -v
+	$ mkoctfile -v
 ```
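+
+For example, to compile a native C++ source into a loadable .oct file (the file name is a placeholder):
+
+```bash
+    $ mkoctfile myfunction.cc
+```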
 
 Octave may use MPI for interprocess communication. This functionality is currently not supported on the cluster. In case you require the Octave interface to MPI, please contact our [cluster support](https://support.it4i.cz/rt/).
diff --git a/docs.it4i/salomon/software/numerical-languages/r.md b/docs.it4i/salomon/software/numerical-languages/r.md
index a533ccec7b12b4c199ecd939224b75bcee8f58ed..9afa31655aa34f07ff217c5ece8f6de298e691e2 100644
--- a/docs.it4i/salomon/software/numerical-languages/r.md
+++ b/docs.it4i/salomon/software/numerical-languages/r.md
@@ -17,7 +17,7 @@ Read more on <http://www.r-project.org/>, <http://cran.r-project.org/doc/manuals
 **The R version 3.1.1 is available on the cluster, along with GUI interface Rstudio**
 
 | Application | Version           | module              |
-| ----------* | ----------------* | ------------------* |
+| ----------- | ----------------- | ------------------- |
 | **R**       | R 3.1.1           | R/3.1.1-intel-2015b |
 | **Rstudio** | Rstudio 0.98.1103 | Rstudio             |
 
@@ -96,7 +96,7 @@ Download the package [parallell](package-parallel-vignette.pdf) vignette.
 Forking is the simplest to use. The forking family of functions provides a parallelized, drop-in replacement for the serial apply() family of functions.
 
 !!! warning
-     Forking via package parallel provides functionality similar to OpenMP construct omp parallel for
+	Forking via package parallel provides functionality similar to OpenMP construct omp parallel for
 
     Only the cores of a single node can be utilized this way!
 
@@ -106,13 +106,13 @@ Forking example:
     library(parallel)
 
     #integrand function
-    f <* function(i,h) {
-    x <* h*(i-0.5)
+    f <- function(i,h) {
+    x <- h*(i-0.5)
     return (4/(1 + x*x))
     }
 
     #initialize
-    size <* detectCores()
+    size <- detectCores()
 
     while (TRUE)
     {
@@ -123,11 +123,11 @@ Forking example:
       if(n<=0) break
 
       #run the calculation
-      n <* max(n,size)
-      h <*   1.0/n
+      n <- max(n,size)
+      h <-   1.0/n
 
-      i <* seq(1,n);
-      pi3 <* h*sum(simplify2array(mclapply(i,f,h,mc.cores=size)));
+      i <- seq(1,n);
+      pi3 <- h*sum(simplify2array(mclapply(i,f,h,mc.cores=size)));
 
       #print results
       cat(sprintf("Value of PI %16.14f, diff= %16.14f\n",pi3,pi3-pi))
@@ -161,7 +161,7 @@ Rmpi may be used in three basic ways. The static approach is identical to execut
 
 ### Static Rmpi
 
-Static Rmpi programs are executed via mpiexec, as any other MPI programs. Number of processes is static * given at the launch time.
+Static Rmpi programs are executed via mpiexec, like any other MPI programs. The number of processes is static - given at launch time.
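+
+For example, a launch could look like this (a sketch; the script name is a placeholder):
+
+```bash
+    $ mpirun -np 24 R --slave --no-save --no-restore -f pi3.R
+```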
 
 Static Rmpi example:
 
@@ -169,15 +169,15 @@ Static Rmpi example:
     library(Rmpi)
 
     #integrand function
-    f <* function(i,h) {
-    x <* h*(i-0.5)
+    f <- function(i,h) {
+    x <- h*(i-0.5)
     return (4/(1 + x*x))
     }
 
     #initialize
     invisible(mpi.comm.dup(0,1))
-    rank <* mpi.comm.rank()
-    size <* mpi.comm.size()
+    rank <- mpi.comm.rank()
+    size <- mpi.comm.size()
     n<-0
 
     while (TRUE)
@@ -189,18 +189,18 @@ Static Rmpi example:
       }
 
      #broadcast the intervals
-      n <* mpi.bcast(as.integer(n),type=1)
+      n <- mpi.bcast(as.integer(n),type=1)
 
       if(n<=0) break
 
       #run the calculation
-      n <* max(n,size)
-      h <*   1.0/n
+      n <- max(n,size)
+      h <-   1.0/n
 
-      i <* seq(rank+1,n,size);
-      mypi <* h*sum(sapply(i,f,h));
+      i <- seq(rank+1,n,size);
+      mypi <- h*sum(sapply(i,f,h));
 
-      pi3 <* mpi.reduce(mypi)
+      pi3 <- mpi.reduce(mypi)
 
       #print results
       if (rank==0) cat(sprintf("Value of PI %16.14f, diff= %16.14f\n",pi3,pi3-pi))
@@ -223,17 +223,17 @@ Dynamic Rmpi example:
 
 ```r
     #integrand function
-    f <* function(i,h) {
-    x <* h*(i-0.5)
+    f <- function(i,h) {
+    x <- h*(i-0.5)
     return (4/(1 + x*x))
     }
 
     #the worker function
-    workerpi <* function()
+    workerpi <- function()
     {
     #initialize
-    rank <* mpi.comm.rank()
-    size <* mpi.comm.size()
+    rank <- mpi.comm.rank()
+    size <- mpi.comm.size()
     n<-0
 
     while (TRUE)
@@ -245,18 +245,18 @@ Dynamic Rmpi example:
       }
 
      #broadcast the intervals
-      n <* mpi.bcast(as.integer(n),type=1)
+      n <- mpi.bcast(as.integer(n),type=1)
 
       if(n<=0) break
 
       #run the calculation
-      n <* max(n,size)
-      h <*   1.0/n
+      n <- max(n,size)
+      h <-   1.0/n
 
-      i <* seq(rank+1,n,size);
-      mypi <* h*sum(sapply(i,f,h));
+      i <- seq(rank+1,n,size);
+      mypi <- h*sum(sapply(i,f,h));
 
-      pi3 <* mpi.reduce(mypi)
+      pi3 <- mpi.reduce(mypi)
 
       #print results
       if (rank==0) cat(sprintf("Value of PI %16.14f, diff= %16.14f\n",pi3,pi3-pi))
@@ -287,7 +287,7 @@ Execute the example as:
     $ mpirun -np 1 R --slave --no-save --no-restore -f pi3Rslaves.R
 ```
 
-Note that this method uses MPI_Comm_spawn (Dynamic process feature of MPI-2) to start the slave processes * the master process needs to be launched with MPI. In general, Dynamic processes are not well supported among MPI implementations, some issues might arise. Also, environment variables are not propagated to spawned processes, so they will not see paths from modules.
+Note that this method uses MPI_Comm_spawn (Dynamic process feature of MPI-2) to start the slave processes - the master process needs to be launched with MPI. In general, dynamic processes are not well supported among MPI implementations, so some issues might arise. Also, environment variables are not propagated to spawned processes, so they will not see paths from modules.
 
 ### mpi.apply Rmpi
 
@@ -301,20 +301,20 @@ mpi.apply Rmpi example:
 
 ```r
     #integrand function
-    f <* function(i,h) {
-    x <* h*(i-0.5)
+    f <- function(i,h) {
+    x <- h*(i-0.5)
     return (4/(1 + x*x))
     }
 
     #the worker function
-    workerpi <* function(rank,size,n)
+    workerpi <- function(rank,size,n)
     {
       #run the calculation
-      n <* max(n,size)
-      h <* 1.0/n
+      n <- max(n,size)
+      h <- 1.0/n
 
-      i <* seq(rank,n,size);
-      mypi <* h*sum(sapply(i,f,h));
+      i <- seq(rank,n,size);
+      mypi <- h*sum(sapply(i,f,h));
 
       return(mypi)
     }
diff --git a/docs.it4i/salomon/software/operating-system.md b/docs.it4i/salomon/software/operating-system.md
index ec7882d66ff5bf69603ca981ea61955c1f3dc27e..1cf18dfda159662b5ceb8db420bbfc30476e3ff7 100644
--- a/docs.it4i/salomon/software/operating-system.md
+++ b/docs.it4i/salomon/software/operating-system.md
@@ -1,5 +1,5 @@
 # Operating System
 
-The operating system on Salomon is Linux * **CentOS 6.x**
+The operating system on Salomon is Linux - **CentOS 6.x**
 
 The CentOS Linux distribution is a stable, predictable, manageable and reproducible platform derived from the sources of Red Hat Enterprise Linux (RHEL).
diff --git a/docs.it4i/salomon/storage.md b/docs.it4i/salomon/storage.md
index e53ba05d23129c6b0288ab32e992dbee8129c9e9..cd42d086625f26020199732afd5eb3e377a4edaf 100644
--- a/docs.it4i/salomon/storage.md
+++ b/docs.it4i/salomon/storage.md
@@ -34,15 +34,15 @@ The  architecture of Lustre on Salomon is composed of two metadata servers (MDS)
 
 Configuration of the SCRATCH Lustre storage
 
-*   SCRATCH Lustre object storage
-    *   Disk array SFA12KX
-    *   540 x 4 TB SAS 7.2krpm disk
-    *   54 x OST of 10 disks in RAID6 (8+2)
-    *   15 x hot-spare disk
-    *   4 x 400 GB SSD cache
-*   SCRATCH Lustre metadata storage
-    *   Disk array EF3015
-    *   12 x 600 GB SAS 15 krpm disk
+-   SCRATCH Lustre object storage
+    -   Disk array SFA12KX
+    -   540 x 4 TB SAS 7.2krpm disk
+    -   54 x OST of 10 disks in RAID6 (8+2)
+    -   15 x hot-spare disk
+    -   4 x 400 GB SSD cache
+-   SCRATCH Lustre metadata storage
+    -   Disk array EF3015
+    -   12 x 600 GB SAS 15 krpm disk
 
 ### Understanding the Lustre File Systems
 
@@ -186,7 +186,7 @@ ACLs on a Lustre file system work exactly like ACLs on any Linux file system. Th
 [vop999@login1.salomon ~]$ umask 027
 [vop999@login1.salomon ~]$ mkdir test
 [vop999@login1.salomon ~]$ ls -ld test
-drwxr-x--* 2 vop999 vop999 4096 Nov  5 14:17 test
+drwxr-x--- 2 vop999 vop999 4096 Nov  5 14:17 test
 [vop999@login1.salomon ~]$ getfacl test
 # file: test
 # owner: vop999
@@ -229,7 +229,7 @@ The files on HOME will not be deleted until end of the [users lifecycle](../get-
 The workspace is backed up, such that it can be restored in case of catastrophic failure resulting in significant data loss. This backup, however, is not intended to restore old versions of user data or to restore (accidentally) deleted files.
 
 | HOME workspace |                |
-| -------------* | -------------* |
+| -------------- | -------------- |
 | Accesspoint    | /home/username |
 | Capacity       | 0.5 PB         |
 | Throughput     | 6 GB/s         |
@@ -251,7 +251,7 @@ The WORK workspace is hosted on SCRATCH file system. The SCRATCH is realized as
     Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
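+
+For illustration, striping may be inspected and adjusted with the standard Lustre lfs tool (the paths are placeholders):
+
+```bash
+$ lfs getstripe /scratch/work/user/username
+$ lfs setstripe -c 10 /scratch/work/user/username/mydir
+```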
 
 | WORK workspace       |                                                           |
-| -------------------* | --------------------------------------------------------* |
+| -------------------- | --------------------------------------------------------- |
 | Accesspoints         | /scratch/work/user/username, /scratch/work/user/projectid |
 | Capacity             | 1.6 PB                                                    |
 | Throughput           | 30 GB/s                                                   |
@@ -278,7 +278,7 @@ The TEMP workspace is hosted on SCRATCH file system. The SCRATCH is realized as
     Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
 
 | TEMP workspace       |               |
-| -------------------* | ------------* |
+| -------------------- | ------------- |
 | Accesspoint          | /scratch/temp |
 | Capacity             | 1.6 PB        |
 | Throughput           | 30 GB/s       |
@@ -303,7 +303,7 @@ The local RAM disk file system is intended for temporary scratch data generated
     The local RAM disk directory /ramdisk/$PBS_JOBID will be deleted immediately after the calculation ends. Users should take care to save the output data from within the jobscript.
 
 | RAM disk    |                                                                                                         |
-| ----------* | ------------------------------------------------------------------------------------------------------* |
+| ----------- | ------------------------------------------------------------------------------------------------------- |
 | Mountpoint  | /ramdisk                                                                                                |
 | Accesspoint | /ramdisk/$PBS_JOBID                                                                                     |
 | Capacity    | 120 GB                                                                                                  |
@@ -313,7 +313,7 @@ The local RAM disk file system is intended for temporary scratch data generated
 ## Summary
 
 | Mountpoint    | Usage                          | Protocol    | Net Capacity | Throughput   | Limitations             | Access                  | Services                    |
-| ------------* | -----------------------------* | ----------* | ------* | -------* | -----------* | ----------------------* | --------------------------* |
+| ------------- | ------------------------------ | ----------- | ------- | -------- | ------------ | ----------------------- | --------------------------- |
 | /home         | home directory                 | NFS, 2-Tier | 0.5 PB  | 6 GB/s   | Quota 250GB  | Compute and login nodes | backed up                   |
 | /scratch/work | large project files            | Lustre      | 1.69 PB | 30 GB/s  | Quota        | Compute and login nodes | none                        |
 | /scratch/temp | job temporary data             | Lustre      | 1.69 PB | 30 GB/s  | Quota 100 TB | Compute and login nodes | files older than 90 days removed |
@@ -350,7 +350,7 @@ Once registered for CESNET Storage, you may [access the storage](https://du.cesn
 ### SSHFS Access
 
 !!! Note
-     SSHFS: The storage will be mounted like a local hard drive
+	SSHFS: The storage will be mounted like a local hard drive
 
 The SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it were a local removable hard drive. Files can then be copied in and out in the usual fashion.
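+
+A typical session might look like this (a sketch; the server name and file names are placeholders):
+
+```bash
+$ mkdir cesnet
+$ sshfs username@ssh.du1.cesnet.cz:. cesnet/
+$ cp mydata.tar cesnet/    # work with the files as if they were local
+$ fusermount -u cesnet     # unmount when done
+```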
 
@@ -395,7 +395,7 @@ Once done, please remember to unmount the storage
 ### Rsync Access
 
 !!! Note
-     Rsync provides delta transfer for best performance, can resume interrupted transfers
+	Rsync provides delta transfer for best performance, can resume interrupted transfers
 
 Rsync is a fast and extraordinarily versatile file copying tool. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination.  Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.
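+
+For example (a sketch; the server name and paths are placeholders):
+
+```bash
+$ rsync --progress mydata.tar username@ssh.du1.cesnet.cz:VO_storage-cache_tape/.
+```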
 
diff --git a/docs.it4i/software/bioinformatics.md b/docs.it4i/software/bioinformatics.md
index 80991ddd2f15f4ce2f5da1c5cf1cc85438659c63..76991fe7810ea45fdf7a77ed1cd03adf20a79152 100644
--- a/docs.it4i/software/bioinformatics.md
+++ b/docs.it4i/software/bioinformatics.md
@@ -206,7 +206,7 @@ sci-libs/umfpack-5.6.2
 ## Classification of Applications
 
 | Applications for bioinformatics at IT4I | Count  |
-| --------------------------------------* | -----* |
+| --------------------------------------- | ------ |
 | error-correctors                        | 6      |
 | aligners                                | 20     |
 | clusterers                              | 5      |
diff --git a/docs.it4i/software/lmod.md b/docs.it4i/software/lmod.md
index 1d35477a8ccb86c8e0b05ca4eead18c81755f12d..5ba63f7e03762e356a0d74cfb4eb4826682314a6 100644
--- a/docs.it4i/software/lmod.md
+++ b/docs.it4i/software/lmod.md
@@ -15,7 +15,7 @@ Detailed documentation on Lmod is available at [here](http://lmod.readthedocs.io
 Below you will find more details and examples.
 
 | command                  | equivalent/explanation                                           |
-| -----------------------* | ---------------------------------------------------------------* |
+| ------------------------ | ---------------------------------------------------------------- |
 | ml                       | module list                                                      |
 | ml GCC/6.2.0-2.27        | module load GCC/6.2.0-2.27                                       |
 | ml -GCC/6.2.0-2.27       | module unload GCC/6.2.0-2.27                                     |
@@ -47,14 +47,14 @@ To get an overview of all available modules, you can use module avail or simply
 
 ```bash
 $ ml av
----------------------------------------* /apps/modules/compiler ----------------------------------------------
-     GCC/5.2.0     GCCcore/6.2.0 (D)    icc/2013.5.192          ifort/2013.5.192    LLVM/3.9.0-intel-2017.00 (D)
-                  ...                                  ...
+---------------------------------------- /apps/modules/compiler ----------------------------------------------
+     GCC/5.2.0	GCCcore/6.2.0 (D)    icc/2013.5.192		ifort/2013.5.192    LLVM/3.9.0-intel-2017.00 (D)
+   			...                                  ...
 
----------------------------------------* /apps/modules/devel -------------------------------------------------
+---------------------------------------- /apps/modules/devel -------------------------------------------------
    Autoconf/2.69-foss-2015g                       CMake/3.0.0-intel-2016.01                           M4/1.4.17-intel-2016.01                     pkg-config/0.27.1-foss-2015g
    Autoconf/2.69-foss-2016a                       CMake/3.3.1-foss-2015g                              M4/1.4.17-intel-2017.00                     pkg-config/0.27.1-intel-2015b
-                  ...                                  ...
+   			...                                  ...
 ```
 
 In the current module naming scheme, each module name consists of two parts:
@@ -75,7 +75,7 @@ $ ml spider gcc
   GCC:
 ---------------------------------------------------------------------------------
     Description:
-      The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...). * Homepage: http://gcc.gnu.org/ 
+      The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...). - Homepage: http://gcc.gnu.org/ 
 
      Versions:
         GCC/4.4.7-system
@@ -108,7 +108,7 @@ $ ml spider gcc
 ```
 
 !!! tip
-     spider is case-insensitive.
+	spider is case-insensitive.
 
 If you use spider on a full module name like GCC/6.2.0-2.27, it will tell you on which cluster(s) that module is available:
 
@@ -118,13 +118,13 @@ $ module spider GCC/6.2.0-2.27
   GCC: GCC/6.2.0-2.27
 --------------------------------------------------------------------------------------------------------------
     Description:
-      The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...). * Homepage: http://gcc.gnu.org/ 
+      The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...). - Homepage: http://gcc.gnu.org/ 
 
     This module can be loaded directly: module load GCC/6.2.0-2.27
 
     Help:
       The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada,
-       as well as libraries for these languages (libstdc++, libgcj,...). * Homepage: http://gcc.gnu.org/
+       as well as libraries for these languages (libstdc++, libgcj,...). - Homepage: http://gcc.gnu.org/
 ```
 
 This tells you what the module contains and a URL to the homepage of the software.
@@ -137,7 +137,7 @@ For example, to check which versions of git are available:
 ```bash
 $ ml av git
 
--------------------------------------* /apps/modules/tools ----------------------------------------
+-------------------------------------- /apps/modules/tools ----------------------------------------
    git/2.8.0-GNU-4.9.3-2.25    git/2.8.0-intel-2017.00    git/2.9.0    git/2.9.2    git/2.11.0 (D)
 
   Where:
@@ -148,14 +148,14 @@ Use "module keyword key1 key2 ..." to search for all possible modules matching a
 ```
 
 !!! tip
-     the specified software name is treated case-insensitively.
+	the specified software name is treated case-insensitively.
 
 Lmod does a partial match on the module name, so sometimes you need to use / to indicate the end of the software name you are interested in:
 
 ```bash
 $ ml av GCC/
 
------------------------------------------* /apps/modules/compiler -------------------------------------------
+------------------------------------------ /apps/modules/compiler -------------------------------------------
 GCC/4.4.7-system    GCC/4.8.3   GCC/4.9.2   GCC/4.9.3   GCC/5.1.0-binutils-2.25  GCC/5.3.0-binutils-2.25   GCC/5.3.0-2.26   GCC/5.4.0-2.26   GCC/4.7.4   GCC/4.9.2-binutils-2.25   GCC/4.9.3-binutils-2.25   GCC/4.9.3-2.25   GCC/5.2.0   GCC/5.3.0-2.25  GCC/6.2.0-2.27 (D)
 
   Where:
@@ -172,8 +172,8 @@ To see how a module would change the environment, use module show or ml show:
 ```bash
 $ ml show Python/3.5.2
 
-help([[Python is a programming language that lets you work more quickly and integrate your systems more effectively. * Homepage: http://python.org/]])
-whatis("Description: Python is a programming language that lets you work more quickly and integrate your systems more effectively. * Homepage: http://python.org/")
+help([[Python is a programming language that lets you work more quickly and integrate your systems more effectively. - Homepage: http://python.org/]])
+whatis("Description: Python is a programming language that lets you work more quickly and integrate your systems more effectively. - Homepage: http://python.org/")
 conflict("Python")
 load("bzip2/1.0.6")
 load("zlib/1.2.8")
@@ -196,7 +196,7 @@ setenv("EBEXTSLISTPYTHON","setuptools-20.1.1,pip-8.0.2,nose-1.3.7")
 ```
 
 !!! tip
-     Note that both the direct changes to the environment as well as other modules that will be loaded are shown.
+	Note that both the direct changes to the environment as well as other modules that will be loaded are shown.
 
 If you're not sure what all of this means: don't worry, you don't have to know; just try loading the module and then try using the software.
 
@@ -224,12 +224,12 @@ Currently Loaded Modules:
 ```
 
 !!! tip
-     Note that even though we only loaded a single module, the output of ml shows that a whole bunch of modules were loaded, which are required dependencies for intel/2017.00.
+	Note that even though we only loaded a single module, the output of ml shows that a whole bunch of modules were loaded, which are required dependencies for intel/2017.00.
 
 ## Conflicting Modules
 
 !!! warning
-     It is important to note that **only modules that are compatible with each other can be loaded together. In particular, modules must be installed either with the same toolchain as the modules that** are already loaded, or with a compatible (sub)toolchain.
+	It is important to note that **only modules that are compatible with each other can be loaded together**. In particular, modules must be installed either with the same toolchain as the modules that are already loaded, or with a compatible (sub)toolchain.
 
 For example, once you have loaded one or more modules that were installed with the intel/2017.00 toolchain, all other modules that you load should have been installed with the same toolchain.
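+
+For example (the module names are illustrative):
+
+```bash
+$ ml intel/2017.00
+$ ml HDF5/1.8.17-intel-2017.00    # same toolchain: compatible
+$ ml HDF5/1.8.17-foss-2016a       # different toolchain: likely to conflict
+```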
 
diff --git a/docs.it4i/software/orca.md b/docs.it4i/software/orca.md
index 6229049258c31a22d9923282b85707c78fff5400..6a8769c65c1033f1b55fecf26a1e7855bc3a9da6 100644
--- a/docs.it4i/software/orca.md
+++ b/docs.it4i/software/orca.md
@@ -1,6 +1,6 @@
 # ORCA
 
-ORCA is a flexible, efficient and easy-to-use general purpose tool for quantum chemistry with specific emphasis on spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods ranging from semiempirical methods to DFT to single* and multireference correlated ab initio methods. It can also treat environmental and relativistic effects.
+ORCA is a flexible, efficient and easy-to-use general purpose tool for quantum chemistry with specific emphasis on spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods ranging from semiempirical methods to DFT to single- and multireference correlated ab initio methods. It can also treat environmental and relativistic effects.
 
 ## Making ORCA Available
 
@@ -66,10 +66,10 @@ qsub: job 196821.isrv5 ready
                                  * O   R   C   A *
                                  *****************
 
-           --* An Ab Initio, DFT and Semiempirical electronic structure package ---
+           --- An Ab Initio, DFT and Semiempirical electronic structure package ---
 
                   #######################################################
-                  #                        -****                        #
+                  #                        -***-                        #
                   #  Department of molecular theory and spectroscopy    #
                   #              Directorship: Frank Neese              #
                   # Max Planck Institute for Chemical Energy Conversion #
@@ -77,7 +77,7 @@ qsub: job 196821.isrv5 ready
                   #                       Germany                       #
                   #                                                     #
                   #                  All rights reserved                #
-                  #                        -****                        #
+                  #                        -***-                        #
                   #######################################################
 
 ...