diff --git a/docs.it4i/anselm-cluster-documentation/capacity-computing.md b/docs.it4i/anselm-cluster-documentation/capacity-computing.md
index ed19b358fa2b94341604875c4a7fa4a8e29f1ce5..5021c12a2994303cd41a1e5cbff96cce0a3f3b72 100644
--- a/docs.it4i/anselm-cluster-documentation/capacity-computing.md
+++ b/docs.it4i/anselm-cluster-documentation/capacity-computing.md
@@ -9,9 +9,9 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 !!! note
     Please follow one of the procedures below if you wish to schedule more than 100 jobs at a time.
 
-*   Use [Job arrays](capacity-computing/#job-arrays) when running huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
-*   Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
-*   Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running huge number of single core jobs
+* Use [Job arrays](capacity-computing/#job-arrays) when running a huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
+* Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
+* Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running a huge number of single core jobs
 
 ## Policy
 
@@ -25,9 +25,9 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 
 A job array is a compact representation of many jobs, called subjobs. The subjobs share the same job script, and have the same values for all attributes and resources, with the following exceptions:
 
-*   each subjob has a unique index, $PBS_ARRAY_INDEX
-*   job Identifiers of subjobs only differ by their indices
-*   the state of subjobs can differ (R,Q,...etc.)
+* each subjob has a unique index, $PBS_ARRAY_INDEX
+* job identifiers of subjobs differ only by their indices
+* the state of subjobs can differ (R, Q, etc.)
 
 All subjobs within a job array have the same scheduling priority and schedule as independent jobs. The entire job array is submitted through a single qsub command and may be managed by the qdel, qalter, qhold, qrls and qsig commands as a single job.
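+
+A minimal sketch of submitting a job array of 100 subjobs (the jobscript name and index range are placeholders):
+
+```console
+$ qsub -N JOBNAME -J 1-100 jobscript
+```
+
+Inside the jobscript, each subjob can select its own piece of work via the index, e.g. `TASK=$(sed -n "${PBS_ARRAY_INDEX}p" tasklist)`.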
 
diff --git a/docs.it4i/anselm-cluster-documentation/compute-nodes.md b/docs.it4i/anselm-cluster-documentation/compute-nodes.md
index 440201ed024b87d101ea4a28ffd9ad393df83e37..f4eb8961bcb3d9d442e90d1bbea49f7664c8f5e8 100644
--- a/docs.it4i/anselm-cluster-documentation/compute-nodes.md
+++ b/docs.it4i/anselm-cluster-documentation/compute-nodes.md
@@ -6,46 +6,46 @@ Anselm is cluster of x86-64 Intel based nodes built on Bull Extreme Computing bu
 
 ### Compute Nodes Without Accelerator
 
-*   180 nodes
-*   2880 cores in total
-*   two Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
-*   64 GB of physical memory per node
-*   one 500GB SATA 2,5” 7,2 krpm HDD per node
-*   bullx B510 blade servers
-*   cn[1-180]
+* 180 nodes
+* 2880 cores in total
+* two Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
+* 64 GB of physical memory per node
+* one 500 GB SATA 2.5″ 7.2 krpm HDD per node
+* bullx B510 blade servers
+* cn[1-180]
 
 ### Compute Nodes With GPU Accelerator
 
-*   23 nodes
-*   368 cores in total
-*   two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
-*   96 GB of physical memory per node
-*   one 500GB SATA 2,5” 7,2 krpm HDD per node
-*   GPU accelerator 1x NVIDIA Tesla Kepler K20 per node
-*   bullx B515 blade servers
-*   cn[181-203]
+* 23 nodes
+* 368 cores in total
+* two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
+* 96 GB of physical memory per node
+* one 500 GB SATA 2.5″ 7.2 krpm HDD per node
+* GPU accelerator 1x NVIDIA Tesla Kepler K20 per node
+* bullx B515 blade servers
+* cn[181-203]
 
 ### Compute Nodes With MIC Accelerator
 
-*   4 nodes
-*   64 cores in total
-*   two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
-*   96 GB of physical memory per node
-*   one 500GB SATA 2,5” 7,2 krpm HDD per node
-*   MIC accelerator 1x Intel Phi 5110P per node
-*   bullx B515 blade servers
-*   cn[204-207]
+* 4 nodes
+* 64 cores in total
+* two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
+* 96 GB of physical memory per node
+* one 500 GB SATA 2.5″ 7.2 krpm HDD per node
+* MIC accelerator 1x Intel Phi 5110P per node
+* bullx B515 blade servers
+* cn[204-207]
 
 ### Fat Compute Nodes
 
-*   2 nodes
-*   32 cores in total
-*   2 Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
-*   512 GB of physical memory per node
-*   two 300GB SAS 3,5”15krpm HDD (RAID1) per node
-*   two 100GB SLC SSD per node
-*   bullx R423-E3 servers
-*   cn[208-209]
+* 2 nodes
+* 32 cores in total
+* 2 Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
+* 512 GB of physical memory per node
+* two 300 GB SAS 3.5″ 15 krpm HDDs (RAID1) per node
+* two 100GB SLC SSD per node
+* bullx R423-E3 servers
+* cn[208-209]
 
 ![](../img/bullxB510.png)
 **Figure Anselm bullx B510 servers**
@@ -65,23 +65,23 @@ Anselm is equipped with Intel Sandy Bridge processors Intel Xeon E5-2665 (nodes
 
 ### Intel Sandy Bridge E5-2665 Processor
 
-*   eight-core
-*   speed: 2.4 GHz, up to 3.1 GHz using Turbo Boost Technology
-*   peak performance:  19.2 GFLOP/s per core
-*   caches:
+* eight-core
+* speed: 2.4 GHz, up to 3.1 GHz using Turbo Boost Technology
+* peak performance: 19.2 GFLOP/s per core
+* caches:
    * L2: 256 KB per core
    * L3: 20 MB per processor
-*   memory bandwidth at the level of the processor: 51.2 GB/s
+* memory bandwidth at the level of the processor: 51.2 GB/s
 
 ### Intel Sandy Bridge E5-2470 Processor
 
-*   eight-core
-*   speed: 2.3 GHz, up to 3.1 GHz using Turbo Boost Technology
-*   peak performance:  18.4 GFLOP/s per core
-*   caches:
+* eight-core
+* speed: 2.3 GHz, up to 3.1 GHz using Turbo Boost Technology
+* peak performance: 18.4 GFLOP/s per core
+* caches:
    * L2: 256 KB per core
    * L3: 20 MB per processor
-*   memory bandwidth at the level of the processor: 38.4 GB/s
+* memory bandwidth at the level of the processor: 38.4 GB/s
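+
+These per-core peak figures follow from the clock rate times 8 double-precision FLOPs per cycle (Sandy Bridge AVX: one 4-wide add and one 4-wide multiply per cycle): 2.4 GHz × 8 = 19.2 GFLOP/s and 2.3 GHz × 8 = 18.4 GFLOP/s.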
 
 Nodes equipped with the Intel Xeon E5-2665 CPU have the PBS resource attribute cpu_freq = 24 set; nodes equipped with the Intel Xeon E5-2470 CPU have cpu_freq = 23.
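+
+For example, a sketch of requesting only the 2.4 GHz nodes (the node count and select statement are illustrative):
+
+```console
+$ qsub -A PROJECT_ID -q qprod -l select=4:ncpus=16:cpu_freq=24 ./jobscript
+```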
 
@@ -101,30 +101,30 @@ Intel Turbo Boost Technology is used by default,  you can disable it for all nod
 
 ### Compute Node Without Accelerator
 
-*   2 sockets
-*   Memory Controllers are integrated into processors.
+* 2 sockets
+* Memory Controllers are integrated into processors.
    * 8 DDR3 DIMMs per node
    * 4 DDR3 DIMMs per CPU
   * 1 DDR3 DIMM per channel
    * Data rate support: up to 1600MT/s
-*   Populated memory: 8 x 8 GB DDR3 DIMM 1600 MHz
+* Populated memory: 8 x 8 GB DDR3 DIMM 1600 MHz
 
 ### Compute Node With GPU or MIC Accelerator
 
-*   2 sockets
-*   Memory Controllers are integrated into processors.
+* 2 sockets
+* Memory Controllers are integrated into processors.
    * 6 DDR3 DIMMs per node
    * 3 DDR3 DIMMs per CPU
   * 1 DDR3 DIMM per channel
    * Data rate support: up to 1600MT/s
-*   Populated memory: 6 x 16 GB DDR3 DIMM 1600 MHz
+* Populated memory: 6 x 16 GB DDR3 DIMM 1600 MHz
 
 ### Fat Compute Node
 
-*   2 sockets
-*   Memory Controllers are integrated into processors.
+* 2 sockets
+* Memory Controllers are integrated into processors.
    * 16 DDR3 DIMMs per node
    * 8 DDR3 DIMMs per CPU
    * 2 DDR3 DIMMs per channel
    * Data rate support: up to 1600MT/s
-*   Populated memory: 16 x 32 GB DDR3 DIMM 1600 MHz
+* Populated memory: 16 x 32 GB DDR3 DIMM 1600 MHz
diff --git a/docs.it4i/anselm-cluster-documentation/hardware-overview.md b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
index 94130cf1737b06d64fff0a5b7a7574a2cd301da9..f130bd152f8666dd30cf9d3a7021d04f4ffa99f3 100644
--- a/docs.it4i/anselm-cluster-documentation/hardware-overview.md
+++ b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
@@ -12,10 +12,10 @@ The cluster compute nodes cn[1-207] are organized within 13 chassis.
 
 There are four types of compute nodes:
 
-*   180 compute nodes without the accelerator
-*   23 compute nodes with GPU accelerator - equipped with NVIDIA Tesla Kepler K20
-*   4 compute nodes with MIC accelerator - equipped with Intel Xeon Phi 5110P
-*   2 fat nodes - equipped with 512 GB RAM and two 100 GB SSD drives
+* 180 compute nodes without an accelerator
+* 23 compute nodes with a GPU accelerator - equipped with NVIDIA Tesla Kepler K20
+* 4 compute nodes with a MIC accelerator - equipped with Intel Xeon Phi 5110P
+* 2 fat nodes - equipped with 512 GB RAM and two 100 GB SSD drives
 
 [More about Compute nodes](compute-nodes/).
 
diff --git a/docs.it4i/anselm-cluster-documentation/prace.md b/docs.it4i/anselm-cluster-documentation/prace.md
index 84200c820f2ec9956c712539c52917fd80106b2d..5f2cd2e3a75913ffc93471fff57c7ded6c718e97 100644
--- a/docs.it4i/anselm-cluster-documentation/prace.md
+++ b/docs.it4i/anselm-cluster-documentation/prace.md
@@ -28,11 +28,11 @@ The user will need a valid certificate and to be present in the PRACE LDAP (plea
 
 Most of the information needed by PRACE users accessing the Anselm TIER-1 system can be found here:
 
-*   [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)
-*   [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)
-*   [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh)
-*   [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details)
-*   [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer)
+* [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)
+* [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)
+* [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh)
+* [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details)
+* [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer)
 
 Before you start using any of the services, don't forget to create a proxy certificate from your certificate:
 
diff --git a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution.md b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution.md
index 37a8f71f127a81669472f9a5b7fa5df910193fde..b04a95ead56383feaf887c3121495c091d0d380a 100644
--- a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution.md
+++ b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution.md
@@ -6,11 +6,11 @@ To run a [job](../introduction/), [computational resources](../introduction/) fo
 
 The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and the resources available to the Project. [The Fair-share](job-priority/) at Anselm ensures that individual users may consume an approximately equal amount of resources per week. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following queues are available to Anselm users:
 
-*   **qexp**, the Express queue
-*   **qprod**, the Production queue
-*   **qlong**, the Long queue, regula
-*   **qnvidia**, **qmic**, **qfat**, the Dedicated queues
-*   **qfree**, the Free resource utilization queue
+* **qexp**, the Express queue
+* **qprod**, the Production queue
+* **qlong**, the Long queue
+* **qnvidia**, **qmic**, **qfat**, the Dedicated queues
+* **qfree**, the Free resource utilization queue
 
 !!! note
     Check the queue status at <https://extranet.it4i.cz/anselm/>
diff --git a/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md b/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
index 5668df86761b633ce7f67b73a9eba724351037e6..16cb7510d63075d413a19a9a9702ebbf23a4fb78 100644
--- a/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
+++ b/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
@@ -20,11 +20,11 @@ The resources are allocated to the job in a fair-share fashion, subject to const
 
 **The qexp queue is equipped with nodes that do not all have the same CPU clock speed.** Should you need the very same CPU speed, you have to select the proper nodes during PBS job submission.
 
-*   **qexp**, the Express queue: This queue is dedicated for testing and running very small jobs. It is not required to specify a project to enter the qexp. There are 2 nodes always reserved for this queue (w/o accelerator), maximum 8 nodes are available via the qexp for a particular user, from a pool of nodes containing Nvidia accelerated nodes (cn181-203), MIC accelerated nodes (cn204-207) and Fat nodes with 512GB RAM (cn208-209). This enables to test and tune also accelerated code or code with higher RAM requirements. The nodes may be allocated on per core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
-*   **qprod**, the Production queue: This queue is intended for normal production runs. It is required that active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, except the reserved ones. 178 nodes without accelerator are included. Full nodes, 16 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
-*   **qlong**, the Long queue: This queue is intended for long production runs. It is required that active project with nonzero remaining resources is specified to enter the qlong. Only 60 nodes without acceleration may be accessed via the qlong queue. Full nodes, 16 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times of the standard qprod time - 3 x 48 h).
-*   **qnvidia**, qmic, qfat, the Dedicated queues: The queue qnvidia is dedicated to access the Nvidia accelerated nodes, the qmic to access MIC nodes and qfat the Fat nodes. It is required that active project with nonzero remaining resources is specified to enter these queues. 23 nvidia, 4 mic and 2 fat nodes are included. Full nodes, 16 cores per node are allocated. The queues run with very high priority, the jobs will be scheduled before the jobs coming from the qexp queue. An PI needs explicitly ask [support](https://support.it4i.cz/rt/) for authorization to enter the dedicated queues for all users associated to her/his Project.
-*   **qfree**, The Free resource queue: The queue qfree is intended for utilization of free resources, after a Project exhausted all its allocated computational resources (Does not apply to DD projects by default. DD projects have to request for persmission on qfree after exhaustion of computational resources.). It is required that active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 16 cores per node are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
+* **qexp**, the Express queue: This queue is dedicated to testing and running very small jobs. It is not required to specify a project to enter the qexp. There are 2 nodes always reserved for this queue (w/o accelerator); a maximum of 8 nodes is available via the qexp for a particular user, from a pool of nodes containing Nvidia accelerated nodes (cn181-203), MIC accelerated nodes (cn204-207) and Fat nodes with 512 GB RAM (cn208-209). This makes it possible to test and tune accelerated code or code with higher RAM requirements as well. The nodes may be allocated on a per-core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
+* **qprod**, the Production queue: This queue is intended for normal production runs. It is required that an active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, except the reserved ones. 178 nodes without accelerator are included. Full nodes, 16 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
+* **qlong**, the Long queue: This queue is intended for long production runs. It is required that an active project with nonzero remaining resources is specified to enter the qlong. Only 60 nodes without acceleration may be accessed via the qlong queue. Full nodes, 16 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times the standard qprod time - 3 x 48 h).
+* **qnvidia**, **qmic**, **qfat**, the Dedicated queues: The queue qnvidia is dedicated to accessing the Nvidia accelerated nodes, the qmic to accessing MIC nodes and qfat the Fat nodes. It is required that an active project with nonzero remaining resources is specified to enter these queues. 23 nvidia, 4 mic and 2 fat nodes are included. Full nodes, 16 cores per node, are allocated. The queues run with very high priority; the jobs will be scheduled before the jobs coming from the qexp queue. A PI needs to explicitly ask [support](https://support.it4i.cz/rt/) for authorization to enter the dedicated queues for all users associated with her/his Project.
+* **qfree**, the Free resource queue: The queue qfree is intended for utilization of free resources, after a Project has exhausted all of its allocated computational resources (this does not apply to DD projects by default; DD projects have to request permission to use qfree after exhaustion of computational resources). It is required that an active project is specified to enter the queue; however, no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 16 cores per node, are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
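+
+For instance, a sketch of entering the production queue under an active project (PROJECT_ID, node count and walltime are placeholders):
+
+```console
+$ qsub -A PROJECT_ID -q qprod -l select=10:ncpus=16,walltime=24:00:00 ./jobscript
+```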
 
 ### Notes
 
diff --git a/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md b/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md
index bffc71a5010d2b5ec60645b3e3a1a3ce91d298cd..0256b6990d59cbf43a654d3d1a10019beb969db6 100644
--- a/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md
+++ b/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md
@@ -198,9 +198,9 @@ Now, configure the applications proxy settings to **localhost:6000**. Use port f
 
 ## Graphical User Interface
 
-*   The [X Window system](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) is a principal way to get GUI access to the clusters.
-*   The [Virtual Network Computing](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer).
+* The [X Window system](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) is the principal way to get GUI access to the clusters.
+* The [Virtual Network Computing](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer).
 
 ## VPN Access
 
-*   Access to IT4Innovations internal resources via [VPN](../get-started-with-it4innovations/accessing-the-clusters/vpn-access/).
+* Access to IT4Innovations internal resources via [VPN](../get-started-with-it4innovations/accessing-the-clusters/vpn-access/).
diff --git a/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md b/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
index 569f20771197f93a74559037809e38bb606449f0..b05ff3eb30cc82bd3ad187f5fd03ea6e55319374 100644
--- a/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
+++ b/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
@@ -12,10 +12,10 @@ NWChem aims to provide its users with computational chemistry tools that are sca
 
 The following versions are currently installed:
 
-*   6.1.1, not recommended, problems have been observed with this version
-*   6.3-rev2-patch1, current release with QMD patch applied. Compiled with Intel compilers, MKL and Intel MPI
-*   6.3-rev2-patch1-openmpi, same as above, but compiled with OpenMPI and NWChem provided BLAS instead of MKL. This version is expected to be slower
-*   6.3-rev2-patch1-venus, this version contains only libraries for VENUS interface linking. Does not provide standalone NWChem executable
+* 6.1.1, not recommended, problems have been observed with this version
+* 6.3-rev2-patch1, current release with QMD patch applied. Compiled with Intel compilers, MKL and Intel MPI
+* 6.3-rev2-patch1-openmpi, same as above, but compiled with OpenMPI and NWChem provided BLAS instead of MKL. This version is expected to be slower
+* 6.3-rev2-patch1-venus, this version contains only libraries for VENUS interface linking. Does not provide standalone NWChem executable
 
 For a current list of installed versions, execute:
 
@@ -40,5 +40,5 @@ NWChem is compiled for parallel MPI execution. Normal procedure for MPI jobs app
 
 Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and set the following directives in the input file:
 
-*   MEMORY : controls the amount of memory NWChem will use
-*   SCRATCH_DIR : set this to a directory in [SCRATCH file system](../../storage/storage/#scratch) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g.. "scf direct"
+* MEMORY: controls the amount of memory NWChem will use
+* SCRATCH_DIR: set this to a directory in the [SCRATCH file system](../../storage/storage/#scratch) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct"
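+
+A minimal input-file sketch of these directives (the memory size and scratch path are illustrative):
+
+```
+memory total 1000 mb
+scratch_dir /scratch/myusername/nwchem-tmp
+scf
+  direct
+end
+```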
diff --git a/docs.it4i/anselm-cluster-documentation/software/compilers.md b/docs.it4i/anselm-cluster-documentation/software/compilers.md
index d680a4b536a39ec6c6246dc2dbaab3e7d8e097e2..d1e59f29fd5c7862e8ad28780c1355ba837f8da1 100644
--- a/docs.it4i/anselm-cluster-documentation/software/compilers.md
+++ b/docs.it4i/anselm-cluster-documentation/software/compilers.md
@@ -4,11 +4,11 @@
 
 Currently there are several compilers for different programming languages available on the Anselm cluster:
 
-*   C/C++
-*   Fortran 77/90/95
-*   Unified Parallel C
-*   Java
-*   NVIDIA CUDA
+* C/C++
+* Fortran 77/90/95
+* Unified Parallel C
+* Java
+* NVIDIA CUDA
 
 The C/C++ and Fortran compilers are divided into two main groups, GNU and Intel.
 
@@ -45,8 +45,8 @@ For more information about the possibilities of the compilers, please see the ma
 
 UPC is supported by two compiler/runtime implementations:
 
-*   GNU - SMP/multi-threading support only
-*   Berkley - multi-node support as well as SMP/multi-threading support
+* GNU - SMP/multi-threading support only
+* Berkeley - multi-node support as well as SMP/multi-threading support
 
 ### GNU UPC Compiler
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md b/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
index dcb5a8f6760d2a3064da640ca16f9f543c95963b..a8cb53cce46a7caef11dc886f34dae875edbce9a 100644
--- a/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
+++ b/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
@@ -6,11 +6,11 @@
 standard engineering problems COMSOL provides add-on products such as electrical, mechanical, fluid flow, and chemical
 applications.
 
-*   [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
-*   [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
-*   [CFD Module](http://www.comsol.com/cfd-module),
-*   [Acoustics Module](http://www.comsol.com/acoustics-module),
-*   and [many others](http://www.comsol.com/products)
+* [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
+* [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
+* [CFD Module](http://www.comsol.com/cfd-module),
+* [Acoustics Module](http://www.comsol.com/acoustics-module),
+* and [many others](http://www.comsol.com/products)
 
 COMSOL also provides interface support for equation-based modelling of partial differential equations.
 
@@ -18,8 +18,8 @@ COMSOL also allows an interface support for equation-based modelling of partial
 
 On the Anselm cluster COMSOL is available in the latest stable version. There are two variants of the release:
 
-*   **Non commercial** or so called **EDU variant**, which can be used for research and educational purposes.
-*   **Commercial** or so called **COM variant**, which can used also for commercial activities. **COM variant** has only subset of features compared to the **EDU variant** available. More about licensing will be posted here soon.
+* **Non-commercial** or so-called **EDU variant**, which can be used for research and educational purposes.
+* **Commercial** or so-called **COM variant**, which can also be used for commercial activities. The **COM variant** has only a subset of the features available in the **EDU variant**. More about licensing will be posted here soon.
 
 To load COMSOL, load the module
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
index 7c581fd14d872d0b81694de8d05b663c26a678a8..27863605d1e6578a4cc1352a6ddd2a66fe6db908 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
@@ -10,13 +10,13 @@ Allinea MAP is a profiler for C/C++/Fortran HPC codes. It is designed for profil
 
 On Anselm users can debug OpenMP or MPI code that runs up to 64 parallel processes. In case of debugging GPU or Xeon Phi accelerated codes, the limit is 8 accelerators. These limitations mean that:
 
-*   1 user can debug up 64 processes, or
-*   32 users can debug 2 processes, etc.
+* 1 user can debug up to 64 processes, or
+* 32 users can debug 2 processes each, etc.
 
 In case of debugging on accelerators:
 
-*   1 user can debug on up to 8 accelerators, or
-*   8 users can debug on single accelerator.
+* 1 user can debug on up to 8 accelerators, or
+* 8 users can debug on a single accelerator.
 
 ## Compiling Code to Run With DDT
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md
index 94646c3ea0e5a6f07f478b6fd926c971a2b0a0e6..2da4dc1caa4bcb8bf351d4406dfe6e68df195c44 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md
@@ -4,9 +4,9 @@
 
 CUBE is a graphical performance report explorer for displaying data from Score-P and Scalasca (and other compatible tools). The name comes from the fact that it displays performance data in three dimensions:
 
-*   **performance metric**, where a number of metrics are available, such as communication time or cache misses,
-*   **call path**, which contains the call tree of your program
-*   **system resource**, which contains system's nodes, processes and threads, depending on the parallel programming model.
+* **performance metric**, where a number of metrics are available, such as communication time or cache misses,
+* **call path**, which contains the call tree of your program,
+* **system resource**, which contains the system's nodes, processes and threads, depending on the parallel programming model.
 
 Each dimension is organized in a tree; for example, the time performance metric is divided into Execution time and Overhead time, and the call path dimension is organized by the files and routines in your source code.
 
@@ -20,8 +20,8 @@ Each node in the tree is colored by severity (the color scheme is displayed at t
 
 Currently, there are two versions of CUBE 4.2.3 available as [modules](../../environment-and-modules/):
 
-*   cube/4.2.3-gcc, compiled with GCC
-*   cube/4.2.3-icc, compiled with Intel compiler
+* cube/4.2.3-gcc, compiled with GCC
+* cube/4.2.3-icc, compiled with Intel compiler
 
 ## Usage
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
index 26676323ea7b9b8e1de370dde0fbf625b94aa862..e9921046dd13f4b3b3b345f2666b426f2bd5ca9c 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
@@ -4,11 +4,11 @@
 
 Intel VTune Amplifier, part of Intel Parallel studio, is a GUI profiling tool designed for Intel processors. It offers a graphical performance analysis of single core and multithreaded applications. A highlight of the features:
 
-*   Hotspot analysis
-*   Locks and waits analysis
-*   Low level specific counters, such as branch analysis and memory
+* Hotspot analysis
+* Locks and waits analysis
+* Low level specific counters, such as branch analysis and memory
     bandwidth
-*   Power usage analysis - frequency and sleep states.
+* Power usage analysis - frequency and sleep states.
 
 ![screenshot](../../../img/vtune-amplifier.png)
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md
index 7daec0b68f13103379e992e20c825b73667940b0..4326aa87ea6b2d0c5efcfb85d46794607303ac4a 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md
@@ -76,15 +76,15 @@ Prints information about the memory architecture of the current CPU.
 
 PAPI provides two kinds of events:
 
-*   **Preset events** is a set of predefined common CPU events, standardized across platforms.
-*   **Native events **is a set of all events supported by the current hardware. This is a larger set of features than preset. For other components than CPU, only native events are usually available.
+* **Preset events** is a set of predefined common CPU events, standardized across platforms.
+* **Native events** is a set of all events supported by the current hardware. This is a larger set than the preset events. For components other than the CPU, usually only native events are available.
 
 To use PAPI in your application, you need to include the appropriate header file.
 
-*   papi.h for C
-*   f77papi.h for Fortran 77
-*   f90papi.h for Fortran 90
-*   fpapi.h for Fortran with preprocessor
+* papi.h for C
+* f77papi.h for Fortran 77
+* f90papi.h for Fortran 90
+* fpapi.h for Fortran with preprocessor
 
 The include path is automatically added by the papi module to $INCLUDE.
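+
+A minimal C sketch using a preset event via PAPI's high-level counter interface (error handling omitted for brevity):
+
+```cpp
+#include <stdio.h>
+#include <papi.h>
+
+int main(void) {
+    int events[1] = { PAPI_TOT_CYC };   /* preset event: total cycles */
+    long long values[1];
+
+    PAPI_start_counters(events, 1);     /* start counting */
+    /* ... code to be measured ... */
+    PAPI_stop_counters(values, 1);      /* stop counting and read the counter */
+
+    printf("Total cycles: %lld\n", values[0]);
+    return 0;
+}
+```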
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md
index 6e00b033a75227bbed053d795bf9ea9f975f71bd..e14f18f9bfdb6f21acea338fabaff2add8343588 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md
@@ -10,8 +10,8 @@ Scalasca supports profiling of MPI, OpenMP and hybrid MPI+OpenMP applications.
 
 There are currently two versions of Scalasca 2.0 [modules](../../environment-and-modules/) installed on Anselm:
 
-*   scalasca2/2.0-gcc-openmpi, for usage with [GNU Compiler](../compilers/) and [OpenMPI](../mpi/Running_OpenMPI/),
-*   scalasca2/2.0-icc-impi, for usage with [Intel Compiler](../compilers.html) and [Intel MPI](../mpi/running-mpich2/).
+* scalasca2/2.0-gcc-openmpi, for usage with [GNU Compiler](../compilers/) and [OpenMPI](../mpi/Running_OpenMPI/),
+* scalasca2/2.0-icc-impi, for usage with [Intel Compiler](../compilers.html) and [Intel MPI](../mpi/running-mpich2/).
 
 ## Usage
 
@@ -39,8 +39,8 @@ An example :
 
 Some notable Scalasca options are:
 
-*   **-t Enable trace data collection. By default, only summary data are collected.**
-*   **-e &lt;directory> Specify a directory to save the collected data to. By default, Scalasca saves the data to a directory with prefix scorep\_, followed by name of the executable and launch configuration.**
+* **-t** Enable trace data collection. By default, only summary data are collected.
+* **-e <directory>** Specify a directory to save the collected data to. By default, Scalasca saves the data to a directory with the prefix scorep\_, followed by the name of the executable and the launch configuration.
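+
+For example, a sketch of a traced measurement run (the executable name and process count are placeholders):
+
+```console
+$ scalasca -analyze -t -e my_experiment mpirun -np 16 ./mympiprog.x
+```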
 
 !!! note
     Scalasca can generate a huge amount of data, especially if tracing is enabled. Please consider saving the data to a [scratch directory](../../storage/storage/).
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md
index bfc70267de248004cb109d6320be0b8d1a5eabc8..215e8b75069b26d7b1b3508a64afb5fb3f7966c5 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md
@@ -10,8 +10,8 @@ Score-P can be used as an instrumentation tool for [Scalasca](scalasca/).
 
 There are currently two Score-P [modules](../../environment-and-modules/) installed on Anselm:
 
-*   scorep/1.2.3-gcc-openmpi, for usage     with [GNU Compiler](../compilers/) and [OpenMPI](../mpi/Running_OpenMPI/)
-*   scorep/1.2.3-icc-impi, for usage with [Intel Compiler](../compilers.html)> and [Intel MPI](../mpi/running-mpich2/)>.
+* scorep/1.2.3-gcc-openmpi, for usage with [GNU Compiler](../compilers/) and [OpenMPI](../mpi/Running_OpenMPI/)
+* scorep/1.2.3-icc-impi, for usage with [Intel Compiler](../compilers.html) and [Intel MPI](../mpi/running-mpich2/).
 
 ## Instrumentation
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md
index 494ec69a53ab6c72975bfd0f7f4403489ae07526..2602fdbf24c9bdf16503740541ed81c536628b5a 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md
 Valgrind is an extremely useful tool for debugging memory errors such as [off-by-
 
 The main tools available in Valgrind are :
 
-*   **Memcheck**, the original, must used and default tool. Verifies memory access in you program and can detect use of unitialized memory, out of bounds memory access, memory leaks, double free, etc.
-*   **Massif**, a heap profiler.
-*   **Hellgrind** and **DRD** can detect race conditions in multi-threaded applications.
-*   **Cachegrind**, a cache profiler.
-*   **Callgrind**, a callgraph analyzer.
-*   For a full list and detailed documentation, please refer to the [official Valgrind documentation](http://valgrind.org/docs/).
+* **Memcheck**, the original, most used and default tool. Verifies memory access in your program and can detect use of uninitialized memory, out-of-bounds memory access, memory leaks, double free, etc.
+* **Massif**, a heap profiler.
+* **Helgrind** and **DRD** can detect race conditions in multi-threaded applications.
+* **Cachegrind**, a cache profiler.
+* **Callgrind**, a callgraph analyzer.
+* For a full list and detailed documentation, please refer to the [official Valgrind documentation](http://valgrind.org/docs/).
 
 ## Installed Versions
 
 There are two versions of Valgrind available on Anselm.
 
-*   Version 3.6.0, installed by operating system vendor in /usr/bin/valgrind. This version is available by default, without the need to load any module. This version however does not provide additional MPI support.
-*   Version 3.9.0 with support for Intel MPI, available in [module](../../environment-and-modules/) valgrind/3.9.0-impi. After loading the module, this version replaces the default valgrind.
+* Version 3.6.0, installed by the operating system vendor in /usr/bin/valgrind. This version is available by default, without the need to load any module. However, this version does not provide additional MPI support.
+* Version 3.9.0 with support for Intel MPI, available in the [module](../../environment-and-modules/) valgrind/3.9.0-impi. After loading the module, this version replaces the default valgrind.
 
 ## Usage
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md
index df0b0a8a124a2f70d11a2e2adc4eb3d17cf227a0..66de3b77a06d7333464336ada10d68cd3a899aa8 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md
@@ -32,5 +32,5 @@ Read more at <http://software.intel.com/sites/products/documentation/doclib/stdx
 
 Anselm nodes are currently equipped with Sandy Bridge CPUs, while Salomon will use the Haswell architecture. The new processors are backward compatible with the Sandy Bridge nodes, so all programs that ran on the Sandy Bridge processors should also run on the new Haswell nodes. To get optimal performance out of the Haswell processors, a program should make use of the special AVX2 instructions for this processor. One can do this by recompiling codes with the compiler flags designated to invoke these instructions. For the Intel compiler suite, there are two ways of doing this:
 
-*   Using compiler flag (both for Fortran and C): -xCORE-AVX2. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge nodes.
-*   Using compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This will generate multiple, feature specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. So this binary will run both on Sandy Bridge and Haswell processors. During runtime it will be decided which path to follow, dependent on which processor you are running on. In general this will result in larger binaries.
+* Using the compiler flag (both for Fortran and C): -xCORE-AVX2. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge nodes.
+* Using the compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This will generate multiple, feature-specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. This binary will therefore run on both Sandy Bridge and Haswell processors. At runtime it will be decided which path to follow, depending on which processor you are running on. In general this will result in larger binaries.
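+
+A compile-line sketch of the second approach (the source file name is a placeholder):
+
+```console
+$ icc -xAVX -axCORE-AVX2 -O2 -o myprog myprog.c
+```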
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md
index 51aef698b0d7422a151978a31934645fff3883d6..aed92ae69da6f721f676fa5e4180945711fe5fba 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md
@@ -4,14 +4,14 @@
 
 Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, extensively threaded and optimized for maximum performance. Intel MKL provides these basic math kernels:
 
-*   BLAS (level 1, 2, and 3) and LAPACK linear algebra routines, offering vector, vector-matrix, and matrix-matrix operations.
-*   The PARDISO direct sparse solver, an iterative sparse solver, and supporting sparse BLAS (level 1, 2, and 3) routines for solving sparse systems of equations.
-*   ScaLAPACK distributed processing linear algebra routines for Linux and Windows operating systems, as well as the Basic Linear Algebra Communications Subprograms (BLACS) and the Parallel Basic Linear Algebra Subprograms (PBLAS).
-*   Fast Fourier transform (FFT) functions in one, two, or three dimensions with support for mixed radices (not limited to sizes that are powers of 2), as well as distributed versions of these functions.
-*   Vector Math Library (VML) routines for optimized mathematical operations on vectors.
-*   Vector Statistical Library (VSL) routines, which offer high-performance vectorized random number generators (RNG) for    several probability distributions, convolution and correlation routines, and summary statistics functions.
-*   Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search.
-*   Extended Eigensolver, a shared memory version of an eigensolver based on the Feast Eigenvalue Solver.
+* BLAS (level 1, 2, and 3) and LAPACK linear algebra routines, offering vector, vector-matrix, and matrix-matrix operations.
+* The PARDISO direct sparse solver, an iterative sparse solver, and supporting sparse BLAS (level 1, 2, and 3) routines for solving sparse systems of equations.
+* ScaLAPACK distributed processing linear algebra routines for Linux and Windows operating systems, as well as the Basic Linear Algebra Communications Subprograms (BLACS) and the Parallel Basic Linear Algebra Subprograms (PBLAS).
+* Fast Fourier transform (FFT) functions in one, two, or three dimensions with support for mixed radices (not limited to sizes that are powers of 2), as well as distributed versions of these functions.
+* Vector Math Library (VML) routines for optimized mathematical operations on vectors.
+* Vector Statistical Library (VSL) routines, which offer high-performance vectorized random number generators (RNG) for several probability distributions, convolution and correlation routines, and summary statistics functions.
+* Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search.
+* Extended Eigensolver, a shared memory version of an eigensolver based on the Feast Eigenvalue Solver.
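+
+A minimal compile-and-link sketch using the Intel compiler's -mkl convenience flag, which links the threaded MKL by default (the source file name is a placeholder):
+
+```console
+$ icc -mkl myprog.c -o myprog
+```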
 
 For details see the [Intel MKL Reference Manual](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mklman/index.htm).
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md b/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
index bd5ba63afebaa85904bdd7a798047a4e434870bf..3e9dca22e7f13f5b6677465eca1f3aeeef44e412 100644
--- a/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
+++ b/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
@@ -61,21 +61,21 @@ The general format of the name is `feature__APP__FEATURE`.
 
 Names of applications (APP):
 
-*   ansys
-*   comsol
-*   comsol-edu
-*   matlab
-*   matlab-edu
+* ansys
+* comsol
+* comsol-edu
+* matlab
+* matlab-edu
 
 To get the FEATUREs of a license, take a look into the corresponding state file ([see above](isv_licenses/#Licence)), or use:
 
 **Application and List of provided features**
 
-*   **ansys** $ grep -v "#" /apps/user/licenses/ansys_features_state.txt | cut -f1 -d' '
-*   **comsol** $ grep -v "#" /apps/user/licenses/comsol_features_state.txt | cut -f1 -d' '
-*   **comsol-ed** $ grep -v "#" /apps/user/licenses/comsol-edu_features_state.txt | cut -f1 -d' '
-*   **matlab** $ grep -v "#" /apps/user/licenses/matlab_features_state.txt | cut -f1 -d' '
-*   **matlab-edu** $ grep -v "#" /apps/user/licenses/matlab-edu_features_state.txt | cut -f1 -d' '
+* **ansys** $ grep -v "#" /apps/user/licenses/ansys_features_state.txt | cut -f1 -d' '
+* **comsol** $ grep -v "#" /apps/user/licenses/comsol_features_state.txt | cut -f1 -d' '
+* **comsol-edu** $ grep -v "#" /apps/user/licenses/comsol-edu_features_state.txt | cut -f1 -d' '
+* **matlab** $ grep -v "#" /apps/user/licenses/matlab_features_state.txt | cut -f1 -d' '
+* **matlab-edu** $ grep -v "#" /apps/user/licenses/matlab-edu_features_state.txt | cut -f1 -d' '
 
 Example of PBS Pro resource name, based on APP and FEATURE name:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md b/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md
index 6b00a78177d385e0d0532a9b7a6db1f7b2edeaad..d35fc3a958d3ec23e81d5a196373921ade99af2b 100644
--- a/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md
+++ b/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md
@@ -6,11 +6,11 @@ Running virtual machines on compute nodes
 
 There are situations when Anselm's environment is not suitable for user needs.
 
-*   Application requires different operating system (e.g Windows), application is not available for Linux
-*   Application requires different versions of base system libraries and tools
-*   Application requires specific setup (installation, configuration) of complex software stack
-*   Application requires privileged access to operating system
-*   ... and combinations of above cases
+* Application requires a different operating system (e.g. Windows), or the application is not available for Linux
+* Application requires different versions of base system libraries and tools
+* Application requires specific setup (installation, configuration) of a complex software stack
+* Application requires privileged access to the operating system
+* ... and combinations of the above cases
 
 We offer a solution for these cases - **virtualization**. Anselm's environment makes it possible to run virtual machines on compute nodes. Users can create their own operating system images with a specific software stack and run instances of these images as virtual machines on compute nodes. Running virtual machines is provided by the standard mechanism of [Resource Allocation and Job Execution](../../resource-allocation-and-job-execution/introduction/).
 
@@ -65,13 +65,13 @@ You can either use your existing image or create new image from scratch.
 
 QEMU currently supports these image types or formats:
 
-*   raw
-*   cloop
-*   cow
-*   qcow
-*   qcow2
-*   vmdk - VMware 3 & 4, or 6 image format, for exchanging images with that product
-*   vdi - VirtualBox 1.1 compatible image format, for exchanging images with VirtualBox.
+* raw
+* cloop
+* cow
+* qcow
+* qcow2
+* vmdk - VMware 3 & 4, or 6 image format, for exchanging images with that product
+* vdi - VirtualBox 1.1 compatible image format, for exchanging images with VirtualBox.
 
 You can convert your existing image using the qemu-img convert command. Supported formats of this command are: blkdebug blkverify bochs cloop cow dmg file ftp ftps host_cdrom host_device host_floppy http https nbd parallels qcow qcow2 qed raw sheepdog tftp vdi vhdx vmdk vpc vvfat.
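+
+For example, a sketch of converting a VMware image to qcow2 (file names are placeholders):
+
+```console
+$ qemu-img convert -f vmdk -O qcow2 image.vmdk image.qcow2
+```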
 
@@ -97,10 +97,10 @@ Your image should run some kind of operating system startup script. Startup scri
 
 We recommend that the startup script
 
-*   maps Job Directory from host (from compute node)
-*   runs script (we call it "run script") from Job Directory and waits for application's exit
+* maps the Job Directory from the host (from the compute node)
+* runs a script (we call it the "run script") from the Job Directory and waits for the application's exit
   * for management purposes, if the run script does not exist, wait for some time period (a few minutes)
-*   shutdowns/quits OS
+* shuts down / quits the OS
 
 For Windows operating systems we suggest using a Local Group Policy Startup script; for Linux operating systems, rc.local, a runlevel init script or a similar service.
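+
+A minimal rc.local-style sketch of this flow for a Linux guest (the mount point and paths are illustrative and depend on how the Job Directory is exported to the guest):
+
+```console
+mount /mnt/jobdir                      # map the Job Directory from the host
+if [ -x /mnt/jobdir/run.sh ]; then
+    /mnt/jobdir/run.sh                 # run script exists: run it and wait for the application's exit
+else
+    sleep 300                          # no run script: wait a few minutes for management access
+fi
+poweroff                               # shut down the guest OS
+```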
 
@@ -338,9 +338,9 @@ Interface tap0 has IP address 192.168.1.1 and network mask 255.255.255.0 (/24).
 
 Redirected ports:
 
-*   DNS udp/53->udp/3053, tcp/53->tcp3053
-*   DHCP udp/67->udp3067
-*   SMB tcp/139->tcp3139, tcp/445->tcp3445).
+* DNS udp/53 -> udp/3053, tcp/53 -> tcp/3053
+* DHCP udp/67 -> udp/3067
+* SMB tcp/139 -> tcp/3139, tcp/445 -> tcp/3445
 
 You can configure the IP address of the virtual machine statically or dynamically. For dynamic addressing, provide your DHCP server on port 3067 of the tap0 interface; you can also provide your DNS server on port 3053 of the tap0 interface, for example:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
index 992caf8f55f3279277c752110c585664782cbff2..10126cc5d864f5d00a8319c88dedd9ee402f226f 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
@@ -4,8 +4,8 @@
 
 Matlab is available in versions R2015a and R2015b. There are always two variants of the release:
 
-*   Non commercial or so called EDU variant, which can be used for common research and educational purposes.
-*   Commercial or so called COM variant, which can used also for commercial activities. The licenses for commercial variant are much more expensive, so usually the commercial variant has only subset of features compared to the EDU available.
+* Non-commercial or so-called EDU variant, which can be used for common research and educational purposes.
+* Commercial or so-called COM variant, which can also be used for commercial activities. The licenses for the commercial variant are much more expensive, so usually the commercial variant has only a subset of features compared to the EDU variant.
 
 To load the latest version of Matlab, load the module
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md
index 312a1f09490e1937c4fb369c713c9e29d8cda16f..b4d883071284ef6ec8daa2f2bf936cd268d2a333 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md
@@ -7,8 +7,8 @@
 
 Matlab is available in the latest stable version. There are always two variants of the release:
 
-*   Non commercial or so called EDU variant, which can be used for common research and educational purposes.
-*   Commercial or so called COM variant, which can used also for commercial activities. The licenses for commercial variant are much more expensive, so usually the commercial variant has only subset of features compared to the EDU available.
+* Non-commercial or so-called EDU variant, which can be used for common research and educational purposes.
+* Commercial or so-called COM variant, which can also be used for commercial activities. The licenses for the commercial variant are much more expensive, so usually the commercial variant has only a subset of features compared to the EDU variant.
 
 To load the latest version of Matlab, load the module
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
index 038de8aade954aa089d5e2878ef733861fde8fea..eb82371f76c7b6a7ea59e3b83fbec4a3dd36a083 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
@@ -90,8 +90,8 @@ In this example, the calculation was automatically divided among the CPU cores a
 
 A version of [native](../intel-xeon-phi/#section-4) Octave is compiled for Xeon Phi accelerators. Some limitations apply for this version:
 
-*   Only command line support. GUI, graph plotting etc. is not supported.
-*   Command history in interactive mode is not supported.
+* Only command line support; GUI, graph plotting etc. are not supported.
+* Command history in interactive mode is not supported.
 
 Octave is linked with the parallel Intel MKL, so it is best suited for batch processing of tasks that utilize BLAS, LAPACK and FFT operations. By default, the number of threads is set to 120; you can control this with the OMP_NUM_THREADS environment variable.
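+
+For example (a sketch; choose a thread count matching your allocation):
+
+```console
+$ export OMP_NUM_THREADS=16
+$ octave script.m
+```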
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md
index c44e9122429366fd989c63be1736475bba53ca8e..df3b58a0f246e3408f6eaf618f39177db325b359 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md
@@ -8,11 +8,11 @@ PETSc (Portable, Extensible Toolkit for Scientific Computation) is a suite of bu
 
 ## Resources
 
-*   [project webpage](http://www.mcs.anl.gov/petsc/)
-*   [documentation](http://www.mcs.anl.gov/petsc/documentation/)
+* [project webpage](http://www.mcs.anl.gov/petsc/)
+* [documentation](http://www.mcs.anl.gov/petsc/documentation/)
    * [PETSc Users Manual (PDF)](http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf)
    * [index of all manual pages](http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html)
-*   PRACE Video Tutorial [part1](http://www.youtube.com/watch?v=asVaFg1NDqY), [part2](http://www.youtube.com/watch?v=ubp_cSibb9I), [part3](http://www.youtube.com/watch?v=vJAAAQv-aaw), [part4](http://www.youtube.com/watch?v=BKVlqWNh8jY), [part5](http://www.youtube.com/watch?v=iXkbLEBFjlM)
+* PRACE Video Tutorial [part1](http://www.youtube.com/watch?v=asVaFg1NDqY), [part2](http://www.youtube.com/watch?v=ubp_cSibb9I), [part3](http://www.youtube.com/watch?v=vJAAAQv-aaw), [part4](http://www.youtube.com/watch?v=BKVlqWNh8jY), [part5](http://www.youtube.com/watch?v=iXkbLEBFjlM)
 
 ## Modules
 
@@ -36,25 +36,25 @@ All these libraries can be used also alone, without PETSc. Their static or share
 
 ### Libraries Linked to PETSc on Anselm (As of 11 April 2015)
 
-*   dense linear algebra
+* dense linear algebra
    * [Elemental](http://libelemental.org/)
-*   sparse linear system solvers
+* sparse linear system solvers
    * [Intel MKL Pardiso](https://software.intel.com/en-us/node/470282)
    * [MUMPS](http://mumps.enseeiht.fr/)
    * [PaStiX](http://pastix.gforge.inria.fr/)
    * [SuiteSparse](http://faculty.cse.tamu.edu/davis/suitesparse.html)
    * [SuperLU](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu)
    * [SuperLU_Dist](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu_dist)
-*   input/output
+* input/output
    * [ExodusII](http://sourceforge.net/projects/exodusii/)
    * [HDF5](http://www.hdfgroup.org/HDF5/)
    * [NetCDF](http://www.unidata.ucar.edu/software/netcdf/)
-*   partitioning
+* partitioning
    * [Chaco](http://www.cs.sandia.gov/CRF/chac.html)
    * [METIS](http://glaros.dtc.umn.edu/gkhome/metis/metis/overview)
    * [ParMETIS](http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview)
    * [PT-Scotch](http://www.labri.fr/perso/pelegrin/scotch/)
-*   preconditioners & multigrid
+* preconditioners & multigrid
    * [Hypre](http://www.nersc.gov/users/software/programming-libraries/math-libraries/petsc/)
    * [Trilinos ML](http://trilinos.sandia.gov/packages/ml/)
    * [SPAI - Sparse Approximate Inverse](https://bitbucket.org/petsc/pkg-spai)
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md
index dbf7f01a5ec323eef138a163a2a89034d814065a..bf2ffe6ece8a9114bc617810f7bdaab600c340a4 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md
@@ -10,13 +10,13 @@ Trilinos is a collection of software packages for the numerical solution of larg
 
 The current Trilinos installation on ANSELM contains (among others) the following main packages:
 
-*   **Epetra** - core linear algebra package containing classes for manipulation with serial and distributed vectors, matrices, and graphs. Dense linear solvers are supported via interface to BLAS and LAPACK (Intel MKL on ANSELM). Its extension **EpetraExt** contains e.g. methods for matrix-matrix multiplication.
-*   **Tpetra** - next-generation linear algebra package. Supports 64-bit indexing and arbitrary data type using C++ templates.
-*   **Belos** - library of various iterative solvers (CG, block CG, GMRES, block GMRES etc.).
-*   **Amesos** - interface to direct sparse solvers.
-*   **Anasazi** - framework for large-scale eigenvalue algorithms.
-*   **IFPACK** - distributed algebraic preconditioner (includes e.g. incomplete LU factorization)
-*   **Teuchos** - common tools packages. This package contains classes for memory management, output, performance monitoring, BLAS and LAPACK wrappers etc.
+* **Epetra** - core linear algebra package containing classes for manipulation with serial and distributed vectors, matrices, and graphs. Dense linear solvers are supported via interface to BLAS and LAPACK (Intel MKL on ANSELM). Its extension **EpetraExt** contains e.g. methods for matrix-matrix multiplication.
+* **Tpetra** - next-generation linear algebra package. Supports 64-bit indexing and arbitrary data type using C++ templates.
+* **Belos** - library of various iterative solvers (CG, block CG, GMRES, block GMRES etc.).
+* **Amesos** - interface to direct sparse solvers.
+* **Anasazi** - framework for large-scale eigenvalue algorithms.
+* **IFPACK** - distributed algebraic preconditioner (includes e.g. incomplete LU factorization)
+* **Teuchos** - common tools packages. This package contains classes for memory management, output, performance monitoring, BLAS and LAPACK wrappers etc.
 
 For the full list of Trilinos packages, descriptions of their capabilities, and user manuals see [http://trilinos.sandia.gov](http://trilinos.sandia.gov).
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md b/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md
index ee218d0fc26a7b8ec24922a185c4bfc6720371ef..d8c39ed83de7845d45d95741433cba64021a16e4 100644
--- a/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md
+++ b/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md
@@ -95,12 +95,12 @@ BAM is the binary representation of SAM and keeps exactly the same information a
 
 Some features
 
-*   Quality control
+* Quality control
    * reads with N errors
    * reads with multiple mappings
    * strand bias
    * paired-end insert
-*   Filtering: by number of errors, number of hits
+* Filtering: by number of errors, number of hits
    * Comparator: stats, intersection, ...
 
 **Input:** BAM file.
@@ -290,47 +290,47 @@ If we want to re-launch the pipeline from stage 4 until stage 20 we should use t
 
 The pipeline calls the following tools
 
-*   [fastqc](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/), quality control tool for high throughput sequence data.
-*   [gatk](https://www.broadinstitute.org/gatk/), The Genome Analysis Toolkit or GATK is a software package developed at
+* [fastqc](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/), quality control tool for high throughput sequence data.
+* [gatk](https://www.broadinstitute.org/gatk/), The Genome Analysis Toolkit or GATK is a software package developed at
       the Broad Institute to analyze high-throughput sequencing data. The toolkit offers a wide variety of tools, with a primary focus on variant discovery and genotyping as well as strong emphasis on data quality assurance. Its robust architecture, powerful processing engine and high-performance computing features make it capable of taking on projects of any size.
-*   [hpg-aligner](https://github.com/opencb-hpg/hpg-aligner), HPG Aligner has been designed to align short and long reads with high sensitivity, therefore any number of mismatches or indels are allowed. HPG Aligner implements and combines two well known algorithms: _Burrows-Wheeler Transform_ (BWT) to speed-up mapping high-quality reads, and _Smith-Waterman_> (SW) to increase sensitivity when reads cannot be mapped using BWT.
-*   [hpg-fastq](http://docs.bioinfo.cipf.es/projects/fastqhpc/wiki), a quality control tool for high throughput sequence data.
-*   [hpg-variant](http://docs.bioinfo.cipf.es/projects/hpg-variant/wiki), The HPG Variant suite is an ambitious project aimed to provide a complete suite of tools to work with genomic variation data, from VCF tools to variant profiling or genomic statistics. It is being implemented using High Performance Computing technologies to provide the best performance possible.
-*   [picard](http://picard.sourceforge.net/), Picard comprises Java-based command-line utilities that manipulate SAM files, and a Java API (HTSJDK) for creating new programs that read and write SAM files. Both SAM text format and SAM binary (BAM) format are supported.
-*   [samtools](http://samtools.sourceforge.net/samtools-c.shtml), SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.
-*   [snpEff](http://snpeff.sourceforge.net/), Genetic variant annotation and effect prediction toolbox.
+* [hpg-aligner](https://github.com/opencb-hpg/hpg-aligner), HPG Aligner has been designed to align short and long reads with high sensitivity, therefore any number of mismatches or indels are allowed. HPG Aligner implements and combines two well known algorithms: _Burrows-Wheeler Transform_ (BWT) to speed-up mapping high-quality reads, and _Smith-Waterman_ (SW) to increase sensitivity when reads cannot be mapped using BWT.
+* [hpg-fastq](http://docs.bioinfo.cipf.es/projects/fastqhpc/wiki), a quality control tool for high throughput sequence data.
+* [hpg-variant](http://docs.bioinfo.cipf.es/projects/hpg-variant/wiki), The HPG Variant suite is an ambitious project aimed to provide a complete suite of tools to work with genomic variation data, from VCF tools to variant profiling or genomic statistics. It is being implemented using High Performance Computing technologies to provide the best performance possible.
+* [picard](http://picard.sourceforge.net/), Picard comprises Java-based command-line utilities that manipulate SAM files, and a Java API (HTSJDK) for creating new programs that read and write SAM files. Both SAM text format and SAM binary (BAM) format are supported.
+* [samtools](http://samtools.sourceforge.net/samtools-c.shtml), SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.
+* [snpEff](http://snpeff.sourceforge.net/), Genetic variant annotation and effect prediction toolbox.
 
 This listing shows which tools are used in each step of the pipeline:
 
-*   stage-00: fastqc
-*   stage-01: hpg_fastq
-*   stage-02: fastqc
-*   stage-03: hpg_aligner and samtools
-*   stage-04: samtools
-*   stage-05: samtools
-*   stage-06: fastqc
-*   stage-07: picard
-*   stage-08: fastqc
-*   stage-09: picard
-*   stage-10: gatk
-*   stage-11: gatk
-*   stage-12: gatk
-*   stage-13: gatk
-*   stage-14: gatk
-*   stage-15: gatk
-*   stage-16: samtools
-*   stage-17: samtools
-*   stage-18: fastqc
-*   stage-19: gatk
-*   stage-20: gatk
-*   stage-21: gatk
-*   stage-22: gatk
-*   stage-23: gatk
-*   stage-24: hpg-variant
-*   stage-25: hpg-variant
-*   stage-26: snpEff
-*   stage-27: snpEff
-*   stage-28: hpg-variant
+* stage-00: fastqc
+* stage-01: hpg_fastq
+* stage-02: fastqc
+* stage-03: hpg_aligner and samtools
+* stage-04: samtools
+* stage-05: samtools
+* stage-06: fastqc
+* stage-07: picard
+* stage-08: fastqc
+* stage-09: picard
+* stage-10: gatk
+* stage-11: gatk
+* stage-12: gatk
+* stage-13: gatk
+* stage-14: gatk
+* stage-15: gatk
+* stage-16: samtools
+* stage-17: samtools
+* stage-18: fastqc
+* stage-19: gatk
+* stage-20: gatk
+* stage-21: gatk
+* stage-22: gatk
+* stage-23: gatk
+* stage-24: hpg-variant
+* stage-25: hpg-variant
+* stage-26: snpEff
+* stage-27: snpEff
+* stage-28: hpg-variant
 
 ## Interpretation
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/openfoam.md b/docs.it4i/anselm-cluster-documentation/software/openfoam.md
index e3509febc4636fb0b4058f1cac6ee510db3f5f14..d7394a6e9282c42fb2ab1dc07ce3395f475dd645 100644
--- a/docs.it4i/anselm-cluster-documentation/software/openfoam.md
+++ b/docs.it4i/anselm-cluster-documentation/software/openfoam.md
@@ -22,10 +22,10 @@ Naming convention of the installed versions is as follows:
 
 openfoam\<VERSION\>-\<COMPILER\>\<openmpiVERSION\>-\<PRECISION\>
 
-*   \<VERSION\> - version of openfoam
-*   \<COMPILER\> - version of used compiler
-*   \<openmpiVERSION\> - version of used openmpi/impi
-*   \<PRECISION\> - DP/SP – double/single precision
+* \<VERSION\> - OpenFOAM version
+* \<COMPILER\> - version of the compiler used
+* \<openmpiVERSION\> - version of the openmpi/impi used
+* \<PRECISION\> - DP/SP – double/single precision (an example decoding is given below)
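+
+For illustration, a module name following this convention might decompose as below (a sketch only; the name is hypothetical, see the list of actually installed modules in the next section):
+
+```bash
+    # hypothetical module name decoded per the convention above:
+    #   openfoam2.2.1-icc-openmpi1.6.5-DP
+    #     2.2.1        ... <VERSION>        (OpenFOAM version)
+    #     icc          ... <COMPILER>       (Intel compiler)
+    #     openmpi1.6.5 ... <openmpiVERSION> (MPI flavour and version)
+    #     DP           ... <PRECISION>      (double precision build)
+    module load openfoam2.2.1-icc-openmpi1.6.5-DP
+```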
 
 ### Available OpenFOAM Modules
 
diff --git a/docs.it4i/anselm-cluster-documentation/storage.md b/docs.it4i/anselm-cluster-documentation/storage.md
index 0c8f59e2a7f034815c90cb26d734c2eb858ed5c6..f99b9f11a4159da704a78da3d13fcf3134b7a7f4 100644
--- a/docs.it4i/anselm-cluster-documentation/storage.md
+++ b/docs.it4i/anselm-cluster-documentation/storage.md
@@ -80,19 +80,19 @@ The architecture of Lustre on Anselm is composed of two metadata servers (MDS) a
 
  Configuration of the storage systems:
 
-*   HOME Lustre object storage
+* HOME Lustre object storage
    * One disk array NetApp E5400
    * 22 OSTs
    * 227 2TB NL-SAS 7.2krpm disks
    * 22 groups of 10 disks in RAID6 (8+2)
    * 7 hot-spare disks
-*   SCRATCH Lustre object storage
+* SCRATCH Lustre object storage
    * Two disk arrays NetApp E5400
    * 10 OSTs
    * 106 2TB NL-SAS 7.2krpm disks
    * 10 groups of 10 disks in RAID6 (8+2)
    * 6 hot-spare disks
-*   Lustre metadata storage
+* Lustre metadata storage
    * One disk array NetApp E2600
    * 12 300GB SAS 15krpm disks
    * 2 groups of 5 disks in RAID5
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md
index e4faa9fc2bddc63c4d708483e5960e52c13145ec..7a4d63ed99a12aa345a37d6afbe65a1e8d1f459d 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md
@@ -15,43 +15,43 @@ We recommend you to download "**A Windows installer for everything except PuTTYt
 
 ## PuTTY - How to Connect to the IT4Innovations Cluster
 
-*   Run PuTTY
-*   Enter Host name and Save session fields with [Login address](../../../salomon/shell-and-data-access.md) and browse Connection -  SSH - Auth menu. The _Host Name_ input may be in the format **"username@clustername.it4i.cz"** so you don't have to type your login each time.In this example we will connect to the Salomon cluster using **"salomon.it4i.cz"**.
+* Run PuTTY
+* Enter the Host name and Save session fields with the [Login address](../../../salomon/shell-and-data-access.md) and browse the Connection - SSH - Auth menu. The _Host Name_ input may be in the format **"username@clustername.it4i.cz"** so you don't have to type your login each time. In this example we will connect to the Salomon cluster using **"salomon.it4i.cz"**.
 
 ![](../../../img/PuTTY_host_Salomon.png)
 
-*   Category - Connection -  SSH - Auth:
+* Category - Connection - SSH - Auth:
       Select Attempt authentication using Pageant.
       Select Allow agent forwarding.
       Browse and select your [private key](ssh-keys/) file.
 
 ![](../../../img/PuTTY_keyV.png)
 
-*   Return to Session page and Save selected configuration with _Save_ button.
+* Return to Session page and Save selected configuration with _Save_ button.
 
 ![](../../../img/PuTTY_save_Salomon.png)
 
-*   Now you can log in using _Open_ button.
+* Now you can log in using _Open_ button.
 
 ![](../../../img/PuTTY_open_Salomon.png)
 
-*   Enter your username if the _Host Name_ input is not in the format "username@salomon.it4i.cz".
-*   Enter passphrase for selected [private key](ssh-keys/) file if Pageant **SSH authentication agent is not used.**
+* Enter your username if the _Host Name_ input is not in the format "username@salomon.it4i.cz".
+* Enter passphrase for selected [private key](ssh-keys/) file if Pageant **SSH authentication agent is not used.**
 
 ## Other PuTTY Settings
 
-*   Category - Windows - Translation - Remote character set and select **UTF-8**.
-*   Category - Terminal - Features and select **Disable application keypad mode** (enable numpad)
-*   Save your configuration on Session page in to Default Settings with _Save_ button.
+* Category - Window - Translation - Remote character set and select **UTF-8**.
+* Category - Terminal - Features and select **Disable application keypad mode** (enables the numeric keypad)
+* Save your configuration on the Session page into Default Settings with the _Save_ button.
 
 ## Pageant SSH Agent
 
 Pageant holds your private key in memory without needing to retype a passphrase on every login.
 
-*   Run Pageant.
-*   On Pageant Key List press _Add key_ and select your private key (id_rsa.ppk).
-*   Enter your passphrase.
-*   Now you have your private key in memory without needing to retype a passphrase on every login.
+* Run Pageant.
+* On Pageant Key List press _Add key_ and select your private key (id_rsa.ppk).
+* Enter your passphrase.
+* Your private key is now held in memory; you will not be asked for the passphrase on every login.
 
 ![](../../../img/PageantV.png)
 
@@ -63,11 +63,11 @@ PuTTYgen is the PuTTY key generator. You can load in an existing private key and
 
 You can change the passphrase of your SSH key with "PuTTY Key Generator". Make sure to back up the key first.
 
-*   Load your [private key](../shell-access-and-data-transfer/ssh-keys/) file with _Load_ button.
-*   Enter your current passphrase.
-*   Change key passphrase.
-*   Confirm key passphrase.
-*   Save your private key with _Save private key_ button.
+* Load your [private key](../shell-access-and-data-transfer/ssh-keys/) file with _Load_ button.
+* Enter your current passphrase.
+* Change key passphrase.
+* Confirm key passphrase.
+* Save your private key with _Save private key_ button.
 
 ![](../../../img/PuttyKeygeneratorV.png)
 
@@ -75,33 +75,33 @@ You can change the password of your SSH key with "PuTTY Key Generator". Make sur
 
 You can generate an additional public/private key pair and insert the public key into the authorized_keys file for authentication with your own private key.
 
-*   Start with _Generate_ button.
+* Start with _Generate_ button.
 
 ![](../../../img/PuttyKeygenerator_001V.png)
 
-*   Generate some randomness.
+* Generate some randomness.
 
 ![](../../../img/PuttyKeygenerator_002V.png)
 
-*   Wait.
+* Wait.
 
 ![](../../../img/PuttyKeygenerator_003V.png)
 
-*   Enter a _comment_ for your key using format 'username@organization.example.com'.
+* Enter a _comment_ for your key using the format 'username@organization.example.com'.
      Enter key passphrase.
      Confirm key passphrase.
      Save your new private key in ".ppk" format with the _Save private key_ button.
 
 ![](../../../img/PuttyKeygenerator_004V.png)
 
-*   Save the public key with _Save public key_ button.
+* Save the public key with _Save public key_ button.
      You can copy the public key out of the ‘Public key for pasting into authorized_keys file’ box.
 
 ![](../../../img/PuttyKeygenerator_005V.png)
 
-*   Export private key in OpenSSH format "id_rsa" using Conversion - Export OpenSSH key
+* Export private key in OpenSSH format "id_rsa" using Conversion - Export OpenSSH key
 
 ![](../../../img/PuttyKeygenerator_006V.png)
 
-*   Now you can insert additional public key into authorized_keys file for authentication with your own private key.
+* Now you can insert the additional public key into the authorized_keys file for authentication with your own private key; a short sketch follows.
      You must log in using the ssh key received after registration. Then proceed to [How to add your own key](../shell-access-and-data-transfer/ssh-keys/).
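+
+A minimal sketch of the cluster-side step (assuming the new public key was exported as id_rsa_new.pub and copied to your home directory; the file name is illustrative):
+
+```bash
+    # append the new public key and keep the documented permissions
+    cat ~/id_rsa_new.pub >> ~/.ssh/authorized_keys
+    chmod 644 ~/.ssh/authorized_keys
+```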
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
index 5e9bbe54c8bc438358542c265a3b04995324ebfb..a2a4d429fc06d4943a0ab89df247f410ccdc4bd2 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
@@ -21,9 +21,9 @@ After logging in, you can see .ssh/ directory with SSH keys and authorized_keys
 
 ## Access Privileges on .ssh Folder
 
-*   .ssh directory: 700 (drwx------)
-*   Authorized_keys, known_hosts and public key (.pub file): 644 (-rw-r--r--)
-*   Private key (id_rsa/id_rsa.ppk): 600 (-rw-------)
+* .ssh directory: 700 (drwx------)
+* authorized_keys, known_hosts and public key (.pub file): 644 (-rw-r--r--)
+* Private key (id_rsa/id_rsa.ppk): 600 (-rw-------)
 
 ```bash
     cd /home/username/
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md
index 0de78fa9f87ddffc9f5ec75dd0e4292a0c9b3fa9..01123953847eefae3965c86e4896e6573f5514a5 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md
@@ -6,12 +6,12 @@ AnyConnect users on Windows 8.1 will receive a "Failed to initialize connection
 
 ## Workaround
 
-*   Close the Cisco AnyConnect Window and the taskbar mini-icon
-*   Right click vpnui.exe in the 'Cisco AnyConnect Secure Mobility Client' folder. (C:Program Files (x86)CiscoCisco AnyConnect Secure Mobility Client)
-*   Click on the 'Run compatibility troubleshooter' button
-*   Choose 'Try recommended settings'
-*   The wizard suggests Windows 8 compatibility.
-*   Click 'Test Program'. This will open the program.
-*   Close
+* Close the Cisco AnyConnect window and the taskbar mini-icon
+* Right click vpnui.exe in the 'Cisco AnyConnect Secure Mobility Client' folder (C:\Program Files (x86)\Cisco\Cisco AnyConnect Secure Mobility Client)
+* Click on the 'Run compatibility troubleshooter' button
+* Choose 'Try recommended settings'
+* The wizard suggests Windows 8 compatibility.
+* Click 'Test Program'. This will open the program.
+* Close
 
 ![](../../../img/vpnuiV.png)
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn-access.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn-access.md
index d33b2e20b89686147f0b850c31c17748e762d32c..8f24a21f54aa37624035cf8aa42806af9d09c4a8 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn-access.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn-access.md
@@ -4,12 +4,12 @@
 
 To use resources and licenses located on the IT4Innovations local network, it is necessary to connect to this network via VPN. We use the Cisco AnyConnect Secure Mobility Client, which is supported on the following operating systems:
 
-*   Windows XP
-*   Windows Vista
-*   Windows 7
-*   Windows 8
-*   Linux
-*   MacOS
+* Windows XP
+* Windows Vista
+* Windows 7
+* Windows 8
+* Linux
+* MacOS
 
 It is impossible to connect to VPN from other operating systems.
 
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn1-access.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn1-access.md
index e0a21d4700a16011afb67d8ef1b19983f6ef8db2..bbfdb211a5b4e1ffa1725fbb7772cbbdc30106ad 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn1-access.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/vpn1-access.md
@@ -9,12 +9,12 @@ Workaround can be found at [vpn-connection-fail-in-win-8.1](../../get-started-wi
 
 To use resources and licenses located on the IT4Innovations local network, it is necessary to connect to this network via VPN. We use the Cisco AnyConnect Secure Mobility Client, which is supported on the following operating systems:
 
-*   Windows XP
-*   Windows Vista
-*   Windows 7
-*   Windows 8
-*   Linux
-*   MacOS
+* Windows XP
+* Windows Vista
+* Windows 7
+* Windows 8
+* Linux
+* MacOS
 
 It is impossible to connect to VPN from other operating systems.
 
diff --git a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
index c0d9c23231a434c803f8718ab853dfa0c54b8392..bf0b5c5acc85d611237908cfecf5f8e73b07afd5 100644
--- a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
+++ b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
@@ -8,10 +8,10 @@ IT4Innovations employs X.509 certificates for secure communication (e. g. creden
 
 There are different kinds of certificates, each with a different scope of use. We mention here:
 
-*   User (Private) certificates
-*   Certificate Authority (CA) certificates
-*   Host certificates
-*   Service certificates
+* User (Private) certificates
+* Certificate Authority (CA) certificates
+* Host certificates
+* Service certificates
 
 However, users need only manage User and CA certificates. Note that your user certificate is protected by an associated private key, and this **private key must never be disclosed**.
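+
+For example, you can check the identity and validity of a certificate with openssl (a minimal sketch; usercert.pem is a placeholder for your PEM-encoded certificate file):
+
+```bash
+    # show the subject, issuer and validity period of the certificate
+    openssl x509 -in usercert.pem -noout -subject -issuer -dates
+```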
 
diff --git a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
index da0acddbf6216760791918ad728bdc2b458305df..dd3de61f5770e1a0ed9d21e3a9387b11041f7b55 100644
--- a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
+++ b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
@@ -24,8 +24,8 @@ This is a preferred way of granting access to project resources. Please, use thi
 
 Log in to the [IT4I Extranet portal](https://extranet.it4i.cz) using IT4I credentials and go to the **Projects** section.
 
-*   **Users:** Please, submit your requests for becoming a project member.
-*   **Primary Investigators:** Please, approve or deny users' requests in the same section.
+* **Users:** Please, submit your requests for becoming a project member.
+* **Primary Investigators:** Please, approve or deny users' requests in the same section.
 
 ## Authorization by E-Mail (An Alternative Approach)
 
@@ -120,7 +120,7 @@ We accept personal certificates issued by any widely respected certification aut
 
 Certificate generation process is well-described here:
 
-*   [How to generate a personal TCS certificate in Mozilla Firefox web browser (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-gen)
+* [How to generate a personal TCS certificate in Mozilla Firefox web browser (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-gen)
 
 A FAQ about certificates can be found here: [Certificates FAQ](certificates-faq/).
 
@@ -128,18 +128,18 @@ A FAQ about certificates can be found here: [Certificates FAQ](certificates-faq/
 
 Follow these steps **only** if you cannot obtain your certificate in a standard way. In case you choose this procedure, please attach a **scan of photo ID** (personal ID, passport, or driver's license) when applying for [login credentials](obtaining-login-credentials/#the-login-credentials).
 
-*   Go to [CAcert](www.cacert.org).
+* Go to [CAcert](http://www.cacert.org).
    * If there's a security warning, just acknowledge it.
-*   Click _Join_.
-*   Fill in the form and submit it by the _Next_ button.
+* Click _Join_.
+* Fill in the form and submit it with the _Next_ button.
    * Type in the e-mail address which you use for communication with us.
    * Don't forget your chosen _Pass Phrase_.
-*   You will receive an e-mail verification link. Follow it.
-*   After verifying, go to the CAcert's homepage and login using     _Password Login_.
-*   Go to _Client Certificates_ _New_.
-*   Tick _Add_ for your e-mail address and click the _Next_ button.
-*   Click the _Create Certificate Request_ button.
-*   You'll be redirected to a page from where you can download/install your certificate.
+* You will receive an e-mail verification link. Follow it.
+* After verifying, go to the CAcert homepage and log in using _Password Login_.
+* Go to _Client Certificates_ - _New_.
+* Tick _Add_ for your e-mail address and click the _Next_ button.
+* Click the _Create Certificate Request_ button.
+* You'll be redirected to a page from where you can download/install your certificate.
    * Simultaneously you'll get an e-mail with a link to the certificate.
 
 ## Installation of the Certificate Into Your Mail Client
@@ -148,13 +148,13 @@ The procedure is similar to the following guides:
 
 MS Outlook 2010
 
-*   [How to Remove, Import, and Export Digital certificates](http://support.microsoft.com/kb/179380)
-*   [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/outl-cert-imp)
+* [How to Remove, Import, and Export Digital certificates](http://support.microsoft.com/kb/179380)
+* [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/outl-cert-imp)
 
 Mozilla Thunderbird
 
-*   [Installing an SMIME certificate](http://kb.mozillazine.org/Installing_an_SMIME_certificate)
-*   [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-imp)
+* [Installing an SMIME certificate](http://kb.mozillazine.org/Installing_an_SMIME_certificate)
+* [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-imp)
 
 ## End of User Account Lifecycle
 
@@ -162,8 +162,8 @@ User accounts are supported by membership in active Project(s) or by affiliation
 
 The user will receive 3 automatically generated warning e-mail messages about the pending removal:
 
-*   First message will be sent 3 months before the removal
-*   Second message will be sent 1 month before the removal
-*   Third message will be sent 1 week before the removal.
+* The first message will be sent 3 months before the removal.
+* The second message will be sent 1 month before the removal.
+* The third message will be sent 1 week before the removal.
 
 The messages will state the projected removal date and prompt the user to migrate her/his data.
diff --git a/docs.it4i/index.md b/docs.it4i/index.md
index e233038e06bc7a3a43db04583b32a85f363d9f95..86072355734e682af540708c3808a33c01d9d40f 100644
--- a/docs.it4i/index.md
+++ b/docs.it4i/index.md
@@ -29,17 +29,17 @@ In many cases, you will run your own code on the cluster. In order to fully expl
 
 ## Terminology Frequently Used on These Pages
 
-*   **node:** a computer, interconnected by network to other computers - Computational nodes are powerful computers, designed and dedicated for executing demanding scientific computations.
-*   **core:** processor core, a unit of processor, executing computations
-*   **corehours:** wall clock hours of processor core time - Each node is equipped with **X** processor cores, provides **X** corehours per 1 wall clock hour.
-*   **job:** a calculation running on the supercomputer - The job allocates and utilizes resources of the supercomputer for certain time.
-*   **HPC:** High Performance Computing
-*   **HPC (computational) resources:** corehours, storage capacity, software licences
-*   **code:** a program
-*   **primary investigator (PI):** a person responsible for execution of computational project and utilization of computational resources allocated to that project
-*   **collaborator:** a person participating on execution of computational project and utilization of computational resources allocated to that project
-*   **project:** a computational project under investigation by the PI - The project is identified by the project ID. The computational resources are allocated and charged per project.
-*   **jobscript:** a script to be executed by the PBS Professional workload manager
+* **node:** a computer, interconnected by network to other computers - Computational nodes are powerful computers designed and dedicated to executing demanding scientific computations.
+* **core:** processor core, a unit of processor, executing computations
+* **corehours:** wall clock hours of processor core time - Each node is equipped with **X** processor cores and provides **X** corehours per 1 wall clock hour (for example, a 24-core node used for 10 wall clock hours consumes 240 corehours).
+* **job:** a calculation running on the supercomputer - The job allocates and utilizes resources of the supercomputer for a certain time.
+* **HPC:** High Performance Computing
+* **HPC (computational) resources:** corehours, storage capacity, software licences
+* **code:** a program
+* **primary investigator (PI):** a person responsible for the execution of the computational project and the utilization of the computational resources allocated to that project
+* **collaborator:** a person participating in the execution of the computational project and the utilization of the computational resources allocated to that project
+* **project:** a computational project under investigation by the PI - The project is identified by the project ID. The computational resources are allocated and charged per project.
+* **jobscript:** a script to be executed by the PBS Professional workload manager
 
 ## Conventions
 
diff --git a/docs.it4i/pbspro.md b/docs.it4i/pbspro.md
index e89ddfe72d54ff6b0e3fce2ab53f47bb2c6bbac5..72f5c3dd33b2946d0399ffe3c16c7cab8613a5b8 100644
--- a/docs.it4i/pbspro.md
+++ b/docs.it4i/pbspro.md
@@ -1,4 +1,4 @@
-*   ![pdf](img/pdf.png)[PBS Pro Programmer's Guide](http://www.pbsworks.com/pdfs/PBSProgramGuide13.0.pdf)
-*   ![pdf](img/pdf.png)[PBS Pro Quick Start Guide](http://www.pbsworks.com/pdfs/PBSQuickStartGuide13.0.pdf)
-*   ![pdf](img/pdf.png)[PBS Pro Reference Guide](http://www.pbsworks.com/pdfs/PBSReferenceGuide13.0.pdf)
-*   ![pdf](img/pdf.png)[PBS Pro User's Guide](http://www.pbsworks.com/pdfs/PBSUserGuide13.0.pdf)
+* ![pdf](img/pdf.png)[PBS Pro Programmer's Guide](http://www.pbsworks.com/pdfs/PBSProgramGuide13.0.pdf)
+* ![pdf](img/pdf.png)[PBS Pro Quick Start Guide](http://www.pbsworks.com/pdfs/PBSQuickStartGuide13.0.pdf)
+* ![pdf](img/pdf.png)[PBS Pro Reference Guide](http://www.pbsworks.com/pdfs/PBSReferenceGuide13.0.pdf)
+* ![pdf](img/pdf.png)[PBS Pro User's Guide](http://www.pbsworks.com/pdfs/PBSUserGuide13.0.pdf)
diff --git a/docs.it4i/salomon/capacity-computing.md b/docs.it4i/salomon/capacity-computing.md
index 36ee6211b315b751799920027789bf0d1173e893..18b896440cb841c7a68ec5d9b1e0fdfa58216b3d 100644
--- a/docs.it4i/salomon/capacity-computing.md
+++ b/docs.it4i/salomon/capacity-computing.md
@@ -9,9 +9,9 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 !!! note
     Please follow one of the procedures below, in case you wish to schedule more than 100 jobs at a time.
 
-*   Use [Job arrays](capacity-computing.md#job-arrays) when running huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
-*   Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
-*   Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running huge number of single core jobs
+* Use [Job arrays](capacity-computing.md#job-arrays) when running a huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
+* Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
+* Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running a huge number of single core jobs
 
 ## Policy
 
@@ -25,9 +25,9 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 
 A job array is a compact representation of many jobs, called subjobs. The subjobs share the same job script, and have the same values for all attributes and resources, with the following exceptions:
 
-*   each subjob has a unique index, $PBS_ARRAY_INDEX
-*   job Identifiers of subjobs only differ by their indices
-*   the state of subjobs can differ (R,Q,...etc.)
+* each subjob has a unique index, $PBS_ARRAY_INDEX
+* job identifiers of subjobs differ only by their indices
+* the state of subjobs can differ (R, Q, etc.)
 
 All subjobs within a job array have the same scheduling priority and schedule as independent jobs. The entire job array is submitted through a single qsub command and may be managed by the qdel, qalter, qhold, qrls and qsig commands as a single job, as sketched below.
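+
+A minimal sketch of submitting and managing a job array (the jobscript name, index range and job ID are illustrative):
+
+```bash
+    # submit a job array of 100 subjobs, indices 1..100
+    qsub -J 1-100 jobscript
+    # manage the whole array as a single job, e.g. delete it
+    # ("12345[]" stands for the array job ID returned by qsub)
+    qdel "12345[]"
+```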
 
diff --git a/docs.it4i/salomon/compute-nodes.md b/docs.it4i/salomon/compute-nodes.md
index 83bca5c4045e93800fe922accb9882508f0aa0b1..ed4fcf71cde5bdbba440923bb4a4d556dbfde1f4 100644
--- a/docs.it4i/salomon/compute-nodes.md
+++ b/docs.it4i/salomon/compute-nodes.md
@@ -9,22 +9,22 @@ Compute nodes with MIC accelerator **contains two Intel Xeon Phi 7120P accelerat
 
 ### Compute Nodes Without Accelerator
 
-*   codename "grafton"
-*   576 nodes
-*   13 824 cores in total
-*   two Intel Xeon E5-2680v3, 12-core, 2.5 GHz processors per node
-*   128 GB of physical memory per node
+* codename "grafton"
+* 576 nodes
+* 13 824 cores in total
+* two Intel Xeon E5-2680v3, 12-core, 2.5 GHz processors per node
+* 128 GB of physical memory per node
 
 ![cn_m_cell](../img/cn_m_cell)
 
 ### Compute Nodes With MIC Accelerator
 
-*   codename "perrin"
-*   432 nodes
-*   10 368 cores in total
-*   two Intel Xeon E5-2680v3, 12-core, 2.5 GHz processors per node
-*   128 GB of physical memory per node
-*   MIC accelerator 2 x Intel Xeon Phi 7120P per node, 61-cores, 16 GB per accelerator
+* codename "perrin"
+* 432 nodes
+* 10 368 cores in total
+* two Intel Xeon E5-2680v3, 12-core, 2.5 GHz processors per node
+* 128 GB of physical memory per node
+* MIC accelerators: 2 x Intel Xeon Phi 7120P per node, 61 cores and 16 GB RAM per accelerator
 
 ![cn_mic](../img/cn_mic-1)
 
@@ -34,12 +34,12 @@ Compute nodes with MIC accelerator **contains two Intel Xeon Phi 7120P accelerat
 
 ### Uv 2000
 
-*   codename "UV2000"
-*   1 node
-*   112 cores in total
-*   14 x Intel Xeon E5-4627v2, 8-core, 3.3 GHz processors, in 14 NUMA nodes
-*   3328 GB of physical memory per node
-*   1 x NVIDIA GM200 (GeForce GTX TITAN X), 12 GB RAM
+* codename "UV2000"
+* 1 node
+* 112 cores in total
+* 14 x Intel Xeon E5-4627v2, 8-core, 3.3 GHz processors, in 14 NUMA nodes
+* 3328 GB of physical memory per node
+* 1 x NVIDIA GM200 (GeForce GTX TITAN X), 12 GB RAM
 
 ![](../img/uv-2000.jpeg)
 
@@ -57,22 +57,22 @@ Salomon is equipped with Intel Xeon processors Intel Xeon E5-2680v3. Processors
 
 ### Intel Xeon E5-2680v3 Processor
 
-*   12-core
-*   speed: 2.5 GHz, up to 3.3 GHz using Turbo Boost Technology
-*   peak performance:  19.2 GFLOP/s per core
-*   caches:
+* 12-core
+* speed: 2.5 GHz, up to 3.3 GHz using Turbo Boost Technology
+* peak performance:  19.2 GFLOP/s per core
+* caches:
    * Intel® Smart Cache:  30 MB
-*   memory bandwidth at the level of the processor: 68 GB/s
+* memory bandwidth at the level of the processor: 68 GB/s
 
 ### MIC Accelerator Intel Xeon Phi 7120P Processor
 
-*   61-core
-*   speed:  1.238
+* 61-core
+* speed: 1.238 GHz, up to 1.333 GHz using Turbo Boost Technology
-*   peak performance:  18.4 GFLOP/s per core
-*   caches:
+* peak performance:  18.4 GFLOP/s per core
+* caches:
    * L2:  30.5 MB
-*   memory bandwidth at the level of the processor:  352 GB/s
+* memory bandwidth at the level of the processor:  352 GB/s
 
 ## Memory Architecture
 
@@ -80,27 +80,27 @@ Memory is equally distributed across all CPUs and cores for optimal performance.
 
 ### Compute Node Without Accelerator
 
-*   2 sockets
-*   Memory Controllers are integrated into processors.
+* 2 sockets
+* Memory Controllers are integrated into processors.
    * 8 DDR4 DIMMs per node
    * 4 DDR4 DIMMs per CPU
    * 1 DDR4 DIMMs per channel
-*   Populated memory: 8 x 16 GB DDR4 DIMM >2133 MHz
+* Populated memory: 8 x 16 GB DDR4 DIMM 2133 MHz
 
 ### Compute Node With MIC Accelerator
 
 2 sockets
 Memory Controllers are integrated into processors.
 
-*   8 DDR4 DIMMs per node
-*   4 DDR4 DIMMs per CPU
-*   1 DDR4 DIMMs per channel
+* 8 DDR4 DIMMs per node
+* 4 DDR4 DIMMs per CPU
+* 1 DDR4 DIMMs per channel
 
 Populated memory: 8 x 16 GB DDR4 DIMM 2133 MHz
 MIC Accelerator Intel Xeon Phi 7120P Processor
 
-*   2 sockets
-*   Memory Controllers are are connected via an
+* 2 sockets
+* Memory Controllers are connected via an
     Interprocessor Network (IPN) ring.
    * 16 GDDR5 DIMMs per node
    * 8 GDDR5 DIMMs per CPU
diff --git a/docs.it4i/salomon/environment-and-modules.md b/docs.it4i/salomon/environment-and-modules.md
index 06be4665c5f1c800c42586fb11eb9dd4a027605f..983ec0318130909f98b441112d723b948a6de3b3 100644
--- a/docs.it4i/salomon/environment-and-modules.md
+++ b/docs.it4i/salomon/environment-and-modules.md
@@ -107,9 +107,9 @@ The EasyBuild framework prepares the build environment for the different toolcha
 
 Recent releases of EasyBuild include out-of-the-box toolchain support for:
 
-*   various compilers, including GCC, Intel, Clang, CUDA
-*   common MPI libraries, such as Intel MPI, MPICH, MVAPICH2, Open MPI
-*   various numerical libraries, including ATLAS, Intel MKL, OpenBLAS, ScaLAPACK, FFTW
+* various compilers, including GCC, Intel, Clang, CUDA
+* common MPI libraries, such as Intel MPI, MPICH, MVAPICH2, Open MPI
+* various numerical libraries, including ATLAS, Intel MKL, OpenBLAS, ScaLAPACK, FFTW
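+
+A toolchain is then loaded like any other module (a sketch; the toolchain name and version are illustrative, pick one from the list below):
+
+```bash
+    # load a complete toolchain (compiler + MPI + math libraries) as one module
+    module load intel/2015b
+```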
 
 On Salomon, we currently have the following toolchains installed:
 
diff --git a/docs.it4i/salomon/ib-single-plane-topology.md b/docs.it4i/salomon/ib-single-plane-topology.md
index 9456b83b37aaa5ae54b9a97b49e3019eabc09bda..5ea65562559d028641e2c2019b5e3eaf44f0d4b1 100644
--- a/docs.it4i/salomon/ib-single-plane-topology.md
+++ b/docs.it4i/salomon/ib-single-plane-topology.md
@@ -4,9 +4,9 @@ A complete M-Cell assembly consists of four compute racks. Each rack contains 4
 
 The SGI ICE X IB Premium Blade provides the first level of interconnection via dual 36-port Mellanox FDR InfiniBand ASIC switch with connections as follows:
 
-*   9 ports from each switch chip connect to the unified backplane, to connect the 18 compute node slots
-*   3 ports on each chip provide connectivity between the chips
-*   24 ports from each switch chip connect to the external bulkhead, for a total of 48
+* 9 ports from each switch chip connect to the unified backplane, to connect the 18 compute node slots
+* 3 ports on each chip provide connectivity between the chips
+* 24 ports from each switch chip connect to the external bulkhead, for a total of 48
 
 ### IB Single-Plane Topology - ICEX M-Cell
 
@@ -22,9 +22,9 @@ Each of the 3 inter-connected D racks are equivalent to one half of M-Cell rack.
 
 As shown in the diagram ![IB Topology](../img/Salomon_IB_topology.png)
 
-*   Racks 21, 22, 23, 24, 25, 26 are equivalent to one M-Cell rack.
-*   Racks 27, 28, 29, 30, 31, 32 are equivalent to one M-Cell rack.
-*   Racks 33, 34, 35, 36, 37, 38 are equivalent to one M-Cell rack.
+* Racks 21, 22, 23, 24, 25, 26 are equivalent to one M-Cell rack.
+* Racks 27, 28, 29, 30, 31, 32 are equivalent to one M-Cell rack.
+* Racks 33, 34, 35, 36, 37, 38 are equivalent to one M-Cell rack.
 
 [IB single-plane topology - Accelerated nodes.pdf](<../src/IB single-plane topology - Accelerated nodes.pdf>)
 
diff --git a/docs.it4i/salomon/prace.md b/docs.it4i/salomon/prace.md
index 1cade6cf91cccac5351cb598e7d0aa9218d05954..45f9e46ac009486a9d44641aa21c961a45703e17 100644
--- a/docs.it4i/salomon/prace.md
+++ b/docs.it4i/salomon/prace.md
@@ -28,11 +28,11 @@ The user will need a valid certificate and to be present in the PRACE LDAP (plea
 
 Most of the information needed by PRACE users accessing the Salomon TIER-1 system can be found here:
 
-*   [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)
-*   [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)
-*   [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh)
-*   [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details)
-*   [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer)
+* [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)
+* [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)
+* [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh)
+* [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details)
+* [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer)
 
 Before you start to use any of the services, don't forget to create a proxy certificate from your certificate:
 
diff --git a/docs.it4i/salomon/resource-allocation-and-job-execution.md b/docs.it4i/salomon/resource-allocation-and-job-execution.md
index 940e43a91b0e389a2758eae8ac3d51ff1e9f2f08..a28c2a63a19b0de082214d7e2a2e93da91b0d0e8 100644
--- a/docs.it4i/salomon/resource-allocation-and-job-execution.md
+++ b/docs.it4i/salomon/resource-allocation-and-job-execution.md
@@ -6,12 +6,12 @@ To run a [job](job-submission-and-execution/), [computational resources](resourc
 
 The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. [The Fair-share](job-priority/) at Salomon ensures that individual users may consume an approximately equal amount of resources per week. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following queues are available to Salomon users (an example submission is shown below the list):
 
-*   **qexp**, the Express queue
-*   **qprod**, the Production queue
-*   **qlong**, the Long queue
-*   **qmpp**, the Massively parallel queue
-*   **qfat**, the queue to access SMP UV2000 machine
-*   **qfree**, the Free resource utilization queue
+* **qexp**, the Express queue
+* **qprod**, the Production queue
+* **qlong**, the Long queue
+* **qmpp**, the Massively parallel queue
+* **qfat**, the queue to access SMP UV2000 machine
+* **qfree**, the Free resource utilization queue
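+
+For example, a production run might be submitted along these lines (a sketch; PROJECT_ID and the jobscript name are placeholders):
+
+```bash
+    # batch job: 4 full nodes (24 cores each) in the production queue
+    qsub -q qprod -A PROJECT_ID -l select=4:ncpus=24 ./myjob.sh
+```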
 
 !!! note
     Check the queue status at <https://extranet.it4i.cz/rsweb/salomon/>
diff --git a/docs.it4i/salomon/resources-allocation-policy.md b/docs.it4i/salomon/resources-allocation-policy.md
index 4122d7097ac0b360d5bcdbaac1b6c5eace6a4050..28d78fc4a650c0d1e24650baafa3c50e86dfb46e 100644
--- a/docs.it4i/salomon/resources-allocation-policy.md
+++ b/docs.it4i/salomon/resources-allocation-policy.md
@@ -20,13 +20,13 @@ The resources are allocated to the job in a fair-share fashion, subject to const
 !!! note
     **The qfree queue is not free of charge**. [Normal accounting](resources-allocation-policy/#resources-accounting-policy) applies. However, it allows for utilization of free resources, once a Project has exhausted all its allocated computational resources. This does not apply to Director's Discretion projects (DD projects) by default. Usage of qfree after exhaustion of a DD project's computational resources is allowed upon request for this queue.
 
-*   **qexp**, the Express queue: This queue is dedicated for testing and running very small jobs. It is not required to specify a project to enter the qexp. There are 2 nodes always reserved for this queue (w/o accelerator), maximum 8 nodes are available via the qexp for a particular user. The nodes may be allocated on per core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
-*   **qprod**, the Production queue: This queue is intended for normal production runs. It is required that active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, however only 86 per job. Full nodes, 24 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
-*   **qlong**, the Long queue: This queue is intended for long production runs. It is required that active project with nonzero remaining resources is specified to enter the qlong. Only 336 nodes without acceleration may be accessed via the qlong queue. Full nodes, 24 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times of the standard qprod time - 3 \* 48 h)
-*   **qmpp**, the massively parallel queue. This queue is intended for massively parallel runs. It is required that active project with nonzero remaining resources is specified to enter the qmpp. All nodes may be accessed via the qmpp queue. Full nodes, 24 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it.  The maximum runtime in qmpp is 4 hours. An PI needs explicitly ask support for authorization to enter the queue for all users associated to her/his Project.
-*   **qfat**, the UV2000 queue. This queue is dedicated to access the fat SGI UV2000 SMP machine. The machine (uv1) has 112 Intel IvyBridge cores at 3.3GHz and 3.25TB RAM. An PI needs explicitly ask support for authorization to enter the queue for all users associated to her/his Project.
-*   **qfree**, the Free resource queue: The queue qfree is intended for utilization of free resources, after a Project exhausted all its allocated computational resources (Does not apply to DD projects by default. DD projects have to request for persmission on qfree after exhaustion of computational resources.). It is required that active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 24 cores per node are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
-*   **qviz**, the Visualization queue: Intended for pre-/post-processing using OpenGL accelerated graphics. Currently when accessing the node, each user gets 4 cores of a CPU allocated, thus approximately 73 GB of RAM and 1/7 of the GPU capacity (default "chunk"). If more GPU power or RAM is required, it is recommended to allocate more chunks (with 4 cores each) up to one whole node per user, so that all 28 cores, 512 GB RAM and whole GPU is exclusive. This is currently also the maximum allowed allocation per one user. One hour of work is allocated by default, the user may ask for 2 hours maximum.
+* **qexp**, the Express queue: This queue is dedicated to testing and running very small jobs. It is not required to specify a project to enter the qexp. There are 2 nodes always reserved for this queue (w/o accelerator); a maximum of 8 nodes are available via the qexp for a particular user (see the example below the list). The nodes may be allocated on a per core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
+* **qprod**, the Production queue: This queue is intended for normal production runs. It is required that an active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, however only 86 nodes per job. Full nodes, 24 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
+* **qlong**, the Long queue: This queue is intended for long production runs. It is required that an active project with nonzero remaining resources is specified to enter the qlong. Only 336 nodes without acceleration may be accessed via the qlong queue. Full nodes, 24 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times the standard qprod time - 3 \* 48 h).
+* **qmpp**, the Massively parallel queue: This queue is intended for massively parallel runs. It is required that an active project with nonzero remaining resources is specified to enter the qmpp. All nodes may be accessed via the qmpp queue. Full nodes, 24 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qmpp is 4 hours. A PI needs to explicitly ask support for authorization to enter the queue for all users associated with her/his Project.
+* **qfat**, the UV2000 queue: This queue is dedicated to accessing the fat SGI UV2000 SMP machine. The machine (uv1) has 112 Intel IvyBridge cores at 3.3 GHz and 3.25 TB RAM. A PI needs to explicitly ask support for authorization to enter the queue for all users associated with her/his Project.
+* **qfree**, the Free resource queue: The queue qfree is intended for utilization of free resources, after a Project has exhausted all its allocated computational resources (this does not apply to DD projects by default; DD projects have to request permission on qfree after exhaustion of computational resources). It is required that an active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 24 cores per node, are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
+* **qviz**, the Visualization queue: Intended for pre-/post-processing using OpenGL accelerated graphics. Currently, when accessing the node, each user gets 4 cores of a CPU allocated, thus approximately 73 GB of RAM and 1/7 of the GPU capacity (the default "chunk"). If more GPU power or RAM is required, it is recommended to allocate more chunks (with 4 cores each), up to one whole node per user, so that all 28 cores, 512 GB RAM and the whole GPU are exclusive. This is currently also the maximum allowed allocation per one user. One hour of work is allocated by default; the user may ask for 2 hours maximum.
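+
+For instance, a short interactive test in the express queue could look like this (a sketch; the node count stays within the 8-node qexp limit):
+
+```bash
+    # interactive test job: 2 full nodes for up to 1 hour in the express queue
+    qsub -q qexp -l select=2:ncpus=24 -I
+```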
 
 !!! note
     To access node with Xeon Phi co-processor user needs to specify that in [job submission select statement](job-submission-and-execution/).
diff --git a/docs.it4i/salomon/shell-and-data-access.md b/docs.it4i/salomon/shell-and-data-access.md
index 04bdc3a78fcf8eb0d0889719a27c45dd14121833..438e60ccadd29fc69b670a1c3351bf2d56a72a35 100644
--- a/docs.it4i/salomon/shell-and-data-access.md
+++ b/docs.it4i/salomon/shell-and-data-access.md
@@ -191,9 +191,9 @@ Now, configure the applications proxy settings to **localhost:6000**. Use port f
 
 ## Graphical User Interface
 
-*   The [X Window system](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) is a principal way to get GUI access to the clusters.
-*   The [Virtual Network Computing](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer).
+* The [X Window system](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) is a principal way to get GUI access to the clusters.
+* The [Virtual Network Computing](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer).
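+
+For X11-based applications it is often enough to enable X forwarding in your SSH session (a minimal sketch; see the linked X Window page for details):
+
+```bash
+    # forward X11 from the cluster to your local display
+    ssh -X username@salomon.it4i.cz
+```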
 
 ## VPN Access
 
-*   Access to IT4Innovations internal resources via [VPN](../get-started-with-it4innovations/accessing-the-clusters/vpn-access/).
+* Access to IT4Innovations internal resources via [VPN](../get-started-with-it4innovations/accessing-the-clusters/vpn-access/).
diff --git a/docs.it4i/salomon/software/ansys/licensing.md b/docs.it4i/salomon/software/ansys/licensing.md
index 8709ba86478c7bb933fb2cd586cd9eaa3c8729bc..04ff6513349ccede25a0846dd21227251e954732 100644
--- a/docs.it4i/salomon/software/ansys/licensing.md
+++ b/docs.it4i/salomon/software/ansys/licensing.md
@@ -2,9 +2,9 @@
 
 ## ANSYS Licence Can Be Used By:
 
-*   all persons in the carrying out of the CE IT4Innovations Project (In addition to the primary licensee, which is VSB - Technical University of Ostrava, users are CE IT4Innovations third parties - CE IT4Innovations project partners, particularly the University of Ostrava, the Brno University of Technology - Faculty of Informatics, the Silesian University in Opava, Institute of Geonics AS CR.)
-*   all persons who have a valid license
-*   students of the Technical University
+* all persons involved in carrying out the CE IT4Innovations Project (In addition to the primary licensee, which is VSB - Technical University of Ostrava, users are CE IT4Innovations third parties - CE IT4Innovations project partners, particularly the University of Ostrava, the Brno University of Technology - Faculty of Informatics, the Silesian University in Opava, Institute of Geonics AS CR.)
+* all persons who have a valid license
+* students of the Technical University
 
 ## ANSYS Academic Research
 
@@ -16,8 +16,8 @@ The licence intended to be used for science and research, publications, students
 
 ## Available Versions
 
-*   16.1
-*   17.0
+* 16.1
+* 17.0
 
 ## License Preferences
 
diff --git a/docs.it4i/salomon/software/chemistry/nwchem.md b/docs.it4i/salomon/software/chemistry/nwchem.md
index be4e95f060601b302b8a4a9a677672256001ba5e..00bebe9e14c4c57689e8d80266876b92ed69e4d9 100644
--- a/docs.it4i/salomon/software/chemistry/nwchem.md
+++ b/docs.it4i/salomon/software/chemistry/nwchem.md
@@ -12,8 +12,8 @@ NWChem aims to provide its users with computational chemistry tools that are sca
 
 The following versions are currently installed:
 
-*   NWChem/6.3.revision2-2013-10-17-Python-2.7.8, current release. Compiled with Intel compilers, MKL and Intel MPI
-*   NWChem/6.5.revision26243-intel-2015b-2014-09-10-Python-2.7.8
+* NWChem/6.3.revision2-2013-10-17-Python-2.7.8, current release. Compiled with Intel compilers, MKL and Intel MPI
+* NWChem/6.5.revision26243-intel-2015b-2014-09-10-Python-2.7.8
 
 For a current list of installed versions, execute:
 
@@ -41,5 +41,5 @@ We recommend using version 6.5. Version 6.3 fails on Salomon nodes with accele
 
 Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and set the following directives in the input file:
 
-*   MEMORY : controls the amount of memory NWChem will use
-*   SCRATCH_DIR : set this to a directory in [SCRATCH filesystem](../../storage/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, eg. "scf direct"
+* MEMORY: controls the amount of memory NWChem will use
+* SCRATCH_DIR: set this to a directory in the [SCRATCH filesystem](../../storage/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct". A short example is given below.
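+
+A minimal sketch of these directives in an input file (the memory size and scratch path are illustrative and must be adapted to your job):
+
+```bash
+    # start an NWChem input file with the two directives discussed above
+    echo "memory total 1000 mb"                         > job.nw
+    echo "scratch_dir /scratch/work/user/$USER/nwchem" >> job.nw
+    # optionally force direct mode for SCF to reduce I/O
+    printf 'scf\n direct\nend\n'                       >> job.nw
+```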
diff --git a/docs.it4i/salomon/software/compilers.md b/docs.it4i/salomon/software/compilers.md
index da785d173bb9cf361c2f5d062b3d269b0e530293..2c5b545c6ed7e7a85041ba18c7467d5530609803 100644
--- a/docs.it4i/salomon/software/compilers.md
+++ b/docs.it4i/salomon/software/compilers.md
@@ -4,22 +4,22 @@ Available compilers, including GNU, INTEL and UPC compilers
 
 There are several compilers for different programming languages available on the cluster:
 
-*   C/C++
-*   Fortran 77/90/95/HPF
-*   Unified Parallel C
-*   Java
+* C/C++
+* Fortran 77/90/95/HPF
+* Unified Parallel C
+* Java
 
 The C/C++ and Fortran compilers are provided by:
 
 Opensource:
 
-*   GNU GCC
-*   Clang/LLVM
+* GNU GCC
+* Clang/LLVM
 
 Commercial licenses:
 
-*   Intel
-*   PGI
+* Intel
+* PGI
 
 ## Intel Compilers
 
@@ -81,8 +81,8 @@ For more information about the possibilities of the compilers, please see the ma
 
 UPC is supported by two compiler/runtime implementations:
 
-*   GNU - SMP/multi-threading support only
-*   Berkley - multi-node support as well as SMP/multi-threading support
+* GNU - SMP/multi-threading support only
+* Berkeley - multi-node support as well as SMP/multi-threading support
 
 ### GNU UPC Compiler
 
diff --git a/docs.it4i/salomon/software/comsol/comsol-multiphysics.md b/docs.it4i/salomon/software/comsol/comsol-multiphysics.md
index a03c4495d9dad72aea2748ddff332697a8370bd9..6b2d725c102af9f6b08940087344b96f9d5d7633 100644
--- a/docs.it4i/salomon/software/comsol/comsol-multiphysics.md
+++ b/docs.it4i/salomon/software/comsol/comsol-multiphysics.md
@@ -4,11 +4,11 @@
 
 [COMSOL](http://www.comsol.com) is a powerful environment for modelling and solving various engineering and scientific problems based on partial differential equations. COMSOL is designed to solve coupled or multiphysics phenomena. For many standard engineering problems COMSOL provides add-on products such as electrical, mechanical, fluid flow, and chemical applications.
 
-*   [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
-*   [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
-*   [CFD Module](http://www.comsol.com/cfd-module),
-*   [Acoustics Module](http://www.comsol.com/acoustics-module),
-*   and [many others](http://www.comsol.com/products)
+* [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
+* [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
+* [CFD Module](http://www.comsol.com/cfd-module),
+* [Acoustics Module](http://www.comsol.com/acoustics-module),
+* and [many others](http://www.comsol.com/products)
 
 COMSOL also offers interface support for equation-based modelling of partial differential equations.
 
@@ -16,9 +16,9 @@ COMSOL also allows an interface support for equation-based modelling of partial
 
 On the clusters COMSOL is available in the latest stable version. There are two variants of the release:
 
-*   **Non commercial** or so called >**EDU variant**>, which can be used for research and educational purposes.
+* **Non-commercial**, the so-called **EDU variant**, which can be used for research and educational purposes.
 
-*   **Commercial** or so called **COM variant**, which can used also for commercial activities. **COM variant** has only subset of features compared to the **EDU variant** available. More about licensing will be posted here soon.
+* **Commercial**, the so-called **COM variant**, which can also be used for commercial activities. The **COM variant** has only a subset of the features available in the **EDU variant**. More about licensing will be posted here soon.
 
 To load COMSOL, load the module
 
diff --git a/docs.it4i/salomon/software/comsol/licensing-and-available-versions.md b/docs.it4i/salomon/software/comsol/licensing-and-available-versions.md
index 1f1ad09bd9d88edba248c317b83db26f2593159f..4358b930fedbfcdf3ea9277d2fa5c89e8a74ca37 100644
--- a/docs.it4i/salomon/software/comsol/licensing-and-available-versions.md
+++ b/docs.it4i/salomon/software/comsol/licensing-and-available-versions.md
@@ -2,9 +2,9 @@
 
 ## Comsol Licence Can Be Used By:
 
-*   all persons in the carrying out of the CE IT4Innovations Project (In addition to the primary licensee, which is VSB - Technical University of Ostrava, users are CE IT4Innovations third parties - CE IT4Innovations project partners, particularly the University of Ostrava, the Brno University of Technology - Faculty of Informatics, the Silesian University in Opava, Institute of Geonics AS CR.)
-*   all persons who have a valid license
-*   students of the Technical University
+* all persons involved in carrying out the CE IT4Innovations Project (in addition to the primary licensee, which is VSB - Technical University of Ostrava, users are CE IT4Innovations third parties - CE IT4Innovations project partners, particularly the University of Ostrava, the Brno University of Technology - Faculty of Informatics, the Silesian University in Opava, and the Institute of Geonics AS CR)
+* all persons who have a valid license
+* students of the Technical University
 
 ## Comsol EDU Network Licence
 
@@ -16,4 +16,4 @@ The licence intended to be used for science and research, publications, students
 
 ## Available Versions
 
-*   ver. 51
+* ver. 51
diff --git a/docs.it4i/salomon/software/debuggers/aislinn.md b/docs.it4i/salomon/software/debuggers/aislinn.md
index 7db8ebc34343b03af0449b4790f0cd880ced5c6a..0b41542cf1d9167ad517cb2a0f788f433efaf942 100644
--- a/docs.it4i/salomon/software/debuggers/aislinn.md
+++ b/docs.it4i/salomon/software/debuggers/aislinn.md
@@ -1,9 +1,9 @@
 # Aislinn
 
-*   Aislinn is a dynamic verifier for MPI programs. For a fixed input it covers all possible runs with respect to nondeterminism introduced by MPI. It allows to detect bugs (for sure) that occurs very rare in normal runs.
-*   Aislinn detects problems like invalid memory accesses, deadlocks, misuse of MPI, and resource leaks.
-*   Aislinn is open-source software; you can use it without any licensing limitations.
-*   Web page of the project: <http://verif.cs.vsb.cz/aislinn/>
+* Aislinn is a dynamic verifier for MPI programs. For a fixed input it covers all possible runs with respect to the nondeterminism introduced by MPI. This allows it to reliably detect bugs that occur only very rarely in normal runs (a minimal sketch of such a nondeterministic pattern is shown below).
+* Aislinn detects problems like invalid memory accesses, deadlocks, misuse of MPI, and resource leaks.
+* Aislinn is open-source software; you can use it without any licensing limitations.
+* Web page of the project: <http://verif.cs.vsb.cz/aislinn/>
 
 !!! note
     Aislinn is software developed at IT4Innovations and some parts are still considered experimental. If you have any questions or experience any problems, please contact the author: <mailto:stanislav.bohm@vsb.cz>.
@@ -83,20 +83,20 @@ At the beginning of the report there are some basic summaries of the verificatio
 
 It shows us:
 
-*   Error occurs in process 0 in test.cpp on line 16.
-*   Stdout and stderr streams are empty. (The program does not write anything).
-*   The last part shows MPI calls for each process that occurs in the invalid run. The more detailed information about each call can be obtained by mouse cursor.
+* The error occurs in process 0 in test.cpp on line 16.
+* The stdout and stderr streams are empty. (The program does not write anything.)
+* The last part shows the MPI calls for each process that occur in the invalid run. More detailed information about each call can be obtained by hovering the mouse cursor over it.
 
 ### Limitations
 
 Since the verification is a non-trivial process, there are some limitations.
 
-*   The verified process has to terminate in all runs, i.e. we cannot answer the halting problem.
-*   The verification is a computationally and memory demanding process. We put an effort to make it efficient and it is an important point for further research. However covering all runs will be always more demanding than techniques that examines only a single run. The good practise is to start with small instances and when it is feasible, make them bigger. The Aislinn is good to find bugs that are hard to find because they occur very rarely (only in a rare scheduling). Such bugs often do not need big instances.
-*   Aislinn expects that your program is a "standard MPI" program, i.e. processes communicate only through MPI, the verified program does not interacts with the system in some unusual ways (e.g. opening sockets).
+* The verified process has to terminate in all runs, i.e. we cannot answer the halting problem.
+* The verification is a computationally and memory demanding process. We put effort into making it efficient, and it is an important point for further research. However, covering all runs will always be more demanding than techniques that examine only a single run. Good practice is to start with small instances and make them bigger when feasible. Aislinn is good at finding bugs that are hard to find because they occur very rarely (only under a rare scheduling). Such bugs often do not need big instances.
+* Aislinn expects that your program is a "standard MPI" program, i.e. processes communicate only through MPI and the verified program does not interact with the system in unusual ways (e.g. opening sockets).
 
 There are also some limitations bounded to the current version and they will be removed in the future:
 
-*   All files containing MPI calls have to be recompiled by MPI implementation provided by Aislinn. The files that does not contain MPI calls, they do not have to recompiled. Aislinn MPI implementation supports many commonly used calls from MPI-2 and MPI-3 related to point-to-point communication, collective communication, and communicator management. Unfortunately, MPI-IO and one-side communication is not implemented yet.
-*   Each MPI can use only one thread (if you use OpenMP, set OMP_NUM_THREADS to 1).
-*   There are some limitations for using files, but if the program just reads inputs and writes results, it is ok.
+* All files containing MPI calls have to be recompiled with the MPI implementation provided by Aislinn. Files that do not contain MPI calls do not have to be recompiled. The Aislinn MPI implementation supports many commonly used calls from MPI-2 and MPI-3 related to point-to-point communication, collective communication, and communicator management. Unfortunately, MPI-IO and one-sided communication are not implemented yet.
+* Each MPI process can use only one thread (if you use OpenMP, set OMP_NUM_THREADS to 1).
+* There are some limitations on using files, but if the program just reads inputs and writes results, it is fine.
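+
+For illustration, the sketch below shows the kind of nondeterministic receive pattern Aislinn explores; the program is hypothetical and not taken from the Aislinn documentation. Rank 0 receives from MPI_ANY_SOURCE, so the arrival order of the messages differs between runs, and Aislinn covers all of the possible orderings:
+
+```cpp
+#include <mpi.h>
+#include <stdio.h>
+
+int main(int argc, char **argv) {
+    int rank, size, value;
+    MPI_Init(&argc, &argv);
+    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
+    MPI_Comm_size(MPI_COMM_WORLD, &size);
+    if (rank == 0) {
+        /* MPI_ANY_SOURCE makes the arrival order nondeterministic;
+           for a fixed input, Aislinn explores every possible ordering. */
+        for (int i = 1; i < size; i++) {
+            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, 0,
+                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
+            printf("received %d\n", value);
+        }
+    } else {
+        value = rank;
+        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
+    }
+    MPI_Finalize();
+    return 0;
+}
+```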
diff --git a/docs.it4i/salomon/software/debuggers/allinea-ddt.md b/docs.it4i/salomon/software/debuggers/allinea-ddt.md
index b1710f44fde7a369d1beecc49a575fdce9c0c548..0c8128afed0c36c58e3477d31350e3976d363cbd 100644
--- a/docs.it4i/salomon/software/debuggers/allinea-ddt.md
+++ b/docs.it4i/salomon/software/debuggers/allinea-ddt.md
@@ -10,13 +10,13 @@ Allinea MAP is a profiler for C/C++/Fortran HPC codes. It is designed for profil
 
 On Anselm users can debug OpenMP or MPI code that runs up to 64 parallel processes. In case of debugging GPU or Xeon Phi accelerated codes, the limit is 8 accelerators. These limitations mean that:
 
-*   1 user can debug up 64 processes, or
-*   32 users can debug 2 processes, etc.
+* 1 user can debug up to 64 processes, or
+* 32 users can debug 2 processes, etc.
 
 In case of debugging on accelerators:
 
-*   1 user can debug on up to 8 accelerators, or
-*   8 users can debug on single accelerator.
+* 1 user can debug on up to 8 accelerators, or
+* 8 users can each debug on a single accelerator.
 
 ## Compiling Code to Run With DDT
 
diff --git a/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md b/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md
index 224dca9612556aeb83c14fe58554782b55af297b..724e4e58a43d7a196544702b601ffa306da7386d 100644
--- a/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md
+++ b/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md
@@ -4,10 +4,10 @@
 
 Intel® VTune™ Amplifier, part of Intel Parallel Studio, is a GUI profiling tool designed for Intel processors. It offers a graphical performance analysis of single core and multithreaded applications. A highlight of the features:
 
-*   Hotspot analysis
-*   Locks and waits analysis
-*   Low level specific counters, such as branch analysis and memory bandwidth
-*   Power usage analysis - frequency and sleep states.
+* Hotspot analysis
+* Locks and waits analysis
+* Low level specific counters, such as branch analysis and memory bandwidth
+* Power usage analysis - frequency and sleep states.
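+
+Besides the GUI, collections can be run from the command line, which is convenient on compute nodes. A minimal sketch, assuming the amplxe-cl command is available after loading the VTune module and that ./myapp stands in for your application:
+
+```console
+$ amplxe-cl -collect hotspots -result-dir r001hs ./myapp
+$ amplxe-cl -report hotspots -result-dir r001hs
+```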
 
 ![](../../../img/vtune-amplifier.png)
 
diff --git a/docs.it4i/salomon/software/debuggers/valgrind.md b/docs.it4i/salomon/software/debuggers/valgrind.md
index 9759f2c09e902feee23c0c762959355805b3f73e..430118785a08bc43e67a4711396f9ac6b63c4afb 100644
--- a/docs.it4i/salomon/software/debuggers/valgrind.md
+++ b/docs.it4i/salomon/software/debuggers/valgrind.md
@@ -8,20 +8,20 @@ Valgind is an extremely useful tool for debugging memory errors such as [off-by-
 
 The main tools available in Valgrind are:
 
-*   **Memcheck**, the original, must used and default tool. Verifies memory access in you program and can detect use of unitialized memory, out of bounds memory access, memory leaks, double free, etc.
-*   **Massif**, a heap profiler.
-*   **Hellgrind** and **DRD** can detect race conditions in multi-threaded applications.
-*   **Cachegrind**, a cache profiler.
-*   **Callgrind**, a callgraph analyzer.
-*   For a full list and detailed documentation, please refer to the [official Valgrind documentation](http://valgrind.org/docs/).
+* **Memcheck**, the original, most used and default tool. It verifies memory accesses in your program and can detect use of uninitialized memory, out-of-bounds memory access, memory leaks, double free, etc.
+* **Massif**, a heap profiler.
+* **Helgrind** and **DRD** can detect race conditions in multi-threaded applications.
+* **Cachegrind**, a cache profiler.
+* **Callgrind**, a callgraph analyzer.
+* For a full list and detailed documentation, please refer to the [official Valgrind documentation](http://valgrind.org/docs/).
 
 ## Installed Versions
 
 There are two versions of Valgrind available on the cluster.
 
-*   Version 3.8.1, installed by operating system vendor in /usr/bin/valgrind. This version is available by default, without the need to load any module. This version however does not provide additional MPI support. Also, it does not support AVX2 instructions, debugging of an AVX2-enabled executable with this version will fail
-*   Version 3.11.0 built by ICC with support for Intel MPI, available in module Valgrind/3.11.0-intel-2015b. After loading the module, this version replaces the default valgrind.
-*   Version 3.11.0 built by GCC with support for Open MPI, module Valgrind/3.11.0-foss-2015b
+* Version 3.8.1, installed by the operating system vendor in /usr/bin/valgrind. This version is available by default, without the need to load any module. However, this version does not provide additional MPI support. Also, it does not support AVX2 instructions; debugging an AVX2-enabled executable with this version will fail.
+* Version 3.11.0 built by ICC with support for Intel MPI, available in module Valgrind/3.11.0-intel-2015b. After loading the module, this version replaces the default valgrind.
+* Version 3.11.0 built by GCC with support for Open MPI, module Valgrind/3.11.0-foss-2015b
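+
+A quick sanity check that the intended version is active, assuming the module name as listed above:
+
+```console
+$ module load Valgrind/3.11.0-intel-2015b
+$ valgrind --version
+valgrind-3.11.0
+```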
 
 ## Usage
 
diff --git a/docs.it4i/salomon/software/intel-suite/intel-compilers.md b/docs.it4i/salomon/software/intel-suite/intel-compilers.md
index 1a122dbae163406643dc6c3d5fde62a397cff3af..63a05bd91e15c04afa6a3cc8d21231ba030437bc 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-compilers.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-compilers.md
@@ -32,5 +32,5 @@ Read more at <https://software.intel.com/en-us/intel-cplusplus-compiler-16.0-use
 
 Anselm nodes are currently equipped with Sandy Bridge CPUs, while Salomon compute nodes are equipped with the Haswell based architecture. The UV1 SMP compute server has Ivy Bridge CPUs, which are equivalent to Sandy Bridge (only a smaller manufacturing process). The new processors are backward compatible with the Sandy Bridge nodes, so all programs that ran on the Sandy Bridge processors should also run on the new Haswell nodes. To get optimal performance out of the Haswell processors a program should make use of the special AVX2 instructions for this processor. One can do this by recompiling codes with the compiler flags designated to invoke these instructions. For the Intel compiler suite, there are two ways of doing this:
 
-*   Using compiler flag (both for Fortran and C): -xCORE-AVX2. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge/Ivy Bridge nodes.
-*   Using compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This   will generate multiple, feature specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. So this binary will run both on Sandy Bridge/Ivy Bridge and Haswell processors. During runtime it will be decided which path to follow, dependent on which processor you are running on. In general this    will result in larger binaries.
+* Using the compiler flag (both for Fortran and C): -xCORE-AVX2. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge/Ivy Bridge nodes.
+* Using the compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This will generate multiple, feature-specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. Such a binary will run on both Sandy Bridge/Ivy Bridge and Haswell processors; at runtime, the path to follow is chosen based on the processor the binary is running on. In general this will result in larger binaries.
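+
+For illustration, compiling a hypothetical source file mycode.c both ways (the same flags apply to the Fortran compiler):
+
+```console
+$ icc -xCORE-AVX2 mycode.c -o mycode.avx2     # runs on Haswell only
+$ icc -xAVX -axCORE-AVX2 mycode.c -o mycode   # multi-path binary
+```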
diff --git a/docs.it4i/salomon/software/intel-suite/intel-mkl.md b/docs.it4i/salomon/software/intel-suite/intel-mkl.md
index ce284b0ef4ca5096df31282b6ecdbf6c4eada38b..322492010827e5dc2cc63d6ccd7cb3452f1a4214 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-mkl.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-mkl.md
@@ -4,14 +4,14 @@
 
 Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, extensively threaded and optimized for maximum performance. Intel MKL provides these basic math kernels:
 
-*   BLAS (level 1, 2, and 3) and LAPACK linear algebra routines, offering vector, vector-matrix, and matrix-matrix operations.
-*   The PARDISO direct sparse solver, an iterative sparse solver, and supporting sparse BLAS (level 1, 2, and 3) routines for solving sparse systems of equations.
-*   ScaLAPACK distributed processing linear algebra routines for Linux and Windows operating systems, as well as the Basic Linear Algebra Communications Subprograms (BLACS) and the Parallel Basic Linear Algebra Subprograms (PBLAS).
-*   Fast Fourier transform (FFT) functions in one, two, or three dimensions with support for mixed radices (not limited to sizes that are powers of 2), as well as distributed versions of these functions.
-*   Vector Math Library (VML) routines for optimized mathematical operations on vectors.
-*   Vector Statistical Library (VSL) routines, which offer high-performance vectorized random number generators (RNG) for several probability distributions, convolution and correlation routines, and summary statistics functions.
-*   Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search.
-*   Extended Eigensolver, a shared memory version of an eigensolver based on the Feast Eigenvalue Solver.
+* BLAS (level 1, 2, and 3) and LAPACK linear algebra routines, offering vector, vector-matrix, and matrix-matrix operations.
+* The PARDISO direct sparse solver, an iterative sparse solver, and supporting sparse BLAS (level 1, 2, and 3) routines for solving sparse systems of equations.
+* ScaLAPACK distributed processing linear algebra routines for Linux and Windows operating systems, as well as the Basic Linear Algebra Communications Subprograms (BLACS) and the Parallel Basic Linear Algebra Subprograms (PBLAS).
+* Fast Fourier transform (FFT) functions in one, two, or three dimensions with support for mixed radices (not limited to sizes that are powers of 2), as well as distributed versions of these functions.
+* Vector Math Library (VML) routines for optimized mathematical operations on vectors.
+* Vector Statistical Library (VSL) routines, which offer high-performance vectorized random number generators (RNG) for several probability distributions, convolution and correlation routines, and summary statistics functions.
+* Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search.
+* Extended Eigensolver, a shared memory version of an eigensolver based on the Feast Eigenvalue Solver.
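+
+As a minimal illustration of calling one of these kernels, the sketch below multiplies two 2x2 matrices through the CBLAS interface to DGEMM; the file name is hypothetical, and the example assumes compilation with the Intel compiler and the -mkl flag (e.g. icc -mkl dgemm_test.c -o dgemm_test):
+
+```cpp
+#include <stdio.h>
+#include <mkl.h>
+
+int main(void) {
+    /* C = 1.0*A*B + 0.0*C, all matrices 2x2, stored row-major */
+    double A[4] = {1.0, 2.0, 3.0, 4.0};
+    double B[4] = {5.0, 6.0, 7.0, 8.0};
+    double C[4] = {0.0, 0.0, 0.0, 0.0};
+    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
+                2, 2, 2, 1.0, A, 2, B, 2, 0.0, C, 2);
+    printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);
+    return 0;
+}
+```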
 
 For details see the [Intel MKL Reference Manual](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mklman/index.htm).
 
diff --git a/docs.it4i/salomon/software/numerical-languages/matlab.md b/docs.it4i/salomon/software/numerical-languages/matlab.md
index ea4e6181f7934099e005042e1c177adde7654615..59bf293f253eaf189dd3bb95ac08697f7491aa08 100644
--- a/docs.it4i/salomon/software/numerical-languages/matlab.md
+++ b/docs.it4i/salomon/software/numerical-languages/matlab.md
@@ -4,8 +4,8 @@
 
 Matlab is available in versions R2015a and R2015b. There are always two variants of the release:
 
-*   Non commercial or so called EDU variant, which can be used for common research and educational purposes.
-*   Commercial or so called COM variant, which can used also for commercial activities. The licenses for commercial variant are much more expensive, so usually the commercial variant has only subset of features compared to the EDU available.
+* Non-commercial, the so-called EDU variant, which can be used for common research and educational purposes.
+* Commercial, the so-called COM variant, which can also be used for commercial activities. The licenses for the commercial variant are much more expensive, so the commercial variant usually has only a subset of the features available in the EDU variant.
 
 To load the latest version of Matlab, load the module
 
diff --git a/docs.it4i/salomon/storage.md b/docs.it4i/salomon/storage.md
index 9cbb9ac1a80fb7eaaa0812b6698499f95253d60d..5166a7f238308fd8e8e6898f1f12f643c31e9a45 100644
--- a/docs.it4i/salomon/storage.md
+++ b/docs.it4i/salomon/storage.md
@@ -34,13 +34,13 @@ The architecture of Lustre on Salomon is composed of two metadata servers (MDS)
 
 Configuration of the SCRATCH Lustre storage
 
-*   SCRATCH Lustre object storage
+* SCRATCH Lustre object storage
    * Disk array SFA12KX
    * 540 x 4 TB SAS 7.2krpm disk
    * 54 x OST of 10 disks in RAID6 (8+2)
    * 15 x hot-spare disk
    * 4 x 400 GB SSD cache
-*   SCRATCH Lustre metadata storage
+* SCRATCH Lustre metadata storage
    * Disk array EF3015
    * 12 x 600 GB SAS 15 krpm disk