From 2d119ecf24ca0e4ca877cacea41de013597cb595 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?David=20Hrb=C3=A1=C4=8D?= <david@hrbac.cz>
Date: Mon, 23 Jan 2017 12:13:19 +0100
Subject: [PATCH] Clean UNICODE hard space
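
The character being cleaned is the UTF-8 non-breaking space (U+00A0, bytes 0xC2 0xA0). As a rough sketch only (the exact command used for this commit is not recorded here, and GNU sed's \x byte escapes are assumed), a cleanup of this kind can be reproduced with:

```bash
# Replace UTF-8 non-breaking spaces (0xC2 0xA0) with plain ASCII spaces
# in every Markdown file under docs.it4i/ (GNU sed assumed).
find docs.it4i -name '*.md' -print0 | xargs -0 sed -i 's/\xc2\xa0/ /g'
```

Note that where a non-breaking space was directly followed by a regular space, this substitution leaves a double space behind, which still needs a separate pass.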

---
 .../capacity-computing.md                     |  28 +-
 .../compute-nodes.md                          |   2 +-
 .../environment-and-modules.md                |   8 +-
 .../hardware-overview.md                      |  10 +-
 .../introduction.md                           |   2 +-
 .../job-priority.md                           |   8 +-
 .../job-submission-and-execution.md           |  18 +-
 .../anselm-cluster-documentation/prace.md     |   8 +-
 .../resource-allocation-and-job-execution.md  |   2 +-
 .../resources-allocation-policy.md            | 110 +++---
 .../shell-and-data-access.md                  |  16 +-
 .../software/ansys/ansys-cfx.md               |   6 +-
 .../software/ansys/ansys-fluent.md            |  12 +-
 .../software/ansys/ansys-ls-dyna.md           |   6 +-
 .../software/ansys/ansys-mechanical-apdl.md   |   4 +-
 .../software/ansys/ansys.md                   |   6 +-
 .../software/ansys/ls-dyna.md                 |   6 +-
 .../software/chemistry/molpro.md              |   2 +-
 .../software/chemistry/nwchem.md              |   4 +-
 .../software/compilers.md                     |  26 +-
 .../software/comsol-multiphysics.md           |   8 +-
 .../software/debuggers/cube.md                |   2 +-
 .../software/debuggers/debuggers.md           |   2 +-
 .../intel-performance-counter-monitor.md      |  12 +-
 .../debuggers/intel-vtune-amplifier.md        |  12 +-
 .../software/debuggers/papi.md                |  28 +-
 .../software/debuggers/scalasca.md            |   4 +-
 .../software/debuggers/score-p.md             |  10 +-
 .../software/debuggers/total-view.md          |  36 +-
 .../software/debuggers/valgrind.md            |  14 +-
 .../software/debuggers/vampir.md              |   6 +-
 .../software/gpi2.md                          |   4 +-
 .../software/intel-suite/intel-compilers.md   |  10 +-
 .../software/intel-suite/intel-mkl.md         |  14 +-
 .../software/intel-suite/intel-tbb.md         |   4 +-
 .../software/isv_licenses.md                  |   2 +-
 .../software/java.md                          |   2 +-
 .../software/kvirtualization.md               |  54 +--
 .../software/mpi/Running_OpenMPI.md           |   4 +-
 .../software/mpi/mpi.md                       |  10 +-
 .../software/mpi/mpi4py-mpi-for-python.md     |   6 +-
 .../software/mpi/running-mpich2.md            |   4 +-
 .../software/numerical-languages/matlab.md    |  16 +-
 .../numerical-languages/matlab_1314.md        |  12 +-
 .../software/numerical-languages/octave.md    |  24 +-
 .../software/numerical-languages/r.md         |   6 +-
 .../software/numerical-libraries/fftw.md      |  44 +--
 .../software/numerical-libraries/gsl.md       |  60 ++--
 .../magma-for-intel-xeon-phi.md               |  26 +-
 .../software/nvidia-cuda.md                   | 186 +++++-----
 .../omics-master/diagnostic-component-team.md |   4 +-
 .../software/omics-master/overview.md         | 102 +++---
 .../priorization-component-bierapp.md         |  10 +-
 .../software/openfoam.md                      |  20 +-
 .../software/paraview.md                      |   8 +-
 .../anselm-cluster-documentation/storage.md   |  36 +-
 docs.it4i/downtimes_history.md                |   4 +-
 .../graphical-user-interface.md               |   2 +-
 .../graphical-user-interface/vnc.md           |  36 +-
 .../x-window-system.md                        |   6 +-
 .../shell-access-and-data-transfer/putty.md   |   6 +-
 .../puttygen.md                               |   2 +-
 .../ssh-keys.md                               |  12 +-
 .../applying-for-resources.md                 |   2 +-
 .../obtaining-login-credentials.md            |   8 +-
 .../vpn-access.md                             |   2 +-
 .../vpn1-access.md                            |   2 +-
 docs.it4i/index.md                            |   2 +-
 docs.it4i/salomon/7d-enhanced-hypercube.md    |   2 +-
 docs.it4i/salomon/capacity-computing.md       |  28 +-
 docs.it4i/salomon/compute-nodes.md            |  38 +-
 docs.it4i/salomon/environment-and-modules.md  |  14 +-
 docs.it4i/salomon/hardware-overview.md        |   4 +-
 docs.it4i/salomon/ib-single-plane-topology.md |   2 +-
 docs.it4i/salomon/introduction.md             |   2 +-
 docs.it4i/salomon/job-priority.md             |   8 +-
 .../salomon/job-submission-and-execution.md   |  20 +-
 docs.it4i/salomon/network.md                  |   6 +-
 docs.it4i/salomon/prace.md                    |  10 +-
 .../salomon/resources-allocation-policy.md    | 112 +++---
 docs.it4i/salomon/shell-and-data-access.md    |  14 +-
 docs.it4i/salomon/software/ansys/ansys-cfx.md |   6 +-
 .../salomon/software/ansys/ansys-fluent.md    |  12 +-
 .../salomon/software/ansys/ansys-ls-dyna.md   |   6 +-
 .../software/ansys/ansys-mechanical-apdl.md   |   4 +-
 docs.it4i/salomon/software/ansys/ansys.md     |   6 +-
 .../ansys/setting-license-preferences.md      |   4 +-
 docs.it4i/salomon/software/ansys/workbench.md |   2 +-
 .../salomon/software/chemistry/molpro.md      |   2 +-
 .../salomon/software/chemistry/nwchem.md      |   6 +-
 .../salomon/software/chemistry/phono3py.md    |  70 ++--
 docs.it4i/salomon/software/compilers.md       |  24 +-
 .../software/comsol/comsol-multiphysics.md    |   8 +-
 .../debuggers/intel-vtune-amplifier.md        |  12 +-
 .../salomon/software/debuggers/total-view.md  |  30 +-
 .../salomon/software/debuggers/valgrind.md    |  10 +-
 .../software/intel-suite/intel-advisor.md     |   2 +-
 .../software/intel-suite/intel-compilers.md   |  12 +-
 .../software/intel-suite/intel-debugger.md    |   2 +-
 .../salomon/software/intel-suite/intel-mkl.md |  14 +-
 .../salomon/software/intel-suite/intel-tbb.md |   6 +-
 .../intel-trace-analyzer-and-collector.md     |   2 +-
 docs.it4i/salomon/software/intel-xeon-phi.md  | 330 +++++++++---------
 docs.it4i/salomon/software/java.md            |   2 +-
 .../salomon/software/mpi/Running_OpenMPI.md   |   2 +-
 docs.it4i/salomon/software/mpi/mpi.md         |   6 +-
 .../software/mpi/mpi4py-mpi-for-python.md     |   6 +-
 .../software/numerical-languages/matlab.md    |  16 +-
 .../software/numerical-languages/octave.md    |   2 +-
 .../salomon/software/numerical-languages/r.md |   8 +-
 docs.it4i/salomon/storage.md                  |  42 +--
 111 files changed, 1042 insertions(+), 1042 deletions(-)

diff --git a/docs.it4i/anselm-cluster-documentation/capacity-computing.md b/docs.it4i/anselm-cluster-documentation/capacity-computing.md
index 5913a27fd..e916a1412 100644
--- a/docs.it4i/anselm-cluster-documentation/capacity-computing.md
+++ b/docs.it4i/anselm-cluster-documentation/capacity-computing.md
@@ -3,7 +3,7 @@ Capacity computing
 
 Introduction
 ------------
-In many cases, it is useful to submit huge (>100+) number of computational jobs into the PBS queue system. Huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving best runtime, throughput and computer utilization.
+In many cases, it is useful to submit a huge number (100+) of computational jobs into the PBS queue system. A huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving the best runtime, throughput and computer utilization.
 
 However, executing huge number of jobs via the PBS queue may strain the system. This strain may result in slow response to commands, inefficient scheduling and overall degradation of performance and user experience, for all users. For this reason, the number of jobs is **limited to 100 per user, 1000 per job array**
 
@@ -12,7 +12,7 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 
 -   Use [Job arrays](capacity-computing/#job-arrays) when running huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
 -   Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
--   Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running huge number of single core jobs
+-   Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running huge number of single core jobs
 
 Policy
 ------
@@ -73,7 +73,7 @@ cp $PBS_O_WORKDIR/$TASK input ; cp $PBS_O_WORKDIR/myprog.x .
 cp output $PBS_O_WORKDIR/$TASK.out
 ```
 
-In this example, the submit directory holds the 900 input files, executable myprog.x and the jobscript file. As input for each run, we take the filename of input file from created tasklist file. We copy the input file to local scratch /lscratch/$PBS_JOBID, execute the myprog.x and copy the output file back to >the submit directory, under the $TASK.out name. The myprog.x runs on one node only and must use threads to run in parallel. Be aware, that if the myprog.x **is not multithreaded**, then all the **jobs are run as single thread programs in sequential** manner. Due to allocation of the whole node, the accounted time is equal to the usage of whole node**, while using only 1/16 of the node!
+In this example, the submit directory holds the 900 input files, the executable myprog.x and the jobscript file. As input for each run, we take the filename of the input file from the created tasklist file. We copy the input file to the local scratch /lscratch/$PBS_JOBID, execute myprog.x and copy the output file back to the submit directory, under the $TASK.out name. The myprog.x runs on one node only and must use threads to run in parallel. Be aware that if myprog.x **is not multithreaded**, then all the **jobs are run as single thread programs in a sequential** manner. Due to allocation of the whole node, the accounted time is equal to **the usage of the whole node**, while using only 1/16 of the node!
 
 If huge number of parallel multicore (in means of multinode multithread, e. g. MPI enabled) jobs is needed to run, then a job array approach should also be used. The main difference compared to previous example using one node is that the local scratch should not be used (as it's not shared between nodes) and MPI or other technique for parallel multinode run has to be used properly.
 
@@ -156,7 +156,7 @@ GNU parallel
 !!! Note "Note"
 	Use GNU parallel to run many single core tasks on one node.
 
-GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful in running single core jobs via the queue system on  Anselm.
+GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful in running single core jobs via the queue system on Anselm.
 
 For more information and examples see the parallel man page:
 
@@ -171,7 +171,7 @@ The GNU parallel shell executes multiple instances of the jobscript using all co
 
 Example:
 
-Assume we have 101 input files with name beginning with "file" (e. g. file001, ..., file101). Assume we would like to use each of these input files with program executable myprog.x, each as a separate single core job. We call these single core jobs tasks.
+Assume we have 101 input files with names beginning with "file" (e.g. file001, ..., file101). Assume we would like to use each of these input files with the program executable myprog.x, each as a separate single core job. We call these single core jobs tasks.
 
 First, we create a tasklist file, listing all tasks - all input files in our example:
 
@@ -201,13 +201,13 @@ TASK=$1
 cp $PBS_O_WORKDIR/$TASK input
 
 # execute the calculation
-cat  input > output
+cat input > output
 
 # copy output file to submit directory
 cp output $PBS_O_WORKDIR/$TASK.out
 ```
 
-In this example, tasks from tasklist are executed via the GNU parallel. The jobscript executes multiple instances of itself in parallel, on all cores of the node. Once an instace of jobscript is finished, new instance starts until all entries in tasklist are processed. Currently processed entry of the joblist may be retrieved via $1 variable. Variable $TASK expands to one of the input filenames from tasklist. We copy the input file to local scratch, execute the myprog.x and copy the output file back to the submit directory, under the $TASK.out name.
+In this example, tasks from the tasklist are executed via GNU parallel. The jobscript executes multiple instances of itself in parallel, on all cores of the node. Once an instance of the jobscript is finished, a new instance starts until all entries in the tasklist are processed. The currently processed entry of the tasklist may be retrieved via the $1 variable. The variable $TASK expands to one of the input filenames from the tasklist. We copy the input file to local scratch, execute myprog.x and copy the output file back to the submit directory, under the $TASK.out name.
 
 ### Submit the job
 
@@ -218,7 +218,7 @@ $ qsub -N JOBNAME jobscript
 12345.dm2
 ```
 
-In this example, we submit a job of 101 tasks. 16 input files will be processed in  parallel. The 101 tasks on 16 cores are assumed to complete in less than 2 hours.
+In this example, we submit a job of 101 tasks. 16 input files will be processed in parallel. The 101 tasks on 16 cores are assumed to complete in less than 2 hours.
 
 Please note the #PBS directives in the beginning of the jobscript file, dont' forget to set your valid PROJECT_ID and desired queue.
 
@@ -239,7 +239,7 @@ Combined approach, very similar to job arrays, can be taken. Job array is submit
 
 Example:
 
-Assume we have 992 input files with name beginning with "file" (e. g. file001, ..., file992). Assume we would like to use each of these input files with program executable myprog.x, each as a separate single core job. We call these single core jobs tasks.
+Assume we have 992 input files with names beginning with "file" (e.g. file001, ..., file992). Assume we would like to use each of these input files with the program executable myprog.x, each as a separate single core job. We call these single core jobs tasks.
 
 First, we create a tasklist file, listing all tasks - all input files in our example:
 
@@ -283,14 +283,14 @@ cat input > output
 cp output $PBS_O_WORKDIR/$TASK.out
 ```
 
-In this example, the jobscript executes in multiple instances in parallel, on all cores of a computing node.  Variable $TASK expands to one of the input filenames from tasklist. We copy the input file to local scratch, execute the myprog.x and copy the output file back to the submit directory, under the $TASK.out name.  The numtasks file controls how many tasks will be run per subjob. Once an task is finished, new task starts, until the number of tasks  in numtasks file is reached.
+In this example, the jobscript executes in multiple instances in parallel, on all cores of a computing node. The variable $TASK expands to one of the input filenames from the tasklist. We copy the input file to local scratch, execute myprog.x and copy the output file back to the submit directory, under the $TASK.out name. The numtasks file controls how many tasks will be run per subjob. Once a task is finished, a new task starts, until the number of tasks in the numtasks file is reached.
 
 !!! Note "Note"
-	Select  subjob walltime and number of tasks per subjob  carefully
+	Select subjob walltime and number of tasks per subjob carefully
 
 When deciding this values, think about following guiding rules:
 
-1.  Let n=N/16.  Inequality (n+1) * T &lt; W should hold. The N is number of tasks per subjob, T is expected single task walltime and W is subjob walltime. Short subjob walltime improves scheduling and job throughput.
+1.  Let n=N/16. The inequality (n+1) * T < W should hold, where N is the number of tasks per subjob, T is the expected single task walltime and W is the subjob walltime. A short subjob walltime improves scheduling and job throughput.
 2.  Number of tasks should be modulo 16.
 3.  These rules are valid only when all tasks have similar task walltimes T.
 
@@ -303,14 +303,14 @@ $ qsub -N JOBNAME -J 1-992:32 jobscript
 12345[].dm2
 ```
 
-In this example, we submit a job array of 31 subjobs. Note the  -J 1-992:**32**, this must be the same as the number sent to numtasks file. Each subjob will run on full node and process 16 input files in parallel, 32 in total per subjob.  Every subjob is assumed to complete in less than 2 hours.
+In this example, we submit a job array of 31 subjobs. Note the -J 1-992:**32**; this must be the same as the number written to the numtasks file. Each subjob will run on a full node and process 16 input files in parallel, 32 in total per subjob. Every subjob is assumed to complete in less than 2 hours.
 
 Please note the #PBS directives in the beginning of the jobscript file, dont' forget to set your valid PROJECT_ID and desired queue.
 
 Examples
 --------
 
-Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run huge number of jobs. We recommend to try out the examples, before using this for running production jobs.
+Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run a huge number of jobs. We recommend trying out the examples before using this approach for running production jobs.
 
 Unzip the archive in an empty directory on Anselm and follow the instructions in the README file
 
diff --git a/docs.it4i/anselm-cluster-documentation/compute-nodes.md b/docs.it4i/anselm-cluster-documentation/compute-nodes.md
index 86a89350b..2d4f1707c 100644
--- a/docs.it4i/anselm-cluster-documentation/compute-nodes.md
+++ b/docs.it4i/anselm-cluster-documentation/compute-nodes.md
@@ -95,7 +95,7 @@ $ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16:cpu_freq=24 -I
 
 In this example, we allocate 4 nodes, 16 cores at 2.4GHhz per node.
 
-Intel Turbo Boost Technology is used by default,  you can disable it for all nodes of job by using resource attribute cpu_turbo_boost.
+Intel Turbo Boost Technology is used by default; you can disable it for all nodes of a job by using the resource attribute cpu_turbo_boost.
 
 ```bash
     $ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16 -l cpu_turbo_boost=0 -I
diff --git a/docs.it4i/anselm-cluster-documentation/environment-and-modules.md b/docs.it4i/anselm-cluster-documentation/environment-and-modules.md
index 1dafc5879..0835ce008 100644
--- a/docs.it4i/anselm-cluster-documentation/environment-and-modules.md
+++ b/docs.it4i/anselm-cluster-documentation/environment-and-modules.md
@@ -29,12 +29,12 @@ fi
 
 ### Application Modules
 
-In order to configure your shell for  running particular application on Anselm we use Module package interface.
+In order to configure your shell for running a particular application on Anselm, we use the Module package interface.
 
 !!! Note "Note"
 	The modules set up the application paths, library paths and environment variables for running particular application.
 
-    We have also second modules repository. This modules repository is created using tool called EasyBuild. On Salomon cluster, all modules will be build by this tool. If you want to use software from this modules repository, please follow instructions in section [Application Modules Path Expansion](environment-and-modules/#EasyBuild).
+    We also have a second modules repository. This modules repository is created using a tool called EasyBuild. On the Salomon cluster, all modules will be built by this tool. If you want to use software from this modules repository, please follow the instructions in the section [Application Modules Path Expansion](environment-and-modules/#EasyBuild).
 
 The modules may be loaded, unloaded and switched, according to momentary needs.
 
@@ -44,7 +44,7 @@ To check available modules use
 $ module avail
 ```
 
-To load a module, for example the octave module  use
+To load a module, for example the octave module, use
 
 ```bash
 $ module load octave
@@ -58,7 +58,7 @@ To check loaded modules use
 $ module list
 ```
 
- To unload a module, for example the octave module use
+To unload a module, for example the octave module, use
 
 ```bash
 $ module unload octave
diff --git a/docs.it4i/anselm-cluster-documentation/hardware-overview.md b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
index 9d129933b..a13b08f2d 100644
--- a/docs.it4i/anselm-cluster-documentation/hardware-overview.md
+++ b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
@@ -15,17 +15,17 @@ There are four types of compute nodes:
 
 -   180 compute nodes without the accelerator
 -   23 compute nodes with GPU accelerator - equipped with NVIDIA Tesla Kepler K20
--   4 compute nodes with MIC accelerator - equipped with Intel Xeon Phi 5110P
+-   4 compute nodes with MIC accelerator - equipped with Intel Xeon Phi 5110P
 -   2 fat nodes - equipped with 512 GB RAM and two 100 GB SSD drives
 
 [More about Compute nodes](compute-nodes/).
 
 GPU and accelerated nodes are available upon request, see the [Resources Allocation Policy](resources-allocation-policy/).
 
-All these nodes are interconnected by fast InfiniBand network and Ethernet network.  [More about the Network](network/).
+All these nodes are interconnected by a fast InfiniBand network and an Ethernet network. [More about the Network](network/).
 Every chassis provides InfiniBand switch, marked **isw**, connecting all nodes in the chassis, as well as connecting the chassis to the upper level switches.
 
-All nodes share 360 TB /home disk storage to store user files. The 146 TB shared /scratch storage is available for the scratch data. These file systems are provided by Lustre parallel file system. There is also local disk storage available on all compute nodes in /lscratch.  [More about Storage](storage/).
+All nodes share 360 TB of /home disk storage to store user files. The 146 TB shared /scratch storage is available for scratch data. These file systems are provided by the Lustre parallel file system. There is also local disk storage available on all compute nodes in /lscratch. [More about Storage](storage/).
 
 The user access to the Anselm cluster is provided by two login nodes login1, login2, and data mover node dm1. [More about accessing cluster.](shell-and-data-access/)
 
@@ -47,8 +47,8 @@ The parameters are summarized in the following tables:
 |MIC accelerated|4, cn[204-207]|
 |Fat compute nodes|2, cn[208-209]|
 |**In total**||
-|Total theoretical peak performance  (Rpeak)|94 TFLOP/s|
-|Total max. LINPACK performance  (Rmax)|73 TFLOP/s|
+|Total theoretical peak performance (Rpeak)|94 TFLOP/s|
+|Total max. LINPACK performance (Rmax)|73 TFLOP/s|
 |Total amount of RAM|15.136 TB|
 
   |Node|Processor|Memory|Accelerator|
diff --git a/docs.it4i/anselm-cluster-documentation/introduction.md b/docs.it4i/anselm-cluster-documentation/introduction.md
index 574bf41d3..ffdac2802 100644
--- a/docs.it4i/anselm-cluster-documentation/introduction.md
+++ b/docs.it4i/anselm-cluster-documentation/introduction.md
@@ -1,7 +1,7 @@
 Introduction
 ============
 
-Welcome to Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totaling 3344 compute cores with 15 TB RAM and giving over 94 TFLOP/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64 GB RAM, and 500 GB hard disk drive. Nodes are interconnected by fully non-blocking fat-tree InfiniBand network and equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).
+Welcome to the Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totaling 3344 compute cores with 15 TB RAM and giving over 94 TFLOP/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64 GB RAM, and a 500 GB hard disk drive. Nodes are interconnected by a fully non-blocking fat-tree InfiniBand network and equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).
 
 The cluster runs bullx Linux ([bull](http://www.bull.com/bullx-logiciels/systeme-exploitation.html)) [operating system](software/operating-system/), which is compatible with the  RedHat [ Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/).
 
diff --git a/docs.it4i/anselm-cluster-documentation/job-priority.md b/docs.it4i/anselm-cluster-documentation/job-priority.md
index cb378282a..454bbf200 100644
--- a/docs.it4i/anselm-cluster-documentation/job-priority.md
+++ b/docs.it4i/anselm-cluster-documentation/job-priority.md
@@ -1,12 +1,12 @@
 Job scheduling
 ==============
 
-Job execution priority
+Job execution priority
 ----------------------
 
 Scheduler gives each job an execution priority and then uses this job execution priority to select which job(s) to run.
 
-Job execution priority on Anselm is determined by these job properties (in order of importance):
+Job execution priority on Anselm is determined by these job properties (in order of importance):
 
 1.  queue priority
 2.  fair-share priority
@@ -16,7 +16,7 @@ Job execution priority on Anselm is determined by these job properties (in orde
 
 Queue priority is priority of queue where job is queued before execution.
 
-Queue priority has the biggest impact on job execution priority. Execution priority of jobs in higher priority queues is always greater than execution priority of jobs in lower priority queues. Other properties of job used for determining job execution priority (fair-share priority, eligible time) cannot compete with queue priority.
+Queue priority has the biggest impact on job execution priority. Execution priority of jobs in higher priority queues is always greater than execution priority of jobs in lower priority queues. Other job properties used for determining job execution priority (fair-share priority, eligible time) cannot compete with queue priority.
 
 Queue priorities can be seen at <https://extranet.it4i.cz/anselm/queues>
 
@@ -48,7 +48,7 @@ Eligible time is amount (in seconds) of eligible time job accrued while waiting
 
 Eligible time has the least impact on execution priority. Eligible time is used for sorting jobs with equal queue priority and fair-share priority. It is very, very difficult for eligible time to compete with fair-share priority.
 
-Eligible time can be seen as eligible_time attribute of job.
+Eligible time can be seen as the eligible_time attribute of a job.
 
 ### Formula
 
diff --git a/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md b/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md
index b42196620..b500a5a2b 100644
--- a/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md
+++ b/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md
@@ -41,13 +41,13 @@ In this example, we allocate 4 nodes, 16 cores per node, for 1 hour. We allocate
 $ qsub -A OPEN-0-0 -q qnvidia -l select=10:ncpus=16 ./myjob
 ```
 
-In this example, we allocate 10 nvidia accelerated nodes, 16 cores per node, for  24 hours. We allocate these resources via the qnvidia queue. Jobscript myjob will be executed on the first node in the allocation.
+In this example, we allocate 10 NVIDIA accelerated nodes, 16 cores per node, for 24 hours. We allocate these resources via the qnvidia queue. Jobscript myjob will be executed on the first node in the allocation.
 
 ```bash
 $ qsub -A OPEN-0-0 -q qfree -l select=10:ncpus=16 ./myjob
 ```
 
-In this example, we allocate 10  nodes, 16 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. Jobscript myjob will be executed on the first node in the allocation.
+In this example, we allocate 10 nodes, 16 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. Jobscript myjob will be executed on the first node in the allocation.
 
 All qsub options may be [saved directly into the jobscript](job-submission-and-execution/#PBSsaved). In such a case, no options to qsub are needed.
 
@@ -72,11 +72,11 @@ Specific nodes may be allocated via the PBS
 qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16:host=cn171+1:ncpus=16:host=cn172 -I
 ```
 
-In this example, we allocate nodes cn171 and cn172, all 16 cores per node, for 24 hours.  Consumed resources will be accounted to the Project identified by Project ID OPEN-0-0. The resources will be available interactively.
+In this example, we allocate nodes cn171 and cn172, all 16 cores per node, for 24 hours. Consumed resources will be accounted to the Project identified by Project ID OPEN-0-0. The resources will be available interactively.
 
 ### Placement by CPU type
 
-Nodes equipped with Intel Xeon E5-2665 CPU have base clock frequency 2.4GHz, nodes equipped with Intel Xeon E5-2470 CPU have base frequency 2.3 GHz (see section Compute Nodes for details).  Nodes may be selected via the PBS resource attribute cpu_freq .
+Nodes equipped with the Intel Xeon E5-2665 CPU have a base clock frequency of 2.4 GHz, while nodes equipped with the Intel Xeon E5-2470 CPU have a base frequency of 2.3 GHz (see section Compute Nodes for details). Nodes may be selected via the PBS resource attribute cpu_freq.
 
 |CPU Type|base freq.|Nodes|cpu_freq attribute|
 |---|---|---|---|
@@ -91,7 +91,7 @@ In this example, we allocate 4 nodes, 16 cores, selecting only the nodes with In
 
 ### Placement by IB switch
 
-Groups of computational nodes are connected to chassis integrated Infiniband switches. These switches form the leaf switch layer of the [Infiniband network](../network/) fat tree topology. Nodes sharing the leaf switch can communicate most efficiently. Sharing the same switch prevents hops in the network and provides for unbiased, most efficient network communication.
+Groups of computational nodes are connected to chassis-integrated InfiniBand switches. These switches form the leaf switch layer of the [Infiniband network](../network/) fat tree topology. Nodes sharing the leaf switch can communicate most efficiently. Sharing the same switch prevents hops in the network and provides for unbiased, most efficient network communication.
 
 Nodes sharing the same switch may be selected via the PBS resource attribute ibswitch. Values of this attribute are iswXX, where XX is the switch number. The node-switch mapping can be seen at [Hardware Overview](../hardware-overview/) section.
 
@@ -129,7 +129,7 @@ In the following example, we select an allocation for benchmarking a very specia
     -N Benchmark ./mybenchmark
 ```
 
-The MPI processes will be distributed differently on the nodes connected to the two switches. On the isw10 nodes, we will run 1 MPI process per node 16 threads per process, on isw20  nodes we will run 16 plain MPI processes.
+The MPI processes will be distributed differently on the nodes connected to the two switches. On the isw10 nodes, we will run 1 MPI process per node with 16 threads per process; on the isw20 nodes we will run 16 plain MPI processes.
 
 Although this example is somewhat artificial, it demonstrates the flexibility of the qsub command options.
 
@@ -207,8 +207,8 @@ $ check-pbs-jobs --jobid 35141.dm2 --print-job-out
 JOB 35141.dm2, session_id 71995, user user2, nodes cn164,cn165
 Print job standard output:
 ======================== Job start  ==========================
-Started at    : Fri Aug 30 02:47:53 CEST 2013
-Script name   : script
+Started at    : Fri Aug 30 02:47:53 CEST 2013
+Script name   : script
 Run loop 1
 Run loop 2
 Run loop 3
@@ -278,7 +278,7 @@ $ pwd
 In this example, 4 nodes were allocated interactively for 1 hour via the qexp queue. The interactive shell is executed in the home directory.
 
 !!! Note "Note"
-	All nodes within the allocation may be accessed via ssh.  Unallocated nodes are not accessible to user.
+	All nodes within the allocation may be accessed via ssh. Unallocated nodes are not accessible to the user.
 
 The allocated nodes are accessible via ssh from login nodes. The nodes may access each other via ssh as well.
 
diff --git a/docs.it4i/anselm-cluster-documentation/prace.md b/docs.it4i/anselm-cluster-documentation/prace.md
index b78343743..74520cdd2 100644
--- a/docs.it4i/anselm-cluster-documentation/prace.md
+++ b/docs.it4i/anselm-cluster-documentation/prace.md
@@ -243,10 +243,10 @@ Users who have undergone the full local registration procedure (including signin
 ```bash
     $ it4ifree
     Password:
-         PID    Total   Used   ...by me Free
-       -------- ------- ------ -------- -------
-       OPEN-0-0 1500000 400644   225265 1099356
-       DD-13-1    10000   2606     2606    7394
+         PID    Total   Used   ...by me Free
+       -------- ------- ------ -------- -------
+       OPEN-0-0 1500000 400644   225265 1099356
+       DD-13-1    10000   2606     2606    7394
 ```
 
 By default file system quota is applied. To check the current status of the quota use
diff --git a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution.md b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution.md
index a4f8f2e36..4d4d9f50d 100644
--- a/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution.md
+++ b/docs.it4i/anselm-cluster-documentation/resource-allocation-and-job-execution.md
@@ -34,6 +34,6 @@ Capacity computing
 
 Use GNU Parallel and/or Job arrays when running (many) single core jobs.
 
-In many cases, it is useful to submit huge (100+) number of computational jobs into the PBS queue system. Huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving best runtime, throughput and computer utilization. In this chapter, we discuss the the recommended way to run huge number of jobs, including **ways to run huge number of single core jobs**.
+In many cases, it is useful to submit a huge number (100+) of computational jobs into the PBS queue system. A huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving the best runtime, throughput and computer utilization. In this chapter, we discuss the recommended way to run a huge number of jobs, including **ways to run a huge number of single core jobs**.
 
 Read more on [Capacity computing](capacity-computing/) page.
diff --git a/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md b/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
index 762904ae5..57fe2ee65 100644
--- a/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
+++ b/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
@@ -29,7 +29,7 @@ The resources are allocated to the job in a fair-share fashion, subject to const
 
 ### Notes
 
-The job wall clock time defaults to **half the maximum time**, see table above. Longer wall time limits can be  [set manually, see examples](job-submission-and-execution/).
+The job wall clock time defaults to **half the maximum time**, see table above. Longer wall time limits can be [set manually, see examples](job-submission-and-execution/).
 
 Jobs that exceed the reserved wall clock time (Req'd Time) get killed automatically. Wall clock time limit can be changed for queuing jobs (state Q) using the qalter command, however can not be changed for a running job (state R).
 
@@ -54,55 +54,55 @@ $ rspbs
 Usage: rspbs [options]
 
 Options:
-  --version             show program's version number and exit
-  -h, --help            show this help message and exit
-  --get-node-ncpu-chart
-                        Print chart of allocated ncpus per node
-  --summary             Print summary
-  --get-server-details  Print server
-  --get-queues          Print queues
-  --get-queues-details  Print queues details
-  --get-reservations    Print reservations
-  --get-reservations-details
-                        Print reservations details
-  --get-nodes           Print nodes of PBS complex
-  --get-nodeset         Print nodeset of PBS complex
-  --get-nodes-details   Print nodes details
-  --get-jobs            Print jobs
-  --get-jobs-details    Print jobs details
-  --get-jobs-check-params
-                        Print jobid, job state, session_id, user, nodes
-  --get-users           Print users of jobs
-  --get-allocated-nodes
-                        Print allocated nodes of jobs
-  --get-allocated-nodeset
-                        Print allocated nodeset of jobs
-  --get-node-users      Print node users
-  --get-node-jobs       Print node jobs
-  --get-node-ncpus      Print number of ncpus per node
-  --get-node-allocated-ncpus
-                        Print number of allocated ncpus per node
-  --get-node-qlist      Print node qlist
-  --get-node-ibswitch   Print node ibswitch
-  --get-user-nodes      Print user nodes
-  --get-user-nodeset    Print user nodeset
-  --get-user-jobs       Print user jobs
-  --get-user-jobc       Print number of jobs per user
-  --get-user-nodec      Print number of allocated nodes per user
-  --get-user-ncpus      Print number of allocated ncpus per user
-  --get-qlist-nodes     Print qlist nodes
-  --get-qlist-nodeset   Print qlist nodeset
-  --get-ibswitch-nodes  Print ibswitch nodes
-  --get-ibswitch-nodeset
-                        Print ibswitch nodeset
-  --state=STATE         Only for given job state
-  --jobid=JOBID         Only for given job ID
-  --user=USER           Only for given user
-  --node=NODE           Only for given node
-  --nodestate=NODESTATE
-                        Only for given node state (affects only --get-node*
-                        --get-qlist-* --get-ibswitch-* actions)
-  --incl-finished       Include finished jobs
+  --version             show program's version number and exit
+  -h, --help            show this help message and exit
+  --get-node-ncpu-chart
+                        Print chart of allocated ncpus per node
+  --summary             Print summary
+  --get-server-details  Print server
+  --get-queues          Print queues
+  --get-queues-details  Print queues details
+  --get-reservations    Print reservations
+  --get-reservations-details
+                        Print reservations details
+  --get-nodes           Print nodes of PBS complex
+  --get-nodeset         Print nodeset of PBS complex
+  --get-nodes-details   Print nodes details
+  --get-jobs            Print jobs
+  --get-jobs-details    Print jobs details
+  --get-jobs-check-params
+                        Print jobid, job state, session_id, user, nodes
+  --get-users           Print users of jobs
+  --get-allocated-nodes
+                        Print allocated nodes of jobs
+  --get-allocated-nodeset
+                        Print allocated nodeset of jobs
+  --get-node-users      Print node users
+  --get-node-jobs       Print node jobs
+  --get-node-ncpus      Print number of ncpus per node
+  --get-node-allocated-ncpus
+                        Print number of allocated ncpus per node
+  --get-node-qlist      Print node qlist
+  --get-node-ibswitch   Print node ibswitch
+  --get-user-nodes      Print user nodes
+  --get-user-nodeset    Print user nodeset
+  --get-user-jobs       Print user jobs
+  --get-user-jobc       Print number of jobs per user
+  --get-user-nodec      Print number of allocated nodes per user
+  --get-user-ncpus      Print number of allocated ncpus per user
+  --get-qlist-nodes     Print qlist nodes
+  --get-qlist-nodeset   Print qlist nodeset
+  --get-ibswitch-nodes  Print ibswitch nodes
+  --get-ibswitch-nodeset
+                        Print ibswitch nodeset
+  --state=STATE         Only for given job state
+  --jobid=JOBID         Only for given job ID
+  --user=USER           Only for given user
+  --node=NODE           Only for given node
+  --nodestate=NODESTATE
+                        Only for given node state (affects only --get-node*
+                        --get-qlist-* --get-ibswitch-* actions)
+  --incl-finished       Include finished jobs
 ```
 
 Resources Accounting Policy
@@ -110,7 +110,7 @@ Resources Accounting Policy
 
 ### The Core-Hour
 
-The resources that are currently subject to accounting are the core-hours. The core-hours are accounted on the wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. 1 core-hour is defined as 1 processor core allocated for 1 hour of wall clock time. Allocating a full node (16 cores) for 1 hour accounts to 16 core-hours. See example in the  [Job submission and execution](job-submission-and-execution/) section.
+The resources that are currently subject to accounting are the core-hours. The core-hours are accounted on a wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. 1 core-hour is defined as 1 processor core allocated for 1 hour of wall clock time. Allocating a full node (16 cores) for 1 hour accounts for 16 core-hours. See the example in the [Job submission and execution](job-submission-and-execution/) section.
 
 ### Check consumed resources
 
@@ -122,8 +122,8 @@ User may check at any time, how many core-hours have been consumed by himself/he
 ```bash
 $ it4ifree
 Password:
-     PID    Total   Used   ...by me Free
-   -------- ------- ------ -------- -------
-   OPEN-0-0 1500000 400644   225265 1099356
-   DD-13-1    10000   2606     2606    7394
+     PID    Total   Used   ...by me Free
+   -------- ------- ------ -------- -------
+   OPEN-0-0 1500000 400644   225265 1099356
+   DD-13-1    10000   2606     2606    7394
 ```
diff --git a/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md b/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md
index 2163cb123..912e49a16 100644
--- a/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md
+++ b/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md
@@ -16,8 +16,8 @@ The authentication is by the [private key](../get-started-with-it4innovations/ac
 !!! Note "Note"
 	Please verify SSH fingerprints during the first logon. They are identical on all login nodes:
 
-	29:b3:f4:64:b0:73:f5:6f:a7:85:0f:e0:0d:be:76:bf (DSA)
-	d4:6f:5c:18:f4:3f:70:ef:bc:fc:cc:2b:fd:13:36:b7 (RSA)
+	29:b3:f4:64:b0:73:f5:6f:a7:85:0f:e0:0d:be:76:bf (DSA)
+	d4:6f:5c:18:f4:3f:70:ef:bc:fc:cc:2b:fd:13:36:b7 (RSA)
 
 Private key authentication:
 
@@ -59,7 +59,7 @@ Example to the cluster login:
 
 Data Transfer
 -------------
-Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp protocols.  (Not available yet.) In case large volumes of data are transferred, use dedicated data mover node dm1.anselm.it4i.cz for increased performance.
+Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp protocols. (Not available yet.) In case large volumes of data are transferred, use the dedicated data mover node dm1.anselm.it4i.cz for increased performance.
 
 |Address|Port|Protocol|
 |---|---|---|
@@ -75,7 +75,7 @@ The authentication is by the [private key](../get-started-with-it4innovations/ac
 
     1TB may be transferred in 1:50h.
 
-To achieve 160MB/s transfer rates, the end user must be connected by 10G line all the way to IT4Innovations and use computer with fast processor for the transfer. Using Gigabit ethernet connection, up to 110MB/s may be expected.  Fast cipher (aes128-ctr) should be used.
+To achieve 160MB/s transfer rates, the end user must be connected by a 10G line all the way to IT4Innovations and use a computer with a fast processor for the transfer. Using a Gigabit Ethernet connection, up to 110MB/s may be expected. A fast cipher (aes128-ctr) should be used.
 
 !!! Note "Note"
 	If you experience degraded data transfer performance, consult your local network provider.
@@ -143,13 +143,13 @@ Port forwarding
 
 It works by tunneling the connection from Anselm back to users workstation and forwarding from the workstation to the remote host.
 
-Pick some unused port on Anselm login node  (for example 6000) and establish the port forwarding:
+Pick some unused port on the Anselm login node (for example 6000) and establish the port forwarding:
 
 ```bash
 local $ ssh -R 6000:remote.host.com:1234 anselm.it4i.cz
 ```
 
-In this example, we establish port forwarding between port 6000 on Anselm and port 1234 on the remote.host.com. By accessing localhost:6000 on Anselm, an application will see response of remote.host.com:1234. The traffic will run via users local workstation.
+In this example, we establish port forwarding between port 6000 on Anselm and port 1234 on remote.host.com. By accessing localhost:6000 on Anselm, an application will see the response of remote.host.com:1234. The traffic will run via the user's local workstation.
 
 Port forwarding may be done **using PuTTY** as well. On the PuTTY Configuration screen, load your Anselm configuration first. Then go to Connection-&gt;SSH-&gt;Tunnels to set up the port forwarding. Click Remote radio button. Insert 6000 to Source port textbox. Insert remote.host.com:1234. Click Add button, then Open.
 
@@ -170,7 +170,7 @@ First, establish the remote port forwarding form the login node, as [described a
 Second, invoke port forwarding from the compute node to the login node. Insert following line into your jobscript or interactive shell
 
 ```bash
-$ ssh  -TN -f -L 6000:localhost:6000 login1
+$ ssh -TN -f -L 6000:localhost:6000 login1
 ```
 
 In this example, we assume that port forwarding from login1:6000 to remote.host.com:1234 has been established beforehand. By accessing localhost:6000, an application running on a compute node will see response of remote.host.com:1234
@@ -196,7 +196,7 @@ Once the proxy server is running, establish ssh port forwarding from Anselm to t
 local $ ssh -R 6000:localhost:1080 anselm.it4i.cz
 ```
 
-Now, configure the applications proxy settings to **localhost:6000**. Use port forwarding  to access the [proxy server from compute nodes](#port-forwarding-from-compute-nodes) as well.
+Now, configure the application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](#port-forwarding-from-compute-nodes) as well.
 
 Graphical User Interface
 ------------------------
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md
index 3ea63bb83..1693bc723 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md
@@ -48,9 +48,9 @@ echo Machines: $hl
 /ansys_inc/v145/CFX/bin/cfx5solve -def input.def -size 4 -size-ni 4x -part-large -start-method "Platform MPI Distributed Parallel" -par-dist $hl -P aa_r
 ```
 
-Header of the PBS file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). SVS FEM recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+The header of the PBS file (above) is common and its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). SVS FEM recommends utilizing resources via the keywords: nodes, ppn. These keywords allow addressing directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. The rest of the code also assumes such a structure of allocated resources.
 
-Working directory has to be created before sending PBS job into the queue. Input file should be in working directory or full path to input file has to be specified. >Input file has to be defined by common CFX def file which is attached to the cfx solver via parameter -def
+The working directory has to be created before sending the PBS job into the queue. The input file should be in the working directory or the full path to the input file has to be specified. The input file has to be defined by a common CFX def file which is attached to the cfx solver via the parameter -def
 
-**License** should be selected by parameter -P (Big letter **P**). Licensed products are the following: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics)-**Commercial**.
+**License** should be selected by the parameter -P (capital letter **P**). Licensed products are the following: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics) - **Commercial**.
 [More about licensing here](licensing/)
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md
index 5828f69bd..fc7f718e8 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md
@@ -39,9 +39,9 @@ NCORES=`wc -l $PBS_NODEFILE |awk '{print $1}'`
 /ansys_inc/v145/fluent/bin/fluent 3d -t$NCORES -cnf=$PBS_NODEFILE -g -i fluent.jou
 ```
 
-Header of the pbs file (above) is common and description can be find on [this site](../../resources-allocation-policy/). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+The header of the pbs file (above) is common and its description can be found on [this site](../../resources-allocation-policy/). [SVS FEM](http://www.svsfem.cz) recommends utilizing resources via the keywords: nodes, ppn. These keywords allow addressing directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. The rest of the code also assumes such a structure of allocated resources.
 
-Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified. Input file has to be defined by common Fluent journal file which is attached to the Fluent solver via parameter -i fluent.jou
+The working directory has to be created before sending the pbs job into the queue. The input file should be in the working directory or the full path to the input file has to be specified. The input file has to be defined by a common Fluent journal file which is attached to the Fluent solver via the parameter -i fluent.jou
 
 Journal file with definition of the input geometry and boundary conditions and defined process of solution has e.g. the following structure:
 
@@ -64,11 +64,11 @@ The appropriate dimension of the problem has to be set by parameter (2d/3d).
 fluent solver_version [FLUENT_options] -i journal_file -pbs
 ```
 
-This syntax will start the ANSYS FLUENT job under PBS Professional using the  qsub command in a batch manner. When resources are available, PBS Professional will start the job and return a job ID, usually in the form of *job_ID.hostname*. This job ID can then be used to query, control, or stop the job using standard PBS Professional commands, such as  qstat or qdel. The job will be run out of the current working directory, and all output will be written to the file fluent.o *job_ID*.
+This syntax will start the ANSYS FLUENT job under PBS Professional using the qsub command in a batch manner. When resources are available, PBS Professional will start the job and return a job ID, usually in the form of *job_ID.hostname*. This job ID can then be used to query, control, or stop the job using standard PBS Professional commands, such as qstat or qdel. The job will be run out of the current working directory, and all output will be written to the file fluent.o *job_ID*.
 
 3. Running Fluent via user's config file
 ----------------------------------------
-The sample script uses a configuration file called pbs_fluent.conf  if no command line arguments are present. This configuration file should be present in the directory from which the jobs are submitted (which is also the directory in which the jobs are executed). The following is an example of what the content of  pbs_fluent.conf can be:
+The sample script uses a configuration file called pbs_fluent.conf if no command line arguments are present. This configuration file should be present in the directory from which the jobs are submitted (which is also the directory in which the jobs are executed). The following is an example of what the content of pbs_fluent.conf can be:
 
 ```bash
 input="example_small.flin"
@@ -82,9 +82,9 @@ The following is an explanation of the parameters:
 
 input is the name of the input file.
 
-case is the name of the .cas file that the input file will utilize.
+case is the name of the .cas file that the input file will utilize.
 
-fluent_args are extra ANSYS FLUENT arguments. As shown in the previous example, you can specify the interconnect by using the  -p interconnect command. The available interconnects include ethernet (the default), myrinet, infiniband,  vendor, altix, and crayx. The MPI is selected automatically, based on the specified interconnect.
+fluent_args are extra ANSYS FLUENT arguments. As shown in the previous example, you can specify the interconnect by using the -p interconnect command. The available interconnects include ethernet (the default), myrinet, infiniband, vendor, altix, and crayx. The MPI is selected automatically, based on the specified interconnect.
 
 outfile is the name of the file to which the standard output will be sent.
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md
index f254380e9..d712659f5 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md
@@ -1,7 +1,7 @@
 ANSYS LS-DYNA
 =============
 
-**[ANSYSLS-DYNA](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/Explicit+Dynamics/ANSYS+LS-DYNA)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern  graphical user environment.
+**[ANSYS LS-DYNA](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/Explicit+Dynamics/ANSYS+LS-DYNA)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able to take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern graphical user environment.
 
 To run ANSYS LS-DYNA in batch mode you can utilize/modify the default ansysdyna.pbs script and execute it via the qsub command.
 
@@ -51,6 +51,6 @@ echo Machines: $hl
 /ansys_inc/v145/ansys/bin/ansys145 -dis -lsdynampp i=input.k -machines $hl
 ```
 
-Header of the PBS file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+The header of the PBS file (above) is common; its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends allocating resources with the nodes and ppn keywords, which directly specify the number of nodes (computers) and cores per node (ppn) to be used in the job. The rest of the script assumes this structure of allocated resources.
 
-Working directory has to be created before sending PBS job into the queue. Input file should be in working directory or full path to input file has to be specified. Input file has to be defined by common LS-DYNA .**k** file which is attached to the ANSYS solver via parameter i=
+The working directory has to be created before the PBS job is submitted to the queue. The input file should be located in the working directory, or its full path has to be specified. The input file is a standard LS-DYNA .**k** file, which is passed to the ANSYS solver via the i= parameter.
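+
+A minimal sketch of this workflow (the scratch path and input file name are examples only):
+
+```bash
+    $ mkdir -p /scratch/$USER/dyna-job && cd /scratch/$USER/dyna-job
+    $ cp ~/input.k .                  # or reference the full path to the .k file in the script
+    $ qsub ansysdyna.pbs
+```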
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
index b876d10b4..69d920f50 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
@@ -2,7 +2,7 @@ ANSYS MAPDL
 ===========
 
 **[ANSYS Multiphysics](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/ANSYS+Multiphysics)**
-software offers a comprehensive product solution for both multiphysics and single-physics analysis. The product includes structural, thermal, fluid and both high- and low-frequency electromagnetic analysis. The product also contains solutions for both direct and sequentially coupled physics problems including direct coupled-field elements and the ANSYS multi-field solver.
+software offers a comprehensive product solution for both multiphysics and single-physics analysis. The product includes structural, thermal, fluid and both high- and low-frequency electromagnetic analysis. The product also contains solutions for both direct and sequentially coupled physics problems including direct coupled-field elements and the ANSYS multi-field solver.
 
 To run ANSYS MAPDL in batch mode you can utilize/modify the default mapdl.pbs script and execute it via the qsub command.
 
@@ -52,7 +52,7 @@ echo Machines: $hl
 
 Header of the PBS file (above) is common and description can be found on [this site](../../resource-allocation-policy/). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allow to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
 
-Working directory has to be created before sending PBS job into the queue. Input file should be in working directory or full path to input file has to be specified. Input file has to be defined by common APDL file which is attached to the ANSYS solver via parameter -i
+The working directory has to be created before the PBS job is submitted to the queue. The input file should be located in the working directory, or its full path has to be specified. The input file is a standard APDL file, which is passed to the ANSYS solver via the -i parameter.
 
 **License** should be selected by parameter -p. Licensed products are the following: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics)-**Commercial**, aa_r_dy (ANSYS **Academic** AUTODYN)
 [More about licensing here](licensing/)
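+
+For illustration, a hypothetical solver line combining the -i and -p parameters described above (the exact launcher options in the default mapdl.pbs may differ):
+
+```bash
+    # -p selects the license (aa_r = ANSYS Academic Research), -i the APDL input file
+    /ansys_inc/v145/ansys/bin/ansys145 -b -dis -p aa_r -i input.dat -o solve.out -machines $hl
+```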
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md
index 92a9cb156..8d0a501e6 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md
@@ -1,9 +1,9 @@
 Overview of ANSYS Products
 ==========================
 
-**[SVS FEM](http://www.svsfem.cz/)** as **[ANSYS Channel partner](http://www.ansys.com/)** for Czech Republic provided all ANSYS licenses for ANSELM cluster and supports of all ANSYS Products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) to IT staff and ANSYS users. If you are challenging to problem of ANSYS functionality contact please [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM)
+**[SVS FEM](http://www.svsfem.cz/)**, the **[ANSYS Channel partner](http://www.ansys.com/)** for the Czech Republic, provided all ANSYS licenses for the Anselm cluster and provides support for all ANSYS products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) to IT staff and ANSYS users. If you run into a problem with ANSYS functionality, please contact [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM)
 
-Anselm provides commercial as well as academic variants. Academic variants are distinguished by "**Academic...**" word in the name of  license or by two letter preposition "**aa_**" in the license feature name. Change of license is realized on command line respectively directly in user's PBS file (see individual products). [ More  about licensing here](ansys/licensing/)
+Anselm provides both commercial and academic license variants. Academic variants are distinguished by the word "**Academic...**" in the license name or by the two-letter prefix "**aa_**" in the license feature name. The license is selected on the command line or directly in the user's PBS file (see the individual products). [More about licensing here](ansys/licensing/)
 
 To load the latest version of any ANSYS product (Mechanical, Fluent, CFX, MAPDL,...) load the module:
 
@@ -13,5 +13,5 @@ To load the latest version of any ANSYS product (Mechanical, Fluent, CFX, MAPDL,
 
 ANSYS supports interactive regime, but due to assumed solution of extremely difficult tasks it is not recommended.
 
-If user needs to work in interactive regime we recommend to configure the RSM service on the client machine which allows to forward the solution to the Anselm directly from the client's Workbench project (see ANSYS RSM service).
+If a user needs to work interactively, we recommend configuring the RSM service on the client machine, which allows forwarding the solution to Anselm directly from the client's Workbench project (see the ANSYS RSM service).
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ls-dyna.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ls-dyna.md
index 2639a873d..dc9ca58c1 100644
--- a/docs.it4i/anselm-cluster-documentation/software/ansys/ls-dyna.md
+++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ls-dyna.md
@@ -1,7 +1,7 @@
 LS-DYNA
 =======
 
-[LS-DYNA](http://www.lstc.com/) is a multi-purpose, explicit and implicit finite element program used to analyze the nonlinear dynamic response of structures. Its fully automated contact analysis capability, a wide range of constitutive models to simulate a whole range of engineering materials (steels, composites, foams, concrete, etc.), error-checking features and the high scalability have enabled users worldwide to solve successfully many complex problems. Additionally LS-DYNA is extensively used to simulate impacts on structures from drop tests, underwater shock, explosions or high-velocity impacts. Explosive forming, process engineering, accident reconstruction, vehicle dynamics, thermal brake disc analysis or nuclear safety are further areas in the broad range of possible applications. In leading-edge research LS-DYNA is used to investigate the behavior of materials like composites, ceramics, concrete, or wood. Moreover, it is used in biomechanics, human modeling, molecular structures, casting, forging, or virtual testing.
+[LS-DYNA](http://www.lstc.com/) is a multi-purpose, explicit and implicit finite element program used to analyze the nonlinear dynamic response of structures. Its fully automated contact analysis capability, a wide range of constitutive models to simulate a whole range of engineering materials (steels, composites, foams, concrete, etc.), error-checking features and the high scalability have enabled users worldwide to solve successfully many complex problems. Additionally LS-DYNA is extensively used to simulate impacts on structures from drop tests, underwater shock, explosions or high-velocity impacts. Explosive forming, process engineering, accident reconstruction, vehicle dynamics, thermal brake disc analysis or nuclear safety are further areas in the broad range of possible applications. In leading-edge research LS-DYNA is used to investigate the behavior of materials like composites, ceramics, concrete, or wood. Moreover, it is used in biomechanics, human modeling, molecular structures, casting, forging, or virtual testing.
 
 Anselm provides **1 commercial license of LS-DYNA without HPC** support now.
 
@@ -31,6 +31,6 @@ module load lsdyna
 /apps/engineering/lsdyna/lsdyna700s i=input.k
 ```
 
-Header of the PBS file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.html). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+The header of the PBS file (above) is common; its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.html). [SVS FEM](http://www.svsfem.cz) recommends allocating resources with the nodes and ppn keywords, which directly specify the number of nodes (computers) and cores per node (ppn) to be used in the job. The rest of the script assumes this structure of allocated resources.
 
-Working directory has to be created before sending PBS job into the queue. Input file should be in working directory or full path to input file has to be specified. Input file has to be defined by common LS-DYNA **.k** file which is attached to the LS-DYNA solver via parameter i=
+The working directory has to be created before the PBS job is submitted to the queue. The input file should be located in the working directory, or its full path has to be specified. The input file is a standard LS-DYNA **.k** file, which is passed to the LS-DYNA solver via the i= parameter.
diff --git a/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md b/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md
index 859f6afa0..f345e5053 100644
--- a/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md
+++ b/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md
@@ -9,7 +9,7 @@ Molpro is a software package used for accurate ab-initio quantum chemistry calcu
 
 License
 -------
-Molpro software package is available only to users that have a valid license. Please contact support to enable access to Molpro if you have a valid license appropriate for running on our cluster (eg. academic research group licence, parallel execution).
+The Molpro software package is available only to users that have a valid license. Please contact support to enable access to Molpro if you have a valid license appropriate for running on our cluster (e.g. an academic research group licence with parallel execution).
 
 To run Molpro, you need to have a valid license token present in " $HOME/.molpro/token". You can download the token from [Molpro website](https://www.molpro.net/licensee/?portal=licensee).
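+
+A minimal sketch of placing the downloaded token (here "token" stands for the file obtained from the Molpro website):
+
+```bash
+    $ mkdir -p $HOME/.molpro
+    $ cp token $HOME/.molpro/token
+    $ chmod 600 $HOME/.molpro/token    # keep the license token private
+```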
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md b/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
index 7d9c02bd6..0318c4a11 100644
--- a/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
+++ b/docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
@@ -39,7 +39,7 @@ NWChem is compiled for parallel MPI execution. Normal procedure for MPI jobs app
 
 Options
 --------------------
-Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and in the input file set the following directives :
+Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and set the following directives in the input file:
 
 -   MEMORY : controls the amount of memory NWChem will use
--   SCRATCH_DIR : set this to a directory in [SCRATCH file system](../../storage/storage/#scratch) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g.. "scf direct"
+-   SCRATCH_DIR : set this to a directory in the [SCRATCH file system](../../storage/storage/#scratch) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct" (see the fragment below)
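+
+A hypothetical NWChem input fragment illustrating both directives (these lines belong in the NWChem input file, not in a shell script; the memory size and scratch path are example values only):
+
+```bash
+    memory 1000 mb
+    scratch_dir /scratch/username/nwchem-tmp
+    # force "direct" mode to reduce I/O, as noted above
+    scf
+      direct
+    end
+```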
diff --git a/docs.it4i/anselm-cluster-documentation/software/compilers.md b/docs.it4i/anselm-cluster-documentation/software/compilers.md
index 3c87064f9..ab8935abf 100644
--- a/docs.it4i/anselm-cluster-documentation/software/compilers.md
+++ b/docs.it4i/anselm-cluster-documentation/software/compilers.md
@@ -19,7 +19,7 @@ For information about the usage of Intel Compilers and other Intel products, ple
 
 GNU C/C++ and Fortran Compilers
 -------------------------------
-For compatibility reasons there are still available the original (old 4.4.6-4) versions of GNU compilers as part of the OS. These are accessible in the search path  by default.
+For compatibility reasons, the original (old 4.4.6-4) versions of the GNU compilers are still available as part of the OS. These are accessible in the search path by default.
 
 It is strongly recommended to use the up to date version (4.8.1) which comes with the module gcc:
 
@@ -69,12 +69,12 @@ Simple program to test the compiler
     #include <stdio.h>
 
     int main() {
-      if (MYTHREAD == 0) {
-        printf("Welcome to GNU UPC!!!n");
-      }
-      upc_barrier;
-      printf(" - Hello from thread %in", MYTHREAD);
-      return 0;
+      if (MYTHREAD == 0) {
+        printf("Welcome to GNU UPC!!!\n");
+      }
+      upc_barrier;
+      printf(" - Hello from thread %i\n", MYTHREAD);
+      return 0;
     }
 ```
 
@@ -115,12 +115,12 @@ Example UPC code:
     #include <stdio.h>
 
     int main() {
-      if (MYTHREAD == 0) {
-        printf("Welcome to Berkeley UPC!!!n");
-      }
-      upc_barrier;
-      printf(" - Hello from thread %in", MYTHREAD);
-      return 0;
+      if (MYTHREAD == 0) {
+        printf("Welcome to Berkeley UPC!!!\n");
+      }
+      upc_barrier;
+      printf(" - Hello from thread %i\n", MYTHREAD);
+      return 0;
     }
 ```
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md b/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
index 1d87e0ebf..ee61c219b 100644
--- a/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
+++ b/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
@@ -20,7 +20,7 @@ Execution
 On the Anselm cluster COMSOL is available in the latest stable version. There are two variants of the release:
 
 -   **Non commercial** or so called **EDU variant**, which can be used for research and educational purposes.
--   **Commercial** or so called **COM variant**, which can used also for commercial activities. **COM variant** has only subset of features compared to the **EDU variant** available. More about licensing will be posted here soon.
+-   **Commercial**, or so-called **COM variant**, which can also be used for commercial activities. The **COM variant** has only a subset of the features available in the **EDU variant**. More about licensing will be posted here soon.
 
 To load COMSOL, load the module
 
@@ -50,7 +50,7 @@ To run COMSOL in batch mode, without the COMSOL Desktop GUI environment, user ca
 #PBS -l select=3:ncpus=16
 #PBS -q qprod
 #PBS -N JOB_NAME
-#PBS -A  PROJECT_ID
+#PBS -A PROJECT_ID
 
 cd /scratch/$USER/ || exit
 
@@ -72,7 +72,7 @@ comsol -nn ${ntask} batch -configuration /tmp –mpiarg –rmk –mpiarg pbs -tm
 
 Working directory has to be created before sending the (comsol.pbs) job script into the queue. Input file (name_input_f.mph) has to be in working directory or full path to input file has to be specified. The appropriate path to the temp directory of the job has to be set by command option (-tmpdir).
 
-LiveLink™* *for MATLAB®^
+LiveLink™ for MATLAB®
 -------------------------
 COMSOL is the software package for the numerical solution of the partial differential equations. LiveLink for MATLAB allows connection to the COMSOL**®** API (Application Programming Interface) with the benefits of the programming language and computing environment of the MATLAB.
 
@@ -96,7 +96,7 @@ To run LiveLink for MATLAB in batch mode with (comsol_matlab.pbs) job script you
 #PBS -l select=3:ncpus=16
 #PBS -q qprod
 #PBS -N JOB_NAME
-#PBS -A  PROJECT_ID
+#PBS -A PROJECT_ID
 
 cd /scratch/$USER || exit
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md
index 10310bc89..a416deab5 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md
@@ -31,7 +31,7 @@ CUBE is a graphical application. Refer to Graphical User Interface documentation
 !!! Note "Note"
 	Analyzing large data sets can consume large amount of CPU and RAM. Do not perform large analysis on login nodes.
 
-After loading the appropriate module, simply launch cube command, or alternatively you can use  scalasca -examine command to launch the GUI. Note that for Scalasca datasets, if you do not analyze the data with scalasca -examine before to opening them with CUBE, not all performance data will be available.
+After loading the appropriate module, simply launch the cube command, or alternatively use the scalasca -examine command to launch the GUI. Note that for Scalasca datasets, if you do not analyze the data with scalasca -examine before opening them with CUBE, not all performance data will be available.
 
 References
 1.  <http://www.scalasca.org/software/cube-4.x/download.html>
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/debuggers.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/debuggers.md
index e8c678789..b36213ce4 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/debuggers.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/debuggers.md
@@ -58,4 +58,4 @@ Vampir is a GUI trace analyzer for traces in OTF format.
     $ vampir
 ```
 
-Read more at the [Vampir](../../salomon/software/debuggers/vampir/) page.
+Read more at the [Vampir](../../salomon/software/debuggers/vampir/) page.
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md
index 965fe6c73..5ff0b9822 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md
@@ -3,7 +3,7 @@ Intel Performance Counter Monitor
 
 Introduction
 ------------
-Intel PCM (Performance Counter Monitor) is a tool to monitor performance hardware counters on Intel>® processors, similar to [PAPI](papi/). The difference between PCM and PAPI is that PCM supports only Intel hardware, but PCM can monitor also uncore metrics, like memory controllers and >QuickPath Interconnect links.
+Intel PCM (Performance Counter Monitor) is a tool to monitor performance hardware counters on Intel® processors, similar to [PAPI](papi/). The difference between PCM and PAPI is that PCM supports only Intel hardware, but PCM can also monitor uncore metrics, like memory controllers and QuickPath Interconnect links.
 
 Installed version
 ------------------------------
@@ -53,7 +53,7 @@ Sample output:
     --                   System Read Throughput(MB/s):      4.93                  --
     --                  System Write Throughput(MB/s):      3.43                  --
     --                 System Memory Throughput(MB/s):      8.35                  --
-    ---------------------------------------||--------------------------------------- 
+    ---------------------------------------||--------------------------------------- 
 ```
 
 ### pcm-msr
@@ -70,11 +70,11 @@ Can be used to monitor PCI Express bandwith. Usage: pcm-pcie.x &lt;delay&gt;
 
 ### pcm-power
 
-Displays energy usage and thermal headroom for CPU and DRAM sockets. Usage: pcm-power.x &lt;delay&gt; | &lt;external program&gt;
+Displays energy usage and thermal headroom for CPU and DRAM sockets. Usage: pcm-power.x &lt;delay&gt; | &lt;external program&gt;
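+
+For instance, to print the power statistics every 5 seconds (a minimal illustration of the usage above):
+
+```bash
+    $ pcm-power.x 5
+```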
 
 ### pcm
 
-This command provides an overview of performance counters and memory usage. Usage: pcm.x &lt;delay&gt; | &lt;external program&gt;
+This command provides an overview of performance counters and memory usage. Usage: pcm.x &lt;delay&gt; | &lt;external program&gt;
 
 Sample output :
 
@@ -278,5 +278,5 @@ Sample output:
 References
 ----------
 1.  <https://software.intel.com/en-us/articles/intel-performance-counter-monitor-a-better-way-to-measure-cpu-utilization>
-2.  <https://software.intel.com/sites/default/files/m/3/2/2/xeon-e5-2600-uncore-guide.pdf> Intel® Xeon® Processor E5-2600 Product Family Uncore Performance Monitoring Guide.
-3.  <http://intel-pcm-api-documentation.github.io/classPCM.html> API Documentation
+2.  <https://software.intel.com/sites/default/files/m/3/2/2/xeon-e5-2600-uncore-guide.pdf> Intel® Xeon® Processor E5-2600 Product Family Uncore Performance Monitoring Guide.
+3.  <http://intel-pcm-api-documentation.github.io/classPCM.html> API Documentation
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
index 1aa9bc9e9..fac34c5e8 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
@@ -3,7 +3,7 @@ Intel VTune Amplifier
 
 Introduction
 ------------
-Intel*® *VTune™ Amplifier, part of Intel Parallel studio, is a GUI profiling tool designed for Intel processors. It offers a graphical performance analysis of single core and multithreaded applications. A highlight of the features:
+Intel® VTune™ Amplifier, part of Intel Parallel Studio, is a GUI profiling tool designed for Intel processors. It offers a graphical performance analysis of single core and multithreaded applications. A highlight of the features:
 
 -   Hotspot analysis
 -   Locks and waits analysis
@@ -24,15 +24,15 @@ To launch the GUI, first load the module:
 and launch the GUI :
 
 ```bash
-    $ amplxe-gui
+    $ amplxe-gui
 ```
 
 !!! Note "Note"
 	To profile an application with VTune Amplifier, special kernel modules need to be loaded. The modules are not loaded on Anselm login nodes, thus direct profiling on login nodes is not possible. Use VTune on compute nodes and refer to the documentation on using GUI applications.
 
-The GUI will open in new window. Click on "*New Project...*" to create a new project. After clicking *OK*, a new window with project properties will appear.  At "*Application:*", select the bath to your binary you want to profile (the binary should be compiled with -g flag). Some additional options such as command line arguments can be selected. At "*Managed code profiling mode:*" select "*Native*" (unless you want to profile managed mode .NET/Mono applications). After clicking *OK*, your project is created.
+The GUI will open in a new window. Click on "*New Project...*" to create a new project. After clicking *OK*, a new window with project properties will appear. At "*Application:*", select the path to the binary you want to profile (the binary should be compiled with the -g flag). Some additional options such as command line arguments can be selected. At "*Managed code profiling mode:*" select "*Native*" (unless you want to profile managed mode .NET/Mono applications). After clicking *OK*, your project is created.
 
-To run a new analysis, click "*New analysis...*". You will see a list of possible analysis. Some of them will not be possible on the current CPU (e.g. Intel Atom analysis is not possible on Sandy Bridge CPU), the GUI will show an error box if you select the wrong analysis. For example, select "*Advanced Hotspots*". Clicking on *Start *will start profiling of the application.
+To run a new analysis, click "*New analysis...*". You will see a list of possible analyses. Some of them will not be possible on the current CPU (e.g. Intel Atom analysis is not possible on a Sandy Bridge CPU); the GUI will show an error box if you select an unsupported analysis. For example, select "*Advanced Hotspots*". Clicking *Start* will start profiling the application.
 
 Remote Analysis
 ---------------
@@ -57,7 +57,7 @@ Application:  ssh
 
 Application parameters:  mic0 source ~/.profile && /path/to/your/bin
 
-Note that we include  source ~/.profile in the command to setup environment paths [as described here](../intel-xeon-phi/).
+Note that we include source ~/.profile in the command to set up the environment paths [as described here](../intel-xeon-phi/).
 
 !!! Note "Note"
 	If the analysis is interrupted or aborted, further analysis on the card might be impossible and you will get errors like "ERROR connecting to MIC card". In this case please contact our support to reboot the MIC card.
@@ -71,4 +71,4 @@ You may also use remote analysis to collect data from the MIC and then analyze i
 
 References
 ----------
-1.  <https://www.rcac.purdue.edu/tutorials/phi/PerformanceTuningXeonPhi-Tullos.pdf> Performance Tuning for Intel® Xeon Phi™ Coprocessors
+1.  <https://www.rcac.purdue.edu/tutorials/phi/PerformanceTuningXeonPhi-Tullos.pdf> Performance Tuning for Intel® Xeon Phi™ Coprocessors
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md
index 376b3ee19..3bc686243 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md
@@ -3,7 +3,7 @@ PAPI
 
 Introduction
 ------------
-Performance Application Programming Interface (PAPI)  is a portable interface to access hardware performance counters (such as instruction counts and cache misses) found in most modern architectures. With the new component framework, PAPI is not limited only to CPU counters, but offers also components for CUDA, network, Infiniband etc.
+Performance Application Programming Interface (PAPI) is a portable interface for accessing hardware performance counters (such as instruction counts and cache misses) found in most modern architectures. With the new component framework, PAPI is not limited to CPU counters only, but also offers components for CUDA, network, InfiniBand etc.
 
 PAPI provides two levels of interface - a simpler, high level interface and more detailed low level interface.
 
@@ -77,8 +77,8 @@ PAPI API
 --------
 PAPI provides two kinds of events:
 
--   **Preset events** is a set of predefined common CPU events, standardized across platforms.
--   **Native events **is a set of all events supported by the current hardware. This is a larger set of features than preset. For other components than CPU, only native events are usually available.
+-   **Preset events** are a predefined set of common CPU events, standardized across platforms.
+-   **Native events** are all events supported by the current hardware, a larger set than the presets. For components other than the CPU, usually only native events are available (both kinds can be listed with the utilities shown below).
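+
+The papi_avail and papi_native_avail utilities shipped with PAPI list the events of both kinds supported by the current hardware (a quick check, assuming the papi module is loaded):
+
+```bash
+    $ module load papi
+    $ papi_avail            # preset events supported on this CPU
+    $ papi_native_avail     # native events, including non-CPU components
+```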
 
 To use PAPI in your application, you need to link the appropriate include file.
 
@@ -91,24 +91,24 @@ The include path is automatically added by papi module to $INCLUDE.
 
 ### High level API
 
-Please refer to <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:High_Level> for a description of the High level API.
+Please refer to <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:High_Level> for a description of the High level API.
 
 ### Low level API
 
-Please refer to <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:Low_Level> for a description of the Low level API.
+Please refer to <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:Low_Level> for a description of the Low level API.
 
 ### Timers
 
-PAPI provides the most accurate timers the platform can support. See <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:Timers>
+PAPI provides the most accurate timers the platform can support. See <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:Timers>
 
 ### System information
 
-PAPI can be used to query some system infromation, such as CPU name and MHz. See <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:System_Information>
+PAPI can be used to query some system information, such as the CPU name and MHz. See <http://icl.cs.utk.edu/projects/papi/wiki/PAPIC:System_Information>
 
 Example
 -------
 
-The following example prints MFLOPS rate of a naive matrix-matrix multiplication:
+The following example prints the MFLOPS rate of a naive matrix-matrix multiplication:
 
 ```bash
     #include <stdlib.h>
@@ -126,7 +126,7 @@ The following example prints MFLOPS rate of a naive matrix-matrix multiplicatio
      /* Initialize the Matrix arrays */
      for ( i=0; i<SIZE*SIZE; i++ ){
      mresult[0][i] = 0.0;
-     matrixa[0][i] = matrixb[0][i] = rand()*(float)1.1; 
+     matrixa[0][i] = matrixb[0][i] = rand()*(float)1.1; 
      }
 
      /* Setup PAPI library and begin collecting data from the counters */
@@ -195,7 +195,7 @@ Now the compiler won't remove the multiplication loop. (However it is still not
 !!! Note "Note"
 	PAPI currently supports only a subset of counters on the Intel Xeon Phi processor compared to Intel Xeon, for example the floating point operations counter is missing.
 
-To use PAPI in [Intel Xeon Phi](../intel-xeon-phi/) native applications, you need to load module with " -mic" suffix, for example " papi/5.3.2-mic" :
+To use PAPI in [Intel Xeon Phi](../intel-xeon-phi/) native applications, you need to load a module with the "-mic" suffix, for example "papi/5.3.2-mic":
 
 ```bash
     $ module load papi/5.3.2-mic
@@ -208,10 +208,10 @@ Then, compile your application in the following way:
     $ icc -mmic -Wl,-rpath,/apps/intel/composer_xe_2013.5.192/compiler/lib/mic matrix-mic.c -o matrix-mic -lpapi -lpfm
 ```
 
-To execute the application on MIC, you need to manually set LD_LIBRARY_PATH:
+To execute the application on MIC, you need to manually set LD_LIBRARY_PATH:
 
 ```bash
-    $ qsub -q qmic -A NONE-0-0 -I
+    $ qsub -q qmic -A NONE-0-0 -I
     $ ssh mic0
     $ export LD_LIBRARY_PATH=/apps/tools/papi/5.4.0-mic/lib/
     $ ./matrix-mic
@@ -234,6 +234,6 @@ To use PAPI in offload mode, you need to provide both host and MIC versions of P
 
 References
 ----------
-1.  <http://icl.cs.utk.edu/papi/> Main project page
-2.  <http://icl.cs.utk.edu/projects/papi/wiki/Main_Page> Wiki
+1.  <http://icl.cs.utk.edu/papi/> Main project page
+2.  <http://icl.cs.utk.edu/projects/papi/wiki/Main_Page> Wiki
 3.  <http://icl.cs.utk.edu/papi/docs/> API Documentation
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md
index 08ae57435..76e227f19 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md
@@ -41,7 +41,7 @@ An example :
 Some notable Scalasca options are:
 
 **-t Enable trace data collection. By default, only summary data are collected.**
-**-e &lt;directory&gt; Specify a directory to save the collected data to. By default, Scalasca saves the data to a directory with prefix scorep_, followed by name of the executable and launch configuration.**
+**-e &lt;directory&gt; Specify a directory to save the collected data to. By default, Scalasca saves the data to a directory with the prefix scorep_, followed by the name of the executable and the launch configuration.**
 
 !!! Note "Note"
 	Scalasca can generate a huge amount of data, especially if tracing is enabled. Please consider saving the data to a [scratch directory](../../storage/storage/).
@@ -64,7 +64,7 @@ scalasca -examine -s <experiment_directory>
 
 Alternatively you can open CUBE and load the data directly from here. Keep in mind that in that case the preprocessing is not done and not all metrics will be shown in the viewer.
 
-Refer to [CUBE documentation](cube/) on usage of the GUI viewer.
+Refer to the [CUBE documentation](cube/) for usage of the GUI viewer.
 
 References
 ----------
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md
index 29ecc0882..c51794fb8 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md
@@ -9,10 +9,10 @@ Score-P can be used as an instrumentation tool for [Scalasca](scalasca/).
 
 Installed versions
 ------------------
-There are currently two versions of Score-P version 1.2.6 [modules](../../environment-and-modules/) installed on Anselm :
+There are currently two Score-P version 1.2.6 [modules](../../environment-and-modules/) installed on Anselm:
 
--   scorep/1.2.3-gcc-openmpi, for usage     with [GNU Compiler](../compilers/) and [OpenMPI](../mpi/Running_OpenMPI/)
--   scorep/1.2.3-icc-impi, for usage with [Intel Compiler](../compilers.html)> and [Intel MPI](../mpi/running-mpich2/)>.
+-   scorep/1.2.3-gcc-openmpi, for usage with [GNU Compiler](../compilers/) and [OpenMPI](../mpi/Running_OpenMPI/)
+-   scorep/1.2.3-icc-impi, for usage with [Intel Compiler](../compilers.html) and [Intel MPI](../mpi/running-mpich2/).
 
 Instrumentation
 ---------------
@@ -40,7 +40,7 @@ $ scorep  mpif90 -c bar.f90
 $ scorep  mpif90 -o myapp foo.o bar.o
 ```
 
-Usually your program is compiled using a Makefile or similar script, so it advisable to add the  scorep command to your definition of variables  CC, CXX, FCC etc.
+Usually your program is compiled using a Makefile or a similar script, so it is advisable to add the scorep command to your definition of the variables CC, CXX, FCC etc.
 
 It is important that  scorep is prepended also to the linking command, in order to link with Score-P instrumentation libraries.
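+
+A minimal sketch of doing this from the command line, assuming a Makefile that honours the usual CC/CXX/FC variables (substitute your MPI compiler wrappers as appropriate):
+
+```bash
+    $ make CC="scorep mpicc" CXX="scorep mpicxx" FC="scorep mpif90"
+```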
 
@@ -62,7 +62,7 @@ An example in C/C++ :
     }
 ```
 
- and Fortran :
+and in Fortran:
 
 ```cpp
     #include "scorep/SCOREP_User.inc"
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md
index 8e448a8c4..6df6ba9b9 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md
@@ -8,8 +8,8 @@ License and Limitations for Anselm Users
 On Anselm users can debug OpenMP or MPI code that runs up to 64 parallel processes. These limitation means that:
 
 ```bash
-    1 user can debug up 64 processes, or
-    32 users can debug 2 processes, etc.
+    1 user can debug up to 64 processes, or
+    32 users can debug 2 processes, etc.
 ```
 
 Debugging of GPU accelerated codes is also supported.
@@ -21,11 +21,11 @@ You can check the status of the licenses here:
 
     # totalview
     # -------------------------------------------------
-    # FEATURE                       TOTAL   USED  AVAIL
+    # FEATURE                       TOTAL   USED  AVAIL
     # -------------------------------------------------
-    TotalView_Team                     64      0     64
-    Replay                             64      0     64
-    CUDA                               64      0     64
+    TotalView_Team                     64      0     64
+    Replay                             64      0     64
+    CUDA                               64      0     64
 ```
 
 Compiling Code to run with TotalView
@@ -37,7 +37,7 @@ Load all necessary modules to compile the code. For example:
 ```bash
     module load intel
 
-    module load impi   ... or ... module load openmpi/X.X.X-icc
+    module load impi   ... or ... module load openmpi/X.X.X-icc
 ```
 
 Load the TotalView module:
@@ -98,16 +98,16 @@ To debug a parallel code compiled with **OpenMPI** you need to setup your TotalV
 
 ```bash
     proc mpi_auto_run_starter {loaded_id} {
-        set starter_programs {mpirun mpiexec orterun}
-        set executable_name [TV::symbol get $loaded_id full_pathname]
-        set file_component [file tail $executable_name]
-
-        if {[lsearch -exact $starter_programs $file_component] != -1} {
-            puts "*************************************"
-            puts "Automatically starting $file_component"
-            puts "*************************************"
-            dgo
-        }
+        set starter_programs {mpirun mpiexec orterun}
+        set executable_name [TV::symbol get $loaded_id full_pathname]
+        set file_component [file tail $executable_name]
+
+        if {[lsearch -exact $starter_programs $file_component] != -1} {
+            puts "*************************************"
+            puts "Automatically starting $file_component"
+            puts "*************************************"
+            dgo
+        }
     }
 
     # Append this function to TotalView's image load callbacks so that
@@ -155,7 +155,7 @@ The following example shows how to start debugging session with Intel MPI:
 
 After running previous command you will see the same window as shown in the screenshot above.
 
-More information regarding the command line parameters of the TotalView can be found TotalView Reference Guide, Chapter 7: TotalView Command Syntax.
+More information regarding the command line parameters of TotalView can be found in the TotalView Reference Guide, Chapter 7: TotalView Command Syntax.
 
 Documentation
 -------------
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md
index 4ea4ed33b..1b0919431 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md
@@ -22,7 +22,7 @@ Installed versions
 ------------------
 There are two versions of Valgrind available on Anselm.
 
--   Version 3.6.0, installed by operating system vendor in /usr/bin/valgrind. This version is available by default, without the need to load any module. This version however does not provide additional MPI support.
+-   Version 3.6.0, installed by the operating system vendor in /usr/bin/valgrind. This version is available by default, without the need to load any module. This version, however, does not provide additional MPI support.
 -   Version 3.9.0 with support for Intel MPI, available in [module](../../environment-and-modules/) valgrind/3.9.0-impi. After loading the module, this version replaces the default valgrind.
 
 Usage
@@ -157,7 +157,7 @@ The default version without MPI support will however report a large number of fa
     ==30166== by 0x4008BD: main (valgrind-example-mpi.c:18)
 ```
 
-so it is better to use the MPI-enabled valgrind from module. The MPI version requires library /apps/tools/valgrind/3.9.0/impi/lib/valgrind/libmpiwrap-amd64-linux.so, which must be included in the  LD_PRELOAD environment variable.
+so it is better to use the MPI-enabled Valgrind from the module. The MPI version requires the library /apps/tools/valgrind/3.9.0/impi/lib/valgrind/libmpiwrap-amd64-linux.so, which must be included in the LD_PRELOAD environment variable.
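+
+A sketch of putting this together with Intel MPI (module name and library path as above; the example binary name is just a placeholder):
+
+```bash
+    $ module load valgrind/3.9.0-impi
+    $ mpirun -np 2 -env LD_PRELOAD /apps/tools/valgrind/3.9.0/impi/lib/valgrind/libmpiwrap-amd64-linux.so valgrind ./my-mpi-app
+```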
 
 Lets look at this MPI example :
 
@@ -167,13 +167,13 @@ Lets look at this MPI example :
 
     int main(int argc, char *argv[])
     {
-            int *data = malloc(sizeof(int)*99);
+            int *data = malloc(sizeof(int)*99);
 
-            MPI_Init(&argc, &argv);
-            MPI_Bcast(data, 100, MPI_INT, 0, MPI_COMM_WORLD);
-            MPI_Finalize();
+            MPI_Init(&argc, &argv);
+            MPI_Bcast(data, 100, MPI_INT, 0, MPI_COMM_WORLD);
+            MPI_Finalize();
 
-            return 0;
+            return 0;
     }
 ```
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/vampir.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/vampir.md
index 1224d6823..129eb41dd 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/vampir.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/vampir.md
@@ -1,13 +1,13 @@
 Vampir
 ======
 
-Vampir is a commercial trace analysis and visualization tool. It can work with traces in OTF and OTF2 formats. It does not have the functionality to collect traces, you need to use a trace collection tool (such as [Score-P](../../../salomon/software/debuggers/score-p/)) first to collect the traces.
+Vampir is a commercial trace analysis and visualization tool. It can work with traces in OTF and OTF2 formats. It does not collect traces itself; you need to use a trace collection tool (such as [Score-P](../../../salomon/software/debuggers/score-p/)) first to collect the traces.
 
 ![](../../../img/Snmekobrazovky20160708v12.33.35.png)
 
 Installed versions
 ------------------
-Version 8.5.0 is currently installed as module Vampir/8.5.0 :
+Version 8.5.0 is currently installed as the module Vampir/8.5.0:
 
 ```bash
     $ module load Vampir/8.5.0
@@ -16,7 +16,7 @@ Version 8.5.0 is currently installed as module Vampir/8.5.0 :
 
 User manual
 -----------
-You can find the detailed user manual in PDF format in $EBROOTVAMPIR/doc/vampir-manual.pdf
+You can find the detailed user manual in PDF format in $EBROOTVAMPIR/doc/vampir-manual.pdf
 
 References
 ----------
diff --git a/docs.it4i/anselm-cluster-documentation/software/gpi2.md b/docs.it4i/anselm-cluster-documentation/software/gpi2.md
index ab600fedf..d61fbed6f 100644
--- a/docs.it4i/anselm-cluster-documentation/software/gpi2.md
+++ b/docs.it4i/anselm-cluster-documentation/software/gpi2.md
@@ -24,7 +24,7 @@ Linking
 !!! Note "Note"
 	Link with -lGPI2 -libverbs
 
-Load the gpi2 module. Link using **-lGPI2** and **-libverbs** switches to link your code against GPI-2. The GPI-2 requires the OFED infinband communication library ibverbs.
+Load the gpi2 module. Use the **-lGPI2** and **-libverbs** switches to link your code against GPI-2. GPI-2 requires the OFED InfiniBand communication library ibverbs.
 
 ### Compiling and linking with Intel compilers
 
@@ -169,4 +169,4 @@ At the same time, in another session, you may start the gaspi logger:
     [cn80:0] Hello from rank 1 of 2
 ```
 
-In this example, we compile the helloworld_gpi.c code using the **gnu compiler** (gcc) and link it to the GPI-2 and ibverbs library. The library search path is compiled in. For execution, we use the qexp queue, 2 nodes 1 core each. The GPI module must be loaded on the master compute node (in this example the cn79), gaspi_logger is used from different session to view the output of the second process.
+In this example, we compile the helloworld_gpi.c code using the **gnu compiler** (gcc) and link it to the GPI-2 and ibverbs libraries. The library search path is compiled in. For execution, we use the qexp queue, 2 nodes, 1 core each. The GPI module must be loaded on the master compute node (in this example cn79); gaspi_logger is used from a different session to view the output of the second process.
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md
index 34d9f3a4d..75ea44148 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md
@@ -23,15 +23,15 @@ In this example, we compile the program enabling interprocedural optimizations b
 The compiler recognizes the omp, simd, vector and ivdep pragmas for OpenMP parallelization and AVX vectorization. Enable the OpenMP parallelization by the **-openmp** compiler switch.
 
 ```bash
-    $ icc -ipo -O3 -vec -xAVX -vec-report1 -openmp myprog.c mysubroutines.c -o myprog.x
-    $ ifort -ipo -O3 -vec -xAVX -vec-report1 -openmp myprog.f mysubroutines.f -o myprog.x
+    $ icc -ipo -O3 -vec -xAVX -vec-report1 -openmp myprog.c mysubroutines.c -o myprog.x
+    $ ifort -ipo -O3 -vec -xAVX -vec-report1 -openmp myprog.f mysubroutines.f -o myprog.x
 ```
 
 Read more at <http://software.intel.com/sites/products/documentation/doclib/stdxe/2013/composerxe/compiler/cpp-lin/index.htm>
 
 Sandy Bridge/Haswell binary compatibility
 -----------------------------------------
-Anselm nodes are currently equipped with Sandy Bridge CPUs, while Salomon will use Haswell architecture. >The new processors are backward compatible with the Sandy Bridge nodes, so all programs that ran on the Sandy Bridge processors, should also run on the new Haswell nodes. >To get optimal performance out of the Haswell processors a program should make use of the special AVX2 instructions for this processor. One can do this by recompiling codes with the compiler flags >designated to invoke these instructions. For the Intel compiler suite, there are two ways of doing this:
+Anselm nodes are currently equipped with Sandy Bridge CPUs, while Salomon will use the Haswell architecture. The new processors are backward compatible with the Sandy Bridge nodes, so all programs that ran on the Sandy Bridge processors should also run on the new Haswell nodes. To get optimal performance out of the Haswell processors, a program should make use of the special AVX2 instructions for this processor. One can do this by recompiling codes with the compiler flags designated to invoke these instructions. For the Intel compiler suite, there are two ways of doing this:
 
--   Using compiler flag (both for Fortran and C): -xCORE-AVX2. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge nodes.
--   Using compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This will generate multiple, feature specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. So this binary will run both on Sandy Bridge and Haswell processors. During runtime it will be decided which path to follow, dependent on which processor you are running on. In general this will result in larger binaries.
+-   Using the compiler flag (both for Fortran and C): -xCORE-AVX2. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge nodes.
+-   Using the compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This will generate multiple, feature-specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. This binary will therefore run on both Sandy Bridge and Haswell processors. At runtime it will be decided which path to follow, depending on the processor you are running on. In general this will result in larger binaries (see the sketch below).
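+
+For illustration, the two approaches might look like this (source file names follow the earlier compile examples):
+
+```bash
+    $ icc -ipo -O3 -xCORE-AVX2 myprog.c mysubroutines.c -o myprog.x          # Haswell-only binary
+    $ icc -ipo -O3 -xAVX -axCORE-AVX2 myprog.c mysubroutines.c -o myprog.x   # auto-dispatch, runs on both
+```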
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md
index cb6a8ea1d..62bb0fae8 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md
@@ -12,11 +12,11 @@ Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, e
 - Vector Math Library (VML) routines for optimized mathematical operations on vectors.
 - Vector Statistical Library (VSL) routines, which offer high-performance vectorized random number generators (RNG) for    several probability distributions, convolution and correlation routines, and summary statistics functions.
 - Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search.
--   Extended Eigensolver, a shared memory  version of an eigensolver based on the Feast Eigenvalue Solver.
+-   Extended Eigensolver, a shared memory version of an eigensolver based on the Feast Eigenvalue Solver.
 
 For details see the [Intel MKL Reference Manual](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mklman/index.htm).
 
-Intel MKL version 13.5.192 is available on Anselm
+Intel MKL version 13.5.192 is available on Anselm
 
 ```bash
     $ module load mkl
@@ -40,7 +40,7 @@ The MKL library provides number of interfaces. The fundamental once are the LP64
 
 Linking MKL libraries may be complex. Intel [mkl link line advisor](http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor) helps. See also [examples](intel-mkl/#examples) below.
 
-You will need the mkl module loaded to run the mkl enabled executable. This may be avoided, by compiling library search paths into the executable. Include  rpath on the compile line:
+You will need the mkl module loaded to run an MKL-enabled executable. This may be avoided by compiling the library search paths into the executable. Include rpath on the compile line:
 
 ```bash
     $ icc .... -Wl,-rpath=$LIBRARY_PATH ...
@@ -75,7 +75,7 @@ Number of examples, demonstrating use of the MKL library and its linking is avai
     $ make sointel64 function=cblas_dgemm
 ```
 
-In this example, we compile, link and run the cblas_dgemm  example, demonstrating use of MKL example suite installed on Anselm.
+In this example, we compile, link and run the cblas_dgemm example, demonstrating use of the MKL example suite installed on Anselm.
 
 ### Example: MKL and Intel compiler
 
@@ -89,14 +89,14 @@ In this example, we compile, link and run the cblas_dgemm  example, demonstrati
     $ ./cblas_dgemmx.x data/cblas_dgemmx.d
 ```
 
-In this example, we compile, link and run the cblas_dgemm  example, demonstrating use of MKL with icc -mkl option. Using the -mkl option is equivalent to:
+In this example, we compile, link and run the cblas_dgemm example, demonstrating use of MKL with the icc -mkl option. Using the -mkl option is equivalent to:
 
 ```bash
     $ icc -w source/cblas_dgemmx.c source/common_func.c -o cblas_dgemmx.x
     -I$MKL_INC_DIR -L$MKL_LIB_DIR -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5
 ```
 
-In this example, we compile and link the cblas_dgemm  example, using LP64 interface to threaded MKL and Intel OMP threads implementation.
+In this example, we compile and link the cblas_dgemm example, using the LP64 interface to threaded MKL and the Intel OpenMP threads implementation.
 
 ### Example: MKL and GNU compiler
 
@@ -112,7 +112,7 @@ In this example, we compile and link the cblas_dgemm  example, using LP64 inter
     $ ./cblas_dgemmx.x data/cblas_dgemmx.d
 ```
 
-In this example, we compile, link and run the cblas_dgemm  example, using LP64 interface to threaded MKL and gnu OMP threads implementation.
+In this example, we compile, link and run the cblas_dgemm example, using the LP64 interface to threaded MKL and the GNU OpenMP threads implementation.
 
 MKL and MIC accelerators
 ------------------------
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md
index ab6defdbe..ccf79fef0 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md
@@ -3,7 +3,7 @@ Intel TBB
 
 Intel Threading Building Blocks
 -------------------------------
-Intel Threading Building Blocks (Intel TBB) is a library that supports scalable parallel programming using standard ISO C++ code. It does not require special languages or compilers.  To use the library, you specify tasks, not threads, and let the library map tasks onto threads in an efficient manner. The tasks are executed by a runtime scheduler and may
+Intel Threading Building Blocks (Intel TBB) is a library that supports scalable parallel programming using standard ISO C++ code. It does not require special languages or compilers. To use the library, you specify tasks, not threads, and let the library map tasks onto threads in an efficient manner. The tasks are executed by a runtime scheduler and may
 be offloaded to [MIC accelerator](../intel-xeon-phi/).
 
 Intel TBB version 4.1 is available on Anselm
@@ -19,7 +19,7 @@ The module sets up environment variables, required for linking and running tbb e
 
 Examples
 --------
-Number of examples, demonstrating use of TBB and its built-in scheduler is available on Anselm, in the $TBB_EXAMPLES directory.
+A number of examples demonstrating the use of TBB and its built-in scheduler are available on Anselm, in the $TBB_EXAMPLES directory.
 
 ```bash
     $ module load intel
diff --git a/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md b/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
index 518d629e3..2303a969b 100644
--- a/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
+++ b/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
@@ -29,7 +29,7 @@ For each license there is a unique text file, which provides the information abo
 |comsol|/apps/user/licenses/comsol_features_state.txt|Commercial|
 |comsol-edu|/apps/user/licenses/comsol-edu_features_state.txt|Non-commercial only|
 |matlab|/apps/user/licenses/matlab_features_state.txt|Commercial|
-|matlab-edu|/apps/user/licenses/matlab-edu_features_state.txt|Non-commercial only|
+|matlab-edu|/apps/user/licenses/matlab-edu_features_state.txt|Non-commercial only|
 
 The file has a header which serves as a legend. All the info in the legend starts with a hash (#) so it can be easily filtered when parsing the file via a script.
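+
+For example, a quick way to skip the legend when parsing the file in a script (using the matlab-edu file from the table above):
+
+```bash
+    $ grep -v '^#' /apps/user/licenses/matlab-edu_features_state.txt
+```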
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/java.md b/docs.it4i/anselm-cluster-documentation/software/java.md
index 4b708c33e..7159f790b 100644
--- a/docs.it4i/anselm-cluster-documentation/software/java.md
+++ b/docs.it4i/anselm-cluster-documentation/software/java.md
@@ -25,5 +25,5 @@ With the module loaded, not only the runtime environment (JRE), but also the dev
     $ which javac
 ```
 
-Java applications may use MPI for inter-process communication, in conjunction with OpenMPI. Read more on <http://www.open-mpi.org/faq/?category=java>. This functionality is currently not supported on Anselm cluster. In case you require the java interface to MPI, please contact [Anselm support](https://support.it4i.cz/rt/).
+Java applications may use MPI for inter-process communication, in conjunction with OpenMPI. Read more on <http://www.open-mpi.org/faq/?category=java>. This functionality is currently not supported on the Anselm cluster. In case you require the Java interface to MPI, please contact [Anselm support](https://support.it4i.cz/rt/).
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md b/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md
index 1c8eca589..508fe8bf9 100644
--- a/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md
+++ b/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md
@@ -43,7 +43,7 @@ IT4Innovations does not provide any licenses for operating systems and software
 !!! Note "Note"
 	Users are responsible for licensing OS e.g. MS Windows and all software running in their virtual machines.
 
- HOWTO
+HOWTO
 ----------
 
 ### Virtual Machine Job Workflow
@@ -125,19 +125,19 @@ Example startup script for Windows virtual machine:
     echo. >%LOG%
 
     if exist %MAPDRIVE% (
-      echo %DATE% %TIME% The drive "%MAPDRIVE%" exists>%LOG%
+      echo %DATE% %TIME% The drive "%MAPDRIVE%" exists>%LOG%
 
-      if exist %SCRIPT% (
-        echo %DATE% %TIME% The script file "%SCRIPT%"exists>%LOG%
-        echo %DATE% %TIME% Running script %SCRIPT%>%LOG%
-        set TIMEOUT=0
-        call %SCRIPT%
-      ) else (
-        echo %DATE% %TIME% The script file "%SCRIPT%"does not exist>%LOG%
-      )
+      if exist %SCRIPT% (
+        echo %DATE% %TIME% The script file "%SCRIPT%"exists>%LOG%
+        echo %DATE% %TIME% Running script %SCRIPT%>%LOG%
+        set TIMEOUT=0
+        call %SCRIPT%
+      ) else (
+        echo %DATE% %TIME% The script file "%SCRIPT%"does not exist>%LOG%
+      )
 
     ) else (
-      echo %DATE% %TIME% The drive "%MAPDRIVE%" does not exist>%LOG%
+      echo %DATE% %TIME% The drive "%MAPDRIVE%" does not exist>%LOG%
     )
     echo. >%LOG%
 
@@ -177,18 +177,18 @@ Example job for Windows virtual machine:
     export TMPDIR=/lscratch/${PBS_JOBID}
     module add qemu
     qemu-system-x86_64 \
-      -enable-kvm
-      -cpu host
-      -smp ${VM_SMP}
-      -m ${VM_MEMORY}
-      -vga std
-      -localtime
-      -usb -usbdevice tablet
-      -device virtio-net-pci,netdev=net0
-      -netdev user,id=net0,smb=${JOB_DIR},hostfwd=tcp::3389-:3389
-      -drive file=${VM_IMAGE},media=disk,if=virtio
-      -snapshot
-      -nographic
+      -enable-kvm \
+      -cpu host \
+      -smp ${VM_SMP} \
+      -m ${VM_MEMORY} \
+      -vga std \
+      -localtime \
+      -usb -usbdevice tablet \
+      -device virtio-net-pci,netdev=net0 \
+      -netdev user,id=net0,smb=${JOB_DIR},hostfwd=tcp::3389-:3389 \
+      -drive file=${VM_IMAGE},media=disk,if=virtio \
+      -snapshot \
+      -nographic
 ```
 
 The job script links the application data (win), input data (data) and the run script (run.bat) into the job directory and runs the virtual machine.
@@ -201,11 +201,11 @@ Example run script (run.bat) for Windows virtual machine:
     call application.bat z:data z:output
 ```
 
-Run script runs application from shared job directory (mapped as drive z:), process input data (z:data) from job directory  and store output to job directory (z:output).
+The run script runs the application from the shared job directory (mapped as drive z:), processes input data (z:data) from the job directory and stores output to the job directory (z:output).
 
 ### Run jobs
 
-Run jobs as usual, see  [Resource Allocation and Job Execution](../../resource-allocation-and-job-execution/introduction/). Use only full node allocation for virtualization jobs.
+Run jobs as usual, see [Resource Allocation and Job Execution](../../resource-allocation-and-job-execution/introduction/). Use only full node allocation for virtualization jobs.
 
 ### Running Virtual Machines
 
@@ -235,9 +235,9 @@ You can access virtual machine by VNC viewer (option -vnc) connecting to IP addr
 Install virtual machine from ISO file
 
 ```bash
-    $ qemu-system-x86_64 -hda linux.img -enable-kvm -cpu host -smp 16 -m 32768 -vga std -cdrom linux-install.iso -boot d -vnc :0
+    $ qemu-system-x86_64 -hda linux.img -enable-kvm -cpu host -smp 16 -m 32768 -vga std -cdrom linux-install.iso -boot d -vnc :0
 
-    $ qemu-system-x86_64 -hda win.img   -enable-kvm -cpu host -smp 16 -m 32768 -vga std -localtime -usb -usbdevice tablet -cdrom win-install.iso -boot d -vnc :0
+    $ qemu-system-x86_64 -hda win.img   -enable-kvm -cpu host -smp 16 -m 32768 -vga std -localtime -usb -usbdevice tablet -cdrom win-install.iso -boot d -vnc :0
 ```
 
 Run virtual machine using optimized devices, user network back-end with sharing and port forwarding, in snapshot mode
diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md b/docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md
index 0ca699766..4f0168872 100644
--- a/docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md
+++ b/docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md
@@ -3,7 +3,7 @@ Running OpenMPI
 
 OpenMPI program execution
 -------------------------
-The OpenMPI programs may be executed only via the PBS Workload manager, by entering an appropriate queue. On Anselm, the **bullxmpi-1.2.4.1** and **OpenMPI 1.6.5** are OpenMPI based MPI implementations.
+The OpenMPI programs may be executed only via the PBS workload manager, by entering an appropriate queue. On Anselm, **bullxmpi-1.2.4.1** and **OpenMPI 1.6.5** are the available OpenMPI-based MPI implementations.
 
 ### Basic usage
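+
+A minimal sketch of the typical workflow (queue name, resource selection, module name and executable are placeholders; see the examples below for the recommended options):
+
+```bash
+    $ qsub -q qexp -l select=4:ncpus=16:mpiprocs=16 -I
+    $ module load bullxmpi
+    $ mpiexec ./helloworld_mpi.x
+```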
 
@@ -99,7 +99,7 @@ In this example, we demonstrate recommended way to run an MPI application, using
 ### OpenMP thread affinity
 
 !!! Note "Note"
-	Important!  Bind every OpenMP thread to a core!
+	Important!  Bind every OpenMP thread to a core!
 
 In the previous two examples with one or two MPI processes per node, the operating system might still migrate OpenMP threads between cores. You might want to avoid this by setting these environment variables for GCC OpenMP:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md
index 6892dafa2..5f81e8ee4 100644
--- a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md
+++ b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md
@@ -7,9 +7,9 @@ The Anselm cluster provides several implementations of the MPI library:
 
  |MPI Library |Thread support |
  | --- | --- |
- |The highly optimized and stable **bullxmpi 1.2.4.1** |Partial thread support up to MPI_THREAD_SERIALIZED |
+ |The highly optimized and stable **bullxmpi 1.2.4.1** |Partial thread support up to MPI_THREAD_SERIALIZED |
  |The **Intel MPI 4.1** |Full thread support up to MPI_THREAD_MULTIPLE |
- |The [OpenMPI 1.6.5](href="http://www.open-mpi.org)| Full thread support up to MPI_THREAD_MULTIPLE, BLCR c/r support |
+ |The [OpenMPI 1.6.5](http://www.open-mpi.org)| Full thread support up to MPI_THREAD_MULTIPLE, BLCR c/r support |
  |The OpenMPI 1.8.1 |Full thread support up to MPI_THREAD_MULTIPLE, MPI-3.0 support |
  |The **mpich2 1.9** |Full thread support up to MPI_THREAD_MULTIPLE, BLCR c/r support |
 
@@ -127,12 +127,12 @@ Optimal way to run an MPI program depends on its memory requirements, memory acc
 	2. Two MPI processes per node, 8 threads per process
 	3. 16 MPI processes per node, 1 thread per process.
 
-**One MPI** process per node, using 16 threads, is most useful for memory demanding applications, that make good use of processor cache memory and are not memory bound.  This is also a preferred way for communication intensive applications as one process per node enjoys full bandwidth access to the network interface.
+**One MPI** process per node, using 16 threads, is most useful for memory-demanding applications that make good use of processor cache memory and are not memory bound. This is also the preferred way for communication-intensive applications, as one process per node enjoys full-bandwidth access to the network interface.
 
 **Two MPI** processes per node, using 8 threads each, bound to a processor socket, is most useful for memory-bandwidth-bound applications such as BLAS1 or FFT with scalable memory demand. However, note that the two processes will share access to the network interface. The 8 threads and socket binding should ensure maximum memory access bandwidth and minimize communication, migration and NUMA effect overheads.
 
 !!! Note "Note"
-	Important!  Bind every OpenMP thread to a core!
+	Important!  Bind every OpenMP thread to a core!
 
 In the previous two cases with one or two MPI processes per node, the operating system might still migrate OpenMP threads between cores. You want to avoid this by setting the KMP_AFFINITY or GOMP_CPU_AFFINITY environment variables.
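+
+For illustration, a typical pinning setup might look as follows (the affinity strings are examples only; KMP_AFFINITY applies to the Intel OpenMP runtime, GOMP_CPU_AFFINITY to GCC):
+
+```bash
+    $ export KMP_AFFINITY=granularity=fine,compact,1,0   # Intel OpenMP runtime
+    $ export GOMP_CPU_AFFINITY="0-15"                    # GCC OpenMP runtime
+```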
 
@@ -140,7 +140,7 @@ In the previous two cases with one or two MPI processes per node, the operating
 
 ### Running OpenMPI
 
-The **bullxmpi-1.2.4.1** and [**OpenMPI 1.6.5**](http://www.open-mpi.org/) are both based on OpenMPI. Read more on [how to run OpenMPI](Running_OpenMPI/) based MPI.
+The **bullxmpi-1.2.4.1** and [**OpenMPI 1.6.5**](http://www.open-mpi.org/) are both based on OpenMPI. Read more on [how to run OpenMPI-based MPI](Running_OpenMPI/).
 
 ### Running MPICH2
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md
index 08ec7b468..6c79215c0 100644
--- a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md
+++ b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md
@@ -50,7 +50,7 @@ Examples
 
     print "Hello! I'm rank %d from %d running in total..." % (comm.rank, comm.size)
 
-    comm.Barrier()   # wait for everybody to synchronize
+    comm.Barrier()   # wait for everybody to synchronize
 ```
 
 ### Collective Communication with NumPy arrays
@@ -71,9 +71,9 @@ Examples
     # Prepare a vector of N=5 elements to be broadcasted...
     N = 5
     if comm.rank == 0:
-        A = np.arange(N, dtype=np.float64)    # rank 0 has proper data
+        A = np.arange(N, dtype=np.float64)    # rank 0 has proper data
     else:
-        A = np.empty(N, dtype=np.float64)     # all other just an empty array
+        A = np.empty(N, dtype=np.float64)     # all others just an empty array
 
     # Broadcast A from rank 0 to everybody
     comm.Bcast( [A, MPI.DOUBLE] )
diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md b/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md
index 52653f029..d5cf06387 100644
--- a/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md
+++ b/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md
@@ -42,7 +42,7 @@ You need to preload the executable, if running on the local scratch /lscratch fi
     Hello world! from rank 3 of 4 on host cn110
 ```
 
-In this example, we assume the executable helloworld_mpi.x is present on shared home directory. We run the cp command via mpirun, copying the executable from shared home to local scratch . Second  mpirun will execute the binary in the /lscratch/15210.srv11 directory on nodes cn17, cn108, cn109 and cn110, one process per node.
+In this example, we assume the executable helloworld_mpi.x is present in the shared home directory. We run the cp command via mpirun, copying the executable from the shared home to the local scratch. The second mpirun will execute the binary in the /lscratch/15210.srv11 directory on nodes cn17, cn108, cn109 and cn110, one process per node.
 
 !!! Note "Note"
 	MPI process mapping may be controlled by PBS parameters.
@@ -94,7 +94,7 @@ In this example, we demonstrate recommended way to run an MPI application, using
 ### OpenMP thread affinity
 
 !!! Note "Note"
-	Important!  Bind every OpenMP thread to a core!
+	Important!  Bind every OpenMP thread to a core!
 
 In the previous two examples with one or two MPI processes per node, the operating system might still migrate OpenMP threads between cores. You might want to avoid this by setting these environment variables for GCC OpenMP:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
index b8a9f5901..dbe107990 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
@@ -24,7 +24,7 @@ If you need to use the Matlab GUI to prepare your Matlab programs, you can use M
 
 If you require the Matlab GUI, please follow the general information about [running graphical applications](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/).
 
-Matlab GUI is quite slow using the X forwarding built in the PBS (qsub -X), so using X11 display redirection either via SSH or directly by xauth (please see the "GUI Applications on Compute Nodes over VNC" part [here](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-system/)) is recommended.
+The Matlab GUI is quite slow when using the X forwarding built into PBS (qsub -X), so using X11 display redirection either via SSH or directly by xauth (please see the "GUI Applications on Compute Nodes over VNC" part [here](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-system/)) is recommended.
 
 To run Matlab with GUI, use
 
@@ -45,7 +45,7 @@ Running parallel Matlab using Distributed Computing Toolbox / Engine
 !!! Note "Note"
 	Distributed toolbox is available only for the EDU variant
 
-The MPIEXEC mode available in previous versions is no longer available in MATLAB 2015. Also, the programming interface has changed. Refer to [Release Notes](http://www.mathworks.com/help/distcomp/release-notes.html#buanp9e-1).
+The MPIEXEC mode available in previous versions is no longer available in MATLAB 2015. Also, the programming interface has changed. Refer to [Release Notes](http://www.mathworks.com/help/distcomp/release-notes.html#buanp9e-1).
 
 Delete the previously used file mpiLibConf.m; we have observed crashes when using Intel MPI.
 
@@ -116,7 +116,7 @@ To run matlab in batch mode, write an matlab script, then write a bash jobscript
     cp output.out $PBS_O_WORKDIR/.
 ```
 
-This script may be submitted directly to the PBS workload manager via the qsub command.  The inputs and matlab script are in matlabcode.m file, outputs in output.out file. Note the missing .m extension in the matlab -r matlabcodefile call, **the .m must not be included**. Note that the **shared /scratch must be used**. Further, it is **important to include quit** statement at the end of the matlabcode.m script.
+This script may be submitted directly to the PBS workload manager via the qsub command. The inputs and the matlab script are in the matlabcode.m file, the outputs in the output.out file. Note the missing .m extension in the matlab -r matlabcodefile call, **the .m must not be included**. Note that the **shared /scratch must be used**. Further, it is **important to include the quit** statement at the end of the matlabcode.m script.
 
 Submit the jobscript using qsub
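+
+For example (assuming the jobscript above is saved as jobscript and PROJECT-ID stands for your accounting project):
+
+```bash
+    $ qsub -A PROJECT-ID -q qprod jobscript
+```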
 
@@ -165,11 +165,11 @@ The complete example showing how to use Distributed Computing Toolbox in local m
     spmd
     [~, name] = system('hostname')
 
-        T = W*x; % Calculation performed on labs, in parallel.
-                 % T and W are both codistributed arrays here.
+        T = W*x; % Calculation performed on labs, in parallel.
+                 % T and W are both codistributed arrays here.
     end
     T;
-    whos         % T and W are both distributed arrays here.
+    whos         % T and W are both distributed arrays here.
 
     parpool close
     quit
@@ -219,7 +219,7 @@ This method is a "hack" invented by us to emulate the mpiexec functionality foun
 
 Please note that this method is experimental.
 
-For this method, you need to use SalomonDirect profile, import it using [the same way as SalomonPBSPro](matlab/#running-parallel-matlab-using-distributed-computing-toolbox---engine)
+For this method, you need to use the SalomonDirect profile; import it in [the same way as SalomonPBSPro](matlab/#running-parallel-matlab-using-distributed-computing-toolbox---engine).
 
 This is an example of m-script using direct mode:
 
@@ -273,7 +273,7 @@ You can use MATLAB on UV2000 in two parallel modes:
 
 ### Threaded mode
 
-Since this is a SMP machine, you can completely avoid using Parallel Toolbox and use only MATLAB's threading. MATLAB will automatically detect the number of cores you have allocated and will set maxNumCompThreads accordingly and certain operations, such as  fft, , eig, svd, etc. will be automatically run in threads. The advantage of this mode is that you don't need to modify your existing sequential codes.
+Since this is an SMP machine, you can completely avoid using the Parallel Toolbox and use only MATLAB's threading. MATLAB will automatically detect the number of cores you have allocated and will set maxNumCompThreads accordingly, and certain operations, such as fft, eig, svd, etc., will automatically run in threads. The advantage of this mode is that you don't need to modify your existing sequential codes.
 
 ### Local cluster mode
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md
index 2398bbb3f..84b2897ea 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md
@@ -27,7 +27,7 @@ If you need to use the Matlab GUI to prepare your Matlab programs, you can use M
 
 If you require the Matlab GUI, please follow the general information about running graphical applications
 
-Matlab GUI is quite slow using the X forwarding built in the PBS (qsub -X), so using X11 display redirection either via SSH or directly by xauth (please see the "GUI Applications on Compute Nodes over VNC" part) is recommended.
+The Matlab GUI is quite slow when using the X forwarding built into PBS (qsub -X), so using X11 display redirection either via SSH or directly by xauth (please see the "GUI Applications on Compute Nodes over VNC" part) is recommended.
 
 To run Matlab with GUI, use
 
@@ -71,7 +71,7 @@ For the performance reasons Matlab should use system MPI. On Anselm the supporte
 The system MPI library allows Matlab to communicate through the 40 Gbit/s InfiniBand QDR interconnect instead of the slower 1 Gbit Ethernet network.
 
 !!! Note "Note"
-	Please note: The path to MPI library in "mpiLibConf.m" has to match with version of loaded Intel MPI module. In this example the version 4.1.1.036 of Intel MPI is used by Matlab and therefore module impi/4.1.1.036  has to be loaded prior to starting Matlab.
+	Please note: The path to the MPI library in "mpiLibConf.m" has to match the version of the loaded Intel MPI module. In this example, version 4.1.1.036 of Intel MPI is used by Matlab, and therefore the module impi/4.1.1.036 has to be loaded prior to starting Matlab.
 
 ### Parallel Matlab interactive session
 
@@ -123,7 +123,7 @@ To run matlab in batch mode, write an matlab script, then write a bash jobscript
     cp output.out $PBS_O_WORKDIR/.
 ```
 
-This script may be submitted directly to the PBS workload manager via the qsub command.  The inputs and matlab script are in matlabcode.m file, outputs in output.out file. Note the missing .m extension in the matlab -r matlabcodefile call, **the .m must not be included**.  Note that the **shared /scratch must be used**. Further, it is **important to include quit** statement at the end of the matlabcode.m script.
+This script may be submitted directly to the PBS workload manager via the qsub command. The inputs and the matlab script are in the matlabcode.m file, the outputs in the output.out file. Note the missing .m extension in the matlab -r matlabcodefile call, **the .m must not be included**. Note that the **shared /scratch must be used**. Further, it is **important to include the quit** statement at the end of the matlabcode.m script.
 
 Submit the jobscript using qsub
 
@@ -178,11 +178,11 @@ The complete example showing how to use Distributed Computing Toolbox is show he
     spmd
     [~, name] = system('hostname')
 
-        T = W*x; % Calculation performed on labs, in parallel.
-                 % T and W are both codistributed arrays here.
+        T = W*x; % Calculation performed on labs, in parallel.
+                 % T and W are both codistributed arrays here.
     end
     T;
-    whos         % T and W are both distributed arrays here.
+    whos         % T and W are both distributed arrays here.
 
     matlabpool close
     quit
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
index f77fbe5cf..db07503af 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
@@ -10,10 +10,10 @@ Two versions of octave are available on Anselm, via module
 |Version|module|
 |---|---|
 |Octave 3.8.2, compiled with GCC and Multithreaded MKL|Octave/3.8.2-gimkl-2.11.5|
-|Octave 4.0.1, compiled with GCC and Multithreaded MKL|Octave/4.0.1-gimkl-2.11.5|
-|Octave 4.0.0, compiled with >GCC and OpenBLAS|Octave/4.0.0-foss-2015g|
+|Octave 4.0.1, compiled with GCC and Multithreaded MKL|Octave/4.0.1-gimkl-2.11.5|
+|Octave 4.0.0, compiled with GCC and OpenBLAS|Octave/4.0.0-foss-2015g|
 
- Modules and execution
+Modules and execution
 ----------------------
 
     $ module load Octave
@@ -50,7 +50,7 @@ To run octave in batch mode, write an octave script, then write a bash jobscript
     exit
 ```
 
-This script may be submitted directly to the PBS workload manager via the qsub command.  The inputs are in octcode.m file, outputs in output.out file. See the single node jobscript example in the [Job execution section](http://support.it4i.cz/docs/anselm-cluster-documentation/resource-allocation-and-job-execution).
+This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the octcode.m file, the outputs in the output.out file. See the single-node jobscript example in the [Job execution section](http://support.it4i.cz/docs/anselm-cluster-documentation/resource-allocation-and-job-execution).
 
 The Octave C compiler mkoctfile calls GNU gcc 4.8.1 for compiling native C code. This is very useful for running native C subroutines in the Octave environment.
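+
+A small sketch of the workflow (myfunction.cc is an illustrative oct-file source; mkoctfile produces a loadable myfunction.oct file):
+
+```bash
+    $ mkoctfile myfunction.cc
+    $ octave -q --eval "myfunction(1)"
+```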
 
@@ -62,7 +62,7 @@ Octave may use MPI for interprocess communication This functionality is currentl
 
 Xeon Phi Support
 ----------------
-Octave may take advantage of the Xeon Phi accelerators. This will only work on the  [Intel Xeon Phi](../intel-xeon-phi/) [accelerated nodes](../../compute-nodes/).
+Octave may take advantage of the Xeon Phi accelerators. This will only work on the [Intel Xeon Phi](../intel-xeon-phi/) [accelerated nodes](../../compute-nodes/).
 
 ### Automatic offload support
 
@@ -77,12 +77,12 @@ Example
     $ octave -q
     octave:1> A=rand(10000); B=rand(10000);
     octave:2> tic; C=A*B; toc
-    [MKL] [MIC --] [AO Function]    DGEMM
-    [MKL] [MIC --] [AO DGEMM Workdivision]    0.32 0.68
-    [MKL] [MIC 00] [AO DGEMM CPU Time]    2.896003 seconds
-    [MKL] [MIC 00] [AO DGEMM MIC Time]    1.967384 seconds
-    [MKL] [MIC 00] [AO DGEMM CPU->MIC Data]    1347200000 bytes
-    [MKL] [MIC 00] [AO DGEMM MIC->CPU Data]    2188800000 bytes
+    [MKL] [MIC --] [AO Function]    DGEMM
+    [MKL] [MIC --] [AO DGEMM Workdivision]    0.32 0.68
+    [MKL] [MIC 00] [AO DGEMM CPU Time]    2.896003 seconds
+    [MKL] [MIC 00] [AO DGEMM MIC Time]    1.967384 seconds
+    [MKL] [MIC 00] [AO DGEMM CPU->MIC Data]    1347200000 bytes
+    [MKL] [MIC 00] [AO DGEMM MIC->CPU Data]    2188800000 bytes
     Elapsed time is 2.93701 seconds.
 ```
 
@@ -95,7 +95,7 @@ A version of [native](../intel-xeon-phi/#section-4) Octave is compiled for Xeon
 -   Only command line support. GUI, graph plotting etc. is not supported.
 -   Command history in interactive mode is not supported.
 
-Octave is linked with parallel Intel MKL, so it best suited for batch processing of tasks that utilize BLAS, LAPACK and FFT operations. By default, number of threads is set to 120, you can control this with > OMP_NUM_THREADS environment
+Octave is linked with parallel Intel MKL, so it is best suited for batch processing of tasks that utilize BLAS, LAPACK and FFT operations. By default, the number of threads is set to 120; you can control this with the OMP_NUM_THREADS environment
 variable.
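+
+For instance (the thread count and the script name are placeholders):
+
+```bash
+    $ export OMP_NUM_THREADS=60
+    $ octave -q ./myscript.m
+```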
 
 !!! Note "Note"
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md
index 88fe95e72..238e7106b 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md
@@ -3,7 +3,7 @@ R
 
 Introduction
 ------------
-The R is a language and environment for statistical computing and graphics.  R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, ...) and graphical techniques, and is highly extensible.
+R is a language and environment for statistical computing and graphics. R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, ...) and graphical techniques, and is highly extensible.
 
 One of R's strengths is the ease with which well-designed publication-quality plots can be produced, including mathematical symbols and formulae where needed. Great care has been taken over the defaults for the minor design choices in graphics, but the user retains full control.
 
@@ -67,11 +67,11 @@ Example jobscript:
     exit
 ```
 
-This script may be submitted directly to the PBS workload manager via the qsub command.  The inputs are in rscript.R file, outputs in routput.out file. See the single node jobscript example in the [Job execution section](../../resource-allocation-and-job-execution/job-submission-and-execution/).
+This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the rscript.R file, the outputs in the routput.out file. See the single-node jobscript example in the [Job execution section](../../resource-allocation-and-job-execution/job-submission-and-execution/).
 
 Parallel R
 ----------
-Parallel execution of R may be achieved in many ways. One approach is the implied parallelization due to linked libraries or specially enabled functions, as [described above](r/#interactive-execution). In the following sections, we focus on explicit parallelization, where  parallel constructs are directly stated within the R script.
+Parallel execution of R may be achieved in many ways. One approach is the implied parallelization due to linked libraries or specially enabled functions, as [described above](r/#interactive-execution). In the following sections, we focus on explicit parallelization, where parallel constructs are directly stated within the R script.
 
 Package parallel
 --------------------
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md
index 44337602d..8920a402f 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md
@@ -3,7 +3,7 @@ FFTW
 
 The discrete Fourier transform in one or more dimensions, MPI parallel
 
-FFTW is a C subroutine library for computing the discrete Fourier transform  in one or more dimensions, of arbitrary input size, and of both real and complex data (as well as of even/odd data, e.g. the discrete cosine/sine transforms or DCT/DST). The FFTW library allows for MPI parallel, in-place discrete Fourier transform, with data distributed over number of nodes.
+FFTW is a C subroutine library for computing the discrete Fourier transform in one or more dimensions, of arbitrary input size, and of both real and complex data (as well as of even/odd data, e.g. the discrete cosine/sine transforms or DCT/DST). The FFTW library allows for an MPI parallel, in-place discrete Fourier transform, with data distributed over a number of nodes.
 
 Two versions, **3.3.3** and **2.1.5** of FFTW are available on Anselm, each compiled for **Intel MPI** and **OpenMPI** using **intel** and **gnu** compilers. These are available via modules:
 
@@ -31,34 +31,34 @@ Example
     #include <fftw3-mpi.h>
     int main(int argc, char **argv)
     {
-        const ptrdiff_t N0 = 100, N1 = 1000;
-        fftw_plan plan;
-        fftw_complex *data;
-        ptrdiff_t alloc_local, local_n0, local_0_start, i, j;
+        const ptrdiff_t N0 = 100, N1 = 1000;
+        fftw_plan plan;
+        fftw_complex *data;
+        ptrdiff_t alloc_local, local_n0, local_0_start, i, j;
 
-        MPI_Init(&argc, &argv);
-        fftw_mpi_init();
+        MPI_Init(&argc, &argv);
+        fftw_mpi_init();
 
-        /* get local data size and allocate */
-        alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
-                                             &local_n0, &local_0_start);
-        data = fftw_alloc_complex(alloc_local);
+        /* get local data size and allocate */
+        alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
+                                             &local_n0, &local_0_start);
+        data = fftw_alloc_complex(alloc_local);
 
-        /* create plan for in-place forward DFT */
-        plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
-                                    FFTW_FORWARD, FFTW_ESTIMATE);
+        /* create plan for in-place forward DFT */
+        plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
+                                    FFTW_FORWARD, FFTW_ESTIMATE);
 
-        /* initialize data  */
-        for (i = 0; i < local_n0; ++i) for (j = 0; j < N1; ++j)
-        {   data[i*N1 + j][0] = i;
-            data[i*N1 + j][1] = j; }
+        /* initialize data  */
+        for (i = 0; i < local_n0; ++i) for (j = 0; j < N1; ++j)
+        {   data[i*N1 + j][0] = i;
+            data[i*N1 + j][1] = j; }
 
-        /* compute transforms, in-place, as many times as desired */
-        fftw_execute(plan);
+        /* compute transforms, in-place, as many times as desired */
+        fftw_execute(plan);
 
-        fftw_destroy_plan(plan);
+        fftw_destroy_plan(plan);
 
-        MPI_Finalize();
+        MPI_Finalize();
     }
 ```
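+
+A compile-and-run sketch for the example above (module and file names are indicative only; check the module table above for the exact FFTW, MPI and compiler combination):
+
+```bash
+    $ module load fftw3-mpi
+    $ mpicc fftw_mpi_test.c -o fftw_mpi_test -lfftw3_mpi -lfftw3 -lm
+    $ mpirun -np 32 ./fftw_mpi_test
+```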
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md
index 172858092..ab2efcd36 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md
@@ -55,7 +55,7 @@ The GSL 1.16 is available on Anselm, compiled for GNU and Intel compiler. These
 |gsl/1.16-icc(default)|icc|
 
 ```bash
-     $ module load gsl
+     $ module load gsl
 ```
 
 The module sets up environment variables required for linking and running GSL-enabled applications. This particular command loads the default module, which is gsl/1.16-icc.
@@ -94,46 +94,46 @@ Following is an example of discrete wavelet transform implemented by GSL:
     int
     main (int argc, char **argv)
     {
-      int i, n = 256, nc = 20;
-      double *data = malloc (n * sizeof (double));
-      double *abscoeff = malloc (n * sizeof (double));
-      size_t *p = malloc (n * sizeof (size_t));
+      int i, n = 256, nc = 20;
+      double *data = malloc (n * sizeof (double));
+      double *abscoeff = malloc (n * sizeof (double));
+      size_t *p = malloc (n * sizeof (size_t));
 
-      gsl_wavelet *w;
-      gsl_wavelet_workspace *work;
+      gsl_wavelet *w;
+      gsl_wavelet_workspace *work;
 
-      w = gsl_wavelet_alloc (gsl_wavelet_daubechies, 4);
-      work = gsl_wavelet_workspace_alloc (n);
+      w = gsl_wavelet_alloc (gsl_wavelet_daubechies, 4);
+      work = gsl_wavelet_workspace_alloc (n);
 
-      for (i=0; i<n; i++)
-      data[i] = sin (3.141592654*(double)i/256.0);
+      for (i=0; i<n; i++)
+      data[i] = sin (3.141592654*(double)i/256.0);
 
-      gsl_wavelet_transform_forward (w, data, 1, n, work);
+      gsl_wavelet_transform_forward (w, data, 1, n, work);
 
-      for (i = 0; i < n; i++)
-        {
-          abscoeff[i] = fabs (data[i]);
-        }
+      for (i = 0; i < n; i++)
+        {
+          abscoeff[i] = fabs (data[i]);
+        }
 
-      gsl_sort_index (p, abscoeff, 1, n);
+      gsl_sort_index (p, abscoeff, 1, n);
 
-      for (i = 0; (i + nc) < n; i++)
-        data[p[i]] = 0;
+      for (i = 0; (i + nc) < n; i++)
+        data[p[i]] = 0;
 
-      gsl_wavelet_transform_inverse (w, data, 1, n, work);
+      gsl_wavelet_transform_inverse (w, data, 1, n, work);
 
-      for (i = 0; i < n; i++)
-        {
-          printf ("%gn", data[i]);
-        }
+      for (i = 0; i < n; i++)
+        {
+          printf ("%gn", data[i]);
+        }
 
-      gsl_wavelet_free (w);
-      gsl_wavelet_workspace_free (work);
+      gsl_wavelet_free (w);
+      gsl_wavelet_workspace_free (work);
 
-      free (data);
-      free (abscoeff);
-      free (p);
-      return 0;
+      free (data);
+      free (abscoeff);
+      free (p);
+      return 0;
     }
 ```
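+
+A compile-and-run sketch for the example above (dwt.c is an illustrative file name; with the icc build you may prefer linking MKL instead of the reference -lgslcblas):
+
+```bash
+    $ module load gsl
+    $ icc dwt.c -o dwt -lgsl -lgslcblas -lm
+    $ ./dwt > dwt.dat
+```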
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md
index dd88fef72..e8c956b5d 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md
@@ -24,7 +24,7 @@ Compilation example:
 ```bash
     $ icc -mkl -O3 -DHAVE_MIC -DADD_ -Wall $MAGMA_INC -c testing_dgetrf_mic.cpp -o testing_dgetrf_mic.o
 
-    $ icc -mkl -O3 -DHAVE_MIC -DADD_ -Wall -fPIC -Xlinker -zmuldefs -Wall -DNOCHANGE -DHOST  testing_dgetrf_mic.o  -o testing_dgetrf_mic $MAGMA_LIBS
+    $ icc -mkl -O3 -DHAVE_MIC -DADD_ -Wall -fPIC -Xlinker -zmuldefs -Wall -DNOCHANGE -DHOST  testing_dgetrf_mic.o  -o testing_dgetrf_mic $MAGMA_LIBS
 ```
 
 ### Running MAGMA code
@@ -53,18 +53,18 @@ To test if the MAGMA server runs properly we can run one of examples that are pa
     [lriha@cn204 ~]$ $MAGMAROOT/testing/testing_dgetrf_mic
     Usage: /apps/libs/magma-mic/magmamic-1.3.0/testing/testing_dgetrf_mic [options] [-h|--help]
 
-      M     N     CPU GFlop/s (sec)   MAGMA GFlop/s (sec)   ||PA-LU||/(||A||*N)
+      M     N     CPU GFlop/s (sec)   MAGMA GFlop/s (sec)   ||PA-LU||/(||A||*N)
     =========================================================================
-     1088  1088     ---   (  ---  )     13.93 (   0.06)     ---
-     2112  2112     ---   (  ---  )     77.85 (   0.08)     ---
-     3136  3136     ---   (  ---  )    183.21 (   0.11)     ---
-     4160  4160     ---   (  ---  )    227.52 (   0.21)     ---
-     5184  5184     ---   (  ---  )    258.61 (   0.36)     ---
-     6208  6208     ---   (  ---  )    333.12 (   0.48)     ---
-     7232  7232     ---   (  ---  )    416.52 (   0.61)     ---
-     8256  8256     ---   (  ---  )    446.97 (   0.84)     ---
-     9280  9280     ---   (  ---  )    461.15 (   1.16)     ---
-    10304 10304     ---   (  ---  )    500.70 (   1.46)     ---
+     1088  1088     ---   (  ---  )     13.93 (   0.06)     ---
+     2112  2112     ---   (  ---  )     77.85 (   0.08)     ---
+     3136  3136     ---   (  ---  )    183.21 (   0.11)     ---
+     4160  4160     ---   (  ---  )    227.52 (   0.21)     ---
+     5184  5184     ---   (  ---  )    258.61 (   0.36)     ---
+     6208  6208     ---   (  ---  )    333.12 (   0.48)     ---
+     7232  7232     ---   (  ---  )    416.52 (   0.61)     ---
+     8256  8256     ---   (  ---  )    446.97 (   0.84)     ---
+     9280  9280     ---   (  ---  )    461.15 (   1.16)     ---
+    10304 10304     ---   (  ---  )    500.70 (   1.46)     ---
 ```
 
 !!! Note "Note"
@@ -80,4 +80,4 @@ See more details at [MAGMA home page](http://icl.cs.utk.edu/magma/).
 
 References
 ----------
-[1] MAGMA MIC: Linear Algebra Library for Intel Xeon Phi Coprocessors, Jack Dongarra et. al, [http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf](http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf)
+[1] MAGMA MIC: Linear Algebra Library for Intel Xeon Phi Coprocessors, Jack Dongarra et al., [http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf](http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf)
diff --git a/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md b/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md
index 015750845..216167428 100644
--- a/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md
+++ b/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md
@@ -11,7 +11,7 @@ The default programming model for GPU accelerators on Anselm is Nvidia CUDA. To
     $ module load cuda
 ```
 
-If the user code is hybrid and uses both CUDA and MPI, the MPI environment has to be set up as well. One way to do this is to use the PrgEnv-gnu module, which sets up correct combination of GNU compiler and MPI library.
+If the user code is hybrid and uses both CUDA and MPI, the MPI environment has to be set up as well. One way to do this is to use the PrgEnv-gnu module, which sets up the correct combination of the GNU compiler and an MPI library.
 
 ```bash
     $ module load PrgEnv-gnu
@@ -43,7 +43,7 @@ To run the code user can use PBS interactive session to get access to a node fro
 ```bash
     $ qsub -I -q qnvidia -A OPEN-0-0
     $ module load cuda
-    $ ~/cuda-samples/1_Utilities/deviceQuery/deviceQuery
+    $ ~/cuda-samples/1_Utilities/deviceQuery/deviceQuery
 ```
 
 Expected output of the deviceQuery example executed on a node with Tesla K20m is
@@ -102,80 +102,80 @@ In this section we provide a basic CUDA based vector addition code example. You
 
     // GPU kernel function to add two vectors
     __global__ void add_gpu( int *a, int *b, int *c, int n){
-      int index = threadIdx.x + blockIdx.x * blockDim.x;
-      if (index < n)
-        c[index] = a[index] + b[index];
+      int index = threadIdx.x + blockIdx.x * blockDim.x;
+      if (index < n)
+        c[index] = a[index] + b[index];
     }
 
     // CPU function to add two vectors
     void add_cpu (int *a, int *b, int *c, int n) {
-      for (int i=0; i < n; i++)
+      for (int i=0; i < n; i++)
         c[i] = a[i] + b[i];
     }
 
     // CPU function to generate a vector of random integers
     void random_ints (int *a, int n) {
-      for (int i = 0; i < n; i++)
-      a[i] = rand() % 10000; // random number between 0 and 9999
+      for (int i = 0; i < n; i++)
+      a[i] = rand() % 10000; // random number between 0 and 9999
     }
 
     // CPU function to compare two vectors
     int compare_ints( int *a, int *b, int n ){
-      int pass = 0;
-      for (int i = 0; i < N; i++){
-        if (a[i] != b[i]) {
-          printf("Value mismatch at location %d, values %d and %dn",i, a[i], b[i]);
-          pass = 1;
-        }
-      }
-      if (pass == 0) printf ("Test passedn"); else printf ("Test Failedn");
-      return pass;
+      int pass = 0;
+      for (int i = 0; i < N; i++){
+        if (a[i] != b[i]) {
+          printf("Value mismatch at location %d, values %d and %dn",i, a[i], b[i]);
+          pass = 1;
+        }
+      }
+      if (pass == 0) printf ("Test passed\n"); else printf ("Test Failed\n");
+      return pass;
     }
 
     int main( void ) {
 
-      int *a, *b, *c; // host copies of a, b, c
-      int *dev_a, *dev_b, *dev_c; // device copies of a, b, c
-      int size = N * sizeof( int ); // we need space for N integers
+      int *a, *b, *c; // host copies of a, b, c
+      int *dev_a, *dev_b, *dev_c; // device copies of a, b, c
+      int size = N * sizeof( int ); // we need space for N integers
 
-      // Allocate GPU/device copies of dev_a, dev_b, dev_c
-      cudaMalloc( (void**)&dev_a, size );
-      cudaMalloc( (void**)&dev_b, size );
-      cudaMalloc( (void**)&dev_c, size );
+      // Allocate GPU/device copies of dev_a, dev_b, dev_c
+      cudaMalloc( (void**)&dev_a, size );
+      cudaMalloc( (void**)&dev_b, size );
+      cudaMalloc( (void**)&dev_c, size );
 
-      // Allocate CPU/host copies of a, b, c
-      a = (int*)malloc( size );
-      b = (int*)malloc( size );
-      c = (int*)malloc( size );
+      // Allocate CPU/host copies of a, b, c
+      a = (int*)malloc( size );
+      b = (int*)malloc( size );
+      c = (int*)malloc( size );
 
-      // Fill input vectors with random integer numbers
-      random_ints( a, N );
-      random_ints( b, N );
+      // Fill input vectors with random integer numbers
+      random_ints( a, N );
+      random_ints( b, N );
 
-      // copy inputs to device
-      cudaMemcpy( dev_a, a, size, cudaMemcpyHostToDevice );
-      cudaMemcpy( dev_b, b, size, cudaMemcpyHostToDevice );
+      // copy inputs to device
+      cudaMemcpy( dev_a, a, size, cudaMemcpyHostToDevice );
+      cudaMemcpy( dev_b, b, size, cudaMemcpyHostToDevice );
 
-      // launch add_gpu() kernel with blocks and threads
-      add_gpu<<< N/THREADS_PER_BLOCK, THREADS_PER_BLOCK >>( dev_a, dev_b, dev_c, N );
+      // launch add_gpu() kernel with blocks and threads
+      add_gpu<<< N/THREADS_PER_BLOCK, THREADS_PER_BLOCK >>>( dev_a, dev_b, dev_c, N );
 
-      // copy device result back to host copy of c
-      cudaMemcpy( c, dev_c, size, cudaMemcpyDeviceToHost );
+      // copy device result back to host copy of c
+      cudaMemcpy( c, dev_c, size, cudaMemcpyDeviceToHost );
 
-      //Check the results with CPU implementation
-      int *c_h; c_h = (int*)malloc( size );
-      add_cpu (a, b, c_h, N);
-      compare_ints(c, c_h, N);
+      //Check the results with CPU implementation
+      int *c_h; c_h = (int*)malloc( size );
+      add_cpu (a, b, c_h, N);
+      compare_ints(c, c_h, N);
 
-      // Clean CPU memory allocations
-      free( a ); free( b ); free( c ); free (c_h);
+      // Clean CPU memory allocations
+      free( a ); free( b ); free( c ); free (c_h);
 
-      // Clean GPU memory allocations
-      cudaFree( dev_a );
-      cudaFree( dev_b );
-      cudaFree( dev_c );
+      // Clean GPU memory allocations
+      cudaFree( dev_a );
+      cudaFree( dev_b );
+      cudaFree( dev_c );
 
-      return 0;
+      return 0;
     }
 ```
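+
+For illustration, the example can be compiled with nvcc, e.g. (the source file name is arbitrary; the output name matches the test.cuda binary used below):
+
+```bash
+    $ module load cuda
+    $ nvcc test.cu -o test.cuda
+```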
 
@@ -190,7 +190,7 @@ To run the code use interactive PBS session to get access to one of the GPU acce
 ```bash
     $ qsub -I -q qnvidia -A OPEN-0-0
     $ module load cuda
-    $ ./test.cuda
+    $ ./test.cuda
 ```
 
 CUDA Libraries
@@ -214,71 +214,71 @@ SAXPY function multiplies the vector x by the scalar alpha and adds it to the ve
     #include <cublas_v2.h>
 
     /* Vector size */
-    #define N  (32)
+    #define N  (32)
 
     /* Host implementation of a simple version of saxpi */
     void saxpy(int n, float alpha, const float *x, float *y)
     {
-        for (int i = 0; i < n; ++i)
-        y[i] = alpha*x[i] + y[i];
+        for (int i = 0; i < n; ++i)
+        y[i] = alpha*x[i] + y[i];
     }
 
     /* Main */
     int main(int argc, char **argv)
     {
-        float *h_X, *h_Y, *h_Y_ref;
-        float *d_X = 0;
-        float *d_Y = 0;
+        float *h_X, *h_Y, *h_Y_ref;
+        float *d_X = 0;
+        float *d_Y = 0;
 
-        const float alpha = 1.0f;
-        int i;
+        const float alpha = 1.0f;
+        int i;
 
-        cublasHandle_t handle;
+        cublasHandle_t handle;
 
-        /* Initialize CUBLAS */
-        printf("simpleCUBLAS test running..n");
-        cublasCreate(&handle);
+        /* Initialize CUBLAS */
+        printf("simpleCUBLAS test running..n");
+        cublasCreate(&handle);
 
-        /* Allocate host memory for the matrices */
-        h_X = (float *)malloc(N * sizeof(h_X[0]));
-        h_Y = (float *)malloc(N * sizeof(h_Y[0]));
-        h_Y_ref = (float *)malloc(N * sizeof(h_Y_ref[0]));
+        /* Allocate host memory for the matrices */
+        h_X = (float *)malloc(N * sizeof(h_X[0]));
+        h_Y = (float *)malloc(N * sizeof(h_Y[0]));
+        h_Y_ref = (float *)malloc(N * sizeof(h_Y_ref[0]));
 
-        /* Fill the matrices with test data */
-        for (i = 0; i < N; i++)
-        {
-            h_X[i] = rand() / (float)RAND_MAX;
-            h_Y[i] = rand() / (float)RAND_MAX;
-            h_Y_ref[i] = h_Y[i];
-        }
+        /* Fill the matrices with test data */
+        for (i = 0; i < N; i++)
+        {
+            h_X[i] = rand() / (float)RAND_MAX;
+            h_Y[i] = rand() / (float)RAND_MAX;
+            h_Y_ref[i] = h_Y[i];
+        }
 
-        /* Allocate device memory for the matrices */
-        cudaMalloc((void **)&d_X, N * sizeof(d_X[0]));
-        cudaMalloc((void **)&d_Y, N * sizeof(d_Y[0]));
+        /* Allocate device memory for the matrices */
+        cudaMalloc((void **)&d_X, N * sizeof(d_X[0]));
+        cudaMalloc((void **)&d_Y, N * sizeof(d_Y[0]));
 
-        /* Initialize the device matrices with the host matrices */
-        cublasSetVector(N, sizeof(h_X[0]), h_X, 1, d_X, 1);
-        cublasSetVector(N, sizeof(h_Y[0]), h_Y, 1, d_Y, 1);
+        /* Initialize the device matrices with the host matrices */
+        cublasSetVector(N, sizeof(h_X[0]), h_X, 1, d_X, 1);
+        cublasSetVector(N, sizeof(h_Y[0]), h_Y, 1, d_Y, 1);
 
-        /* Performs operation using plain C code */
-        saxpy(N, alpha, h_X, h_Y_ref);
+        /* Performs operation using plain C code */
+        saxpy(N, alpha, h_X, h_Y_ref);
 
-        /* Performs operation using cublas */
-        cublasSaxpy(handle, N, &alpha, d_X, 1, d_Y, 1);
+        /* Performs operation using cublas */
+        cublasSaxpy(handle, N, &alpha, d_X, 1, d_Y, 1);
 
-        /* Read the result back */
-        cublasGetVector(N, sizeof(h_Y[0]), d_Y, 1, h_Y, 1);
+        /* Read the result back */
+        cublasGetVector(N, sizeof(h_Y[0]), d_Y, 1, h_Y, 1);
 
-        /* Check result against reference */
-        for (i = 0; i < N; ++i)
-            printf("CPU res = %f t GPU res = %f t diff = %f n", h_Y_ref[i], h_Y[i], h_Y_ref[i] - h_Y[i]);
+        /* Check result against reference */
+        for (i = 0; i < N; ++i)
+            printf("CPU res = %f t GPU res = %f t diff = %f n", h_Y_ref[i], h_Y[i], h_Y_ref[i] - h_Y[i]);
 
-        /* Memory clean up */
-        free(h_X); free(h_Y); free(h_Y_ref);
-        cudaFree(d_X); cudaFree(d_Y);
+        /* Memory clean up */
+        free(h_X); free(h_Y); free(h_Y_ref);
+        cudaFree(d_X); cudaFree(d_Y);
 
-        /* Shutdown */
-        cublasDestroy(handle);
+        /* Shutdown */
+        cublasDestroy(handle);
     }
 ```
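+
+A possible way to build the cuBLAS example (the source file name is illustrative; -lcublas links the cuBLAS library shipped with the cuda module):
+
+```bash
+    $ module load cuda
+    $ nvcc -lcublas test_cublas.cu -o test_cublas
+```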
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/omics-master/diagnostic-component-team.md b/docs.it4i/anselm-cluster-documentation/software/omics-master/diagnostic-component-team.md
index 1d323a7b8..0fbdc214f 100644
--- a/docs.it4i/anselm-cluster-documentation/software/omics-master/diagnostic-component-team.md
+++ b/docs.it4i/anselm-cluster-documentation/software/omics-master/diagnostic-component-team.md
@@ -3,7 +3,7 @@ Diagnostic component (TEAM)
 
 ### Access
 
-TEAM is available at the following address: <http://omics.it4i.cz/team/>
+TEAM is available at the following address: <http://omics.it4i.cz/team/>
 
 !!! Note "Note"
 	The address is accessible only via VPN.
@@ -12,7 +12,7 @@ TEAM is available at the following address: <http://omics.it4i.cz/team/>
 
 VCF files are scanned by this diagnostic tool for known diagnostic disease-associated variants. When no diagnostic mutation is found, the file can be sent to the disease-causing gene discovery tool to see whether new disease associated variants can be found.
 
-TEAM (27) is an intuitive and easy-to-use web tool that fills the gap between the predicted mutations and the final diagnostic in targeted enrichment sequencing analysis. The tool searches for known diagnostic mutations, corresponding to a disease panel, among the predicted patient’s variants. Diagnostic variants for the disease are taken from four databases of disease-related variants (HGMD-public, HUMSAVAR , ClinVar and COSMIC) If no primary diagnostic variant is found, then a list of secondary findings that can help to establish a diagnostic is produced. TEAM also provides with an interface for the definition of and customization of panels, by means of which, genes and mutations can be added or discarded to adjust panel definitions.
+TEAM (27) is an intuitive and easy-to-use web tool that fills the gap between the predicted mutations and the final diagnostic in targeted enrichment sequencing analysis. The tool searches for known diagnostic mutations, corresponding to a disease panel, among the predicted patient’s variants. Diagnostic variants for the disease are taken from four databases of disease-related variants (HGMD-public, HUMSAVAR, ClinVar and COSMIC). If no primary diagnostic variant is found, then a list of secondary findings that can help to establish a diagnosis is produced. TEAM also provides an interface for the definition and customization of panels, by means of which genes and mutations can be added or discarded to adjust panel definitions.
 
 ![Interface of the application. Panels for defining targeted regions of interest can be set up by simply dragging and dropping known disease genes or disease definitions from the lists. Thus, virtual panels can be interactively improved as the knowledge of the disease increases.](../../../img/fig5.png)
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md b/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md
index 83179e3c4..f84348160 100644
--- a/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md
+++ b/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md
@@ -41,7 +41,7 @@ Input: **FASTQ file.**
 
 Output: **FASTQ file plus an HTML file containing statistics on the data.**
 
-FASTQ format It represents the nucleotide sequence and its corresponding quality scores.
+The FASTQ format represents the nucleotide sequence and its corresponding quality scores.
 
 ![FASTQ file.](../../../img/fig2.png "fig2.png")
 **Figure 2.** FASTQ file.
@@ -70,13 +70,13 @@ corresponding information is unavailable.
  |1 |QNAME |Query NAME of the read or the read pair |
  |2 |FLAG |Bitwise FLAG (pairing,strand,mate strand,etc.) |
  |3 |RNAME |Reference sequence NAME |
- |4 |POS  |<p>1-Based  leftmost POSition of clipped alignment |
- |5 |MAPQ  |<p>MAPping Quality (Phred-scaled) |
+ |4 |POS  |1-Based leftmost POSition of clipped alignment |
+ |5 |MAPQ  |MAPping Quality (Phred-scaled) |
  |6 |CIGAR |Extended CIGAR string (operations:MIDNSHP) |
- |7 |MRNM  |<p>Mate REference NaMe ('=' if same RNAME) |
- |8 |MPOS  |<p>1-Based leftmost Mate POSition |
+ |7 |MRNM  |Mate REference NaMe ('=' if same RNAME) |
+ |8 |MPOS  |1-Based leftmost Mate POSition |
  |9 |ISIZE |Inferred Insert SIZE |
- |10 |SEQ  |<p>Query SEQuence on the same strand as the reference |
+ |10 |SEQ  |Query SEQuence on the same strand as the reference |
  |11 |QUAL |Query QUALity (ASCII-33=Phred base quality) |
 
 **Table 1.** Mandatory fields in the SAM format.
@@ -115,9 +115,9 @@ Identification of single nucleotide variants and indels on the alignments is per
 
 **Variant Call Format (VCF)**
 
-VCF (3) is a standardized format for storing the most prevalent types of sequence variation, including SNPs, indels and larger structural variants, together with rich annotations. The format was developed with the primary intention to represent human genetic variation, but its use is not restricted >to diploid genomes and can be used in different contexts as well. Its flexibility and user extensibility allows representation of a wide variety of genomic variation with respect to a single reference sequence.
+VCF (3) is a standardized format for storing the most prevalent types of sequence variation, including SNPs, indels and larger structural variants, together with rich annotations. The format was developed with the primary intention to represent human genetic variation, but its use is not restricted to diploid genomes and can be used in different contexts as well. Its flexibility and user extensibility allow representation of a wide variety of genomic variation with respect to a single reference sequence.
 
-A VCF file consists of a header section and a data section. The header contains an arbitrary number of metainformation lines, each starting with characters ‘##’, and a TAB delimited field definition line, starting with a single ‘#’ character. The meta-information header lines provide a standardized description of tags and annotations used in the data section. The use of meta-information allows the information stored within a VCF file to be tailored to the dataset in question. It can be also used to provide information about the means of file creation, date of creation, version of the reference sequence, software used and any other information relevant to the history of the file. The field definition line names eight mandatory columns, corresponding to data columns representing the chromosome (CHROM), a 1-based position of the start of the variant (POS), unique identifiers of the variant (ID), the reference allele (REF), a comma separated list of  alternate non-reference alleles (ALT), a phred-scaled quality score (QUAL), site filtering information (FILTER) and a semicolon separated list of additional, user extensible annotation (INFO). In addition, if samples are present in the file, the mandatory header columns are followed by a FORMAT column and an arbitrary number of sample IDs that define the samples included in the VCF file. The FORMAT column is used to define  the information contained within each subsequent genotype column, which consists of a colon separated list of fields. For example, the FORMAT field GT:GQ:DP in the fourth data entry of Figure 1a indicates that the subsequent entries contain information regarding the genotype, genotype quality and  read depth for each sample. All data lines are TAB delimited and the number of fields in each data line must match the number of fields in the header line. It is strongly recommended that all annotation tags used are declared in the VCF header section.
+A VCF file consists of a header section and a data section. The header contains an arbitrary number of meta-information lines, each starting with the characters ‘##’, and a TAB-delimited field definition line, starting with a single ‘#’ character. The meta-information header lines provide a standardized description of tags and annotations used in the data section. The use of meta-information allows the information stored within a VCF file to be tailored to the dataset in question. It can also be used to provide information about the means of file creation, date of creation, version of the reference sequence, software used and any other information relevant to the history of the file. The field definition line names eight mandatory columns, corresponding to data columns representing the chromosome (CHROM), a 1-based position of the start of the variant (POS), unique identifiers of the variant (ID), the reference allele (REF), a comma-separated list of alternate non-reference alleles (ALT), a Phred-scaled quality score (QUAL), site filtering information (FILTER) and a semicolon-separated list of additional, user-extensible annotation (INFO). In addition, if samples are present in the file, the mandatory header columns are followed by a FORMAT column and an arbitrary number of sample IDs that define the samples included in the VCF file. The FORMAT column is used to define the information contained within each subsequent genotype column, which consists of a colon-separated list of fields. For example, the FORMAT field GT:GQ:DP in the fourth data entry of Figure 1a indicates that the subsequent entries contain information regarding the genotype, genotype quality and read depth for each sample. All data lines are TAB-delimited and the number of fields in each data line must match the number of fields in the header line. It is strongly recommended that all annotation tags used are declared in the VCF header section.
 
 ![a) Example of valid VCF. The header lines ##fileformat and #CHROM are mandatory, the rest is optional but strongly recommended. Each line of the body describes variants present in the sampled population at one genomic position or region. All alternate alleles are listed in the ALT column and referenced from the genotype fields as 1-based indexes to
 this list; the reference haplotype is designated as 0. For multiploid data, the separator indicates whether the data are phased (|) or unphased (/). Thus, the two alleles C and G at the positions 2 and 5 in this figure occur on the same chromosome in SAMPLE1. The first data line shows an example of a deletion (present in SAMPLE1) and a replacement of
@@ -139,15 +139,15 @@ VARIANT (VARIant Analysis Tool) (4) reports information on the variants found th
 
 #### CellBase
 
-CellBase(5) is a relational database integrates biological information from different sources and includes:
+CellBase (5) is a relational database that integrates biological information from different sources and includes:
 
 **Core features:**
 
-We took genome sequences, genes, transcripts, exons, cytobands or cross references (xrefs) identifiers (IDs) from Ensembl (6). Protein information including sequences, xrefs or protein features (natural variants, mutagenesis sites, post-translational modifications, etc.) were imported from UniProt (7).
+We took genome sequences, genes, transcripts, exons, cytobands or cross-reference (xref) identifiers (IDs) from Ensembl (6). Protein information, including sequences, xrefs or protein features (natural variants, mutagenesis sites, post-translational modifications, etc.), was imported from UniProt (7).
 
 **Regulatory:**
 
-CellBase imports miRNA from miRBase (8); curated and non-curated miRNA targets from miRecords (9), miRTarBase (10),
+CellBase imports miRNA from miRBase (8); curated and non-curated miRNA targets from miRecords (9), miRTarBase (10),
 TargetScan (11) and microRNA.org (12); and CpG islands and conserved regions from the UCSC database (13).
 
 **Functional annotation**
@@ -156,11 +156,11 @@ OBO Foundry (14) develops many biomedical ontologies that are implemented in OBO
 
 **Variation**
 
-CellBase includes SNPs from dbSNP (16)^; SNP population frequencies from HapMap (17), 1000 genomes project (18) and Ensembl (6); phenotypically annotated SNPs were imported from NHRI GWAS Catalog (19),HGMD (20), Open Access GWAS Database (21), UniProt (7) and OMIM (22); mutations from COSMIC (23) and structural variations from Ensembl (6).
+CellBase includes SNPs from dbSNP (16); SNP population frequencies from HapMap (17), 1000 genomes project (18) and Ensembl (6); phenotypically annotated SNPs were imported from NHGRI GWAS Catalog (19), HGMD (20), Open Access GWAS Database (21), UniProt (7) and OMIM (22); mutations from COSMIC (23) and structural variations from Ensembl (6).
 
 **Systems biology**
 
-We also import systems biology information like interactome information from IntAct (24). Reactome (25) stores pathway and interaction information in BioPAX (26) format. BioPAX data exchange format enables the integration of diverse pathway
+We also import systems biology information such as interactome data from IntAct (24). Reactome (25) stores pathway and interaction information in BioPAX (26) format. The BioPAX data exchange format enables the integration of diverse pathway
 resources. We successfully solved the problem of storing data released in BioPAX format in a SQL relational schema, which allowed us to import Reactome into CellBase.
 
 ### [Diagnostic component (TEAM)](diagnostic-component-team/)
@@ -182,53 +182,53 @@ If we launch ngsPipeline with ‘-h’, we will get the usage help:
 ```bash
     $ ngsPipeline -h
     Usage: ngsPipeline.py [-h] -i INPUT -o OUTPUT -p PED --project PROJECT --queue
-                          QUEUE [--stages-path STAGES_PATH] [--email EMAIL]
+                          QUEUE [--stages-path STAGES_PATH] [--email EMAIL]
      [--prefix PREFIX] [-s START] [-e END] --log
 
     Python pipeline
 
     optional arguments:
-      -h, --help            show this help message and exit
-      -i INPUT, --input INPUT
-      -o OUTPUT, --output OUTPUT
-                            Output Data directory
-      -p PED, --ped PED     Ped file with all individuals
-      --project PROJECT     Project Id
-      --queue QUEUE         Queue Id
-      --stages-path STAGES_PATH
-                            Custom Stages path
-      --email EMAIL         Email
-      --prefix PREFIX       Prefix name for Queue Jobs name
-      -s START, --start START
-                            Initial stage
-      -e END, --end END     Final stage
-      --log                 Log to file
+      -h, --help            show this help message and exit
+      -i INPUT, --input INPUT
+      -o OUTPUT, --output OUTPUT
+                            Output Data directory
+      -p PED, --ped PED     Ped file with all individuals
+      --project PROJECT     Project Id
+      --queue QUEUE         Queue Id
+      --stages-path STAGES_PATH
+                            Custom Stages path
+      --email EMAIL         Email
+      --prefix PREFIX       Prefix name for Queue Jobs name
+      -s START, --start START
+                            Initial stage
+      -e END, --end END     Final stage
+      --log                 Log to file
 
 ```
 
 Let us see a brief description of the arguments:
 
 ```bash
-      *-h --help*. Show the help.
+      *-h --help*. Show the help.
 
-      *-i, --input.* The input data directory. This directory must to have a special structure. We have to create one folder per sample (with the same name). These folders will host the fastq files. These fastq files must have the following pattern “sampleName” + “_” + “1 or 2” + “.fq”. 1 for the first pair (in paired-end sequences), and 2 for the
+      *-i, --input.* The input data directory. This directory must have a special structure. We have to create one folder per sample (with the same name). These folders will host the fastq files. These fastq files must have the following pattern “sampleName” + “_” + “1 or 2” + “.fq”. 1 for the first pair (in paired-end sequences), and 2 for the
 second one.
 
-      *-o , --output.* The output folder. This folder will contain all the intermediate and final folders. When the pipeline will be executed completely, we could remove the intermediate folders and keep only the final one (with the VCF file containing all the variants)
+      *-o, --output.* The output folder. This folder will contain all the intermediate and final folders. Once the pipeline has finished completely, we can remove the intermediate folders and keep only the final one (with the VCF file containing all the variants).
 
-      *-p , --ped*. The ped file with the pedigree. This file contains all the sample names. These names must coincide with the names of the input folders. If our input folder contains more samples than the .ped file, the pipeline will use only the samples from the .ped file.
+      *-p, --ped*. The ped file with the pedigree. This file contains all the sample names. These names must coincide with the names of the input folders. If our input folder contains more samples than the .ped file, the pipeline will use only the samples from the .ped file.
 
-      *--email.* Email for PBS notifications.
+      *--email.* Email for PBS notifications.
 
-      *--prefix.* Prefix for PBS Job names.
+      *--prefix.* Prefix for PBS Job names.
 
-      *-s, --start & -e, --end.*  Initial and final stage. If we want to launch the pipeline in a specific stage we must use -s. If we want to end the pipeline in a specific stage we must use -e.
+      *-s, --start & -e, --end.* Initial and final stage. If we want to launch the pipeline in a specific stage we must use -s. If we want to end the pipeline in a specific stage we must use -e.
 
-      *--log*. Using log argument NGSpipeline will prompt all the logs to this file.
+      *--log*. Using the --log argument, NGSpipeline will write all the logs to this file.
 
-      *--project*>. Project ID of your supercomputer allocation.
+      *--project*. Project ID of your supercomputer allocation.
 
-      *--queue*. [Queue](../../resource-allocation-and-job-execution/introduction.html) to run the jobs in.
+      *--queue*. [Queue](../../resource-allocation-and-job-execution/introduction.html) to run the jobs in.
 ```
 
 Input, output and ped arguments are mandatory. If the output folder does not exist, the pipeline will create it.
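+
+For instance, a hypothetical invocation using the sample data described in the next section (the output path, project ID and queue below are placeholders; substitute your own allocation) might look like:
+
+```bash
+    $ ngsPipeline -i /apps/bio/omics/1.0/sample_data/data -o $HOME/omics_results -p /apps/bio/omics/1.0/sample_data/data/file.ped --project OPEN-0-0 --queue qprod
+```
+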
@@ -240,14 +240,14 @@ This is an example usage of NGSpipeline:
 We have a folder with the following structure in
 
 ```bash
-/apps/bio/omics/1.0/sample_data/ >:
+/apps/bio/omics/1.0/sample_data/:
 
     /apps/bio/omics/1.0/sample_data
     └── data
         ├── file.ped
         ├── sample1
-        │   ├── sample1_1.fq
-        │   └── sample1_2.fq
+        │   ├── sample1_1.fq
+        │   └── sample1_2.fq
         └── sample2
             ├── sample2_1.fq
             └── sample2_2.fq
@@ -289,13 +289,13 @@ Details on the pipeline
 The pipeline calls the following tools:
 -   [fastqc](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/), quality control tool for high throughput
     sequence data.
--   [gatk](https://www.broadinstitute.org/gatk/), The Genome Analysis Toolkit or GATK is a software package developed at
+-   [gatk](https://www.broadinstitute.org/gatk/), The Genome Analysis Toolkit or GATK is a software package developed at
     the Broad Institute to analyze high-throughput sequencing data. The toolkit offers a wide variety of tools, with a primary focus on variant discovery and genotyping as well as strong emphasis on data quality assurance. Its robust architecture, powerful processing engine and high-performance computing features make it capable of taking on projects of any size.
--   [hpg-aligner](http://wiki.opencb.org/projects/hpg/doku.php?id=aligner:downloads), HPG Aligner has been designed to align short and long reads with high sensitivity, therefore any number of mismatches or indels are allowed. HPG Aligner implements and combines two well known algorithms: *Burrows-Wheeler Transform* (BWT) to speed-up mapping high-quality reads, and *Smith-Waterman*> (SW) to increase sensitivity when reads cannot be mapped using BWT.
--   [hpg-fastq](http://docs.bioinfo.cipf.es/projects/fastqhpc/wiki), a quality control tool for high throughput sequence data.
--   [hpg-variant](http://wiki.opencb.org/projects/hpg/doku.php?id=variant:downloads), The HPG Variant suite is an ambitious project aimed to provide a complete suite of tools to work with genomic variation data, from VCF tools to variant profiling or genomic statistics. It is being implemented using High Performance Computing technologies to provide the best performance possible.
+-   [hpg-aligner](http://wiki.opencb.org/projects/hpg/doku.php?id=aligner:downloads), HPG Aligner has been designed to align short and long reads with high sensitivity, therefore any number of mismatches or indels are allowed. HPG Aligner implements and combines two well known algorithms: *Burrows-Wheeler Transform* (BWT) to speed up mapping high-quality reads, and *Smith-Waterman* (SW) to increase sensitivity when reads cannot be mapped using BWT.
+-   [hpg-fastq](http://docs.bioinfo.cipf.es/projects/fastqhpc/wiki), a quality control tool for high throughput sequence data.
+-   [hpg-variant](http://wiki.opencb.org/projects/hpg/doku.php?id=variant:downloads), The HPG Variant suite is an ambitious project aimed to provide a complete suite of tools to work with genomic variation data, from VCF tools to variant profiling or genomic statistics. It is being implemented using High Performance Computing technologies to provide the best performance possible.
 -   [picard](http://picard.sourceforge.net/), Picard comprises Java-based command-line utilities that manipulate SAM files, and a Java API (HTSJDK) for creating new programs that read and write SAM files. Both SAM text format and SAM binary (BAM) format are supported.
--   [samtools](http://samtools.sourceforge.net/samtools-c.shtml), SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.
+-   [samtools](http://samtools.sourceforge.net/samtools-c.shtml), SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.
 -   [snpEff](http://snpeff.sourceforge.net/), Genetic variant annotation and effect prediction toolbox.
 
 This listing shows which tools are used in each step of the pipeline:
@@ -343,7 +343,7 @@ Once the file has been uploaded, a panel must be chosen from the Panel list. The
 
 ![The panel manager. The elements used to define a panel are (A) disease terms, (B) diagnostic mutations and (C) genes. Arrows represent actions that can be taken in the panel manager. Panels can be defined by using the known mutations and genes of a particular disease. This can be done by dragging them to the Primary Diagnostic box (action D). This action, in addition to defining the diseases in the Primary Diagnostic box, automatically adds the corresponding genes to the Genes box. The panels can be customized by adding new genes (action F) or removing undesired genes (action G). New disease mutations can be added independently or associated to an already existing disease term (action E). Disease terms can be removed by simply dragging them back (action H).](../../../img/fig7x.png)
 
-**Figure 7.** *The panel manager. The elements used to define a panel are (**A**) disease terms, (**B**) diagnostic mutations and (**C**) genes. Arrows represent actions that can be taken in the panel manager. Panels can be defined by using the known mutations and genes of a particular disease. This can be done by dragging them to the **Primary Diagnostic** box (action **D**). This action, in addition to defining the diseases in the **Primary Diagnostic** box, automatically adds the corresponding genes to the **Genes** box. The panels can be customized by adding new genes (action **F**) or removing undesired genes (action **G**). New disease mutations can be added independently or associated to an already existing disease term (action **E**). Disease terms can be removed by simply dragging them back (action **H**).*
+**Figure 7.** *The panel manager. The elements used to define a panel are (**A**) disease terms, (**B**) diagnostic mutations and (**C**) genes. Arrows represent actions that can be taken in the panel manager. Panels can be defined by using the known mutations and genes of a particular disease. This can be done by dragging them to the **Primary Diagnostic** box (action **D**). This action, in addition to defining the diseases in the **Primary Diagnostic** box, automatically adds the corresponding genes to the **Genes** box. The panels can be customized by adding new genes (action **F**) or removing undesired genes (action **G**). New disease mutations can be added independently or associated to an already existing disease term (action **E**). Disease terms can be removed by simply dragging them back (action **H**).*
 
 For variant discovery/filtering we should upload the VCF file into BierApp by using the following form:
 
@@ -351,7 +351,7 @@ For variant discovering/filtering we should upload the VCF file into BierApp by
 
 **Figure 8.** *BierApp VCF upload panel. It is recommended to choose a name for the job as well as a description.*
 
-Each prioritization (‘job’) has three associated screens that facilitate the filtering steps. The first one, the ‘Summary’ tab, displays a statistic of the data set analyzed, containing the samples analyzed, the number and types of variants found and its distribution according to consequence types. The second screen, in the ‘Variants and effect’ tab, is the actual filtering tool, and the third one, the ‘Genome view’ tab, offers a representation of the selected variants within the genomic context provided by an embedded version of the Genome Maps Tool (30).
+Each prioritization (‘job’) has three associated screens that facilitate the filtering steps. The first one, the ‘Summary’ tab, displays statistics of the data set analyzed, including the samples analyzed, the number and types of variants found and their distribution according to consequence types. The second screen, in the ‘Variants and effect’ tab, is the actual filtering tool, and the third one, the ‘Genome view’ tab, offers a representation of the selected variants within the genomic context provided by an embedded version of the Genome Maps Tool (30).
 
 ![This picture shows all the information associated to the variants. If a variant has an associated phenotype we could see it in the last column. In this case, the variant 7:132481242 C&gt;T is associated to the phenotype: large intestine tumor.](../../../img/fig9.png)
 
@@ -380,12 +380,12 @@ References
 19.  Hindorff,L.A., Sethupathy,P., Junkins,H.A., Ramos,E.M., Mehta,J.P., Collins,F.S. and Manolio,T.A. (2009)   Potential etiologic and functional implications of genome-wide association loci for human diseases and traits. Proc. Natl Acad.    Sci. USA, 106, 9362–9367.
 20.  Stenson,P.D., Ball,E.V., Mort,M., Phillips,A.D., Shiel,J.A., Thomas,N.S., Abeysinghe,S., Krawczak,M.   and Cooper,D.N. (2003) Human gene mutation database (HGMD):    2003 update. Hum. Mutat., 21, 577–581.
 21.  Johnson,A.D. and O’Donnell,C.J. (2009) An open access database of genome-wide association results. BMC Med.    Genet, 10, 6.
-22.  McKusick,V. (1998) A Catalog of Human Genes  and Genetic  Disorders, 12th edn. John Hopkins University    Press,Baltimore, MD.
+22.  McKusick,V. (1998) A Catalog of Human Genes and Genetic Disorders, 12th edn. Johns Hopkins University Press, Baltimore, MD.
 23.  Forbes,S.A., Bindal,N., Bamford,S., Cole,C.,    Kok,C.Y., Beare,D., Jia,M., Shepherd,R., Leung,K., Menzies,A. et al.    (2011) COSMIC: mining complete cancer genomes in the catalogue of    somatic mutations in cancer. Nucleic Acids Res.,    39, D945–D950.
 24.  Kerrien,S., Aranda,B., Breuza,L., Bridge,A.,    Broackes-Carter,F., Chen,C., Duesbury,M., Dumousseau,M.,    Feuermann,M., Hinz,U. et al. (2012) The Intact molecular interaction    database in 2012. Nucleic Acids Res., 40, D841–D846.
 25.  Croft,D., O’Kelly,G., Wu,G., Haw,R.,    Gillespie,M., Matthews,L., Caudy,M., Garapati,P.,    Gopinath,G., Jassal,B. et al. (2011) Reactome: a database of    reactions, pathways and biological processes. Nucleic Acids Res.,    39, D691–D697.
 26.  Demir,E., Cary,M.P., Paley,S., Fukuda,K.,    Lemer,C., Vastrik,I.,Wu,G., D’Eustachio,P., Schaefer,C., Luciano,J.    et al. (2010) The BioPAX community standard for pathway    data sharing. Nature Biotechnol., 28, 935–942.
 27.  Alemán Z, García-García F, Medina I, Dopazo J    (2014): A web tool for the design and management of panels of genes    for targeted enrichment and massive sequencing for    clinical applications. Nucleic Acids Res 42: W83-7.
-28.  [Alemán    A](http://www.ncbi.nlm.nih.gov/pubmed?term=Alem%C3%A1n%20A%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Garcia-Garcia    F](http://www.ncbi.nlm.nih.gov/pubmed?term=Garcia-Garcia%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Salavert    F](http://www.ncbi.nlm.nih.gov/pubmed?term=Salavert%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Medina    I](http://www.ncbi.nlm.nih.gov/pubmed?term=Medina%20I%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)>, [Dopazo    J](http://www.ncbi.nlm.nih.gov/pubmed?term=Dopazo%20J%5BAuthor%5D&cauthor=true&cauthor_uid=24803668)> (2014).    A web-based interactive framework to assist in the prioritization of    disease candidate genes in whole-exome sequencing studies.    [Nucleic    Acids Res.](http://www.ncbi.nlm.nih.gov/pubmed/?term=BiERapp "Nucleic acids research.")>42 :W88-93.
+28.  [Alemán A](http://www.ncbi.nlm.nih.gov/pubmed?term=Alem%C3%A1n%20A%5BAuthor%5D&cauthor=true&cauthor_uid=24803668), [Garcia-Garcia F](http://www.ncbi.nlm.nih.gov/pubmed?term=Garcia-Garcia%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668), [Salavert F](http://www.ncbi.nlm.nih.gov/pubmed?term=Salavert%20F%5BAuthor%5D&cauthor=true&cauthor_uid=24803668), [Medina I](http://www.ncbi.nlm.nih.gov/pubmed?term=Medina%20I%5BAuthor%5D&cauthor=true&cauthor_uid=24803668), [Dopazo J](http://www.ncbi.nlm.nih.gov/pubmed?term=Dopazo%20J%5BAuthor%5D&cauthor=true&cauthor_uid=24803668) (2014). A web-based interactive framework to assist in the prioritization of disease candidate genes in whole-exome sequencing studies. [Nucleic Acids Res.](http://www.ncbi.nlm.nih.gov/pubmed/?term=BiERapp "Nucleic acids research.") 42:W88-93.
 29.  Landrum,M.J., Lee,J.M., Riley,G.R., Jang,W.,    Rubinstein,W.S., Church,D.M. and Maglott,D.R. (2014) ClinVar: public    archive of relationships among sequence variation and    human phenotype. Nucleic Acids Res., 42, D980–D985.
 30.  Medina I, Salavert F, Sanchez R, de Maria A,    Alonso R, Escobar P, Bleda M, Dopazo J: Genome Maps, a new    generation genome browser. Nucleic Acids Res 2013, 41:W41-46.
diff --git a/docs.it4i/anselm-cluster-documentation/software/omics-master/priorization-component-bierapp.md b/docs.it4i/anselm-cluster-documentation/software/omics-master/priorization-component-bierapp.md
index 439feb9fb..8b5cb8cf6 100644
--- a/docs.it4i/anselm-cluster-documentation/software/omics-master/priorization-component-bierapp.md
+++ b/docs.it4i/anselm-cluster-documentation/software/omics-master/priorization-component-bierapp.md
@@ -3,19 +3,19 @@ Prioritization component (BiERapp)
 
 ### Access
 
-BiERapp is available at the following address: <http://omics.it4i.cz/bierapp/>
+BiERapp is available at the following address: <http://omics.it4i.cz/bierapp/>
 
 !!! Note "Note"
-	The address is accessible onlyvia VPN.
+	The address is accessible only via VPN.
 
 ###BiERapp
 
-**This tool is aimed to discover new disease genes or variants by studying affected families or cases and controls. It carries out a filtering process to sequentially remove: (i) variants which are not no compatible with the disease because are not expected to have impact on the protein function; (ii) variants that exist at frequencies incompatible with the disease; (iii) variants that do not segregate with the disease. The result is a reduced set of disease gene candidates that should be further validated experimentally.**
+**This tool aims to discover new disease genes or variants by studying affected families or cases and controls. It carries out a filtering process to sequentially remove: (i) variants which are not compatible with the disease because they are not expected to have an impact on the protein function; (ii) variants that exist at frequencies incompatible with the disease; (iii) variants that do not segregate with the disease. The result is a reduced set of disease gene candidates that should be further validated experimentally.**
 
-BiERapp (28) efficiently helps in the identification of causative variants in family and sporadic genetic diseases. The program reads lists of predicted variants (nucleotide substitutions and indels) in affected individuals or tumor samples and controls. In family studies, different modes of inheritance can easily be defined to filter out variants that do not segregate with the disease along the family. Moreover, BiERapp integrates additional information such as allelic frequencies in the general population and the most popular damaging scores to further narrow down the number of putative variants in successive filtering steps. BiERapp provides an interactive and user-friendly interface that implements the filtering strategy used in the context of a large-scale genomic project carried out by the Spanish Network for Research, in Rare Diseases (CIBERER) and the Medical Genome Project. in which more than 800 exomes have been analyzed.
+BiERapp (28) efficiently helps in the identification of causative variants in family and sporadic genetic diseases. The program reads lists of predicted variants (nucleotide substitutions and indels) in affected individuals or tumor samples and controls. In family studies, different modes of inheritance can easily be defined to filter out variants that do not segregate with the disease along the family. Moreover, BiERapp integrates additional information such as allelic frequencies in the general population and the most popular damaging scores to further narrow down the number of putative variants in successive filtering steps. BiERapp provides an interactive and user-friendly interface that implements the filtering strategy used in the context of a large-scale genomic project carried out by the Spanish Network for Research in Rare Diseases (CIBERER) and the Medical Genome Project, in which more than 800 exomes have been analyzed.
 
 ![Web interface to the prioritization tool. This figure shows the interface of the web tool for candidate gene prioritization with the filters available. The tool includes a genomic viewer (Genome Maps 30) that enables the representation of the variants in the corresponding genomic coordinates.](../../../img/fig6.png)
 
-**Figure 6**. Web interface to the prioritization tool. This figure shows the interface of the web tool for candidate gene
+**Figure 6**. Web interface to the prioritization tool. This figure shows the interface of the web tool for candidate gene
 prioritization with the filters available. The tool includes a genomic viewer (Genome Maps 30) that enables the representation of the variants in the corresponding genomic coordinates.
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/openfoam.md b/docs.it4i/anselm-cluster-documentation/software/openfoam.md
index 437a36407..56f9d6985 100644
--- a/docs.it4i/anselm-cluster-documentation/software/openfoam.md
+++ b/docs.it4i/anselm-cluster-documentation/software/openfoam.md
@@ -5,7 +5,7 @@ OpenFOAM
 
 Introduction
 ----------------
-OpenFOAM is a free, open source CFD software package developed by [**OpenCFD Ltd**](http://www.openfoam.com/about) at [**ESI Group**](http://www.esi-group.com/) and distributed by the [**OpenFOAM Foundation **](http://www.openfoam.org/). It has a large user base across most areas of engineering and science, from both commercial and academic organisations.
+OpenFOAM is a free, open source CFD software package developed by [**OpenCFD Ltd**](http://www.openfoam.com/about) at [**ESI Group**](http://www.esi-group.com/) and distributed by the [**OpenFOAM Foundation**](http://www.openfoam.org/). It has a large user base across most areas of engineering and science, from both commercial and academic organisations.
 
 Homepage: <http://www.openfoam.com/>
 
@@ -40,10 +40,10 @@ In /opt/modules/modulefiles/engineering you can see installed engineering softwa
 
 ```bash
     ------------------------------------ /opt/modules/modulefiles/engineering -------------------------------------------------------------
-    ansys/14.5.x               matlab/R2013a-COM                                openfoam/2.2.1-icc-impi4.1.1.036-DP
-    comsol/43b-COM             matlab/R2013a-EDU                                openfoam/2.2.1-icc-openmpi1.6.5-DP
-    comsol/43b-EDU             openfoam/2.2.1-gcc481-openmpi1.6.5-DP            paraview/4.0.1-gcc481-bullxmpi1.2.4.1-osmesa10.0
-    lsdyna/7.x.x               openfoam/2.2.1-gcc481-openmpi1.6.5-SP
+    ansys/14.5.x               matlab/R2013a-COM                                openfoam/2.2.1-icc-impi4.1.1.036-DP
+    comsol/43b-COM             matlab/R2013a-EDU                                openfoam/2.2.1-icc-openmpi1.6.5-DP
+    comsol/43b-EDU             openfoam/2.2.1-gcc481-openmpi1.6.5-DP            paraview/4.0.1-gcc481-bullxmpi1.2.4.1-osmesa10.0
+    lsdyna/7.x.x               openfoam/2.2.1-gcc481-openmpi1.6.5-SP
 ```
 
 For information on how to use modules, please [look here](../environment-and-modules/ "Environment and Modules ").
@@ -62,10 +62,10 @@ To create OpenFOAM environment on ANSELM give the commands:
 !!! Note "Note"
 	Please load the correct module according to your requirements “compiler - GCC/ICC, precision - DP/SP”.
 
-Create a project directory within the $HOME/OpenFOAM directory named >&lt;USER&gt;-&lt;OFversion&gt; and create a directory named run within it, e.g. by typing:
+Create a project directory within the $HOME/OpenFOAM directory named &lt;USER&gt;-&lt;OFversion&gt; and create a directory named run within it, e.g. by typing:
 
 ```bash
-    $ mkdir -p $FOAM_RUN
+    $ mkdir -p $FOAM_RUN
 ```
 
 Project directory is now available by typing:
@@ -82,10 +82,10 @@ or
     $ cd $FOAM_RUN
 ```
 
-Copy the tutorial examples directory in the OpenFOAM distribution to the run directory:
+Copy the tutorial examples directory in the OpenFOAM distribution to the run directory:
 
 ```bash
-    $ cp -r $FOAM_TUTORIALS $FOAM_RUN
+    $ cp -r $FOAM_TUTORIALS $FOAM_RUN
 ```
 
 Now you can run the first case, for example incompressible laminar flow in a cavity.
@@ -231,4 +231,4 @@ In directory My_icoFoam give the compilation command:
 ```
 
 ------------------------------------------------------------------------
- **Have a fun with OpenFOAM :)**
+ **Have fun with OpenFOAM :)**
diff --git a/docs.it4i/anselm-cluster-documentation/software/paraview.md b/docs.it4i/anselm-cluster-documentation/software/paraview.md
index a29e8ec5f..b9deba00e 100644
--- a/docs.it4i/anselm-cluster-documentation/software/paraview.md
+++ b/docs.it4i/anselm-cluster-documentation/software/paraview.md
@@ -6,11 +6,11 @@ ParaView
 Introduction
 ------------
 
-**ParaView** is an open-source, multi-platform data analysis and visualization application. ParaView users can quickly build visualizations to analyze their data using qualitative and quantitative techniques. The data exploration can be done interactively in 3D or programmatically using ParaView's batch processing capabilities.
+**ParaView** is an open-source, multi-platform data analysis and visualization application. ParaView users can quickly build visualizations to analyze their data using qualitative and quantitative techniques. The data exploration can be done interactively in 3D or programmatically using ParaView's batch processing capabilities.
 
 ParaView was developed to analyze extremely large datasets using distributed memory computing resources. It can be run on supercomputers to analyze datasets of exascale size as well as on laptops for smaller data.
 
-Homepage : <http://www.paraview.org/>
+Homepage: <http://www.paraview.org/>
 
 Installed version
 -----------------
@@ -28,7 +28,7 @@ To launch the server, you must first allocate compute nodes, for example
     $ qsub -I -q qprod -A OPEN-0-0 -l select=2
 ```
 
-to launch an interactive session on 2 nodes. Refer to [Resource Allocation and Job Execution](../resource-allocation-and-job-execution/introduction/) for details.
+to launch an interactive session on 2 nodes. Refer to [Resource Allocation and Job Execution](../resource-allocation-and-job-execution/introduction/) for details.
 
 After the interactive session is opened, load the ParaView module:
 
@@ -55,7 +55,7 @@ Because a direct connection is not allowed to compute nodes on Anselm, you must
     ssh -TN -L 12345:cn77:11111 username@anselm.it4i.cz
 ```
 
-replace  username with your login and cn77 with the name of compute node your ParaView server is running on (see previous step). If you use PuTTY on Windows, load Anselm connection configuration, t>hen go to Connection-&gt; SSH>-&gt;Tunnels to set up the port forwarding. Click Remote radio button. Insert 12345 to Source port textbox. Insert cn77:11111. Click Add button, then Open.
+replace username with your login and cn77 with the name of the compute node your ParaView server is running on (see previous step). If you use PuTTY on Windows, load the Anselm connection configuration, then go to Connection-&gt;SSH-&gt;Tunnels to set up the port forwarding. Click the Remote radio button. Insert 12345 into the Source port textbox. Insert cn77:11111 into the Destination textbox. Click the Add button, then Open.
 
 Now launch the ParaView client installed on your desktop PC. Select File-&gt;Connect..., click Add Server. Fill in the following:
 
diff --git a/docs.it4i/anselm-cluster-documentation/storage.md b/docs.it4i/anselm-cluster-documentation/storage.md
index c93d61246..92b31ff2a 100644
--- a/docs.it4i/anselm-cluster-documentation/storage.md
+++ b/docs.it4i/anselm-cluster-documentation/storage.md
@@ -26,13 +26,13 @@ If multiple clients try to read and write the same part of a file at the same ti
 There is a default stripe configuration for Anselm Lustre filesystems. However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance:
 
 1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; default is 1MB for all Anselm Lustre filesystems
-2. stripe_count the number of OSTs to stripe across; default is 1 for Anselm Lustre filesystems  one can specify -1 to use all OSTs in the filesystem.
+2. stripe_count: the number of OSTs to stripe across; default is 1 for Anselm Lustre filesystems; one can specify -1 to use all OSTs in the filesystem.
 3. stripe_offset: the index of the OST where the first stripe is to be placed; default is -1 which results in random selection; using a non-default value is NOT recommended.
 
 !!! Note "Note"
 	Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
 
-Use the lfs getstripe for getting the stripe parameters. Use the lfs setstripe command for setting the stripe parameters to get optimal I/O performance The correct stripe setting depends on your needs and file access patterns. 
+Use the lfs getstripe command to get the stripe parameters. Use the lfs setstripe command to set the stripe parameters and obtain optimal I/O performance. The correct stripe setting depends on your needs and file access patterns.
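+
+For instance (a sketch; the directory path below is just an example, and the generic syntax follows below), a directory holding large shared files might be striped across 10 OSTs like this:
+
+```bash
+$ lfs setstripe -c 10 /scratch/$USER/bigdata
+$ lfs getstripe /scratch/$USER/bigdata
+```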
 
 ```bash
 $ lfs getstripe dir|filename
@@ -79,7 +79,7 @@ Read more on <http://wiki.lustre.org/manual/LustreManual20_HTML/ManagingStriping
 
 ### Lustre on Anselm
 
-The  architecture of Lustre on Anselm is composed of two metadata servers (MDS) and four data/object storage servers (OSS). Two object storage servers are used for file system HOME and another two object storage servers are used for file system SCRATCH.
+The architecture of Lustre on Anselm is composed of two metadata servers (MDS) and four data/object storage servers (OSS). Two object storage servers are used for file system HOME and another two object storage servers are used for file system SCRATCH.
 
  Configuration of the storages
 
@@ -103,7 +103,7 @@ The  architecture of Lustre on Anselm is composed of two metadata servers (MDS)
 
 ###HOME
 
-The HOME filesystem is mounted in directory /home. Users home directories /home/username reside on this filesystem. Accessible capacity is 320TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250GB per user. If 250GB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request.
+The HOME filesystem is mounted in directory /home. Users' home directories /home/username reside on this filesystem. Accessible capacity is 320TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250GB per user. If 250GB should prove insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.
 
 !!! Note "Note"
 	The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active Projects.
@@ -112,7 +112,7 @@ The HOME filesystem should not be used to archive data of past Projects or other
 
 The files on HOME filesystem will not be deleted until end of the [users lifecycle](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/).
 
-The filesystem is backed up, such that it can be restored in case of catasthropic failure resulting in significant data loss. This backup however is not intended to restore old versions of user data or to restore (accidentaly) deleted files.
+The filesystem is backed up, such that it can be restored in case of a catastrophic failure resulting in significant data loss. This backup, however, is not intended to restore old versions of user data or to restore (accidentally) deleted files.
 
 The HOME filesystem is realized as Lustre parallel filesystem and is available on all login and computational nodes.
 Default stripe size is 1MB, stripe count is 1. There are 22 OSTs dedicated for the HOME filesystem.
@@ -135,7 +135,7 @@ Default stripe size is 1MB, stripe count is 1. There are 22 OSTs dedicated for t
 The SCRATCH filesystem is mounted in directory /scratch. Users may freely create subdirectories and files on the filesystem. Accessible capacity is 146TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 100TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and denying service to other users. If 100TB should prove insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.
 
 !!! Note "Note"
-	The Scratch filesystem is intended  for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory.
+	The SCRATCH filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory.
 
     Users are advised to save the necessary data from the SCRATCH filesystem to the HOME filesystem after the calculations and clean up the scratch files.
 
@@ -184,10 +184,10 @@ Example for Lustre SCRATCH directory:
 $ lfs quota /scratch
 Disk quotas for user user001 (uid 1234):
      Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
-          /scratch       8       0 100000000000       -       3       0       0       -
+          /scratch       8       0 100000000000       -       3       0       0       -
 Disk quotas for group user001 (gid 1234):
  Filesystem kbytes quota limit grace files quota limit grace
- /scratch       8       0       0       -       3       0       0       -
+ /scratch       8       0       0       -       3       0       0       -
 ```
 
 In this example, we view the current quota size limit of 100TB, with 8KB currently used by user001.
@@ -210,7 +210,7 @@ $ du -hs * .[a-zA-z0-9]* | grep -E "[0-9]*G|[0-9]*M" | sort -hr
 2,7M     .idb_13.0_linux_intel64_app
 ```
 
-This will list all directories which are having MegaBytes or GigaBytes of consumed space in your actual (in this example HOME) directory. List is sorted in descending order from largest to smallest files/directories.
+This will list all directories that consume megabytes or gigabytes of space in your current (in this example HOME) directory. The list is sorted in descending order, from largest to smallest files/directories.
 
 To have a better understanding of the previous commands, you can read their man pages.
 
@@ -224,7 +224,7 @@ $ man du
 
 ### Extended ACLs
 
-Extended ACLs provide another security mechanism beside the standard POSIX ACLs which are defined by three entries (for owner/group/others). Extended ACLs have more than the three basic entries. In addition, they also contain a mask entry and may contain any number of named user and named group entries.
+Extended ACLs provide another security mechanism besides the standard POSIX ACLs, which are defined by three entries (for owner/group/others). Extended ACLs have more than the three basic entries. In addition, they also contain a mask entry and may contain any number of named user and named group entries.
 
 ACLs on a Lustre file system work exactly like ACLs on any Linux file system. They are manipulated with the standard tools in the standard manner. Below, we create a directory and allow a specific user access.
 
@@ -232,7 +232,7 @@ ACLs on a Lustre file system work exactly like ACLs on any Linux file system. Th
 [vop999@login1.anselm ~]$ umask 027
 [vop999@login1.anselm ~]$ mkdir test
 [vop999@login1.anselm ~]$ ls -ld test
-drwxr-x--- 2 vop999 vop999 4096 Nov  5 14:17 test
+drwxr-x--- 2 vop999 vop999 4096 Nov  5 14:17 test
 [vop999@login1.anselm ~]$ getfacl test
 # file: test
 # owner: vop999
@@ -243,7 +243,7 @@ other::---
 
 [vop999@login1.anselm ~]$ setfacl -m user:johnsm:rwx test
 [vop999@login1.anselm ~]$ ls -ld test
-drwxrwx---+ 2 vop999 vop999 4096 Nov  5 14:17 test
+drwxrwx---+ 2 vop999 vop999 4096 Nov  5 14:17 test
 [vop999@login1.anselm ~]$ getfacl test
 # file: test
 # owner: vop999
@@ -257,7 +257,7 @@ other::---
 
 Default ACL mechanism can be used to replace setuid/setgid permissions on directories. Setting a default ACL on a directory (-d flag to setfacl) will cause the ACL permissions to be inherited by any newly created file or subdirectory within the directory. Refer to this page for more information on Linux ACL:
 
-[http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html ](http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html)
+[http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html](http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html)
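+
+As a sketch, reusing the directory and user from the example above, a default ACL that lets johnsm access everything created inside the test directory could be set like this:
+
+```bash
+[vop999@login1.anselm ~]$ setfacl -d -m user:johnsm:rwx test
+# files and subdirectories created under test/ from now on inherit the ACL entry for johnsm
+```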
 
 Local Filesystems
 -----------------
@@ -271,7 +271,7 @@ Use local scratch in case you need to access large amount of small files during
 
 The local scratch disk is mounted as /lscratch and is accessible to users in the /lscratch/$PBS_JOBID directory.
 
-The local scratch filesystem is intended  for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs that access large number of small files within the calculation must use the local scratch filesystem as their working directory. This is required for performance reasons, as frequent access to number of small files may overload the metadata servers (MDS) of the Lustre filesystem.
+The local scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs that access a large number of small files within the calculation must use the local scratch filesystem as their working directory. This is required for performance reasons, as frequent access to a large number of small files may overload the metadata servers (MDS) of the Lustre filesystem.
 
 !!! Note "Note"
 	The local scratch directory /lscratch/$PBS_JOBID will be deleted immediately after the calculation ends. Users should take care to save the output data from within the jobscript.
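+
+A minimal jobscript sketch illustrating this pattern (the solver and file names are hypothetical):
+
+```bash
+#!/bin/bash
+# stage the input to the local scratch, compute there, then save the results before the job ends
+SCRDIR=/lscratch/$PBS_JOBID
+cp $PBS_O_WORKDIR/input.dat $SCRDIR/
+cd $SCRDIR
+./my_solver input.dat > output.dat
+cp output.dat $PBS_O_WORKDIR/
+```
+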
@@ -293,7 +293,7 @@ Every computational node is equipped with filesystem realized in memory, so call
 
 The local RAM disk is mounted as /ramdisk and is accessible to users in the /ramdisk/$PBS_JOBID directory.
 
-The local RAM disk filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. Size of RAM disk filesystem is limited. Be very careful, use of RAM disk filesystem is at the expense of operational memory.  It is not recommended to allocate large amount of memory and use large amount of data in RAM disk filesystem at the same time.
+The local RAM disk filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. The size of the RAM disk filesystem is limited. Be very careful, use of the RAM disk filesystem is at the expense of operational memory. It is not recommended to allocate a large amount of memory and use a large amount of data in the RAM disk filesystem at the same time.
 
 !!! Note "Note"
 	The local RAM disk directory /ramdisk/$PBS_JOBID will be deleted immediately after the calculation ends. Users should take care to save the output data from within the jobscript.
@@ -354,7 +354,7 @@ Once registered for CESNET Storage, you may [access the storage](https://du.cesn
 !!! Note "Note"
 	SSHFS: The storage will be mounted like a local hard drive
 
-The SSHFS  provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard drive. Files can be than copied in and out in a usual fashion.
+The SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard drive. Files can then be copied in and out in the usual fashion.
 
 First, create the mount point
 
@@ -399,9 +399,9 @@ Once done, please remember to unmount the storage
 !!! Note "Note"
 	Rsync provides delta transfer for best performance, can resume interrupted transfers
 
-Rsync is a fast and extraordinarily versatile file copying tool. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination.  Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.
+Rsync is a fast and extraordinarily versatile file copying tool. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination.  Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.
 
-Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time.  Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated.
+Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time.  Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated.
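+
+A sketch of a typical transfer (the remote host and directory below are placeholders; take the actual CESNET endpoint and paths from the guide linked below):
+
+```bash
+# -a preserves permissions and times, -v is verbose; unchanged files are skipped by the quick check
+$ rsync -av --progress my_results/ username@cesnet-storage-host:archive/my_results/
+```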
 
 More about Rsync at <https://du.cesnet.cz/en/navody/rsync/start#pro_bezne_uzivatele>
 
diff --git a/docs.it4i/downtimes_history.md b/docs.it4i/downtimes_history.md
index ae419d058..2f2a7c368 100644
--- a/docs.it4i/downtimes_history.md
+++ b/docs.it4i/downtimes_history.md
@@ -12,7 +12,7 @@ Full history of important announcements related to IT4I infrastructure, planned
  |2016-06-29 13:50:00 |**Salomon cluster maintenance outage prolonged** Important! Salomon cluster maintenance outage will be prolonged till 2016-06-29 20:00 CEST. |
 |2016-06-16 00:00:00 |**Salomon planned downtime** There's a planned maintenance window from 2016-06-28 09:00 till 2016-06-29 20:00 CEST. Thank you for understanding, the IT4Innovations team |
 |2016-05-26 10:31:44 |**Salomon planned downtime** There's a planned maintenance window from 2016-06-08 09:00 till 2016-06-09 09:00 CEST. Thank you for understanding, the IT4Innovations team |
- |2016-04-27 15:57:28 |**Salomon cluster maintenance outage prolonged** Important! Salomon cluster maintenance outage will be prolonged till 2016-04-28 14:00 CEST |
+ |2016-04-27 15:57:28 |**Salomon cluster maintenance outage prolonged** Important! Salomon cluster maintenance outage will be prolonged till 2016-04-28 14:00 CEST |
  |2016-03-31 19:03:25 |**Failure on Salomon Cooling System** We have very serious issue with Salomon cooling system since 2016-03-31 10:00. We are working to resolve the issue. |
  |2016-03-31 18:59:04 |**Salomon Back in Production** As of 2016-03-31 19:30 CET, the Salomon is back in production. The outage was caused by an issue in cooling system. |
  |2016-03-30 15:57:57 |**PBS malfunction** We've had several issues with PBS scheduler since 2016-03-30 13:00 CEST. We are still working on it. |
@@ -33,7 +33,7 @@ Full history of important announcements related to IT4I infrastructure, planned
 |2015-08-06 00:00:00 |**SCRATCH downtime** Dear IT4I users, Salomon's SCRATCH will not be accessible tomorrow (7th August 2015) from 08:30 till 11:00 CEST. Thank you for understanding, the IT4Innovations team |
 |2014-11-14 10:27:51 |**Unplanned PBS Downtime** Dear Anselm users, we apologize for the unavailability of our PBS scheduler during the last weekend. However, running jobs shouldn't have been affected at that time. Thank you for understanding, Anselm Admins |
  |2014-11-14 10:27:50 |**Login1 troubles** Login1 had a short unplanned downtime. Sorry for the troubles. |
- |2014-10-14 20:30:00 |**Unexpected power failure** Dear Anselm users,>>on Tuesday 14th approximately at 17:20 CEST we encountered power failure during service operation on backup diesel generator. The system shut down. Additional checks after the shutdown took more time than what would expect. The system was back on-line with all services approximately at 21:00 CEST. We are very sorry for any troubles, this matter may caused you.  If some of your jobs ended in incorrect state, please feel free to reclaim your core hours.>>Thank you for understanding, Anselm Administrators |
+ |2014-10-14 20:30:00 |**Unexpected power failure** Dear Anselm users, on Tuesday 14th approximately at 17:20 CEST we encountered a power failure during a service operation on the backup diesel generator. The system shut down. Additional checks after the shutdown took more time than we would expect. The system was back on-line with all services approximately at 21:00 CEST. We are very sorry for any troubles this matter may have caused you. If some of your jobs ended in an incorrect state, please feel free to reclaim your core hours. Thank you for understanding, Anselm Administrators |
 |2014-07-17 13:50:00 |**Login2(!) downtime** Dear Anselm users, there's an upgrade planned on Friday, 18th July from 13:00 till 16:00 CEST. Please keep in mind that login2.anselm.it4i.cz will be unavailable at the given time-frame. We are sorry for the inconvenience. Thank you for understanding, Anselm Admins |
 |2014-07-16 13:11:34 |**Login1 downtime** Dear Anselm users, there's an upgrade planned on Thursday, 17th July from 13:00 till 16:00 CEST. Please keep in mind that login1.anselm.it4i.cz will be unavailable at the given time-frame. We are sorry for the inconvenience. Thank you for understanding, Anselm Admins |
 |2014-06-18 10:51:56 |**Login2 downtime** Dear Anselm users, there's an upgrade planned on Wednesday, 18th June from 11:20 till 14:20 CEST. Please keep in mind that login2.anselm.it4i.cz will be unavailable at the given time-frame. We are sorry for the inconvenience. Thank you for understanding, Anselm Admins |
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md
index c38214732..d392471ef 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md
@@ -6,7 +6,7 @@ X Window System
 
 The X Window system is a principal way to get GUI access to the clusters.
 
-Read more about configuring [**X Window System**](x-window-system/).
+Read more about configuring [**X Window System**](x-window-system/).
 
 VNC
 ---
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md
index 657cc412b..b136cf4a0 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md
@@ -21,7 +21,7 @@ Start vncserver
 ---------------
 
 !!! Note "Note"
-	To access VNC a local vncserver must be  started first and also a tunnel using SSH port forwarding must be established.
+	To access VNC, a local vncserver must be started first, and a tunnel using SSH port forwarding must also be established.
 
     [See below](vnc.md#linux-example-of-creating-a-tunnel) for the details on SSH tunnels. In this example we use port 61.
 
@@ -29,8 +29,8 @@ You can find ports which are already occupied. Here you can see that ports " /us
 
 ```bash
 [username@login2 ~]$ ps aux | grep Xvnc
-username    5971  0.0  0.0 201072 92564 ?        SN   Sep22   4:19 /usr/bin/Xvnc :79 -desktop login2:79 (username) -auth /home/gre196/.Xauthority -geometry 1024x768 -rfbwait 30000 -rfbauth /home/username/.vnc/passwd -rfbport 5979 -fp catalogue:/etc/X11/fontpath.d -pn
-username    10296  0.0  0.0 131772 21076 pts/29   SN   13:01   0:01 /usr/bin/Xvnc :60 -desktop login2:61 (username) -auth /home/username/.Xauthority -geometry 1600x900 -depth 16 -rfbwait 30000 -rfbauth /home/jir13/.vnc/passwd -rfbport 5960 -fp catalogue:/etc/X11/fontpath.d -pn
+username    5971  0.0  0.0 201072 92564 ?        SN   Sep22   4:19 /usr/bin/Xvnc :79 -desktop login2:79 (username) -auth /home/gre196/.Xauthority -geometry 1024x768 -rfbwait 30000 -rfbauth /home/username/.vnc/passwd -rfbport 5979 -fp catalogue:/etc/X11/fontpath.d -pn
+username    10296  0.0  0.0 131772 21076 pts/29   SN   13:01   0:01 /usr/bin/Xvnc :60 -desktop login2:61 (username) -auth /home/username/.Xauthority -geometry 1600x900 -depth 16 -rfbwait 30000 -rfbauth /home/jir13/.vnc/passwd -rfbport 5960 -fp catalogue:/etc/X11/fontpath.d -pn
 .....
 ```
 
@@ -52,16 +52,16 @@ Check if VNC server is started on the port (in this example 61):
 
 TigerVNC server sessions:
 
-X DISPLAY #     PROCESS ID
-:61              18437
+X DISPLAY #     PROCESS ID
+:61              18437
 ```
 
 Another command:
 
 ```bash
-[username@login2 .vnc]$  ps aux | grep Xvnc
+[username@login2 .vnc]$  ps aux | grep Xvnc
 
-username    10296  0.0  0.0 131772 21076 pts/29   SN   13:01   0:01 /usr/bin/Xvnc :61 -desktop login2:61 (username) -auth /home/jir13/.Xauthority -geometry 1600x900 -depth 16 -rfbwait 30000 -rfbauth /home/username/.vnc/passwd -rfbport 5961 -fp catalogue:/etc/X11/fontpath.d -pn
+username    10296  0.0  0.0 131772 21076 pts/29   SN   13:01   0:01 /usr/bin/Xvnc :61 -desktop login2:61 (username) -auth /home/jir13/.Xauthority -geometry 1600x900 -depth 16 -rfbwait 30000 -rfbauth /home/username/.vnc/passwd -rfbport 5961 -fp catalogue:/etc/X11/fontpath.d -pn
 ```
 
 To access the VNC server you have to create a tunnel between the login node using TCP **port 5961** and your machine using a free TCP port (for simplicity the very same, in this case).
@@ -75,7 +75,7 @@ Linux/Mac OS example of creating a tunnel
 At your machine, create the tunnel:
 
 ```bash
-local $  ssh -TN -f username@login2.cluster-name.it4i.cz -L 5961:localhost:5961
+local $  ssh -TN -f username@login2.cluster-name.it4i.cz -L 5961:localhost:5961
 ```
 
 Issue the following command to check the tunnel is established (please note the PID 2022 in the last column, you'll need it for closing the tunnel):
@@ -83,9 +83,9 @@ Issue the following command to check the tunnel is established (please note the
 ```bash
 local $ netstat -natp | grep 5961
 (Not all processes could be identified, non-owned process info
- will not be shown, you would have to be root to see it all.)
-tcp        0      0 127.0.0.1:5961          0.0.0.0:*               LISTEN      2022/ssh
-tcp6       0      0 ::1:5961                :::*                    LISTEN      2022/ssh
+ will not be shown, you would have to be root to see it all.)
+tcp        0      0 127.0.0.1:5961          0.0.0.0:*               LISTEN      2022/ssh
+tcp6       0      0 ::1:5961                :::*                    LISTEN      2022/ssh
 ```
 
 Or on Mac OS use this command:
@@ -121,8 +121,8 @@ Search for the localhost and port number (in this case 127.0.0.1:5961).
 ```bash
 [username@login2 .vnc]$ netstat -tanp | grep Xvnc
 (Not all processes could be identified, non-owned process info
- will not be shown, you would have to be root to see it all.)
-tcp        0      0 127.0.0.1:5961              0.0.0.0:*                   LISTEN      24031/Xvnc
+ will not be shown, you would have to be root to see it all.)
+tcp        0      0 127.0.0.1:5961              0.0.0.0:*                   LISTEN      24031/Xvnc
 ```
 
 On the PuTTY Configuration screen go to Connection-&gt;SSH-&gt;Tunnels to set up the tunnel.
@@ -172,8 +172,8 @@ If the screen gets locked you have to kill the screensaver. Do not to forget to
 
 ```bash
 [username@login2 .vnc]$ ps aux | grep screen
-username     1503  0.0  0.0 103244   892 pts/4    S+   14:37   0:00 grep screen
-username     24316  0.0  0.0 270564  3528 ?        Ss   14:12   0:00 gnome-screensaver
+username     1503  0.0  0.0 103244   892 pts/4    S+   14:37   0:00 grep screen
+username     24316  0.0  0.0 270564  3528 ?        Ss   14:12   0:00 gnome-screensaver
 
 [username@login2 .vnc]$ kill 24316
 ```
@@ -184,7 +184,7 @@ Kill vncserver after finished work
 You should kill your VNC server using command:
 
 ```bash
-[username@login2 .vnc]$  vncserver  -kill :61
+[username@login2 .vnc]$  vncserver  -kill :61
 Killing Xvnc process ID 7074
 Xvnc process ID 7074 already killed
 ```
@@ -192,10 +192,10 @@ Xvnc process ID 7074 already killed
 Or this way:
 
 ```bash
-[username@login2 .vnc]$  pkill vnc
+[username@login2 .vnc]$  pkill vnc
 ```
 
-GUI applications on compute nodes over VNC
+GUI applications on compute nodes over VNC
 ------------------------------------------
 
 The very same methods as described above may be used to run GUI applications on compute nodes. However, for maximum performance, proceed with the following steps:
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
index 8e970f764..9952f60e1 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
@@ -9,7 +9,7 @@ The X Window system is a principal way to get GUI access to the clusters. The **
 X display
 ---------
 
-In order to display graphical user interface GUI of various software tools, you need to enable the X display forwarding. On Linux and Mac, log in using the -X option tho ssh client:
+In order to display the graphical user interface (GUI) of various software tools, you need to enable X display forwarding. On Linux and Mac, log in using the -X option of the ssh client:
 
 ```bash
  local $ ssh -X username@cluster-name.it4i.cz
@@ -92,7 +92,7 @@ In this example, we allocate 2 nodes via qexp queue, interactively. We request X
 $ ssh -X r24u35n680
 ```
 
-In this example, we log in on the r24u35n680 compute node, with the X11 forwarding enabled.
+In this example, we log in to the r24u35n680 compute node, with X11 forwarding enabled.
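+
+Once on the compute node, a quick way to verify that the forwarding works is to check the DISPLAY variable and start a small X client (xclock is used here purely as an illustration and may not be installed on every node):
+
+```bash
+$ echo $DISPLAY    # should print a forwarded display, e.g. localhost:10.0
+$ xclock &         # any small X application will do; its window should appear on your local screen
+```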
 
 The Gnome GUI Environment
 -------------------------
@@ -114,7 +114,7 @@ This will open a new X window with size 1024 x 768 at DISPLAY :1. Next, ssh to t
 ```bash
 local $ DISPLAY=:1.0 ssh -XC yourname@cluster-name.it4i.cz -i ~/.ssh/path_to_your_key
 ... cluster-name MOTD...
-yourname@login1.cluster-namen.it4i.cz $ gnome-session &
+yourname@login1.cluster-name.it4i.cz $ gnome-session &
 ```
 
 On older systems where Xephyr is not available, you may also try Xnest instead of Xephyr. Another option is to launch a new X server in a separate console, via:
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md
index 77aead0c2..a6128d1da 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.md
@@ -4,7 +4,7 @@ PuTTY
 !!! Note "Note"
 	PuTTY -  before we start SSH connection
 
-Windows PuTTY Installer
+Windows PuTTY Installer
 -----------------------
 
 We recommend downloading "**A Windows installer for everything except PuTTYtel**" with **Pageant** (SSH authentication agent) and **PuTTYgen** (PuTTY key generator), which is available [here](http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html).
@@ -32,7 +32,7 @@ PuTTY - how to connect to the IT4Innovations cluster
 ----------------------------------------------------
 
 -   Run PuTTY
--   Enter Host name and Save session fields with [Login address](../../../salomon/shell-and-data-access.md) and browse Connection - &gt; SSH -&gt; Auth menu. The *Host Name* input may be in the format **"username@clustername.it4i.cz"** so you don't have to type your login each time.In this example we will connect to the Salomon cluster using **"salomon.it4i.cz"**.
+-   Enter the Host Name and Saved Sessions fields with the [Login address](../../../salomon/shell-and-data-access.md) and browse the Connection -&gt; SSH -&gt; Auth menu. The *Host Name* input may be in the format **"username@clustername.it4i.cz"** so you don't have to type your login each time. In this example we will connect to the Salomon cluster using **"salomon.it4i.cz"**.
 
 ![](../../../img/PuTTY_host_Salomon.png)
 
@@ -51,7 +51,7 @@ PuTTY - how to connect to the IT4Innovations cluster
 
 ![](../../../img/PuTTY_open_Salomon.png)
 
--   Enter your username if the *Host Name* input is not in the format "username@salomon.it4i.cz".
+-   Enter your username if the *Host Name* input is not in the format "username@salomon.it4i.cz".
 -   Enter passphrase for selected [private key](ssh-keys/) file if Pageant **SSH authentication agent is not used.**
 
 Another PuTTY Settings
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/puttygen.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/puttygen.md
index 6ad210725..74848055b 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/puttygen.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/puttygen.md
@@ -18,7 +18,7 @@ You can change the password of your SSH key with "PuTTY Key Generator". Make sur
 Generate a New Public/Private key
 ---------------------------------
 
-You can generate an additional public/private key pair and insert public key into authorized_keys file for authentication with your own private key.
+You can generate an additional public/private key pair and insert the public key into the authorized_keys file for authentication with your own private key.
 
 -   Start with *Generate* button.
 
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
index 3b8b41b40..ba5c29fdb 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
@@ -12,10 +12,10 @@ After logging in, you can see .ssh/ directory with SSH keys and authorized_keys
     total 24
     drwx------ 2 username username 4096 May 13 15:12 .
     drwxr-x---22 username username 4096 May 13 07:22 ..
-    -rw-r--r-- 1 username username  392 May 21  2014 authorized_keys
-    -rw------- 1 username username 1675 May 21  2014 id_rsa
-    -rw------- 1 username username 1460 May 21  2014 id_rsa.ppk
-    -rw-r--r-- 1 username username  392 May 21  2014 id_rsa.pub
+    -rw-r--r-- 1 username username  392 May 21  2014 authorized_keys
+    -rw------- 1 username username 1675 May 21  2014 id_rsa
+    -rw------- 1 username username 1460 May 21  2014 id_rsa.ppk
+    -rw-r--r-- 1 username username  392 May 21  2014 id_rsa.pub
 ```
 
 !!! Note "Note"
@@ -25,7 +25,7 @@ Access privileges on .ssh folder
 --------------------------------
 
 - .ssh directory: 700 (drwx------)
-- Authorized_keys, known_hosts and public key (.pub file): 644 (-rw-r--r--)
+- Authorized_keys, known_hosts and public key (.pub file): 644 (-rw-r--r--)
 - Private key (id_rsa/id_rsa.ppk): 600 (-rw-------)
 
 ```bash
@@ -35,7 +35,7 @@ Access privileges on .ssh folder
     chmod 644 .ssh/id_rsa.pub
     chmod 644 .ssh/known_hosts
     chmod 600 .ssh/id_rsa
-    chmod 600 .ssh/id_rsa.ppk
+    chmod 600 .ssh/id_rsa.ppk
 ```
 
 Private key
diff --git a/docs.it4i/get-started-with-it4innovations/applying-for-resources.md b/docs.it4i/get-started-with-it4innovations/applying-for-resources.md
index c15173236..0467551d3 100644
--- a/docs.it4i/get-started-with-it4innovations/applying-for-resources.md
+++ b/docs.it4i/get-started-with-it4innovations/applying-for-resources.md
@@ -3,7 +3,7 @@ Applying for Resources
 
 Computational resources may be allocated by any of the following [Computing resources allocation](http://www.it4i.cz/computing-resources-allocation/?lang=en) mechanisms.
 
-Academic researchers can apply for computational resources via  [Open Access Competitions](http://www.it4i.cz/open-access-competition/?lang=en&lang=en).
+Academic researchers can apply for computational resources via  [Open Access Competitions](http://www.it4i.cz/open-access-competition/?lang=en&lang=en).
 
 Anyone is welcomed to apply via the [Directors Discretion.](http://www.it4i.cz/obtaining-computational-resources-through-directors-discretion/?lang=en&lang=en)
 
diff --git a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
index 8998d931a..691e66488 100644
--- a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
+++ b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
@@ -3,7 +3,7 @@ Obtaining Login Credentials
 
 Obtaining Authorization
 -----------------------
-The computational resources of IT4I  are allocated by the Allocation Committee to a [Project](/), investigated by a Primary Investigator. By allocating the computational resources, the Allocation Committee is authorizing the PI to access and use the clusters. The PI may decide to authorize a number of her/his Collaborators to access and use the clusters, to consume the resources allocated to her/his Project. These collaborators will be associated to the Project. The Figure below is depicting the authorization chain:
+The computational resources of IT4I are allocated by the Allocation Committee to a [Project](/), investigated by a Primary Investigator. By allocating the computational resources, the Allocation Committee is authorizing the PI to access and use the clusters. The PI may decide to authorize a number of her/his Collaborators to access and use the clusters, to consume the resources allocated to her/his Project. These collaborators will be associated with the Project. The figure below depicts the authorization chain:
 
 ![](../../img/Authorization_chain.png)
 
@@ -68,7 +68,7 @@ Once authorized by PI, every person (PI or Collaborator) wishing to access the c
 2.  Full name and affiliation
 3.  Statement that you have read and accepted the [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf) (AUP).
 4.  Attach the AUP file.
-5.  Your preferred username, max 8 characters long. The preferred username must associate your surname and name or be otherwise derived from it. Only alphanumeric sequences, dash and underscore signs are allowed.
+5.  Your preferred username, max 8 characters long. The preferred username must associate your surname and name or be otherwise derived from it. Only alphanumeric sequences, dash and underscore signs are allowed.
 6.  In case you choose [Alternative way to personal certificate](obtaining-login-credentials/#alternative-way-of-getting-personal-certificate), a **scan of photo ID** (personal ID or passport or driver license) is required
 
 Example (except the subject line which must be in English, you may use Czech or Slovak language for communication with us):
@@ -119,12 +119,12 @@ On Windows, use [PuTTY Key Generator](../accessing-the-clusters/shell-access-and
 Change Password
 ---------------
 
-Change password in your user profile at <https://extranet.it4i.cz/user/>
+Change password in your user profile at <https://extranet.it4i.cz/user/>
 
 The Certificates for Digital Signatures
 ---------------------------------------
 
-We accept personal certificates issued by any widely respected certification authority (CA). This includes certificates by CAs organized in International Grid Trust Federation (<http://www.igtf.net/>), its European branch EUGridPMA - <https://www.eugridpma.org/> and its member organizations, e.g. the CESNET certification authority - <https://tcs-p.cesnet.cz/confusa/>. The Czech *"Qualified certificate" (Kvalifikovaný certifikát)* (provided by <http://www.postsignum.cz/> or <http://www.ica.cz/Kvalifikovany-certifikat.aspx>), that is used in electronic contact with Czech authorities is accepted as well.
+We accept personal certificates issued by any widely respected certification authority (CA). This includes certificates by CAs organized in International Grid Trust Federation (<http://www.igtf.net/>), its European branch EUGridPMA - <https://www.eugridpma.org/> and its member organizations, e.g. the CESNET certification authority - <https://tcs-p.cesnet.cz/confusa/>. The Czech *"Qualified certificate" (Kvalifikovaný certifikát)* (provided by <http://www.postsignum.cz/> or <http://www.ica.cz/Kvalifikovany-certifikat.aspx>), that is used in electronic contact with Czech authorities is accepted as well.
 
 Certificate generation process is well-described here:
 
diff --git a/docs.it4i/get-started-with-it4innovations/vpn-access.md b/docs.it4i/get-started-with-it4innovations/vpn-access.md
index c5228b11e..8c51a6149 100644
--- a/docs.it4i/get-started-with-it4innovations/vpn-access.md
+++ b/docs.it4i/get-started-with-it4innovations/vpn-access.md
@@ -45,7 +45,7 @@ Working with VPN client
 
 You can use graphical user interface or command line interface to run VPN client on all supported operating systems. We suggest using GUI.
 
-Before the first login to VPN, you have to fill URL **[https://vpn.it4i.cz/user](https://vpn.it4i.cz/user)** into the text field.
+Before the first login to VPN, you have to fill URL **[https://vpn.it4i.cz/user](https://vpn.it4i.cz/user)** into the text field.
 
 ![](../img/vpn_contacting_https_cluster.png)
 
diff --git a/docs.it4i/get-started-with-it4innovations/vpn1-access.md b/docs.it4i/get-started-with-it4innovations/vpn1-access.md
index cf4ea8823..dc7f0c5ce 100644
--- a/docs.it4i/get-started-with-it4innovations/vpn1-access.md
+++ b/docs.it4i/get-started-with-it4innovations/vpn1-access.md
@@ -54,7 +54,7 @@ Working with VPN client
 
 You can use graphical user interface or command line interface to run VPN client on all supported operating systems. We suggest using GUI.
 
-Before the first login to VPN, you have to fill URL **https://vpn1.it4i.cz/anselm** into the text field.
+Before the first login to VPN, you have to fill URL **https://vpn1.it4i.cz/anselm** into the text field.
 
 ![](../img/firstrun.jpg)
 
diff --git a/docs.it4i/index.md b/docs.it4i/index.md
index 0b202d4a3..e96213524 100644
--- a/docs.it4i/index.md
+++ b/docs.it4i/index.md
@@ -13,7 +13,7 @@ Welcome to IT4Innovations documentation pages. The IT4Innovations national super
 Getting Help and Support
 ------------------------
 !!! Note "Note"
-	Contact [support [at] it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz) for help and support regarding the cluster technology at IT4Innovations. Please use **Czech**, **Slovak** or **English** language for communication with us. Follow the status of your request to IT4Innovations at [support.it4i.cz/rt](http://support.it4i.cz/rt).
+	Contact [support [at] it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz) for help and support regarding the cluster technology at IT4Innovations. Please use **Czech**, **Slovak** or **English** language for communication with us. Follow the status of your request to IT4Innovations at [support.it4i.cz/rt](http://support.it4i.cz/rt).
 
 Use your IT4Innotations username and password to log in to the [support](http://support.it4i.cz/) portal.
 
diff --git a/docs.it4i/salomon/7d-enhanced-hypercube.md b/docs.it4i/salomon/7d-enhanced-hypercube.md
index 8de69013b..ec11ddb11 100644
--- a/docs.it4i/salomon/7d-enhanced-hypercube.md
+++ b/docs.it4i/salomon/7d-enhanced-hypercube.md
@@ -10,7 +10,7 @@
 |M-Cell compute nodes w/o accelerator|576|cns1 -cns576|r1i0n0 - r4i7n17|1-4|
 |compute nodes MIC accelerated|432|cns577 - cns1008|r21u01n577 - r37u31n1008|21-38|
 
-###  IB Topology
+###  IB Topology
 
 ![](../img/Salomon_IB_topology.png)
 
diff --git a/docs.it4i/salomon/capacity-computing.md b/docs.it4i/salomon/capacity-computing.md
index 831a1451f..ec853f41f 100644
--- a/docs.it4i/salomon/capacity-computing.md
+++ b/docs.it4i/salomon/capacity-computing.md
@@ -4,7 +4,7 @@ Capacity computing
 Introduction
 ------------
 
-In many cases, it is useful to submit huge (100+) number of computational jobs into the PBS queue system. Huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving best runtime, throughput and computer utilization.
+In many cases, it is useful to submit a huge number (100+) of computational jobs into the PBS queue system. A huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving the best runtime, throughput and computer utilization.
 
 However, executing huge number of jobs via the PBS queue may strain the system. This strain may result in slow response to commands, inefficient scheduling and overall degradation of performance and user experience, for all users. For this reason, the number of jobs is **limited to 100 per user, 1500 per job array**
 
@@ -13,7 +13,7 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 
 -   Use [Job arrays](capacity-computing.md#job-arrays) when running huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
 -   Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
--   Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running huge number of single core jobs
+-   Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running a huge number of single core jobs
 
 Policy
 ------
@@ -74,7 +74,7 @@ cp $PBS_O_WORKDIR/$TASK input ; cp $PBS_O_WORKDIR/myprog.x .
 cp output $PBS_O_WORKDIR/$TASK.out
 ```
 
-In this example, the submit directory holds the 900 input files, executable myprog.x and the jobscript file. As input for each run, we take the filename of input file from created tasklist file. We copy the input file to scratch /scratch/work/user/$USER/$PBS_JOBID, execute the myprog.x and copy the output file back to the submit directory, under the $TASK.out name. The myprog.x runs on one node only and must use threads to run in parallel. Be aware, that if the myprog.x **is not multithreaded**, then all the **jobs are run as single thread programs in sequential** manner. Due to allocation of the whole node, the **accounted time is equal to the usage of whole node**, while using only 1/24 of the node!
+In this example, the submit directory holds the 900 input files, the executable myprog.x and the jobscript file. As input for each run, we take the filename of the input file from the created tasklist file. We copy the input file to the scratch directory /scratch/work/user/$USER/$PBS_JOBID, execute myprog.x and copy the output file back to the submit directory, under the $TASK.out name. The myprog.x runs on one node only and must use threads to run in parallel. Be aware that if myprog.x **is not multithreaded**, then all the **jobs are run as single thread programs in a sequential** manner. Due to the allocation of the whole node, the **accounted time is equal to the usage of the whole node**, while using only 1/24 of the node!
 
 If a huge number of parallel multicore jobs (i.e. multinode, multithreaded, e.g. MPI enabled) needs to be run, then a job array approach should also be used. The main difference compared to the previous single-node example is that the local scratch should not be used (as it is not shared between nodes) and MPI or another technique for a parallel multinode run has to be used properly, as sketched below.
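+
+The sketch below illustrates this variant. It is not part of the official examples; it assumes a generic MPI-enabled executable mpiprog.x (a placeholder name), the OpenMPI module and the shared /scratch/work filesystem, with the array range given at submission time (e.g. qsub -J 1-900 jobscript):
+
+```bash
+#!/bin/bash
+#PBS -A OPEN-0-0
+#PBS -q qprod
+#PBS -l select=2:ncpus=24
+#PBS -l walltime=02:00:00
+
+# use the shared scratch so that all nodes of the subjob see the same data
+SCRDIR=/scratch/work/user/$USER/$PBS_JOBID
+mkdir -p $SCRDIR ; cd $SCRDIR || exit
+
+# pick the input file for this subjob from the tasklist
+TASK=$(sed -n "${PBS_ARRAY_INDEX}p" $PBS_O_WORKDIR/tasklist)
+cp $PBS_O_WORKDIR/$TASK input ; cp $PBS_O_WORKDIR/mpiprog.x .
+
+# run the MPI program across all allocated nodes
+module load OpenMPI
+mpirun ./mpiprog.x input > output
+
+# copy output file back to the submit directory
+cp output $PBS_O_WORKDIR/$TASK.out
+```
+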
 
@@ -159,7 +159,7 @@ GNU parallel
 !!! Note "Note"
 	Use GNU parallel to run many single core tasks on one node.
 
-GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful in running single core jobs via the queue system on  Anselm.
+GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful for running single core jobs via the queue system on Salomon.
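+
+As a minimal illustration of the principle (the module name is indicative only and may differ):
+
+```bash
+$ module load parallel
+$ seq 1 8 | parallel echo "processing item {}"
+```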
 
 For more information and examples see the parallel man page:
 
@@ -174,7 +174,7 @@ The GNU parallel shell executes multiple instances of the jobscript using all co
 
 Example:
 
-Assume we have 101 input files with name beginning with "file" (e. g. file001, ..., file101). Assume we would like to use each of these input files with program executable myprog.x, each as a separate single core job. We call these single core jobs tasks.
+Assume we have 101 input files with names beginning with "file" (e. g. file001, ..., file101). Assume we would like to use each of these input files with the program executable myprog.x, each as a separate single core job. We call these single core jobs tasks.
 
 First, we create a tasklist file, listing all tasks - all input files in our example:
 
@@ -204,13 +204,13 @@ TASK=$1
 cp $PBS_O_WORKDIR/$TASK input
 
 # execute the calculation
-cat  input > output
+cat  input > output
 
 # copy output file to submit directory
 cp output $PBS_O_WORKDIR/$TASK.out
 ```
 
-In this example, tasks from tasklist are executed via the GNU parallel. The jobscript executes multiple instances of itself in parallel, on all cores of the node. Once an instace of jobscript is finished, new instance starts until all entries in tasklist are processed. Currently processed entry of the joblist may be retrieved via $1 variable. Variable $TASK expands to one of the input filenames from tasklist. We copy the input file to local scratch, execute the myprog.x and copy the output file back to the submit directory, under the $TASK.out name.
+In this example, tasks from the tasklist are executed via GNU parallel. The jobscript executes multiple instances of itself in parallel, on all cores of the node. Once an instance of the jobscript is finished, a new instance starts until all entries in the tasklist are processed. The currently processed entry of the tasklist may be retrieved via the $1 variable. The variable $TASK expands to one of the input filenames from the tasklist. We copy the input file to local scratch, execute myprog.x and copy the output file back to the submit directory, under the $TASK.out name.
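+
+The line that makes the jobscript invoke itself through GNU parallel is not shown in the excerpt above; schematically (a sketch, not necessarily the exact line used in the example) it can look like this:
+
+```bash
+# on the first invocation PARALLEL_SEQ is unset: load GNU parallel and re-run
+# this very jobscript once per tasklist entry, passing the entry as $1
+[ -z "$PARALLEL_SEQ" ] && { module add parallel ; exec parallel -a $PBS_O_WORKDIR/tasklist $0 ; }
+```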
 
 ### Submit the job
 
@@ -221,7 +221,7 @@ $ qsub -N JOBNAME jobscript
 12345.dm2
 ```
 
-In this example, we submit a job of 101 tasks. 24 input files will be processed in  parallel. The 101 tasks on 24 cores are assumed to complete in less than 2 hours.
+In this example, we submit a job of 101 tasks. 24 input files will be processed in  parallel. The 101 tasks on 24 cores are assumed to complete in less than 2 hours.
 
 Please note the #PBS directives at the beginning of the jobscript file; don't forget to set your valid PROJECT_ID and the desired queue.
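+
+A typical preamble might look like the following sketch (project, queue and walltime are placeholders to adjust):
+
+```bash
+#!/bin/bash
+#PBS -A OPEN-0-0            # your project ID
+#PBS -q qprod               # desired queue
+#PBS -l select=1:ncpus=24
+#PBS -l walltime=02:00:00
+```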
 
@@ -242,7 +242,7 @@ Combined approach, very similar to job arrays, can be taken. Job array is submit
 
 Example:
 
-Assume we have 992 input files with name beginning with "file" (e. g. file001, ..., file992). Assume we would like to use each of these input files with program executable myprog.x, each as a separate single core job. We call these single core jobs tasks.
+Assume we have 992 input files with names beginning with "file" (e. g. file001, ..., file992). Assume we would like to use each of these input files with the program executable myprog.x, each as a separate single core job. We call these single core jobs tasks.
 
 First, we create a tasklist file, listing all tasks - all input files in our example:
 
@@ -286,14 +286,14 @@ cat input > output
 cp output $PBS_O_WORKDIR/$TASK.out
 ```
 
-In this example, the jobscript executes in multiple instances in parallel, on all cores of a computing node.  Variable $TASK expands to one of the input filenames from tasklist. We copy the input file to local scratch, execute the myprog.x and copy the output file back to the submit directory, under the $TASK.out name. The numtasks file controls how many tasks will be run per subjob. Once an task is finished, new task starts, until the number of tasks  in numtasks file is reached.
+In this example, the jobscript executes in multiple instances in parallel, on all cores of a computing node. The variable $TASK expands to one of the input filenames from the tasklist. We copy the input file to local scratch, execute myprog.x and copy the output file back to the submit directory, under the $TASK.out name. The numtasks file controls how many tasks will be run per subjob. Once a task is finished, a new task starts, until the number of tasks in the numtasks file is reached.
 
 !!! Note "Note"
-	Select  subjob walltime and number of tasks per subjob  carefully
+	Select  subjob walltime and number of tasks per subjob  carefully
 
 When deciding these values, think about the following guiding rules:
 
-1.  Let n=N/24.  Inequality (n+1) * T &lt; W should hold. The N is number of tasks per subjob, T is expected single task walltime and W is subjob walltime. Short subjob walltime improves scheduling and job throughput.
+1.  Let n=N/24. The inequality (n+1) * T &lt; W should hold, where N is the number of tasks per subjob, T is the expected single-task walltime and W is the subjob walltime. A short subjob walltime improves scheduling and job throughput (a worked example follows the list).
 2.  The number of tasks should be a multiple of 24.
 3.  These rules are valid only when all tasks have similar task walltimes T.
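+
+A worked example of rule 1, with illustrative numbers only (N=48 tasks per subjob, T=20 minutes per task):
+
+```bash
+N=48 ; T=20
+n=$(( N / 24 ))                                    # n = 2
+echo "W must exceed $(( (n + 1) * T )) minutes"    # (2+1)*20 = 60 minutes
+```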
 
@@ -306,14 +306,14 @@ $ qsub -N JOBNAME -J 1-992:32 jobscript
 12345[].dm2
 ```
 
-In this example, we submit a job array of 31 subjobs. Note the  -J 1-992:**48**, this must be the same as the number sent to numtasks file. Each subjob will run on full node and process 24 input files in parallel, 48 in total per subjob.  Every subjob is assumed to complete in less than 2 hours.
+In this example, we submit a job array of 31 subjobs. Note the -J 1-992:**32**; this must be the same as the number written to the numtasks file. Each subjob will run on a full node and process 24 input files in parallel, 32 in total per subjob. Every subjob is assumed to complete in less than 2 hours.
 
 Please note the #PBS directives at the beginning of the jobscript file; don't forget to set your valid PROJECT_ID and the desired queue.
 
 Examples
 --------
 
-Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run huge number of jobs. We recommend to try out the examples, before using this for running production jobs.
+Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run a huge number of jobs. We recommend trying out the examples before using this approach for running production jobs.
 
 Unzip the archive in an empty directory on Anselm and follow the instructions in the README file
 
diff --git a/docs.it4i/salomon/compute-nodes.md b/docs.it4i/salomon/compute-nodes.md
index df903c1d6..af6b97086 100644
--- a/docs.it4i/salomon/compute-nodes.md
+++ b/docs.it4i/salomon/compute-nodes.md
@@ -11,8 +11,8 @@ Compute nodes with MIC accelerator **contains two Intel Xeon Phi 7120P accelerat
 ###Compute Nodes Without Accelerator
 
 -   codename "grafton"
--   576 nodes
--   13 824 cores in total
+-   576 nodes
+-   13 824 cores in total
 -   two Intel Xeon E5-2680v3, 12-core, 2.5 GHz processors per node
 -   128 GB of physical memory per node
 
@@ -21,11 +21,11 @@ Compute nodes with MIC accelerator **contains two Intel Xeon Phi 7120P accelerat
 ###Compute Nodes With MIC Accelerator
 
 -   codename "perrin"
--   432 nodes
--   10 368 cores in total
+-   432 nodes
+-   10 368 cores in total
 -   two Intel Xeon E5-2680v3, 12-core, 2.5 GHz processors per node
 -   128 GB of physical memory per node
--   MIC accelerator 2 x Intel Xeon Phi 7120P per node, 61-cores, 16 GB per accelerator
+-   MIC accelerators: 2 x Intel Xeon Phi 7120P per node, 61 cores, 16 GB per accelerator
 
 ![cn_mic](../img/cn_mic-1)
 
@@ -55,22 +55,22 @@ Compute nodes with MIC accelerator **contains two Intel Xeon Phi 7120P accelerat
 Processor Architecture
 ----------------------
 
-Salomon is equipped with Intel Xeon processors Intel Xeon E5-2680v3. Processors support Advanced Vector Extensions 2.0 (AVX2) 256-bit instruction set.
+Salomon is equipped with Intel Xeon E5-2680v3 processors. The processors support the Advanced Vector Extensions 2.0 (AVX2) 256-bit instruction set.
 
 ### Intel Xeon E5-2680v3 Processor
 
 -   12-core
--   speed: 2.5 GHz, up to 3.3 GHz using Turbo Boost Technology
+-   speed: 2.5 GHz, up to 3.3 GHz using Turbo Boost Technology
 -   peak performance:  19.2 GFLOP/s per core
 -   caches:
-    -    Intel® Smart Cache:  30 MB
+    -    Intel® Smart Cache:  30 MB
 -   memory bandwidth at the level of the processor: 68 GB/s
 
-### MIC Accelerator Intel Xeon Phi 7120P Processor
+### MIC Accelerator Intel Xeon Phi 7120P Processor
 
 -   61-core
--   speed:  1.238
-    GHz, up to 1.333 GHz using Turbo Boost Technology
+-   speed:  1.238
+    GHz, up to 1.333 GHz using Turbo Boost Technology
 -   peak performance:  18.4 GFLOP/s per core
 -   caches:
     -   L2:  30.5 MB
@@ -84,9 +84,9 @@ Memory is equally distributed across all CPUs and cores for optimal performance.
 
 -   2 sockets
 -   Memory Controllers are integrated into processors.
-    -   8 DDR4 DIMMs per node
-    -   4 DDR4 DIMMs per CPU
-    -   1 DDR4 DIMMs per channel
+    -   8 DDR4 DIMMs per node
+    -   4 DDR4 DIMMs per CPU
+    -   1 DDR4 DIMM per channel
 -   Populated memory: 8 x 16 GB DDR4 DIMM >2133 MHz
 
 ### Compute Node With MIC Accelerator
@@ -94,9 +94,9 @@ Memory is equally distributed across all CPUs and cores for optimal performance.
 2 sockets
 Memory Controllers are integrated into processors.
 
--   8 DDR4 DIMMs per node
--   4 DDR4 DIMMs per CPU
--   1 DDR4 DIMMs per channel
+-   8 DDR4 DIMMs per node
+-   4 DDR4 DIMMs per CPU
+-   1 DDR4 DIMM per channel
 
 Populated memory: 8 x 16 GB DDR4 DIMM 2133 MHz
 MIC Accelerator Intel Xeon Phi 7120P Processor
@@ -105,5 +105,5 @@ MIC Accelerator Intel Xeon Phi 7120P Processor
 -   Memory Controllers are connected via an
     Interprocessor Network (IPN) ring.
     -   16 GDDR5 DIMMs per node
-    -   8 GDDR5 DIMMs per CPU
-    -   2 GDDR5 DIMMs per channel
+    -   8 GDDR5 DIMMs per CPU
+    -   2 GDDR5 DIMMs per channel
diff --git a/docs.it4i/salomon/environment-and-modules.md b/docs.it4i/salomon/environment-and-modules.md
index 8f6fb590f..b6ae85042 100644
--- a/docs.it4i/salomon/environment-and-modules.md
+++ b/docs.it4i/salomon/environment-and-modules.md
@@ -25,16 +25,16 @@ fi
 ```
 
 !!! Note "Note"
-	Do not run commands outputting to standard output (echo, module list, etc) in .bashrc  for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Take care for SSH session interactivity for such commands as stated in the previous example.
+	Do not run commands outputting to standard output (echo, module list, etc) in .bashrc  for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Take care for SSH session interactivity for such commands as stated in the previous example.
 
 How to use modules, with examples:
 <tty-player controls src=/src/salomon/modules_salomon.ttyrec></tty-player>
 
 ### Application Modules
 
-In order to configure your shell for  running particular application on Salomon we use Module package interface.
+In order to configure your shell for running a particular application on Salomon, we use the Module package interface.
 
-Application modules on Salomon cluster are built using [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). The modules are divided into the following structure:
+Application modules on Salomon cluster are built using [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). The modules are divided into the following structure:
 
 ```bash
  base: Default module class
@@ -71,7 +71,7 @@ To check available modules use
 $ module avail
 ```
 
-To load a module, for example the Open MPI module  use
+To load a module, for example the Open MPI module, use
 
 ```bash
 $ module load OpenMPI
@@ -101,13 +101,13 @@ $ man module
 
 As we wrote earlier, we are using EasyBuild for automatized software installation and module creation.
 
-EasyBuild employs so-called **compiler toolchains** or, simply toolchains for short, which are a major concept in handling the build and installation processes.
+EasyBuild employs so-called **compiler toolchains** or, simply toolchains for short, which are a major concept in handling the build and installation processes.
 
 A typical toolchain consists of one or more compilers, usually put together with some libraries for specific functionality, e.g., for using an MPI stack for distributed computing, or which provide optimized routines for commonly used math operations, e.g., the well-known BLAS/LAPACK APIs for linear algebra routines.
 
 For each software package being built, the toolchain to be used must be specified in some way.
 
-The EasyBuild framework prepares the build environment for the different toolchain components, by loading their respective modules and defining environment variables to specify compiler commands (e.g., via `$F90`), compiler and linker options (e.g., via `$CFLAGS` and `$LDFLAGS`), the list of library names to supply to the linker (via `$LIBS`), etc. This enables making easyblocks largely toolchain-agnostic since they can simply rely on these environment variables; that is, unless they need to be aware of, for example, the particular compiler being used to determine the build configuration options.
+The EasyBuild framework prepares the build environment for the different toolchain components, by loading their respective modules and defining environment variables to specify compiler commands (e.g., via `$F90`), compiler and linker options (e.g., via `$CFLAGS` and `$LDFLAGS`), the list of library names to supply to the linker (via `$LIBS`), etc. This enables making easyblocks largely toolchain-agnostic since they can simply rely on these environment variables; that is, unless they need to be aware of, for example, the particular compiler being used to determine the build configuration options.
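+
+As a generic illustration (not an IT4I-specific recipe; solver.c is a placeholder source file and $CC is assumed to be set by the toolchain alongside the variables named above), a build step can stay toolchain-agnostic by relying purely on these variables:
+
+```bash
+# compile and link with whatever compiler and flags the loaded toolchain provides
+$CC $CFLAGS -c solver.c
+$CC solver.o $LDFLAGS $LIBS -o solver
+```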
 
 Recent releases of EasyBuild include out-of-the-box toolchain support for:
 
@@ -121,7 +121,7 @@ On Salomon, we have currently following toolchains installed:
   |---|----|
   |GCC|GCC|
   |ictce|icc, ifort, imkl, impi|
-  |intel|GCC, icc, ifort, imkl, impi|
+  |intel|GCC, icc, ifort, imkl, impi|
   |gompi|GCC, OpenMPI|
   |goolf|BLACS, FFTW, GCC, OpenBLAS, OpenMPI, ScaLAPACK|
   |iompi|OpenMPI, icc, ifort|
diff --git a/docs.it4i/salomon/hardware-overview.md b/docs.it4i/salomon/hardware-overview.md
index 990312f7c..a7465b809 100644
--- a/docs.it4i/salomon/hardware-overview.md
+++ b/docs.it4i/salomon/hardware-overview.md
@@ -3,7 +3,7 @@ Hardware Overview
 
 Introduction
 ------------
-The Salomon cluster consists of 1008 computational nodes of which 576 are regular compute nodes and 432 accelerated nodes. Each node is a  powerful x86-64 computer, equipped with 24 cores (two twelve-core Intel Xeon processors) and 128 GB RAM. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 0.5 PB /home NFS disk storage to store the user files. Users may use a DDN Lustre shared storage with capacity of 1.69 PB which is available for the scratch project data. The user access to the Salomon cluster is provided by four login nodes.
+The Salomon cluster consists of 1008 computational nodes of which 576 are regular compute nodes and 432 accelerated nodes. Each node is a  powerful x86-64 computer, equipped with 24 cores (two twelve-core Intel Xeon processors) and 128 GB RAM. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 0.5 PB /home NFS disk storage to store the user files. Users may use a DDN Lustre shared storage with capacity of 1.69 PB which is available for the scratch project data. The user access to the Salomon cluster is provided by four login nodes.
 
 [More about schematic representation of the Salomon cluster compute nodes IB topology](ib-single-plane-topology/).
 
@@ -28,7 +28,7 @@ General information
 |w/o accelerator|576|
 |MIC accelerated|432|
 |**In total**||
-|Total theoretical peak performance  (Rpeak)|2011 TFLOP/s|
+|Total theoretical peak performance  (Rpeak)|2011 TFLOP/s|
 |Total amount of RAM|129.024 TB|
 
 Compute nodes
diff --git a/docs.it4i/salomon/ib-single-plane-topology.md b/docs.it4i/salomon/ib-single-plane-topology.md
index 34c43034e..8bb8d0d02 100644
--- a/docs.it4i/salomon/ib-single-plane-topology.md
+++ b/docs.it4i/salomon/ib-single-plane-topology.md
@@ -11,7 +11,7 @@ The SGI ICE X IB Premium Blade provides the first level of interconnection via d
 
 ###IB single-plane topology - ICEX M-Cell
 
-Each color in each physical IRU represents one dual-switch ASIC switch.
+Each color in each physical IRU represents one dual-switch ASIC switch.
 
 [IB single-plane topology - ICEX Mcell.pdf](../src/IB single-plane topology - ICEX Mcell.pdf)
 
diff --git a/docs.it4i/salomon/introduction.md b/docs.it4i/salomon/introduction.md
index cd71cd664..87950f422 100644
--- a/docs.it4i/salomon/introduction.md
+++ b/docs.it4i/salomon/introduction.md
@@ -1,7 +1,7 @@
 Introduction
 ============
 
-Welcome to Salomon supercomputer cluster. The Salomon cluster consists of 1008 compute nodes, totaling 24192 compute cores with 129 TB RAM and giving over 2 Pflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 24 cores, at least 128 GB RAM. Nodes are interconnected by 7D Enhanced hypercube InfiniBand network and equipped with Intel Xeon E5-2680v3 processors. The Salomon cluster consists of 576 nodes without accelerators and 432 nodes equipped with Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).
+Welcome to Salomon supercomputer cluster. The Salomon cluster consists of 1008 compute nodes, totaling 24192 compute cores with 129 TB RAM and giving over 2 Pflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 24 cores, at least 128 GB RAM. Nodes are interconnected by 7D Enhanced hypercube InfiniBand network and equipped with Intel Xeon E5-2680v3 processors. The Salomon cluster consists of 576 nodes without accelerators and 432 nodes equipped with Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).
 
 The cluster runs [CentOS Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html) operating system, which is compatible with the  RedHat [ Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg)
 
diff --git a/docs.it4i/salomon/job-priority.md b/docs.it4i/salomon/job-priority.md
index bb7d3e6b4..97bc6968c 100644
--- a/docs.it4i/salomon/job-priority.md
+++ b/docs.it4i/salomon/job-priority.md
@@ -1,11 +1,11 @@
 Job scheduling
 ==============
 
-Job execution priority
+Job execution priority
 ----------------------
 Scheduler gives each job an execution priority and then uses this job execution priority to select which job(s) to run.
 
-Job execution priority is determined by these job properties (in order of importance):
+Job execution priority is determined by these job properties (in order of importance):
 
 1.  queue priority
 2.  fair-share priority
@@ -15,7 +15,7 @@ Job execution priority is determined by these job properties (in order of impor
 
 Queue priority is priority of queue where job is queued before execution.
 
-Queue priority has the biggest impact on job execution priority. Execution priority of jobs in higher priority queues is always greater than execution priority of jobs in lower priority queues. Other properties of job used for determining job execution priority (fair-share priority, eligible time) cannot compete with queue priority.
+Queue priority has the biggest impact on job execution priority. Execution priority of jobs in higher priority queues is always greater than execution priority of jobs in lower priority queues. Other properties of job used for determining job execution priority (fair-share priority, eligible time) cannot compete with queue priority.
 
 Queue priorities can be seen at <https://extranet.it4i.cz/rsweb/salomon/queues>
 
@@ -48,7 +48,7 @@ Eligible time is amount (in seconds) of eligible time job accrued while waiting
 
 Eligible time has the least impact on execution priority. Eligible time is used for sorting jobs with equal queue priority and fair-share priority. It is very, very difficult for eligible time to compete with fair-share priority.
 
-Eligible time can be seen as eligible_time attribute of job.
+Eligible time can be seen as the eligible_time attribute of a job.
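+
+For instance, it can be inspected with qstat (the job ID is a placeholder):
+
+```bash
+$ qstat -f 12345.dm2 | grep eligible_time
+```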
 
 ### Formula
 
diff --git a/docs.it4i/salomon/job-submission-and-execution.md b/docs.it4i/salomon/job-submission-and-execution.md
index 3f81a3728..23f97bb9b 100644
--- a/docs.it4i/salomon/job-submission-and-execution.md
+++ b/docs.it4i/salomon/job-submission-and-execution.md
@@ -44,13 +44,13 @@ In this example, we allocate 4 nodes, 24 cores per node, for 1 hour. We allocate
 $ qsub -A OPEN-0-0 -q qlong -l select=10:ncpus=24 ./myjob
 ```
 
-In this example, we allocate 10 nodes, 24 cores per node, for  72 hours. We allocate these resources via the qlong queue. Jobscript myjob will be executed on the first node in the allocation.
+In this example, we allocate 10 nodes, 24 cores per node, for  72 hours. We allocate these resources via the qlong queue. Jobscript myjob will be executed on the first node in the allocation.
 
 ```bash
 $ qsub -A OPEN-0-0 -q qfree -l select=10:ncpus=24 ./myjob
 ```
 
-In this example, we allocate 10  nodes, 24 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. Jobscript myjob will be executed on the first node in the allocation.
+In this example, we allocate 10  nodes, 24 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. Jobscript myjob will be executed on the first node in the allocation.
 
 ### Intel Xeon Phi co-processors
 
@@ -77,13 +77,13 @@ In this example, we allocate 4 nodes, with 24 cores per node (totalling 96 cores
     Per NUMA node allocation.
     Jobs are isolated by cpusets.
 
-The UV2000 (node uv1) offers 3328GB of RAM and 112 cores, distributed in 14 NUMA nodes. A NUMA node packs 8 cores and approx. 236GB RAM. In the PBS  the UV2000 provides 14 chunks, a chunk per NUMA node (see [Resource allocation policy](resources-allocation-policy/)). The jobs on UV2000 are isolated from each other by cpusets, so that a job by one user may not utilize CPU or memory allocated to a job by other user. Always, full chunks are allocated, a job may only use resources of  the NUMA nodes allocated to itself.
+The UV2000 (node uv1) offers 3328 GB of RAM and 112 cores, distributed in 14 NUMA nodes. A NUMA node packs 8 cores and approx. 236 GB RAM. In PBS, the UV2000 provides 14 chunks, a chunk per NUMA node (see [Resource allocation policy](resources-allocation-policy/)). The jobs on UV2000 are isolated from each other by cpusets, so that a job of one user may not utilize CPU or memory allocated to a job of another user. Full chunks are always allocated; a job may only use resources of the NUMA nodes allocated to itself.
 
 ```bash
- $ qsub -A OPEN-0-0 -q qfat -l select=14 ./myjob
+ $ qsub -A OPEN-0-0 -q qfat -l select=14 ./myjob
 ```
 
-In this example, we allocate all 14 NUMA nodes (corresponds to 14 chunks), 112 cores of the SGI UV2000 node  for 72 hours. Jobscript myjob will be executed on the node uv1.
+In this example, we allocate all 14 NUMA nodes (corresponds to 14 chunks), 112 cores of the SGI UV2000 node  for 72 hours. Jobscript myjob will be executed on the node uv1.
 
 ```bash
 $ qsub -A OPEN-0-0 -q qfat -l select=1:mem=2000GB ./myjob
@@ -125,7 +125,7 @@ Specific nodes may be selected using PBS resource attribute cname (for short nam
 qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=24:host=cns680+1:ncpus=24:host=cns681 -I
 ```
 
-In this example, we allocate nodes r24u35n680 and r24u36n681, all 24 cores per node, for 24 hours.  Consumed resources will be accounted to the Project identified by Project ID OPEN-0-0. The resources will be available interactively.
+In this example, we allocate nodes r24u35n680 and r24u36n681, all 24 cores per node, for 24 hours.  Consumed resources will be accounted to the Project identified by Project ID OPEN-0-0. The resources will be available interactively.
 
 ### Placement by network location
 
@@ -306,8 +306,8 @@ $ check-pbs-jobs --jobid 35141.dm2 --print-job-out
 JOB 35141.dm2, session_id 71995, user user2, nodes r3i6n2,r3i6n3
 Print job standard output:
 ======================== Job start  ==========================
-Started at    : Fri Aug 30 02:47:53 CEST 2013
-Script name   : script
+Started at    : Fri Aug 30 02:47:53 CEST 2013
+Script name   : script
 Run loop 1
 Run loop 2
 Run loop 3
@@ -361,7 +361,7 @@ Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
    r21u01n577/0*24+r21u02n578/0*24+r21u03n579/0*24+r21u04n580/0*24
 ```
 
-In this example, the nodes r21u01n577, r21u02n578, r21u03n579, r21u04n580 were allocated for 1 hour via the qexp queue. The jobscript myjob will be executed on the node r21u01n577, while the nodes r21u02n578, r21u03n579, r21u04n580 are available for use as well.
+In this example, the nodes r21u01n577, r21u02n578, r21u03n579, r21u04n580 were allocated for 1 hour via the qexp queue. The jobscript myjob will be executed on the node r21u01n577, while the nodes r21u02n578, r21u03n579, r21u04n580 are available for use as well.
 
 !!! Note "Note"
 	The jobscript or interactive shell is by default executed in home directory
@@ -378,7 +378,7 @@ $ pwd
 In this example, 4 nodes were allocated interactively for 1 hour via the qexp queue. The interactive shell is executed in the home directory.
 
 !!! Note "Note"
-	All nodes within the allocation may be accessed via ssh.  Unallocated nodes are not accessible to user.
+	All nodes within the allocation may be accessed via ssh.  Unallocated nodes are not accessible to user.
 
 The allocated nodes are accessible via ssh from login nodes. The nodes may access each other via ssh as well.
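+
+For example, from within a job one may list the allocated nodes and log in to any of them (the node name below is taken from such a listing):
+
+```bash
+$ sort -u $PBS_NODEFILE     # list the nodes allocated to this job
+$ ssh r21u01n577            # log in to one of the listed nodes
+```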
 
diff --git a/docs.it4i/salomon/network.md b/docs.it4i/salomon/network.md
index d634a9e24..9fbf17d25 100644
--- a/docs.it4i/salomon/network.md
+++ b/docs.it4i/salomon/network.md
@@ -1,12 +1,12 @@
 Network
 =======
 
-All compute and login nodes of Salomon are interconnected by 7D Enhanced hypercube [InfiniBand](http://en.wikipedia.org/wiki/InfiniBand) network and by Gigabit [Ethernet](http://en.wikipedia.org/wiki/Ethernet)
+All compute and login nodes of Salomon are interconnected by 7D Enhanced hypercube [InfiniBand](http://en.wikipedia.org/wiki/InfiniBand) network and by Gigabit [Ethernet](http://en.wikipedia.org/wiki/Ethernet)
 network. Only [InfiniBand](http://en.wikipedia.org/wiki/InfiniBand) network may be used to transfer user data.
 
 InfiniBand Network
 ------------------
-All compute and login nodes of Salomon are interconnected by 7D Enhanced hypercube [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) network (56 Gbps). The network topology is a [7D Enhanced hypercube](7d-enhanced-hypercube/).
+All compute and login nodes of Salomon are interconnected by 7D Enhanced hypercube [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) network (56 Gbps). The network topology is a [7D Enhanced hypercube](7d-enhanced-hypercube/).
 
 Read more about schematic representation of the Salomon cluster [IB single-plain topology](ib-single-plane-topology/)
 ([hypercube dimension](7d-enhanced-hypercube/) 0).
@@ -35,7 +35,7 @@ $ ssh 10.17.35.19
 ```
 
 In this example, we  get
-information of the Infiniband network.
+information of the Infiniband network.
 
 ```bash
 $ ifconfig
diff --git a/docs.it4i/salomon/prace.md b/docs.it4i/salomon/prace.md
index 7ceb1b805..f64d7a31b 100644
--- a/docs.it4i/salomon/prace.md
+++ b/docs.it4i/salomon/prace.md
@@ -251,15 +251,15 @@ PRACE users should check their project accounting using the [PRACE Accounting To
 Users who have undergone the full local registration procedure (including signing the IT4Innovations Acceptable Use Policy) and who have received local password may check at any time, how many core-hours have been consumed by themselves and their projects using the command "it4ifree". Please note that you need to know your user password to use the command and that the displayed core hours are "system core hours" which differ from PRACE "standardized core hours".
 
 !!! Note "Note"
-	The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients>
+	The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients>
 
 ```bash
     $ it4ifree
     Password:
-         PID    Total   Used   ...by me Free
-       -------- ------- ------ -------- -------
-       OPEN-0-0 1500000 400644   225265 1099356
-       DD-13-1    10000   2606     2606    7394
+         PID    Total   Used   ...by me Free
+       -------- ------- ------ -------- -------
+       OPEN-0-0 1500000 400644   225265 1099356
+       DD-13-1    10000   2606     2606    7394
 ```
 
 By default file system quota is applied. To check the current status of the quota (separate for HOME and SCRATCH) use
diff --git a/docs.it4i/salomon/resources-allocation-policy.md b/docs.it4i/salomon/resources-allocation-policy.md
index ae1e38e57..dd5f736ad 100644
--- a/docs.it4i/salomon/resources-allocation-policy.md
+++ b/docs.it4i/salomon/resources-allocation-policy.md
@@ -30,11 +30,11 @@ The resources are allocated to the job in a fair-share fashion, subject to const
 - **qviz**, the Visualization queue: Intended for pre-/post-processing using OpenGL accelerated graphics. Currently when accessing the node, each user gets 4 cores of a CPU allocated, thus approximately 73 GB of RAM and 1/7 of the GPU capacity (default "chunk"). If more GPU power or RAM is required, it is recommended to allocate more chunks (with 4 cores each) up to one whole node per user, so that all 28 cores, 512 GB RAM and whole GPU is exclusive. This is currently also the maximum allowed allocation per one user. One hour of work is allocated by default, the user may ask for 2 hours maximum.
 
 !!! Note "Note"
-	To access node with Xeon Phi co-processor user needs to specify that in [job submission select statement](job-submission-and-execution/).
+	To access node with Xeon Phi co-processor user needs to specify that in [job submission select statement](job-submission-and-execution/).
 
 ### Notes
 
-The job wall clock time defaults to **half the maximum time**, see table above. Longer wall time limits can be  [set manually, see examples](job-submission-and-execution/).
+The job wall clock time defaults to **half the maximum time**, see table above. Longer wall time limits can be  [set manually, see examples](job-submission-and-execution/).
 
 Jobs that exceed the reserved wall clock time (Req'd Time) get killed automatically. Wall clock time limit can be changed for queuing jobs (state Q) using the qalter command, however can not be changed for a running job (state R).
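+
+For example, to extend the wall clock time limit of a queued job (the job ID is a placeholder):
+
+```bash
+$ qalter -l walltime=48:00:00 12345.dm2
+```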
 
@@ -60,56 +60,56 @@ $ rspbs
 Usage: rspbs [options]
 
 Options:
-  --version             show program's version number and exit
-  -h, --help            show this help message and exit
-  --get-server-details  Print server
-  --get-queues          Print queues
-  --get-queues-details  Print queues details
-  --get-reservations    Print reservations
-  --get-reservations-details
-                        Print reservations details
-  --get-nodes           Print nodes of PBS complex
-  --get-nodeset         Print nodeset of PBS complex
-  --get-nodes-details   Print nodes details
-  --get-jobs            Print jobs
-  --get-jobs-details    Print jobs details
-  --get-jobs-check-params
-                        Print jobid, job state, session_id, user, nodes
-  --get-users           Print users of jobs
-  --get-allocated-nodes
-                        Print allocated nodes of jobs
-  --get-allocated-nodeset
-                        Print allocated nodeset of jobs
-  --get-node-users      Print node users
-  --get-node-jobs       Print node jobs
-  --get-node-ncpus      Print number of ncpus per node
-  --get-node-allocated-ncpus
-                        Print number of allocated ncpus per node
-  --get-node-qlist      Print node qlist
-  --get-node-ibswitch   Print node ibswitch
-  --get-user-nodes      Print user nodes
-  --get-user-nodeset    Print user nodeset
-  --get-user-jobs       Print user jobs
-  --get-user-jobc       Print number of jobs per user
-  --get-user-nodec      Print number of allocated nodes per user
-  --get-user-ncpus      Print number of allocated ncpus per user
-  --get-qlist-nodes     Print qlist nodes
-  --get-qlist-nodeset   Print qlist nodeset
-  --get-ibswitch-nodes  Print ibswitch nodes
-  --get-ibswitch-nodeset
-                        Print ibswitch nodeset
-  --summary             Print summary
-  --get-node-ncpu-chart
-                        Obsolete. Print chart of allocated ncpus per node
-  --server=SERVER       Use given PBS server
-  --state=STATE         Only for given job state
-  --jobid=JOBID         Only for given job ID
-  --user=USER           Only for given user
-  --node=NODE           Only for given node
-  --nodestate=NODESTATE
-                        Only for given node state (affects only --get-node*
-                        --get-qlist-* --get-ibswitch-* actions)
-  --incl-finished       Include finished jobs
+  --version             show program's version number and exit
+  -h, --help            show this help message and exit
+  --get-server-details  Print server
+  --get-queues          Print queues
+  --get-queues-details  Print queues details
+  --get-reservations    Print reservations
+  --get-reservations-details
+                        Print reservations details
+  --get-nodes           Print nodes of PBS complex
+  --get-nodeset         Print nodeset of PBS complex
+  --get-nodes-details   Print nodes details
+  --get-jobs            Print jobs
+  --get-jobs-details    Print jobs details
+  --get-jobs-check-params
+                        Print jobid, job state, session_id, user, nodes
+  --get-users           Print users of jobs
+  --get-allocated-nodes
+                        Print allocated nodes of jobs
+  --get-allocated-nodeset
+                        Print allocated nodeset of jobs
+  --get-node-users      Print node users
+  --get-node-jobs       Print node jobs
+  --get-node-ncpus      Print number of ncpus per node
+  --get-node-allocated-ncpus
+                        Print number of allocated ncpus per node
+  --get-node-qlist      Print node qlist
+  --get-node-ibswitch   Print node ibswitch
+  --get-user-nodes      Print user nodes
+  --get-user-nodeset    Print user nodeset
+  --get-user-jobs       Print user jobs
+  --get-user-jobc       Print number of jobs per user
+  --get-user-nodec      Print number of allocated nodes per user
+  --get-user-ncpus      Print number of allocated ncpus per user
+  --get-qlist-nodes     Print qlist nodes
+  --get-qlist-nodeset   Print qlist nodeset
+  --get-ibswitch-nodes  Print ibswitch nodes
+  --get-ibswitch-nodeset
+                        Print ibswitch nodeset
+  --summary             Print summary
+  --get-node-ncpu-chart
+                        Obsolete. Print chart of allocated ncpus per node
+  --server=SERVER       Use given PBS server
+  --state=STATE         Only for given job state
+  --jobid=JOBID         Only for given job ID
+  --user=USER           Only for given user
+  --node=NODE           Only for given node
+  --nodestate=NODESTATE
+                        Only for given node state (affects only --get-node*
+                        --get-qlist-* --get-ibswitch-* actions)
+  --incl-finished       Include finished jobs
 ```
 
 Resources Accounting Policy
@@ -129,8 +129,8 @@ User may check at any time, how many core-hours have been consumed by himself/he
 ```bash
 $ it4ifree
 Password:
-     PID    Total   Used   ...by me Free
-   -------- ------- ------ -------- -------
-   OPEN-0-0 1500000 400644   225265 1099356
-   DD-13-1    10000   2606     2606    7394
+     PID    Total   Used   ...by me Free
+   -------- ------- ------ -------- -------
+   OPEN-0-0 1500000 400644   225265 1099356
+   DD-13-1    10000   2606     2606    7394
 ```
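The Free column is simply Total minus Used; for example, for the OPEN-0-0 project shown above:

```bash
$ echo $((1500000 - 400644))   # Free core-hours of OPEN-0-0 = Total - Used
1099356
```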
diff --git a/docs.it4i/salomon/shell-and-data-access.md b/docs.it4i/salomon/shell-and-data-access.md
index f0b470ca1..b214bd5f1 100644
--- a/docs.it4i/salomon/shell-and-data-access.md
+++ b/docs.it4i/salomon/shell-and-data-access.md
@@ -6,7 +6,7 @@ Shell Access
 The Salomon cluster is accessed by SSH protocol via login nodes login1, login2, login3 and login4 at address salomon.it4i.cz. The login nodes may be addressed specifically, by prepending the login node name to the address.
 
 !!! Note "Note"
-	The alias salomon.it4i.cz is currently not available through VPN connection. Please use loginX.salomon.it4i.cz when connected to VPN.
+	The alias salomon.it4i.cz is currently not available through VPN connection. Please use loginX.salomon.it4i.cz when connected to VPN.
 
   |Login address|Port|Protocol|Login node|
   |---|---|---|---|
@@ -20,8 +20,8 @@ The authentication is by the [private key](../get-started-with-it4innovations/ac
 
 !!! Note "Note"
 	Please verify SSH fingerprints during the first logon. They are identical on all login nodes:
-	f6:28:98:e4:f9:b2:a6:8f:f2:f4:2d:0a:09:67:69:80 (DSA)
-	70:01:c9:9a:5d:88:91:c7:1b:c0:84:d1:fa:4e:83:5c (RSA)
+	f6:28:98:e4:f9:b2:a6:8f:f2:f4:2d:0a:09:67:69:80 (DSA)
+	70:01:c9:9a:5d:88:91:c7:1b:c0:84:d1:fa:4e:83:5c (RSA)
 
 Private key authentication:
 
@@ -136,13 +136,13 @@ Port forwarding
 
 It works by tunneling the connection from Salomon back to users workstation and forwarding from the workstation to the remote host.
 
-Pick some unused port on Salomon login node  (for example 6000) and establish the port forwarding:
+Pick some unused port on the Salomon login node (for example 6000) and establish the port forwarding:
 
 ```bash
 local $ ssh -R 6000:remote.host.com:1234 salomon.it4i.cz
 ```
 
-In this example, we establish port forwarding between port 6000 on Salomon and  port 1234 on the remote.host.com. By accessing localhost:6000 on Salomon, an application will see response of remote.host.com:1234. The traffic will run via users local workstation.
+In this example, we establish port forwarding between port 6000 on Salomon and port 1234 on remote.host.com. By accessing localhost:6000 on Salomon, an application will see the response of remote.host.com:1234. The traffic will run via the user's local workstation.
 
 Port forwarding may be done **using PuTTY** as well. On the PuTTY Configuration screen, load your Salomon configuration first. Then go to Connection-&gt;SSH-&gt;Tunnels to set up the port forwarding. Click Remote radio button. Insert 6000 to Source port textbox. Insert remote.host.com:1234. Click Add button, then Open.
 
@@ -163,7 +163,7 @@ First, establish the remote port forwarding form the login node, as [described a
 Second, invoke port forwarding from the compute node to the login node. Insert following line into your jobscript or interactive shell
 
 ```bash
-$ ssh  -TN -f -L 6000:localhost:6000 login1
+$ ssh -TN -f -L 6000:localhost:6000 login1
 ```
 
In this example, we assume that port forwarding from login1:6000 to remote.host.com:1234 has been established beforehand. By accessing localhost:6000, an application running on a compute node will see the response of remote.host.com:1234.
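A minimal jobscript fragment putting both steps together might look as follows; the application name and its option are illustrative only:

```bash
# Forward port 6000 on the compute node to port 6000 on login1,
# where forwarding to remote.host.com:1234 was established beforehand
ssh -TN -f -L 6000:localhost:6000 login1

# A hypothetical application now reaches remote.host.com:1234 via localhost:6000
./my_application --server localhost:6000
```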
@@ -189,7 +189,7 @@ Once the proxy server is running, establish ssh port forwarding from Salomon to
 local $ ssh -R 6000:localhost:1080 salomon.it4i.cz
 ```
 
-Now, configure the applications proxy settings to **localhost:6000**. Use port forwarding  to access the [proxy server from compute nodes](#port-forwarding-from-compute-nodes) as well.
+Now, configure the application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](#port-forwarding-from-compute-nodes) as well.
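For instance, a command line tool can be pointed at the forwarded SOCKS proxy as in this sketch (curl is used purely as an illustration):

```bash
# Route an HTTP request through the SOCKS proxy forwarded to localhost:6000
$ curl --socks5 localhost:6000 http://remote.host.com/
```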
 
 Graphical User Interface
 ------------------------
diff --git a/docs.it4i/salomon/software/ansys/ansys-cfx.md b/docs.it4i/salomon/software/ansys/ansys-cfx.md
index 93dfb1d2b..9bd7ced93 100644
--- a/docs.it4i/salomon/software/ansys/ansys-cfx.md
+++ b/docs.it4i/salomon/software/ansys/ansys-cfx.md
@@ -48,9 +48,9 @@ echo Machines: $hl
 /ansys_inc/v145/CFX/bin/cfx5solve -def input.def -size 4 -size-ni 4x -part-large -start-method "Platform MPI Distributed Parallel" -par-dist $hl -P aa_r
 ```
 
-Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). SVS FEM recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+The header of the PBS file (above) is common; its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). SVS FEM recommends requesting resources with the keywords nodes and ppn. These keywords directly specify the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the script also assumes this structure of allocated resources.
 
-Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified. >Input file has to be defined by common CFX def file which is attached to the cfx solver via parameter -def
+The working directory has to be created before the PBS job is submitted to the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file has to be a common CFX .def file, which is passed to the CFX solver via the -def parameter.
 
-**License** should be selected by parameter -P (Big letter **P**). Licensed products are the following: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics)-**Commercial**.
+The **license** should be selected by the -P parameter (capital **P**). The licensed products are: aa_r (ANSYS **Academic** Research) and ane3fl (ANSYS Multiphysics) - **Commercial**.
 [More about licensing here](licensing/)
diff --git a/docs.it4i/salomon/software/ansys/ansys-fluent.md b/docs.it4i/salomon/software/ansys/ansys-fluent.md
index e9812bddb..4d8aa0035 100644
--- a/docs.it4i/salomon/software/ansys/ansys-fluent.md
+++ b/docs.it4i/salomon/software/ansys/ansys-fluent.md
@@ -39,9 +39,9 @@ NCORES=`wc -l $PBS_NODEFILE |awk '{print $1}'`
 /ansys_inc/v145/fluent/bin/fluent 3d -t$NCORES -cnf=$PBS_NODEFILE -g -i fluent.jou
 ```
 
-Header of the pbs file (above) is common and description can be find on [this site](../../resources-allocation-policy/). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+The header of the PBS file (above) is common; its description can be found on [this site](../../resources-allocation-policy/). [SVS FEM](http://www.svsfem.cz) recommends requesting resources with the keywords nodes and ppn. These keywords directly specify the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the script also assumes this structure of allocated resources.
 
-Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified. Input file has to be defined by common Fluent journal file which is attached to the Fluent solver via parameter -i fluent.jou
+The working directory has to be created before the PBS job is submitted to the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file has to be a common Fluent journal file, which is passed to the Fluent solver via the parameter -i fluent.jou.
 
A journal file defining the input geometry, the boundary conditions and the solution process has, for example, the following structure:
 
@@ -64,11 +64,11 @@ The appropriate dimension of the problem has to be set by parameter (2d/3d).
 fluent solver_version [FLUENT_options] -i journal_file -pbs
 ```
 
-This syntax will start the ANSYS FLUENT job under PBS Professional using the  qsub command in a batch manner. When resources are available, PBS Professional will start the job and return a job ID, usually in the form of *job_ID.hostname*. This job ID can then be used to query, control, or stop the job using standard PBS Professional commands, such as  qstat or qdel. The job will be run out of the current working directory, and all output will be written to the file fluent.o *job_ID*.
+This syntax will start the ANSYS FLUENT job under PBS Professional using the qsub command in a batch manner. When resources are available, PBS Professional will start the job and return a job ID, usually in the form of *job_ID.hostname*. This job ID can then be used to query, control, or stop the job using standard PBS Professional commands, such as qstat or qdel. The job will be run out of the current working directory, and all output will be written to the file fluent.o *job_ID*.
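For example, a 3d case driven by the journal file from the previous section could be submitted this way (a sketch following the syntax above):

```bash
$ fluent 3d -i fluent.jou -pbs
```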
 
 3. Running Fluent via user's config file
 ----------------------------------------
-The sample script uses a configuration file called pbs_fluent.conf  if no command line arguments are present. This configuration file should be present in the directory from which the jobs are submitted (which is also the directory in which the jobs are executed). The following is an example of what the content of  pbs_fluent.conf can be:
+The sample script uses a configuration file called pbs_fluent.conf if no command line arguments are present. This configuration file should be present in the directory from which the jobs are submitted (which is also the directory in which the jobs are executed). The following is an example of what the content of pbs_fluent.conf can be:
 
 ```bash
 input="example_small.flin"
@@ -82,9 +82,9 @@ The following is an explanation of the parameters:
 
 input is the name of the input file.
 
-case is the name of the .cas file that the input file will utilize.
+case is the name of the .cas file that the input file will utilize.
 
-fluent_args are extra ANSYS FLUENT arguments. As shown in the previous example, you can specify the interconnect by using the  -p interconnect command. The available interconnects include ethernet (the default), myrinet, infiniband,  vendor, altix, and crayx. The MPI is selected automatically, based on the specified interconnect.
+fluent_args are extra ANSYS FLUENT arguments. As shown in the previous example, you can specify the interconnect by using the -p interconnect command. The available interconnects include ethernet (the default), myrinet, infiniband, vendor, altix, and crayx. The MPI is selected automatically, based on the specified interconnect.
 
 outfile is the name of the file to which the standard output will be sent.
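A complete pbs_fluent.conf may therefore look like the following sketch; all values except input are illustrative:

```bash
input="example_small.flin"
case="example_small.cas"        # illustrative .cas file name
fluent_args="3d -p infiniband"  # dimension and interconnect passed to Fluent
outfile="fluent_run.out"        # illustrative output file name
```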
 
diff --git a/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md b/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md
index 24a16a848..c2ce77786 100644
--- a/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md
+++ b/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md
@@ -1,7 +1,7 @@
 ANSYS LS-DYNA
 =============
 
-**[ANSYSLS-DYNA](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/Explicit+Dynamics/ANSYS+LS-DYNA)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern  graphical user environment.
+**[ANSYS LS-DYNA](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/Explicit+Dynamics/ANSYS+LS-DYNA)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able to take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern graphical user environment.
 
 To run ANSYS LS-DYNA in batch mode you can utilize/modify the default ansysdyna.pbs script and execute it via the qsub command.
 
@@ -51,6 +51,6 @@ echo Machines: $hl
 /ansys_inc/v145/ansys/bin/ansys145 -dis -lsdynampp i=input.k -machines $hl
 ```
 
-Header of the pbs file (above) is common and description can be find on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+The header of the PBS file (above) is common; its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends requesting resources with the keywords nodes and ppn. These keywords directly specify the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the script also assumes this structure of allocated resources.
 
-Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified. Input file has to be defined by common LS-DYNA .**k** file which is attached to the ansys solver via parameter i=
+The working directory has to be created before the PBS job is submitted to the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file has to be a common LS-DYNA .**k** file, which is passed to the ANSYS solver via the parameter i=.
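The modified script is then submitted to the queue with qsub:

```bash
$ qsub ansysdyna.pbs
```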
diff --git a/docs.it4i/salomon/software/ansys/ansys-mechanical-apdl.md b/docs.it4i/salomon/software/ansys/ansys-mechanical-apdl.md
index 4f7a97bc4..939a599b9 100644
--- a/docs.it4i/salomon/software/ansys/ansys-mechanical-apdl.md
+++ b/docs.it4i/salomon/software/ansys/ansys-mechanical-apdl.md
@@ -2,7 +2,7 @@ ANSYS MAPDL
 ===========
 
 **[ANSYS Multiphysics](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/ANSYS+Multiphysics)**
-software offers a comprehensive product solution for both multiphysics and single-physics analysis. The product includes structural, thermal, fluid and both high- and low-frequency electromagnetic analysis. The product also contains solutions for both direct and sequentially coupled physics problems including direct coupled-field elements and the ANSYS multi-field solver.
+software offers a comprehensive product solution for both multiphysics and single-physics analysis. The product includes structural, thermal, fluid and both high- and low-frequency electromagnetic analysis. The product also contains solutions for both direct and sequentially coupled physics problems including direct coupled-field elements and the ANSYS multi-field solver.
 
 To run ANSYS MAPDL in batch mode you can utilize/modify the default mapdl.pbs script and execute it via the qsub command.
 
@@ -52,6 +52,6 @@ echo Machines: $hl
 
 Header of the pbs file (above) is common and description can be find on [this site](../../resources-allocation-policy/). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
 
-Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified. Input file has to be defined by common APDL file which is attached to the ansys solver via parameter -i
+The working directory has to be created before the PBS job is submitted to the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file has to be a common APDL file, which is passed to the ANSYS solver via the -i parameter.
 
 **License** should be selected by parameter -p. Licensed products are the following: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics)-**Commercial**, aa_r_dy (ANSYS **Academic** AUTODYN) [More about licensing here](licensing/)
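Putting the -i and -p parameters together, the solver line inside the job script might end up as in this sketch (the input and output file names are illustrative; the other options follow the script above):

```bash
/ansys_inc/v145/ansys/bin/ansys145 -dis -p aa_r -i input.dat -o output.out -machines $hl
```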
diff --git a/docs.it4i/salomon/software/ansys/ansys.md b/docs.it4i/salomon/software/ansys/ansys.md
index 4140db0e9..093bcf956 100644
--- a/docs.it4i/salomon/software/ansys/ansys.md
+++ b/docs.it4i/salomon/software/ansys/ansys.md
@@ -1,9 +1,9 @@
 Overview of ANSYS Products
 ==========================
 
-**[SVS FEM](http://www.svsfem.cz/)** as **[ANSYS Channel partner](http://www.ansys.com/)** for Czech Republic provided all ANSYS licenses for ANSELM cluster and supports of all ANSYS Products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) to IT staff and ANSYS users. If you are challenging to problem of ANSYS functionality contact please [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM)
+**[SVS FEM](http://www.svsfem.cz/)**, the **[ANSYS Channel partner](http://www.ansys.com/)** for the Czech Republic, provided all ANSYS licenses for the ANSELM cluster and provides support for all ANSYS products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) to IT staff and ANSYS users. If you encounter a problem with ANSYS functionality, please contact [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM)
 
-Anselm provides as commercial as academic variants. Academic variants are distinguished by "**Academic...**" word in the name of  license or by two letter preposition "**aa_**" in the license feature name. Change of license is realized on command line respectively directly in user's pbs file (see individual products). [ More  about licensing here](licensing/)
+Anselm provides both commercial and academic variants. Academic variants are distinguished by the word "**Academic...**" in the license name or by the two-letter prefix "**aa_**" in the license feature name. The license is changed on the command line or directly in the user's PBS file (see the individual products). [More about licensing here](licensing/)
 
 To load the latest version of any ANSYS product (Mechanical, Fluent, CFX, MAPDL,...) load the module:
 
@@ -13,5 +13,5 @@ To load the latest version of any ANSYS product (Mechanical, Fluent, CFX, MAPDL,
 
 ANSYS supports interactive regime, but due to assumed solution of extremely difficult tasks it is not recommended.
 
-If user needs to work in interactive regime we recommend to configure the RSM service on the client machine which allows to forward the solution to the Anselm directly from the client's Workbench project (see ANSYS RSM service).
+If a user needs to work interactively, we recommend configuring the RSM service on the client machine, which allows forwarding the solution to Anselm directly from the client's Workbench project (see ANSYS RSM service).
 
diff --git a/docs.it4i/salomon/software/ansys/setting-license-preferences.md b/docs.it4i/salomon/software/ansys/setting-license-preferences.md
index 3b673a6af..44e0b8bde 100644
--- a/docs.it4i/salomon/software/ansys/setting-license-preferences.md
+++ b/docs.it4i/salomon/software/ansys/setting-license-preferences.md
@@ -1,9 +1,9 @@
 Setting license preferences
 ===========================
 
-Some ANSYS tools allow you to explicitly specify usage of academic or commercial licenses in the command line (eg. ansys161 -p aa_r to select Academic Research license). However, we have observed that not all tools obey this option and choose commercial license.
+Some ANSYS tools allow you to explicitly specify usage of academic or commercial licenses on the command line (e.g. ansys161 -p aa_r to select the Academic Research license). However, we have observed that not all tools obey this option and some choose a commercial license anyway.
 
-Thus you need to configure preferred license order with ANSLIC_ADMIN. Please follow these steps and move Academic Research license to the  top or bottom of the list accordingly.
+Thus you need to configure the preferred license order with ANSLIC_ADMIN. Please follow these steps and move the Academic Research license to the top or bottom of the list accordingly.
 
 Launch the ANSLIC_ADMIN utility in a graphical environment:
 
diff --git a/docs.it4i/salomon/software/ansys/workbench.md b/docs.it4i/salomon/software/ansys/workbench.md
index d6592d535..af5c9f9ff 100644
--- a/docs.it4i/salomon/software/ansys/workbench.md
+++ b/docs.it4i/salomon/software/ansys/workbench.md
@@ -13,7 +13,7 @@ Enable Distribute Solution checkbox and enter number of cores (eg. 48 to run on
     -mpifile /path/to/my/job/mpifile.txt
 ```
 
-Where /path/to/my/job is the directory where your project is saved. We will create the file mpifile.txt programatically later in the batch script. For more information, refer to *ANSYS Mechanical APDL Parallel Processing* *Guide*.
+Where /path/to/my/job is the directory where your project is saved. We will create the file mpifile.txt programmatically later in the batch script. For more information, refer to the *ANSYS Mechanical APDL Parallel Processing Guide*.
 
 Now, save the project and close Workbench. We will use this script to launch the job:
 
diff --git a/docs.it4i/salomon/software/chemistry/molpro.md b/docs.it4i/salomon/software/chemistry/molpro.md
index 526dbd662..787f86054 100644
--- a/docs.it4i/salomon/software/chemistry/molpro.md
+++ b/docs.it4i/salomon/software/chemistry/molpro.md
@@ -9,7 +9,7 @@ Molpro is a software package used for accurate ab-initio quantum chemistry calcu
 
 License
 -------
-Molpro software package is available only to users that have a valid license. Please contact support to enable access to Molpro if you have a valid license appropriate for running on our cluster (eg. academic research group licence, parallel execution).
+The Molpro software package is available only to users that have a valid license. Please contact support to enable access to Molpro if you have a valid license appropriate for running on our cluster (e.g. an academic research group license, parallel execution).
 
To run Molpro, you need to have a valid license token present in "$HOME/.molpro/token". You can download the token from the [Molpro website](https://www.molpro.net/licensee/?portal=licensee).
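A minimal sketch of putting the token in place (the download location is illustrative):

```bash
$ mkdir -p $HOME/.molpro
$ mv ~/token $HOME/.molpro/token   # token file downloaded from the Molpro website
```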
 
diff --git a/docs.it4i/salomon/software/chemistry/nwchem.md b/docs.it4i/salomon/software/chemistry/nwchem.md
index 8dfa68223..7b8723087 100644
--- a/docs.it4i/salomon/software/chemistry/nwchem.md
+++ b/docs.it4i/salomon/software/chemistry/nwchem.md
@@ -23,7 +23,7 @@ For a current list of installed versions, execute:
     module avail NWChem
 ```
 
-The recommend to use version 6.5. Version 6.3 fails on Salomon nodes with accelerator, because it attempts to communicate over scif0 interface. In 6.5 this is avoided by setting ARMCI_OPENIB_DEVICE=mlx4_0, this setting is included in the module.
+We recommend using version 6.5. Version 6.3 fails on Salomon nodes with accelerators, because it attempts to communicate over the scif0 interface. In 6.5 this is avoided by setting ARMCI_OPENIB_DEVICE=mlx4_0; this setting is included in the module.
 
 Running
 -------
@@ -41,7 +41,7 @@ Running
 
 Options
 --------------------
-Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and in the input file set the following directives :
+Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and set the following directives in the input file (a short sketch follows the list):
 
 -   MEMORY : controls the amount of memory NWChem will use
--   SCRATCH_DIR : set this to a directory in [SCRATCH     filesystem](../../storage/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, eg. "scf direct"
+-   SCRATCH_DIR : set this to a directory in the [SCRATCH filesystem](../../storage/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct"
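A short sketch of how the two directives may appear in an input file; the memory size and scratch path are illustrative:

```bash
# Append the directives to an existing input file (name is illustrative)
$ cat >> input.nw <<EOF
memory 2000 mb
scratch_dir /scratch/work/user/$USER/nwchem
EOF
```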
diff --git a/docs.it4i/salomon/software/chemistry/phono3py.md b/docs.it4i/salomon/software/chemistry/phono3py.md
index be2ed6ed1..5d5487f97 100644
--- a/docs.it4i/salomon/software/chemistry/phono3py.md
+++ b/docs.it4i/salomon/software/chemistry/phono3py.md
@@ -17,26 +17,26 @@ Example of calculating thermal conductivity of Si using VASP code.
 
 ### Calculating force constants
 
-One needs to calculate second order and third order force constants using the diamond structure of silicon stored in [POSCAR](poscar-si)  (the same form as in VASP) using single displacement calculations within supercell.
+One needs to calculate the second-order and third-order force constants using the diamond structure of silicon stored in [POSCAR](poscar-si) (the same form as in VASP), using single-displacement calculations within a supercell.
 
 ```bash
 $ cat POSCAR
- Si
-   1.0
-     5.4335600309153529    0.0000000000000000    0.0000000000000000
-     0.0000000000000000    5.4335600309153529    0.0000000000000000
-     0.0000000000000000    0.0000000000000000    5.4335600309153529
- Si
-   8
+ Si
+   1.0
+     5.4335600309153529    0.0000000000000000    0.0000000000000000
+     0.0000000000000000    5.4335600309153529    0.0000000000000000
+     0.0000000000000000    0.0000000000000000    5.4335600309153529
+ Si
+   8
 Direct
-   0.8750000000000000  0.8750000000000000  0.8750000000000000
-   0.8750000000000000  0.3750000000000000  0.3750000000000000
-   0.3750000000000000  0.8750000000000000  0.3750000000000000
-   0.3750000000000000  0.3750000000000000  0.8750000000000000
-   0.1250000000000000  0.1250000000000000  0.1250000000000000
-   0.1250000000000000  0.6250000000000000  0.6250000000000000
-   0.6250000000000000  0.1250000000000000  0.6250000000000000
-   0.6250000000000000  0.6250000000000000  0.1250000000000000
+   0.8750000000000000  0.8750000000000000  0.8750000000000000
+   0.8750000000000000  0.3750000000000000  0.3750000000000000
+   0.3750000000000000  0.8750000000000000  0.3750000000000000
+   0.3750000000000000  0.3750000000000000  0.8750000000000000
+   0.1250000000000000  0.1250000000000000  0.1250000000000000
+   0.1250000000000000  0.6250000000000000  0.6250000000000000
+   0.6250000000000000  0.1250000000000000  0.6250000000000000
+   0.6250000000000000  0.6250000000000000  0.1250000000000000
 ```
 
 ### Generating displacement using 2 x 2 x 2 supercell for both second and third order force constants
@@ -49,15 +49,15 @@ $ phono3py -d --dim="2 2 2" -c POSCAR
 disp_fc3.yaml, and the structure input files with this displacements are POSCAR-00XXX, where the XXX=111.
 
 ```bash
-disp_fc3.yaml  POSCAR-00008  POSCAR-00017  POSCAR-00026  POSCAR-00035  POSCAR-00044  POSCAR-00053  POSCAR-00062  POSCAR-00071  POSCAR-00080  POSCAR-00089  POSCAR-00098  POSCAR-00107
-POSCAR         POSCAR-00009  POSCAR-00018  POSCAR-00027  POSCAR-00036  POSCAR-00045  POSCAR-00054  POSCAR-00063  POSCAR-00072  POSCAR-00081  POSCAR-00090  POSCAR-00099  POSCAR-00108
-POSCAR-00001   POSCAR-00010  POSCAR-00019  POSCAR-00028  POSCAR-00037  POSCAR-00046  POSCAR-00055  POSCAR-00064  POSCAR-00073  POSCAR-00082  POSCAR-00091  POSCAR-00100  POSCAR-00109
-POSCAR-00002   POSCAR-00011  POSCAR-00020  POSCAR-00029  POSCAR-00038  POSCAR-00047  POSCAR-00056  POSCAR-00065  POSCAR-00074  POSCAR-00083  POSCAR-00092  POSCAR-00101  POSCAR-00110
-POSCAR-00003   POSCAR-00012  POSCAR-00021  POSCAR-00030  POSCAR-00039  POSCAR-00048  POSCAR-00057  POSCAR-00066  POSCAR-00075  POSCAR-00084  POSCAR-00093  POSCAR-00102  POSCAR-00111
-POSCAR-00004   POSCAR-00013  POSCAR-00022  POSCAR-00031  POSCAR-00040  POSCAR-00049  POSCAR-00058  POSCAR-00067  POSCAR-00076  POSCAR-00085  POSCAR-00094  POSCAR-00103
-POSCAR-00005   POSCAR-00014  POSCAR-00023  POSCAR-00032  POSCAR-00041  POSCAR-00050  POSCAR-00059  POSCAR-00068  POSCAR-00077  POSCAR-00086  POSCAR-00095  POSCAR-00104
-POSCAR-00006   POSCAR-00015  POSCAR-00024  POSCAR-00033  POSCAR-00042  POSCAR-00051  POSCAR-00060  POSCAR-00069  POSCAR-00078  POSCAR-00087  POSCAR-00096  POSCAR-00105
-POSCAR-00007   POSCAR-00016  POSCAR-00025  POSCAR-00034  POSCAR-00043  POSCAR-00052  POSCAR-00061  POSCAR-00070  POSCAR-00079  POSCAR-00088  POSCAR-00097  POSCAR-00106
+disp_fc3.yaml  POSCAR-00008  POSCAR-00017  POSCAR-00026  POSCAR-00035  POSCAR-00044  POSCAR-00053  POSCAR-00062  POSCAR-00071  POSCAR-00080  POSCAR-00089  POSCAR-00098  POSCAR-00107
+POSCAR         POSCAR-00009  POSCAR-00018  POSCAR-00027  POSCAR-00036  POSCAR-00045  POSCAR-00054  POSCAR-00063  POSCAR-00072  POSCAR-00081  POSCAR-00090  POSCAR-00099  POSCAR-00108
+POSCAR-00001   POSCAR-00010  POSCAR-00019  POSCAR-00028  POSCAR-00037  POSCAR-00046  POSCAR-00055  POSCAR-00064  POSCAR-00073  POSCAR-00082  POSCAR-00091  POSCAR-00100  POSCAR-00109
+POSCAR-00002   POSCAR-00011  POSCAR-00020  POSCAR-00029  POSCAR-00038  POSCAR-00047  POSCAR-00056  POSCAR-00065  POSCAR-00074  POSCAR-00083  POSCAR-00092  POSCAR-00101  POSCAR-00110
+POSCAR-00003   POSCAR-00012  POSCAR-00021  POSCAR-00030  POSCAR-00039  POSCAR-00048  POSCAR-00057  POSCAR-00066  POSCAR-00075  POSCAR-00084  POSCAR-00093  POSCAR-00102  POSCAR-00111
+POSCAR-00004   POSCAR-00013  POSCAR-00022  POSCAR-00031  POSCAR-00040  POSCAR-00049  POSCAR-00058  POSCAR-00067  POSCAR-00076  POSCAR-00085  POSCAR-00094  POSCAR-00103
+POSCAR-00005   POSCAR-00014  POSCAR-00023  POSCAR-00032  POSCAR-00041  POSCAR-00050  POSCAR-00059  POSCAR-00068  POSCAR-00077  POSCAR-00086  POSCAR-00095  POSCAR-00104
+POSCAR-00006   POSCAR-00015  POSCAR-00024  POSCAR-00033  POSCAR-00042  POSCAR-00051  POSCAR-00060  POSCAR-00069  POSCAR-00078  POSCAR-00087  POSCAR-00096  POSCAR-00105
+POSCAR-00007   POSCAR-00016  POSCAR-00025  POSCAR-00034  POSCAR-00043  POSCAR-00052  POSCAR-00061  POSCAR-00070  POSCAR-00079  POSCAR-00088  POSCAR-00097  POSCAR-00106
 ```
 
For each displacement the forces need to be calculated, i.e. in the form of the VASP output file (vasprun.xml). For a single VASP calculation one needs [KPOINTS](KPOINTS), [POTCAR](POTCAR), [INCAR](INCAR) in your case directory (where you have POSCARS), and those 111 displacement calculations can be generated by the [prepare.sh](prepare.sh) script. Then each of the 111 single calculations ([run.sh](run.sh)) is submitted by [submit.sh](submit.sh).
@@ -65,14 +65,14 @@ For each displacement the forces needs to be calculated, i.e. in form of the out
 ```bash
 $./prepare.sh
 $ls
-disp-00001  disp-00009  disp-00017  disp-00025  disp-00033  disp-00041  disp-00049  disp-00057  disp-00065  disp-00073  disp-00081  disp-00089  disp-00097  disp-00105     INCAR
-disp-00002  disp-00010  disp-00018  disp-00026  disp-00034  disp-00042  disp-00050  disp-00058  disp-00066  disp-00074  disp-00082  disp-00090  disp-00098  disp-00106     KPOINTS
-disp-00003  disp-00011  disp-00019  disp-00027  disp-00035  disp-00043  disp-00051  disp-00059  disp-00067  disp-00075  disp-00083  disp-00091  disp-00099  disp-00107     POSCAR
-disp-00004  disp-00012  disp-00020  disp-00028  disp-00036  disp-00044  disp-00052  disp-00060  disp-00068  disp-00076  disp-00084  disp-00092  disp-00100  disp-00108     POTCAR
-disp-00005  disp-00013  disp-00021  disp-00029  disp-00037  disp-00045  disp-00053  disp-00061  disp-00069  disp-00077  disp-00085  disp-00093  disp-00101  disp-00109     prepare.sh
-disp-00006  disp-00014  disp-00022  disp-00030  disp-00038  disp-00046  disp-00054  disp-00062  disp-00070  disp-00078  disp-00086  disp-00094  disp-00102  disp-00110     run.sh
-disp-00007  disp-00015  disp-00023  disp-00031  disp-00039  disp-00047  disp-00055  disp-00063  disp-00071  disp-00079  disp-00087  disp-00095  disp-00103  disp-00111     submit.sh
-disp-00008  disp-00016  disp-00024  disp-00032  disp-00040  disp-00048  disp-00056  disp-00064  disp-00072  disp-00080  disp-00088  disp-00096  disp-00104  disp_fc3.yaml
+disp-00001  disp-00009  disp-00017  disp-00025  disp-00033  disp-00041  disp-00049  disp-00057  disp-00065  disp-00073  disp-00081  disp-00089  disp-00097  disp-00105     INCAR
+disp-00002  disp-00010  disp-00018  disp-00026  disp-00034  disp-00042  disp-00050  disp-00058  disp-00066  disp-00074  disp-00082  disp-00090  disp-00098  disp-00106     KPOINTS
+disp-00003  disp-00011  disp-00019  disp-00027  disp-00035  disp-00043  disp-00051  disp-00059  disp-00067  disp-00075  disp-00083  disp-00091  disp-00099  disp-00107     POSCAR
+disp-00004  disp-00012  disp-00020  disp-00028  disp-00036  disp-00044  disp-00052  disp-00060  disp-00068  disp-00076  disp-00084  disp-00092  disp-00100  disp-00108     POTCAR
+disp-00005  disp-00013  disp-00021  disp-00029  disp-00037  disp-00045  disp-00053  disp-00061  disp-00069  disp-00077  disp-00085  disp-00093  disp-00101  disp-00109     prepare.sh
+disp-00006  disp-00014  disp-00022  disp-00030  disp-00038  disp-00046  disp-00054  disp-00062  disp-00070  disp-00078  disp-00086  disp-00094  disp-00102  disp-00110     run.sh
+disp-00007  disp-00015  disp-00023  disp-00031  disp-00039  disp-00047  disp-00055  disp-00063  disp-00071  disp-00079  disp-00087  disp-00095  disp-00103  disp-00111     submit.sh
+disp-00008  disp-00016  disp-00024  disp-00032  disp-00040  disp-00048  disp-00056  disp-00064  disp-00072  disp-00080  disp-00088  disp-00096  disp-00104  disp_fc3.yaml
 ```
 
Tailor your run.sh script to fit your project and other needs, and submit all 111 calculations using the submit.sh script
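A submit.sh along these lines could be as simple as the following sketch (directory and script names follow the listing above):

```bash
#!/bin/bash
# Submit run.sh from every displacement directory
for d in disp-*; do
    ( cd "$d" && qsub ../run.sh )
done
```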
@@ -110,7 +110,7 @@ $ phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" --sigma 0.1 --wgp
 ```bash
 $ grep grid_point ir_grid_points.yaml
 num_reduced_ir_grid_points: 35
-ir_grid_points:  # [address, weight]
+ir_grid_points:  # [address, weight]
 - grid_point: 0
 - grid_point: 1
 - grid_point: 2
@@ -151,7 +151,7 @@ ir_grid_points:  # [address, weight]
one finds which grid points need to be calculated, for instance using the following
 
 ```bash
-$ phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" -c POSCAR  --sigma 0.1 --br --write-gamma --gp="0 1 2
+$ phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" -c POSCAR --sigma 0.1 --br --write-gamma --gp="0 1 2"
 ```
 
one calculates grid points 0, 1, 2. To automate this, one can use, for instance, scripts that submit 5 points in series; see [gofree-cond1.sh](gofree-cond1.sh)
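For example, the first five grid points would be handled by a single invocation such as this one (same options as above, only --gp changes):

```bash
$ phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" -c POSCAR --sigma 0.1 --br --write-gamma --gp="0 1 2 3 4"
```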
diff --git a/docs.it4i/salomon/software/compilers.md b/docs.it4i/salomon/software/compilers.md
index 283761ac0..b14287af3 100644
--- a/docs.it4i/salomon/software/compilers.md
+++ b/docs.it4i/salomon/software/compilers.md
@@ -107,12 +107,12 @@ Simple program to test the compiler
     #include <stdio.h>
 
     int main() {
-      if (MYTHREAD == 0) {
-        printf("Welcome to GNU UPC!!!n");
-      }
-      upc_barrier;
-      printf(" - Hello from thread %in", MYTHREAD);
-      return 0;
+      if (MYTHREAD == 0) {
+        printf("Welcome to GNU UPC!!!\n");
+      }
+      upc_barrier;
+      printf(" - Hello from thread %i\n", MYTHREAD);
+      return 0;
     }
 ```
 
@@ -153,12 +153,12 @@ Example UPC code:
     #include <stdio.h>
 
     int main() {
-      if (MYTHREAD == 0) {
-        printf("Welcome to Berkeley UPC!!!n");
-      }
-      upc_barrier;
-      printf(" - Hello from thread %in", MYTHREAD);
-      return 0;
+      if (MYTHREAD == 0) {
+        printf("Welcome to Berkeley UPC!!!\n");
+      }
+      upc_barrier;
+      printf(" - Hello from thread %i\n", MYTHREAD);
+      return 0;
     }
 ```
 
diff --git a/docs.it4i/salomon/software/comsol/comsol-multiphysics.md b/docs.it4i/salomon/software/comsol/comsol-multiphysics.md
index e40a879be..a9f06a442 100644
--- a/docs.it4i/salomon/software/comsol/comsol-multiphysics.md
+++ b/docs.it4i/salomon/software/comsol/comsol-multiphysics.md
@@ -20,7 +20,7 @@ On the clusters COMSOL is available in the latest stable version. There are two
 
 -   **Non commercial** or so called >**EDU variant**>, which can be used for research and educational purposes.
 
--   **Commercial** or so called **COM variant**, which can used also for commercial activities. **COM variant** has only subset of features compared to the **EDU  variant** available. More about licensing will be posted here soon.
+-   **Commercial**, or so-called **COM variant**, which can also be used for commercial activities. The **COM variant** has only a subset of the features available in the **EDU variant**. More about licensing will be posted here soon.
 
To load the latest version of COMSOL, load the module
 
@@ -50,7 +50,7 @@ To run COMSOL in batch mode, without the COMSOL Desktop GUI environment, user ca
 #PBS -l select=3:ppn=24
 #PBS -q qprod
 #PBS -N JOB_NAME
-#PBS -A PROJECT_ID
+#PBS -A PROJECT_ID
 
 cd /scratch/work/user/$USER/ || exit
 
@@ -72,7 +72,7 @@ comsol -nn ${ntask} batch -configuration /tmp –mpiarg –rmk –mpiarg pbs -tm
 
 Working directory has to be created before sending the (comsol.pbs) job script into the queue. Input file (name_input_f.mph) has to be in working directory or full path to input file has to be specified. The appropriate path to the temp directory of the job has to be set by command option (-tmpdir).
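Once the working directory and the input file are in place, the job script is submitted as usual:

```bash
$ qsub comsol.pbs
```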
 
-LiveLink™* *for MATLAB®
+LiveLink™ for MATLAB®
 -------------------------
COMSOL is a software package for the numerical solution of partial differential equations. LiveLink for MATLAB allows connection to the COMSOL® API (Application Programming Interface) with the benefits of the programming language and computing environment of MATLAB.
 
@@ -95,7 +95,7 @@ To run LiveLink for MATLAB in batch mode with (comsol_matlab.pbs) job script you
 #PBS -l select=3:ppn=24
 #PBS -q qprod
 #PBS -N JOB_NAME
-#PBS -A PROJECT_ID
+#PBS -A PROJECT_ID
 
 cd /scratch/work/user/$USER || exit
 
diff --git a/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md b/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md
index c9a1473ef..332601743 100644
--- a/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md
+++ b/docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md
@@ -3,7 +3,7 @@ Intel VTune Amplifier XE
 
 Introduction
 ------------
-Intel*® *VTune™ Amplifier, part of Intel Parallel studio, is a GUI profiling tool designed for Intel processors. It offers a graphical performance analysis of single core and multithreaded applications. A highlight of the features:
+Intel® VTune™ Amplifier, part of Intel Parallel Studio, is a GUI profiling tool designed for Intel processors. It offers a graphical performance analysis of single core and multithreaded applications. A highlight of the features:
 
 -   Hotspot analysis
 -   Locks and waits analysis
@@ -37,7 +37,7 @@ To launch the GUI, first load the module:
 and launch the GUI :
 
 ```bash
-    $ amplxe-gui
+    $ amplxe-gui
 ```
 
The GUI will open in a new window. Click on "New Project..." to create a new project. After clicking OK, a new window with project properties will appear. At "Application:", select the path to the binary you want to profile (the binary should be compiled with the -g flag). Some additional options such as command line arguments can be selected. At "Managed code profiling mode:" select "Native" (unless you want to profile managed mode .NET/Mono applications). After clicking OK, your project is created.
@@ -63,11 +63,11 @@ It is possible to analyze both native and offloaded Xeon Phi applications.
 
 ### Native mode
 
-This mode is useful for native Xeon Phi applications launched directly on the card. In *Analysis Target* window, select *Intel Xeon Phi coprocessor (native), *choose path to the binary and MIC card to run on.
+This mode is useful for native Xeon Phi applications launched directly on the card. In the *Analysis Target* window, select *Intel Xeon Phi coprocessor (native)*, choose the path to the binary and the MIC card to run on.
 
 ### Offload mode
 
-This mode is useful for applications that are launched from the host and use offload, OpenCL or mpirun. In *Analysis Target* window, select *Intel Xeon Phi coprocessor (native), *choose path to the binaryand MIC card to run on.
+This mode is useful for applications that are launched from the host and use offload, OpenCL or mpirun. In the *Analysis Target* window, select *Intel Xeon Phi coprocessor (native)*, choose the path to the binary and the MIC card to run on.
 
 !!! Note "Note"
 	If the analysis is interrupted or aborted, further analysis on the card might be impossible and you will get errors like "ERROR connecting to MIC card". In this case please contact our support to reboot the MIC card.
@@ -90,6 +90,6 @@ You can obtain this command line by pressing the "Command line..." button on Ana
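Such a command line typically looks like the following sketch; the result directory and application name are illustrative:

```bash
$ amplxe-cl -collect hotspots -r ./vtune_results -- ./my_application
```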
 
 References
 ----------
-1.  <https://www.rcac.purdue.edu/tutorials/phi/PerformanceTuningXeonPhi-Tullos.pdf> Performance Tuning for Intel® Xeon Phi™ Coprocessors
-2.  <https://software.intel.com/en-us/intel-vtune-amplifier-xe-support/documentation> >Intel® VTune™ Amplifier Support
+1.  <https://www.rcac.purdue.edu/tutorials/phi/PerformanceTuningXeonPhi-Tullos.pdf> Performance Tuning for Intel® Xeon Phi™ Coprocessors
+2.  <https://software.intel.com/en-us/intel-vtune-amplifier-xe-support/documentation> Intel® VTune™ Amplifier Support
 3.  <https://software.intel.com/en-us/amplifier_help_linux>
diff --git a/docs.it4i/salomon/software/debuggers/total-view.md b/docs.it4i/salomon/software/debuggers/total-view.md
index daff0abb0..7781ab410 100644
--- a/docs.it4i/salomon/software/debuggers/total-view.md
+++ b/docs.it4i/salomon/software/debuggers/total-view.md
@@ -8,8 +8,8 @@ License and Limitations for cluster Users
On the cluster users can debug OpenMP or MPI code that runs up to 64 parallel processes. This limitation means that:
 
 ```bash
-    1 user can debug up 64 processes, or
-    32 users can debug 2 processes, etc.
+    1 user can debug up to 64 processes, or
+    32 users can debug 2 processes, etc.
 ```
 
 Debugging of GPU accelerated codes is also supported.
@@ -26,7 +26,7 @@ Load all necessary modules to compile the code. For example:
 ```bash
     module load intel
 
-    module load impi   ... or ... module load OpenMPI/X.X.X-icc
+    module load impi   ... or ... module load OpenMPI/X.X.X-icc
 ```
 
 Load the TotalView module:
@@ -87,16 +87,16 @@ To debug a parallel code compiled with **OpenMPI** you need to setup your TotalV
 
 ```bash
     proc mpi_auto_run_starter {loaded_id} {
-        set starter_programs {mpirun mpiexec orterun}
-        set executable_name [TV::symbol get $loaded_id full_pathname]
-        set file_component [file tail $executable_name]
-
-        if {[lsearch -exact $starter_programs $file_component] != -1} {
-            puts "*************************************"
-            puts "Automatically starting $file_component"
-            puts "*************************************"
-            dgo
-        }
+        set starter_programs {mpirun mpiexec orterun}
+        set executable_name [TV::symbol get $loaded_id full_pathname]
+        set file_component [file tail $executable_name]
+
+        if {[lsearch -exact $starter_programs $file_component] != -1} {
+            puts "*************************************"
+            puts "Automatically starting $file_component"
+            puts "*************************************"
+            dgo
+        }
     }
 
     # Append this function to TotalView's image load callbacks so that
@@ -114,7 +114,7 @@ The source code of this function can be also found in
You can also add only the following line to your ~/.tvdrc file instead of
 the entire function:
 
-**source /apps/all/OpenMPI/1.10.1-GNU-4.9.3-2.25/etc/openmpi-totalview.tcl**
+**source /apps/all/OpenMPI/1.10.1-GNU-4.9.3-2.25/etc/openmpi-totalview.tcl**
 
 You need to do this step only once. See also [OpenMPI FAQ entry](https://www.open-mpi.org/faq/?category=running#run-with-tv)
 
@@ -146,7 +146,7 @@ The following example shows how to start debugging session with Intel MPI:
 
 After running previous command you will see the same window as shown in the screenshot above.
 
-More information regarding the command line parameters of the TotalView can be found TotalView Reference Guide, Chapter 7: TotalView Command Syntax.
+More information regarding the command line parameters of TotalView can be found in the TotalView Reference Guide, Chapter 7: TotalView Command Syntax.
 
 Documentation
 -------------
diff --git a/docs.it4i/salomon/software/debuggers/valgrind.md b/docs.it4i/salomon/software/debuggers/valgrind.md
index 9c0798d20..df3bda344 100644
--- a/docs.it4i/salomon/software/debuggers/valgrind.md
+++ b/docs.it4i/salomon/software/debuggers/valgrind.md
@@ -170,13 +170,13 @@ Lets look at this MPI example:
 
     int main(int argc, char *argv[])
     {
-            int *data = malloc(sizeof(int)*99);
+            int *data = malloc(sizeof(int)*99);
 
-            MPI_Init(&argc, &argv);
-            MPI_Bcast(data, 100, MPI_INT, 0, MPI_COMM_WORLD);
-            MPI_Finalize();
+            MPI_Init(&argc, &argv);
+            MPI_Bcast(data, 100, MPI_INT, 0, MPI_COMM_WORLD);
+            MPI_Finalize();
 
-            return 0;
+            return 0;
     }
 ```
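To actually see the overflow reported, the example can be run under Valgrind through the MPI launcher, e.g. (the binary name is illustrative):

```bash
$ mpirun -np 2 valgrind ./mpi_example
```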
 
diff --git a/docs.it4i/salomon/software/intel-suite/intel-advisor.md b/docs.it4i/salomon/software/intel-suite/intel-advisor.md
index f02d18643..cf25a765c 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-advisor.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-advisor.md
@@ -1,7 +1,7 @@
 Intel Advisor
 =============
 
-is tool aiming to assist you in vectorization and threading of your code. You can use it to profile your application and identify loops, that could benefit from vectorization and/or threading parallelism.
+is a tool aimed at assisting you with vectorization and threading of your code. You can use it to profile your application and identify loops that could benefit from vectorization and/or threading parallelism.
 
 Installed versions
 ------------------
diff --git a/docs.it4i/salomon/software/intel-suite/intel-compilers.md b/docs.it4i/salomon/software/intel-suite/intel-compilers.md
index 8185db79f..0b61d00af 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-compilers.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-compilers.md
@@ -23,15 +23,15 @@ In this example, we compile the program enabling interprocedural optimizations b
 The compiler recognizes the omp, simd, vector and ivdep pragmas for OpenMP parallelization and AVX2 vectorization. Enable the OpenMP parallelization by the **-openmp** compiler switch.
 
 ```bash
-    $ icc -ipo -O3 -xCORE-AVX2 -qopt-report1 -qopt-report-phase=vec -openmp myprog.c mysubroutines.c -o myprog.x
-    $ ifort -ipo -O3 -xCORE-AVX2 -qopt-report1 -qopt-report-phase=vec -openmp myprog.f mysubroutines.f -o myprog.x
+    $ icc -ipo -O3 -xCORE-AVX2 -qopt-report1 -qopt-report-phase=vec -openmp myprog.c mysubroutines.c -o myprog.x
+    $ ifort -ipo -O3 -xCORE-AVX2 -qopt-report1 -qopt-report-phase=vec -openmp myprog.f mysubroutines.f -o myprog.x
 ```
 
-Read more at <https://software.intel.com/en-us/intel-cplusplus-compiler-16.0-user-and-reference-guide>
+Read more at <https://software.intel.com/en-us/intel-cplusplus-compiler-16.0-user-and-reference-guide>
 
 Sandy Bridge/Ivy Bridge/Haswell binary compatibility
 ----------------------------------------------------
- Anselm nodes are currently equipped with Sandy Bridge CPUs, while Salomon compute nodes are equipped with Haswell based architecture. The UV1 SMP compute server has Ivy Bridge CPUs, which are equivalent to Sandy Bridge (only smaller manufacturing technology). The new processors are backward compatible with the Sandy Bridge nodes, so all programs that ran on the Sandy Bridge processors, should also run on the new Haswell nodes. To get optimal performance out of the Haswell processors a program should make use of the special AVX2 instructions for this processor. One can do this by recompiling codes with the compiler flags designated to invoke these instructions. For the Intel compiler suite, there are two ways of doing this:
+Anselm nodes are currently equipped with Sandy Bridge CPUs, while Salomon compute nodes are equipped with the Haswell based architecture. The UV1 SMP compute server has Ivy Bridge CPUs, which are equivalent to Sandy Bridge (only a smaller manufacturing process). The new processors are backward compatible with the Sandy Bridge nodes, so all programs that ran on the Sandy Bridge processors should also run on the new Haswell nodes. To get optimal performance out of the Haswell processors, a program should make use of the special AVX2 instructions for this processor. One can do this by recompiling codes with the compiler flags designated to invoke these instructions. For the Intel compiler suite, there are two ways of doing this:
 
--   Using compiler flag (both for Fortran and C): -xCORE-AVX2. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge/Ivy Bridge nodes.
--   Using compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This   will generate multiple, feature specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. So this binary will run both on Sandy Bridge/Ivy Bridge and Haswell processors. During runtime it will be decided which path to follow, dependent on which processor you are running on. In general this    will result in larger binaries.
+-   Using the compiler flag (both for Fortran and C): -xCORE-AVX2. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge/Ivy Bridge nodes.
+-   Using the compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This will generate multiple, feature-specific auto-dispatch code paths for Intel® processors, if there is a performance benefit. So this binary will run both on Sandy Bridge/Ivy Bridge and Haswell processors. At runtime it will be decided which path to follow, depending on which processor you are running on. In general this will result in larger binaries (see the sketch below).
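For example, the second approach applied to the program compiled earlier would look like this sketch, combining the flags above:

```bash
$ icc -ipo -O3 -xAVX -axCORE-AVX2 myprog.c mysubroutines.c -o myprog.x
```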
diff --git a/docs.it4i/salomon/software/intel-suite/intel-debugger.md b/docs.it4i/salomon/software/intel-suite/intel-debugger.md
index c447a6277..7452cbb50 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-debugger.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-debugger.md
@@ -5,7 +5,7 @@ IDB is no longer available since Intel Parallel Studio 2015
 
 Debugging serial applications
 -----------------------------
-The intel debugger version  13.0 is available, via module intel. The debugger works for applications compiled with C and C++ compiler and the ifort fortran 77/90/95 compiler. The debugger provides java GUI environment. Use [X display](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) for running the GUI.
+The Intel debugger version 13.0 is available via the module intel. The debugger works for applications compiled with the C and C++ compilers and the ifort Fortran 77/90/95 compiler. The debugger provides a Java GUI environment. Use [X display](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) for running the GUI.
 
 ```bash
     $ module load intel/2014.06
diff --git a/docs.it4i/salomon/software/intel-suite/intel-mkl.md b/docs.it4i/salomon/software/intel-suite/intel-mkl.md
index 4050e14a8..43a2ff1e3 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-mkl.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-mkl.md
@@ -13,11 +13,11 @@ Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, e
 -   Vector Math Library (VML) routines for optimized mathematical operations on vectors.
 -   Vector Statistical Library (VSL) routines, which offer high-performance vectorized random number generators (RNG) for several probability distributions, convolution and correlation routines, and summary statistics functions.
 -   Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search.
--   Extended Eigensolver, a shared memory  version of an eigensolver based on the Feast Eigenvalue Solver.
+-   Extended Eigensolver, a shared memory version of an eigensolver based on the Feast Eigenvalue Solver.
 
 For details see the [Intel MKL Reference Manual](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mklman/index.htm).
 
-Intel MKL version 11.2.3.187 is available on the cluster
+Intel MKL version 11.2.3.187 is available on the cluster
 
 ```bash
     $ module load imkl
@@ -40,7 +40,7 @@ Intel MKL library provides number of interfaces. The fundamental once are the LP
 
 Linking Intel MKL libraries may be complex. Intel [mkl link line advisor](http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor) helps. See also [examples](intel-mkl/#examples) below.
 
-You will need the mkl module loaded to run the mkl enabled executable. This may be avoided, by compiling library search paths into the executable. Include  rpath on the compile line:
+You will need the mkl module loaded to run the MKL-enabled executable. This may be avoided by compiling the library search paths into the executable. Include rpath on the compile line:
 
 ```bash
     $ icc .... -Wl,-rpath=$LIBRARY_PATH ...
@@ -74,7 +74,7 @@ Number of examples, demonstrating use of the Intel MKL library and its linking i
     $ make sointel64 function=cblas_dgemm
 ```
 
-In this example, we compile, link and run the cblas_dgemm  example, demonstrating use of MKL example suite installed on clusters.
+In this example, we compile, link and run the cblas_dgemm example, demonstrating use of the MKL example suite installed on the clusters.
 
 ### Example: MKL and Intel compiler
 
@@ -88,14 +88,14 @@ In this example, we compile, link and run the cblas_dgemm  example, demonstrati
     $ ./cblas_dgemmx.x data/cblas_dgemmx.d
 ```
 
-In this example, we compile, link and run the cblas_dgemm  example, demonstrating use of MKL with icc -mkl option. Using the -mkl option is equivalent to:
+In this example, we compile, link and run the cblas_dgemm example, demonstrating use of MKL with the icc -mkl option. Using the -mkl option is equivalent to:
 
 ```bash
     $ icc -w source/cblas_dgemmx.c source/common_func.c -o cblas_dgemmx.x
     -I$MKL_INC_DIR -L$MKL_LIB_DIR -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5
 ```
 
-In this example, we compile and link the cblas_dgemm  example, using LP64 interface to threaded MKL and Intel OMP threads implementation.
+In this example, we compile and link the cblas_dgemm example, using the LP64 interface to the threaded MKL and the Intel OMP threads implementation.
 
 ### Example: Intel MKL and GNU compiler
 
@@ -111,7 +111,7 @@ In this example, we compile and link the cblas_dgemm  example, using LP64 inter
     $ ./cblas_dgemmx.x data/cblas_dgemmx.d
 ```
 
-In this example, we compile, link and run the cblas_dgemm  example, using LP64 interface to threaded MKL and gnu OMP threads implementation.
+In this example, we compile, link and run the cblas_dgemm example, using the LP64 interface to threaded MKL and the GNU OpenMP threads implementation.
 
 MKL and MIC accelerators
 ------------------------
diff --git a/docs.it4i/salomon/software/intel-suite/intel-tbb.md b/docs.it4i/salomon/software/intel-suite/intel-tbb.md
index 3a83e9698..7d05e24e1 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-tbb.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-tbb.md
@@ -3,9 +3,9 @@ Intel TBB
 
 Intel Threading Building Blocks
 ------------------------------
-Intel Threading Building Blocks (Intel TBB) is a library that supports scalable parallel programming using standard ISO C++ code. It does not require special languages or compilers.  To use the library, you specify tasks, not threads, and let the library map tasks onto threads in an efficient manner. The tasks are executed by a runtime scheduler and may be offloaded to [MIC accelerator](../intel-xeon-phi/).
+Intel Threading Building Blocks (Intel TBB) is a library that supports scalable parallel programming using standard ISO C++ code. It does not require special languages or compilers. To use the library, you specify tasks, not threads, and let the library map tasks onto threads in an efficient manner. The tasks are executed by a runtime scheduler and may be offloaded to the [MIC accelerator](../intel-xeon-phi/).
 
-Intel TBB version 4.3.5.187 is available on the cluster.
+Intel TBB version 4.3.5.187 is available on the cluster.
 
 ```bash
     $ module load tbb
@@ -17,7 +17,7 @@ Link the tbb library, using -ltbb
 
 Examples
 --------
-Number of examples, demonstrating use of TBB and its built-in scheduler is available on Anselm, in the $TBB_EXAMPLES directory.
+A number of examples demonstrating the use of TBB and its built-in scheduler are available in the $TBB_EXAMPLES directory.
 
 ```bash
     $ module load intel
diff --git a/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md b/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md
index 6d0703ed0..e88fff56b 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md
@@ -7,7 +7,7 @@ ITAC is a offline analysis tool - first you run your application to collect a tr
 
 Installed version
 -----------------
-Currently on Salomon is version 9.1.2.024 available as module  itac/9.1.2.024
+Version 9.1.2.024 is currently available on Salomon as the module itac/9.1.2.024.
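+
+The module may be loaded as usual:
+
+```bash
+    $ module load itac/9.1.2.024
+```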
 
 Collecting traces
 -----------------
diff --git a/docs.it4i/salomon/software/intel-xeon-phi.md b/docs.it4i/salomon/software/intel-xeon-phi.md
index 90b099d99..19ec77df6 100644
--- a/docs.it4i/salomon/software/intel-xeon-phi.md
+++ b/docs.it4i/salomon/software/intel-xeon-phi.md
@@ -19,7 +19,7 @@ To set up the environment module "Intel" has to be loaded
     $ module load intel/13.5.192
 ```
 
-Information about the hardware can be obtained by running the micinfo program on the host.
+Information about the hardware can be obtained by running the micinfo program on the host.
 
 ```bash
     $ /usr/bin/micinfo
@@ -32,61 +32,61 @@ The output of the "micinfo" utility executed on one of the Anselm node is as fol
 
     Created Mon Jul 22 00:23:50 2013
 
-            System Info
-                    HOST OS                 : Linux
-                    OS Version              : 2.6.32-279.5.2.bl6.Bull.33.x86_64
-                    Driver Version          : 6720-15
-                    MPSS Version            : 2.1.6720-15
-                    Host Physical Memory    : 98843 MB
+            System Info
+                    HOST OS                 : Linux
+                    OS Version              : 2.6.32-279.5.2.bl6.Bull.33.x86_64
+                    Driver Version          : 6720-15
+                    MPSS Version            : 2.1.6720-15
+                    Host Physical Memory    : 98843 MB
 
     Device No: 0, Device Name: mic0
 
-            Version
-                    Flash Version            : 2.1.03.0386
-                    SMC Firmware Version     : 1.15.4830
-                    SMC Boot Loader Version  : 1.8.4326
-                    uOS Version              : 2.6.38.8-g2593b11
-                    Device Serial Number     : ADKC30102482
-
-            Board
-                    Vendor ID                : 0x8086
-                    Device ID                : 0x2250
-                    Subsystem ID             : 0x2500
-                    Coprocessor Stepping ID  : 3
-                    PCIe Width               : x16
-                    PCIe Speed               : 5 GT/s
-                    PCIe Max payload size    : 256 bytes
-                    PCIe Max read req size   : 512 bytes
-                    Coprocessor Model        : 0x01
-                    Coprocessor Model Ext    : 0x00
-                    Coprocessor Type         : 0x00
-                    Coprocessor Family       : 0x0b
-                    Coprocessor Family Ext   : 0x00
-                    Coprocessor Stepping     : B1
-                    Board SKU                : B1PRQ-5110P/5120D
-                    ECC Mode                 : Enabled
-                    SMC HW Revision          : Product 225W Passive CS
-
-            Cores
-                    Total No of Active Cores : 60
-                    Voltage                  : 1032000 uV
-                    Frequency                : 1052631 kHz
-
-            Thermal
-                    Fan Speed Control        : N/A
-                    Fan RPM                  : N/A
-                    Fan PWM                  : N/A
-                    Die Temp                 : 49 C
-
-            GDDR
-                    GDDR Vendor              : Elpida
-                    GDDR Version             : 0x1
-                    GDDR Density             : 2048 Mb
-                    GDDR Size                : 7936 MB
-                    GDDR Technology          : GDDR5
-                    GDDR Speed               : 5.000000 GT/s
-                    GDDR Frequency           : 2500000 kHz
-                    GDDR Voltage             : 1501000 uV
+            Version
+                    Flash Version            : 2.1.03.0386
+                    SMC Firmware Version     : 1.15.4830
+                    SMC Boot Loader Version  : 1.8.4326
+                    uOS Version              : 2.6.38.8-g2593b11
+                    Device Serial Number     : ADKC30102482
+
+            Board
+                    Vendor ID                : 0x8086
+                    Device ID                : 0x2250
+                    Subsystem ID             : 0x2500
+                    Coprocessor Stepping ID  : 3
+                    PCIe Width               : x16
+                    PCIe Speed               : 5 GT/s
+                    PCIe Max payload size    : 256 bytes
+                    PCIe Max read req size   : 512 bytes
+                    Coprocessor Model        : 0x01
+                    Coprocessor Model Ext    : 0x00
+                    Coprocessor Type         : 0x00
+                    Coprocessor Family       : 0x0b
+                    Coprocessor Family Ext   : 0x00
+                    Coprocessor Stepping     : B1
+                    Board SKU                : B1PRQ-5110P/5120D
+                    ECC Mode                 : Enabled
+                    SMC HW Revision          : Product 225W Passive CS
+
+            Cores
+                    Total No of Active Cores : 60
+                    Voltage                  : 1032000 uV
+                    Frequency                : 1052631 kHz
+
+            Thermal
+                    Fan Speed Control        : N/A
+                    Fan RPM                  : N/A
+                    Fan PWM                  : N/A
+                    Die Temp                 : 49 C
+
+            GDDR
+                    GDDR Vendor              : Elpida
+                    GDDR Version             : 0x1
+                    GDDR Density             : 2048 Mb
+                    GDDR Size                : 7936 MB
+                    GDDR Technology          : GDDR5
+                    GDDR Speed               : 5.000000 GT/s
+                    GDDR Frequency           : 2500000 kHz
+                    GDDR Voltage             : 1501000 uV
 ```
 
 Offload Mode
@@ -113,16 +113,16 @@ A very basic example of code that employs offload programming technique is shown
 
     int main(int argc, char* argv[])
     {
-        const int niter = 100000;
-        double result = 0;
-
-     #pragma offload target(mic)
-        for (int i = 0; i < niter; ++i) {
-            const double t = (i + 0.5) / niter;
-            result += 4.0 / (t * t + 1.0);
-        }
-        result /= niter;
-        std::cout << "Pi ~ " << result << 'n';
+        const int niter = 100000;
+        double result = 0;
+
+     #pragma offload target(mic)
+        for (int i = 0; i < niter; ++i) {
+            const double t = (i + 0.5) / niter;
+            result += 4.0 / (t * t + 1.0);
+        }
+        result /= niter;
+        std::cout << "Pi ~ " << result << '\n';
     }
 ```
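+
+A minimal way to build and run the example on the host might look like this (the source file name is illustrative):
+
+```bash
+    $ icc source-offload.cpp -o bin-offload
+    $ ./bin-offload
+```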
 
@@ -159,63 +159,63 @@ One way of paralelization a code for Xeon Phi is using OpenMP directives. The fo
 
     // MIC function to add two vectors
     __attribute__((target(mic))) add_mic(T *a, T *b, T *c, int size) {
-      int i = 0;
-      #pragma omp parallel for
-        for (i = 0; i < size; i++)
-          c[i] = a[i] + b[i];
+      int i = 0;
+      #pragma omp parallel for
+        for (i = 0; i < size; i++)
+          c[i] = a[i] + b[i];
     }
 
     // CPU function to add two vectors
     void add_cpu (T *a, T *b, T *c, int size) {
-      int i;
-      for (i = 0; i < size; i++)
-        c[i] = a[i] + b[i];
+      int i;
+      for (i = 0; i < size; i++)
+        c[i] = a[i] + b[i];
     }
 
     // CPU function to generate a vector of random numbers
     void random_T (T *a, int size) {
-      int i;
-      for (i = 0; i < size; i++)
-        a[i] = rand() % 10000; // random number between 0 and 9999
+      int i;
+      for (i = 0; i < size; i++)
+        a[i] = rand() % 10000; // random number between 0 and 9999
     }
 
     // CPU function to compare two vectors
     int compare(T *a, T *b, T size ){
-      int pass = 0;
-      int i;
-      for (i = 0; i < size; i++){
-        if (a[i] != b[i]) {
-          printf("Value mismatch at location %d, values %d and %dn",i, a[i], b[i]);
-          pass = 1;
-        }
-      }
-      if (pass == 0) printf ("Test passedn"); else printf ("Test Failedn");
-      return pass;
+      int pass = 0;
+      int i;
+      for (i = 0; i < size; i++){
+        if (a[i] != b[i]) {
+          printf("Value mismatch at location %d, values %d and %d\n", i, a[i], b[i]);
+          pass = 1;
+        }
+      }
+      if (pass == 0) printf ("Test passed\n"); else printf ("Test Failed\n");
+      return pass;
     }
 
     int main()
     {
-      int i;
-      random_T(in1, SIZE);
-      random_T(in2, SIZE);
+      int i;
+      random_T(in1, SIZE);
+      random_T(in2, SIZE);
 
-      #pragma offload target(mic) in(in1,in2)  inout(res)
-      {
+      #pragma offload target(mic) in(in1,in2)  inout(res)
+      {
 
-        // Parallel loop from main function
-        #pragma omp parallel for
-        for (i=0; i<SIZE; i++)
-          res[i] = in1[i] + in2[i];
+        // Parallel loop from main function
+        #pragma omp parallel for
+        for (i=0; i<SIZE; i++)
+          res[i] = in1[i] + in2[i];
 
-        // or parallel loop is called inside the function
-        add_mic(in1, in2, res, SIZE);
+        // or parallel loop is called inside the function
+        add_mic(in1, in2, res, SIZE);
 
-      }
+      }
 
-      //Check the results with CPU implementation
-      T res_cpu[SIZE];
-      add_cpu(in1, in2, res_cpu, SIZE);
-      compare(res, res_cpu, SIZE);
+      //Check the results with CPU implementation
+      T res_cpu[SIZE];
+      add_cpu(in1, in2, res_cpu, SIZE);
+      compare(res, res_cpu, SIZE);
 
     }
 ```
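+
+A build sketch for this example (the source file name is illustrative; older Intel compilers accept -openmp, newer ones -qopenmp):
+
+```bash
+    $ icc -openmp vect-add.c -o vect-add
+    $ ./vect-add
+```
+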
@@ -281,48 +281,48 @@ Following example show how to automatically offload an SGEMM (single precision -
 
     int main(int argc, char **argv)
     {
-            float *A, *B, *C; /* Matrices */
+            float *A, *B, *C; /* Matrices */
 
-            MKL_INT N = 2560; /* Matrix dimensions */
-            MKL_INT LD = N; /* Leading dimension */
-            int matrix_bytes; /* Matrix size in bytes */
-            int matrix_elements; /* Matrix size in elements */
+            MKL_INT N = 2560; /* Matrix dimensions */
+            MKL_INT LD = N; /* Leading dimension */
+            int matrix_bytes; /* Matrix size in bytes */
+            int matrix_elements; /* Matrix size in elements */
 
-            float alpha = 1.0, beta = 1.0; /* Scaling factors */
-            char transa = 'N', transb = 'N'; /* Transposition options */
+            float alpha = 1.0, beta = 1.0; /* Scaling factors */
+            char transa = 'N', transb = 'N'; /* Transposition options */
 
-            int i, j; /* Counters */
+            int i, j; /* Counters */
 
-            matrix_elements = N * N;
-            matrix_bytes = sizeof(float) * matrix_elements;
+            matrix_elements = N * N;
+            matrix_bytes = sizeof(float) * matrix_elements;
 
-            /* Allocate the matrices */
-            A = malloc(matrix_bytes); B = malloc(matrix_bytes); C = malloc(matrix_bytes);
+            /* Allocate the matrices */
+            A = malloc(matrix_bytes); B = malloc(matrix_bytes); C = malloc(matrix_bytes);
 
-            /* Initialize the matrices */
-            for (i = 0; i < matrix_elements; i++) {
-                    A[i] = 1.0; B[i] = 2.0; C[i] = 0.0;
-            }
+            /* Initialize the matrices */
+            for (i = 0; i < matrix_elements; i++) {
+                    A[i] = 1.0; B[i] = 2.0; C[i] = 0.0;
+            }
 
-            printf("Computing SGEMM on the hostn");
-            sgemm(&transa, &transb, &N, &N, &N, &alpha, A, &N, B, &N, &beta, C, &N);
+            printf("Computing SGEMM on the host\n");
+            sgemm(&transa, &transb, &N, &N, &N, &alpha, A, &N, B, &N, &beta, C, &N);
 
-            printf("Enabling Automatic Offloadn");
-            /* Alternatively, set environment variable MKL_MIC_ENABLE=1 */
-            mkl_mic_enable();
+            printf("Enabling Automatic Offload\n");
+            /* Alternatively, set environment variable MKL_MIC_ENABLE=1 */
+            mkl_mic_enable();
 
-            int ndevices = mkl_mic_get_device_count(); /* Number of MIC devices */
-            printf("Automatic Offload enabled: %d MIC devices presentn",   ndevices);
+            int ndevices = mkl_mic_get_device_count(); /* Number of MIC devices */
+            printf("Automatic Offload enabled: %d MIC devices present\n", ndevices);
 
-            printf("Computing SGEMM with automatic workdivisionn");
-            sgemm(&transa, &transb, &N, &N, &N, &alpha, A, &N, B, &N, &beta, C, &N);
+            printf("Computing SGEMM with automatic workdivision\n");
+            sgemm(&transa, &transb, &N, &N, &N, &alpha, A, &N, B, &N, &beta, C, &N);
 
-            /* Free the matrix memory */
-            free(A); free(B); free(C);
+            /* Free the matrix memory */
+            free(A); free(B); free(C);
 
-            printf("Donen");
+            printf("Done\n");
 
-        return 0;
+        return 0;
     }
 ```
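+
+A build and run sketch (the source file name is illustrative; the OFFLOAD_REPORT variable is assumed to be what produces the [MKL] report lines shown below):
+
+```bash
+    $ icc -mkl sgemm-ao.c -o sgemm-ao.x
+    $ OFFLOAD_REPORT=2 ./sgemm-ao.x
+```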
 
@@ -348,10 +348,10 @@ The output of a code should look similar to following listing, where lines start
     Enabling Automatic Offload
     Automatic Offload enabled: 1 MIC devices present
     Computing SGEMM with automatic workdivision
-    [MKL] [MIC --] [AO Function]    SGEMM
-    [MKL] [MIC --] [AO SGEMM Workdivision]  0.00 1.00
-    [MKL] [MIC 00] [AO SGEMM CPU Time]      0.463351 seconds
-    [MKL] [MIC 00] [AO SGEMM MIC Time]      0.179608 seconds
+    [MKL] [MIC --] [AO Function]    SGEMM
+    [MKL] [MIC --] [AO SGEMM Workdivision]  0.00 1.00
+    [MKL] [MIC 00] [AO SGEMM CPU Time]      0.463351 seconds
+    [MKL] [MIC 00] [AO SGEMM MIC Time]      0.179608 seconds
     [MKL] [MIC 00] [AO SGEMM CPU->MIC Data] 52428800 bytes
     [MKL] [MIC 00] [AO SGEMM MIC->CPU Data] 26214400 bytes
     Done
@@ -478,23 +478,23 @@ After executing the complied binary file, following output should be displayed.
 
     Number of available platforms: 1
     Platform names:
-        [0] Intel(R) OpenCL [Selected]
+        [0] Intel(R) OpenCL [Selected]
     Number of devices available for each type:
-        CL_DEVICE_TYPE_CPU: 1
-        CL_DEVICE_TYPE_GPU: 0
-        CL_DEVICE_TYPE_ACCELERATOR: 1
+        CL_DEVICE_TYPE_CPU: 1
+        CL_DEVICE_TYPE_GPU: 0
+        CL_DEVICE_TYPE_ACCELERATOR: 1
 
     ** Detailed information for each device ***
 
     CL_DEVICE_TYPE_CPU[0]
-        CL_DEVICE_NAME:        Intel(R) Xeon(R) CPU E5-2470 0 @ 2.30GHz
-        CL_DEVICE_AVAILABLE: 1
+        CL_DEVICE_NAME:        Intel(R) Xeon(R) CPU E5-2470 0 @ 2.30GHz
+        CL_DEVICE_AVAILABLE: 1
 
     ...
 
     CL_DEVICE_TYPE_ACCELERATOR[0]
-        CL_DEVICE_NAME: Intel(R) Many Integrated Core Acceleration Card
-        CL_DEVICE_AVAILABLE: 1
+        CL_DEVICE_NAME: Intel(R) Many Integrated Core Acceleration Card
+        CL_DEVICE_AVAILABLE: 1
 
     ...
 ```
@@ -578,23 +578,23 @@ An example of basic MPI version of "hello-world" example in C language, that can
     #include <mpi.h>
 
     int main (argc, argv)
-         int argc;
-         char *argv[];
+         int argc;
+         char *argv[];
     {
-      int rank, size;
+      int rank, size;
 
-      int len;
-      char node[MPI_MAX_PROCESSOR_NAME];
+      int len;
+      char node[MPI_MAX_PROCESSOR_NAME];
 
-      MPI_Init (&argc, &argv);      /* starts MPI */
-      MPI_Comm_rank (MPI_COMM_WORLD, &rank);        /* get current process id */
-      MPI_Comm_size (MPI_COMM_WORLD, &size);        /* get number of processes */
+      MPI_Init (&argc, &argv);      /* starts MPI */
+      MPI_Comm_rank (MPI_COMM_WORLD, &rank);        /* get current process id */
+      MPI_Comm_size (MPI_COMM_WORLD, &size);        /* get number of processes */
 
-      MPI_Get_processor_name(node,&len);
+      MPI_Get_processor_name(node,&len);
 
-      printf( "Hello world from process %d of %d on host %s n", rank, size, node );
-      MPI_Finalize();
-      return 0;
+      printf( "Hello world from process %d of %d on host %s \n", rank, size, node );
+      MPI_Finalize();
+      return 0;
     }
 ```
 
@@ -722,8 +722,8 @@ The output should be again similar to:
 	Please note that the **"mpiexec.hydra"** requires a file on the MIC filesystem. If the file is missing, please contact the system administrators. A simple test to see if the file is present is to execute:
 
 ```bash
-      $ ssh mic0 ls /bin/pmi_proxy
-      /bin/pmi_proxy
+      $ ssh mic0 ls /bin/pmi_proxy
+      /bin/pmi_proxy
 ```
 
 **Execution on host - MPI processes distributed over multiple accelerators on multiple nodes**
@@ -769,8 +769,8 @@ The launch the MPI program use:
 ```bash
     $ mpiexec.hydra -genv LD_LIBRARY_PATH /apps/intel/impi/4.1.1.036/mic/lib/
      -genv I_MPI_FABRICS_LIST tcp
-     -genv I_MPI_FABRICS shm:tcp
-     -genv I_MPI_TCP_NETMASK=10.1.0.0/16
+     -genv I_MPI_FABRICS shm:tcp
+     -genv I_MPI_TCP_NETMASK=10.1.0.0/16
      -host cn204-mic0 -n 4 ~/mpi-test-mic
     : -host cn205-mic0 -n 6 ~/mpi-test-mic
 ```
@@ -779,8 +779,8 @@ or using mpirun:
 ```bash
     $ mpirun -genv LD_LIBRARY_PATH /apps/intel/impi/4.1.1.036/mic/lib/
      -genv I_MPI_FABRICS_LIST tcp
-     -genv I_MPI_FABRICS shm:tcp
-     -genv I_MPI_TCP_NETMASK=10.1.0.0/16
+     -genv I_MPI_FABRICS shm:tcp
+     -genv I_MPI_TCP_NETMASK=10.1.0.0/16
      -host cn204-mic0 -n 4 ~/mpi-test-mic
     : -host cn205-mic0 -n 6 ~/mpi-test-mic
 ```
@@ -805,8 +805,8 @@ The same way MPI program can be executed on multiple hosts:
 ```bash
     $ mpiexec.hydra -genv LD_LIBRARY_PATH /apps/intel/impi/4.1.1.036/mic/lib/
      -genv I_MPI_FABRICS_LIST tcp
-     -genv I_MPI_FABRICS shm:tcp
-     -genv I_MPI_TCP_NETMASK=10.1.0.0/16
+     -genv I_MPI_FABRICS shm:tcp
+     -genv I_MPI_TCP_NETMASK=10.1.0.0/16
      -host cn204 -n 4 ~/mpi-test
     : -host cn205 -n 6 ~/mpi-test
 ```
@@ -822,7 +822,7 @@ In the previous section we have compiled two binary files, one for hosts "**mpi-
     $ mpiexec.hydra
      -genv I_MPI_FABRICS_LIST tcp
      -genv I_MPI_FABRICS shm:tcp
-     -genv I_MPI_TCP_NETMASK=10.1.0.0/16
+     -genv I_MPI_TCP_NETMASK=10.1.0.0/16
      -genv LD_LIBRARY_PATH /apps/intel/impi/4.1.1.036/mic/lib/
      -host cn205 -n 2 ~/mpi-test
     : -host cn205-mic0 -n 2 ~/mpi-test-mic
@@ -851,7 +851,7 @@ An example of a machine file that uses 2 >hosts (**cn205** and **cn206**) and 2
     cn206-mic0:2
 ```
 
-In addition if a naming convention is set in a way that the name of the binary for host is **"bin_name"**  and the name of the binary for the accelerator is **"bin_name-mic"** then by setting up the environment variable **I_MPI_MIC_POSTFIX** to **"-mic"** user do not have to specify the names of booth binaries. In this case mpirun needs just the name of the host binary file (i.e. "mpi-test") and uses the suffix to get a name of the binary for accelerator (i..e. "mpi-test-mic").
+In addition, if a naming convention is set up so that the name of the binary for the host is **"bin_name"** and the name of the binary for the accelerator is **"bin_name-mic"**, then by setting the environment variable **I_MPI_MIC_POSTFIX** to **"-mic"** the user does not have to specify the names of both binaries. In this case mpirun needs just the name of the host binary file (i.e. "mpi-test") and uses the suffix to get the name of the binary for the accelerator (i.e. "mpi-test-mic").
 
 ```bash
     $ export I_MPI_MIC_POSTFIX=-mic
@@ -864,8 +864,8 @@ To run the MPI code using mpirun and the machine file "hosts_file_mix" use:
      -genv I_MPI_FABRICS shm:tcp
      -genv LD_LIBRARY_PATH /apps/intel/impi/4.1.1.036/mic/lib/
      -genv I_MPI_FABRICS_LIST tcp
-     -genv I_MPI_FABRICS shm:tcp
-     -genv I_MPI_TCP_NETMASK=10.1.0.0/16
+     -genv I_MPI_FABRICS shm:tcp
+     -genv I_MPI_TCP_NETMASK=10.1.0.0/16
      -machinefile hosts_file_mix
      ~/mpi-test
 ```
@@ -901,4 +901,4 @@ Please note each host or accelerator is listed only per files. User has to speci
 
 Optimization
 ------------
-For more details about optimization techniques please read Intel document [Optimization and Performance Tuning for Intel® Xeon Phi™ Coprocessors](http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization "http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization")
+For more details about optimization techniques, please read the Intel document [Optimization and Performance Tuning for Intel® Xeon Phi™ Coprocessors](http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization "http://software.intel.com/en-us/articles/optimization-and-performance-tuning-for-intel-xeon-phi-coprocessors-part-1-optimization").
diff --git a/docs.it4i/salomon/software/java.md b/docs.it4i/salomon/software/java.md
index ca5b0bf39..70b522d0b 100644
--- a/docs.it4i/salomon/software/java.md
+++ b/docs.it4i/salomon/software/java.md
@@ -25,4 +25,4 @@ With the module loaded, not only the runtime environment (JRE), but also the dev
     $ which javac
 ```
 
-Java applications may use MPI for inter-process communication, in conjunction with Open MPI. Read more on <http://www.open-mpi.org/faq/?category=java>. This functionality is currently not supported on Anselm cluster. In case you require the java interface to MPI, please contact [cluster support](https://support.it4i.cz/rt/).
+Java applications may use MPI for inter-process communication, in conjunction with Open MPI. Read more at <http://www.open-mpi.org/faq/?category=java>. This functionality is currently not supported on the cluster. In case you require the Java interface to MPI, please contact [cluster support](https://support.it4i.cz/rt/).
diff --git a/docs.it4i/salomon/software/mpi/Running_OpenMPI.md b/docs.it4i/salomon/software/mpi/Running_OpenMPI.md
index 92c8d475c..d6656a122 100644
--- a/docs.it4i/salomon/software/mpi/Running_OpenMPI.md
+++ b/docs.it4i/salomon/software/mpi/Running_OpenMPI.md
@@ -96,7 +96,7 @@ In this example, we demonstrate recommended way to run an MPI application, using
 ### OpenMP thread affinity
 
 !!! Note "Note"
-	Important!  Bind every OpenMP thread to a core!
+	Important! Bind every OpenMP thread to a core!
 
 In the previous two examples with one or two MPI processes per node, the operating system might still migrate OpenMP threads between cores. You might want to avoid this by setting these environment variable for GCC OpenMP:
 
diff --git a/docs.it4i/salomon/software/mpi/mpi.md b/docs.it4i/salomon/software/mpi/mpi.md
index 0af8a881a..e17e0a08c 100644
--- a/docs.it4i/salomon/software/mpi/mpi.md
+++ b/docs.it4i/salomon/software/mpi/mpi.md
@@ -42,7 +42,7 @@ Examples:
     $ module load gompi/2015b
 ```
 
-In this example, we activate the latest OpenMPI with latest GNU compilers (OpenMPI 1.8.6 and GCC 5.1). Please see more information about toolchains in section [Environment and Modules](../../environment-and-modules/) .
+In this example, we activate the latest OpenMPI with the latest GNU compilers (OpenMPI 1.8.6 and GCC 5.1). Please see more information about toolchains in the [Environment and Modules](../../environment-and-modules/) section.
 
 To use OpenMPI with the intel compiler suite, use
 
@@ -125,12 +125,12 @@ Consider these ways to run an MPI program:
 2. Two MPI processes per node, 12 threads per process
 3. 24 MPI processes per node, 1 thread per process.
 
-**One MPI** process per node, using 24 threads, is most useful for memory demanding applications, that make good use of processor cache memory and are not memory bound.  This is also a preferred way for communication intensive applications as one process per node enjoys full bandwidth access to the network interface.
+**One MPI** process per node, using 24 threads, is most useful for memory-demanding applications that make good use of processor cache memory and are not memory bound. This is also the preferred way for communication-intensive applications, as one process per node enjoys full bandwidth access to the network interface.
 
 **Two MPI** processes per node, using 12 threads each, bound to processor socket is most useful for memory bandwidth bound applications such as BLAS1 or FFT, with scalable memory demand. However, note that the two processes will share access to the network interface. The 12 threads and socket binding should ensure maximum memory access bandwidth and minimize communication, migration and numa effect overheads.
 
 !!! Note "Note"
-	Important!  Bind every OpenMP thread to a core!
+	Important! Bind every OpenMP thread to a core!
 
 In the previous two cases with one or two MPI processes per node, the operating system might still migrate OpenMP threads between cores. You want to avoid this by setting the KMP_AFFINITY or GOMP_CPU_AFFINITY environment variables.
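+
+For illustration, a minimal sketch (the values are assumptions; adapt them to your MPI process and thread layout):
+
+```bash
+    $ export GOMP_CPU_AFFINITY="0-23"                   # GCC OpenMP
+    $ export KMP_AFFINITY=granularity=fine,compact,1,0  # Intel OpenMP
+```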
 
diff --git a/docs.it4i/salomon/software/mpi/mpi4py-mpi-for-python.md b/docs.it4i/salomon/software/mpi/mpi4py-mpi-for-python.md
index 21adfada0..490b2cfc8 100644
--- a/docs.it4i/salomon/software/mpi/mpi4py-mpi-for-python.md
+++ b/docs.it4i/salomon/software/mpi/mpi4py-mpi-for-python.md
@@ -51,7 +51,7 @@ Examples
 
     print "Hello! I'm rank %d from %d running in total..." % (comm.rank, comm.size)
 
-    comm.Barrier()   # wait for everybody to synchronize
+    comm.Barrier()   # wait for everybody to synchronize
 ```
 
 ###Collective Communication with NumPy arrays
@@ -72,9 +72,9 @@ Examples
     # Prepare a vector of N=5 elements to be broadcasted...
     N = 5
     if comm.rank == 0:
-        A = np.arange(N, dtype=np.float64)    # rank 0 has proper data
+        A = np.arange(N, dtype=np.float64)    # rank 0 has proper data
     else:
-        A = np.empty(N, dtype=np.float64)     # all other just an empty array
+        A = np.empty(N, dtype=np.float64)     # all other just an empty array
 
     # Broadcast A from rank 0 to everybody
     comm.Bcast( [A, MPI.DOUBLE] )
diff --git a/docs.it4i/salomon/software/numerical-languages/matlab.md b/docs.it4i/salomon/software/numerical-languages/matlab.md
index f1631d175..7eebf17ad 100644
--- a/docs.it4i/salomon/software/numerical-languages/matlab.md
+++ b/docs.it4i/salomon/software/numerical-languages/matlab.md
@@ -24,7 +24,7 @@ If you need to use the Matlab GUI to prepare your Matlab programs, you can use M
 
 If you require the Matlab GUI, please follow the general informations about [running graphical applications](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/).
 
-Matlab GUI is quite slow using the X forwarding built in the PBS (qsub -X), so using X11 display redirection either via SSH or directly by xauth (please see the "GUI Applications on Compute Nodes over VNC" part [here](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/)) is recommended.
+The Matlab GUI is quite slow when using the X forwarding built into PBS (qsub -X), so using X11 display redirection either via SSH or directly by xauth (please see the "GUI Applications on Compute Nodes over VNC" part [here](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/)) is recommended.
 
 To run Matlab with GUI, use
 
@@ -44,7 +44,7 @@ Running parallel Matlab using Distributed Computing Toolbox / Engine
 ------------------------------------------------------------------------
 Distributed toolbox is available only for the EDU variant
 
-The MPIEXEC mode available in previous versions is no longer available in MATLAB 2015. Also, the programming interface has changed. Refer to [Release Notes](http://www.mathworks.com/help/distcomp/release-notes.html#buanp9e-1).
+The MPIEXEC mode available in previous versions is no longer available in MATLAB 2015. Also, the programming interface has changed. Refer to [Release Notes](http://www.mathworks.com/help/distcomp/release-notes.html#buanp9e-1).
 
 Delete previously used file mpiLibConf.m, we have observed crashes when using Intel MPI.
 
@@ -112,7 +112,7 @@ To run matlab in batch mode, write an matlab script, then write a bash jobscript
     cp output.out $PBS_O_WORKDIR/.
 ```
 
-This script may be submitted directly to the PBS workload manager via the qsub command.  The inputs and matlab script are in matlabcode.m file, outputs in output.out file. Note the missing .m extension in the matlab -r matlabcodefile call, **the .m must not be included**.  Note that the **shared /scratch must be used**. Further, it is **important to include quit** statement at the end of the matlabcode.m script.
+This script may be submitted directly to the PBS workload manager via the qsub command. The inputs and the Matlab script are in the matlabcode.m file, the outputs in the output.out file. Note the missing .m extension in the matlab -r matlabcodefile call; **the .m must not be included**. Note that the **shared /scratch must be used**. Further, it is **important to include the quit** statement at the end of the matlabcode.m script.
 
 Submit the jobscript using qsub
 
@@ -161,11 +161,11 @@ The complete example showing how to use Distributed Computing Toolbox in local m
     spmd
     [~, name] = system('hostname')
 
-        T = W*x; % Calculation performed on labs, in parallel.
-                 % T and W are both codistributed arrays here.
+        T = W*x; % Calculation performed on labs, in parallel.
+                 % T and W are both codistributed arrays here.
     end
     T;
-    whos         % T and W are both distributed arrays here.
+    whos         % T and W are both distributed arrays here.
 
     parpool close
     quit
@@ -215,7 +215,7 @@ This method is a "hack" invented by us to emulate the mpiexec functionality foun
 
 Please note that this method is experimental.
 
-For this method, you need to use SalomonDirect profile, import it using [the same way as SalomonPBSPro](matlab.md#running-parallel-matlab-using-distributed-computing-toolbox---engine)
+For this method, you need to use the SalomonDirect profile; import it [the same way as SalomonPBSPro](matlab.md#running-parallel-matlab-using-distributed-computing-toolbox---engine).
 
 This is an example of m-script using direct mode:
 
@@ -271,7 +271,7 @@ You can use MATLAB on UV2000 in two parallel modes:
 
 ### Threaded mode
 
-Since this is a SMP machine, you can completely avoid using Parallel Toolbox and use only MATLAB's threading. MATLAB will automatically detect the number of cores you have allocated and will set  maxNumCompThreads accordingly and certain operations, such as  fft, , eig, svd, etc. will be automatically run in threads. The advantage of this mode is that you don't need to modify your existing sequential codes.
+Since this is an SMP machine, you can completely avoid using the Parallel Toolbox and use only MATLAB's threading. MATLAB will automatically detect the number of cores you have allocated and will set maxNumCompThreads accordingly, and certain operations, such as fft, eig, svd, etc., will automatically run in threads. The advantage of this mode is that you don't need to modify your existing sequential codes.
 
 ### Local cluster mode
 
diff --git a/docs.it4i/salomon/software/numerical-languages/octave.md b/docs.it4i/salomon/software/numerical-languages/octave.md
index a827c9d81..a73c43bb1 100644
--- a/docs.it4i/salomon/software/numerical-languages/octave.md
+++ b/docs.it4i/salomon/software/numerical-languages/octave.md
@@ -46,7 +46,7 @@ To run octave in batch mode, write an octave script, then write a bash jobscript
     exit
 ```
 
-This script may be submitted directly to the PBS workload manager via the qsub command.  The inputs are in octcode.m file, outputs in output.out file. See the single node jobscript example in the [Job execution section](../../resource-allocation-and-job-execution/).
+This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the octcode.m file, the outputs in the output.out file. See the single node jobscript example in the [Job execution section](../../resource-allocation-and-job-execution/).
 
 The Octave C compiler mkoctfile calls the GNU gcc 4.8.1 for compiling native C code. This is very useful for running native C subroutines in the Octave environment.
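+
+A minimal usage sketch (the source file name is illustrative):
+
+```bash
+    $ mkoctfile -v myfunction.cc    # -v prints the underlying gcc command line; produces myfunction.oct
+```
+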
 
diff --git a/docs.it4i/salomon/software/numerical-languages/r.md b/docs.it4i/salomon/software/numerical-languages/r.md
index 838b43f86..721c67cbe 100644
--- a/docs.it4i/salomon/software/numerical-languages/r.md
+++ b/docs.it4i/salomon/software/numerical-languages/r.md
@@ -3,7 +3,7 @@ R
 
 Introduction
 ------------
-The R is a language and environment for statistical computing and graphics.  R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, ...) and graphical techniques, and is highly extensible.
+R is a language and environment for statistical computing and graphics. R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, ...) and graphical techniques, and is highly extensible.
 
 One of R's strengths is the ease with which well-designed publication-quality plots can be produced, including mathematical symbols and formulae where needed. Great care has been taken over the defaults for the minor design choices in graphics, but the user retains full control.
 
@@ -68,11 +68,11 @@ Example jobscript:
     exit
 ```
 
-This script may be submitted directly to the PBS workload manager via the qsub command.  The inputs are in rscript.R file, outputs in routput.out file. See the single node jobscript example in the [Job execution section](../../resource-allocation-and-job-execution/job-submission-and-execution/).
+This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the rscript.R file, the outputs in the routput.out file. See the single node jobscript example in the [Job execution section](../../resource-allocation-and-job-execution/job-submission-and-execution/).
 
 Parallel R
 ----------
-Parallel execution of R may be achieved in many ways. One approach is the implied parallelization due to linked libraries or specially enabled functions, as [described above](r/#interactive-execution). In the following sections, we focus on explicit parallelization, where  parallel constructs are directly stated within the R script.
+Parallel execution of R may be achieved in many ways. One approach is the implied parallelization due to linked libraries or specially enabled functions, as [described above](r/#interactive-execution). In the following sections, we focus on explicit parallelization, where parallel constructs are directly stated within the R script.
 
 Package parallel
 --------------------
@@ -349,7 +349,7 @@ mpi.apply Rmpi example:
     mpi.quit()
 ```
 
-The above is the mpi.apply MPI example for calculating the number π. Only the slave processes carry out the calculation. Note the **mpi.parSapply(), ** function call. The package parallel [example](r/#package-parallel)[above](r/#package-parallel) may be trivially adapted (for much better performance) to this structure using the mclapply() in place of mpi.parSapply().
+The above is the mpi.apply MPI example for calculating the number π. Only the slave processes carry out the calculation. Note the **mpi.parSapply()** function call. The package parallel [example above](r/#package-parallel) may be trivially adapted (for much better performance) to this structure, using mclapply() in place of mpi.parSapply().
 
 Execute the example as:
 
diff --git a/docs.it4i/salomon/storage.md b/docs.it4i/salomon/storage.md
index 7a90bd10f..a8860f734 100644
--- a/docs.it4i/salomon/storage.md
+++ b/docs.it4i/salomon/storage.md
@@ -24,7 +24,7 @@ Please don't use shared file systems as a backup for large amount of data or lon
 
 Shared File systems
 ----------------------
-Salomon computer provides two main shared file systems, the [HOME file system](#home-filesystem) and the [SCRATCH file system](#scratch-filesystem). The SCRATCH file system is partitioned to [WORK and TEMP workspaces](#shared-workspaces). The HOME file system is realized as a tiered NFS disk storage. The SCRATCH file system is realized as a parallel Lustre file system. Both shared file systems are accessible via the Infiniband network. Extended ACLs are provided on both HOME/SCRATCH file systems for the purpose of sharing data with other users using fine-grained control.
+The Salomon computer provides two main shared file systems, the [HOME file system](#home-filesystem) and the [SCRATCH file system](#scratch-filesystem). The SCRATCH file system is partitioned into [WORK and TEMP workspaces](#shared-workspaces). The HOME file system is realized as a tiered NFS disk storage. The SCRATCH file system is realized as a parallel Lustre file system. Both shared file systems are accessible via the InfiniBand network. Extended ACLs are provided on both HOME/SCRATCH file systems for the purpose of sharing data with other users using fine-grained control.
 
 ###HOME file system
 
@@ -32,7 +32,7 @@ The HOME file system is realized as a Tiered file system, exported via NFS. The
 
 ###SCRATCH file system
 
-The  architecture of Lustre on Salomon is composed of two metadata servers (MDS) and six data/object storage servers (OSS). Accessible capacity is 1.69 PB, shared among all users. The SCRATCH file system hosts the [WORK and TEMP workspaces](#shared-workspaces).
+The architecture of Lustre on Salomon is composed of two metadata servers (MDS) and six data/object storage servers (OSS). Accessible capacity is 1.69 PB, shared among all users. The SCRATCH file system hosts the [WORK and TEMP workspaces](#shared-workspaces).
 
 Configuration of the SCRATCH Lustre storage
 
@@ -124,10 +124,10 @@ Example for Lustre SCRATCH directory:
 $ lfs quota /scratch
 Disk quotas for user user001 (uid 1234):
      Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
-          /scratch       8       0 100000000000       -       3       0       0       -
+          /scratch       8       0 100000000000       -       3       0       0       -
 Disk quotas for group user001 (gid 1234):
  Filesystem kbytes quota limit grace files quota limit grace
- /scratch       8       0       0       -       3       0       0       -
+ /scratch       8       0       0       -       3       0       0       -
 ```
 
 In this example, we view current quota size limit of 100TB and 8KB currently used by user001.
@@ -135,7 +135,7 @@ In this example, we view current quota size limit of 100TB and 8KB currently use
 HOME directory is mounted via NFS, so a different command must be used to obtain quota information:
 
 ```bash
-     $ quota
+     $ quota
 ```
 
 Example output:
@@ -166,7 +166,7 @@ $ du -hs * .[a-zA-z0-9]* | grep -E "[0-9]*G|[0-9]*M" | sort -hr
 2,7M     .idb_13.0_linux_intel64_app
 ```
 
-This will list all directories which are having MegaBytes or GigaBytes of consumed space in your actual (in this example HOME) directory. List is sorted in descending order from largest to smallest files/directories.
+This will list all directories that consume megabytes or gigabytes of space in your current (in this example HOME) directory. The list is sorted in descending order from largest to smallest files/directories.
 
 To have a better understanding of previous commands, you can read manpages.
 
@@ -180,7 +180,7 @@ $ man du
 
 Extended Access Control List (ACL)
 ----------------------------------
-Extended ACLs provide another security mechanism beside the standard POSIX ACLs which are defined by three entries (for owner/group/others). Extended ACLs have more than the three basic entries. In addition, they also contain a mask entry and may contain any number of named user and named group entries.
+Extended ACLs provide another security mechanism besides the standard POSIX ACLs, which are defined by three entries (for owner/group/others). Extended ACLs have more than the three basic entries. In addition, they also contain a mask entry and may contain any number of named user and named group entries.
 
 ACLs on a Lustre file system work exactly like ACLs on any Linux file system. They are manipulated with the standard tools in the standard manner. Below, we create a directory and allow a specific user access.
 
@@ -188,7 +188,7 @@ ACLs on a Lustre file system work exactly like ACLs on any Linux file system. Th
 [vop999@login1.salomon ~]$ umask 027
 [vop999@login1.salomon ~]$ mkdir test
 [vop999@login1.salomon ~]$ ls -ld test
-drwxr-x--- 2 vop999 vop999 4096 Nov  5 14:17 test
+drwxr-x--- 2 vop999 vop999 4096 Nov  5 14:17 test
 [vop999@login1.salomon ~]$ getfacl test
 # file: test
 # owner: vop999
@@ -199,7 +199,7 @@ other::---
 
 [vop999@login1.salomon ~]$ setfacl -m user:johnsm:rwx test
 [vop999@login1.salomon ~]$ ls -ld test
-drwxrwx---+ 2 vop999 vop999 4096 Nov  5 14:17 test
+drwxrwx---+ 2 vop999 vop999 4096 Nov  5 14:17 test
 [vop999@login1.salomon ~]$ getfacl test
 # file: test
 # owner: vop999
@@ -213,23 +213,23 @@ other::---
 
 Default ACL mechanism can be used to replace setuid/setgid permissions on directories. Setting a default ACL on a directory (-d flag to setfacl) will cause the ACL permissions to be inherited by any newly created file or subdirectory within the directory. Refer to this page for more information on Linux ACL:
 
-[http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html ](http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html)
+[http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html](http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html)
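+
+For example, to make the permissions for user johnsm (from the example above) inherited by anything newly created inside the test directory, a default ACL entry could be added like this:
+
+```bash
+[vop999@login1.salomon ~]$ setfacl -d -m user:johnsm:rwx test
+```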
 
 Shared Workspaces
 ---------------------
 
 ###HOME
 
-Users home directories /home/username reside on HOME file system. Accessible capacity is 0.5 PB, shared among all users. Individual users are restricted by file system usage quotas, set to 250 GB per user. If 250 GB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request.
+Users' home directories /home/username reside on the HOME file system. Accessible capacity is 0.5 PB, shared among all users. Individual users are restricted by file system usage quotas, set to 250 GB per user. If 250 GB proves insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.
 
 !!! Note "Note"
 	The HOME file system is intended for preparation, evaluation, processing and storage of data generated by active Projects.
 
-The HOME  should not be used to archive data of past Projects or other unrelated data.
+The HOME file system should not be used to archive data of past Projects or other unrelated data.
 
 The files on HOME will not be deleted until end of the [users lifecycle](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/).
 
-The workspace is backed up, such that it can be restored in case of catasthropic failure resulting in significant data loss. This backup however is not intended to restore old versions of user data or to restore (accidentaly) deleted files.
+The workspace is backed up, such that it can be restored in case of catastrophic failure resulting in significant data loss. This backup, however, is not intended to restore old versions of user data or to restore (accidentally) deleted files.
 
 |HOME workspace||
 |---|---|
@@ -241,10 +241,10 @@ The workspace is backed up, such that it can be restored in case of catasthropi
 
 ### WORK
 
-The WORK workspace resides on SCRATCH file system.  Users may create subdirectories and files in directories **/scratch/work/user/username** and **/scratch/work/project/projectid. **The /scratch/work/user/username is private to user, much like the home directory. The /scratch/work/project/projectid is accessible to all users involved in project projectid.
+The WORK workspace resides on the SCRATCH file system. Users may create subdirectories and files in the directories **/scratch/work/user/username** and **/scratch/work/project/projectid**. The /scratch/work/user/username directory is private to the user, much like the home directory. The /scratch/work/project/projectid directory is accessible to all users involved in the project projectid.
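+
+For example, a private working directory might be created as follows (the subdirectory name is illustrative):
+
+```bash
+    $ mkdir -p /scratch/work/user/$USER/myjob
+```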
 
 !!! Note "Note"
-	The WORK workspace is intended  to store users project data as well as for high performance access to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up.
+	The WORK workspace is intended to store users' project data as well as for high performance access to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up.
 
 	Files on the WORK file system are **persistent** (not automatically deleted) throughout duration of the project.
 
@@ -266,10 +266,10 @@ The WORK workspace is hosted on SCRATCH file system. The SCRATCH is realized as
 
 ### TEMP
 
-The TEMP workspace resides on SCRATCH file system. The TEMP workspace accesspoint is  /scratch/temp.  Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire file system and deny service to other users. >If 100 TB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request.
+The TEMP workspace resides on the SCRATCH file system. The TEMP workspace access point is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire file system and denying service to other users. If 100 TB proves insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.
 
 !!! Note "Note"
-	The TEMP workspace is intended  for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory.
+	The TEMP workspace is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory.
 
 	Users are advised to save the necessary data from the TEMP workspace to HOME or WORK after the calculations and clean up the scratch files.
 
@@ -300,7 +300,7 @@ Every computational node is equipped with file system realized in memory, so cal
 
 The local RAM disk is mounted as /ramdisk and is accessible to user at /ramdisk/$PBS_JOBID directory.
 
-The local RAM disk file system is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. Size of RAM disk file system is limited. Be very careful, use of RAM disk file system is at the expense of operational memory.  It is not recommended to allocate large amount of memory and use large amount of data in RAM disk file system at the same time.
+The local RAM disk file system is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. The size of the RAM disk file system is limited. Be very careful: use of the RAM disk file system is at the expense of operational memory. It is not recommended to allocate a large amount of memory and use a large amount of data in the RAM disk file system at the same time.
 
 !!! Note "Note"
 	The local RAM disk directory /ramdisk/$PBS_JOBID will be deleted immediately after the calculation end. Users should take care to save the output data from within the jobscript.
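+
+A jobscript sketch using the RAM disk (program and file names are illustrative):
+
+```bash
+    cd /ramdisk/$PBS_JOBID
+    cp $PBS_O_WORKDIR/input.dat .
+    $PBS_O_WORKDIR/myprogram input.dat > output.out
+    cp output.out $PBS_O_WORKDIR/.
+```
+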
@@ -358,7 +358,7 @@ Once registered for CESNET Storage, you may [access the storage](https://du.cesn
 !!! Note "Note"
 	SSHFS: The storage will be mounted like a local hard drive
 
-The SSHFS  provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard drive. Files can be than copied in and out in a usual fashion.
+SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it were a local removable hard drive. Files can then be copied in and out in the usual fashion.
 
 First, create the mount point
 
@@ -403,9 +403,9 @@ Once done, please remember to unmount the storage
 !!! Note "Note"
 	Rsync provides delta transfer for best performance, can resume interrupted transfers
 
-Rsync is a fast and extraordinarily versatile file copying tool. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination.  Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.
+Rsync is a fast and extraordinarily versatile file copying tool. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.
 
-Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time.  Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated.
+Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time. Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated.
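+
+A typical transfer might look like this (the remote host and paths are placeholders; use the CESNET endpoints described at the link below):
+
+```bash
+    $ rsync -av --progress mydata/ username@storage-host:backup/mydata/
+```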
 
 More about Rsync at <https://du.cesnet.cz/en/navody/rsync/start#pro_bezne_uzivatele>
 
-- 
GitLab