diff --git a/docs.it4i/anselm/capacity-computing.md b/docs.it4i/anselm/capacity-computing.md
index 2f7d0cb006540a50f990625026de582bd3fef4e2..4b191d815f60c55da8facc68533ece9721a6c578 100644
--- a/docs.it4i/anselm/capacity-computing.md
+++ b/docs.it4i/anselm/capacity-computing.md
@@ -2,14 +2,14 @@
 
 ## Introduction
 
-In many cases, it is useful to submit huge (>100+) number of computational jobs into the PBS queue system. Huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving best runtime, throughput and computer utilization.
+In many cases, it is useful to submit a huge (>100+) number of computational jobs into the PBS queue system. A huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving the best runtime, throughput, and computer utilization.
 
-However, executing huge number of jobs via the PBS queue may strain the system. This strain may result in slow response to commands, inefficient scheduling and overall degradation of performance and user experience, for all users. For this reason, the number of jobs is **limited to 100 per user, 1000 per job array**
+However, executing a huge number of jobs via the PBS queue may strain the system. This strain may result in slow responses to commands, inefficient scheduling, and an overall degradation of performance and user experience for all users. For this reason, the number of jobs is **limited to 100 per user, 1000 per job array**.
 
 !!! note
     Please follow one of the procedures below, in case you wish to schedule more than 100 jobs at a time.
 
-* Use [Job arrays](capacity-computing/#job-arrays) when running huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
+* Use [Job arrays](capacity-computing/#job-arrays) when running a huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
 * Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
 * Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running huge number of single core jobs
 
@@ -21,7 +21,7 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 ## Job Arrays
 
 !!! note
-    Huge number of jobs may be easily submitted and managed as a job array.
+    A huge number of jobs may easily be submitted and managed as a job array.
 
 A job array is a compact representation of many jobs, called subjobs. The subjobs share the same job script, and have the same values for all attributes and resources, with the following exceptions:
 
@@ -29,15 +29,15 @@ A job array is a compact representation of many jobs, called subjobs. The subjob
 * job Identifiers of subjobs only differ by their indices
 * the state of subjobs can differ (R,Q,...etc.)
 
-All subjobs within a job array have the same scheduling priority and schedule as independent jobs. Entire job array is submitted through a single qsub command and may be managed by qdel, qalter, qhold, qrls and qsig commands as a single job.
+All subjobs within a job array have the same scheduling priority and schedule as independent jobs. An entire job array is submitted through a single qsub command and may be managed by qdel, qalter, qhold, qrls, and qsig commands as a single job.
 
 ### Shared Jobscript
 
-All subjobs in job array use the very same, single jobscript. Each subjob runs its own instance of the jobscript. The instances execute different work controlled by $PBS_ARRAY_INDEX variable.
+All subjobs in a job array use the same single jobscript. Each subjob runs its own instance of the jobscript. The instances execute different work, controlled by the $PBS_ARRAY_INDEX variable.
 
 Example:
 
-Assume we have 900 input files with name beginning with "file" (e. g. file001, ..., file900). Assume we would like to use each of these input files with program executable myprog.x, each as a separate job.
+Assume we have 900 input files with names beginning with "file" (e.g. file001, ..., file900). Assume we would like to use each of these input files with the program executable myprog.x, each as a separate job.
 
 First, we create a tasklist file (or subjobs list), listing all tasks (subjobs) - all input files in our example:
 
@@ -45,7 +45,7 @@ First, we create a tasklist file (or subjobs list), listing all tasks (subjobs)
 $ find . -name 'file*' > tasklist
 ```
 
-Then we create jobscript:
+Then we create the jobscript:
 
 ```bash
 #!/bin/bash
@@ -70,9 +70,9 @@ cp $PBS_O_WORKDIR/$TASK input ; cp $PBS_O_WORKDIR/myprog.x .
 cp output $PBS_O_WORKDIR/$TASK.out
 ```
 
-In this example, the submit directory holds the 900 input files, executable myprog.x and the jobscript file. As input for each run, we take the filename of input file from created tasklist file. We copy the input file to local scratch /lscratch/$PBS_JOBID, execute the myprog.x and copy the output file back to the submit directory, under the $TASK.out name. The myprog.x runs on one node only and must use threads to run in parallel. Be aware, that if the myprog.x **is not multithreaded**, then all the **jobs are run as single thread programs in sequential** manner. Due to allocation of the whole node, the accounted time is equal to the usage of whole node, while using only 1/16 of the node!
+In this example, the submit directory holds the 900 input files, the executable myprog.x, and the jobscript file. As input for each run, we take the filename of the input file from the created tasklist file. We copy the input file to the local scratch directory /lscratch/$PBS_JOBID, execute myprog.x, and copy the output file back to the submit directory, under the $TASK.out name. The myprog.x program runs on one node only and must use threads to run in parallel. Be aware that if myprog.x **is not multithreaded**, then all the **jobs are run as single thread programs in a sequential** manner. Due to the allocation of the whole node, the accounted time is equal to the usage of the whole node, while using only 1/16 of the node!
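The way a subjob can derive its own $TASK from the shared tasklist may be sketched in isolation (the sed call and the file names below are illustrative assumptions, not the exact jobscript contents):

```shell
# Build a tiny tasklist, then pick the entry matching a given subjob index,
# the same way a jobscript can map $PBS_ARRAY_INDEX to one input file.
printf '%s\n' ./file001 ./file002 ./file003 > tasklist
PBS_ARRAY_INDEX=2                               # set by PBS for each subjob
TASK=$(sed -n "${PBS_ARRAY_INDEX}p" tasklist)   # take the 2nd line
echo "$TASK"                                    # prints ./file002
```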
 
-If huge number of parallel multicore (in means of multinode multithread, e. g. MPI enabled) jobs is needed to run, then a job array approach should also be used. The main difference compared to previous example using one node is that the local scratch should not be used (as it's not shared between nodes) and MPI or other technique for parallel multinode run has to be used properly.
+If running a huge number of parallel multicore jobs (in the sense of multinode, multithreaded, e.g. MPI-enabled) is needed, then a job array approach should be used. The main difference compared to the previous example using one node is that the local scratch directory should not be used (as it is not shared between nodes) and MPI or another technique for parallel multinode processing has to be used properly.
 
 ### Submit the Job Array
 
@@ -83,9 +83,9 @@ $ qsub -N JOBNAME -J 1-900 jobscript
 12345[].dm2
 ```
 
-In this example, we submit a job array of 900 subjobs. Each subjob will run on full node and is assumed to take less than 2 hours (please note the #PBS directives in the beginning of the jobscript file, dont' forget to set your valid PROJECT_ID and desired queue).
+In this example, we submit a job array of 900 subjobs. Each subjob will run on one full node and is assumed to take less than 2 hours (please note the #PBS directives at the beginning of the jobscript file; don't forget to set your valid PROJECT_ID and desired queue).
 
-Sometimes for testing purposes, you may need to submit only one-element array. This is not allowed by PBSPro, but there's a workaround:
+Sometimes, for testing purposes, you may need to submit only a one-element array. This is not allowed by PBSPro, but there is a workaround:
 
 ```console
 $ qsub -N JOBNAME -J 9-10:2 jobscript
@@ -95,7 +95,7 @@ This will only choose the lower index (9 in this example) for submitting/running
 
 ### Manage the Job Array
 
-Check status of the job array by the qstat command.
+Check the status of the job array using the qstat command.
 
 ```console
 $ qstat -a 12345[].dm2
@@ -107,8 +107,8 @@ Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time S Time
 12345[].dm2     user2    qprod    xx          13516   1 16    --  00:50 B 00:02
 ```
 
-The status B means that some subjobs are already running.
-Check status of the first 100 subjobs by the qstat command.
+When the status is B, it means that some subjobs are already running.
+Check the status of the first 100 subjobs using the qstat command.
 
 ```console
 $ qstat -a 12345[1-100].dm2
@@ -152,7 +152,7 @@ Read more on job arrays in the [PBSPro Users guide](../pbspro/).
 !!! note
     Use GNU parallel to run many single core tasks on one node.
 
-GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful in running single core jobs via the queue system on Anselm.
+GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful when running single core jobs via the queue system on Anselm.
 
 For more information and examples see the parallel man page:
 
@@ -175,7 +175,7 @@ First, we create a tasklist file, listing all tasks - all input files in our exa
 $ find . -name 'file*' > tasklist
 ```
 
-Then we create jobscript:
+Then we create a jobscript:
 
 ```bash
 #!/bin/bash
@@ -203,11 +203,11 @@ cat input > output
 cp output $PBS_O_WORKDIR/$TASK.out
 ```
 
-In this example, tasks from tasklist are executed via the GNU parallel. The jobscript executes multiple instances of itself in parallel, on all cores of the node. Once an instace of jobscript is finished, new instance starts until all entries in tasklist are processed. Currently processed entry of the joblist may be retrieved via $1 variable. Variable $TASK expands to one of the input filenames from tasklist. We copy the input file to local scratch, execute the myprog.x and copy the output file back to the submit directory, under the $TASK.out name.
+In this example, tasks from the tasklist are executed via GNU parallel. The jobscript executes multiple instances of itself in parallel, on all cores of the node. Once an instance of the jobscript is finished, a new instance starts until all entries in the tasklist are processed. The currently processed entry of the tasklist may be retrieved via the $1 variable. The variable $TASK expands to one of the input filenames from the tasklist. We copy the input file to the local scratch directory, execute myprog.x, and copy the output file back to the submit directory, under the $TASK.out name.
 
 ### Submit the Job
 
-To submit the job, use the qsub command. The 101 tasks' job of the [example above](capacity-computing/#gp_example) may be submitted like this:
+To submit the job, use the qsub command. The 101-task job of the [example above](capacity-computing/#gp_example) may be submitted as follows:
 
 ```console
 $ qsub -N JOBNAME jobscript
@@ -217,25 +217,25 @@ $ qsub -N JOBNAME jobscript
 In this example, we submit a job of 101 tasks. 16 input files will be processed in parallel. The 101 tasks on 16 cores are assumed to complete in less than 2 hours.
 
 !!! hint
-    Use #PBS directives in the beginning of the jobscript file, dont' forget to set your valid PROJECT_ID and desired queue.
+    Use #PBS directives at the beginning of the jobscript file; don't forget to set your valid PROJECT_ID and desired queue.
 
 ## Job Arrays and GNU Parallel
 
 !!! note
-    Combine the Job arrays and GNU parallel for best throughput of single core jobs
+    Combine the Job arrays and GNU parallel for the best throughput of single core jobs
 
-While job arrays are able to utilize all available computational nodes, the GNU parallel can be used to efficiently run multiple single-core jobs on single node. The two approaches may be combined to utilize all available (current and future) resources to execute single core jobs.
+While job arrays are able to utilize all available computational nodes, GNU parallel can be used to efficiently run multiple single-core jobs on a single node. The two approaches may be combined to utilize all available (current and future) resources to execute single core jobs.
 
 !!! note
     Every subjob in an array runs GNU parallel to utilize all cores on the node
 
 ### GNU Parallel, Shared jobscript
 
-Combined approach, very similar to job arrays, can be taken. Job array is submitted to the queuing system. The subjobs run GNU parallel. The GNU parallel shell executes multiple instances of the jobscript using all cores on the node. The instances execute different work, controlled by the $PBS_JOB_ARRAY and $PARALLEL_SEQ variables.
+A combined approach, very similar to job arrays, can be taken. A job array is submitted to the queuing system. The subjobs run GNU parallel. GNU parallel executes multiple instances of the jobscript using all of the cores on the node. The instances execute different work, controlled by the $PBS_JOB_ARRAY and $PARALLEL_SEQ variables.
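The mapping from the two variables to a unique task can be sketched with plain shell arithmetic (the index formula and the values below are illustrative assumptions, not the exact jobscript contents):

```shell
# Hypothetical mapping of (subjob, sequence) to a global task index,
# as a jobscript might combine $PBS_ARRAY_INDEX with $PARALLEL_SEQ:
PBS_ARRAY_INDEX=33   # e.g. the second subjob of a -J 1-992:32 array
PARALLEL_SEQ=5       # the fifth task started by parallel in this subjob
IDX=$(( PBS_ARRAY_INDEX + PARALLEL_SEQ - 1 ))
echo "$IDX"          # prints 37
```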
 
 Example:
 
-Assume we have 992 input files with name beginning with "file" (e. g. file001, ..., file992). Assume we would like to use each of these input files with program executable myprog.x, each as a separate single core job. We call these single core jobs tasks.
+Assume we have 992 input files with names beginning with "file" (e.g. file001, ..., file992). Assume we would like to use each of these input files with the program executable myprog.x, each as a separate single core job. We call these single core jobs tasks.
 
 First, we create a tasklist file, listing all tasks - all input files in our example:
 
@@ -243,13 +243,13 @@ First, we create a tasklist file, listing all tasks - all input files in our exa
 $ find . -name 'file*' > tasklist
 ```
 
-Next we create a file, controlling how many tasks will be executed in one subjob
+Next, we create a file controlling how many tasks will be executed in one subjob:
 
 ```console
 $ seq 32 > numtasks
 ```
 
-Then we create jobscript:
+Then we create a jobscript:
 
 ```bash
 #!/bin/bash
@@ -279,34 +279,34 @@ cat input > output
 cp output $PBS_O_WORKDIR/$TASK.out
 ```
 
-In this example, the jobscript executes in multiple instances in parallel, on all cores of a computing node.  Variable $TASK expands to one of the input filenames from tasklist. We copy the input file to local scratch, execute the myprog.x and copy the output file back to the submit directory, under the $TASK.out name.  The numtasks file controls how many tasks will be run per subjob. Once an task is finished, new task starts, until the number of tasks in numtasks file is reached.
+In this example, the jobscript executes in multiple instances in parallel, on all cores of a computing node. The variable $TASK expands to one of the input filenames from the tasklist. We copy the input file to the local scratch directory, execute myprog.x, and copy the output file back to the submit directory, under the $TASK.out name. The numtasks file controls how many tasks will be run per subjob. Once a task is finished, a new task starts, until the number of tasks in the numtasks file is reached.
 
 !!! note
     Select subjob walltime and number of tasks per subjob carefully
 
-When deciding this values, think about following guiding rules:
+When deciding these values, keep in mind the following guiding rules:
 
-1. Let n=N/16.  Inequality (n+1) \* T < W should hold. The N is number of tasks per subjob, T is expected single task walltime and W is subjob walltime. Short subjob walltime improves scheduling and job throughput.
-1. Number of tasks should be modulo 16.
+1. Let n=N/16. The inequality (n+1) \* T < W should hold, where N is the number of tasks per subjob, T is the expected single task walltime, and W is the subjob walltime. A short subjob walltime improves scheduling and job throughput.
+1. The number of tasks should be a multiple of 16.
 1. These rules are valid only when all tasks have similar task walltimes T.
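Rule 1 can be checked with a quick calculation (the values below are made-up examples): with N=32 tasks per subjob on a 16-core node, n=2, so tasks expected to take T=20 minutes each require a subjob walltime W greater than (n+1) \* T = 60 minutes.

```shell
N=32; CORES=16; T=20        # example values: tasks per subjob, cores per node, minutes per task
n=$(( N / CORES ))          # rounds of tasks each core must run
W_MIN=$(( (n + 1) * T ))    # the subjob walltime W must exceed this bound
echo "$W_MIN"               # prints 60
```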
 
 ### Submit the Job Array (-J)
 
-To submit the job array, use the qsub -J command. The 992 tasks' job of the [example above](capacity-computing/#combined_example) may be submitted like this:
+To submit the job array, use the qsub -J command. The 992-task job of the [example above](capacity-computing/#combined_example) may be submitted like this:
 
 ```console
 $ qsub -N JOBNAME -J 1-992:32 jobscript
 12345[].dm2
 ```
 
-In this example, we submit a job array of 31 subjobs. Note the  -J 1-992:**32**, this must be the same as the number sent to numtasks file. Each subjob will run on full node and process 16 input files in parallel, 32 in total per subjob.  Every subjob is assumed to complete in less than 2 hours.
+In this example, we submit a job array of 31 subjobs. Note the -J 1-992:**32**; this must be the same as the number written to the numtasks file. Each subjob will run on one full node and process 16 input files in parallel, 32 in total per subjob. Every subjob is assumed to complete in less than 2 hours.
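The subjob count can be verified with seq, which performs the same index expansion as the -J range (an illustrative check, not part of the jobscript):

```shell
# -J 1-992:32 creates subjobs with indices 1, 33, 65, ..., 961 - 31 in total:
seq 1 32 992 | wc -l
```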
 
 !!! hint
-    Use #PBS directives in the beginning of the jobscript file, dont' forget to set your valid PROJECT_ID and desired queue.
+    Use #PBS directives at the beginning of the jobscript file; don't forget to set your valid PROJECT_ID and desired queue.
 
 ## Examples
 
-Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run huge number of jobs. We recommend to try out the examples, before using this for running production jobs.
+Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run a huge number of jobs. We recommend trying out the examples before using this approach for production jobs.
 
 Unzip the archive in an empty directory on Anselm and follow the instructions in the README file
 
diff --git a/docs.it4i/anselm/compute-nodes.md b/docs.it4i/anselm/compute-nodes.md
index d02fd3dba30505d50f299e7e87ffa292118abfb1..36c0367c73c5d3c3e672b9b0af23da52e72e366c 100644
--- a/docs.it4i/anselm/compute-nodes.md
+++ b/docs.it4i/anselm/compute-nodes.md
@@ -1,10 +1,10 @@
 # Compute Nodes
 
-## Nodes Configuration
+## Node Configuration
 
-Anselm is cluster of x86-64 Intel based nodes built on Bull Extreme Computing bullx technology. The cluster contains four types of compute nodes.
+Anselm is a cluster of x86-64 Intel based nodes built with Bull Extreme Computing bullx technology. The cluster contains four types of compute nodes.
 
-### Compute Nodes Without Accelerator
+### Compute Nodes Without Accelerators
 
 * 180 nodes
 * 2880 cores in total
@@ -14,7 +14,7 @@ Anselm is cluster of x86-64 Intel based nodes built on Bull Extreme Computing bu
 * bullx B510 blade servers
 * cn[1-180]
 
-### Compute Nodes With GPU Accelerator
+### Compute Nodes With a GPU Accelerator
 
 * 23 nodes
 * 368 cores in total
@@ -25,7 +25,7 @@ Anselm is cluster of x86-64 Intel based nodes built on Bull Extreme Computing bu
 * bullx B515 blade servers
 * cn[181-203]
 
-### Compute Nodes With MIC Accelerator
+### Compute Nodes With a MIC Accelerator
 
 * 4 nodes
 * 64 cores in total
@@ -42,26 +42,26 @@ Anselm is cluster of x86-64 Intel based nodes built on Bull Extreme Computing bu
 * 32 cores in total
 * 2 Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
 * 512 GB of physical memory per node
-* two 300GB SAS 3,5”15krpm HDD (RAID1) per node
+* two 300GB SAS 3.5” 15krpm HDD (RAID1) per node
 * two 100GB SLC SSD per node
 * bullx R423-E3 servers
 * cn[208-209]
 
 ![](../img/bullxB510.png)
-**Figure Anselm bullx B510 servers**
+**Anselm bullx B510 servers**
 
-### Compute Nodes Summary
+### Compute Node Summary
 
-| Node type                  | Count | Range       | Memory | Cores       | [Access](resources-allocation-policy/)    |
-| -------------------------- | ----- | ----------- | ------ | ----------- | --------------------------------------    |
-| Nodes without accelerator  | 180   | cn[1-180]   | 64GB   | 16 @ 2.4GHz | qexp, qprod, qlong, qfree, qatlas, qprace |
-| Nodes with GPU accelerator | 23    | cn[181-203] | 96GB   | 16 @ 2.3GHz | qnvidia, qexp, qatlas                     |
-| Nodes with MIC accelerator | 4     | cn[204-207] | 96GB   | 16 @ 2.3GHz | qmic, qexp                                |
-| Fat compute nodes          | 2     | cn[208-209] | 512GB  | 16 @ 2.4GHz | qfat, qexp                                |
+| Node type                    | Count | Range       | Memory | Cores       | [Access](resources-allocation-policy/)    |
+| ---------------------------- | ----- | ----------- | ------ | ----------- | --------------------------------------    |
+| Nodes without an accelerator | 180   | cn[1-180]   | 64GB   | 16 @ 2.4GHz | qexp, qprod, qlong, qfree, qprace, qatlas |
+| Nodes with a GPU accelerator | 23    | cn[181-203] | 96GB   | 16 @ 2.3GHz | qgpu, qexp                                |
+| Nodes with a MIC accelerator | 4     | cn[204-207] | 96GB   | 16 @ 2.3GHz | qmic, qexp                                |
+| Fat compute nodes            | 2     | cn[208-209] | 512GB  | 16 @ 2.4GHz | qfat, qexp                                |
 
 ## Processor Architecture
 
-Anselm is equipped with Intel Sandy Bridge processors Intel Xeon E5-2665 (nodes without accelerator and fat nodes) and Intel Xeon E5-2470 (nodes with accelerator). Processors support Advanced Vector Extensions (AVX) 256-bit instruction set.
+Anselm is equipped with Intel Sandy Bridge processors: Intel Xeon E5-2665 (nodes without accelerators and fat nodes) and Intel Xeon E5-2470 (nodes with accelerators). The processors support the Advanced Vector Extensions (AVX) 256-bit instruction set.
 
 ### Intel Sandy Bridge E5-2665 Processor
 
@@ -83,7 +83,7 @@ Anselm is equipped with Intel Sandy Bridge processors Intel Xeon E5-2665 (nodes
   * L3: 20 MB per processor
 * memory bandwidth at the level of the processor: 38.4 GB/s
 
-Nodes equipped with Intel Xeon E5-2665 CPU have set PBS resource attribute cpu_freq = 24, nodes equipped with Intel Xeon E5-2470 CPU have set PBS resource attribute cpu_freq = 23.
+Nodes equipped with the Intel Xeon E5-2665 CPU have the PBS resource attribute cpu_freq = 24 set; nodes equipped with the Intel Xeon E5-2470 CPU have cpu_freq = 23 set.
 
 ```console
 $ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16:cpu_freq=24 -I
@@ -99,7 +99,7 @@ $ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16 -l cpu_turbo_boost=0 -I
 
 ## Memory Architecture
 
-### Compute Node Without Accelerator
+### Compute Nodes Without Accelerators
 
 * 2 sockets
 * Memory Controllers are integrated into processors.
@@ -109,7 +109,7 @@ $ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16 -l cpu_turbo_boost=0 -I
   * Data rate support: up to 1600MT/s
 * Populated memory: 8 x 8 GB DDR3 DIMM 1600 MHz
 
-### Compute Node With GPU or MIC Accelerator
+### Compute Nodes With a GPU or MIC Accelerator
 
 * 2 sockets
 * Memory Controllers are integrated into processors.
@@ -119,7 +119,7 @@ $ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16 -l cpu_turbo_boost=0 -I
   * Data rate support: up to 1600MT/s
 * Populated memory: 6 x 16 GB DDR3 DIMM 1600 MHz
 
-### Fat Compute Node
+### Fat Compute Nodes
 
 * 2 sockets
 * Memory Controllers are integrated into processors.
diff --git a/docs.it4i/anselm/environment-and-modules.md b/docs.it4i/anselm/environment-and-modules.md
new file mode 100644
index 0000000000000000000000000000000000000000..e32aab0ff3547a0938180ba6e7733a7513d35d23
--- /dev/null
+++ b/docs.it4i/anselm/environment-and-modules.md
@@ -0,0 +1,88 @@
+# Environment and Modules
+
+## Environment Customization
+
+After logging in, you may want to configure the environment. Write your preferred path definitions, aliases, functions, and module loads in the .bashrc file:
+
+```console
+$ cat ~/.bashrc
+
+# ~/.bashrc
+
+# Source global definitions
+if [ -f /etc/bashrc ]; then
+      . /etc/bashrc
+fi
+
+# User specific aliases and functions
+alias qs='qstat -a'
+module load PrgEnv-gnu
+
+# Display information to standard output - only in interactive ssh session
+if [ -n "$SSH_TTY" ]
+then
+ module list # Display loaded modules
+fi
+```
+
+!!! note
+    Do not run commands outputting to standard output (echo, module list, etc.) in .bashrc for non-interactive SSH sessions. It breaks the fundamental functionality (scp, PBS) of your account! Consider using the SSH session interactivity test for such commands, as shown in the previous example.
+
+## Application Modules
+
+In order to configure your shell for running a particular application on Anselm, we use the Module package interface.
+
+!!! note
+    The modules set up the application paths, library paths, and environment variables for running a particular application.
+
+    We can also have a second modules repository. This modules repository is created using a tool called EasyBuild. On the Salomon cluster, all modules are built with this tool. If you want to use software from this modules repository, please follow the instructions in the section [Application Modules Path Expansion](environment-and-modules/#application-modules-path-expansion).
+
+The modules may be loaded, unloaded, and switched as required.
+
+To check available modules, use:
+
+```console
+$ module avail **or** ml av
+```
+
+To load a module, for example the octave module, use:
+
+```console
+$ module load octave **or** ml octave
+```
+
+Loading the octave module will set up the paths and environment variables of your active shell such that you are ready to run the octave software.
+
+To check loaded modules, use:
+
+```console
+$ module list **or** ml
+```
+
+To unload a module, for example the octave module, use:
+
+```console
+$ module unload octave **or** ml -octave
+```
+
+Learn more about modules by reading the module man page:
+
+```console
+$ man module
+```
+
+The following modules set up the development environment:
+
+PrgEnv-gnu sets up the GNU development environment in conjunction with the bullx MPI library.
+
+PrgEnv-intel sets up the INTEL development environment in conjunction with the Intel MPI library.
+
+## Application Modules Path Expansion
+
+All application modules on the Salomon cluster (and future clusters) are built using a tool called [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). If you want to use applications that have already been built with EasyBuild, you have to modify your MODULEPATH environment variable.
+
+```console
+export MODULEPATH=$MODULEPATH:/apps/easybuild/modules/all/
+```
+
+This command expands the paths searched for modules. You can also add this command to the .bashrc file to expand the paths permanently. After this command, you can use the same commands to list/add/remove modules as described above.
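The effect on MODULEPATH can be illustrated with a throwaway value (the initial path below is a made-up example):

```shell
MODULEPATH=/apps/modules/all                                # hypothetical starting value
export MODULEPATH=$MODULEPATH:/apps/easybuild/modules/all/  # append the EasyBuild repository
echo "$MODULEPATH"   # prints /apps/modules/all:/apps/easybuild/modules/all/
```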
diff --git a/docs.it4i/anselm/hardware-overview.md b/docs.it4i/anselm/hardware-overview.md
index 1a1ecde339a06d1b6cbb67f3be4ea7002a687349..f91c5bc70ef8ac2a0d4c79d39cc08fc8d62c45a6 100644
--- a/docs.it4i/anselm/hardware-overview.md
+++ b/docs.it4i/anselm/hardware-overview.md
@@ -1,8 +1,8 @@
 # Hardware Overview
 
-The Anselm cluster consists of 209 computational nodes named cn[1-209] of which 180 are regular compute nodes, 23 GPU Kepler K20m accelerated nodes, 4 MIC Xeon Phi 5110P accelerated nodes and 2 fat nodes. Each node is a powerful x86-64 computer, equipped with 16 cores (two eight-core Intel Sandy Bridge processors), at least 64 GB RAM, and local hard drive. The user access to the Anselm cluster is provided by two login nodes login[1,2]. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 320 TB /home disk storage to store the user files. The 146 TB shared /scratch storage is available for the scratch data.
+The Anselm cluster consists of 209 computational nodes named cn[1-209] of which 180 are regular compute nodes, 23 are GPU Kepler K20m accelerated nodes, 4 are MIC Xeon Phi 5110P accelerated nodes, and 2 are fat nodes. Each node is a powerful x86-64 computer, equipped with 16 cores (two eight-core Intel Sandy Bridge processors), at least 64 GB of RAM, and a local hard drive. User access to the Anselm cluster is provided by two login nodes login[1,2]. The nodes are interlinked through high speed InfiniBand and Ethernet networks. All nodes share a 320 TB /home disk for storage of user files. The 146 TB shared /scratch storage is available for scratch data.
 
-The Fat nodes are equipped with large amount (512 GB) of memory. Fat nodes may access 45 TB of dedicated block storage. Accelerated nodes, fat nodes are available [upon request](https://support.it4i.cz/rt) made by a PI.
+The Fat nodes are equipped with a large amount (512 GB) of memory. Virtualization infrastructure provides resources to run long term servers and services in virtual mode. Fat nodes and virtual servers may access 45 TB of dedicated block storage. Accelerated nodes, fat nodes, and virtualization infrastructure are available [upon request](https://support.it4i.cz/rt) from a PI.
 
 Schematic representation of the Anselm cluster. Each box represents a node (computer) or storage capacity:
 
@@ -12,21 +12,21 @@ The cluster compute nodes cn[1-207] are organized within 13 chassis.
 
 There are four types of compute nodes:
 
-* 180 compute nodes without the accelerator
-* 23 compute nodes with GPU accelerator - equipped with NVIDIA Tesla Kepler K20m
-* 4 compute nodes with MIC accelerator - equipped with Intel Xeon Phi 5110P
-* 2 fat nodes - equipped with 512 GB RAM and two 100 GB SSD drives
+* 180 compute nodes without an accelerator
+* 23 compute nodes with a GPU accelerator - an NVIDIA Tesla Kepler K20m
+* 4 compute nodes with a MIC accelerator - an Intel Xeon Phi 5110P
+* 2 fat nodes - equipped with 512 GB of RAM and two 100 GB SSD drives
 
 [More about Compute nodes](compute-nodes/).
 
 GPU and accelerated nodes are available upon request, see the [Resources Allocation Policy](resources-allocation-policy/).
 
-All these nodes are interconnected by fast InfiniBand network and Ethernet network.  [More about the Network](network/).
-Every chassis provides InfiniBand switch, marked **isw**, connecting all nodes in the chassis, as well as connecting the chassis to the upper level switches.
+All of these nodes are interconnected through fast InfiniBand and Ethernet networks.  [More about the Network](network/).
+Every chassis provides an InfiniBand switch, marked **isw**, connecting all nodes in the chassis, as well as connecting the chassis to the upper level switches.
 
-All nodes share 360 TB /home disk storage to store user files. The 146 TB shared /scratch storage is available for the scratch data. These file systems are provided by Lustre parallel file system. There is also local disk storage available on all compute nodes in /lscratch.  [More about Storage](storage/).
+All of the nodes share a 320 TB /home disk for storage of user files. The 146 TB shared /scratch storage is available for scratch data. These file systems are provided by the Lustre parallel file system. There is also local disk storage available on all compute nodes in /lscratch.  [More about Storage](storage/).
 
-The user access to the Anselm cluster is provided by two login nodes login1, login2, and data mover node dm1. [More about accessing cluster.](shell-and-data-access/)
+User access to the Anselm cluster is provided by two login nodes login1, login2, and data mover node dm1. [More about accessing the cluster.](shell-and-data-access/)
 
 The parameters are summarized in the following tables:
 
@@ -36,7 +36,7 @@ The parameters are summarized in the following tables:
 | Architecture of compute nodes               | x86-64                                       |
 | Operating system                            | Linux (CentOS)                               |
 | [**Compute nodes**](compute-nodes/)         |                                              |
-| Totally                                     | 209                                          |
+| Total                                       | 209                                          |
 | Processor cores                             | 16 (2 x 8 cores)                             |
 | RAM                                         | min. 64 GB, min. 4 GB per core               |
 | Local disk drive                            | yes - usually 500 GB                         |
@@ -57,4 +57,4 @@ The parameters are summarized in the following tables:
 | MIC accelerated  | 2 x Intel Sandy Bridge E5-2470, 2.3 GHz | 96 GB  | Intel Xeon Phi 5110P |
 | Fat compute node | 2 x Intel Sandy Bridge E5-2665, 2.4 GHz | 512 GB | -                    |
 
-For more details please refer to the [Compute nodes](compute-nodes/), [Storage](storage/), and [Network](network/).
+For more details please refer to [Compute nodes](compute-nodes/), [Storage](storage/), and [Network](network/).
diff --git a/docs.it4i/anselm/introduction.md b/docs.it4i/anselm/introduction.md
index c2eb4c71a54cca5e3398cfd669bae9f60d64630b..d40cd090ce273bde81280981cd78ee9033d0523e 100644
--- a/docs.it4i/anselm/introduction.md
+++ b/docs.it4i/anselm/introduction.md
@@ -1,10 +1,10 @@
 # Introduction
 
-Welcome to Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totaling 3344 compute cores with 15 TB RAM and giving over 94 TFLOP/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64 GB RAM, and 500 GB hard disk drive. Nodes are interconnected by fully non-blocking fat-tree InfiniBand network and equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).
+Welcome to the Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totalling 3344 compute cores with 15 TB RAM, giving over 94 TFLOP/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64 GB of RAM, and a 500 GB hard disk drive. Nodes are interconnected through a fully non-blocking fat-tree InfiniBand network, and are equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).
 
-The cluster runs operating system, which is compatible with the RedHat [Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](../environment-and-modules/).
+The cluster runs an [operating system](software/operating-system/) which is compatible with the RedHat [Linux family](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg). We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/).
 
-User data shared file-system (HOME, 320 TB) and job data shared file-system (SCRATCH, 146 TB) are available to users.
+The user data shared file-system (HOME, 320 TB) and job data shared file-system (SCRATCH, 146 TB) are available to users.
 
 The PBS Professional workload manager provides [computing resources allocations and job execution](resources-allocation-policy/).
 
diff --git a/docs.it4i/anselm/job-priority.md b/docs.it4i/anselm/job-priority.md
index 09acc4cedc9a6cad4ac9b51fdc46ddac2ef91ad2..6af6c87eccf100ecae728b0f56c63077d1eca34f 100644
--- a/docs.it4i/anselm/job-priority.md
+++ b/docs.it4i/anselm/job-priority.md
@@ -2,7 +2,7 @@
 
 ## Job Execution Priority
 
-Scheduler gives each job an execution priority and then uses this job execution priority to select which job(s) to run.
+The scheduler gives each job an execution priority and then uses this job execution priority to select which job(s) to run.
 
 Job execution priority on Anselm is determined by these job properties (in order of importance):
 
@@ -12,15 +12,15 @@ Job execution priority on Anselm is determined by these job properties (in order
 
 ### Queue Priority
 
-Queue priority is priority of queue where job is queued before execution.
+Queue priority is the priority of the queue in which the job is waiting prior to execution.
 
-Queue priority has the biggest impact on job execution priority. Execution priority of jobs in higher priority queues is always greater than execution priority of jobs in lower priority queues. Other properties of job used for determining job execution priority (fair-share priority, eligible time) cannot compete with queue priority.
+Queue priority has the biggest impact on job execution priority. The execution priority of jobs in higher priority queues is always greater than the execution priority of jobs in lower priority queues. Other properties of jobs used for determining the job execution priority (fair-share priority, eligible time) cannot compete with queue priority.
 
 Queue priorities can be seen at <https://extranet.it4i.cz/anselm/queues>
 
 ### Fair-Share Priority
 
-Fair-share priority is priority calculated on recent usage of resources. Fair-share priority is calculated per project, all members of project share same fair-share priority. Projects with higher recent usage have lower fair-share priority than projects with lower or none recent usage.
+Fair-share priority is a priority calculated on the basis of recent usage of resources. Fair-share priority is calculated per project, with all members of a project sharing the same fair-share priority. Projects with higher recent usage have a lower fair-share priority than projects with lower or no recent usage.
 
 Fair-share priority is used for ranking jobs with equal queue priority.
 
@@ -29,24 +29,24 @@ Fair-share priority is calculated as
 ---8<--- "fairshare_formula.md"
 
 where MAX_FAIRSHARE has value 1E6,
-usage<sub>Project</sub> is cumulated usage by all members of selected project,
-usage<sub>Total</sub> is total usage by all users, by all projects.
+usage<sub>Project</sub> is the accumulated usage by all members of the selected project,
+usage<sub>Total</sub> is total usage by all users, across all projects.
 
-Usage counts allocated core-hours (`ncpus x walltime`). Usage is decayed, or cut in half periodically, at the interval 168 hours (one week).
-Jobs queued in queue qexp are not calculated to project's usage.
+Usage counts allocated core-hours (`ncpus x walltime`). Usage decays, halving at intervals of 168 hours (one week).
+Jobs queued in the qexp queue are not counted towards the project's usage.
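+
+As an illustrative example of this decay (the figures are hypothetical, not taken from the cluster): a project that consumed 1000 core-hours three weeks (3 x 168 hours) ago would have that usage counted as only `1000 / 2^3 = 125` core-hours today.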
 
 !!! note
     Calculated usage and fair-share priority can be seen at <https://extranet.it4i.cz/anselm/projects>.
 
-Calculated fair-share priority can be also seen as Resource_List.fairshare attribute of a job.
+The calculated fair-share priority can also be seen in the Resource_List.fairshare attribute of a job.
 
 ### Eligible Time
 
-Eligible time is amount (in seconds) of eligible time job accrued while waiting to run. Jobs with higher eligible time gains higher priority.
+Eligible time is the amount (in seconds) of eligible time a job accrues while waiting to run. Jobs with higher eligible time gain higher priority.
 
 Eligible time has the least impact on execution priority. Eligible time is used for sorting jobs with equal queue priority and fair-share priority. It is very, very difficult for eligible time to compete with fair-share priority.
 
-Eligible time can be seen as eligible_time attribute of job.
+Eligible time can be seen in the eligible_time attribute of a job.
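+
+Both the Resource_List.fairshare and eligible_time attributes of a job may be inspected via the full qstat listing (the job ID here is illustrative; this assumes the standard PBS Professional qstat):
+
+```console
+$ qstat -f 12345.srv11 | grep -E "fairshare|eligible_time"
+```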
 
 ### Formula
 
@@ -56,17 +56,17 @@ Job execution priority (job sort formula) is calculated as:
 
 ### Job Backfilling
 
-Anselm cluster uses job backfilling.
+The Anselm cluster uses job backfilling.
 
-Backfilling means fitting smaller jobs around the higher-priority jobs that the scheduler is going to run next, in such a way that the higher-priority jobs are not delayed. Backfilling allows us to keep resources from becoming idle when the top job (job with the highest execution priority) cannot run.
+Backfilling means fitting smaller jobs around the higher-priority jobs that the scheduler is going to run next, in such a way that the higher-priority jobs are not delayed. Backfilling allows us to keep resources from becoming idle when the top job (the job with the highest execution priority) cannot run.
 
-The scheduler makes a list of jobs to run in order of execution priority. Scheduler looks for smaller jobs that can fit into the usage gaps around the highest-priority jobs in the list. The scheduler looks in the prioritized list of jobs and chooses the highest-priority smaller jobs that fit. Filler jobs are run only if they will not delay the start time of top jobs.
+The scheduler makes a list of jobs to run in order of execution priority. The scheduler looks for smaller jobs that can fit into the usage gaps around the highest-priority jobs in the list. The scheduler looks in the prioritized list of jobs and chooses the highest-priority smaller jobs that fit. Filler jobs are run only if they will not delay the start time of top jobs.
 
-It means, that jobs with lower execution priority can be run before jobs with higher execution priority.
+This means that jobs with lower execution priority can be run before jobs with higher execution priority.
 
 !!! note
     It is **very beneficial to specify the walltime** when submitting jobs.
 
-Specifying more accurate walltime enables better scheduling, better execution times and better resource usage. Jobs with suitable (small) walltime could be backfilled - and overtake job(s) with higher priority.
+Specifying a more accurate walltime enables better scheduling, better execution times, and better resource usage. Jobs with a suitable (small) walltime can be backfilled, overtaking job(s) with a higher priority.
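+
+For example, a job declaring a short walltime (the values here are illustrative) is a good candidate for backfilling:
+
+```console
+$ qsub -A OPEN-0-0 -q qprod -l select=2:ncpus=16,walltime=00:30:00 ./myjob
+```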
 
 ---8<--- "mathjax.md"
diff --git a/docs.it4i/anselm/job-submission-and-execution.md b/docs.it4i/anselm/job-submission-and-execution.md
index 09e09620d11fbe8a02199c05811bb27cc071f301..8b1e201f3819f56de5d2d377d8be09f4ba50d22d 100644
--- a/docs.it4i/anselm/job-submission-and-execution.md
+++ b/docs.it4i/anselm/job-submission-and-execution.md
@@ -4,15 +4,15 @@
 
 When allocating computational resources for the job, please specify
 
-1. suitable queue for your job (default is qprod)
-1. number of computational nodes required
-1. number of cores per node required
-1. maximum wall time allocated to your calculation, note that jobs exceeding maximum wall time will be killed
-1. Project ID
-1. Jobscript or interactive switch
+1. a suitable queue for your job (the default is qprod)
+1. the number of computational nodes required
+1. the number of cores per node required
+1. the maximum wall time allocated to your calculation; note that jobs exceeding the maximum wall time will be killed
+1. your Project ID
+1. a jobscript or the interactive switch
 
 !!! note
-    Use the **qsub** command to submit your job to a queue for allocation of the computational resources.
+    Use the **qsub** command to submit your job to a queue for allocation of computational resources.
 
 Submit the job using the qsub command:
 
@@ -20,10 +20,10 @@ Submit the job using the qsub command:
 $ qsub -A Project_ID -q queue -l select=x:ncpus=y,walltime=[[hh:]mm:]ss[.ms] jobscript
 ```
 
-The qsub submits the job into the queue, in another words the qsub command creates a request to the PBS Job manager for allocation of specified resources. The resources will be allocated when available, subject to above described policies and constraints. **After the resources are allocated the jobscript or interactive shell is executed on first of the allocated nodes.**
+The qsub command submits the job to the queue, i.e. the qsub command creates a request to the PBS Job manager for allocation of specified resources. The resources will be allocated when available, subject to the above described policies and constraints. **After the resources are allocated, the jobscript or interactive shell is executed on the first of the allocated nodes.**
 
 !!! note
-    PBS statement nodes (qsub -l nodes=nodespec) is not supported on Anselm cluster.
+    The PBS nodes statement (qsub -l nodes=nodespec) is not supported on the Anselm cluster.
 
 ### Job Submission Examples
 
@@ -31,33 +31,33 @@ The qsub submits the job into the queue, in another words the qsub command creat
 $ qsub -A OPEN-0-0 -q qprod -l select=64:ncpus=16,walltime=03:00:00 ./myjob
 ```
 
-In this example, we allocate 64 nodes, 16 cores per node, for 3 hours. We allocate these resources via the qprod queue, consumed resources will be accounted to the Project identified by Project ID OPEN-0-0. Jobscript myjob will be executed on the first node in the allocation.
+In this example, we allocate 64 nodes, 16 cores per node, for 3 hours. We allocate these resources via the qprod queue; consumed resources will be accounted to the Project identified by Project ID OPEN-0-0. The jobscript 'myjob' will be executed on the first node in the allocation.
 
 ```console
 $ qsub -q qexp -l select=4:ncpus=16 -I
 ```
 
-In this example, we allocate 4 nodes, 16 cores per node, for 1 hour. We allocate these resources via the qexp queue. The resources will be available interactively
+In this example, we allocate 4 nodes, 16 cores per node, for 1 hour. We allocate these resources via the qexp queue. The resources will be available interactively.
 
 ```console
 $ qsub -A OPEN-0-0 -q qnvidia -l select=10:ncpus=16 ./myjob
 ```
 
-In this example, we allocate 10 nvidia accelerated nodes, 16 cores per node, for 24 hours. We allocate these resources via the qnvidia queue. Jobscript myjob will be executed on the first node in the allocation.
+In this example, we allocate 10 NVIDIA accelerated nodes, 16 cores per node, for 24 hours. We allocate these resources via the qnvidia queue. The jobscript 'myjob' will be executed on the first node in the allocation.
 
 ```console
 $ qsub -A OPEN-0-0 -q qfree -l select=10:ncpus=16 ./myjob
 ```
 
-In this example, we allocate 10 nodes, 16 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. Jobscript myjob will be executed on the first node in the allocation.
+In this example, we allocate 10 nodes, 16 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. The jobscript 'myjob' will be executed on the first node in the allocation.
 
-All qsub options may be [saved directly into the jobscript](#example-jobscript-for-mpi-calculation-with-preloaded-inputs). In such a case, no options to qsub are needed.
+All qsub options may be [saved directly into the jobscript](#example-jobscript-for-mpi-calculation-with-preloaded-inputs). In such cases, it is not necessary to specify any options for qsub.
 
 ```console
 $ qsub ./myjob
 ```
 
-By default, the PBS batch system sends an e-mail only when the job is aborted. Disabling mail events completely can be done like this:
+By default, the PBS batch system sends an e-mail only when the job is aborted. Disabling mail events completely can be done as follows:
 
 ```console
 $ qsub -m n
@@ -77,7 +77,7 @@ In this example, we allocate nodes cn171 and cn172, all 16 cores per node, for 2
 
 ### Placement by CPU Type
 
-Nodes equipped with Intel Xeon E5-2665 CPU have base clock frequency 2.4GHz, nodes equipped with Intel Xeon E5-2470 CPU have base frequency 2.3 GHz (see section Compute Nodes for details).  Nodes may be selected via the PBS resource attribute cpu_freq .
+Nodes equipped with an Intel Xeon E5-2665 CPU have a base clock frequency of 2.4 GHz, while nodes equipped with an Intel Xeon E5-2470 CPU have a base frequency of 2.3 GHz (see the Compute Nodes section for details). Nodes may be selected via the PBS resource attribute cpu_freq.
 
 | CPU Type           | base freq. | Nodes                  | cpu_freq attribute |
 | ------------------ | ---------- | ---------------------- | ------------------ |
@@ -88,21 +88,21 @@ Nodes equipped with Intel Xeon E5-2665 CPU have base clock frequency 2.4GHz, nod
 $ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16:cpu_freq=24 -I
 ```
 
-In this example, we allocate 4 nodes, 16 cores, selecting only the nodes with Intel Xeon E5-2665 CPU.
+In this example, we allocate 4 nodes, 16 cores per node, selecting only the nodes with Intel Xeon E5-2665 CPU.
 
 ### Placement by IB Switch
 
-Groups of computational nodes are connected to chassis integrated Infiniband switches. These switches form the leaf switch layer of the [Infiniband network](network/) fat tree topology. Nodes sharing the leaf switch can communicate most efficiently. Sharing the same switch prevents hops in the network and provides for unbiased, most efficient network communication.
+Groups of computational nodes are connected to chassis-integrated InfiniBand switches. These switches form the leaf switch layer of the [InfiniBand network](network/) fat tree topology. Nodes sharing the leaf switch can communicate most efficiently. Sharing the same switch prevents hops in the network and facilitates unbiased, highly efficient network communication.
 
-Nodes sharing the same switch may be selected via the PBS resource attribute ibswitch. Values of this attribute are iswXX, where XX is the switch number. The node-switch mapping can be seen at [Hardware Overview](hardware-overview/) section.
+Nodes sharing the same switch may be selected via the PBS resource attribute ibswitch. Values of this attribute are iswXX, where XX is the switch number. The node-switch mapping can be seen in the [Hardware Overview](hardware-overview/) section.
 
-We recommend allocating compute nodes of a single switch when best possible computational network performance is required to run the job efficiently:
+We recommend allocating compute nodes on a single switch when the best possible computational network performance is required to run the job efficiently:
 
 ```console
 $ qsub -A OPEN-0-0 -q qprod -l select=18:ncpus=16:ibswitch=isw11 ./myjob
 ```
 
-In this example, we request all the 18 nodes sharing the isw11 switch for 24 hours. Full chassis will be allocated.
+In this example, we request all of the 18 nodes sharing the isw11 switch for 24 hours. A full chassis will be allocated.
 
 ## Advanced Job Handling
 
@@ -110,17 +110,17 @@ In this example, we request all the 18 nodes sharing the isw11 switch for 24 hou
 
 Intel Turbo Boost Technology is on by default. We strongly recommend keeping the default.
 
-If necessary (such as in case of benchmarking) you can disable the Turbo for all nodes of the job by using the PBS resource attribute cpu_turbo_boost
+If necessary (such as in the case of benchmarking), you can disable Turbo Boost for all nodes of the job by using the PBS resource attribute cpu_turbo_boost:
 
 ```console
 $ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16 -l cpu_turbo_boost=0 -I
 ```
 
-More about the Intel Turbo Boost in the TurboBoost section
+More information about Intel Turbo Boost can be found in the TurboBoost section.
 
 ### Advanced Examples
 
-In the following example, we select an allocation for benchmarking a very special and demanding MPI program. We request Turbo off, 2 full chassis of compute nodes (nodes sharing the same IB switches) for 30 minutes:
+In the following example, we select an allocation for benchmarking a very special and demanding MPI program. We request Turbo off, and 2 full chassis of compute nodes (nodes sharing the same IB switches) for 30 minutes:
 
 ```console
 $ qsub -A OPEN-0-0 -q qprod
@@ -129,7 +129,7 @@ $ qsub -A OPEN-0-0 -q qprod
     -N Benchmark ./mybenchmark
 ```
 
-The MPI processes will be distributed differently on the nodes connected to the two switches. On the isw10 nodes, we will run 1 MPI process per node 16 threads per process, on isw20 nodes we will run 16 plain MPI processes.
+The MPI processes will be distributed differently on the nodes connected to the two switches. On the isw10 nodes, we will run 1 MPI process per node with 16 threads per process, while on the isw20 nodes we will run 16 plain MPI processes.
 
 Although this example is somewhat artificial, it demonstrates the flexibility of the qsub command options.
 
@@ -159,9 +159,9 @@ Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time S Time
 16547.srv11     user2    qprod    job3x       13516   2 32    --  48:00 R 00:58
 ```
 
-In this example user1 and user2 are running jobs named job1, job2 and job3x. The jobs job1 and job2 are using 4 nodes, 16 cores per node each. The job1 already runs for 38 hours and 25 minutes, job2 for 17 hours 44 minutes. The job1 already consumed `64 x 38.41 = 2458.6` core hours. The job3x already consumed `0.96 x 32 = 30.93` core hours. These consumed core hours will be accounted on the respective project accounts, regardless of whether the allocated cores were actually used for computations.
+In this example, user1 and user2 are running jobs named job1, job2, and job3x. The jobs job1 and job2 are using 4 nodes, 16 cores per node each. job1 has already run for 38 hours and 25 minutes, and job2 for 17 hours 44 minutes. job1 has already consumed `64 x 38.41 = 2458.6` core hours. job3x has already consumed `0.96 x 32 = 30.93` core hours. These consumed core hours will be accounted for on the respective project accounts, regardless of whether the allocated cores were actually used for computations.
 
-Check status of your jobs using check-pbs-jobs command. Check presence of user's PBS jobs' processes on execution hosts. Display load, processes. Display job standard and error output. Continuously display (tail -f) job standard or error output.
+The check-pbs-jobs command allows you to check the status of your jobs. It checks for the presence of the user's PBS job processes on the execution hosts, displays the load and processes, displays the job standard and error output, and continuously displays (tail -f) the job standard or error output.
 
 ```console
 $ check-pbs-jobs --check-all
@@ -182,7 +182,7 @@ cn164: OK
 cn165: No process
 ```
 
-In this example we see that job 35141.dm2 currently runs no process on allocated node cn165, which may indicate an execution error.
+In this example we see that job 35141.dm2 is not currently running any processes on the allocated node cn165, which may indicate an execution error.
 
 ```console
 $ check-pbs-jobs --print-load --print-processes
@@ -198,7 +198,7 @@ cn164: 99.7 run-task
 ...
 ```
 
-In this example we see that job 35141.dm2 currently runs process run-task on node cn164, using one thread only, while node cn165 is empty, which may indicate an execution error.
+In this example we see that job 35141.dm2 is currently running a process run-task on node cn164, using one thread only, while node cn165 is empty, which may indicate an execution error.
 
 ```console
 $ check-pbs-jobs --jobid 35141.dm2 --print-job-out
@@ -217,13 +217,13 @@ In this example, we see actual output (some iteration loops) of the job 35141.dm
 !!! note
     Manage your queued or running jobs, using the **qhold**, **qrls**, **qdel**, **qsig** or **qalter** commands
 
-You may release your allocation at any time, using qdel command
+You may release your allocation at any time, using the qdel command:
 
 ```console
 $ qdel 12345.srv11
 ```
 
-You may kill a running job by force, using qsig command
+You may kill a running job by force, using the qsig command:
 
 ```console
 $ qsig -s 9 12345.srv11
@@ -242,7 +242,7 @@ $ man pbs_professional
 !!! note
     Prepare the jobscript to run batch jobs in the PBS queue system
 
-The Jobscript is a user made script, controlling sequence of commands for executing the calculation. It is often written in bash, other scripts may be used as well. The jobscript is supplied to PBS **qsub** command as an argument and executed by the PBS Professional workload manager.
+The jobscript is a user-made script controlling a sequence of commands for executing the calculation. It is often written in bash, though other scripts may be used as well. The jobscript is supplied to the PBS **qsub** command as an argument, and is executed by the PBS Professional workload manager.
 
 !!! note
     The jobscript or interactive shell is executed on first of the allocated nodes.
@@ -259,9 +259,9 @@ Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time S Time
    cn17/0*16+cn108/0*16+cn109/0*16+cn110/0*16
 ```
 
-In this example, the nodes cn17, cn108, cn109 and cn110 were allocated for 1 hour via the qexp queue. The jobscript myjob will be executed on the node cn17, while the nodes cn108, cn109 and cn110 are available for use as well.
+In this example, the nodes cn17, cn108, cn109, and cn110 were allocated for 1 hour via the qexp queue. The jobscript myjob will be executed on the node cn17, while the nodes cn108, cn109, and cn110 are available for use as well.
 
-The jobscript or interactive shell is by default executed in home directory
+The jobscript or interactive shell is by default executed in the home directory:
 
 ```console
 $ qsub -q qexp -l select=4:ncpus=16 -I
@@ -275,7 +275,7 @@ $ pwd
 In this example, 4 nodes were allocated interactively for 1 hour via the qexp queue. The interactive shell is executed in the home directory.
 
 !!! note
-    All nodes within the allocation may be accessed via ssh.  Unallocated nodes are not accessible to user.
+    All nodes within the allocation may be accessed via ssh.  Unallocated nodes are not accessible to the user.
 
 The allocated nodes are accessible via ssh from login nodes. The nodes may access each other via ssh as well.
 
@@ -309,7 +309,7 @@ In this example, the hostname program is executed via pdsh from the interactive
 !!! note
     Production jobs must use the /scratch directory for I/O
 
-The recommended way to run production jobs is to change to /scratch directory early in the jobscript, copy all inputs to /scratch, execute the calculations and copy outputs to home directory.
+The recommended way to run production jobs is to change to the /scratch directory early in the jobscript, copy all inputs to /scratch, execute the calculations, and copy the outputs to the home directory.
 
 ```bash
 #!/bin/bash
@@ -336,19 +336,19 @@ cp output $PBS_O_WORKDIR/.
 exit
 ```
 
-In this example, some directory on the /home holds the input file input and executable mympiprog.x . We create a directory myjob on the /scratch filesystem, copy input and executable files from the /home directory where the qsub was invoked ($PBS_O_WORKDIR) to /scratch, execute the MPI programm mympiprog.x and copy the output file back to the /home directory. The mympiprog.x is executed as one process per node, on all allocated nodes.
+In this example, a directory in /home holds the input file input and the executable mympiprog.x. We create the directory myjob on the /scratch filesystem, copy the input and executable files from the /home directory where qsub was invoked ($PBS_O_WORKDIR) to /scratch, execute the MPI program mympiprog.x, and copy the output file back to the /home directory. mympiprog.x is executed as one process per node, on all allocated nodes.
 
 !!! note
-    Consider preloading inputs and executables onto [shared scratch](storage/) before the calculation starts.
+    Consider preloading inputs and executables onto the [shared scratch](storage/) storage before the calculation starts.
 
-In some cases, it may be impractical to copy the inputs to scratch and outputs to home. This is especially true when very large input and output files are expected, or when the files should be reused by a subsequent calculation. In such a case, it is users responsibility to preload the input files on shared /scratch before the job submission and retrieve the outputs manually, after all calculations are finished.
+In some cases, it may be impractical to copy the inputs to the scratch storage and the outputs to the home directory. This is especially true when very large input and output files are expected, or when the files should be reused by a subsequent calculation. In such cases, it is the user's responsibility to preload the input files on the shared /scratch storage before job submission, and to retrieve the outputs manually after all calculations are finished.
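+
+A minimal sketch of such a manual preload, using the /scratch/$USER/myjob path convention from the examples in this section:
+
+```console
+$ mkdir -p /scratch/$USER/myjob
+$ cp input mympiprog.x /scratch/$USER/myjob/.
+```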
 
 !!! note
     Store the qsub options within the jobscript. Use **mpiprocs** and **ompthreads** qsub options to control the MPI job execution.
 
 ### Example Jobscript for MPI Calculation With Preloaded Inputs
 
-Example jobscript for an MPI job with preloaded inputs and executables, options for qsub are stored within the script :
+Example jobscript for an MPI job with preloaded inputs and executables; the qsub options are stored within the script:
 
 ```bash
 #!/bin/bash
@@ -371,14 +371,17 @@ mpirun ./mympiprog.x
 exit
 ```
 
-In this example, input and executable files are assumed preloaded manually in /scratch/$USER/myjob directory. Note the **mpiprocs** and **ompthreads** qsub options, controlling behavior of the MPI execution. The mympiprog.x is executed as one process per node, on all 100 allocated nodes. If mympiprog.x implements OpenMP threads, it will run 16 threads per node.
+In this example, input and executable files are assumed to be preloaded manually in the /scratch/$USER/myjob directory. Note the **mpiprocs** and **ompthreads** qsub options controlling the behavior of the MPI execution. mympiprog.x is executed as one process per node, on all 100 allocated nodes. If mympiprog.x implements OpenMP threads, it will run 16 threads per node.
+
+More information can be found in the [Running OpenMPI](software/mpi/Running_OpenMPI/) and [Running MPICH2](software/mpi/running-mpich2/) sections.
 
 ### Example Jobscript for Single Node Calculation
 
 !!! note
-    Local scratch directory is often useful for single node jobs. Local scratch will be deleted immediately after the job ends.
+    The local scratch directory is often useful for single node jobs. Local scratch is deleted immediately after the job ends.
 
-Example jobscript for single node calculation, using [local scratch](storage/) on the node:
+Example jobscript for a single node calculation, using [local scratch](storage/) on the node:
 
 ```bash
 #!/bin/bash
@@ -400,7 +403,7 @@ cp output $PBS_O_WORKDIR/.
 exit
 ```
 
-In this example, some directory on the home holds the input file input and executable myprog.x . We copy input and executable files from the home directory where the qsub was invoked ($PBS_O_WORKDIR) to local scratch /lscratch/$PBS_JOBID, execute the myprog.x and copy the output file back to the /home directory. The myprog.x runs on one node only and may use threads.
+In this example, a directory in /home holds the input file input and the executable myprog.x. We copy the input and executable files from the home directory where qsub was invoked ($PBS_O_WORKDIR) to the local scratch directory /lscratch/$PBS_JOBID, execute myprog.x, and copy the output file back to the /home directory. myprog.x runs on one node only and may use threads.
 
 ### Other Jobscript Examples
 
diff --git a/docs.it4i/anselm/network.md b/docs.it4i/anselm/network.md
index 79c6f1a37f0d22f286e4de57dac097dcea8d19e8..f7890c08ed6582f3077b0aba8af84dbac73a30af 100644
--- a/docs.it4i/anselm/network.md
+++ b/docs.it4i/anselm/network.md
@@ -1,15 +1,15 @@
 # Network
 
-All compute and login nodes of Anselm are interconnected by [InfiniBand](http://en.wikipedia.org/wiki/InfiniBand) QDR network and by Gigabit [Ethernet](http://en.wikipedia.org/wiki/Ethernet) network. Both networks may be used to transfer user data.
+All of the compute and login nodes of Anselm are interconnected through an [InfiniBand](http://en.wikipedia.org/wiki/InfiniBand) QDR network and a Gigabit [Ethernet](http://en.wikipedia.org/wiki/Ethernet) network. Both networks may be used to transfer user data.
 
 ## InfiniBand Network
 
-All compute and login nodes of Anselm are interconnected by a high-bandwidth, low-latency [InfiniBand](http://en.wikipedia.org/wiki/InfiniBand) QDR network (IB 4 x QDR, 40 Gbps). The network topology is a fully non-blocking fat-tree.
+All of the compute and login nodes of Anselm are interconnected through a high-bandwidth, low-latency [InfiniBand](http://en.wikipedia.org/wiki/InfiniBand) QDR network (IB 4 x QDR, 40 Gbps). The network topology is a fully non-blocking fat-tree.
 
 The compute nodes may be accessed via the InfiniBand network using the ib0 network interface, in the address range 10.2.1.1-209. MPI may be used to establish a native InfiniBand connection among the nodes.
 
 !!! note
-    The network provides **2170 MB/s** transfer rates via the TCP connection (single stream) and up to **3600 MB/s** via native InfiniBand protocol.
+    The network provides **2170 MB/s** transfer rates via the TCP connection (single stream) and up to **3600 MB/s** via the native InfiniBand protocol.
 
 The Fat tree topology ensures that peak transfer rates are achieved between any two nodes, independent of network traffic exchanged among other nodes concurrently.
 
@@ -32,4 +32,4 @@ $ ssh 10.2.1.110
 $ ssh 10.1.1.108
 ```
 
-In this example, we access the node cn110 by InfiniBand network via the ib0 interface, then from cn110 to cn108 by Ethernet network.
+In this example, we access the node cn110 through the InfiniBand network via the ib0 interface, then from cn110 to cn108 through the Ethernet network.
diff --git a/docs.it4i/anselm/prace.md b/docs.it4i/anselm/prace.md
new file mode 100644
index 0000000000000000000000000000000000000000..28980cc7322649993089744ae6ba6caa13912d53
--- /dev/null
+++ b/docs.it4i/anselm/prace.md
@@ -0,0 +1,260 @@
+# PRACE User Support
+
+## Intro
+
+PRACE users coming to Anselm as a TIER-1 system offered through the DECI calls are in general treated as standard users, so most of the general documentation applies to them as well. This section highlights the main differences for quicker orientation, but often refers to the original documentation. PRACE users who don't undergo the full procedure (including signing the IT4I AuP on top of the PRACE AuP) will not have a password and thus no access to some services intended for regular users. This is inconvenient, but otherwise they should be able to use the TIER-1 system as intended. Please see the [Obtaining Login Credentials section](../general/obtaining-login-credentials/obtaining-login-credentials/) if the same level of access is required.
+
+All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation/) should be read before continuing to read the local documentation here.
+
+## Help and Support
+
+If you have any troubles, need information, require support or want to install additional software, please use the [PRACE Helpdesk](http://www.prace-ri.eu/helpdesk-guide264/).
+
+Information about the local services is provided in the [introduction of the general user documentation](introduction/). Please keep in mind that standard PRACE accounts don't have a password to access the web interface of the local (IT4Innovations) request tracker, and thus a new ticket should be created by sending an e-mail to support[at]it4i.cz.
+
+## Obtaining Login Credentials
+
+In general, PRACE users already have a PRACE account set up through their HOMESITE (the institution from their country) as a result of a rewarded PRACE project proposal. This includes a signed PRACE AuP, generated and registered certificates, etc.
+
+If there's a special need, a PRACE user can get a standard (local) account at IT4Innovations. To get an account on the Anselm cluster, the user needs to obtain login credentials. The procedure is the same as for general users of the cluster, so please see the corresponding section of the general documentation.
+
+## Accessing the Cluster
+
+### Access With GSI-SSH
+
+For all PRACE users, the method for interactive access (login) and data transfer based on grid services from the Globus Toolkit (GSI SSH and GridFTP) is supported.
+
+The user will need a valid certificate and to be present in the PRACE LDAP (please contact your HOME SITE or the primary investigator of your project for LDAP account creation).
+
+Most of the information needed by PRACE users accessing the Anselm TIER-1 system can be found here:
+
+* [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)
+* [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)
+* [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh)
+* [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details)
+* [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer)
+
+Before you start to use any of the services, don't forget to create a proxy certificate from your certificate:
+
+```console
+$ grid-proxy-init
+```
+
+To check whether your proxy certificate is still valid (by default it's valid for 12 hours), use:
+
+```console
+$ grid-proxy-info
+```
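+
+If the default 12 hour lifetime is too short for your session or transfer, a longer validity period can be requested when the proxy is created. The -valid option is a standard grid-proxy-init flag; the 24:00 (24 hours) value below is only an illustration:
+
+```console
+$ grid-proxy-init -valid 24:00
+```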
+
+To access the Anselm cluster, two login nodes running the GSI SSH service are available. The service is available publicly on the Internet as well as on the internal PRACE network (only accessible to other PRACE partners).
+
+#### Access From the PRACE Network:
+
+It is recommended to use the single DNS name anselm-prace.it4i.cz, which is distributed between the two login nodes. If needed, the user can log in directly to one of the login nodes. The addresses are:
+
+| Login address               | Port | Protocol | Login node       |
+| --------------------------- | ---- | -------- | ---------------- |
+| anselm-prace.it4i.cz        | 2222 | gsissh   | login1 or login2 |
+| login1-prace.anselm.it4i.cz | 2222 | gsissh   | login1           |
+| login2-prace.anselm.it4i.cz | 2222 | gsissh   | login2           |
+
+```console
+$ gsissh -p 2222 anselm-prace.it4i.cz
+```
+
+When logging in from another PRACE system, the prace_service script can be used:
+
+```console
+$ gsissh `prace_service -i -s anselm`
+```
+
+#### Public Access From the Internet:
+
+It is recommended to use the single DNS name anselm.it4i.cz, which is distributed between the two login nodes. If needed, the user can log in directly to one of the login nodes. The addresses are:
+
+| Login address         | Port | Protocol | Login node       |
+| --------------------- | ---- | -------- | ---------------- |
+| anselm.it4i.cz        | 2222 | gsissh   | login1 or login2 |
+| login1.anselm.it4i.cz | 2222 | gsissh   | login1           |
+| login2.anselm.it4i.cz | 2222 | gsissh   | login2           |
+
+```console
+$ gsissh -p 2222 anselm.it4i.cz
+```
+
+When logging in from another PRACE system, the prace_service script can be used:
+
+```console
+$ gsissh `prace_service -e -s anselm`
+```
+
+Although the preferred and recommended file transfer mechanism is [using GridFTP](prace/#file-transfers), the GSI SSH implementation on Anselm also supports SCP, so gsiscp can be used for small file transfers:
+
+```console
+$ gsiscp -P 2222 _LOCAL_PATH_TO_YOUR_FILE_ anselm.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_
+
+$ gsiscp -P 2222 anselm.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_ _LOCAL_PATH_TO_YOUR_FILE_
+
+$ gsiscp -P 2222 _LOCAL_PATH_TO_YOUR_FILE_ anselm-prace.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_
+
+$ gsiscp -P 2222 anselm-prace.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_ _LOCAL_PATH_TO_YOUR_FILE_
+```
+
+### Access to X11 Applications (VNC)
+
+If the user needs to run an X11 based graphical application and does not have an X11 server, the application can be run using a VNC service. If the user is using regular SSH based access, please see the relevant section in the general documentation.
+
+If the user uses GSI SSH based access, then the procedure is similar to the SSH based access, only the port forwarding must be done using GSI SSH:
+
+```console
+$ gsissh -p 2222 anselm.it4i.cz -L 5961:localhost:5961
+```
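+
+Once the tunnel is up, point the TurboVNC viewer at the forwarded local port. The port 5961 here corresponds to VNC display :61 (5900 + display number) and is only an illustration; substitute the display number your own VNC server reported:
+
+```console
+$ vncviewer localhost:5961
+```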
+
+### Access With SSH
+
+After successfully obtaining login credentials for the local IT4Innovations account, PRACE users can access the cluster as regular users using SSH. For more information, please see the section in the general documentation.
+
+## File Transfers
+
+PRACE users can use the same transfer mechanisms as regular users (if they've undergone the full registration procedure). For information about this, please see the relevant section in the general documentation.
+
+Apart from the standard mechanisms, for PRACE users to transfer data to/from the Anselm cluster, a GridFTP server running the Globus Toolkit GridFTP service is available. The service is available publicly over the Internet as well as from the internal PRACE network (accessible only to other PRACE partners).
+
+There is one control server and three backend servers for striping and/or backup in case one of them fails.
+
+### Access From the PRACE Network
+
+| Login address                | Port | Node role                   |
+| ---------------------------- | ---- | --------------------------- |
+| gridftp-prace.anselm.it4i.cz | 2812 | Front end / control server  |
+| login1-prace.anselm.it4i.cz  | 2813 | Backend / data mover server |
+| login2-prace.anselm.it4i.cz  | 2813 | Backend / data mover server |
+| dm1-prace.anselm.it4i.cz     | 2813 | Backend / data mover server |
+
+Copy files **to** Anselm by running the following commands on your local machine:
+
+```console
+$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp-prace.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
+```
+
+Or by using the prace_service script:
+
+```console
+$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -i -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
+```
+
+Copy files **from** Anselm with:
+
+```console
+$ globus-url-copy gsiftp://gridftp-prace.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
+```
+
+Or by using the prace_service script:
+
+```console
+$ globus-url-copy gsiftp://`prace_service -i -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
+```
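+
+Since there are multiple backend data mover servers, large transfers can additionally use parallel streams and striping. The -p (parallel TCP streams) and -stripe options are standard globus-url-copy flags; the stream count of 4 below is only an illustration:
+
+```console
+$ globus-url-copy -p 4 -stripe file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp-prace.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
+```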
+
+### Public Access From the Internet
+
+| Login address          | Port | Node role                   |
+| ---------------------- | ---- | --------------------------- |
+| gridftp.anselm.it4i.cz | 2812 | Front end / control server  |
+| login1.anselm.it4i.cz  | 2813 | Backend / data mover server |
+| login2.anselm.it4i.cz  | 2813 | Backend / data mover server |
+| dm1.anselm.it4i.cz     | 2813 | Backend / data mover server |
+
+Copy files **to** Anselm by running the following commands on your local machine:
+
+```console
+$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
+```
+
+Or by using the prace_service script:
+
+```console
+$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -e -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
+```
+
+Copy files **from** Anselm with:
+
+```console
+$ globus-url-copy gsiftp://gridftp.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
+```
+
+Or by using the prace_service script:
+
+```console
+$ globus-url-copy gsiftp://`prace_service -e -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
+```
+
+Generally, both shared file systems are available through GridFTP:
+
+| File system mount point | Filesystem | Comment                                                        |
+| ----------------------- | ---------- | -------------------------------------------------------------- |
+| /home                   | Lustre     | Default HOME directories of users in format /home/prace/login/ |
+| /scratch                | Lustre     | Shared SCRATCH mounted on the whole cluster                    |
+
+More information about the shared file systems is available [here](storage/).
+
+## Usage of the Cluster
+
+There are some limitations for PRACE users when using the cluster. By default, PRACE users aren't allowed to access the special queues in PBS Pro which provide high priority or exclusive access to special equipment such as accelerated nodes and high memory (fat) nodes. There may also be restrictions on obtaining a working license for the commercial software installed on the cluster, mostly because of license agreements or an insufficient number of licenses.
+
+For production runs, always use the scratch file systems, either the globally shared or the local ones. The available file systems are described [here](hardware-overview/).
+
+### Software, Modules, and the PRACE Common Production Environment
+
+All system-wide installed software on the cluster is made available to users via modules. Information about the environment and module usage is in this [section of the general documentation](environment-and-modules/).
+
+Via the "prace" module, PRACE users can use the [PRACE Common Production Environment](http://www.prace-ri.eu/prace-common-production-environment/):
+
+```console
+$ module load prace
+```
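+
+To verify that the PRACE Common Production Environment has been set up, the loaded modules can be listed with the standard module command:
+
+```console
+$ module list
+```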
+
+### Resource Allocation and Job Execution
+
+General information about resource allocation, job queuing, and job execution is in this [section of the general documentation](resources-allocation-policy/).
+
+For PRACE users, the default production run queue is "qprace". PRACE users can also use two other queues: "qexp" and "qfree".
+
+| queue                         | Active project | Project resources | Nodes               | priority | authorization | walltime  |
+| ----------------------------- | -------------- | ----------------- | ------------------- | -------- | ------------- | --------- |
+| **qexp** Express queue        | no             | none required     | 2 reserved, 8 total | high     | no            | 1 / 1h    |
+| **qprace** Production queue   | yes            | > 0               | 178 w/o accelerator | medium   | no            | 24 / 48 h |
+| **qfree** Free resource queue | yes            | none required     | 178 w/o accelerator | very low | no            | 12 / 12 h |
+
+**qprace**, the PRACE queue: This queue is intended for normal production runs. An active project with nonzero remaining resources must be specified to enter qprace. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprace is 48 hours. If the job needs a longer time, it must use checkpoint/restart functionality.
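+
+As a sketch of a typical submission (the jobscript name myjob.sh is a placeholder), a production run asking for 4 whole nodes in qprace with the maximum 48 hour walltime would use the standard PBS Pro qsub syntax:
+
+```console
+$ qsub -A PROJECT_ID -q qprace -l select=4:ncpus=16 -l walltime=48:00:00 ./myjob.sh
+```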
+
+### Accounting & Quota
+
+The resources that are currently subject to accounting are the core hours. The core hours are accounted on the basis of wall clock time. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. For example, a job holding 4 whole nodes (64 cores) for 3 hours of wall clock time is accounted 192 core hours, even if the cores sit idle. See the [example in the general documentation](resources-allocation-policy/).
+
+PRACE users should check their project accounting using the [PRACE Accounting Tool (DART)](http://www.prace-ri.eu/accounting-report-tool/).
+
+Users who have undergone the full local registration procedure (including signing the IT4Innovations Acceptable Use Policy) and who have received a local password may check at any time how many core hours they and their projects have consumed, using the "it4ifree" command.
+
+!!! note
+    You need to know your user password to use the command. Displayed core hours are "system core hours" which differ from PRACE "standardized core hours".
+
+!!! hint
+    The **it4ifree** command is a part of it4i.portal.clients package, [located here](https://pypi.python.org/pypi/it4i.portal.clients).
+
+```console
+$ it4ifree
+    Password:
+         PID    Total   Used   ...by me Free
+       -------- ------- ------ -------- -------
+       OPEN-0-0 1500000 400644   225265 1099356
+       DD-13-1    10000   2606     2606    7394
+```
+
+By default, a file system quota is applied. To check the current status of the quota, use:
+
+```console
+$ lfs quota -u USER_LOGIN /home
+$ lfs quota -u USER_LOGIN /scratch
+```
+
+If the quota is insufficient, please contact [support](prace/#help-and-support) and request an increase.
diff --git a/docs.it4i/anselm/remote-visualization.md.disable b/docs.it4i/anselm/remote-visualization.md.disable
index 93d5cd23b4fc8c2c6856a511dcb7a31cf4d4fb00..0a97df68104d25f9cd88c1c8b5da16e2f8fb69e1 100644
--- a/docs.it4i/anselm/remote-visualization.md.disable
+++ b/docs.it4i/anselm/remote-visualization.md.disable
@@ -2,11 +2,11 @@
 
 ## Introduction
 
-The goal of this service is to provide the users a GPU accelerated use of OpenGL applications, especially for pre- and post- processing work, where not only the GPU performance is needed but also fast access to the shared file systems of the cluster and a reasonable amount of RAM.
+The goal of this service is to provide users with GPU accelerated use of OpenGL applications, especially for pre- and post- processing work, where not only GPU performance is needed but also fast access to the shared file systems of the cluster and a reasonable amount of RAM.
 
-The service is based on integration of open source tools VirtualGL and TurboVNC together with the cluster's job scheduler PBS Professional.
+The service is based on integration of the open source tools VirtualGL and TurboVNC together with the cluster's job scheduler PBS Professional.
 
-Currently two compute nodes are dedicated for this service with following configuration for each node:
+Currently there are two dedicated compute nodes for this service with the following configuration for each node:
 
 | [**Visualization node configuration**](compute-nodes/) |                                         |
 | ------------------------------------------------------ | --------------------------------------- |
@@ -27,7 +27,7 @@ Currently two compute nodes are dedicated for this service with following config
 
 ### Setup and Start Your Own TurboVNC Server
 
-TurboVNC is designed and implemented for cooperation with VirtualGL and available for free for all major platforms. For more information and download, please refer to: <http://sourceforge.net/projects/turbovnc/>
+TurboVNC is designed and implemented for cooperation with VirtualGL and is available for free for all major platforms. For more information and download, please refer to: <http://sourceforge.net/projects/turbovnc/>
 
 **Always use TurboVNC on both sides** (server and client) **don't mix TurboVNC and other VNC implementations** (TightVNC, TigerVNC, ...) as the VNC protocol implementation may slightly differ and diminish your user experience by introducing picture artifacts, etc.
 
@@ -39,12 +39,12 @@ Please [follow the documentation](shell-and-data-access/).
 
 #### 2. Run Your Own Instance of TurboVNC Server
 
-To have the OpenGL acceleration, **24 bit color depth must be used**. Otherwise only the geometry (desktop size) definition is needed.
+To have OpenGL acceleration, **24 bit color depth must be used**. Otherwise only the geometry (desktop size) definition is needed.
 
 !!! hint
-    At first VNC server run you need to define a password.
+    The first time the VNC server is run you need to define a password.
 
-This example defines desktop with dimensions 1200x700 pixels and 24 bit color depth.
+This example defines a desktop with the dimensions of 1200x700 pixels and 24 bit color depth.
 
 ```console
 $ module load turbovnc/1.2.2
@@ -69,7 +69,7 @@ X DISPLAY # PROCESS ID
 
 In this example the VNC server runs on display **:1**.
 
-#### 4. Remember the Exact Login Node, Where Your VNC Server Runs
+#### 4. Remember the Exact Login Node Where Your VNC Server Runs
 
 ```console
 $ uname -n
@@ -113,11 +113,11 @@ Mind that you should connect through the SSH tunneled port. In this example it i
 $ vncviewer localhost:5901
 ```
 
-If you use Windows version of TurboVNC Viewer, just run the Viewer and use address **localhost:5901**.
+If you use the Windows version of TurboVNC Viewer, just run the Viewer and use the address **localhost:5901**.
 
 #### 9. Proceed to the Chapter "Access the Visualization Node"
 
-Now you should have working TurboVNC session connected to your workstation.
+Now you should have a working TurboVNC session connected to your workstation.
 
 #### 10. After You End Your Visualization Session
 
@@ -129,8 +129,8 @@ $ vncserver -kill :1
 
 ### Access the Visualization Node
 
-**To access the node use a dedicated PBS Professional scheduler queue
-qviz**. The queue has following properties:
+**To access the node use the dedicated PBS Professional scheduler queue
+qviz**. The queue has the following properties:
 
 | queue                        | active project | project resources | nodes | min ncpus | priority | authorization | walltime         |
 | ---------------------------- | -------------- | ----------------- | ----- | --------- | -------- | ------------- | ---------------- |
@@ -139,13 +139,13 @@ qviz**. The queue has following properties:
 Currently when accessing the node, each user gets 4 cores of a CPU allocated, thus approximately 16 GB of RAM and 1/4 of the GPU capacity.
 
 !!! note
-    If more GPU power or RAM is required, it is recommended to allocate one whole node per user, so that all 16 cores, whole RAM and whole GPU is exclusive. This is currently also the maximum allowed allocation per one user. One hour of work is allocated by default, the user may ask for 2 hours maximum.
+    If more GPU power or RAM is required, it is recommended to allocate one whole node per user, so that all 16 cores, the whole RAM, and the whole GPU are exclusive. This is currently also the maximum allocation allowed per user. One hour of work is allocated by default; the user may ask for 2 hours maximum.
 
 To access the visualization node, follow these steps:
 
-#### 1. In Your VNC Session, Open a Terminal and Allocate a Node Using PBSPro qsub Command
+#### 1. In Your VNC Session, Open a Terminal and Allocate a Node Using the PBSPro qsub Command
 
-This step is necessary to allow you to proceed with next steps.
+This step is necessary to allow you to proceed with the next steps.
 
 ```console
 $ qsub -I -q qviz -A PROJECT_ID
@@ -159,9 +159,9 @@ $ qsub -I -q qviz -A PROJECT_ID -l select=1:ncpus=16 -l walltime=02:00:00
 
 Substitute **PROJECT_ID** with the assigned project identification string.
 
-In this example a whole node for 2 hours is requested.
+In this example a whole node is requested for 2 hours.
 
-If there are free resources for your request, you will have a shell unning on an assigned node. Please remember the name of the node.
+If there are free resources for your request, you will have a shell running on an assigned node. Please remember the name of the node.
 
 ```console
 $ uname -n
@@ -178,7 +178,7 @@ Setup the VirtualGL connection to the node, which PBSPro allocated for our job.
 $ vglconnect srv8
 ```
 
-You will be connected with created VirtualGL tunnel to the visualization ode, where you will have a shell.
+You will be connected with the created VirtualGL tunnel to the visualization node, where you will have a shell.
 
 #### 3. Load the VirtualGL Module
 
@@ -186,13 +186,13 @@ You will be connected with created VirtualGL tunnel to the visualization ode, wh
 $ module load virtualgl/2.4
 ```
 
-#### 4. Run Your Desired OpenGL Accelerated Application Using VirtualGL Script "Vglrun"
+#### 4. Run Your Desired OpenGL Accelerated Application Using the VirtualGL Script "Vglrun"
 
 ```console
 $ vglrun glxgears
 ```
 
-If you want to run an OpenGL application which is vailable through modules, you need at first load the respective module. E.g. to run the **Mentat** OpenGL application from **MARC** software ackage use:
+If you want to run an OpenGL application which is available through modules, you need to first load the respective module. E.g. to run the **Mentat** OpenGL application from the **MARC** software package, use:
 
 ```console
 $ module load marc/2013.1
@@ -201,7 +201,7 @@ $ vglrun mentat
 
 #### 5. After You End Your Work With the OpenGL Application
 
-Just logout from the visualization node and exit both opened terminals nd end your VNC server session as described above.
+Just log out from the visualization node, exit both opened terminals, and end your VNC server session as described above.
 
 ## Tips and Tricks
 
diff --git a/docs.it4i/anselm/resource-allocation-and-job-execution.md b/docs.it4i/anselm/resource-allocation-and-job-execution.md
index 4585588dd18b1d308bd32c434dd2b09f50f9c154..a24b8151186555ca9c7be1bfb3063ff510b1ffe6 100644
--- a/docs.it4i/anselm/resource-allocation-and-job-execution.md
+++ b/docs.it4i/anselm/resource-allocation-and-job-execution.md
@@ -2,9 +2,9 @@
 
 To run a [job](job-submission-and-execution/), [computational resources](resources-allocation-policy/) for this particular job must be allocated. This is done via the PBS Pro job workload manager software, which efficiently distributes workloads across the supercomputer. Extensive information about PBS Pro can be found in the [official documentation here](../pbspro/), especially in the PBS Pro User's Guide.
 
-## Resources Allocation Policy
+## Resource Allocation Policy
 
-The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. [The Fair-share](job-priority/) at Anselm ensures that individual users may consume approximately equal amount of resources per week. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. Following queues are available to Anselm users:
+The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and the resources available to the Project. [The Fair-share](job-priority/) system of Anselm ensures that individual users may consume approximately equal amounts of resources per week. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following queues are available to Anselm users:
 
 * **qexp**, the Express queue
 * **qprod**, the Production queue
@@ -22,17 +22,17 @@ Read more on the [Resource AllocationPolicy](resources-allocation-policy/) page.
 !!! note
     Use the **qsub** command to submit your jobs.
 
-The qsub submits the job into the queue. The qsub command creates a request to the PBS Job manager for allocation of specified resources. The **smallest allocation unit is entire node, 16 cores**, with exception of the qexp queue. The resources will be allocated when available, subject to allocation policies and constraints. **After the resources are allocated the jobscript or interactive shell is executed on first of the allocated nodes.**
+The qsub submits the job into the queue. The qsub command creates a request to the PBS Job manager for allocation of specified resources. The **smallest allocation unit is an entire node, 16 cores**, with the exception of the qexp queue. The resources will be allocated when available, subject to allocation policies and constraints. **After the resources are allocated the jobscript or interactive shell is executed on the first of the allocated nodes.**
 
 Read more on the [Job submission and execution](job-submission-and-execution/) page.
 
 ## Capacity Computing
 
 !!! note
-    Use Job arrays when running huge number of jobs.
+    Use Job arrays when running a huge number of jobs.
 
 Use GNU Parallel and/or Job arrays when running (many) single core jobs.
 
-In many cases, it is useful to submit huge (100+) number of computational jobs into the PBS queue system. Huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving best runtime, throughput and computer utilization. In this chapter, we discuss the the recommended way to run huge number of jobs, including **ways to run huge number of single core jobs**.
+In many cases, it is useful to submit a huge (100+) number of computational jobs into the PBS queue system. A huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving the best runtime, throughput, and computer utilization. In this chapter, we discuss the recommended way to run a huge number of jobs, including **ways to run a huge number of single core jobs**.
 
-Read more on [Capacity computing](capacity-computing/) page.
+Read more on the [Capacity computing](capacity-computing/) page.
diff --git a/docs.it4i/anselm/resources-allocation-policy.md b/docs.it4i/anselm/resources-allocation-policy.md
index 25c527bc3a378dc3a94dd04a97b44fc7e376e9bb..47e3a6971ffdafb8a4510df9a8b4bdf9b3e5c211 100644
--- a/docs.it4i/anselm/resources-allocation-policy.md
+++ b/docs.it4i/anselm/resources-allocation-policy.md
@@ -2,38 +2,38 @@
 
 ## Job Queue Policies
 
-The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. The Fair-share at Anselm ensures that individual users may consume approximately equal amount of resources per week. Detailed information in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. Following table provides the queue partitioning overview:
+The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and the resources available to the Project. The Fair-share system of Anselm ensures that individual users may consume approximately equal amounts of resources per week. Detailed information can be found in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:
 
 !!! note
     Check the queue status at <https://extranet.it4i.cz/anselm/>
 
 | queue               | active project | project resources | nodes                                                | min ncpus | priority | authorization | walltime |
 | ------------------- | -------------- | ----------------- | ---------------------------------------------------- | --------- | -------- | ------------- | -------- |
-| qexp                | no             | none required     | 2 reserved, 31 totalincluding MIC, GPU               | 1         | 150      | no            | 1 h      |
-| qprod               | yes            | 0                 | 178 nodes w/o accelerator                            | 16        | 0        | no            | 24/48 h  |
-| qlong               | yes            | 0                 | 60 nodes w/o accelerator                             | 16        | 0        | no            | 72/144 h |
+| qexp                | no             | none required     | 209 nodes                                            | 1         | 150      | no            | 1 h      |
+| qprod               | yes            | 0                 | 180 nodes w/o accelerator                            | 16        | 0        | no            | 24/48 h  |
+| qlong               | yes            | 0                 | 180 nodes w/o accelerator                            | 16        | 0        | no            | 72/144 h |
 | qnvidia, qmic       | yes            | 0                 | 23 nvidia nodes, 4 mic nodes                         | 16        | 200      | yes           | 24/48 h  |
 | qfat                | yes            | 0                 | 2 fat nodes                                          | 16        | 200      | yes           | 24/144 h |
-| qfree               | yes            | none required     | 178 w/o accelerator                                  | 16        | -1024    | no            | 12 h     |
+| qfree               | yes            | none required     | 180 w/o accelerator                                  | 16        | -1024    | no            | 12 h     |
 
 !!! note
- **The qfree queue is not free of charge**. [Normal accounting](#resources-accounting-policy) applies. However, it allows for utilization of free resources, once a Project exhausted all its allocated computational resources. This does not apply for Directors Discreation's projects (DD projects) by default. Usage of qfree after exhaustion of DD projects computational resources is allowed after request for this queue.
+ **The qfree queue is not free of charge**. [Normal accounting](#resources-accounting-policy) applies. However, it allows for utilization of free resources, once a project has exhausted all its allocated computational resources. This does not apply to Director's Discretion projects (DD projects) by default. Usage of qfree after exhaustion of a DD project's computational resources is allowed upon request for this queue.
 
-**The qexp queue is equipped with the nodes not having the very same CPU clock speed.** Should you need the very same CPU speed, you have to select the proper nodes during the PSB job submission.
+**The qexp queue is equipped with nodes which do not have exactly the same CPU clock speed.** Should you need the nodes to have exactly the same CPU speed, you have to select the proper nodes during the PBS job submission.
 
-* **qexp**, the Express queue: This queue is dedicated for testing and running very small jobs. It is not required to specify a project to enter the qexp. There are 2 nodes always reserved for this queue (w/o accelerator), maximum 8 nodes are available via the qexp for a particular user, from a pool of nodes containing Nvidia accelerated nodes (cn181-203), MIC accelerated nodes (cn204-207). This enables to test and tune also accelerated code. The nodes may be allocated on per core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
-* **qprod**, the Production queue: This queue is intended for normal production runs. It is required that active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, except the reserved ones. 178 nodes without accelerator are included. Full nodes, 16 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
-* **qlong**, the Long queue: This queue is intended for long production runs. It is required that active project with nonzero remaining resources is specified to enter the qlong. Only 60 nodes without acceleration may be accessed via the qlong queue. Full nodes, 16 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times of the standard qprod time - 3 x 48 h).
-* **qnvidia**, **qmic**, **qfat**, the Dedicated queues: The queue qnvidia is dedicated to access the Nvidia accelerated nodes, the qmic to access MIC nodes and qfat the Fat nodes. It is required that active project with nonzero remaining resources is specified to enter these queues. 23 nvidia, 4 mic and 2 fat nodes are included. Full nodes, 16 cores per node are allocated. The queues run with very high priority, the jobs will be scheduled before the jobs coming from the qexp queue. An PI needs explicitly ask [support](https://support.it4i.cz/rt/) for authorization to enter the dedicated queues for all users associated to her/his Project.
-* **qfree**, The Free resource queue: The queue qfree is intended for utilization of free resources, after a Project exhausted all its allocated computational resources (Does not apply to DD projects by default. DD projects have to request for persmission on qfree after exhaustion of computational resources.). It is required that active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 16 cores per node are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
+* **qexp**, the Express queue: This queue is dedicated to testing and running very small jobs. It is not required to specify a project to enter the qexp. There are always 2 nodes reserved for this queue (w/o accelerators), and a maximum of 8 nodes are available via the qexp for a particular user, from a pool of nodes containing Nvidia accelerated nodes (cn181-203), MIC accelerated nodes (cn204-207) and Fat nodes with 512GB of RAM (cn208-209). This allows users to test and tune accelerated code as well as code with higher RAM requirements. The nodes may be allocated on a per-core basis. No special authorization is required to use qexp. The maximum runtime in qexp is 1 hour.
+* **qprod**, the Production queue: This queue is intended for normal production runs. It is required that an active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, except the reserved ones. 180 nodes without accelerators are included. Full nodes, 16 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
+* **qlong**, the Long queue: This queue is intended for long production runs. It is required that an active project with nonzero remaining resources is specified to enter the qlong. Only 180 nodes without accelerators may be accessed via the qlong queue. Full nodes, 16 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times that of the standard qprod time - 3 x 48 h).
+* **qnvidia**, **qmic**, **qfat**, the Dedicated queues: The queue qnvidia is dedicated to accessing the Nvidia accelerated nodes, the qmic to accessing MIC nodes, and qfat the Fat nodes. It is required that an active project with nonzero remaining resources is specified to enter these queues. 23 nvidia, 4 mic, and 2 fat nodes are included. Full nodes, 16 cores per node, are allocated. The queues run with very high priority; the jobs will be scheduled ahead of jobs coming from the qexp queue. A PI needs to explicitly ask [support](https://support.it4i.cz/rt/) for authorization to enter the dedicated queues for all users associated with her/his project.
+* **qfree**, the Free resource queue: The queue qfree is intended for utilization of free resources, after a project has exhausted all of its allocated computational resources (this does not apply to DD projects by default; DD projects have to request permission to use qfree after exhaustion of computational resources). It is required that an active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the project. Only 180 nodes without accelerators may be accessed from this queue. Full nodes, 16 cores per node, are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
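+
+For example, submission of a production job to the qprod queue may look as follows (a sketch only; the project ID OPEN-0-0 used elsewhere in these docs, the node count, and the jobscript name are placeholders):
+
+```console
+$ qsub -q qprod -A OPEN-0-0 -l select=4:ncpus=16 ./myjob
+```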
 
 ## Queue Notes
 
-The job wall clock time defaults to **half the maximum time**, see table above. Longer wall time limits can be  [set manually, see examples](job-submission-and-execution/).
+The job wall clock time defaults to **half the maximum time**, see the table above. Longer wall time limits can be [set manually, see examples](job-submission-and-execution/).
 
-Jobs that exceed the reserved wall clock time (Req'd Time) get killed automatically. Wall clock time limit can be changed for queuing jobs (state Q) using the qalter command, however can not be changed for a running job (state R).
+Jobs that exceed the reserved wall clock time (Req'd Time) get killed automatically. The wall clock time limit can be changed for queuing jobs (state Q) using the qalter command; however, it cannot be changed for a running job (state R).
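+
+For instance, the wall clock limit of a queued job may be raised with qalter (a sketch; the job ID 123456 is a placeholder):
+
+```console
+$ qalter -l walltime=48:00:00 123456
+```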
 
-Anselm users may check current queue configuration at <https://extranet.it4i.cz/anselm/queues>.
+Anselm users may check the current queue configuration at <https://extranet.it4i.cz/anselm/queues>.
 
 ## Queue Status
 
@@ -48,7 +48,7 @@ Display the queue status on Anselm:
 $ qstat -q
 ```
 
-The PBS allocation overview may be obtained also using the rspbs command.
+The PBS allocation overview may also be obtained using the rspbs command:
 
 ```console
 $ rspbs
diff --git a/docs.it4i/anselm/shell-and-data-access.md b/docs.it4i/anselm/shell-and-data-access.md
index 15294724775cf2d4c1d695f3f049cfb05ec9df8b..5373a678fb00a1a0bbc6ab5e7f9402c6b4a5c750 100644
--- a/docs.it4i/anselm/shell-and-data-access.md
+++ b/docs.it4i/anselm/shell-and-data-access.md
@@ -2,7 +2,7 @@
 
 ## Shell Access
 
-The Anselm cluster is accessed by SSH protocol via login nodes login1 and login2 at address anselm.it4i.cz. The login nodes may be addressed specifically, by prepending the login node name to the address.
+The Anselm cluster is accessed by SSH protocol via login nodes login1 and login2 at the address anselm.it4i.cz. The login nodes may be addressed specifically, by prepending the login node name to the address.
 
 | Login address         | Port | Protocol | Login node                                   |
 | --------------------- | ---- | -------- | -------------------------------------------- |
@@ -10,7 +10,7 @@ The Anselm cluster is accessed by SSH protocol via login nodes login1 and login2
 | login1.anselm.it4i.cz | 22   | ssh      | login1                                       |
 | login2.anselm.it4i.cz | 22   | ssh      | login2                                       |
 
-The authentication is by the [private key](../general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/)
+Authentication is by [private key](../general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/)
 
 !!! note
     Please verify SSH fingerprints during the first logon. They are identical on all login nodes:
@@ -27,13 +27,13 @@ The authentication is by the [private key](../general/accessing-the-clusters/she
 
 Private key authentication:
 
-On **Linux** or **Mac**, use
+On **Linux** or **Mac**, use:
 
 ```console
 $ ssh -i /path/to/id_rsa username@anselm.it4i.cz
 ```
 
-If you see warning message "UNPROTECTED PRIVATE KEY FILE!", use this command to set lower permissions to private key file.
+If you see a warning message "UNPROTECTED PRIVATE KEY FILE!", use this command to set lower permissions to the private key file:
 
 ```console
 $ chmod 600 /path/to/id_rsa
@@ -65,7 +65,7 @@ Example to the cluster login:
 
 ## Data Transfer
 
-Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy).
+Data in and out of the system may be transferred by the [scp](http://en.wikipedia.org/wiki/Secure_copy) and sftp protocols. (Not available yet.) If large volumes of data are transferred, use the dedicated data mover node dm1.anselm.it4i.cz for increased performance.
 
 | Address               | Port | Protocol  |
 | --------------------- | ---- | --------- |
@@ -73,19 +73,19 @@ Data in and out of the system may be transferred by the [scp](http://en.wikipedi
 | login1.anselm.it4i.cz | 22   | scp       |
 | login2.anselm.it4i.cz | 22   | scp       |
 
-The authentication is by the [private key](../general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/)
+Authentication is by [private key](../general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys/)
 
 !!! note
-    Data transfer rates up to **160MB/s** can be achieved with scp or sftp.
+    Data transfer rates of up to **160MB/s** can be achieved with scp or sftp.
 
     1TB may be transferred in 1:50h.
 
-To achieve 160MB/s transfer rates, the end user must be connected by 10G line all the way to IT4Innovations and use computer with fast processor for the transfer. Using Gigabit ethernet connection, up to 110MB/s may be expected.  Fast cipher (aes128-ctr) should be used.
+To achieve 160MB/s transfer rates, the end user must be connected by a 10G line all the way to IT4Innovations, and must use a computer with a fast processor for the transfer. When using a Gigabit ethernet connection, transfer rates of up to 110MB/s may be expected. A fast cipher (aes128-ctr) should be used.
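+
+As a sketch (paths and file names are placeholders), the fast cipher may be requested explicitly on the scp command line:
+
+```console
+$ scp -c aes128-ctr -i /path/to/id_rsa my-local-file username@anselm.it4i.cz:directory/
+```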
 
 !!! note
     If you experience degraded data transfer performance, consult your local network provider.
 
-On linux or Mac, use scp or sftp client to transfer the data to Anselm:
+On linux or Mac, use an scp or sftp client to transfer data to Anselm:
 
 ```console
 $ scp -i /path/to/id_rsa my-local-file username@anselm.it4i.cz:directory/file
@@ -101,7 +101,7 @@ or
 $ sftp -o IdentityFile=/path/to/id_rsa username@anselm.it4i.cz
 ```
 
-Very convenient way to transfer files in and out of the Anselm computer is via the fuse filesystem [sshfs](http://linux.die.net/man/1/sshfs)
+A very convenient way to transfer files in and out of Anselm is via the fuse filesystem [sshfs](http://linux.die.net/man/1/sshfs)
 
 ```console
 $ sshfs -o IdentityFile=/path/to/id_rsa username@anselm.it4i.cz:. mountpoint
@@ -109,7 +109,7 @@ $ sshfs -o IdentityFile=/path/to/id_rsa username@anselm.it4i.cz:. mountpoint
 
 Using sshfs, the user's Anselm home directory will be mounted on your local computer, just like an external disk.
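+
+To detach the mounted directory afterwards, release the mount (a sketch; on Linux a fuse mount is typically unmounted with):
+
+```console
+$ fusermount -u mountpoint
+```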
 
-Learn more on ssh, scp and sshfs by reading the manpages
+Learn more about ssh, scp and sshfs by reading the manpages:
 
 ```console
 $ man ssh
@@ -117,13 +117,13 @@ $ man scp
 $ man sshfs
 ```
 
-On Windows, use [WinSCP client](http://winscp.net/eng/download.php) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/) provides a way to mount the Anselm filesystems directly as an external disc.
+On Windows, use the [WinSCP client](http://winscp.net/eng/download.php) to transfer the data. The [win-sshfs client](http://code.google.com/p/win-sshfs/) provides a way to mount the Anselm filesystems directly as an external disc.
 
 More information about the shared file systems is available [here](storage/).
 
 ## Connection Restrictions
 
-Outgoing connections, from Anselm Cluster login nodes to the outside world, are restricted to following ports:
+Outgoing connections, from Anselm Cluster login nodes to the outside world, are restricted to the following ports:
 
 | Port | Protocol |
 | ---- | -------- |
@@ -135,28 +135,28 @@ Outgoing connections, from Anselm Cluster login nodes to the outside world, are
 !!! note
     Please use **ssh port forwarding** and proxy servers to connect from Anselm to all other remote ports.
 
-Outgoing connections, from Anselm Cluster compute nodes are restricted to the internal network. Direct connections form compute nodes to outside world are cut.
+Outgoing connections from Anselm Cluster compute nodes are restricted to the internal network. Direct connections from compute nodes to the outside world are cut.
 
 ## Port Forwarding
 
 ### Port Forwarding From Login Nodes
 
 !!! note
-    Port forwarding allows an application running on Anselm to connect to arbitrary remote host and port.
+    Port forwarding allows an application running on Anselm to connect to arbitrary remote hosts and ports.
 
-It works by tunneling the connection from Anselm back to users workstation and forwarding from the workstation to the remote host.
+It works by tunneling the connection from Anselm back to the user's workstation and forwarding from the workstation to the remote host.
 
-Pick some unused port on Anselm login node  (for example 6000) and establish the port forwarding:
+Pick some unused port on the Anselm login node (for example 6000) and establish the port forwarding:
 
 ```console
 $ ssh -R 6000:remote.host.com:1234 anselm.it4i.cz
 ```
 
-In this example, we establish port forwarding between port 6000 on Anselm and port 1234 on the remote.host.com. By accessing localhost:6000 on Anselm, an application will see response of remote.host.com:1234. The traffic will run via users local workstation.
+In this example, we establish port forwarding between port 6000 on Anselm and port 1234 on the remote.host.com. By accessing localhost:6000 on Anselm, an application will see the response of remote.host.com:1234. The traffic will run via the user's local workstation.
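+
+For example, assuming remote.host.com:1234 serves HTTP (an assumption for illustration only), an application on Anselm could now reach it through the tunnel:
+
+```console
+$ curl http://localhost:6000/
+```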
 
-Port forwarding may be done **using PuTTY** as well. On the PuTTY Configuration screen, load your Anselm configuration first. Then go to Connection->SSH->Tunnels to set up the port forwarding. Click Remote radio button. Insert 6000 to Source port textbox. Insert remote.host.com:1234. Click Add button, then Open.
+Port forwarding may be done **using PuTTY** as well. On the PuTTY Configuration screen, load your Anselm configuration first. Then go to Connection->SSH->Tunnels to set up the port forwarding. Click the Remote radio button. Insert 6000 into the Source port textbox. Insert remote.host.com:1234. Click the Add button, then Open.
 
-Port forwarding may be established directly to the remote host. However, this requires that user has ssh access to remote.host.com
+Port forwarding may be established directly to the remote host. However, this requires that the user has ssh access to remote.host.com
 
 ```console
 $ ssh -L 6000:localhost:1234 remote.host.com
@@ -167,26 +167,26 @@ $ ssh -L 6000:localhost:1234 remote.host.com
 
 ### Port Forwarding From Compute Nodes
 
-Remote port forwarding from compute nodes allows applications running on the compute nodes to access hosts outside Anselm Cluster.
+Remote port forwarding from compute nodes allows applications running on the compute nodes to access hosts outside the Anselm Cluster.
 
 First, establish the remote port forwarding from the login node, as [described above](#port-forwarding-from-login-nodes).
 
-Second, invoke port forwarding from the compute node to the login node. Insert following line into your jobscript or interactive shell
+Second, invoke port forwarding from the compute node to the login node. Insert the following line into your jobscript or interactive shell:
 
 ```console
 $ ssh  -TN -f -L 6000:localhost:6000 login1
 ```
 
-In this example, we assume that port forwarding from login1:6000 to remote.host.com:1234 has been established beforehand. By accessing localhost:6000, an application running on a compute node will see response of remote.host.com:1234
+In this example, we assume that port forwarding from login1:6000 to remote.host.com:1234 has been established beforehand. By accessing localhost:6000, an application running on a compute node will see the response of remote.host.com:1234
 
 ### Using Proxy Servers
 
-Port forwarding is static, each single port is mapped to a particular port on remote host. Connection to other remote host, requires new forward.
+Port forwarding is static; each single port is mapped to a particular port on a remote host. Connection to another remote host requires a new forward.
 
 !!! note
-    Applications with inbuilt proxy support, experience unlimited access to remote hosts, via single proxy server.
+    Applications with inbuilt proxy support experience unlimited access to remote hosts via a single proxy server.
 
-To establish local proxy server on your workstation, install and run SOCKS proxy server software. On Linux, sshd demon provides the functionality. To establish SOCKS proxy server listening on port 1080 run:
+To establish a local proxy server on your workstation, install and run SOCKS proxy server software. On Linux, the sshd daemon provides the functionality. To establish a SOCKS proxy server listening on port 1080, run:
 
 ```console
 $ ssh -D 1080 localhost
@@ -194,7 +194,7 @@ $ ssh -D 1080 localhost
 
 On Windows, install and run the free, open source [Sock Puppet](http://sockspuppet.com/) server.
 
-Once the proxy server is running, establish ssh port forwarding from Anselm to the proxy server, port 1080, exactly as [described above](#port-forwarding-from-login-nodes).
+Once the proxy server is running, establish ssh port forwarding from Anselm to the proxy server, port 1080, exactly as [described above](#port-forwarding-from-login-nodes):
 
 ```console
 $ ssh -R 6000:localhost:1080 anselm.it4i.cz
@@ -204,9 +204,9 @@ Now, configure the applications proxy settings to **localhost:6000**. Use port f
 
 ## Graphical User Interface
 
-* The [X Window system](../general/accessing-the-clusters/graphical-user-interface/x-window-system/) is a principal way to get GUI access to the clusters.
-* The [Virtual Network Computing](../general/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer).
+* The [X Window system](../general/accessing-the-clusters/graphical-user-interface/x-window-system/) is the principal way to get GUI access to the clusters.
+* [Virtual Network Computing](../general/accessing-the-clusters/graphical-user-interface/vnc/) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing) system that uses the [Remote Frame Buffer protocol](http://en.wikipedia.org/wiki/RFB_protocol) to remotely control another [computer](http://en.wikipedia.org/wiki/Computer).
 
 ## VPN Access
 
-* Access to IT4Innovations internal resources via [VPN](../general/accessing-the-clusters/vpn-access/).
+* Access IT4Innovations internal resources via [VPN](../general/accessing-the-clusters/vpn-access/).
diff --git a/docs.it4i/anselm/software/nvidia-cuda.md b/docs.it4i/anselm/software/nvidia-cuda.md
index 91251e132e59d3c86d1b60169f7da82cfae2fcee..406c5e4b6d3faddf2dd142a9815a644d9aedd1ed 100644
--- a/docs.it4i/anselm/software/nvidia-cuda.md
+++ b/docs.it4i/anselm/software/nvidia-cuda.md
@@ -4,26 +4,26 @@ Guide to NVIDIA CUDA Programming and GPU Usage
 
 ## CUDA Programming on Anselm
 
-The default programming model for GPU accelerators on Anselm is Nvidia CUDA. To set up the environment for CUDA use
+The default programming model for GPU accelerators on Anselm is Nvidia CUDA. To set up the environment for CUDA, use:
 
 ```console
 $ ml av cuda
 $ ml cuda **or** ml CUDA
 ```
 
-If the user code is hybrid and uses both CUDA and MPI, the MPI environment has to be set up as well. One way to do this is to use the PrgEnv-gnu module, which sets up correct combination of GNU compiler and MPI library.
+If the user code is hybrid and uses both CUDA and MPI, the MPI environment has to be set up as well. One way to do this is to use the PrgEnv-gnu module, which sets up the correct combination of the GNU compiler and MPI library:
 
 ```console
 $ ml PrgEnv-gnu
 ```
 
-CUDA code can be compiled directly on login1 or login2 nodes. User does not have to use compute nodes with GPU accelerator for compilation. To compile a CUDA source code, use nvcc compiler.
+CUDA code can be compiled directly on the login1 or login2 nodes. The user does not have to use compute nodes with GPU accelerators for compilation. To compile CUDA source code, use the nvcc compiler:
 
 ```console
 $ nvcc --version
 ```
 
-CUDA Toolkit comes with large number of examples, that can be helpful to start with. To compile and test these examples user should copy them to its home directory
+The CUDA Toolkit comes with a large number of examples which can be a helpful reference to start with. To compile and test these examples, users should copy them to their home directory:
 
 ```console
 $ cd ~
@@ -31,14 +31,14 @@ $ mkdir cuda-samples
 $ cp -R /apps/nvidia/cuda/6.5.14/samples/* ~/cuda-samples/
 ```
 
-To compile an examples, change directory to the particular example (here the example used is deviceQuery) and run "make" to start the compilation
+To compile the examples, change directory to the particular example (here the example used is deviceQuery) and run "make" to start the compilation:
 
 ```console
 $ cd ~/cuda-samples/1_Utilities/deviceQuery
 $ make
 ```
 
-To run the code user can use PBS interactive session to get access to a node from qnvidia queue (note: use your project name with parameter -A in the qsub command) and execute the binary file
+To run the code, the user can use a PBS interactive session to get access to a node from the qnvidia queue (note: use your project name with the -A parameter in the qsub command) and execute the binary file:
 
 ```console
 $ qsub -I -q qnvidia -A OPEN-0-0
@@ -46,7 +46,7 @@ $ ml cuda
 $ ~/cuda-samples/1_Utilities/deviceQuery/deviceQuery
 ```
 
-Expected output of the deviceQuery example executed on a node with Tesla K20m is
+The expected output of the deviceQuery example executed on a node with a Tesla K20m is:
 
 ```console
     CUDA Device Query (Runtime API) version (CUDART static linking)
@@ -179,13 +179,13 @@ int main( void ) {
 }
 ```
 
-This code can be compiled using following command
+This code can be compiled using the following command:
 
 ```console
 $ nvcc test.cu -o test_cuda
 ```
 
-To run the code use interactive PBS session to get access to one of the GPU accelerated nodes
+To run the code, use an interactive PBS session to get access to one of the GPU accelerated nodes:
 
 ```console
 $ qsub -I -q qnvidia -A OPEN-0-0
@@ -197,11 +197,11 @@ $ ./test.cuda
 
 ### cuBLAS
 
-The NVIDIA CUDA Basic Linear Algebra Subroutines (cuBLAS) library is a GPU-accelerated version of the complete standard BLAS library with 152 standard BLAS routines. Basic description of the library together with basic performance comparison with MKL can be found [here](https://developer.nvidia.com/cublas "Nvidia cuBLAS").
+The NVIDIA CUDA Basic Linear Algebra Subroutines (cuBLAS) library is a GPU-accelerated version of the complete standard BLAS library with 152 standard BLAS routines. A basic description of the library together with basic performance comparisons with MKL can be found [here](https://developer.nvidia.com/cublas "Nvidia cuBLAS").
 
 #### cuBLAS Example: SAXPY
 
-SAXPY function multiplies the vector x by the scalar alpha and adds it to the vector y overwriting the latest vector with the result. The description of the cuBLAS function can be found in [NVIDIA CUDA documentation](http://docs.nvidia.com/cuda/cublas/index.html#cublas-lt-t-gt-axpy "Nvidia CUDA documentation "). Code can be pasted in the file and compiled without any modification.
+The SAXPY function multiplies the vector x by the scalar alpha, and adds it to the vector y, overwriting the latest vector with the result. A description of the cuBLAS function can be found in [NVIDIA CUDA documentation](http://docs.nvidia.com/cuda/cublas/index.html#cublas-lt-t-gt-axpy "Nvidia CUDA documentation "). Code can be pasted in the file and compiled without any modification.
 
 ```cpp
 /* Includes, system */
@@ -286,7 +286,7 @@ int main(int argc, char **argv)
     - [cublasSetVector](http://docs.nvidia.com/cuda/cublas/index.html#cublassetvector) - transfers data from CPU to GPU memory
     - [cublasGetVector](http://docs.nvidia.com/cuda/cublas/index.html#cublasgetvector) - transfers data from GPU to CPU memory
 
-To compile the code using NVCC compiler a "-lcublas" compiler flag has to be specified:
+To compile the code using the NVCC compiler, the "-lcublas" compiler flag has to be specified:
 
 ```console
 $ ml cuda
@@ -300,7 +300,7 @@ $ ml cuda
 $ gcc -std=c99 test_cublas.c -o test_cublas_icc -lcublas -lcudart
 ```
 
-To compile the same code with Intel compiler:
+To compile the same code with an Intel compiler:
 
 ```console
 $ ml cuda
diff --git a/docs.it4i/index.md b/docs.it4i/index.md
index 1d095539fe07ef804b16c63539173cbf9625fc6c..326416a600b044125cbb7b5aba2f4b4935616632 100644
--- a/docs.it4i/index.md
+++ b/docs.it4i/index.md
@@ -1,10 +1,10 @@
 # Documentation
 
-Welcome to IT4Innovations documentation pages. The IT4Innovations national supercomputing center operates supercomputers [Salomon](/salomon/introduction/) and [Anselm](/anselm/introduction/). The supercomputers are [available](general/applying-for-resources/) to academic community within the Czech Republic and Europe and industrial community worldwide. The purpose of these pages is to provide a comprehensive documentation on hardware, software and usage of the computers.
+Welcome to the IT4Innovations documentation pages. The IT4Innovations national supercomputing center operates the supercomputers [Salomon](/salomon/introduction/) and [Anselm](/anselm/introduction/). The supercomputers are [available](general/applying-for-resources/) to the academic community within the Czech Republic and Europe, and the industrial community worldwide. The purpose of these pages is to provide comprehensive documentation of the hardware, software and usage of the computers.
 
 ## How to Read the Documentation
 
-1. Read the list in the left column. Select the subject of interest. Alternatively, use the Search in the upper right corner.
+1. Read the list in the left column. Select the subject of interest. Alternatively, use the Search tool in the upper right corner.
 1. Scan for all the notes and reminders on the page.
 1. Read the details if still more information is needed. **Look for examples** illustrating the concepts.
 
@@ -13,32 +13,32 @@ Welcome to IT4Innovations documentation pages. The IT4Innovations national super
 !!! note
     Contact [support\[at\]it4i.cz](mailto:support@it4i.cz) for help and support regarding the cluster technology at IT4Innovations. Please use **Czech**, **Slovak** or **English** language for communication with us. Follow the status of your request to IT4Innovations at [support.it4i.cz/rt](http://support.it4i.cz/rt).
 
-Use your IT4Innotations username and password to log in to the [support](http://support.it4i.cz/) portal.
+Use your IT4Innovations username and password to log in to the [support](http://support.it4i.cz/) portal.
 
 ## Required Proficiency
 
 !!! note
-    You need basic proficiency in Linux environment.
+    You need basic proficiency in Linux environments.
 
-In order to use the system for your calculations, you need basic proficiency in Linux environment. To gain the proficiency, we recommend you reading the [introduction to Linux](http://www.tldp.org/LDP/intro-linux/html/) operating system environment and installing a Linux distribution on your personal computer. A good choice might be the [CentOS](http://www.centos.org/) distribution, as it is similar to systems on the clusters at IT4Innovations. It's easy to install and use. In fact, any distribution would do.
+In order to use the system for your calculations, you need basic proficiency in Linux environments. To gain this proficiency, we recommend you read the [introduction to Linux](http://www.tldp.org/LDP/intro-linux/html/) operating system environment, and install a Linux distribution on your personal computer. A good choice might be the [CentOS](http://www.centos.org/) distribution, as it is similar to systems on the clusters at IT4Innovations. It's easy to install and use. In fact, any Linux distribution would do.
 
 !!! note
     Learn how to parallelize your code!
 
-In many cases, you will run your own code on the cluster. In order to fully exploit the cluster, you will need to carefully consider how to utilize all the cores available on the node and how to use multiple nodes at the same time. You need to **parallelize** your code. Proficieny in MPI, OpenMP, CUDA, UPC or GPI2 programming may be gained via the [training provided by IT4Innovations.](http://prace.it4i.cz)
+In many cases, you will run your own code on the cluster. In order to fully exploit the cluster, you will need to carefully consider how to utilize all the cores available on a node and how to use multiple nodes at the same time. You need to **parallelize** your code. Proficiency in MPI, OpenMP, CUDA, UPC, or GPI2 programming may be gained via [training provided by IT4Innovations.](http://prace.it4i.cz)
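+
+As a minimal illustration of what parallelizing a loop means, an OpenMP sketch in C might look like the following (a generic example for illustration only, not code from IT4Innovations):
+
+```c
+#include <omp.h>
+#include <stdio.h>
+
+int main(void) {
+    double sum = 0.0;
+    /* distribute the loop iterations across all cores available on the node */
+    #pragma omp parallel for reduction(+:sum)
+    for (int i = 1; i <= 1000000; i++) {
+        sum += 1.0 / i;
+    }
+    printf("harmonic sum: %f\n", sum);
+    return 0;
+}
+```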
 
 ## Terminology Frequently Used on These Pages
 
-* **node:** a computer, interconnected by network to other computers - Computational nodes are powerful computers, designed and dedicated for executing demanding scientific computations.
-* **core:** processor core, a unit of processor, executing computations
+* **node:** a computer, connected to other computers via a network - computational nodes are powerful computers, designed for and dedicated to executing demanding scientific computations.
+* **core:** a processor core, the unit of a processor that executes computations
 * **core-hour:** also normalized core-hour, NCH. A metric of computer utilization, [see definition](salomon/resources-allocation-policy/#normalized-core-hours-nch).
-* **job:** a calculation running on the supercomputer - The job allocates and utilizes resources of the supercomputer for certain time.
+* **job:** a calculation running on the supercomputer - the job allocates and utilizes the resources of the supercomputer for a certain time.
 * **HPC:** High Performance Computing
 * **HPC (computational) resources:** corehours, storage capacity, software licences
 * **code:** a program
 * **primary investigator (PI):** a person responsible for execution of computational project and utilization of computational resources allocated to that project
-* **collaborator:** a person participating on execution of computational project and utilization of computational resources allocated to that project
-* **project:** a computational project under investigation by the PI - The project is identified by the project ID. The computational resources are allocated and charged per project.
+* **collaborator:** a person participating in the execution of a computational project and utilization of computational resources allocated to that project
+* **project:** a computational project under investigation by the PI - the project is identified by the project ID. Computational resources are allocated and charged per project.
 * **jobscript:** a script to be executed by the PBS Professional workload manager
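+
+For illustration of the last term, a minimal jobscript for the PBS Professional workload manager might look like the sketch below (the project ID, queue name, resource request, and program name are hypothetical placeholders; consult the job submission pages for the actual values to use):
+
+```bash
+#!/bin/bash
+# charge the job to project OPEN-0-0 (placeholder project ID)
+#PBS -A OPEN-0-0
+# submit to the qprod queue, requesting 1 node with 24 cores for 1 hour
+#PBS -q qprod
+#PBS -l select=1:ncpus=24,walltime=01:00:00
+
+# change to the directory the job was submitted from
+cd $PBS_O_WORKDIR
+
+# run the calculation
+./mycalculation
+```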
 
 ## Conventions
@@ -57,7 +57,7 @@ Your local linux host command prompt
 local $
 ```
 
-## Errata
+## Errors
 
 Although we have taken every care to ensure the accuracy of the content, mistakes do happen.
 If you find an inconsistency or error, please report it by visiting <http://support.it4i.cz/rt>, creating a new ticket, and entering the details.
diff --git a/docs.it4i/salomon/introduction.md b/docs.it4i/salomon/introduction.md
index bc466a8d89a4fb78292e076744ca35563511a646..5e688273778808272e02e46fb9453828565f4647 100644
--- a/docs.it4i/salomon/introduction.md
+++ b/docs.it4i/salomon/introduction.md
@@ -1,10 +1,10 @@
 # Introduction
 
-Welcome to Salomon supercomputer cluster. The Salomon cluster consists of 1008 compute nodes, totaling 24192 compute cores with 129 TB RAM and giving over 2 PFLOP/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 24 cores, at least 128 GB RAM. Nodes are interconnected by 7D Enhanced hypercube InfiniBand network and equipped with Intel Xeon E5-2680v3 processors. The Salomon cluster consists of 576 nodes without accelerators and 432 nodes equipped with Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).
+Welcome to the Salomon supercomputer cluster. The Salomon cluster consists of 1008 compute nodes, totalling 24192 compute cores with 129 TB RAM and giving over 2 Pflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 24 cores and at least 128 GB RAM. Nodes are interconnected through a 7D Enhanced hypercube InfiniBand network and are equipped with Intel Xeon E5-2680v3 processors. The Salomon cluster consists of 576 nodes without accelerators and 432 nodes equipped with Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).
 
-The cluster runs [CentOS Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html) operating system, which is compatible with the RedHat [Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg)
+The cluster runs the [CentOS Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html) operating system, which is compatible with the RedHat [Linux family](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg).
 
-## Water-Cooled Compute Nodes With MIC Accelerator
+## Water-Cooled Compute Nodes With MIC Accelerators
 
 ![](../img/salomon.jpg)