diff --git a/.spelling b/.spelling
index fd17bf7f6f059c63d5fbcbc09234cb061f61d213..1cb0368c8bf540de371e775f49e4848e7a78d425 100644
--- a/.spelling
+++ b/.spelling
@@ -3,17 +3,24 @@
 # global dictionary is at the start, file overrides afterwards
 # one word per line, to define a file override use ' - filename'
 # where filename is relative to this configuration file
+COM
+.ssh
 Anselm
 IT4I
 IT4Innovations
 PBS
 Salomon
 TurboVNC
+VNC
 DDR3
 DIMM
 InfiniBand
 CUDA
+ORCA
 COMSOL
+API
+GNU
+NVIDIA
 LiveLink
 MATLAB
 Allinea
@@ -63,6 +71,7 @@ MPICH
 MVAPICH2
 OpenBLAS
 ScaLAPACK
+PAPI
 SGI
 UV2000
 400GB
@@ -251,7 +259,3 @@ r37u31n1008
 qsub
 it4ifree
 it4i.portal.clients
-API
-GNU
-CUDA
-NVIDIA
diff --git a/docs.it4i/anselm-cluster-documentation/capacity-computing.md b/docs.it4i/anselm-cluster-documentation/capacity-computing.md
index 5edbf72dd35d618a48d7655a27d3005370c14cc7..cab8b9c19c39ed2b1828768b484769dfc30a0255 100644
--- a/docs.it4i/anselm-cluster-documentation/capacity-computing.md
+++ b/docs.it4i/anselm-cluster-documentation/capacity-computing.md
@@ -31,7 +31,7 @@ A job array is a compact representation of many jobs, called subjobs. The subjob
 
 All subjobs within a job array have the same scheduling priority and schedule as independent jobs. Entire job array is submitted through a single qsub command and may be managed by qdel, qalter, qhold, qrls and qsig commands as a single job.
 
-### Shared jobscript
+### Shared Jobscript
 
 All subjobs in job array use the very same, single jobscript. Each subjob runs its own instance of the jobscript. The instances execute different work controlled by $PBS_ARRAY_INDEX variable.
 
@@ -161,7 +161,7 @@ $ module add parallel
 $ man parallel
 ```
 
-### GNU Parallel jobscript
+### GNU Parallel Jobscript
 
 The GNU parallel shell executes multiple instances of the jobscript using all cores on the node. The instances execute different work, controlled by the $PARALLEL_SEQ variable.
 
diff --git a/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md b/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
index 484a69076488b107ad085712476142c696038bf0..9ed9bb7cbabfec0328b3da68e67cd27daba12477 100644
--- a/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
+++ b/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
@@ -107,7 +107,9 @@ Options:
 
 ## Resources Accounting Policy
 
-### the Core-Hour
+### Core-Hours
 
 The resources that are currently subject to accounting are the core-hours. The core-hours are accounted on the wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. 1 core-hour is defined as 1 processor core allocated for 1 hour of wall clock time. Allocating a full node (16 cores) for 1 hour accounts to 16 core-hours. See example in the  [Job submission and execution](job-submission-and-execution/) section.
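+
+For example (illustrative arithmetic only): allocating 4 full nodes (4 x 16 = 64 cores) for 3 hours of wall clock time is accounted as 64 x 3 = 192 core-hours, regardless of how busy the cores actually were.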
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/compilers.md b/docs.it4i/anselm-cluster-documentation/software/compilers.md
index 5d8135e05ecb772a471af2eee33912452fa4f0ef..67c0bd30edcc631860ec8d853e0905729f8e5108 100644
--- a/docs.it4i/anselm-cluster-documentation/software/compilers.md
+++ b/docs.it4i/anselm-cluster-documentation/software/compilers.md
@@ -1,6 +1,6 @@
 # Compilers
 
-## Available Compilers, Including GNU, INTEL and UPC Compilers
+## Available Compilers, Including GNU, INTEL, and UPC Compilers
 
 Currently there are several compilers for different programming languages available on the Anselm cluster:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md
index 028d533e4b3a1e7491103d11bc5214cb4081fc5f..689bdf611508229df8611022283972632ff84fd9 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md
@@ -72,7 +72,7 @@ Measures the cost (in cycles) of basic PAPI operations.
 
 Prints information about the memory architecture of the current CPU.
 
-## Papi API
+## PAPI API
 
 PAPI provides two kinds of events:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md b/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md
index 75d1106bbec8f144bb989f259c731a23cdb4deef..5c0a71af18ba839622f6ae0d5eef8c9ec62ac285 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-xeon-phi.md
@@ -1,6 +1,6 @@
 # Intel Xeon Phi
 
-## a Guide to Intel Xeon Phi Usage
+## Guide to Intel Xeon Phi Usage
 
 Intel Xeon Phi can be programmed in several modes. The default mode on Anselm is offload mode, but all modes described in this document are supported.
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/paraview.md b/docs.it4i/anselm-cluster-documentation/software/paraview.md
index 62670987b5b3395aa26a3dc381d0a29cec8e48de..b7a350368cac78589729f1928b9b1cce9e1dd449 100644
--- a/docs.it4i/anselm-cluster-documentation/software/paraview.md
+++ b/docs.it4i/anselm-cluster-documentation/software/paraview.md
@@ -1,6 +1,6 @@
 # ParaView
 
-## an Open-Source, Multi-Platform Data Analysis and Visualization Application
+Open-Source, Multi-Platform Data Analysis and Visualization Application
 
 ## Introduction
 
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md
index 5d6877d3b03fdba6b7484245dfcf0468041408aa..f1c3573a84bd0e13a403e0b4b0566120585c1d22 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md
@@ -6,7 +6,7 @@ The X Window system is a principal way to get GUI access to the clusters.
 
 Read more about configuring [**X Window System**](x-window-system/).
 
-## Vnc
+## VNC
 
 The **Virtual Network Computing** (**VNC**) is a graphical [desktop sharing](http://en.wikipedia.org/wiki/Desktop_sharing "Desktop sharing") system that uses the  [Remote Frame Buffer protocol (RFB)](http://en.wikipedia.org/wiki/RFB_protocol "RFB protocol") to remotely control another [computer](http://en.wikipedia.org/wiki/Computer "Computer").
 
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
index 36af58cc175a4ce830c0b142ad3319096a915ded..ec5b7ffb4c6e7264d9cef6a8666e40943b04e9ee 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
@@ -19,7 +19,7 @@ After logging in, you can see .ssh/ directory with SSH keys and authorized_keys
 !!! Hint
     Private keys in .ssh directory are without passphrase and allow you to connect within the cluster.
 
-## Access Privileges on .Ssh Folder
+## Access Privileges on .ssh Folder
 
 -   .ssh directory: 700 (drwx------)
 -   Authorized_keys, known_hosts and public key (.pub file): 644 (-rw-r--r--)
diff --git a/docs.it4i/salomon/job-submission-and-execution.md b/docs.it4i/salomon/job-submission-and-execution.md
index 0bbf2fec1bd7c1b8b65c0c698f6b7f1e7e59cff2..96f8d21875ace8c7e2b51a647f8d8707661af175 100644
--- a/docs.it4i/salomon/job-submission-and-execution.md
+++ b/docs.it4i/salomon/job-submission-and-execution.md
@@ -127,18 +127,18 @@ In this example, we allocate nodes r24u35n680 and r24u36n681, all 24 cores per n
 
 ### Placement by Network Location
 
-Network location of allocated nodes in the [Infiniband network](network/) influences efficiency of network communication between nodes of job. Nodes on the same Infiniband switch communicate faster with lower latency than distant nodes. To improve communication efficiency of jobs, PBS scheduler on Salomon is configured to allocate nodes - from currently available resources - which are as close as possible in the network topology.
+Network location of allocated nodes in the [InfiniBand network](network/) influences the efficiency of network communication between the nodes of a job. Nodes on the same InfiniBand switch communicate faster, with lower latency, than distant nodes. To improve the communication efficiency of jobs, the PBS scheduler on Salomon is configured to allocate nodes - from currently available resources - which are as close as possible in the network topology.
 
-For communication intensive jobs it is possible to set stricter requirement - to require nodes directly connected to the same Infiniband switch or to require nodes located in the same dimension group of the Infiniband network.
+For communication-intensive jobs it is possible to set a stricter requirement - to require nodes directly connected to the same InfiniBand switch, or to require nodes located in the same dimension group of the InfiniBand network.
 
-### Placement by Infiniband Switch
+### Placement by InfiniBand Switch
 
-Nodes directly connected to the same Infiniband switch can communicate most efficiently. Using the same switch prevents hops in the network and provides for unbiased, most efficient network communication. There are 9 nodes directly connected to every Infiniband switch.
+Nodes directly connected to the same InfiniBand switch can communicate most efficiently. Using the same switch prevents hops in the network and provides for unbiased, most efficient network communication. There are 9 nodes directly connected to every InfiniBand switch.
 
 !!! Note "Note"
 	We recommend allocating compute nodes of a single switch when the best possible computational network performance is required to run job efficiently.
 
-Nodes directly connected to the one Infiniband switch can be allocated using node grouping on PBS resource attribute switch. 
+Nodes directly connected to a single InfiniBand switch can be allocated using node grouping on the PBS resource attribute switch.
 
 In this example, we request all 9 nodes directly connected to the same switch using node grouping placement.
 
@@ -146,12 +146,12 @@ In this example, we request all 9 nodes directly connected to the same switch us
 $ qsub -A OPEN-0-0 -q qprod -l select=9:ncpus=24 -l place=group=switch ./myjob
 ```
 
-### Placement by Specific Infiniband Switch
+### Placement by Specific InfiniBand Switch
 
 !!! Note "Note"
 	Not useful for ordinary computing, suitable for testing and management tasks.
 
-Nodes directly connected to the specific Infiniband switch can be selected using the PBS resource attribute _switch_.
+Nodes directly connected to a specific InfiniBand switch can be selected using the PBS resource attribute _switch_.
 
 In this example, we request all 9 nodes directly connected to r4i1s0sw1 switch.
 
@@ -159,7 +159,7 @@ In this example, we request all 9 nodes directly connected to r4i1s0sw1 switch.
 $ qsub -A OPEN-0-0 -q qprod -l select=9:ncpus=24:switch=r4i1s0sw1 ./myjob
 ```
 
-List of all Infiniband switches:
+List of all InfiniBand switches:
 
 ```bash
 $ qmgr -c 'print node @a' | grep switch | awk '{print $6}' | sort -u
@@ -172,7 +172,7 @@ r1i2s0sw0
 ...
 ```
 
-List of all all nodes directly connected to the specific Infiniband switch:
+List of all nodes directly connected to a specific InfiniBand switch:
 
 ```bash
 $ qmgr -c 'p n @d' | grep 'switch = r36sw3' | awk '{print $3}' | sort
diff --git a/docs.it4i/salomon/resources-allocation-policy.md b/docs.it4i/salomon/resources-allocation-policy.md
index c970dbdf48b91772ffcde8f4d44b2798a5b708f7..8f77c70f2eaa9f26629e5418c3b39c661427cf5a 100644
--- a/docs.it4i/salomon/resources-allocation-policy.md
+++ b/docs.it4i/salomon/resources-allocation-policy.md
@@ -113,7 +113,9 @@ Options:
 
 ## Resources Accounting Policy
 
-### the Core-Hour
+### Core-Hours
 
 The resources that are currently subject to accounting are the core-hours. The core-hours are accounted on the wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. 1 core-hour is defined as 1 processor core allocated for 1 hour of wall clock time. Allocating a full node (24 cores) for 1 hour accounts to 24 core-hours. See example in the [Job submission and execution](job-submission-and-execution/) section.
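+
+For example (illustrative arithmetic only): a 2-hour job on 3 full nodes (3 x 24 = 72 cores) is accounted as 72 x 2 = 144 core-hours, regardless of how busy the cores actually were.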
 
diff --git a/docs.it4i/salomon/software/ansys/licensing.md b/docs.it4i/salomon/software/ansys/licensing.md
index 115ef282ded14d67fa8c459c960028f2860a5f9b..ba4405f1a2ca525a338f2aadc09fc893a6ff1958 100644
--- a/docs.it4i/salomon/software/ansys/licensing.md
+++ b/docs.it4i/salomon/software/ansys/licensing.md
@@ -10,7 +10,7 @@
 
 The licence intended to be used for science and research, publications, students’ projects (academic licence).
 
-## ANSYS Com
+## ANSYS COM
 
 The licence intended to be used for science and research, publications, students’ projects, commercial research with no commercial use restrictions.
 
diff --git a/docs.it4i/salomon/software/chemistry/phono3py.md b/docs.it4i/salomon/software/chemistry/phono3py.md
index be354df1296f143038a35255139f80f6ea047a4b..35a5d1313797af3cde15ea042a39327ca00d66f7 100644
--- a/docs.it4i/salomon/software/chemistry/phono3py.md
+++ b/docs.it4i/salomon/software/chemistry/phono3py.md
@@ -37,7 +37,7 @@ Direct
    0.6250000000000000  0.6250000000000000  0.1250000000000000
 ```
 
-### Generating Displacement Using 2 X 2 X 2 Supercell for Both Second and Third Order Force Constants
+### Generating Displacements Using a 2 by 2 by 2 Supercell for Both Second- and Third-Order Force Constants
 
 ```bash
 $ phono3py -d --dim="2 2 2" -c POSCAR
diff --git a/docs.it4i/salomon/software/comsol/comsol-multiphysics.md b/docs.it4i/salomon/software/comsol/comsol-multiphysics.md
index 3b69447da096e1cbaaa8a5029a60600229605c17..d3c84a193a723d9042ba788ef687cde5290992be 100644
--- a/docs.it4i/salomon/software/comsol/comsol-multiphysics.md
+++ b/docs.it4i/salomon/software/comsol/comsol-multiphysics.md
@@ -1,4 +1,4 @@
-# COMSOL Multiphysics®
+# COMSOL Multiphysics
 
 ## Introduction
 
@@ -70,9 +70,9 @@ comsol -nn ${ntask} batch -configuration /tmp –mpiarg –rmk –mpiarg pbs -tm
 
 Working directory has to be created before sending the (comsol.pbs) job script into the queue. Input file (name_input_f.mph) has to be in working directory or full path to input file has to be specified. The appropriate path to the temp directory of the job has to be set by command option (-tmpdir).
 
-## LiveLink™\* \*For MATLAB®
+## LiveLink for MATLAB
 
-COMSOL is the software package for the numerical solution of the partial differential equations. LiveLink for MATLAB allows connection to the COMSOL®API (Application Programming Interface) with the benefits of the programming language and computing environment of the MATLAB.
+COMSOL is a software package for the numerical solution of partial differential equations. LiveLink for MATLAB allows connecting to the COMSOL API (Application Programming Interface) with the benefits of the MATLAB programming language and computing environment.
 
 LiveLink for MATLAB is available in both **EDU** and **COM** **variant** of the COMSOL release. On the clusters 1 commercial (**COM**) license and the 5 educational (**EDU**) licenses of LiveLink for MATLAB (please see the [ISV Licenses](../isv_licenses/)) are available. Following example shows how to start COMSOL model from MATLAB via LiveLink in the interactive mode.
 
diff --git a/docs.it4i/software/lmod.md b/docs.it4i/software/lmod.md
index f3ed85911852cb888ac61e5aaf25b12db25fb08b..5ba63f7e03762e356a0d74cfb4eb4826682314a6 100644
--- a/docs.it4i/software/lmod.md
+++ b/docs.it4i/software/lmod.md
@@ -26,7 +26,7 @@ Below you will find more details and examples.
 | ml save mycollection     | stores the currently loaded modules to a collection              |
 | ml restore mycollection  | restores a previously stored collection of modules               |
 
-## Listing Loaded Modules: Ml (Module Load)
+## Listing Loaded Modules
 
 To get an overview of the currently loaded modules, use module list or ml (without specifying extra arguments).
 
@@ -41,7 +41,7 @@ Currently Loaded Modules:
 !!! tip
     for more details on sticky modules, see the section on [ml purge](#resetting-by-unloading-all-modules-ml-purge-module-purge)
 
-## Searching for Available Modules: Ml Av (Module Avail) and Ml Spider
+## Searching for Available Modules
 
 To get an overview of all available modules, you can use module avail or simply ml av:
 
@@ -65,7 +65,7 @@ In the current module naming scheme, each module name consists of two parts:
 !!! tip
     The (D) indicates that this particular version of the module is the default, but we strongly recommend to not rely on this as the default can change at any point. Usuall, the default will point to the latest version available.
 
-## Searching for Modules: Ml Spider
+## Searching for Modules
 
 If you just provide a software name, for example gcc, it prints on overview of all available modules for GCC.
 
@@ -129,7 +129,7 @@ $ module spider GCC/6.2.0-2.27
 
 This tells you what the module contains and a URL to the homepage of the software.
 
-## Available Modules for a Particular Software Package: Ml Av <name>
+## Available Modules for a Particular Software Package
 
 To check which modules are available for a particular software package, you can provide the software name to ml av.
 For example, to check which versions of git are available:
@@ -165,7 +165,7 @@ Use "module spider" to find all possible modules.
 Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".
 ```
 
-## Inspecting a Module Using Ml Show
+## Inspecting a Module
 
 To see how a module would change the environment, use module show or ml show:
 
@@ -200,7 +200,7 @@ setenv("EBEXTSLISTPYTHON","setuptools-20.1.1,pip-8.0.2,nose-1.3.7")
 
 If you're not sure what all of this means: don't worry, you don't have to know; just try loading the module as try using the software.
 
-## Loading Modules: Ml &Lt;modname(s)> (Module Load &Lt;modname(s)>)
+## Loading Modules
 
 The effectively apply the changes to the environment that are specified by a module, use module load or ml and specify the name of the module.
 For example, to set up your environment to use intel:
@@ -260,7 +260,7 @@ $ which gcc
 /usr/bin/gcc
 ```
 
-## Resetting by Unloading All Modules: Ml Purge (Module Purge)
+## Resetting by Unloading All Modules
 
 To reset your environment back to a clean state, you can use module purge or ml purge:
 
@@ -282,7 +282,7 @@ No modules loaded
 
 As such, you should not (re)load the cluster module anymore after running ml purge. See also here.
 
-## Module Collections: Ml Save, Ml Restore
+## Module Collections
 
 If you have a set of modules that you need to load often, you can save these in a collection (only works with Lmod).
 
diff --git a/docs.it4i/software/orca.md b/docs.it4i/software/orca.md
index c556446592cadde27aaa4e749b0ac1f0bc203692..6a8769c65c1033f1b55fecf26a1e7855bc3a9da6 100644
--- a/docs.it4i/software/orca.md
+++ b/docs.it4i/software/orca.md
@@ -2,7 +2,7 @@
 
 ORCA is a flexible, efficient and easy-to-use general purpose tool for quantum chemistry with specific emphasis on spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods ranging from semiempirical methods to DFT to single- and multireference correlated ab initio methods. It can also treat environmental and relativistic effects.
 
-## Making Orca Available
+## Making ORCA Available
 
 The following module command makes the latest version of orca available to your session