diff --git a/docs.it4i/anselm-cluster-documentation/network.md b/docs.it4i/anselm-cluster-documentation/network.md
index 308d7caa0699e93521ba9757f22b80c19cf190cc..a659adf7073270c651af20d76c4b6a926544762d 100644
--- a/docs.it4i/anselm-cluster-documentation/network.md
+++ b/docs.it4i/anselm-cluster-documentation/network.md
@@ -5,18 +5,18 @@ All compute and login nodes of Anselm are interconnected by [InfiniBand](http://
 
 InfiniBand Network
 ------------------
-All compute and login nodes of Anselm are interconnected by a high-bandwidth, low-latency [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) QDR network (IB 4x QDR, 40 Gbps). The network topology is a fully non-blocking fat-tree.
+All compute and login nodes of Anselm are interconnected by a high-bandwidth, low-latency [Infiniband](http://en.wikipedia.org/wiki/InfiniBand) QDR network (IB 4 x QDR, 40 Gbps). The network topology is a fully non-blocking fat-tree.
 
 The compute nodes may be accessed via the Infiniband network using ib0 network interface, in address range 10.2.1.1-209. The MPI may be used to establish native Infiniband connection among the nodes.
 
 !!! Note "Note"
-	The network provides **2170MB/s** transfer rates via the TCP connection (single stream) and up to **3600MB/s** via native Infiniband protocol.
+	The network provides **2170 MB/s** transfer rates via the TCP connection (single stream) and up to **3600 MB/s** via native Infiniband protocol.
 
 The Fat tree topology ensures that peak transfer rates are achieved between any two nodes, independent of network traffic exchanged among other nodes concurrently.
 
 Ethernet Network
 ----------------
-The compute nodes may be accessed via the regular Gigabit Ethernet network interface eth0, in address range 10.1.1.1-209, or by using aliases cn1-cn209. The network provides **114MB/s** transfer rates via the TCP connection.
+The compute nodes may be accessed via the regular Gigabit Ethernet network interface eth0, in address range 10.1.1.1-209, or by using aliases cn1-cn209. The network provides **114 MB/s** transfer rates via the TCP connection.
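+
+As a quick illustration (a minimal sketch: cn17 and 10.2.1.17 are hypothetical examples, and the node is assumed to be allocated to your job):
+
+```bash
+# reach a compute node over the Gigabit Ethernet network (eth0) by its alias
+$ ssh cn17
+
+# or reach it over the InfiniBand network (ib0) by its 10.2.1.x address
+$ ssh 10.2.1.17
+```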
 
 Example
 -------
diff --git a/docs.it4i/anselm-cluster-documentation/remote-visualization.md b/docs.it4i/anselm-cluster-documentation/remote-visualization.md
index d6466340a5daea218cdd9c65e21423f5cc8130bd..a7de22ab470a65b4c628856fecf98939a52e7a1b 100644
--- a/docs.it4i/anselm-cluster-documentation/remote-visualization.md
+++ b/docs.it4i/anselm-cluster-documentation/remote-visualization.md
@@ -11,10 +11,10 @@ Currently two compute nodes are dedicated for this service with following config
 
 |[**Visualization node configuration**](compute-nodes/)||
 |---|---|
-|CPU|2x Intel Sandy Bridge E5-2670, 2.6GHz|
-|Processor cores|16 (2x8 cores)|
+|CPU|2 x Intel Sandy Bridge E5-2670, 2.6 GHz|
+|Processor cores|16 (2 x 8 cores)|
 |RAM|64 GB, min. 4 GB per core|
-|GPU|NVIDIA Quadro 4000, 2GB RAM|
+|GPU|NVIDIA Quadro 4000, 2 GB RAM|
 |Local disk drive|yes - 500 GB|
 |Compute network|InfiniBand QDR|
 
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
index d79c5c45463087b0d54e5ed5f26400801ee308e9..8e970f764e8d55a284b95249d234686f36de725d 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
@@ -109,7 +109,7 @@ xserver-xephyr, on OS X it is part of [XQuartz](http://xquartz.macosforge.org/la
 local $ Xephyr -ac -screen 1024x768 -br -reset -terminate :1 &
 ```
 
-This will open a new X window with size 1024x768 at DISPLAY :1. Next, ssh to the cluster with DISPLAY environment variable set and launch  gnome-session
+This will open a new X window of size 1024 x 768 at DISPLAY :1. Next, ssh to the cluster with the DISPLAY environment variable set and launch gnome-session:
 
 ```bash
 local $ DISPLAY=:1.0 ssh -XC yourname@cluster-name.it4i.cz -i ~/.ssh/path_to_your_key
diff --git a/docs.it4i/salomon/compute-nodes.md b/docs.it4i/salomon/compute-nodes.md
index d30f29688e9d42a7e496a8bb1bf51057e6d455ee..0504b444b293ef127c3ed7e275298ad3a869a282 100644
--- a/docs.it4i/salomon/compute-nodes.md
+++ b/docs.it4i/salomon/compute-nodes.md
@@ -13,7 +13,7 @@ Compute nodes with MIC accelerator **contains two Intel Xeon Phi 7120P accelerat
 -   codename "grafton"
 -   576 nodes
 -   13 824 cores in total
--   two Intel Xeon E5-2680v3, 12-core, 2.5GHz processors per node
+-   two Intel Xeon E5-2680v3, 12-core, 2.5 GHz processors per node
 -   128 GB of physical memory per node
 
 ![cn_m_cell](../img/cn_m_cell)
@@ -23,9 +23,9 @@ Compute nodes with MIC accelerator **contains two Intel Xeon Phi 7120P accelerat
 -   codename "perrin"
 -   432 nodes
 -   10 368 cores in total
--   two Intel Xeon E5-2680v3, 12-core, 2.5GHz processors per node
+-   two Intel Xeon E5-2680v3, 12-core, 2.5 GHz processors per node
 -   128 GB of physical memory per node
--   MIC accelerator 2x Intel Xeon Phi 7120P per node, 61-cores, 16GB per     accelerator
+-   MIC accelerator: 2 x Intel Xeon Phi 7120P per node, 61 cores, 16 GB per accelerator
 
 ![cn_mic](../img/cn_mic-1)
 
@@ -38,9 +38,9 @@ Compute nodes with MIC accelerator **contains two Intel Xeon Phi 7120P accelerat
 -   codename "UV2000"
 -   1 node
 -   112 cores in total
--   14x Intel Xeon E5-4627v2, 8-core, 3.3GHz processors, in 14 NUMA     nodes
+-   14 x Intel Xeon E5-4627v2, 8-core, 3.3 GHz processors, in 14 NUMA nodes
 -   3328 GB of physical memory per node
--   1x NVIDIA GM200 (GeForce GTX TITAN X), 12GB RAM
+-   1 x NVIDIA GM200 (GeForce GTX TITAN X), 12 GB RAM
 
 ![](../img/uv-2000.jpeg)
 
@@ -48,8 +48,8 @@ Compute nodes with MIC accelerator **contains two Intel Xeon Phi 7120P accelerat
 
  |Node type |Count |Memory |Cores |
  | --- | --- | --- | --- |
- |Nodes without accelerator |576 |128GB |24 @ 2.5Ghz |
- |Nodes with MIC accelerator |432 |128GB<p>32GB\ |<p>24 @ 2.5Ghz<p>61 @  1.238GHz\ |
+ |Nodes without accelerator |576 |128 GB |24 @ 2.5 GHz |
+ |Nodes with MIC accelerator |432 |128 GB<p>32 GB\ |<p>24 @ 2.5 GHz<p>61 @ 1.238 GHz\ |
  |UV2000 SMP node |1 |3328GB\ |<p>112 @ 3.3GHz\ |
 
 Processor Architecture
diff --git a/docs.it4i/salomon/hardware-overview.md b/docs.it4i/salomon/hardware-overview.md
index 90ee2595e069151d6afb07662a7d717536dc207e..c00647be2e8ca5b5c07ffddbee1109855bb68bc4 100644
--- a/docs.it4i/salomon/hardware-overview.md
+++ b/docs.it4i/salomon/hardware-overview.md
@@ -3,7 +3,7 @@ Hardware Overview
 
 Introduction
 ------------
-The Salomon cluster consists of 1008 computational nodes of which 576 are regular compute nodes and 432 accelerated nodes. Each node is a  powerful x86-64 computer, equipped with 24 cores (two twelve-core Intel Xeon processors) and 128GB RAM. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 0.5PB /home NFS disk storage to store the user files. Users may use a DDN Lustre shared storage with capacity of 1.69 PB which is available for the scratch project data. The user access to the Salomon cluster is provided by four login nodes.
+The Salomon cluster consists of 1008 computational nodes, of which 576 are regular compute nodes and 432 are accelerated nodes. Each node is a powerful x86-64 computer, equipped with 24 cores (two twelve-core Intel Xeon processors) and 128 GB RAM. The nodes are interlinked by high-speed InfiniBand and Ethernet networks. All nodes share 0.5 PB of /home NFS disk storage for user files. Users may use a DDN Lustre shared storage with a capacity of 1.69 PB, which is available for scratch project data. User access to the Salomon cluster is provided by four login nodes.
 
 [More about schematic representation of the Salomon cluster compute nodes IB topology](ib-single-plane-topology/).
 
@@ -18,11 +18,11 @@ General information
 |---|---|
 |Primary purpose|High Performance Computing|
 |Architecture of compute nodes|x86-64|
-|Operating system|CentOS 6.7 Linux|
+|Operating system|CentOS 6.x Linux|
 |[**Compute nodes**](compute-nodes/)||
 |Totally|1008|
-|Processor|2x Intel Xeon E5-2680v3, 2.5GHz, 12cores|
-|RAM|128GB, 5.3GB per core, DDR4@2133 MHz|
+|Processor|2 x Intel Xeon E5-2680v3, 2.5 GHz, 12 cores|
+|RAM|128 GB, 5.3 GB per core, DDR4@2133 MHz|
 |Local disk drive|no|
 |Compute network / Topology|InfiniBand FDR56 / 7D Enhanced hypercube|
 |w/o accelerator|576|
@@ -36,8 +36,8 @@ Compute nodes
 
 |Node|Count|Processor|Cores|Memory|Accelerator|
 |---|---|---|---|---|---|
-|w/o accelerator|576|2x Intel Xeon E5-2680v3, 2.5GHz|24|128GB|-|
-|MIC accelerated|432|2x Intel Xeon E5-2680v3, 2.5GHz|24|128GB|2x Intel Xeon Phi 7120P, 61cores, 16GB RAM|
+|w/o accelerator|576|2 x Intel Xeon E5-2680v3, 2.5 GHz|24|128 GB|-|
+|MIC accelerated|432|2 x Intel Xeon E5-2680v3, 2.5 GHz|24|128 GB|2 x Intel Xeon Phi 7120P, 61 cores, 16 GB RAM|
 
 For more details please refer to the [Compute nodes](compute-nodes/).
 
@@ -47,7 +47,7 @@ For remote visualization two nodes with NICE DCV software are available each con
 
 |Node|Count|Processor|Cores|Memory|GPU Accelerator|
 |---|---|---|---|---|---|
-|visualization|2|2x Intel Xeon E5-2695v3, 2.3GHz|28|512GB|NVIDIA QUADRO K5000, 4GB RAM|
+|visualization|2|2 x Intel Xeon E5-2695v3, 2.3 GHz|28|512 GB|NVIDIA QUADRO K5000, 4 GB RAM|
 
 SGI UV 2000
 -----------
@@ -55,6 +55,6 @@ For large memory computations a special SMP/NUMA SGI UV 2000 server is available
 
 |Node |Count |Processor |Cores|Memory|Extra HW |
 | --- | --- | --- | --- | --- | --- |
-|UV2000 |1 |14 x Intel Xeon E5-4627v2, 3.3 GHz, 8 cores |112 |3328 GB DDR3@1866 MHz |2 x 400GB local SSD1x NVIDIA GM200(GeForce GTX TITAN X),12 GB RAM |
+|UV2000 |1 |14 x Intel Xeon E5-4627v2, 3.3 GHz, 8 cores |112 |3328 GB DDR3@1866 MHz |2 x 400 GB local SSD<br/>1 x NVIDIA GM200 (GeForce GTX TITAN X), 12 GB RAM |
 
 ![](../img/uv-2000.jpeg)
diff --git a/docs.it4i/salomon/software/chemistry/phono3py.md b/docs.it4i/salomon/software/chemistry/phono3py.md
index d23a8b04bda061932e0268d14fbaa59df2978e4b..be2ed6ed12aa2c992724cda0ac1d6c36e729df4c 100644
--- a/docs.it4i/salomon/software/chemistry/phono3py.md
+++ b/docs.it4i/salomon/software/chemistry/phono3py.md
@@ -39,7 +39,7 @@ Direct
    0.6250000000000000  0.6250000000000000  0.1250000000000000
 ```
 
-### Generating displacement using 2x2x2 supercell for both second and third order force constants
+### Generating displacements using a 2 x 2 x 2 supercell for both second- and third-order force constants
 
 ```bash
 $ phono3py -d --dim="2 2 2" -c POSCAR
diff --git a/docs.it4i/salomon/storage.md b/docs.it4i/salomon/storage.md
index 5f8306aa266b7740847b637922ea622c81f6d6ff..e57e8fc3137c06ae32a27dac4fdee19ed2f9ea6a 100644
--- a/docs.it4i/salomon/storage.md
+++ b/docs.it4i/salomon/storage.md
@@ -26,7 +26,7 @@ Salomon computer provides two main shared filesystems, the [ HOME filesystem](#h
 
 ###HOME filesystem
 
-The HOME filesystem is realized as a Tiered filesystem, exported via NFS. The first tier has capacity 100TB, second tier has capacity 400TB. The filesystem is available on all login and computational nodes. The Home filesystem hosts the [HOME workspace](#home).
+The HOME filesystem is realized as a tiered filesystem exported via NFS. The first tier has a capacity of 100 TB, the second tier a capacity of 400 TB. The filesystem is available on all login and computational nodes. The HOME filesystem hosts the [HOME workspace](#home).
 
 ###SCRATCH filesystem
 
@@ -36,13 +36,13 @@ Configuration of the SCRATCH Lustre storage
 
 -    SCRATCH Lustre object storage
     -   Disk array SFA12KX
-    -   540 4TB SAS 7.2krpm disks
+    -   540 4 TB SAS 7.2 krpm disks
     -   54 OSTs of 10 disks in RAID6 (8+2)
     -   15 hot-spare disks
-    -   4x 400GB SSD cache
+    -   4 x 400 GB SSD cache
 -    SCRATCH Lustre metadata storage
     -   Disk array EF3015
-    -   12 600GB SAS 15krpm disks
+    -   12 600 GB SAS 15 krpm disks
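+
+As a rough sanity check on these numbers (a back-of-the-envelope estimate that ignores filesystem overhead): 54 OSTs x 8 data disks per RAID6 (8+2) group x 4 TB per disk gives about 1.7 PB, which is consistent with the 1.69 PB scratch capacity quoted in the hardware overview.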
 
 ### Understanding the Lustre Filesystems
 
@@ -56,7 +56,7 @@ If multiple clients try to read and write the same part of a file at the same ti
 
 There is default stripe configuration for Salomon Lustre filesystems. However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance:
 
-1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; default is 1MB for all Salomon Lustre filesystems
+1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of kB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; default is 1 MB for all Salomon Lustre filesystems
 2. stripe_count the number of OSTs to stripe across; default is 1 for Salomon Lustre filesystems  one can specify -1 to use all OSTs in the filesystem.
 3. stripe_offset The index of the OST where the first stripe is to be placed; default is -1 which results in random selection; using a non-default value is NOT recommended.
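+
+A minimal sketch of how these parameters can be inspected and set with the `lfs` utility (the target directory is only an illustrative example, and exact option letters may vary slightly between Lustre versions):
+
+```bash
+# show the current striping of a directory or file
+$ lfs getstripe /scratch/work/user/username/mydir
+
+# stripe new files created in the directory across 4 OSTs with a 1 MB stripe size
+$ lfs setstripe -c 4 -S 1m /scratch/work/user/username/mydir
+```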