diff --git a/docs.it4i/salomon/storage.md b/docs.it4i/salomon/storage.md
index 1a83a2b6e854e6607c695ea3f7e4a62b9f4ad4d3..640844b2ee9c3fbbb2b248277a6db8c45a4ff4dd 100644
--- a/docs.it4i/salomon/storage.md
+++ b/docs.it4i/salomon/storage.md
@@ -61,35 +61,35 @@ Disk usage and user quotas can be checked and reviewed using the following comma
 $ it4i-disk-usage
 ```
 
-Example:
+Example for Salomon:
 
 ```console
 $ it4i-disk-usage -h
 # Using human-readable format
-# Using power of 1024 for space
+# Using power of 1000 for space
 # Using power of 1000 for entries
 
 Filesystem:    /home
-Space used:    110GiB
-Space limit:   238GiB
+Space used:    110GB
+Space limit:   250GB
 Entries:       40K
 Entries limit: 500K
 # based on filesystem quota
 
 Filesystem:    /scratch
-Space used:    377GiB
-Space limit:   93TiB
+Space used:    377GB
+Space limit:   100TB
 Entries:       14K
 Entries limit: 10M
 # based on Lustre quota
 
 Filesystem:    /scratch
-Space used:    377GiB
+Space used:    377GB
 Entries:       14K
 # based on Robinhood
 
 Filesystem:    /scratch/work
-Space used:    377GiB
+Space used:    377GB
 Entries:       14K
 Entries:       40K
 Entries limit: 1.0M
@@ -102,7 +102,7 @@ Entries:       6
 ```
 
 In this example, we view current size limits and space occupied on the /home and /scratch filesystem, for a particular user executing the command.
-Note that limits are imposed also on number of objects (files, directories, links, etc...) that are allowed to create.
+Note that limits are also imposed on the number of objects (files, directories, links, etc.) that the user is allowed to create.
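+
+One quick way to see which top-level directories hold the most entries is to count them with standard shell tools; a minimal sketch (it counts every file and subdirectory below each directory of the current working directory):
+
+```console
+$ for d in */; do echo "$(find "$d" | wc -l) $d"; done | sort -nr | head
+```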
 
 To have a better understanding of where the space is exactly used, use the following command:
 
@@ -122,7 +122,7 @@ $ du -hs * .[a-zA-z0-9]* | grep -E "[0-9]*G|[0-9]*M" | sort -hr
 2,7M     .idb_13.0_linux_intel64_app
 ```
 
-This will list all directories with MegaBytes or GigaBytes of consumed space in your actual (in this example HOME) directory. List is sorted in descending order from largest to smallest files/directories.
+This will list all directories consuming megabytes or gigabytes of space in your current directory (HOME in this example). The list is sorted in descending order from largest to smallest files/directories.
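+
+To drill down further without listing individual files, the scan depth can be limited; a minimal sketch assuming GNU du:
+
+```console
+$ du -h --max-depth=2 . | sort -hr | head -20
+```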
 
 To have a better understanding of the previous commands, read the man pages:
 
@@ -187,17 +187,17 @@ The workspace is backed up, such that it can be restored in case of catastrophic
 | HOME workspace    |                |
 | ----------------- | -------------- |
 | Accesspoint       | /home/username |
-| Capacity          | 0.5 PB         |
-| Throughput        | 6 GB/s         |
-| User space quota  | 250 GB         |
-| User inodes quota | 500 k          |
+| Capacity          | 500TB          |
+| Throughput        | 6GB/s          |
+| User space quota  | 250GB          |
+| User inodes quota | 500K           |
 | Protocol          | NFS, 2-Tier    |
 
 ### Scratch
 
 The SCRATCH is realized as Lustre parallel file system and is available from all login and computational nodes. There are 54 OSTs dedicated for the SCRATCH file system.
 
-Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 10 m inodes and 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire file system and deny service to other users. Should 100 TB of space or 10 m inodes prove insufficient, contact [support][d], the quota may be lifted upon request.
+Accessible capacity is 1.6PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 10M inodes and 100TB per user. The purpose of this quota is to prevent runaway programs from filling the entire file system and denying service to other users. Should 100TB of space or 10M inodes prove insufficient, contact [support][d]; the quota may be lifted upon request.
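+
+Current usage against these quotas is reported by `it4i-disk-usage` above; it can also be queried from Lustre directly, for example (assuming the standard Lustre `lfs` client tool is available on the login nodes):
+
+```console
+$ lfs quota -u $USER /scratch
+```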
 
 #### Work
 
@@ -233,19 +233,19 @@ The TEMP workspace resides on SCRATCH file system. The TEMP workspace accesspoin
   </tr>
   <tr>
     <td>Capacity</td>
-    <td colspan="2" style="vertical-align : middle;text-align:center;">1.6 PB</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">1.6PB</td>
   </tr>
   <tr>
     <td>Throughput</td>
-    <td colspan="2" style="vertical-align : middle;text-align:center;">30 GB/s</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">30GB/s</td>
   </tr>
   <tr>
     <td>User space quota</td>
-    <td colspan="2" style="vertical-align : middle;text-align:center;">100 TB</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">100TB</td>
   </tr>
   <tr>
     <td>User inodes quota</td>
-    <td colspan="2" style="vertical-align : middle;text-align:center;">10 M</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">10M</td>
   </tr>
   <tr>
     <td>Number of OSTs</td>
@@ -281,8 +281,8 @@ It is not recommended to allocate large amount of memory and use large amount of
 | ----------- | ------------------------------------------------------------------------------------------------------- |
 | Mountpoint  | /ramdisk                                                                                                |
 | Accesspoint | /ramdisk/$PBS_JOBID                                                                                     |
-| Capacity    | 110 GB                                                                                                  |
-| Throughput  | over 1.5 GB/s write, over 5 GB/s read, single thread, over 10 GB/s write, over 50 GB/s read, 16 threads |
+| Capacity    | 110GB                                                                                                   |
+| Throughput  | over 1.5GB/s write, over 5GB/s read (single thread); over 10GB/s write, over 50GB/s read (16 threads)  |
 | User quota  | none                                                                                                    |
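+
+A minimal sketch of staging data through the node-local RAM disk inside a job; `my_app`, `input.dat`, and `output.dat` are placeholder names:
+
+```console
+$ cp $PBS_O_WORKDIR/input.dat /ramdisk/$PBS_JOBID/
+$ ./my_app /ramdisk/$PBS_JOBID/input.dat /ramdisk/$PBS_JOBID/output.dat
+$ cp /ramdisk/$PBS_JOBID/output.dat $PBS_O_WORKDIR/
+```
+
+Since the RAM disk is purged when the job ends, any results must be copied back before the job finishes.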
 
 ### Global RAM Disk
@@ -329,8 +329,8 @@ within a job.
 | ------------------ | --------------------------------------------------------------------------|
 | Mountpoint         | /mnt/global_ramdisk                                                       |
 | Accesspoint        | /mnt/global_ramdisk                                                       |
-| Capacity           | (N*110) GB                                                                |
-| Throughput         | 3*(N+1) GB/s, 2 GB/s single POSIX thread                                  |
+| Capacity           | (N*110)GB                                                                 |
+| Throughput         | 3*(N+1)GB/s, 2GB/s single POSIX thread                                    |
 | User quota         | none                                                                      |
 
 N = number of compute nodes in the job.
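+
+For example, a job spanning N = 4 compute nodes sees a global RAM disk of roughly (4*110)GB = 440GB with an aggregate throughput of about 3*(4+1)GB/s = 15GB/s.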
@@ -352,9 +352,9 @@ N = number of compute nodes in the job.
     <td>/home</td>
     <td>home directory</td>
     <td>NFS, 2-Tier</td>
-    <td>0.5 PB</td>
-    <td>6 GB/s</td>
-    <td>250&nbsp;GB / 500&nbsp;k</td>
+    <td>500TB</td>
+    <td>6GB/s</td>
+    <td>250GB / 500K</td>
     <td>Compute and login nodes</td>
     <td>backed up</td>
   </tr>
@@ -362,9 +362,9 @@ N = number of compute nodes in the job.
     <td style="background-color: #D3D3D3;">/scratch/work</td>
     <td style="background-color: #D3D3D3;">large project files</td>
     <td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">Lustre</td>
-    <td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">1.69 PB</td>
-    <td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">30 GB/s</td>
-    <td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">100&nbsp;TB / 10&nbsp;M</td>
+    <td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">1.69PB</td>
+    <td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">30GB/s</td>
+    <td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">100TB / 10M</td>
     <td style="background-color: #D3D3D3;">Compute and login nodes</td>
     <td style="background-color: #D3D3D3;">none</td>
   </tr>
@@ -378,8 +378,8 @@ N = number of compute nodes in the job.
     <td>/ramdisk</td>
     <td>job temporary data, node local</td>
     <td>tmpfs</td>
-    <td>110 GB</td>
-    <td>90 GB/s</td>
+    <td>110GB</td>
+    <td>90GB/s</td>
     <td>none / none</td>
     <td>Compute nodes, node local</td>
     <td>purged after job ends</td>
@@ -388,8 +388,8 @@ N = number of compute nodes in the job.
     <td style="background-color: #D3D3D3;">/mnt/global_ramdisk</td>
     <td style="background-color: #D3D3D3;">job temporary data</td>
     <td style="background-color: #D3D3D3;">BeeGFS</td>
-    <td style="background-color: #D3D3D3;">(N*110) GB</td>
-    <td style="background-color: #D3D3D3;">3*(N+1) GB/s</td>
+    <td style="background-color: #D3D3D3;">(N*110)GB</td>
+    <td style="background-color: #D3D3D3;">3*(N+1)GB/s</td>
     <td style="background-color: #D3D3D3;">none / none</td>
     <td style="background-color: #D3D3D3;">Compute nodes, job shared</td>
     <td style="background-color: #D3D3D3;">purged after job ends</td>
@@ -437,10 +437,11 @@ First, create the mount point:
 $ mkdir cesnet
 ```
 
-Mount the storage. Note that you can choose among the ssh.du1.cesnet.cz (Plzen), ssh.du2.cesnet.cz (Jihlava), ssh.du3.cesnet.cz (Brno) Mount tier1_home **(only 5120M !)**:
+Mount the storage. Note that you can choose between ssh.du4.cesnet.cz (Ostrava) and ssh.du5.cesnet.cz (Jihlava).
+Mount tier1_home **(only 5120M!)**:
 
 ```console
-$ sshfs username@ssh.du1.cesnet.cz:. cesnet/
+$ sshfs username@ssh.du4.cesnet.cz:. cesnet/
 ```
 
 For easy future access, install your public key:
@@ -452,7 +453,7 @@ $ cp .ssh/id_rsa.pub cesnet/.ssh/authorized_keys
 Mount tier1_cache_tape for the Storage VO:
 
 ```console
-$ sshfs username@ssh.du1.cesnet.cz:/cache_tape/VO_storage/home/username cesnet/
+$ sshfs username@ssh.du4.cesnet.cz:/cache_tape/VO_storage/home/username cesnet/
 ```
 
 View the archive, copy the files and directories in and out:
@@ -483,18 +484,18 @@ More about Rsync [here][j].
 Transfer large files to/from CESNET storage, assuming membership in the Storage VO:
 
 ```console
-$ rsync --progress datafile username@ssh.du1.cesnet.cz:VO_storage-cache_tape/.
-$ rsync --progress username@ssh.du1.cesnet.cz:VO_storage-cache_tape/datafile .
+$ rsync --progress datafile username@ssh.du4.cesnet.cz:VO_storage-cache_tape/.
+$ rsync --progress username@ssh.du4.cesnet.cz:VO_storage-cache_tape/datafile .
 ```
 
 Transfer large directories to/from CESNET storage, assuming membership in the Storage VO:
 
 ```console
-$ rsync --progress -av datafolder username@ssh.du1.cesnet.cz:VO_storage-cache_tape/.
-$ rsync --progress -av username@ssh.du1.cesnet.cz:VO_storage-cache_tape/datafolder .
+$ rsync --progress -av datafolder username@ssh.du4.cesnet.cz:VO_storage-cache_tape/.
+$ rsync --progress -av username@ssh.du4.cesnet.cz:VO_storage-cache_tape/datafolder .
 ```
 
-Transfer rates of about 28 MB/s can be expected.
+Transfer rates of about 28MB/s can be expected.
 
 [1]: #home
 [2]: #shared-filesystems