From 311e25c12da9738ef8607972789c07adae8f5a07 Mon Sep 17 00:00:00 2001
From: Jan Siwiec <jan.siwiec@vsb.cz>
Date: Tue, 2 Mar 2021 13:55:26 +0100
Subject: [PATCH] Update storage.md

---
 docs.it4i/barbora/storage.md | 50 +++++++++++++++++++-----------------
 1 file changed, 26 insertions(+), 24 deletions(-)

diff --git a/docs.it4i/barbora/storage.md b/docs.it4i/barbora/storage.md
index 0ff51b5b3..59a8a0e89 100644
--- a/docs.it4i/barbora/storage.md
+++ b/docs.it4i/barbora/storage.md
@@ -110,10 +110,10 @@ The filesystem is backed up, so that it can be restored in case of a catastrophi
 | HOME filesystem      |                 |
 | -------------------- | --------------- |
 | Accesspoint          | /home/username  |
-| Capacity             | 26 TB           |
-| Throughput           | 1 GB/s          |
-| User space quota     | 24 GB           |
-| User inodes quota    | 500 k           |
+| Capacity             | 26TB            |
+| Throughput           | 1GB/s           |
+| User space quota     | 25GB            |
+| User inodes quota    | 500K            |
 | Protocol             | NFS             |
 
 ### SCRATCH File System
@@ -138,12 +138,12 @@ The SCRATCH filesystem is realized as Lustre parallel filesystem and is availabl
 | SCRATCH filesystem   |           |
 | -------------------- | --------- |
 | Mountpoint           | /scratch  |
-| Capacity             | 282 TB    |
-| Throughput           | 5 GB/s    |
-| Throughput [Burst]   | 38 GB/s   |
-| User space quota     | 9,3 TB    |
-| User inodes quota    | 10 M      |
-| Default stripe size  | 1 MB      |
+| Capacity             | 282TB     |
+| Throughput           | 5GB/s     |
+| Throughput [Burst]   | 38GB/s    |
+| User space quota     | 10TB      |
+| User inodes quota    | 10M       |
+| Default stripe size  | 1MB       |
 | Default stripe count | 1         |
 | Number of OSTs       | 5         |
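+
+Striping can be inspected and adjusted per directory with the standard Lustre `lfs` tool; a minimal sketch (the directory path is illustrative):
+
+```console
+$ lfs getstripe /scratch/$USER/mydir        # show current stripe count and size
+$ lfs setstripe -c 5 /scratch/$USER/mydir   # stripe new files across all 5 OSTs
+```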
 
@@ -162,22 +162,24 @@ $ it4i-disk-usage
 Example:
 
 ```console
-$ it4i-disk-usage -h
+$ it4i-disk-usage -H
 # Using human-readable format
-# Using power of 1024 for space
+# Using power of 1000 for space
 # Using power of 1000 for entries
 
 Filesystem:    /home
-Space used:    112G
-Space limit:   238G
-Entries:       15k
-Entries limit: 500k
+Space used:    11GB
+Space limit:   25GB
+Entries:       15K
+Entries limit: 500K
+# based on filesystem quota
 
 Filesystem:    /scratch
-Space used:    0
-Space limit:   93T
-Entries:       0
-Entries limit: 0
+Space used:    5TB
+Space limit:   10TB
+Entries:       22K
+Entries limit: 10M
+# based on Lustre quota
 ```
 
 In this example, we view current size limits and space occupied on the /home and /scratch filesystem, for a particular user executing the command.
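+
+The Lustre quota on /scratch can also be queried directly with the standard `lfs quota` command (a minimal sketch; the exact output format depends on the Lustre version):
+
+```console
+$ lfs quota -u $USER /scratch
+```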
@@ -244,9 +246,9 @@ Each node is equipped with RAMDISK storage accessible at /tmp, /lscratch and /ra
 
 | Mountpoint | Usage                     | Protocol | Net Capacity     | Throughput                     | Limitations | Access                   | Services                        |
 | ---------- | ------------------------- | -------- | --------------   | ------------------------------ | ----------- | -----------------------  | ------------------------------- |
-| /home      | home directory            | NFS      | 26 TiB           | 1 GB/s                         | Quota 25GB  | Compute and login nodes  | backed up                       |
-| /scratch   | scratch temoporary        | Lustre   | 282 TiB          | 5 GB/s, 30 GB/s burst buffer   | Quota 9.3TB | Compute and login nodes  |files older 90 days autoremoved |
-| /lscratch  | local scratch ramdisk     | tmpfs    | 180 GB           | 130 GB/s                       | none        | Node local               | auto purged after job end       |
+| /home      | home directory            | NFS      | 26TiB            | 1GB/s                          | Quota 25GB  | Compute and login nodes  | backed up                       |
+| /scratch   | scratch temporary         | Lustre   | 282TiB           | 5GB/s, 38GB/s burst buffer     | Quota 10TB  | Compute and login nodes  | files older than 90 days autoremoved |
+| /lscratch  | local scratch ramdisk     | tmpfs    | 180GB            | 130GB/s                        | none        | Node local               | auto purged after job end       |
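+
+A minimal sketch of staging data through the node-local /lscratch (the per-job directory /lscratch/$PBS_JOBID is an assumption; adjust to the actual path layout):
+
+```console
+$ cp input.dat /lscratch/$PBS_JOBID/     # stage input onto the node-local ramdisk
+$ cd /lscratch/$PBS_JOBID && ./solver    # run against fast local storage
+$ cp results.dat /home/$USER/            # copy results out before the job ends
+```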
 
 ## CESNET Data Storage
 
@@ -287,7 +289,7 @@ First, create the mount point:
 $ mkdir cesnet
 ```
 
-Mount the storage. Note that you can choose among `ssh.du4.cesnet.cz` (Ostrava) and `ssh.du5.cesnet.cz` (Jihlava). Mount tier1_home **(only 5120 MB!)**:
+Mount the storage. Note that you can choose among `ssh.du4.cesnet.cz` (Ostrava) and `ssh.du5.cesnet.cz` (Jihlava). Mount tier1_home **(only 5120MB!)**:
 
 ```console
 $ sshfs username@ssh.du4.cesnet.cz:. cesnet/
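+# When finished, unmount with fusermount (part of FUSE; assumes the
+# mount point created above):
+$ fusermount -u cesnet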
-- 
GitLab