diff --git a/docs.it4i/karolina/storage.md b/docs.it4i/karolina/storage.md
index 206fb14be41360ada987652f06054b14ee486324..1f6d95d4cd4ca3a9c57cfd81a3f9aa6372a61505 100644
--- a/docs.it4i/karolina/storage.md
+++ b/docs.it4i/karolina/storage.md
@@ -9,7 +9,7 @@ For more information, see the [CESNET storage][6] section.
 
 ### HOME File System
 
-The HOME filesystem is an HA cluster of two active-passive NFS servers. This filesystem contains users' home directories `/home/username`. Accessible capacity is 25 TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 25 GB per user. Should 25 GB prove insufficient, contact [support][d], the quota may be lifted upon request.
+The HOME filesystem is an HA cluster of two active-passive NFS servers. This filesystem contains users' home directories `/home/username`. Accessible capacity is 31 TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 25 GB per user. Should 25 GB prove insufficient, contact [support][d]; the quota may be lifted upon request.
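+
+To see how much of the quota is currently used, the standard Linux quota tools can be queried from a login node (an illustrative sketch; the exact reporting tools available on the cluster may differ):
+
+```console
+$ quota -s                # summarize per-user space and inode usage on HOME
+$ du -sh /home/$USER      # total size of your home directory
+```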
 
 !!! note
     The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active Projects.
@@ -18,14 +18,14 @@ The files on HOME filesystem will not be deleted until the end of the [user's li
 
 The filesystem is backed up, so that it can be restored in case of a catastrophic failure resulting in significant data loss. However, this backup is not intended to restore old versions of user data or to restore (accidentally) deleted files.
 
-| HOME filesystem      |                 |
-| -------------------- | --------------- |
-| Accesspoint          | /home/username  |
-| Capacity             | 25 TB           |
-| Throughput           | 1.2 GB/s        |
-| User space quota     | 25 GB           |
-| User inodes quota    | 500 k           |
-| Protocol             | NFS             |
+| HOME filesystem      |                                |
+| -------------------- | ------------------------------ |
+| Accesspoint          | /home/username                 |
+| Capacity             | 31 TB                          |
+| Throughput           | 1.93 GB/s write, 3.1 GB/s read |
+| User space quota     | 25 GB                          |
+| User inodes quota    | 500 k                          |
+| Protocol             | NFS                            |
 
  Configuration of the storage:
 
@@ -65,16 +65,16 @@ The SCRATCH filesystem is mounted in directory /scratch. Users may freely create
 !!! warning
     Files on the SCRATCH filesystem that are **not accessed for more than 90 days** will be automatically **deleted**.
 
-| SCRATCH filesystem   |           |
-| -------------------- | --------- |
-| Mountpoint           | /scratch  |
-| Capacity             | 1000 TB   |
-| Throughput           | 1000 GB/s |
-| User space quota     | 9,3 TB    |
-| User inodes quota    | 10 M      |
-| Default stripe size  | 1 MB      |
-| Default stripe count | 1         |
-| Protocol             | Lustre    |
+| SCRATCH filesystem   |                                    |
+| -------------------- | ---------------------------------- |
+| Mountpoint           | /scratch                           |
+| Capacity             | 1361 TB                            |
+| Throughput           | 730.9 GB/s write, 1198.3 GB/s read |
+| User space quota     | 9.3 TB                             |
+| User inodes quota    | 10 M                               |
+| Default stripe size  | 1 MB                               |
+| Default stripe count | 1                                  |
+| Protocol             | Lustre                             |
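+
+Striping of individual files or directories can be inspected and adjusted with the Lustre `lfs` utility, which also reports quota usage on /scratch. The commands below are an illustrative sketch (the directory path is hypothetical); standard Lustre client tools are assumed:
+
+```console
+$ lfs getstripe /scratch/$USER/mydir            # show current stripe size and count
+$ lfs setstripe -S 1M -c 4 /scratch/$USER/mydir # new files in mydir will be striped over 4 OSTs
+$ lfs quota -u $USER /scratch                   # check space and inode quota usage
+```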
 
 Configuration of the storage:
 
@@ -124,8 +124,8 @@ Each node is equipped with local /tmp directory of few GB capacity. The /tmp dir
 
 | Mountpoint | Usage                     | Protocol | Net Capacity   | Throughput | Limitations | Access                  | Services                    |        |
 | ---------- | ------------------------- | -------- | -------------- | ---------- | ----------- | ----------------------- | --------------------------- | ------ |
-| /home      | home directory            | NFS      | 25 TB        | 1.2 GB/s     | Quota 25 GB | Compute and login nodes | backed up                   |        |
-| /scratch   | cluster shared jobs' data | Lustre   | 1000 TB        | 1000 GB/s  | Quota 9.3 TB| Compute and login nodes | files older 90 days removed |        |
+| /home      | home directory            | NFS      | 31 TB          | 1.93 GB/s write, 3.1 GB/s read      | Quota 25 GB  | Compute and login nodes | backed up                   |        |
+| /scratch   | cluster shared jobs' data | Lustre   | 1361 TB        | 730.9 GB/s write, 1198.3 GB/s read  | Quota 9.3 TB | Compute and login nodes | files older 90 days removed |        |
 | /tmp       | local temporary files     | local    | ------   | -------   | none        | Compute / login nodes   | auto                        | purged |
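+
+The mountpoints and capacities listed above can be verified directly on a login node (illustrative only):
+
+```console
+$ df -h /home /scratch /tmp   # show size, usage, and free space of each mount
+```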
 
 [1]: #home-file-system