From 5fda8bddbfc994f3df2527dd850879fa412e807a Mon Sep 17 00:00:00 2001
From: Jan Siwiec <jan.siwiec@vsb.cz>
Date: Wed, 3 Mar 2021 09:00:16 +0100
Subject: [PATCH] converted total capacity from TiB to TB

---
 docs.it4i/barbora/storage.md | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/docs.it4i/barbora/storage.md b/docs.it4i/barbora/storage.md
index 59a8a0e89..e2b54cdce 100644
--- a/docs.it4i/barbora/storage.md
+++ b/docs.it4i/barbora/storage.md
@@ -110,7 +110,7 @@ The filesystem is backed up, so that it can be restored in case of a catastrophi
 | HOME filesystem      |                 |
 | -------------------- | --------------- |
 | Accesspoint          | /home/username  |
-| Capacity             | 26TB            |
+| Capacity             | 28TB            |
 | Throughput           | 1GB/s           |
 | User space quota     | 25GB            |
 | User inodes quota    | 500K            |
@@ -138,12 +138,12 @@ The SCRATCH filesystem is realized as Lustre parallel filesystem and is availabl
 | SCRATCH filesystem   |           |
 | -------------------- | --------- |
 | Mountpoint           | /scratch  |
-| Capacity             | 282TB     |
-| Throughput           | 5GB/s     |
-| Throughput [Burst]   | 38GB/s    |
-| User space quota     | 10TB      |
-| User inodes quota    | 10M       |
-| Default stripe size  | 1MB       |
+| Capacity             | 310TB     |
+| Throughput           | 5GB/s     |
+| Throughput [Burst]   | 38GB/s    |
+| User space quota     | 10TB      |
+| User inodes quota    | 10M       |
+| Default stripe size  | 1MB       |
 | Default stripe count | 1         |
 | Number of OSTs       | 5         |
 
@@ -246,9 +246,9 @@ Each node is equipped with RAMDISK storage accessible at /tmp, /lscratch and /ra
 
 | Mountpoint | Usage                     | Protocol | Net Capacity   | Throughput                     | Limitations | Access                  | Services                        |
 | ---------- | ------------------------- | -------- | -------------- | ------------------------------ | ----------- | ----------------------- | ------------------------------- |
-| /home      | home directory            | NFS      | 26TiB          | 1GB/s                          | Quota 25GB  | Compute and login nodes | backed up                       |
-| /scratch   | scratch temoporary        | Lustre   | 282TiB         | 5GB/s, 30GB/s burst buffer     | Quota 10TB  | Compute and login nodes |files older 90 days autoremoved |
-| /lscratch  | local scratch ramdisk     | tmpfs    | 180GB          | 130GB/s                        | none        | Node local              | auto purged after job end       |
+| /home      | home directory            | NFS      | 28TB           | 1GB/s                          | Quota 25GB  | Compute and login nodes | backed up                       |
+| /scratch   | scratch temporary         | Lustre   | 310TB          | 5GB/s, 30GB/s burst buffer     | Quota 10TB  | Compute and login nodes | files older than 90 days autoremoved |
+| /lscratch  | local scratch ramdisk     | tmpfs    | 180GB          | 130GB/s                        | none        | Node local              | auto purged after job end       |
 
 ## CESNET Data Storage
 
-- 
GitLab
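
The new figures appear to be the old TiB values expressed in decimal terabytes (1 TiB = 2^40 bytes, 1 TB = 10^12 bytes), truncated to whole TB. A minimal sanity check of that conversion, separate from the patch itself:

```python
# Sanity check: old TiB capacities converted to decimal TB and truncated.
TIB = 2**40   # 1 tebibyte in bytes
TB = 10**12   # 1 terabyte in bytes

def tib_to_tb(tib):
    """Convert tebibytes to decimal terabytes."""
    return tib * TIB / TB

print(int(tib_to_tb(26)))   # HOME: 26 TiB -> 28 TB (28.587... truncated)
print(int(tib_to_tb(282)))  # SCRATCH: 282 TiB -> 310 TB (310.062... truncated)
```

Both results match the capacities introduced by the `+` lines above (28TB and 310TB).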