Commit 5fda8bdd authored by Jan Siwiec

converted total capacity from TiB to TB

parent 311e25c1
```diff
@@ -110,7 +110,7 @@ The filesystem is backed up, so that it can be restored in case of a catastrophi
 | HOME filesystem   |                |
 | ----------------- | -------------- |
 | Accesspoint       | /home/username |
-| Capacity          | 26TB           |
+| Capacity          | 28TB           |
 | Throughput        | 1GB/s          |
 | User space quota  | 25GB           |
 | User inodes quota | 500K           |
```
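The new figures follow from converting binary tebibytes to decimal terabytes, with the result truncated to whole TB (matching the commit's 26 TiB → 28 TB and 282 TiB → 310 TB). A quick arithmetic check:

```python
def tib_to_tb(tib: float) -> float:
    """Convert tebibytes (2**40 bytes) to decimal terabytes (10**12 bytes)."""
    return tib * 2**40 / 10**12

# 26 TiB is roughly 28.6 TB; the table truncates to 28TB
print(int(tib_to_tb(26)))   # -> 28
# 282 TiB is roughly 310.1 TB; the table truncates to 310TB
print(int(tib_to_tb(282)))  # -> 310
```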
```diff
@@ -138,12 +138,12 @@ The SCRATCH filesystem is realized as Lustre parallel filesystem and is availabl
 | SCRATCH filesystem   |          |
 | -------------------- | -------- |
 | Mountpoint           | /scratch |
-| Capacity             | 282TB    |
+| Capacity             | 310TB    |
 | Throughput           | 5GB/s    |
 | Throughput [Burst]   | 38GB/s   |
 | User space quota     | 10TB     |
 | User inodes quota    | 10M      |
 | Default stripe size  | 1MB      |
 | Default stripe count | 1        |
 | Number of OSTs       | 5        |
```
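The stripe size, stripe count, and OST figures above are Lustre striping defaults; they can be inspected or overridden per directory with the standard `lfs` tool. A sketch (the `/scratch/project/example` path is hypothetical, and widening the stripe count is an illustration, not site policy):

```shell
# Show the current striping of a directory on the Lustre /scratch filesystem
lfs getstripe /scratch/project/example

# Stripe new files in this directory across all 5 OSTs with a 1MB stripe size
# (-c = stripe count, -S = stripe size); useful for large-file parallel I/O
lfs setstripe -c 5 -S 1M /scratch/project/example
```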
```diff
@@ -246,9 +246,9 @@ Each node is equipped with RAMDISK storage accessible at /tmp, /lscratch and /ra
 | Mountpoint | Usage                 | Protocol | Net Capacity | Throughput                 | Limitations | Access                  | Services                        |
 | ---------- | --------------------- | -------- | ------------ | -------------------------- | ----------- | ----------------------- | ------------------------------- |
-| /home      | home directory        | NFS      | 26TiB        | 1GB/s                      | Quota 25GB  | Compute and login nodes | backed up                       |
-| /scratch   | scratch temporary     | Lustre   | 282TiB       | 5GB/s, 30GB/s burst buffer | Quota 10TB  | Compute and login nodes | files older 90 days autoremoved |
+| /home      | home directory        | NFS      | 28TB         | 1GB/s                      | Quota 25GB  | Compute and login nodes | backed up                       |
+| /scratch   | scratch temporary     | Lustre   | 310TB        | 5GB/s, 30GB/s burst buffer | Quota 10TB  | Compute and login nodes | files older 90 days autoremoved |
 | /lscratch  | local scratch ramdisk | tmpfs    | 180GB        | 130GB/s                    | none        | Node local              | auto purged after job end       |

 ## CESNET Data Storage
```