Commit 311e25c1 authored by Jan Siwiec

Update storage.md

parent 0fa8f896
4 merge requests:
- !368 Update prace.md to document the change from qprace to qprod as the default...
- !367 Update prace.md to document the change from qprace to qprod as the default...
- !366 Update prace.md to document the change from qprace to qprod as the default...
- !323 extended-acls-storage-section
@@ -110,10 +110,10 @@ The filesystem is backed up, so that it can be restored in case of a catastrophi
 | HOME filesystem      |                |
 | -------------------- | -------------- |
 | Accesspoint          | /home/username |
-| Capacity             | 26 TB          |
-| Throughput           | 1 GB/s         |
-| User space quota     | 24 GB          |
-| User inodes quota    | 500 k          |
+| Capacity             | 26TB           |
+| Throughput           | 1GB/s          |
+| User space quota     | 25GB           |
+| User inodes quota    | 500K           |
 | Protocol             | NFS            |

 ### SCRATCH File System
@@ -138,12 +138,12 @@ The SCRATCH filesystem is realized as Lustre parallel filesystem and is availabl
 | SCRATCH filesystem   |          |
 | -------------------- | -------- |
 | Mountpoint           | /scratch |
-| Capacity             | 282 TB   |
-| Throughput           | 5 GB/s   |
-| Throughput [Burst]   | 38 GB/s  |
-| User space quota     | 9,3 TB   |
-| User inodes quota    | 10 M     |
-| Default stripe size  | 1 MB     |
+| Capacity             | 282TB    |
+| Throughput           | 5GB/s    |
+| Throughput [Burst]   | 38GB/s   |
+| User space quota     | 10TB     |
+| User inodes quota    | 10M      |
+| Default stripe size  | 1MB      |
 | Default stripe count | 1        |
 | Number of OSTs       | 5        |
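The default stripe size and stripe count shown above are the Lustre defaults applied to newly created files on /scratch; they can be inspected, and overridden per file or directory, with the standard `lfs` utility. A minimal sketch under that assumption (the directory `/scratch/myrun` is a hypothetical example, not a path from the documentation):

```console
$ lfs getstripe /scratch/myrun             # hypothetical directory: show its current stripe count and size
$ lfs setstripe -c 4 -S 1M /scratch/myrun  # stripe new files in it across 4 OSTs with a 1MB stripe size
```

Larger stripe counts generally help large sequential I/O, while small files are usually best left at the default single stripe.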
@@ -162,22 +162,24 @@ $ it4i-disk-usage
 Example:

 ```console
-$ it4i-disk-usage -h
+$ it4i-disk-usage -H
 # Using human-readable format
-# Using power of 1024 for space
+# Using power of 1000 for space
 # Using power of 1000 for entries

 Filesystem: /home
-Space used: 112G
-Space limit: 238G
-Entries: 15k
-Entries limit: 500k
+Space used: 11GB
+Space limit: 25GB
+Entries: 15K
+Entries limit: 500K
+# based on filesystem quota

 Filesystem: /scratch
-Space used: 0
-Space limit: 93T
-Entries: 0
-Entries limit: 0
+Space used: 5TB
+Space limit: 10TB
+Entries: 22K
+Entries limit: 10M
+# based on Lustre quota
 ```

 In this example, we view the current size limits and space occupied on the /home and /scratch filesystems for the particular user executing the command.
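For reference, the quotas that `it4i-disk-usage` summarizes can also be queried directly: the NFS quota on /home with the standard `quota` command (provided the server exports quota information) and the Lustre quota on /scratch with `lfs quota`. A brief, illustrative sketch:

```console
$ quota -s                      # user quota on the NFS-backed /home, human-readable units
$ lfs quota -u $USER /scratch   # Lustre user quota on /scratch
```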
@@ -244,9 +246,9 @@ Each node is equipped with RAMDISK storage accessible at /tmp, /lscratch and /ra
 | Mountpoint | Usage                 | Protocol | Net Capacity | Throughput                   | Limitations | Access                  | Services                             |
 | ---------- | --------------------- | -------- | ------------ | ---------------------------- | ----------- | ----------------------- | ------------------------------------ |
-| /home      | home directory        | NFS      | 26 TiB       | 1 GB/s                       | Quota 25GB  | Compute and login nodes | backed up                            |
-| /scratch   | scratch temporary     | Lustre   | 282 TiB      | 5 GB/s, 30 GB/s burst buffer | Quota 9.3TB | Compute and login nodes | files older than 90 days autoremoved |
-| /lscratch  | local scratch ramdisk | tmpfs    | 180 GB       | 130 GB/s                     | none        | Node local              | auto purged after job end            |
+| /home      | home directory        | NFS      | 26TiB        | 1GB/s                        | Quota 25GB  | Compute and login nodes | backed up                            |
+| /scratch   | scratch temporary     | Lustre   | 282TiB       | 5GB/s, 30GB/s burst buffer   | Quota 10TB  | Compute and login nodes | files older than 90 days autoremoved |
+| /lscratch  | local scratch ramdisk | tmpfs    | 180GB        | 130GB/s                      | none        | Node local              | auto purged after job end            |

 ## CESNET Data Storage
@@ -287,7 +289,7 @@ First, create the mount point:
 $ mkdir cesnet
 ```

-Mount the storage. Note that you can choose between `ssh.du4.cesnet.cz` (Ostrava) and `ssh.du5.cesnet.cz` (Jihlava). Mount tier1_home **(only 5120 MB!)**:
+Mount the storage. Note that you can choose between `ssh.du4.cesnet.cz` (Ostrava) and `ssh.du5.cesnet.cz` (Jihlava). Mount tier1_home **(only 5120MB!)**:

 ```console
 $ sshfs username@ssh.du4.cesnet.cz:. cesnet/
......
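Once mounted, the CESNET storage behaves like an ordinary directory, so standard tools such as `rsync` or `cp` can be used on it, and the SSHFS mount is released with `fusermount`. A short sketch with hypothetical file names:

```console
$ rsync -av results/ cesnet/results/   # hypothetical data: copy to the mounted CESNET storage
$ fusermount -u cesnet                 # unmount when finished
```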