Commit 311e25c1 authored by Jan Siwiec

Update storage.md

parent 0fa8f896
@@ -110,10 +110,10 @@ The filesystem is backed up, so that it can be restored in case of a catastrophi
 | HOME filesystem   |                |
 | ----------------- | -------------- |
 | Accesspoint       | /home/username |
-| Capacity          | 26 TB          |
-| Throughput        | 1 GB/s         |
-| User space quota  | 24 GB          |
-| User inodes quota | 500 k          |
+| Capacity          | 26TB           |
+| Throughput        | 1GB/s          |
+| User space quota  | 25GB           |
+| User inodes quota | 500K           |
 | Protocol          | NFS            |

 ### SCRATCH File System
@@ -138,12 +138,12 @@ The SCRATCH filesystem is realized as Lustre parallel filesystem and is availabl
 | SCRATCH filesystem   |          |
 | -------------------- | -------- |
 | Mountpoint           | /scratch |
-| Capacity             | 282 TB   |
-| Throughput           | 5 GB/s   |
-| Throughput [Burst]   | 38 GB/s  |
-| User space quota     | 9,3 TB   |
-| User inodes quota    | 10 M     |
-| Default stripe size  | 1 MB     |
+| Capacity             | 282TB    |
+| Throughput           | 5GB/s    |
+| Throughput [Burst]   | 38GB/s   |
+| User space quota     | 10TB     |
+| User inodes quota    | 10M      |
+| Default stripe size  | 1MB      |
 | Default stripe count | 1        |
 | Number of OSTs       | 5        |
@@ -162,22 +162,24 @@ $ it4i-disk-usage
 Example:

 ```console
-$ it4i-disk-usage -h
+$ it4i-disk-usage -H
 # Using human-readable format
-# Using power of 1024 for space
+# Using power of 1000 for space
 # Using power of 1000 for entries
 Filesystem: /home
-Space used: 112G
-Space limit: 238G
-Entries: 15k
-Entries limit: 500k
+Space used: 11GB
+Space limit: 25GB
+Entries: 15K
+Entries limit: 500K
+# based on filesystem quota
 Filesystem: /scratch
-Space used: 0
-Space limit: 93T
-Entries: 0
-Entries limit: 0
+Space used: 5TB
+Space limit: 10TB
+Entries: 22K
+Entries limit: 10M
+# based on Lustre quota
 ```

 In this example, we view the current size limits and the space occupied on the /home and /scratch filesystems for the user executing the command.
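Output like the example above lends itself to simple alerting in job or login scripts. A minimal sketch of the idea, with the used/limit values hard-coded to match the example output (11GB used of a 25GB limit); in practice they would be parsed from `it4i-disk-usage`, and the 90% threshold is an arbitrary choice for illustration:

```shell
#!/bin/sh
# Hypothetical quota check: warn when /home space usage crosses 90% of quota.
# The numbers mirror the example output above; real scripts would parse them
# from `it4i-disk-usage`.
used_gb=11
limit_gb=25
pct=$(( used_gb * 100 / limit_gb ))   # integer percentage used
if [ "$pct" -ge 90 ]; then
    echo "WARNING: /home at ${pct}% of quota"
else
    echo "/home at ${pct}% of quota"
fi
```

With the example numbers this prints `/home at 44% of quota`.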
@@ -244,9 +246,9 @@ Each node is equipped with RAMDISK storage accessible at /tmp, /lscratch and /ra
 | Mountpoint | Usage                 | Protocol | Net Capacity | Throughput                   | Limitations | Access                  | Services                             |
 | ---------- | --------------------- | -------- | ------------ | ---------------------------- | ----------- | ----------------------- | ------------------------------------ |
-| /home      | home directory        | NFS      | 26 TiB       | 1 GB/s                       | Quota 25GB  | Compute and login nodes | backed up                            |
-| /scratch   | scratch temporary     | Lustre   | 282 TiB      | 5 GB/s, 30 GB/s burst buffer | Quota 9.3TB | Compute and login nodes | files older than 90 days autoremoved |
-| /lscratch  | local scratch ramdisk | tmpfs    | 180 GB       | 130 GB/s                     | none        | Node local              | auto purged after job end            |
+| /home      | home directory        | NFS      | 26TiB        | 1GB/s                        | Quota 25GB  | Compute and login nodes | backed up                            |
+| /scratch   | scratch temporary     | Lustre   | 282TiB       | 5GB/s, 30GB/s burst buffer   | Quota 10TB  | Compute and login nodes | files older than 90 days autoremoved |
+| /lscratch  | local scratch ramdisk | tmpfs    | 180GB        | 130GB/s                      | none        | Node local              | auto purged after job end            |
## CESNET Data Storage
@@ -287,7 +289,7 @@ First, create the mount point:
 ```console
 $ mkdir cesnet
 ```

-Mount the storage. Note that you can choose between `ssh.du4.cesnet.cz` (Ostrava) and `ssh.du5.cesnet.cz` (Jihlava). Mount tier1_home **(only 5120 MB!)**:
+Mount the storage. Note that you can choose between `ssh.du4.cesnet.cz` (Ostrava) and `ssh.du5.cesnet.cz` (Jihlava). Mount tier1_home **(only 5120MB!)**:

 ```console
 $ sshfs username@ssh.du4.cesnet.cz:. cesnet/
 ```
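The mount/transfer/unmount cycle can be scripted end to end. A minimal sketch, assuming the `cesnet` mount point and `ssh.du4.cesnet.cz` endpoint from above; `results/` is a hypothetical directory to copy, and `fusermount -u` is the standard FUSE unmount tool on Linux:

```shell
#!/bin/sh
# Sketch: mount CESNET tier1_home over sshfs, copy data, then unmount.
# Replace `username` with your CESNET login; `results/` is illustrative.
MNT="$HOME/cesnet"
mkdir -p "$MNT"                              # create the mount point
sshfs "username@ssh.du4.cesnet.cz:." "$MNT"  # mount the remote home
cp -r results/ "$MNT/"                       # example transfer
fusermount -u "$MNT"                         # release the FUSE mount
```

Unmounting when finished matters: a stale sshfs mount can hang shell sessions that touch the mount point after the network connection drops.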