Commit a7a6a46c authored by Jan Siwiec

Merge branch 'sta03-master-patch-41230' into 'master'

Update storage.md

See merge request !331
parents ebb5688e 780436ce
Pipeline #20691 passed
@@ -9,7 +9,7 @@ For more information, see the [CESNET storage][6] section.
### HOME File System
The HOME filesystem is an HA cluster of two active-passive NFS servers. This filesystem contains users' home directories `/home/username`. Accessible capacity is 25 TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 25 GB per user. Should 25 GB prove insufficient, contact [support][d]; the quota may be lifted upon request.
The HOME filesystem is an HA cluster of two active-passive NFS servers. This filesystem contains users' home directories `/home/username`. Accessible capacity is 31 TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 25 GB per user. Should 25 GB prove insufficient, contact [support][d]; the quota may be lifted upon request.
!!! note
    The HOME filesystem is intended for preparation, evaluation, processing, and storage of data generated by active Projects.
@@ -18,14 +18,14 @@ The files on HOME filesystem will not be deleted until the end of the [user's li
The filesystem is backed up, so that it can be restored in case of a catastrophic failure resulting in significant data loss. However, this backup is not intended to restore old versions of user data or to restore (accidentally) deleted files.
| HOME filesystem | |
| -------------------- | --------------- |
| Accesspoint | /home/username |
| Capacity | 25 TB |
| Throughput | 1.2 GB/s |
| User space quota | 25 GB |
| User inodes quota | 500 k |
| Protocol | NFS |
| HOME filesystem | |
| -------------------- | ------------------------------ |
| Accesspoint | /home/username |
| Capacity | 31 TB |
| Throughput | 1.93 GB/s write, 3.1 GB/s read |
| User space quota | 25 GB |
| User inodes quota | 500 k |
| Protocol | NFS |
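
Current usage against the 25 GB space and 500 k inode quotas can be checked with standard Linux tools. The following is only a minimal sketch, assuming the `quota` utility is available on the login nodes; if it is not, `du` and `find` give a slower but reliable estimate:

```console
$ quota -s                  # summarized space and inode usage on quota-enabled filesystems
$ du -sh /home/$USER        # total size of the home directory
$ find /home/$USER | wc -l  # approximate number of inodes (files and directories) in use
```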
Configuration of the storage:
@@ -65,16 +65,16 @@ The SCRATCH filesystem is mounted in directory /scratch. Users may freely create
!!! warning
    Files on the SCRATCH filesystem that are **not accessed for more than 90 days** will be automatically **deleted**.
| SCRATCH filesystem | |
| -------------------- | --------- |
| Mountpoint | /scratch |
| Capacity | 1000 TB |
| Throughput | 1000 GB/s |
| User space quota | 9,3 TB |
| User inodes quota | 10 M |
| Default stripe size | 1 MB |
| Default stripe count | 1 |
| Protocol | Lustre |
| SCRATCH filesystem | |
| -------------------- | ---------------------------------- |
| Mountpoint | /scratch |
| Capacity | 1361 TB |
| Throughput | 730.9 GB/s write, 1198.3 GB/s read |
| User space quota | 9.3 TB |
| User inodes quota | 10 M |
| Default stripe size | 1 MB |
| Default stripe count | 1 |
| Protocol | Lustre |
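
New files inherit the default stripe size of 1 MB and stripe count of 1 unless a different layout is requested. Striping and quota usage can be inspected with the standard Lustre `lfs` client utility; a minimal sketch follows, where the directory name and the stripe count of 4 are illustrative values only:

```console
$ lfs getstripe /scratch/$USER/file.dat        # show the stripe size and stripe count of a file
$ lfs setstripe -S 1M -c 4 /scratch/$USER/dir  # new files created in dir are striped over 4 OSTs
$ lfs quota -u $USER /scratch                  # usage against the 9.3 TB space and 10 M inode quotas
```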
Configuration of the storage:
@@ -124,8 +124,8 @@ Each node is equipped with local /tmp directory of few GB capacity. The /tmp dir
| Mountpoint | Usage | Protocol | Net Capacity | Throughput | Limitations | Access | Services | |
| ---------- | ------------------------- | -------- | -------------- | ---------- | ----------- | ----------------------- | --------------------------- | ------ |
| /home | home directory | NFS | 25 TB | 1.2 GB/s | Quota 25 GB | Compute and login nodes | backed up | |
| /scratch | cluster shared jobs' data | Lustre | 1000 TB | 1000 GB/s | Quota 9.3 TB | Compute and login nodes | files older than 90 days removed | |
| /home | home directory | NFS | 31 TB | 1.93 GB/s write, 3.1 GB/s read | Quota 25 GB | Compute and login nodes | backed up | |
| /scratch | cluster shared jobs' data | Lustre | 1361 TB | 730.9 GB/s write, 1198.3 GB/s read | Quota 9.3 TB | Compute and login nodes | files older than 90 days removed | |
| /tmp | local temporary files | local | ------ | ------- | none | Compute / login nodes | auto purged | |
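
Because /tmp is local to each node and is automatically purged, it is suitable only for temporary files that do not need to outlive the job. A minimal sketch of that pattern, where the `myapp` binary and file names are illustrative only:

```console
$ WORKDIR=$(mktemp -d /tmp/$USER.XXXXXX)   # private temporary directory on the node-local disk
$ cp /scratch/$USER/input.dat "$WORKDIR"/  # stage input from the shared SCRATCH filesystem
$ cd "$WORKDIR" && myapp input.dat         # run against the fast local storage
$ cp output.dat /scratch/$USER/            # copy results back before the job ends
$ rm -rf "$WORKDIR"                        # clean up; /tmp is purged automatically anyway
```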
[1]: #home-file-system