The workspace is backed up, such that it can be restored in case of a catastrophic failure resulting in significant data loss. This backup, however, is not intended to restore old versions of user data or to restore (accidentally) deleted files.
| HOME workspace    |                |
| ----------------- | -------------- |
| Accesspoint       | /home/username |
| Capacity          | 0.5 PB         |
| Throughput        | 6 GB/s         |
| User space quota  | 250 GB         |
| User inodes quota | 500 k          |
| Protocol          | NFS, 2-Tier    |
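To see how much of the space and inode quota your home directory currently consumes, you can measure it directly with standard GNU tools (a minimal sketch; a site-specific quota reporting command may also exist, but is not assumed here):

```console
$ du -sh /home/$USER            # space used, to compare against the 250 GB quota
$ du -s --inodes /home/$USER    # number of inodes (files and directories) used, limit is 500 k
```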
### Scratch

The SCRATCH is realized as a Lustre parallel file system and is available from all login and computational nodes. The default stripe size is 1 MB and the default stripe count is 1. There are 54 OSTs dedicated to the SCRATCH file system.

!!! note
    Setting the stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
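Stripe settings can be inspected and changed per directory with the standard Lustre `lfs` utility; newly created files inherit the directory's settings. The directory and the values below are only illustrative:

```console
$ lfs getstripe /scratch/work/user/$USER/mydir            # show current stripe size and count
$ lfs setstripe -S 4M -c 8 /scratch/work/user/$USER/mydir # stripe new files in 4 MB chunks across 8 OSTs
```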
Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 10 m inodes and 100 TB per user. The purpose of these quotas is to prevent runaway programs from filling the entire file system and denying service to other users. If 100 TB of space or 10 m inodes prove insufficient for a particular user, contact [support][d]; the quota may be lifted upon request.
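Current usage against the per-user quota can be checked with the Lustre quota command (a sketch; `$USER` stands for your login name):

```console
$ lfs quota -u $USER /scratch    # block and inode usage and limits for your user
```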
#### Work

The WORK workspace resides on the SCRATCH file system. Users may create subdirectories and files in the directories **/scratch/work/user/username** and **/scratch/work/project/projectid**. The /scratch/work/user/username directory is private to the user, much like the home directory. The /scratch/work/project/projectid directory is accessible to all users involved in the project projectid.
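For example, a private working directory and a directory shared with the project can be created like this (`myproject` and `projectid` are placeholders):

```console
$ mkdir -p /scratch/work/user/$USER/myproject       # private to you
$ mkdir -p /scratch/work/project/projectid/shared   # accessible to all users involved in projectid
```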
!!! note
    The WORK workspace is intended to store users' project data as well as for high performance access to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up.

    Files on the WORK file system are **persistent** (not automatically deleted) throughout the duration of the project.

| WORK workspace | |
#### Temp

The TEMP workspace resides on the SCRATCH file system. The TEMP workspace accesspoint is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK.

!!! note
    The TEMP workspace is intended for temporary scratch data generated during the calculation, as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory.
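A typical pattern for an I/O intensive job is to create a unique directory on TEMP, run from there, and copy the results back to persistent storage afterwards. The sketch below assumes a PBS-style batch system (for the `$PBS_JOBID` variable); the solver name, file names, and project directory are placeholders:

```bash
#!/bin/bash
# per-job scratch directory on the TEMP workspace
SCRDIR=/scratch/temp/$USER/$PBS_JOBID
mkdir -p "$SCRDIR"
cd "$SCRDIR"

# stage input from persistent storage, run the I/O intensive part here
cp /scratch/work/user/$USER/myproject/input.dat .
./my_solver input.dat > output.dat

# copy results back and clean up the temporary directory
cp output.dat /scratch/work/user/$USER/myproject/
rm -rf "$SCRDIR"
```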
!!! warning
    Files on the TEMP file system that are **not accessed for more than 90 days** will be automatically **deleted**.
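To see which of your files on TEMP have not been accessed recently and are therefore approaching automatic deletion, you can search by access time (the 80-day threshold is only an example):

```console
$ find /scratch/temp/$USER -type f -atime +80
```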