diff --git a/docs.it4i/anselm/storage.md b/docs.it4i/anselm/storage.md
index 2fa8128b41d1f13992429a35319281a74c8a0759..dc1b4ae53d87d3dca6cd7359230d3dbd45fa699d 100644
--- a/docs.it4i/anselm/storage.md
+++ b/docs.it4i/anselm/storage.md
@@ -148,7 +148,7 @@ The SCRATCH filesystem is realized as Lustre parallel filesystem and is availabl
 | Mountpoint | /scratch |
 | Capacity | 146 TB |
 | Throughput | 6 GB/s |
-| User quota | 100 TB |
+| User space quota | 100 TB |
 | User inodes quota | 10 M |
 | Default stripe size | 1 MB |
 | Default stripe count | 1 |
@@ -303,13 +303,13 @@ Each node is equipped with local /tmp directory of few GB capacity. The /tmp dir

 ## Summary

-| Mountpoint | Usage | Protocol | Net Capacity | Throughput | Space/Inodes quota | Access | Services | |
-| ---------- | ------------------------- | -------- | -------------- | ---------- | ------------------ | ----------------------- | --------------------------- | ------ |
-| /home | home directory | Lustre | 320 TiB | 2 GB/s | 250 GB / 500 k | Compute and login nodes | backed up | |
-| /scratch | cluster shared jobs' data | Lustre | 146 TiB | 6 GB/s | 100 TB / 10 M | Compute and login nodes | files older 90 days removed | |
-| /lscratch | node local jobs' data | local | 330 GB | 100 MB/s | none / none | Compute nodes | purged after job ends | |
-| /ramdisk | node local jobs' data | local | 60, 90, 500 GB | 5-50 GB/s | none / none | Compute nodes | purged after job ends | |
-| /tmp | local temporary files | local | 9.5 GB | 100 MB/s | none / none | Compute and login nodes | auto | purged |
+| Mountpoint | Usage | Protocol | Net Capacity | Throughput | Space/Inodes quota | Access | Services | |
+| ---------- | ------------------------- | -------- | -------------- | ---------- | ------------------------ | ----------------------- | --------------------------- | ------ |
+| /home | home directory | Lustre | 320 TiB | 2 GB/s | 250 GB / 500 k | Compute and login nodes | backed up | |
+| /scratch | cluster shared jobs' data | Lustre | 146 TiB | 6 GB/s | 100 TB / 10 M | Compute and login nodes | files older than 90 days removed | |
+| /lscratch | node local jobs' data | local | 330 GB | 100 MB/s | none / none | Compute nodes | purged after job ends | |
+| /ramdisk | node local jobs' data | local | 60, 90, 500 GB | 5-50 GB/s | none / none | Compute nodes | purged after job ends | |
+| /tmp | local temporary files | local | 9.5 GB | 100 MB/s | none / none | Compute and login nodes | auto purged | |

 ## CESNET Data Storage

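The space and inode quotas listed in the tables above can be verified directly on a login node with the standard Lustre `lfs quota` command. A minimal check, assuming only that the login name is available in `$USER`:

```console
$ lfs quota -u $USER /scratch
$ lfs quota -u $USER /home
```

The output reports the used space and the number of files together with the corresponding limits, so both the space quota and the inodes quota can be confirmed at a glance.
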
diff --git a/docs.it4i/salomon/storage.md b/docs.it4i/salomon/storage.md
index f043973cb93002b4c74dc36840ada5c00da5b139..b6ed820f07ee1983b0de4a40d142fbed055bafff 100644
--- a/docs.it4i/salomon/storage.md
+++ b/docs.it4i/salomon/storage.md
@@ -133,7 +133,7 @@ Filesystem: /scratch
 Space used: 377G
 Space limit: 93T
 Entries: 14k
-Entries limit: 0
+Entries limit: 10m
 # based on Lustre quota

 Filesystem: /scratch
@@ -144,6 +144,8 @@ Entries: 14k
 Filesystem: /scratch/work
 Space used: 377G
 Entries: 14k
+Entries: 40k
+Entries limit: 1.0m
 # based on Robinhood

 Filesystem: /scratch/temp
@@ -235,42 +237,36 @@ The files on HOME will not be deleted until end of the [users lifecycle][10].

 The workspace is backed up, such that it can be restored in case of catasthropic failure resulting in significant data loss. This backup however is not intended to restore old versions of user data or to restore (accidentaly) deleted files.

-| HOME workspace | |
-| -------------- | -------------- |
-| Accesspoint | /home/username |
-| Capacity | 0.5 PB |
-| Throughput | 6 GB/s |
-| User quota | 250 GB |
-| Protocol | NFS, 2-Tier |
+| HOME workspace | |
+| ----------------- | -------------- |
+| Accesspoint | /home/username |
+| Capacity | 0.5 PB |
+| Throughput | 6 GB/s |
+| User space quota | 250 GB |
+| User inodes quota | 500 k |
+| Protocol | NFS, 2-Tier |

-### Work
+### Scratch

-The WORK workspace resides on SCRATCH file system. Users may create subdirectories and files in directories **/scratch/work/user/username** and **/scratch/work/project/projectid. **The /scratch/work/user/username is private to user, much like the home directory. The /scratch/work/project/projectid is accessible to all users involved in project projectid.
+The SCRATCH is realized as a Lustre parallel file system and is available from all login and computational nodes. Default stripe size is 1 MB, stripe count is 1. There are 54 OSTs dedicated for the SCRATCH file system.

 !!! note
-    The WORK workspace is intended to store users project data as well as for high performance access to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up.
+    Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.

-    Files on the WORK file system are **persistent** (not automatically deleted) throughout duration of the project.
+Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 10 m inodes and 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire file system and denying service to other users. If 100 TB of space or 10 m inodes should prove insufficient for a particular user, contact [support][d]; the quota may be lifted upon request.
+
+#### Work

-The WORK workspace is hosted on SCRATCH file system. The SCRATCH is realized as Lustre parallel file system and is available from all login and computational nodes. Default stripe size is 1 MB, stripe count is 1. There are 54 OSTs dedicated for the SCRATCH file system.
+The WORK workspace resides on the SCRATCH file system. Users may create subdirectories and files in the directories **/scratch/work/user/username** and **/scratch/work/project/projectid**. The /scratch/work/user/username directory is private to the user, much like the home directory. The /scratch/work/project/projectid directory is accessible to all users involved in the project projectid.

 !!! note
-    Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
+    The WORK workspace is intended to store users' project data as well as for high performance access to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up.

-| WORK workspace | |
-| -------------------- | --------------------------------------------------------- |
-| Accesspoints | /scratch/work/user/username, /scratch/work/user/projectid |
-| Capacity | 1.6 PB |
-| Throughput | 30 GB/s |
-| User quota | 100 TB |
-| Default stripe size | 1 MB |
-| Default stripe count | 1 |
-| Number of OSTs | 54 |
-| Protocol | Lustre |
+    Files on the WORK file system are **persistent** (not automatically deleted) throughout the duration of the project.

-### Temp
+#### Temp

-The TEMP workspace resides on SCRATCH file system. The TEMP workspace accesspoint is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire file system and deny service to other users. If 100 TB should prove as insufficient for particular user, contact [support][d], the quota may be lifted upon request.
+The TEMP workspace resides on the SCRATCH file system. The TEMP workspace accesspoint is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK.

 !!! note
     The TEMP workspace is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory.
@@ -280,21 +276,50 @@ The TEMP workspace resides on SCRATCH file system. The TEMP workspace accesspoin
 !!! warning
     Files on the TEMP file system that are **not accessed for more than 90 days** will be automatically **deleted**.

-The TEMP workspace is hosted on SCRATCH file system. The SCRATCH is realized as Lustre parallel file system and is available from all login and computational nodes. Default stripe size is 1 MB, stripe count is 1. There are 54 OSTs dedicated for the SCRATCH file system.
-
-!!! note
-    Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
-
-| TEMP workspace | |
-| -------------------- | ------------- |
-| Accesspoint | /scratch/temp |
-| Capacity | 1.6 PB |
-| Throughput | 30 GB/s |
-| User quota | 100 TB |
-| Default stripe size | 1 MB |
-| Default stripe count | 1 |
-| Number of OSTs | 54 |
-| Protocol | Lustre |
+<table>
+  <tr>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;"></td>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">WORK workspace</td>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">TEMP workspace</td>
+  </tr>
+  <tr>
+    <td style="vertical-align : middle">Accesspoints</td>
+    <td>/scratch/work/user/username,<br />/scratch/work/project/projectid</td>
+    <td>/scratch/temp</td>
+  </tr>
+  <tr>
+    <td>Capacity</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">1.6 PB</td>
+  </tr>
+  <tr>
+    <td>Throughput</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">30 GB/s</td>
+  </tr>
+  <tr>
+    <td>User space quota</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">100 TB</td>
+  </tr>
+  <tr>
+    <td>User inodes quota</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">10 M</td>
+  </tr>
+  <tr>
+    <td>Default stripe size</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">1 MB</td>
+  </tr>
+  <tr>
+    <td>Default stripe count</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">1</td>
+  </tr>
+  <tr>
+    <td>Number of OSTs</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">54</td>
+  </tr>
+  <tr>
+    <td>Protocol</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">Lustre</td>
+  </tr>
+</table>

 ## RAM Disk

@@ -368,21 +393,72 @@ within a job.
 | ------------------ | --------------------------------------------------------------------------|
 | Mountpoint | /mnt/global_ramdisk |
 | Accesspoint | /mnt/global_ramdisk |
-| Capacity | N*110 GB |
-| Throughput | 3*(N+1) GB/s, 2GB/s single POSIX thread |
+| Capacity | (N*110) GB |
+| Throughput | 3*(N+1) GB/s, 2 GB/s single POSIX thread |
 | User quota | none |

 N = number of compute nodes in the job.

 ## Summary

-| Mountpoint | Usage | Protocol | Net Capacity| Throughput | Limitations | Access | Service |
-| ------------------- | ------------------------------ | ----------- | ------------| -------------- | ------------ | --------------------------- | --------------------------- |
-| /home | home directory | NFS, 2-Tier | 0.5 PB | 6 GB/s | Quota 250GB | Compute and login nodes | backed up |
-| /scratch/work | large project files | Lustre | 1.69 PB | 30 GB/s | Quota | Compute and login nodes | none |
-| /scratch/temp | job temporary data | Lustre | 1.69 PB | 30 GB/s | Quota 100 TB | Compute and login nodes | files older 90 days removed |
-| /ramdisk | job temporary data, node local | tmpfs | 110GB | 90 GB/s | none | Compute nodes, node local | purged after job ends |
-| /mnt/global_ramdisk | job temporary data | BeeGFS | N*110GB | 3*(N+1) GB/s | none | Compute nodes, job shared | purged after job ends |
+<table>
+  <tr>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Mountpoint</td>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Usage</td>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Protocol</td>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Net Capacity</td>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Throughput</td>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Space/Inodes quota</td>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Access</td>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Service</td>
+  </tr>
+  <tr>
+    <td>/home</td>
+    <td>home directory</td>
+    <td>NFS, 2-Tier</td>
+    <td>0.5 PB</td>
+    <td>6 GB/s</td>
+    <td>250 GB / 500 k</td>
+    <td>Compute and login nodes</td>
+    <td>backed up</td>
+  </tr>
+  <tr>
+    <td style="background-color: #D3D3D3;">/scratch/work</td>
+    <td style="background-color: #D3D3D3;">large project files</td>
+    <td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">Lustre</td>
+    <td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">1.69 PB</td>
+    <td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">30 GB/s</td>
+    <td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">100 TB / 10 M</td>
+    <td style="background-color: #D3D3D3;">Compute and login nodes</td>
+    <td style="background-color: #D3D3D3;">none</td>
+  </tr>
+  <tr>
+    <td style="background-color: #D3D3D3;">/scratch/temp</td>
+    <td style="background-color: #D3D3D3;">job temporary data</td>
+    <td style="background-color: #D3D3D3;">Compute and login nodes</td>
+    <td style="background-color: #D3D3D3;">files older than 90 days removed</td>
+  </tr>
+  <tr>
+    <td>/ramdisk</td>
+    <td>job temporary data, node local</td>
+    <td>tmpfs</td>
+    <td>110 GB</td>
+    <td>90 GB/s</td>
+    <td>none / none</td>
+    <td>Compute nodes, node local</td>
+    <td>purged after job ends</td>
+  </tr>
+  <tr>
+    <td style="background-color: #D3D3D3;">/mnt/global_ramdisk</td>
+    <td style="background-color: #D3D3D3;">job temporary data</td>
+    <td style="background-color: #D3D3D3;">BeeGFS</td>
+    <td style="background-color: #D3D3D3;">(N*110) GB</td>
+    <td style="background-color: #D3D3D3;">3*(N+1) GB/s</td>
+    <td style="background-color: #D3D3D3;">none / none</td>
+    <td style="background-color: #D3D3D3;">Compute nodes, job shared</td>
+    <td style="background-color: #D3D3D3;">purged after job ends</td>
+  </tr>
+</table>

 N = number of compute nodes in the job.
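As a worked example of the global RAM disk figures above: a job spanning 4 compute nodes sees a /mnt/global_ramdisk of 4 × 110 GB = 440 GB, with an aggregate throughput of about 3 × (4 + 1) = 15 GB/s, while a single POSIX thread still peaks at roughly 2 GB/s.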
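The striping note in the Salomon hunks above ("Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience") can be acted on with the standard Lustre tools. A minimal sketch, assuming a hypothetical job directory `/scratch/temp/my_job` on the TEMP workspace:

```console
$ lfs getstripe /scratch/temp/my_job        # show the current layout (default: stripe count 1, stripe size 1 MB)
$ lfs setstripe -c 8 /scratch/temp/my_job   # stripe files created in this directory across 8 OSTs
$ lfs getstripe /scratch/temp/my_job        # verify the new default layout
```

Large files written in parallel generally benefit from a higher stripe count, while directories full of small files are usually served best by the default single stripe.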