Commit 24465a34 authored by Marek Chrastina

Add inodes quota on salomon

parent 5eebaa60
@@ -148,7 +148,7 @@ The SCRATCH filesystem is realized as Lustre parallel filesystem and is available

| Mountpoint           | /scratch |
| Capacity             | 146 TB   |
| Throughput           | 6 GB/s   |
| User space quota     | 100 TB   |
| User inodes quota    | 10 M     |
| Default stripe size  | 1 MB     |
| Default stripe count | 1        |
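To see how the capacity in the table above is distributed across the Lustre targets, the standard client tools can be used; a minimal sketch:

```console
$ lfs df -h /scratch    # per-OST space usage and the filesystem total
$ lfs df -i /scratch    # inode usage, which the inodes quota counts against
```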
@@ -303,13 +303,13 @@ Each node is equipped with local /tmp directory of few GB capacity. The /tmp dir

## Summary

| Mountpoint | Usage                     | Protocol | Net Capacity   | Throughput | Space/Inodes quota | Access                  | Services                    |
| ---------- | ------------------------- | -------- | -------------- | ---------- | ------------------ | ----------------------- | --------------------------- |
| /home      | home directory            | Lustre   | 320 TiB        | 2 GB/s     | 250 GB / 500 k     | Compute and login nodes | backed up                   |
| /scratch   | cluster shared jobs' data | Lustre   | 146 TiB        | 6 GB/s     | 100 TB / 10 M      | Compute and login nodes | files older 90 days removed |
| /lscratch  | node local jobs' data     | local    | 330 GB         | 100 MB/s   | none / none        | Compute nodes           | purged after job ends       |
| /ramdisk   | node local jobs' data     | local    | 60, 90, 500 GB | 5-50 GB/s  | none / none        | Compute nodes           | purged after job ends       |
| /tmp       | local temporary files     | local    | 9.5 GB         | 100 MB/s   | none / none        | Compute and login nodes | auto purged                 |
## CESNET Data Storage
@@ -133,7 +133,7 @@ Filesystem: /scratch
Space used: 377G
Space limit: 93T
Entries: 14k
Entries limit: 10m
# based on Lustre quota

Filesystem: /scratch
@@ -144,6 +144,8 @@ Entries: 14k
Filesystem: /scratch/work
Space used: 377G
Entries: 14k
Entries: 40k
Entries limit: 1.0m
# based on Robinhood

Filesystem: /scratch/temp
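The figures labelled "# based on Lustre quota" can also be read directly from the Lustre client tools; a minimal sketch, querying the calling user's own quota:

```console
$ lfs quota -u $USER /scratch       # space and inode usage against the Lustre quota
$ lfs quota -h -u $USER /scratch    # the same, with human-readable sizes
```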
@@ -235,42 +237,36 @@ The files on HOME will not be deleted until end of the [users lifecycle][10].
The workspace is backed up, such that it can be restored in case of a catastrophic failure resulting in significant data loss. This backup, however, is not intended to restore old versions of user data or to restore (accidentally) deleted files.
| HOME workspace    |                |
| ----------------- | -------------- |
| Accesspoint       | /home/username |
| Capacity          | 0.5 PB         |
| Throughput        | 6 GB/s         |
| User space quota  | 250 GB         |
| User inodes quota | 500 k          |
| Protocol          | NFS, 2-Tier    |
### Scratch
The SCRATCH is realized as a Lustre parallel file system and is available from all login and computational nodes. The default stripe size is 1 MB and the default stripe count is 1. There are 54 OSTs dedicated to the SCRATCH file system.
!!! note
    Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 10 m inodes and 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire file system and denying service to other users. If 100 TB of space or 10 m inodes should prove insufficient for a particular user, contact [support][d]; the quota may be lifted upon request.
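A hedged sketch of inspecting and overriding the striping defaults for a particular directory (the path and the stripe count are placeholders; choose values that match your I/O pattern):

```console
$ lfs getstripe /scratch/temp/myjob               # show the stripe size and count in effect
$ lfs setstripe -S 1M -c 8 /scratch/temp/myjob    # new files here will be striped over 8 OSTs
```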
#### Work
The WORK workspace resides on the SCRATCH file system. Users may create subdirectories and files in the directories **/scratch/work/user/username** and **/scratch/work/project/projectid**. The /scratch/work/user/username directory is private to the user, much like the home directory. The /scratch/work/project/projectid directory is accessible to all users involved in the project projectid.
!!! note
    The WORK workspace is intended to store users' project data as well as for high performance access to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up.
Files on the WORK file system are **persistent** (not automatically deleted) throughout the duration of the project.
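A minimal sketch of the two access points described above, assuming a hypothetical project ID OPEN-XX-XX (replace it with your own):

```console
$ mkdir -p /scratch/work/user/$USER/mysim            # private to the user
$ mkdir -p /scratch/work/project/OPEN-XX-XX/shared   # visible to all members of the project
```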
#### Temp
The TEMP workspace resides on the SCRATCH file system. The TEMP workspace accesspoint is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK.
!!! note
    The TEMP workspace is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory.
@@ -280,21 +276,50 @@ The TEMP workspace resides on SCRATCH file system. The TEMP workspace accesspoint
!!! warning
    Files on the TEMP file system that are **not accessed for more than 90 days** will be automatically **deleted**.
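A hedged job-script sketch of this pattern, assuming a PBS batch job; the directory layout, input and output names, and the executable are placeholders:

```bash
#!/bin/bash
# run the I/O intensive part of the job from a job-private directory on TEMP
SCRDIR=/scratch/temp/$USER/$PBS_JOBID
mkdir -p "$SCRDIR"
cd "$SCRDIR" || exit 1

cp "$PBS_O_WORKDIR"/input.dat .         # stage input from the submit directory
./mysimulation input.dat > output.log   # heavy I/O happens on the Lustre scratch
cp output.log "$PBS_O_WORKDIR"/         # copy results back before the 90-day purge
```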
<table>
<tr>
<td style="background-color: rgba(0, 0, 0, 0.54); color: white;"></td>
<td style="background-color: rgba(0, 0, 0, 0.54); color: white;">WORK workspace</td>
<td style="background-color: rgba(0, 0, 0, 0.54); color: white;">TEMP workspace</td>
</tr>
<tr>
<td style="vertical-align: middle;">Accesspoints</td>
<td>/scratch/work/user/username,<br />/scratch/work/project/projectid</td>
<td>/scratch/temp</td>
</tr>
<tr>
<td>Capacity</td>
<td colspan="2" style="vertical-align: middle; text-align: center;">1.6 PB</td>
</tr>
<tr>
<td>Throughput</td>
<td colspan="2" style="vertical-align: middle; text-align: center;">30 GB/s</td>
</tr>
<tr>
<td>User space quota</td>
<td colspan="2" style="vertical-align: middle; text-align: center;">100 TB</td>
</tr>
<tr>
<td>User inodes quota</td>
<td colspan="2" style="vertical-align: middle; text-align: center;">10 M</td>
</tr>
<tr>
<td>Default stripe size</td>
<td colspan="2" style="vertical-align: middle; text-align: center;">1 MB</td>
</tr>
<tr>
<td>Default stripe count</td>
<td colspan="2" style="vertical-align: middle; text-align: center;">1</td>
</tr>
<tr>
<td>Number of OSTs</td>
<td colspan="2" style="vertical-align: middle; text-align: center;">54</td>
</tr>
<tr>
<td>Protocol</td>
<td colspan="2" style="vertical-align: middle; text-align: center;">Lustre</td>
</tr>
</table>
## RAM Disk
@@ -368,21 +393,72 @@ within a job.

| ------------------ | ---------------------------------------- |
| Mountpoint         | /mnt/global_ramdisk                       |
| Accesspoint        | /mnt/global_ramdisk                       |
| Capacity           | (N*110) GB                                |
| Throughput         | 3*(N+1) GB/s, 2 GB/s single POSIX thread  |
| User quota         | none                                      |
N = number of compute nodes in the job.
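For example, with N = 4 compute nodes the global RAM disk offers 4*110 = 440 GB of shared capacity and roughly 3*(4+1) = 15 GB/s of aggregate throughput.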
## Summary
<table>
<tr>
<td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Mountpoint</td>
<td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Usage</td>
<td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Protocol</td>
<td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Net Capacity</td>
<td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Throughput</td>
<td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Space/Inodes quota</td>
<td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Access</td>
<td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Service</td>
</tr>
<tr>
<td>/home</td>
<td>home directory</td>
<td>NFS, 2-Tier</td>
<td>0.5 PB</td>
<td>6 GB/s</td>
<td>250&nbsp;GB / 500&nbsp;k</td>
<td>Compute and login nodes</td>
<td>backed up</td>
</tr>
<tr>
<td style="background-color: #D3D3D3;">/scratch/work</td>
<td style="background-color: #D3D3D3;">large project files</td>
<td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">Lustre</td>
<td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">1.69 PB</td>
<td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">30 GB/s</td>
<td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">100&nbsp;TB / 10&nbsp;M</td>
<td style="background-color: #D3D3D3;">Compute and login nodes</td>
<td style="background-color: #D3D3D3;">none</td>
</tr>
<tr>
<td style="background-color: #D3D3D3;">/scratch/temp</td>
<td style="background-color: #D3D3D3;">job temporary data</td>
<td style="background-color: #D3D3D3;">Compute and login nodes</td>
<td style="background-color: #D3D3D3;">files older 90 days removed</td>
</tr>
<tr>
<td>/ramdisk</td>
<td>job temporary data, node local</td>
<td>tmpfs</td>
<td>110 GB</td>
<td>90 GB/s</td>
<td>none / none</td>
<td>Compute nodes, node local</td>
<td>purged after job ends</td>
</tr>
<tr>
<td style="background-color: #D3D3D3;">/mnt/global_ramdisk</td>
<td style="background-color: #D3D3D3;">job temporary data</td>
<td style="background-color: #D3D3D3;">BeeGFS</td>
<td style="background-color: #D3D3D3;">(N*110) GB</td>
<td style="background-color: #D3D3D3;">3*(N+1) GB/s</td>
<td style="background-color: #D3D3D3;">none / none</td>
<td style="background-color: #D3D3D3;">Compute nodes, job shared</td>
<td style="background-color: #D3D3D3;">purged after job ends</td>
</tr>
</table>
N = number of compute nodes in the job.