Commit 5eebaa60 authored by Marek Chrastina

Add inodes quotas on anselm

parent 1e302047
Pipeline #5652 passed with stages in 1 minute and 22 seconds
@@ -120,7 +120,8 @@ Default stripe size is 1MB, stripe count is 1. There are 22 OSTs dedicated for t
| Mountpoint | /home |
| Capacity | 320 TB |
| Throughput | 2 GB/s |
| User quota | 250 GB |
| User space quota | 250 GB |
| User inodes quota | 500 k |
| Default stripe size | 1 MB |
| Default stripe count | 1 |
| Number of OSTs | 22 |
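The space and inode quotas listed above can be checked directly with the standard Lustre `lfs quota` command; a minimal sketch, assuming the Lustre client tools are available on the login and compute nodes:

```console
# Show block (space) and inode usage together with the corresponding limits
# for the current user on the /home Lustre filesystem
$ lfs quota -u $USER /home
```

Once either the space or the inode limit is reached, writing new data or creating new files typically fails with a quota error, so both values are worth watching.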
@@ -145,10 +146,11 @@ The SCRATCH filesystem is realized as Lustre parallel filesystem and is availabl
| SCRATCH filesystem | |
| -------------------- | -------- |
| Mountpoint | /scratch |
| Capacity | 146TB |
| Throughput | 6GB/s |
| User quota | 100TB |
| Default stripe size | 1MB |
| Capacity | 146 TB |
| Throughput | 6 GB/s |
| User space quota | 100 TB |
| User inodes quota | 10 M |
| Default stripe size | 1 MB |
| Default stripe count | 1 |
| Number of OSTs | 10 |
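Because the default stripe count is 1, a single large file is served by one OST by default; files that need the aggregate bandwidth of several OSTs have to be striped explicitly. A minimal sketch using the standard Lustre `lfs` utilities (the directory path is only an illustrative example):

```console
# Report the current striping of a directory or file
$ lfs getstripe /scratch/$USER/mydir

# Stripe new files created in this directory across all 10 OSTs
# with a 1 MB stripe size (-c stripe count; -S stripe size, -s on older Lustre releases)
$ lfs setstripe -S 1M -c 10 /scratch/$USER/mydir
```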
@@ -178,7 +180,7 @@ Filesystem: /scratch
Space used: 0
Space limit: 93T
Entries: 0
Entries limit: 0
Entries limit: 10m
```
In this example, we view the current size limits and the space occupied on the /home and /scratch filesystems for the user executing the command.
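If the reported usage approaches the limits, it helps to find out which directory trees consume the space and the file entries; a small sketch using standard tools (the paths are illustrative):

```console
# Space consumed per top-level directory of the user's scratch area
$ du -sh /scratch/$USER/*

# Number of entries (files, directories, links) counted against the inode quota
$ find /home/$USER -xdev | wc -l
```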
@@ -269,8 +271,8 @@ The local scratch filesystem is intended for temporary scratch data generated du
| ------------------------ | -------------------- |
| Mountpoint | /lscratch |
| Accesspoint | /lscratch/$PBS_JOBID |
| Capacity | 330GB |
| Throughput | 100MB/s |
| Capacity | 330 GB |
| Throughput | 100 MB/s |
| User quota | none |
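A common pattern is to stage the working set into the local scratch directory at the beginning of the job and to copy the results back before the job ends, since the per-job directory is purged once the job finishes. A minimal jobscript sketch (the data paths and the solver name are only placeholders):

```bash
#!/bin/bash
# Hypothetical jobscript fragment: run an I/O-intensive step on the node-local scratch disk
SCRDIR=/lscratch/$PBS_JOBID

# Stage the input data from the shared /scratch filesystem onto the local disk
cp /scratch/$USER/myjob/input.dat "$SCRDIR"/
cd "$SCRDIR"

# Placeholder executable; reads input.dat and writes output.dat in the current directory
"$HOME"/bin/my_solver input.dat > output.dat

# Copy the results back before the job ends; /lscratch/$PBS_JOBID is purged afterwards
cp output.dat /scratch/$USER/myjob/
```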
### RAM Disk
@@ -287,13 +289,13 @@ The local RAM disk filesystem is intended for temporary scratch data generated d
!!! note
    The local RAM disk directory /ramdisk/$PBS_JOBID will be deleted immediately after the calculation ends. Users should take care to save the output data from within the jobscript.
| RAM disk | |
| ----------- | ------------------------------------------------------------------------------------------------------- |
| Mountpoint | /ramdisk |
| Accesspoint | /ramdisk/$PBS_JOBID |
| Capacity | 60GB at compute nodes without accelerator, 90GB at compute nodes with accelerator, 500GB at fat nodes |
| Throughput | over 1.5 GB/s write, over 5 GB/s read, single thread, over 10 GB/s write, over 50 GB/s read, 16 threads |
| User quota | none |
| RAM disk | |
| ----------- | -------------------------------------------------------------------------------------------------------- |
| Mountpoint | /ramdisk |
| Accesspoint | /ramdisk/$PBS_JOBID |
| Capacity | 60 GB on compute nodes without an accelerator, 90 GB on compute nodes with an accelerator, 500 GB on fat nodes |
| Throughput | over 1.5 GB/s write, over 5 GB/s read (single thread); over 10 GB/s write, over 50 GB/s read (16 threads) |
| User quota | none |
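Applications that create many small temporary files usually benefit from pointing their temporary directory at the RAM disk. A small sketch (the TMPDIR convention is honoured by many, but not all, applications; the result file name is a placeholder):

```bash
# Inside a jobscript: direct temporary files to the per-job RAM disk directory
export TMPDIR=/ramdisk/$PBS_JOBID

# ... run the application here ...

# Save anything still needed from the RAM disk before the job ends;
# /ramdisk/$PBS_JOBID is deleted automatically afterwards
cp "$TMPDIR"/result.dat /scratch/$USER/
```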
### Tmp
@@ -301,13 +303,13 @@ Each node is equipped with local /tmp directory of few GB capacity. The /tmp dir
## Summary
| Mountpoint | Usage | Protocol | Net Capacity | Throughput | Limitations | Access | Services | |
| ---------- | ------------------------- | -------- | -------------- | ---------- | ----------- | ----------------------- | --------------------------- | ------ |
| /home | home directory | Lustre | 320 TiB | 2 GB/s | Quota 250GB | Compute and login nodes | backed up | |
| /scratch | cluster shared jobs' data | Lustre | 146 TiB | 6 GB/s | Quota 100TB | Compute and login nodes | files older 90 days removed | |
| /lscratch | node local jobs' data | local | 330 GB | 100 MB/s | none | Compute nodes | purged after job ends | |
| /ramdisk | node local jobs' data | local | 60, 90, 500 GB | 5-50 GB/s | none | Compute nodes | purged after job ends | |
| /tmp | local temporary files | local | 9.5 GB | 100 MB/s | none | Compute and login nodes | auto | purged |
| Mountpoint | Usage | Protocol | Net Capacity | Throughput | Space/Inodes quota | Access | Services |
| ---------- | ------------------------- | -------- | -------------- | ---------- | ------------------ | ----------------------- | -------------------------------- |
| /home | home directory | Lustre | 320 TiB | 2 GB/s | 250 GB / 500 k | Compute and login nodes | backed up |
| /scratch | cluster shared jobs' data | Lustre | 146 TiB | 6 GB/s | 100 TB / 10 M | Compute and login nodes | files older than 90 days removed |
| /lscratch | node local jobs' data | local | 330 GB | 100 MB/s | none / none | Compute nodes | purged after job ends |
| /ramdisk | node local jobs' data | local | 60, 90, 500 GB | 5-50 GB/s | none / none | Compute nodes | purged after job ends |
| /tmp | local temporary files | local | 9.5 GB | 100 MB/s | none / none | Compute and login nodes | auto purged |
## CESNET Data Storage
......