Commit d68121b4 authored by Jan Siwiec

Update storage.md

parent 4971d4cc
Karolina cluster provides two main shared filesystems, the [HOME filesystem][1] and the [SCRATCH filesystem][2].
## Archiving
Shared filesystems should not be used as a backup for large amounts of data or for long-term data storage. The academic staff and students of research institutions in the Czech Republic can use the [CESNET storage][6] service, which is available via SSHFS.
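A minimal example of archiving data over SSHFS (the server address and paths below are illustrative; use the endpoint assigned by CESNET):

```console
$ mkdir -p ~/cesnet
$ sshfs username@ssh.du4.cesnet.cz:/home/username ~/cesnet   # mount the remote storage
$ cp large-archive.tar.gz ~/cesnet/                          # copy data to the archive
$ fusermount -u ~/cesnet                                     # unmount when finished
```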
### HOME File System
The HOME filesystem is an HA cluster of two active-passive NFS servers. This filesystem contains users' home directories `/home/username`. Accessible capacity is 31 TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 25 GB per user. Should 25 GB prove insufficient, contact [support][d]; the quota may be increased upon request.
!!! note
    The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active projects.
The files on the HOME filesystem will not be deleted until the end of the [user's lifecycle][4].
The filesystem is backed up, so that it can be restored in case of a catastrophic failure.
| HOME filesystem      |                                |
| -------------------- | ------------------------------ |
| Mountpoint           | /home/username                 |
| Capacity             | 31 TB                          |
| Throughput           | 1.93 GB/s write, 3.1 GB/s read |
| User space quota     | 25 GB                          |
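To check how much of the quota your home directory currently consumes, standard Linux tools are sufficient (a simple sketch; site-specific quota tools may give more detail):

```console
$ du -sh /home/$USER   # total size of your home directory
$ df -h /home          # capacity and usage of the whole HOME filesystem
```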
### SCRATCH File System
The SCRATCH filesystem is realized as a parallel Lustre filesystem. It is accessible via the InfiniBand network and is available from all login and compute nodes. Extended ACLs are provided on the Lustre filesystems for sharing data with other users using fine-grained control. For basic information about Lustre, see the [Understanding the Lustre Filesystems][7] subsection of Barbora's storage documentation.
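Lustre achieves its parallelism by striping files across multiple storage targets (OSTs); striping can be inspected or set per file and directory with the `lfs` utility (the paths below are illustrative):

```console
$ lfs getstripe /scratch/project/data.bin   # show stripe count and size of a file
$ lfs setstripe -c 4 /scratch/project/big   # stripe new files in this directory over 4 OSTs
```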
The SCRATCH filesystem is mounted in the `/scratch` directory. Users may freely create subdirectories and files on the filesystem. Accessible capacity is 1000 TB, shared among all users. Users are restricted by PROJECT quotas set to 20 TB. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and denying service to other users. Should 20 TB prove insufficient, contact [support][d]; the quota may be increased upon request.
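Current usage against the limits can be queried directly on the Lustre filesystem with `lfs quota` (the project ID below is made up for the example):

```console
$ lfs quota -u $USER /scratch   # usage and limits for your user
$ lfs quota -p 1234 /scratch    # usage and limits for project ID 1234 (illustrative)
```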
### PROJECT File System
The PROJECT data storage is a central storage for project and user data at IT4Innovations, accessible from all clusters.
For more information, see the [PROJECT Data Storage][9] section.
### Disk Usage and Quota Commands
For more information about disk usage and user quotas, see Barbora's [storage section][8].
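Before turning to the quota commands, standard tools can already reveal where space is being consumed (the path is illustrative):

```console
$ du -h --max-depth=1 /scratch/project/myproject | sort -h   # subdirectory sizes, largest last
```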
### Extended ACLs
For more information, see the [Access Control List][10] section of the documentation.
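As an illustration, a collaborator can be granted read access to a shared directory with the standard POSIX ACL tools (the username and path are made up for the example):

```console
$ setfacl -m u:collaborator:rx /scratch/project/shared      # grant read + traverse access
$ setfacl -d -m u:collaborator:rx /scratch/project/shared   # default ACL for newly created files
$ getfacl /scratch/project/shared                           # verify the resulting ACL
```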
### TMP
Each node is equipped with a local `/tmp` directory with a capacity of a few GB. The `/tmp` directory should be used to work with small temporary files. Old files in the `/tmp` directory are automatically purged.
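A common pattern is to create a unique per-job directory under `/tmp` and remove it on exit (a minimal sketch using standard shell tools):

```console
$ TMPDIR=$(mktemp -d /tmp/myjob.XXXXXX)   # create a unique temporary directory
$ trap 'rm -rf "$TMPDIR"' EXIT            # clean it up when the shell exits
$ cp small-input.dat "$TMPDIR"/           # work with small temporary files locally
```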
## Summary