Commit 24c57cc8 authored by Rastislav Kubala, committed by Jan Siwiec

"df -k /directory" used for getting size of directory (/home or /scratch)

"it4i-disk-usage -h" used for getting the quota settings
!!! warning
    Cluster integration is in progress. The resulting settings may vary. The documentation will be updated.
There are three main shared file systems on the Barbora cluster: the [HOME][1], [SCRATCH][2], and [PROJECT][5]. All login and compute nodes may access the same data on the shared file systems. Compute nodes are also equipped with local (non-shared) scratch, RAM disk, and tmp file systems.
## Archiving
Do not use shared filesystems as a backup for large amounts of data or long-term archiving purposes.
## Shared Filesystems
The Barbora cluster provides three main shared filesystems: the [HOME filesystem][1], [SCRATCH filesystem][2], and the [PROJECT filesystem][5].
*Both the HOME and SCRATCH filesystems are realized as parallel Lustre filesystems. Both shared filesystems are accessible via the InfiniBand network. Extended ACLs are provided on both Lustre filesystems for sharing data with other users using fine-grained control.*
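
For example, read access for a specific collaborator could be granted with the standard POSIX ACL tools (a minimal sketch; the user name and path below are illustrative only):

```console
$ setfacl -m u:collaborator:r-x /scratch/$USER/shared_dir
$ getfacl /scratch/$USER/shared_dir
```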
```console
$ lfs getstripe /scratch/username/
stripe_count: 10 stripe_size: 1048576 stripe_offset: -1
```
In this example, we view the current stripe setting of the /scratch/username/ directory. The stripe count is changed to all OSTs and verified. All files written to this directory will be striped over 10 OSTs.
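
A hedged sketch of the command behind this example, using the standard `lfs setstripe` options (`-c -1` requests striping across all available OSTs):

```console
$ lfs setstripe -c -1 /scratch/username/
```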
Use `lfs check osts` to see the number and status of active OSTs for each filesystem on Barbora. Learn more by reading the man page.
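
A minimal sketch, assuming the standard Lustre `lfs` client utility is available on the login nodes:

```console
$ lfs check osts   # status of OSTs; may require appropriate permissions
$ man lfs
```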
The architecture of Lustre on Barbora is composed of two metadata servers (MDS) …
### HOME File System
The HOME filesystem is mounted in the directory /home. Users' home directories /home/username reside on this filesystem. Accessible capacity is 26 TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 24 GB per user. Should 24 GB prove insufficient, contact [support][d]; the quota may be lifted upon request.
!!! note
    The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active Projects.
The HOME filesystem should not be used to archive data of past Projects or other unrelated data.
The files on the HOME filesystem will not be deleted until the end of the [user's lifecycle][4].
The filesystem is backed up, so that it can be restored in case of a catastrophic failure resulting in significant data loss. However, this backup is not intended to restore old versions of user data or to restore (accidentally) deleted files.
| HOME filesystem | |
| -------------------- | --------------- |
| Accesspoint | /home/username |
| Capacity | 26 TB |
| Throughput | 1 GB/s |
| User space quota | 24 GB |
| User inodes quota | 500 k |
| Protocol | NFS |
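
To check current usage against these limits, the commands noted in the commit message above can be used; a sketch (`it4i-disk-usage` is an IT4Innovations-specific utility, so its exact options and output may differ):

```console
$ df -k /home
$ it4i-disk-usage -h
```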
### SCRATCH File System
The SCRATCH filesystem is mounted in the directory /scratch. Users may freely create subdirectories and files on the filesystem. Accessible capacity is 282 TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 9.3 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and denying service to other users. Should 9.3 TB prove insufficient, contact [support][d]; the quota may be lifted upon request.
!!! note
    The SCRATCH filesystem is intended for temporary scratch data generated during the calculation as well as for high-performance access to input and output files. All I/O-intensive jobs must use the SCRATCH filesystem as their working directory.
!!! warning
    Files on the SCRATCH filesystem that are **not accessed for more than 90 days** will be automatically **deleted**.
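
To see which of your files would be affected by this policy, a minimal sketch using the standard `find` utility (the path is illustrative):

```console
$ find /scratch/$USER -type f -atime +90
```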
The SCRATCH filesystem is realized as a Lustre parallel filesystem and is available from all login and computational nodes. Default stripe size is 1 MB, stripe count is 1. There are 5 OSTs dedicated to the SCRATCH filesystem.
!!! note
    Setting stripe size and stripe count correctly for your needs may significantly affect the I/O performance.
| SCRATCH filesystem | |
| -------------------- | --------- |
| Mountpoint | /scratch |
| Capacity | 282 TB |
| Throughput | 5 GB/s |
| Throughput [Burst] | 38 GB/s |
| User space quota     | 9.3 TB    |
| User inodes quota | 10 M |
| Default stripe size | 1 MB |
| Default stripe count | 1 |
| Number of OSTs | 5 |
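
To check your own usage against the SCRATCH quota, a hedged sketch using standard Lustre quota reporting (assuming user quotas are accounted by Lustre; the site-specific `it4i-disk-usage` utility mentioned in the commit message is an alternative):

```console
$ lfs quota -u $USER /scratch
```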
### PROJECT File System
```console
$ du -hs * .[a-zA-Z0-9]* | grep -E "[0-9]*G|[0-9]*M" | sort -hr
2,7M .idb_13.0_linux_intel64_app
```
This will list all directories with megabytes or gigabytes of consumed space in your current (in this example, HOME) directory. The list is sorted in descending order, from the largest to the smallest files/directories.
To better understand the previous commands, you can read the man pages.
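
For instance, for the utilities used in the example above (a short sketch):

```console
$ man du
$ man grep
$ man sort
```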
Do not use shared filesystems at IT4Innovations as a backup for large amounts of data or long-term archiving purposes.
The CESNET Storage service can be used for research purposes, mainly by academic staff and students of research institutions in the Czech Republic.
A user of the CESNET data storage (DU) can be an organization or an individual who is in a current employment relationship (employee) or a current study relationship (student) with a legal entity (organization) that meets the “Principles for access to CESNET Large infrastructure (Access Policy)”.
Users may only use the CESNET data storage for data transfer and storage associated with activities in science, research, development, the spread of education, culture, and prosperity. For details, see the “Acceptable Use Policy CESNET Large Infrastructure (Acceptable Use Policy, AUP)”.
The service is documented [here][g]. For special requirements, contact the CESNET Storage Department directly via e-mail: [du-support(at)cesnet.cz][h].
RSYNC finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time.
[More about RSYNC][k].
Transfer large files to/from CESNET storage, assuming membership in the Storage VO:
```console
$ rsync --progress datafile username@ssh.du1.cesnet.cz:VO_storage-cache_tape/.
$ rsync --progress username@ssh.du1.cesnet.cz:VO_storage-cache_tape/datafile .
```
Transfer large directories to/from CESNET storage, assuming membership in the Storage VO:
```console
$ rsync --progress -av datafolder username@ssh.du1.cesnet.cz:VO_storage-cache_tape/.
```