# Storage - WORK IN PROGRESS

!!! warning
    Cluster integration is in progress. The resulting settings may vary. The documentation will be updated.
There are three main shared file systems on the Barbora cluster: [HOME][1], [SCRATCH][2], and [PROJECT][5]. All login and compute nodes may access the same data on the shared file systems. Compute nodes are also equipped with local (non-shared) scratch, RAM disk, and tmp file systems.
## Archiving
Do not use shared filesystems as a backup for large amounts of data or for long-term archiving purposes.

## Shared File Systems
The Barbora cluster provides three main shared filesystems: the [HOME filesystem][1], the [SCRATCH filesystem][2], and the [PROJECT filesystem][5].
All filesystems are accessible via the InfiniBand network.

The HOME and PROJECT filesystems are realized as NFS filesystems.
The SCRATCH filesystem is realized as a parallel Lustre filesystem.

Extended ACLs are provided on the Lustre SCRATCH filesystem for sharing data with other users using fine-grained control.
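If you are unsure which backend serves a given path, you can check the mounted filesystem type directly on a login or compute node. A minimal sketch (mountpoint paths taken from the summary table below):

```console
$ df -T /home /scratch               # the Type column should report nfs for HOME and lustre for SCRATCH
$ mount | grep -E '/home|/scratch'   # shows the mount source and options for each filesystem
```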
### Understanding the Lustre Filesystems
Use `lfs getstripe` to view the current stripe parameters of a directory or file, and `lfs setstripe -s stripe_size -c stripe_count -o stripe_offset dir|filename` to set them.
Example:
```console
$ lfs getstripe /scratch/projname
$ lfs setstripe -c -1 /scratch/projname
$ lfs getstripe /scratch/projname
```
In this example, we view the current stripe setting of the /scratch/projname/ directory, change the stripe count to use all OSTs, and verify the new setting. All files written to this directory will be striped over all 5 OSTs.
Use `lfs check osts` to see the number and status of active OSTs for each filesystem on Barbora. Learn more by reading the man page:
```console
$ lfs check osts
```
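To see how much space is used and available on each target (useful when choosing a stripe count), `lfs df` can be consulted as well; a small example, assuming the /scratch mountpoint from the summary table below:

```console
$ lfs df -h /scratch     # per-MDT and per-OST capacity and usage
$ lfs df -i /scratch     # inode usage per target
```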
### Lustre on Barbora
The architecture of Lustre on Barbora is composed of two metadata servers (MDS) and two data/object storage servers (OSS).
Configuration of the SCRATCH storage:
* 2x Metadata server
* 2x Object storage server
* Lustre object storage
    * One disk array NetApp E2800
    * 54x 8TB 10kRPM 2.5" SAS HDDs
    * 5x RAID6 (8+2) OST object storage targets
    * 4 hot-spare disks
* Lustre metadata storage
    * One disk array NetApp E2600
    * 12x 300GB 15krpm SAS disks
    * 2 groups of 5 disks in RAID5 as metadata targets
    * 2 hot-spare disks
### HOME File System
### PROJECT File System
TBD
### Disk Usage and Quota Commands
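A minimal sketch of the commands typically used to check consumed space and quotas (the exact quota setup on Barbora may differ from this illustration):

```console
$ lfs quota -u $USER /scratch                               # usage and quota on the Lustre SCRATCH
$ du -hs ~/* ~/.[a-zA-Z0-9]* 2>/dev/null | sort -hr | head  # largest items in your HOME directory
```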
For a better understanding of the previous commands, read the man pages:
```console
$ man lfs
```
```console
$ man du
```
### Extended ACLs
Extended ACLs provide another security mechanism besides the standard POSIX ACLs, which are defined by three entries (for owner/group/others). Extended ACLs have more than the three basic entries. In addition, they also contain a mask entry and may contain any number of named user and named group entries.
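As an illustration, granting an additional named user read access to a file on the SCRATCH filesystem might look like this (the username `otheruser` and the file path are placeholders):

```console
$ setfacl -m u:otheruser:r-- /scratch/projname/shared_file   # add a named-user entry
$ getfacl /scratch/projname/shared_file                      # list all ACL entries, including the mask
```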
## Summary
| Mountpoint | Usage                   | Protocol | Net Capacity | Throughput                   | Limitations | Access                  | Services                         |
| ---------- | ----------------------- | -------- | ------------ | ---------------------------- | ----------- | ----------------------- | -------------------------------- |
| /home      | home directory          | NFS      | 26 TiB       | 1 GB/s                       | Quota 25GB  | Compute and login nodes | backed up                        |
| /scratch   | scratch temporary files | Lustre   | 282 TiB      | 5 GB/s, 30 GB/s burst buffer | Quota 9.3TB | Compute and login nodes | files older than 90 days removed |
| /lscratch  | local scratch ramdisk   | tmpfs    | 180 GB       | 130 GB/s                     | none        | Node local              | auto purged                      |
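A sketch of how a job might combine these filesystems: stage input from the shared SCRATCH into the fast node-local /lscratch ramdisk, compute there, and copy the results back. The directory layout under /lscratch, the file names, and the application are hypothetical:

```console
$ mkdir -p /lscratch/$USER                             # assumed writable location; the actual layout may differ
$ cp /scratch/projname/input.dat /lscratch/$USER/      # stage input into the local ramdisk
$ cd /lscratch/$USER && ./my_app input.dat output.dat  # hypothetical application run
$ cp output.dat /scratch/projname/                     # persist results on the shared SCRATCH
```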
## CESNET Data Storage