Spell check

Merged David Hrbáč requested to merge spell_check into master
1 file changed: +20 −20
@@ -36,13 +36,13 @@ Configuration of the SCRATCH Lustre storage
- SCRATCH Lustre object storage
- Disk array SFA12KX
-- 540 4 TB SAS 7.2krpm disks
-- 54 OSTs of 10 disks in RAID6 (8+2)
-- 15 hot-spare disks
+- 540 x 4 TB SAS 7.2krpm disk
+- 54 x OST of 10 disks in RAID6 (8+2)
+- 15 x hot-spare disk
- 4 x 400 GB SSD cache
- SCRATCH Lustre metadata storage
- Disk array EF3015
-- 12 600 GB SAS 15 krpm disks
+- 12 x 600 GB SAS 15 krpm disk
### Understanding the Lustre Filesystems
@@ -218,7 +218,7 @@ Shared Workspaces
### HOME
-Users home directories /home/username reside on HOME filesystem. Accessible capacity is 0.5PB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250GB per user. If 250GB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request.
+Users' home directories /home/username reside on the HOME filesystem. Accessible capacity is 0.5 PB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250 GB per user. If 250 GB proves insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.
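A quick way to see how much of the 250 GB quota is already in use is a plain size report of the home directory (a minimal sketch using standard GNU tools; the cluster may also provide a dedicated quota command not shown here):

```shell
# Summarize the total size of the home directory in human-readable
# units; compare the reported figure against the 250 GB quota.
du -sh "$HOME"
```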
!!! Note "Note"
The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active Projects.
@@ -232,9 +232,9 @@ The workspace is backed up, such that it can be restored in case of catasthropi
|HOME workspace||
|---|---|
|Accesspoint|/home/username|
-|Capacity|0.5PB|
-|Throughput|6GB/s|
-|User quota|250GB|
+|Capacity|0.5 PB|
+|Throughput|6 GB/s|
+|User quota|250 GB|
|Protocol|NFS, 2-Tier|
### WORK
@@ -246,7 +246,7 @@ The WORK workspace resides on SCRATCH filesystem.  Users may create subdirector
Files on the WORK filesystem are **persistent** (not automatically deleted) throughout duration of the project.
-The WORK workspace is hosted on SCRATCH filesystem. The SCRATCH is realized as Lustre parallel filesystem and is available from all login and computational nodes. Default stripe size is 1MB, stripe count is 1. There are 54 OSTs dedicated for the SCRATCH filesystem.
+The WORK workspace is hosted on the SCRATCH filesystem. SCRATCH is realized as a Lustre parallel filesystem and is available from all login and computational nodes. The default stripe size is 1 MB, the stripe count is 1. There are 54 OSTs dedicated to the SCRATCH filesystem.
!!! Note "Note"
Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
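The note above can be acted on with the standard Lustre `lfs` client tool. The directory name and the chosen values below are illustrative only, not a recommendation from this documentation:

```shell
# Stripe a directory across 8 OSTs with a 4 MB stripe size, so large-file
# I/O is spread over multiple object storage targets. New files created
# in the directory inherit these layout settings.
lfs setstripe --stripe-size 4M --stripe-count 8 /scratch/work/user/username/bigdata

# Inspect the resulting layout of the directory (or of any file in it).
lfs getstripe /scratch/work/user/username/bigdata
```

These commands only work on a mounted Lustre filesystem, i.e. on the cluster's login or compute nodes.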
@@ -254,17 +254,17 @@ The WORK workspace is hosted on SCRATCH filesystem. The SCRATCH is realized as L
|WORK workspace||
|---|---|
|Accesspoints|/scratch/work/user/username, /scratch/work/user/projectid|
-|Capacity |1.6P|
-|Throughput|30GB/s|
-|User quota|100TB|
-|Default stripe size|1MB|
+|Capacity|1.6 PB|
+|Throughput|30 GB/s|
+|User quota|100 TB|
+|Default stripe size|1 MB|
|Default stripe count|1|
|Number of OSTs|54|
|Protocol|Lustre|
### TEMP
-The TEMP workspace resides on SCRATCH filesystem. The TEMP workspace accesspoint is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6P, shared among all users on TEMP and WORK. Individual users are restricted by filesystem usage quotas, set to 100TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and deny service to other users. >If 100TB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request.
+The TEMP workspace resides on the SCRATCH filesystem. The TEMP workspace accesspoint is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by filesystem usage quotas, set to 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and denying service to other users. If 100 TB proves insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.
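Current usage against the SCRATCH quota can be checked with the standard Lustre `lfs quota` command (a sketch; it must be run on a node where the Lustre filesystem is mounted):

```shell
# Show usage and limits for the current user on the SCRATCH filesystem;
# WORK and TEMP share this quota. -h prints human-readable sizes.
lfs quota -h -u "$USER" /scratch
```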
!!! Note "Note"
The TEMP workspace is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory.
@@ -273,7 +273,7 @@ The TEMP workspace resides on SCRATCH filesystem. The TEMP workspace accesspoint
Files on the TEMP filesystem that are **not accessed for more than 90 days** will be automatically **deleted**.
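The 90-day criterion is based on file access time (atime). The selection can be demonstrated with standard GNU tools on a throwaway directory; the actual purge mechanism used on the cluster is not specified here:

```shell
# Create one freshly accessed file and one whose access time is 100 days
# old, then list the files that fail the 90-day access-time criterion.
tmpdir=$(mktemp -d)
touch "$tmpdir/recent.dat"
touch -a -d "100 days ago" "$tmpdir/stale.dat"   # backdate the access time
find "$tmpdir" -type f -atime +90                # matches only stale.dat
rm -rf "$tmpdir"
```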
-The TEMP workspace is hosted on SCRATCH filesystem. The SCRATCH is realized as Lustre parallel filesystem and is available from all login and computational nodes. Default stripe size is 1MB, stripe count is 1. There are 54 OSTs dedicated for the SCRATCH filesystem.
+The TEMP workspace is hosted on the SCRATCH filesystem. SCRATCH is realized as a Lustre parallel filesystem and is available from all login and computational nodes. The default stripe size is 1 MB, the stripe count is 1. There are 54 OSTs dedicated to the SCRATCH filesystem.
!!! Note "Note"
Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
@@ -281,10 +281,10 @@ The TEMP workspace is hosted on SCRATCH filesystem. The SCRATCH is realized as L
|TEMP workspace||
|---|---|
|Accesspoint|/scratch/temp|
-|Capacity|1.6P|
-|Throughput|30GB/s|
-|User quota|100TB|
-|Default stripe size|1MB|
+|Capacity|1.6 PB|
+|Throughput|30 GB/s|
+|User quota|100 TB|
+|Default stripe size|1 MB|
|Default stripe count|1|
|Number of OSTs|54|
|Protocol|Lustre|
@@ -319,7 +319,7 @@ Summary
|---|---|
| /home|home directory|NFS, 2-Tier|0.5 PB|6 GB/s|Quota 250GB|Compute and login nodes|backed up|
|/scratch/work|large project files|Lustre|1.69 PB|30 GB/s|Quota|Compute and login nodes|none|
-|/scratch/temp|job temporary data|Lustre|1.69 PB|30 GB/s|Quota 100TB|Compute and login nodes|files older 90 days removed|
+|/scratch/temp|job temporary data|Lustre|1.69 PB|30 GB/s|Quota 100 TB|Compute and login nodes|files older than 90 days removed|
|/ramdisk|job temporary data, node local|local|120GB|90 GB/s|none|Compute nodes|purged after job ends|