Salomon computer provides two main shared filesystems, the HOME filesystem and the SCRATCH filesystem.
### HOME filesystem
The HOME filesystem is realized as a tiered filesystem, exported via NFS. The first tier has a capacity of 100 TB, the second tier 400 TB. The filesystem is available on all login and computational nodes. The HOME filesystem hosts the [HOME workspace](#home).
### SCRATCH filesystem
Configuration of the SCRATCH Lustre storage:

- SCRATCH Lustre object storage
  - Disk array SFA12KX
  - 540 x 4 TB SAS 7.2 krpm disks
  - 54 x OSTs of 10 disks in RAID6 (8+2)
  - 15 x hot-spare disks
  - 4 x 400 GB SSD cache
- SCRATCH Lustre metadata storage
  - Disk array EF3015
  - 12 x 600 GB SAS 15 krpm disks
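As a rough sanity check on the capacity: in a RAID6 (8+2) group only 8 of the 10 disks carry data, so each OST provides about 8 x 4 TB = 32 TB, and 54 OSTs together give roughly 54 x 32 TB ≈ 1.7 PB of usable SCRATCH space, which matches the net capacity quoted in the summary table below.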
### Understanding the Lustre Filesystems
Shared Workspaces
-----------------
### HOME
Users' home directories /home/username reside on the HOME filesystem. Accessible capacity is 0.5 PB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250 GB per user. Should 250 GB prove insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.
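To see what is consuming the quota, per-directory usage can be inspected with standard tools; whether the `quota` command itself reports the NFS limits depends on the cluster configuration, so treat that last line as an assumption.

```bash
# Total usage of the home directory
$ du -sh /home/$USER

# Usage broken down by top-level subdirectory, largest last
$ du -h --max-depth=1 /home/$USER | sort -h

# If quota reporting is enabled on the NFS mount (assumption), show the current limit
$ quota -s
```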
!!! Note "Note"
The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active Projects.
The workspace is backed up, such that it can be restored in case of catastrophic failure.
|HOME workspace||
|---|---|
|Accesspoint|/home/username|
|Capacity|0.5 PB|
|Throughput|6 GB/s|
|User quota|250 GB|
|Protocol|NFS, 2-Tier|
### WORK
The WORK workspace resides on the SCRATCH filesystem. Users may create subdirectories and files on the workspace.
Files on the WORK filesystem are **persistent** (not automatically deleted) throughout the duration of the project.
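For illustration, staging a project into the WORK workspace might look like the following; the my_project directory and input.dat file are hypothetical names, and the path follows the accesspoints listed in the table below.

```bash
# Create a personal project directory on the WORK workspace
$ mkdir -p /scratch/work/user/$USER/my_project

# Copy an input file there (hypothetical file name)
$ cp input.dat /scratch/work/user/$USER/my_project/
```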
The WORK workspace is hosted on the SCRATCH filesystem. The SCRATCH is realized as a Lustre parallel filesystem and is available from all login and computational nodes. Default stripe size is 1 MB, stripe count is 1. There are 54 OSTs dedicated to the SCRATCH filesystem.
!!! Note "Note"
Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
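For illustration, striping can be inspected and adjusted with the standard Lustre `lfs` utility. The directory below is a hypothetical example and the values are only a starting point; suitable settings depend on your file sizes and access pattern.

```bash
# Show the current stripe settings of a file or directory
$ lfs getstripe /scratch/work/user/$USER/my_project

# Stripe new files in this directory across 10 OSTs with a 1 MB stripe size
# (existing files keep their original layout)
$ lfs setstripe -c 10 -S 1m /scratch/work/user/$USER/my_project
```

As a rule of thumb, large files accessed by many processes tend to benefit from a higher stripe count, while directories full of small files are usually best left at the default stripe count of 1.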
|WORK workspace||
|---|---|
|Accesspoints|/scratch/work/user/username, /scratch/work/user/projectid|
|Capacity|1.6 PB|
|Throughput|30 GB/s|
|User quota|100 TB|
|Default stripe size|1 MB|
|Default stripe count|1|
|Number of OSTs|54|
|Protocol|Lustre|
### TEMP
The TEMP workspace resides on the SCRATCH filesystem. The TEMP workspace accesspoint is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by filesystem usage quotas, set to 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and denying service to other users. Should 100 TB prove insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.
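Current usage against the quota on the Lustre SCRATCH filesystem can be checked with `lfs quota`; the /scratch mount point below is an assumption based on the accesspoints listed in this section.

```bash
# Report block and inode usage together with the quota limits for the current user
$ lfs quota -u $USER /scratch
```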
!!! Note "Note"
The TEMP workspace is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory.
Files on the TEMP filesystem that are **not accessed for more than 90 days** will be automatically **deleted**.
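A minimal job-script sketch of the recommended workflow follows. It assumes a PBS-style batch environment (so that $PBS_JOBID is defined) and uses hypothetical program and file names; the pattern is simply to create a per-job directory on TEMP, run there, copy the results back, and clean up.

```bash
#!/bin/bash
# Create a per-job scratch directory on the TEMP workspace (PBS-style job id assumed)
SCRDIR=/scratch/temp/$USER/$PBS_JOBID
mkdir -p $SCRDIR
cd $SCRDIR || exit 1

# Stage input data from HOME (hypothetical file name)
cp /home/$USER/my_project/input.dat .

# Run the I/O intensive part of the job with TEMP as the working directory (hypothetical binary)
/home/$USER/my_project/my_solver input.dat > output.log

# Save the results to HOME or WORK and clean up the scratch files
cp output.log /home/$USER/my_project/
cd /home/$USER
rm -rf $SCRDIR
```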
The TEMP workspace is hosted on the SCRATCH filesystem. The SCRATCH is realized as a Lustre parallel filesystem and is available from all login and computational nodes. Default stripe size is 1 MB, stripe count is 1. There are 54 OSTs dedicated to the SCRATCH filesystem.
!!! Note "Note"
Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
|TEMP workspace||
|---|---|
|Accesspoint|/scratch/temp|
|Capacity|1.6 PB|
|Throughput|30 GB/s|
|User quota|100 TB|
|Default stripe size|1 MB|
|Default stripe count|1|
|Number of OSTs|54|
|Protocol|Lustre|
Summary
-------
|Mountpoint|Usage|Protocol|Net Capacity|Throughput|Limitations|Access|Services|
|---|---|---|---|---|---|---|---|
|/home|home directory|NFS, 2-Tier|0.5 PB|6 GB/s|Quota 250 GB|Compute and login nodes|backed up|
|/scratch/work|large project files|Lustre|1.69 PB|30 GB/s|Quota|Compute and login nodes|none|
|/scratch/temp|job temporary data|Lustre|1.69 PB|30 GB/s|Quota 100 TB|Compute and login nodes|files older than 90 days removed|
|/ramdisk|job temporary data, node local|local|120 GB|90 GB/s|none|Compute nodes|purged after job ends|
CESNET Data Storage
===================
The procedure to obtain CESNET access is quick and trouble-free.
CESNET storage access
---------------------
### Understanding CESNET storage
!!! Note "Note"
It is very important to understand the CESNET storage before uploading data. Please read <https://du.cesnet.cz/en/navody/home-migrace-plzen/start> first.
Once registered for CESNET Storage, you may [access the storage](https://du.cesnet.cz/en/navody/faq/start) in a number of ways. We recommend the SSHFS and RSYNC methods.
!!! Note "Note"
SSHFS: The storage will be mounted like a local hard drive
The SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard drive. Files can then be copied in and out in the usual fashion.
First, create the mount point:
```bash
$ mkdir cesnet
```
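With the mount point in place, a minimal mount and unmount sketch follows; it assumes the sshfs client is installed on your machine and reuses the ssh.du1.cesnet.cz endpoint from the rsync examples below.

```bash
# Mount your CESNET home directory onto the local "cesnet" directory
$ sshfs username@ssh.du1.cesnet.cz:. cesnet/

# Work with the files as if they were on a local drive
$ ls cesnet/
$ cp datafile cesnet/

# Unmount when finished
$ fusermount -u cesnet
```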
### Rsync access

Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time.
More about Rsync at <https://du.cesnet.cz/en/navody/rsync/start#pro_bezne_uzivatele>
Transfer large files to/from CESNET storage, assuming membership in the Storage VO:
```bash
$ rsync --progress datafile username@ssh.du1.cesnet.cz:VO_storage-cache_tape/.
$ rsync --progress username@ssh.du1.cesnet.cz:VO_storage-cache_tape/datafile .
```
Transfer large directories to/from CESNET storage, assuming membership in the Storage VO:
```bash
$ rsync --progress -av datafolder username@ssh.du1.cesnet.cz:VO_storage-cache_tape/.
$ rsync --progress -av username@ssh.du1.cesnet.cz:VO_storage-cache_tape/datafolder .
```
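For very large files over a wide-area link it can help to keep partially transferred data so that an interrupted copy can resume rather than restart; a sketch using rsync's standard --partial option (same hypothetical file and VO path as above):

```bash
$ rsync --progress --partial datafile username@ssh.du1.cesnet.cz:VO_storage-cache_tape/.
```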
Transfer rates of about 28 MB/s can be expected.