Commit 356618f0 authored by Jan Siwiec

updated data storage units and cesnet storage numbers related to #86

parent 80ccdf32
@@ -61,35 +61,35 @@ Disk usage and user quotas can be checked and reviewed using the following command
$ it4i-disk-usage
```
Example for Salomon:
```console
$ it4i-disk-usage -h
# Using human-readable format
# Using power of 1000 for space
# Using power of 1000 for entries
Filesystem: /home
Space used: 110GB
Space limit: 250GB
Entries: 40K
Entries limit: 500K
# based on filesystem quota
Filesystem: /scratch
Space used: 377GB
Space limit: 100TB
Entries: 14K
Entries limit: 10M
# based on Lustre quota
Filesystem: /scratch
Space used: 377GB
Entries: 14K
# based on Robinhood
Filesystem: /scratch/work
Space used: 377GB
Entries: 14K
Entries: 40K
Entries limit: 1.0M
@@ -102,7 +102,7 @@ Entries: 6
```
In this example, we view the current size limits and the space occupied on the /home and /scratch filesystems for the particular user executing the command.
Note that limits are also imposed on the number of objects (files, directories, links, etc.) that the user is allowed to create.
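If you need a rough count of how many such entries a directory tree currently holds, standard tools are sufficient; the following is only an illustrative sketch:

```console
$ find . | wc -l    # counts every file, directory, and link below the current directory
```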
To better understand where exactly the space is used, use the following command:
@@ -122,7 +122,7 @@ $ du -hs * .[a-zA-z0-9]* | grep -E "[0-9]*G|[0-9]*M" | sort -hr
2,7M .idb_13.0_linux_intel64_app
```
This lists all directories consuming megabytes or gigabytes of space in your current directory (in this example, HOME). The list is sorted in descending order from largest to smallest files/directories.
For a better understanding of the commands above, read their man pages:
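For example, to consult the manual pages of the standard tools used above:

```console
$ man du
$ man sort
$ man grep
```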
@@ -187,17 +187,17 @@ The workspace is backed up, such that it can be restored in case of catastrophic
| HOME workspace    |                |
| ----------------- | -------------- |
| Accesspoint       | /home/username |
| Capacity          | 500TB          |
| Throughput        | 6GB/s          |
| User space quota  | 250GB          |
| User inodes quota | 500K           |
| Protocol          | NFS, 2-Tier    |
### Scratch
The SCRATCH is realized as a Lustre parallel file system and is available from all login and computational nodes. There are 54 OSTs dedicated to the SCRATCH file system.
Accessible capacity is 1.6PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 10M inodes and 100TB per user. The purpose of this quota is to prevent runaway programs from filling the entire file system and denying service to other users. Should 100TB of space or 10M inodes prove insufficient, contact [support][d]; the quota may be lifted upon request.
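Current usage against the Lustre quotas can also be inspected directly with the lfs client (a minimal sketch; the exact output format depends on the Lustre version):

```console
$ lfs quota -u $USER /scratch
```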
#### Work
@@ -233,19 +233,19 @@ The TEMP workspace resides on SCRATCH file system. The TEMP workspace accesspoint
</tr>
<tr>
<td>Capacity</td>
<td colspan="2" style="vertical-align : middle;text-align:center;">1.6PB</td>
</tr>
<tr>
<td>Throughput</td>
<td colspan="2" style="vertical-align : middle;text-align:center;">30GB/s</td>
</tr>
<tr>
<td>User space quota</td>
<td colspan="2" style="vertical-align : middle;text-align:center;">100TB</td>
</tr>
<tr>
<td>User inodes quota</td>
<td colspan="2" style="vertical-align : middle;text-align:center;">10M</td>
</tr>
<tr>
<td>Number of OSTs</td>
@@ -281,8 +281,8 @@ It is not recommended to allocate large amount of memory and use large amount of
| ----------- | ---------------------------------------------------------------------------------------------------- |
| Mountpoint  | /ramdisk                                                                                               |
| Accesspoint | /ramdisk/$PBS_JOBID                                                                                    |
| Capacity    | 110GB                                                                                                  |
| Throughput  | over 1.5GB/s write, over 5GB/s read, single thread, over 10GB/s write, over 50GB/s read, 16 threads   |
| User quota  | none                                                                                                   |
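A typical pattern in a job script is to stage input data into the local RAM disk, run the computation against the node-local copy, and copy the results back before the job ends. The sketch below assumes hypothetical file names (input.dat, results.dat) and a hypothetical application (my_app):

```console
$ cp $PBS_O_WORKDIR/input.dat /ramdisk/$PBS_JOBID/
$ cd /ramdisk/$PBS_JOBID
$ $PBS_O_WORKDIR/my_app input.dat   # hypothetical application working on the local copy
$ cp results.dat $PBS_O_WORKDIR/
```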
### Global RAM Disk
@@ -329,8 +329,8 @@ within a job.
| ------------------ | --------------------------------------------------------------------------|
| Mountpoint         | /mnt/global_ramdisk                                                         |
| Accesspoint        | /mnt/global_ramdisk                                                         |
| Capacity           | (N*110)GB                                                                   |
| Throughput         | 3*(N+1)GB/s, 2GB/s single POSIX thread                                      |
| User quota         | none                                                                        |
N = number of compute nodes in the job.
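For illustration, a job spanning N = 4 compute nodes would see approximately:

```console
# Capacity:   4 * 110 GB       = 440 GB
# Throughput: 3 * (4 + 1) GB/s = 15 GB/s aggregate, 2 GB/s for a single POSIX thread
```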
@@ -352,9 +352,9 @@ N = number of compute nodes in the job.
<td>/home</td>
<td>home directory</td>
<td>NFS, 2-Tier</td>
<td>500TB</td>
<td>6GB/s</td>
<td>250GB / 500K</td>
<td>Compute and login nodes</td>
<td>backed up</td>
</tr>
@@ -362,9 +362,9 @@ N = number of compute nodes in the job.
<td style="background-color: #D3D3D3;">/scratch/work</td>
<td style="background-color: #D3D3D3;">large project files</td>
<td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">Lustre</td>
<td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">1.69PB</td>
<td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">30GB/s</td>
<td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">100TB / 10M</td>
<td style="background-color: #D3D3D3;">Compute and login nodes</td>
<td style="background-color: #D3D3D3;">none</td>
</tr>
@@ -378,8 +378,8 @@ N = number of compute nodes in the job.
<td>/ramdisk</td>
<td>job temporary data, node local</td>
<td>tmpfs</td>
<td>110GB</td>
<td>90GB/s</td>
<td>none / none</td>
<td>Compute nodes, node local</td>
<td>purged after job ends</td>
@@ -388,8 +388,8 @@ N = number of compute nodes in the job.
<td style="background-color: #D3D3D3;">/mnt/global_ramdisk</td>
<td style="background-color: #D3D3D3;">job temporary data</td>
<td style="background-color: #D3D3D3;">BeeGFS</td>
<td style="background-color: #D3D3D3;">(N*110)GB</td>
<td style="background-color: #D3D3D3;">3*(N+1)GB/s</td>
<td style="background-color: #D3D3D3;">none / none</td>
<td style="background-color: #D3D3D3;">Compute nodes, job shared</td>
<td style="background-color: #D3D3D3;">purged after job ends</td>
@@ -437,10 +437,11 @@ First, create the mount point:
$ mkdir cesnet
```
Mount the storage. Note that you can choose among ssh.du4.cesnet.cz (Ostrava) and ssh.du5.cesnet.cz (Jihlava).

Mount tier1_home **(only 5120M !)**:
```console
$ sshfs username@ssh.du4.cesnet.cz:. cesnet/
```
For easy future access, install your public key:
@@ -452,7 +453,7 @@ $ cp .ssh/id_rsa.pub cesnet/.ssh/authorized_keys
Mount tier1_cache_tape for the Storage VO:
```console
$ sshfs username@ssh.du4.cesnet.cz:/cache_tape/VO_storage/home/username cesnet/
```
View the archive, copy the files and directories in and out:
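For example (illustrative file names only):

```console
$ ls cesnet/
$ cp datafile cesnet/datafile
$ cp cesnet/datafile .
```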
@@ -483,18 +484,18 @@ More about Rsync [here][j].
Transfer large files to/from CESNET storage, assuming membership in the Storage VO:
```console
$ rsync --progress datafile username@ssh.du4.cesnet.cz:VO_storage-cache_tape/.
$ rsync --progress username@ssh.du4.cesnet.cz:VO_storage-cache_tape/datafile .
```
Transfer large directories to/from CESNET storage, assuming membership in the Storage VO:
```console
$ rsync --progress -av datafolder username@ssh.du4.cesnet.cz:VO_storage-cache_tape/.
$ rsync --progress -av username@ssh.du4.cesnet.cz:VO_storage-cache_tape/datafolder .
```
Transfer rates of about 28MB/s can be expected.
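Once the remote storage is no longer needed, the sshfs mount can be released with the standard FUSE unmount helper:

```console
$ fusermount -u cesnet
```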
[1]: #home
[2]: #shared-filesystems