This will list all directories, with megabytes or gigabytes of consumed space, in your current (in this example HOME) directory. The list is sorted in descending order, from largest to smallest files/directories.
To have a better understanding of the previous commands, read the man pages:
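As a minimal sketch of the listing described above — assuming the commands in question are `du` piped into `sort` (the exact flags may vary by platform):

```shell
# Report the size of every entry in the current directory in
# human-readable units (MB/GB), sorted from largest to smallest
du -hs ./* | sort -hr
```

The `-h` flags on both commands keep the human-readable units sortable; `sort -hr` orders them in descending, human-numeric order.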
...
| HOME workspace    |                |
| ----------------- | -------------- |
| Accesspoint       | /home/username |
| Capacity          | 500 TB         |
| Throughput        | 6 GB/s         |
| User space quota  | 250 GB         |
| User inodes quota | 500K           |
| Protocol          | NFS, 2-Tier    |
### Scratch
The SCRATCH is realized as a Lustre parallel file system and is available from all login and computational nodes. There are 54 OSTs dedicated to the SCRATCH file system.
Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 10M inodes and 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire file system and denying service to other users. Should 100 TB of space or 10M inodes prove insufficient, contact [support][d]; the quota may be lifted upon request.
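Current usage against these per-user quotas can be inspected with the standard Lustre client utility. A minimal sketch, assuming `lfs` is available on the node and that `/scratch` is the SCRATCH mount point (both are assumptions — verify the actual path on the cluster):

```shell
# Sketch: show your current block (space) and inode usage against
# the per-user quotas on the SCRATCH file system
# (the /scratch mount point is an assumption)
lfs quota -u $USER /scratch
```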
#### Work
...
<td style="background-color: #D3D3D3;">purged after job ends</td>
...

First, create the mount point:
```
$ mkdir cesnet
```
Mount the storage. Note that you can choose among ssh.du4.cesnet.cz (Ostrava) and ssh.du5.cesnet.cz (Jihlava).
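As a sketch, the mount itself might look as follows, assuming `sshfs` is installed and `username` stands in for your CESNET login (hostname taken from the list above; both names are placeholders):

```shell
# Mount the remote CESNET home directory onto the local ./cesnet
# mount point over SSHFS (username is a placeholder)
sshfs username@ssh.du4.cesnet.cz: cesnet
```

When finished, the storage can be unmounted with `fusermount -u cesnet`.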