Cesnet storage section

Merged Jan Siwiec requested to merge cesnet-storage-section into master
@@ -251,107 +251,9 @@ Each node is equipped with RAMDISK storage accessible at /tmp, /lscratch and /ra
| /scratch | scratch temporary | Lustre | 310TB | 5GB/s, 30GB/s burst buffer | Quota 10TB | Compute and login nodes | files older than 90 days are automatically removed |
| /lscratch | local scratch ramdisk | tmpfs | 180GB | 130GB/s | none | Node local | auto purged after job end |
## CESNET Data Storage
Do not use the shared filesystems at IT4Innovations as a backup for large amounts of data or for long-term archiving purposes.
!!! note
IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use [CESNET Storage service][f].
The CESNET Storage service can be used for research purposes, mainly by academic staff and students of research institutions in the Czech Republic.
Users of the CESNET data storage (DU) can be organizations or individuals who are in a current employment relationship (employees) or a current study relationship (students) with a legal entity (organization) that meets the "Principles for Access to the CESNET Large Infrastructure (Access Policy)".
The CESNET data storage may be used only for data transfer and storage associated with activities in science, research, development, the spread of education, culture, and prosperity. For details, see the "Acceptable Use Policy of the CESNET Large Infrastructure (AUP)".
The service is documented [here][g]. For special requirements, contact the CESNET Storage Department directly via e-mail at [du-support(at)cesnet.cz][h].
The procedure to obtain the CESNET access is quick and simple.
## CESNET Storage Access
### Understanding CESNET Storage
!!! note
It is very important to understand the CESNET storage before uploading data. [Read this][i] first.
Once registered for CESNET Storage, you may [access the storage][j] in a number of ways. We recommend the SSHFS and RSYNC methods.
### SSHFS Access
!!! note
SSHFS: The storage will be mounted like a local hard drive
SSHFS provides a very convenient way to access the CESNET Storage. The storage is mounted onto a local directory, exposing the vast CESNET Storage as if it were a local removable hard drive. Files can then be copied in and out in the usual fashion.
First, create the mount point:
```console
$ mkdir cesnet
```
Mount the storage. Note that you can choose between `ssh.du4.cesnet.cz` (Ostrava) and `ssh.du5.cesnet.cz` (Jihlava). Mount tier1_home **(only 5120 MB!)**:
```console
$ sshfs username@ssh.du4.cesnet.cz:. cesnet/
```
For convenient future access from Barbora, install your public key:
```console
$ cp .ssh/id_rsa.pub cesnet/.ssh/authorized_keys
```
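Note that `cp` overwrites any keys already installed on the CESNET side. If an `authorized_keys` file may already exist there, a safer sketch (assuming the SSHFS mount from above is still active) is to append your key and tighten the file permissions, which `sshd` typically requires:

```console
$ cat .ssh/id_rsa.pub >> cesnet/.ssh/authorized_keys
$ chmod 600 cesnet/.ssh/authorized_keys
```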
Mount tier1_cache_tape for the Storage VO:
```console
$ sshfs username@ssh.du4.cesnet.cz:/cache_tape/VO_storage/home/username cesnet/
```
View the archive, copy the files and directories in and out:
```console
$ ls cesnet/
$ cp -a mydir cesnet/.
$ cp cesnet/myfile .
```
Once done, remember to unmount the storage:
```console
$ fusermount -u cesnet
```
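If you are unsure whether the directory is still mounted, a quick check before unmounting is possible with the `mountpoint` utility (a sketch, assuming `mountpoint` from util-linux is available on the node):

```console
$ mountpoint -q cesnet && fusermount -u cesnet
```

The `-q` flag suppresses output; the unmount runs only if `cesnet` is actually a mount point.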
### RSYNC Access
!!! info
RSYNC provides delta transfer for best performance and can resume interrupted transfers.
RSYNC is a fast and extraordinarily versatile file copying tool. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. RSYNC is widely used for backups and mirroring and as an improved copy command for everyday use.
RSYNC finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time. Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated.
[More about RSYNC][k].
Transfer large files to/from CESNET storage, assuming membership in the Storage VO:
```console
$ rsync --progress datafile username@ssh.du4.cesnet.cz:VO_storage-cache_tape/.
$ rsync --progress username@ssh.du4.cesnet.cz:VO_storage-cache_tape/datafile .
```
Transfer large directories to/from CESNET storage, assuming membership in the Storage VO:
```console
$ rsync --progress -av datafolder username@ssh.du4.cesnet.cz:VO_storage-cache_tape/.
$ rsync --progress -av username@ssh.du4.cesnet.cz:VO_storage-cache_tape/datafolder .
```
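The delta-transfer behavior can also be observed locally. This sketch uses temporary local directories (not the CESNET endpoint) with the `--itemize-changes` flag; on the second run, the unchanged file produces no itemized output because the quick check finds nothing to transfer:

```console
$ mkdir -p src dst
$ echo "data" > src/datafile
$ rsync -av src/ dst/
$ rsync -av --itemize-changes src/ dst/
```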
Transfer rates of about 28 MB/s can be expected.
[1]: #home-file-system
[2]: #scratch-file-system
[3]: #cesnet-data-storage
[4]: ../general/obtaining-login-credentials/obtaining-login-credentials.md
[5]: #project-file-system
[6]: ../storage/project-storage.md
@@ -361,10 +263,3 @@ Transfer rates of about 28 MB/s can be expected.
[c]: http://doc.lustre.org/lustre_manual.xhtml#managingstripingfreespace
[d]: https://support.it4i.cz/rt
[e]: http://man7.org/linux/man-pages/man1/nfs4_setfacl.1.html
[f]: https://du.cesnet.cz/
[g]: https://du.cesnet.cz/en/start
[h]: mailto:du-support@cesnet.cz
[i]: https://du.cesnet.cz/en/navody/home-migrace-plzen/start
[j]: https://du.cesnet.cz/en/navody/faq/start
[k]: https://du.cesnet.cz/en/navody/rsync/start#pro_bezne_uzivatele
[l]: http://man7.org/linux/man-pages/man1/nfs4_getfacl.1.html