# Storage

There are two main shared file systems on the Anselm cluster, the [HOME](#home) and [SCRATCH](#scratch). All login and compute nodes may access the same data on the shared file systems. Compute nodes are also equipped with local (non-shared) scratch, RAM disk, and tmp file systems.

## Archiving

Please do not use the shared filesystems as a backup for large amounts of data or as a long-term archiving means. The academic staff and students of research institutions in the Czech Republic can use the [CESNET storage service](#cesnet-data-storage), which is available via SSHFS.

## Shared Filesystems

The Anselm cluster provides two main shared filesystems, the [HOME filesystem](#home) and the [SCRATCH filesystem](#scratch). Both HOME and SCRATCH are realized as parallel Lustre filesystems, and both are accessible via the InfiniBand network. Extended ACLs are provided on both Lustre filesystems for the purpose of sharing data with other users using fine-grained control.

### [Understanding the Lustre Filesystems](http://www.nas.nasa.gov)

A user file on the Lustre filesystem can be divided into multiple chunks (stripes) and stored across a subset of the object storage targets (OSTs) (disks). The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing.

When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server (MDS) and the metadata target (MDT) for the layout and location of the [file's stripes](http://www.nas.nasa.gov/hecc/support/kb/Lustre_Basics_224.html#striping). Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval.

If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency so that all clients see consistent results.

There is a default stripe configuration for the Anselm Lustre filesystems. However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance:

1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; the default is 1MB for all Anselm Lustre filesystems.
1. stripe_count: the number of OSTs to stripe across; the default is 1 for the Anselm Lustre filesystems; one can specify -1 to use all OSTs in the filesystem.
1. stripe_offset: the index of the OST where the first stripe is to be placed; the default is -1, which results in random selection; using a non-default value is NOT recommended.

!!! note
    Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.

Use the lfs getstripe command to read the stripe parameters and the lfs setstripe command to set them for optimal I/O performance. The correct stripe setting depends on your needs and file access patterns.

```console
$ lfs getstripe dir|filename
$ lfs setstripe -s stripe_size -c stripe_count -o stripe_offset dir|filename
```

Example:

```console
$ lfs getstripe /scratch/username/
/scratch/username/
stripe_count:   1 stripe_size:    1048576 stripe_offset:  -1

$ lfs setstripe -c -1 /scratch/username/
$ lfs getstripe /scratch/username/
/scratch/username/
stripe_count:  10 stripe_size:    1048576 stripe_offset:  -1
```

In this example, we view the current stripe setting of the /scratch/username/ directory. The stripe count is changed to all OSTs and verified. All files written to this directory will be striped over all 10 OSTs.

Use lfs check osts to see the number and status of active OSTs for each filesystem on Anselm. Learn more by reading the man page:

```console
$ lfs check osts
$ man lfs
```

### Hints on Lustre Striping

!!! note
    Increase the stripe_count for parallel I/O to the same file.

When multiple processes are writing blocks of data to the same file in parallel, the I/O performance for large files will improve when the stripe_count is set to a larger value. The stripe count sets the number of OSTs the file will be written to. By default, the stripe count is set to 1. While this default setting provides for efficient access of metadata (for example to support the ls -l command), large files should use stripe counts of greater than 1. This will increase the aggregate I/O bandwidth by using multiple OSTs in parallel instead of just one. A rule of thumb is to use a stripe count approximately equal to the number of gigabytes in the file.

Another good practice is to make the stripe count an integral factor of the number of processes performing the write in parallel, so that you achieve load balance among the OSTs. For example, set the stripe count to 16 instead of 15 when you have 64 processes performing the writes, as in the sketch below.
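
A minimal sketch of such a configuration, assuming a directory written to by 64 parallel processes (the directory path is illustrative):

```console
$ lfs setstripe -c 16 /scratch/username/parallel_output
$ lfs getstripe /scratch/username/parallel_output
```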

!!! note
    Using a large stripe size can improve performance when accessing very large files.

Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file.
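
For example, a directory holding very large files that are read or written in large contiguous chunks could be given a larger stripe size (a sketch using the -s/-c options shown above; the 4m value and the path are illustrative):

```console
$ lfs setstripe -s 4m -c 8 /scratch/username/large_files
```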

Read more [here](http://doc.lustre.org/lustre_manual.xhtml#managingstripingfreespace).

### Lustre on Anselm

The architecture of Lustre on Anselm is composed of two metadata servers (MDS) and four data/object storage servers (OSS). Two object storage servers are used for file system HOME and another two object storage servers are used for file system SCRATCH.

Configuration of the storages:

* HOME Lustre object storage
  * One disk array NetApp E5400
  * 22 OSTs
  * 227 2TB NL-SAS 7.2krpm disks
  * 22 groups of 10 disks in RAID6 (8+2)
  * 7 hot-spare disks
* SCRATCH Lustre object storage
  * Two disk arrays NetApp E5400
  * 10 OSTs
  * 106 2TB NL-SAS 7.2krpm disks
  * 10 groups of 10 disks in RAID6 (8+2)
  * 6 hot-spare disks
* Lustre metadata storage
  * One disk array NetApp E2600
  * 12 300GB SAS 15krpm disks
  * 2 groups of 5 disks in RAID5
  * 2 hot-spare disks

### HOME File System

The HOME filesystem is mounted in the /home directory. Users' home directories /home/username reside on this filesystem. Accessible capacity is 320TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250GB per user. If 250GB proves insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.

!!! note
    The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active Projects.

The HOME filesystem should not be used to archive data of past Projects or other unrelated data.

The files on the HOME filesystem will not be deleted until the end of the [user's lifecycle](../general/obtaining-login-credentials/obtaining-login-credentials/).

The filesystem is backed up, such that it can be restored in case of a catastrophic failure resulting in significant data loss. This backup, however, is not intended to restore old versions of user data or to restore (accidentally) deleted files.

The HOME filesystem is realized as a Lustre parallel filesystem and is available on all login and computational nodes.
The default stripe size is 1MB and the stripe count is 1. There are 22 OSTs dedicated to the HOME filesystem.

!!! note
    Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.

| HOME filesystem      |        |
| -------------------- | ------ |
| Mountpoint           | /home  |
| Capacity             | 320 TB |
| Throughput           | 2 GB/s |
| User quota           | 250 GB |
| Default stripe size  | 1 MB   |
| Default stripe count | 1      |
| Number of OSTs       | 22     |

### SCRATCH File System

The SCRATCH filesystem is mounted in the /scratch directory. Users may freely create subdirectories and files on the filesystem. Accessible capacity is 146TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 100TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and denying service to other users. If 100TB proves insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.

!!! note
    The Scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory.

    Users are advised to save the necessary data from the SCRATCH filesystem to HOME filesystem after the calculations and clean up the scratch files.

!!! warning
    Files on the SCRATCH filesystem that are **not accessed for more than 90 days** will be automatically **deleted**.

The SCRATCH filesystem is realized as a Lustre parallel filesystem and is available from all login and computational nodes. The default stripe size is 1MB and the stripe count is 1. There are 10 OSTs dedicated to the SCRATCH filesystem.

!!! note
    Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.

| SCRATCH filesystem   |          |
| -------------------- | -------- |
| Mountpoint           | /scratch |
| Capacity             | 146 TB   |
| Throughput           | 6 GB/s   |
| User quota           | 100 TB   |
| Default stripe size  | 1 MB     |
| Default stripe count | 1        |
| Number of OSTs       | 10       |

### Disk Usage and Quota Commands

Disk usage and user quotas can be checked and reviewed using the following command:

```console
$ it4i-disk-usage
```

Example:

```console
$ it4i-disk-usage -h
# Using human-readable format
# Using power of 1024 for space
# Using power of 1000 for entries

Filesystem:    /home
Space used:    112G
Space limit:   238G
Entries:       15k
Entries limit: 500k

Filesystem:    /scratch
Space used:    0
Space limit:   93T
Entries:       0
Entries limit: 0
```

In this example, we view the current size limits and the space occupied on the /home and /scratch filesystems, for the user executing the command.
Note that limits are also imposed on the number of objects (files, directories, links, etc.) that users are allowed to create.
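
The quotas can also be queried directly with the lfs command (a sketch, assuming the standard Lustre lfs quota subcommand described in man lfs):

```console
$ lfs quota -u $USER /home
$ lfs quota -u $USER /scratch
```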

To have a better understanding of where exactly the space is used, you can use the following command:

```console
$ du -hs dir
```

Example for your HOME directory:

```console
$ cd /home
$ du -hs * .[a-zA-Z0-9]* | grep -E "[0-9]*G|[0-9]*M" | sort -hr
258M     cuda-samples
15M      .cache
13M      .mozilla
5,5M     .eclipse
2,7M     .idb_13.0_linux_intel64_app
```

This will list all directories that consume megabytes or gigabytes of space in your current (in this example HOME) directory. The list is sorted in descending order from largest to smallest files/directories.

To have a better understanding of the previous commands, you can read the man pages:

```console
$ man lfs
```

```console
$ man du
```

### Extended ACLs

Extended ACLs provide another security mechanism besides the standard POSIX ACLs, which are defined by three entries (for owner/group/others). Extended ACLs have more than the three basic entries. In addition, they also contain a mask entry and may contain any number of named user and named group entries.

ACLs on a Lustre file system work exactly like ACLs on any Linux file system. They are manipulated with the standard tools in the standard manner. Below, we create a directory and allow a specific user access.

```console
[vop999@login1.anselm ~]$ umask 027
[vop999@login1.anselm ~]$ mkdir test
[vop999@login1.anselm ~]$ ls -ld test
drwxr-x--- 2 vop999 vop999 4096 Nov 5 14:17 test
[vop999@login1.anselm ~]$ getfacl test
# file: test
# owner: vop999
# group: vop999
user::rwx
group::r-x
other::---

[vop999@login1.anselm ~]$ setfacl -m user:johnsm:rwx test
[vop999@login1.anselm ~]$ ls -ld test
drwxrwx---+ 2 vop999 vop999 4096 Nov 5 14:17 test
[vop999@login1.anselm ~]$ getfacl test
# file: test
# owner: vop999
# group: vop999
user::rwx
user:johnsm:rwx
group::r-x
mask::rwx
other::---
```

The default ACL mechanism can be used to replace setuid/setgid permissions on directories. Setting a default ACL on a directory (the -d flag to setfacl) will cause the ACL permissions to be inherited by any newly created file or subdirectory within the directory.
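
A minimal sketch continuing the example above, adding a default ACL so that files created inside test inherit the access rights of the hypothetical user johnsm:

```console
[vop999@login1.anselm ~]$ setfacl -d -m user:johnsm:rwx test
[vop999@login1.anselm ~]$ getfacl test
# file: test
# owner: vop999
# group: vop999
user::rwx
user:johnsm:rwx
group::r-x
mask::rwx
other::---
default:user::rwx
default:user:johnsm:rwx
default:group::r-x
default:mask::rwx
default:other::---
```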

Refer to the [redhat guide](https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/ch09s05.html) for more information on Linux ACLs.

## Local Filesystems

### Local Scratch

!!! note
    Every computational node is equipped with a 330GB local scratch disk.

Use the local scratch in case you need to access a large number of small files during your calculation.

The local scratch disk is mounted as /lscratch and is accessible to the user at the /lscratch/$PBS_JOBID directory.

The local scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs that access a large number of small files within the calculation must use the local scratch filesystem as their working directory. This is required for performance reasons, as frequent access to a large number of small files may overload the metadata servers (MDS) of the Lustre filesystem.
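
A minimal jobscript sketch of this workflow (the input/output file names and the mywork executable are illustrative):

```bash
#!/bin/bash
# stage the input data onto the fast node-local scratch
cp $PBS_O_WORKDIR/input.dat /lscratch/$PBS_JOBID/
cd /lscratch/$PBS_JOBID

# run the calculation against the local copy
$PBS_O_WORKDIR/mywork input.dat > output.dat

# save the results back to the shared filesystem before the job ends
cp output.dat $PBS_O_WORKDIR/
```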

!!! note
    The local scratch directory /lscratch/$PBS_JOBID will be deleted immediately after the calculation ends. Users should take care to save the output data from within the jobscript.

| local SCRATCH filesystem |                      |
| ------------------------ | -------------------- |
| Mountpoint               | /lscratch            |
| Accesspoint              | /lscratch/$PBS_JOBID |
| Capacity                 | 330GB                |
| Throughput               | 100MB/s              |
| User quota               | none                 |

### RAM Disk

Every computational node is equipped with a filesystem realized in memory, the so-called RAM disk.

!!! note
    Use the RAM disk in case you need really fast access to your data of limited size during your calculation. Be very careful, use of the RAM disk filesystem is at the expense of operational memory.

The local RAM disk is mounted as /ramdisk and is accessible to the user at the /ramdisk/$PBS_JOBID directory.

The local RAM disk filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. The size of the RAM disk filesystem is limited. Be very careful, use of the RAM disk filesystem is at the expense of operational memory. It is not recommended to allocate a large amount of memory and use a large amount of data in the RAM disk filesystem at the same time.
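
A sketch of the same staging pattern using the RAM disk (illustrative file names; mind that data placed here consumes the node's memory):

```bash
#!/bin/bash
# stage data into the RAM disk, compute, and save results before the job ends
cp $PBS_O_WORKDIR/input.dat /ramdisk/$PBS_JOBID/
cd /ramdisk/$PBS_JOBID
$PBS_O_WORKDIR/mywork input.dat > output.dat
cp output.dat $PBS_O_WORKDIR/
```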

!!! note
    The local RAM disk directory /ramdisk/$PBS_JOBID will be deleted immediately after the calculation ends. Users should take care to save the output data from within the jobscript.

| RAM disk    |                                                                                                         |
| ----------- | ------------------------------------------------------------------------------------------------------- |
| Mountpoint  | /ramdisk                                                                                                |
| Accesspoint | /ramdisk/$PBS_JOBID                                                                                     |
| Capacity    | 60GB at compute nodes without accelerator, 90GB at compute nodes with accelerator, 500GB at fat nodes   |
| Throughput  | over 1.5 GB/s write, over 5 GB/s read, single thread, over 10 GB/s write, over 50 GB/s read, 16 threads |
| User quota  | none                                                                                                    |

### Tmp

Each node is equipped with a local /tmp directory of a few GB capacity. The /tmp directory should be used to work with small temporary files. Old files in the /tmp directory are automatically purged.

## Summary

| Mountpoint | Usage                     | Protocol | Net Capacity   | Throughput | Limitations | Access                  | Services                         |
| ---------- | ------------------------- | -------- | -------------- | ---------- | ----------- | ----------------------- | -------------------------------- |
| /home      | home directory            | Lustre   | 320 TiB        | 2 GB/s     | Quota 250GB | Compute and login nodes | backed up                        |
| /scratch   | cluster shared jobs' data | Lustre   | 146 TiB        | 6 GB/s     | Quota 100TB | Compute and login nodes | files older than 90 days removed |
| /lscratch  | node local jobs' data     | local    | 330 GB         | 100 MB/s   | none        | Compute nodes           | purged after job ends            |
| /ramdisk   | node local jobs' data     | local    | 60, 90, 500 GB | 5-50 GB/s  | none        | Compute nodes           | purged after job ends            |
| /tmp       | local temporary files     | local    | 9.5 GB         | 100 MB/s   | none        | Compute and login nodes | auto purged                      |

## CESNET Data Storage

Do not use the shared filesystems at IT4Innovations as a backup for large amounts of data or for long-term archiving purposes.

!!! note
    IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use the [CESNET Storage service](https://du.cesnet.cz/).

The CESNET Storage service can be used for research purposes, mainly by academic staff and students of research institutions in the Czech Republic.

Users of the CESNET data storage (DU) can be organizations or individuals who are in a current employment relationship (employees) or a current study relationship (students) with a legal entity (organization) that meets the “Principles for access to CESNET Large infrastructure (Access Policy)”.

Users may only use the CESNET data storage for data transfer and storage associated with activities in science, research, development, the spread of education, culture and prosperity. For details, see the “Acceptable Use Policy CESNET Large Infrastructure (Acceptable Use Policy, AUP)”.

The service is documented [here](https://du.cesnet.cz/en/start). For special requirements, please contact the CESNET Storage Department directly via e-mail at [du-support(at)cesnet.cz](mailto:du-support@cesnet.cz).

The procedure to obtain the CESNET access is quick and trouble-free.

(source [https://du.cesnet.cz/](https://du.cesnet.cz/wiki/doku.php/en/start "CESNET Data Storage"))

## CESNET Storage Access

### Understanding CESNET Storage

!!! note
    It is very important to understand the CESNET storage before uploading data. [Please read](https://du.cesnet.cz/en/navody/home-migrace-plzen/start) first.

Once registered for CESNET Storage, you may [access the storage](https://du.cesnet.cz/en/navody/faq/start) in a number of ways. We recommend the SSHFS and RSYNC methods.

### SSHFS Access

!!! note
    SSHFS: The storage will be mounted like a local hard drive.

SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it were a local removable hard drive. Files can then be copied in and out in the usual fashion.

First, create the mount point:

```console
$ mkdir cesnet
```

Mount the storage. Note that you can choose among ssh.du1.cesnet.cz (Plzen), ssh.du2.cesnet.cz (Jihlava), and ssh.du3.cesnet.cz (Brno). Mount tier1_home **(only 5120 MB!)**:

```console
$ sshfs username@ssh.du1.cesnet.cz:. cesnet/
```

For easy future access from Anselm, install your public key:

```console
$ cp .ssh/id_rsa.pub cesnet/.ssh/authorized_keys
```

Mount tier1_cache_tape for the Storage VO:

```console
$ sshfs username@ssh.du1.cesnet.cz:/cache_tape/VO_storage/home/username cesnet/
```

View the archive, and copy files and directories in and out:

```console
$ ls cesnet/
$ cp -a mydir cesnet/.
$ cp cesnet/myfile .
```

Once done, please remember to unmount the storage:

```console
$ fusermount -u cesnet
```

### RSYNC Access

!!! Note "Note"
	RSYNC provides delta transfer for best performance, can resume interrupted transfers

RSYNC is a fast and extraordinarily versatile file copying tool. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination.  RSYNC is widely used for backups and mirroring and as an improved copy command for everyday use.

RSYNC finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time.  Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated.

[More about RSYNC](https://du.cesnet.cz/en/navody/rsync/start#pro_bezne_uzivatele)

Transfer large files to/from CESNET storage, assuming membership in the Storage VO:

```console
$ rsync --progress datafile username@ssh.du1.cesnet.cz:VO_storage-cache_tape/.
$ rsync --progress username@ssh.du1.cesnet.cz:VO_storage-cache_tape/datafile .
```

Transfer large directories to/from CESNET storage, assuming membership in the Storage VO:

```console
$ rsync --progress -av datafolder username@ssh.du1.cesnet.cz:VO_storage-cache_tape/.
$ rsync --progress -av username@ssh.du1.cesnet.cz:VO_storage-cache_tape/datafolder .
```

Transfer rates of about 28 MB/s can be expected.