# Storage - WORK IN PROGRESS
    
    
There are three main shared file systems on the Barbora cluster: [HOME][1], [SCRATCH][2], and [PROJECT][5]. All login and compute nodes may access the same data on the shared file systems. Compute nodes are also equipped with local (non-shared) scratch, RAM disk, and tmp file systems.
    
    
    ## Archiving
    
    
Do not use shared filesystems as a backup for large amounts of data or as a long-term archiving solution. The academic staff and students of research institutions in the Czech Republic can use the [CESNET storage service][3], which is available via SSHFS.
    
    
    ## Shared Filesystems
    
    
The Barbora cluster provides three main shared filesystems: the [HOME filesystem][1], the [SCRATCH filesystem][2], and the [PROJECT filesystem][5].
    
    
    
All filesystems are accessible via the InfiniBand network.

The HOME and PROJECT filesystems are realized as NFS filesystems.

The SCRATCH filesystem is realized as a parallel Lustre filesystem.

Extended ACLs are provided on the shared filesystems, allowing fine-grained control when sharing data with other users.
    
    
    ### Understanding the Lustre Filesystems
    
    A user file on the [Lustre filesystem][a] can be divided into multiple chunks (stripes) and stored across a subset of the object storage targets (OSTs) (disks). The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing.
    
    When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server (MDS) and the metadata target (MDT) for the layout and location of the [file's stripes][b]. Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval.
    
    
    If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency, so that all clients see consistent results.
    
    
    
There is a default stripe configuration for the Barbora Lustre filesystems. However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance:
    
    
    
1. `stripe_size` the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; default is 1 MB for all Barbora Lustre filesystems.
1. `stripe_count` the number of OSTs to stripe across; default is 1 for Barbora Lustre filesystems; one can specify -1 to use all OSTs in the filesystem.
1. `stripe_offset` the index of the OST where the first stripe is to be placed; default is -1, which results in random selection; using a non-default value is NOT recommended.
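
As an illustration of these parameters (a sketch only; the directory name is hypothetical and the option letters mirror the `lfs setstripe` invocation shown further below), a directory could be configured with a 4 MB stripe size over 4 OSTs as follows:

```console
$ lfs setstripe -s 4m -c 4 /scratch/projname/mydir
$ lfs getstripe /scratch/projname/mydir
```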
    
    
    !!! note
    
        Setting stripe size and stripe count correctly for your needs may significantly affect the I/O performance.
    
    
    
    Use the `lfs getstripe` command for getting the stripe parameters. Use `lfs setstripe` for setting the stripe parameters to get optimal I/O performance. The correct stripe setting depends on your needs and file access patterns.
    
    
    ```console
    $ lfs getstripe dir|filename
    $ lfs setstripe -s stripe_size -c stripe_count -o stripe_offset dir|filename
    ```
    
    Example:
    
```console
$ lfs getstripe /scratch/projname
$ lfs setstripe -c -1 /scratch/projname
$ lfs getstripe /scratch/projname
```
    
    
In this example, we view the current stripe setting of the /scratch/projname/ directory. The stripe count is changed to use all OSTs and verified. All files written to this directory will be striped over all 5 OSTs.
    
    Use `lfs check osts` to see the number and status of active OSTs for each filesystem on Barbora. Learn more by reading the man page:
    
    
    ```console
    $ lfs check osts
    $ man lfs
    ```
    
### Hints on Lustre Striping
    
    !!! note
    
        Increase the `stripe_count` for parallel I/O to the same file.
    
    When multiple processes are writing blocks of data to the same file in parallel, the I/O performance for large files will improve when the `stripe_count` is set to a larger value. The stripe count sets the number of OSTs to which the file will be written. By default, the stripe count is set to 1. While this default setting provides for efficient access of metadata (for example to support the `ls -l` command), large files should use stripe counts of greater than 1. This will increase the aggregate I/O bandwidth by using multiple OSTs in parallel instead of just one. A rule of thumb is to use a stripe count approximately equal to the number of gigabytes in the file.
    
    
    Another good practice is to make the stripe count be an integral factor of the number of processes performing the write in parallel, so that you achieve load balance among the OSTs. For example, set the stripe count to 16 instead of 15 when you have 64 processes performing the writes.
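
For example (a hedged sketch; the directory name is hypothetical, and Barbora's SCRATCH provides only 5 OSTs, so the stripe count cannot exceed 5), a directory that will hold large files written in parallel can be striped over all OSTs before the job starts:

```console
$ lfs setstripe -c 5 /scratch/projname/parallel_output
$ lfs getstripe /scratch/projname/parallel_output
```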
    
    !!! note
    Using a large stripe size can improve performance when accessing very large files.
    
    Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file.
    
    Read more [here][c].
    
    ### Lustre on Barbora
    
    
    The architecture of Lustre on Barbora is composed of two metadata servers (MDS) and two data/object storage servers (OSS). 
    
Configuration of the SCRATCH storage:

* 2x Metadata server
* 2x Object storage server
* Lustre object storage
  * One disk array NetApp E2800
  * 54x 8TB 10kRPM 2.5” SAS HDD
  * 5x RAID6(8+2) OST Object storage target
  * 4 hot-spare disks
* Lustre metadata storage
  * One disk array NetApp E2600
  * 12x 300GB 15kRPM SAS disks
  * 2 groups of 5 disks in RAID5 Metadata target
  * 2 hot-spare disks
    
    ### HOME File System
    
    
The HOME filesystem is mounted in the /home directory. Users' home directories /home/username reside on this filesystem. Accessible capacity is 26 TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 24 GB per user. Should 24 GB prove insufficient, contact [support][d]; the quota may be lifted upon request.
    
    
    !!! note
        The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active Projects.
    
    The HOME filesystem should not be used to archive data of past Projects or other unrelated data.
    
    
    The files on HOME filesystem will not be deleted until the end of the [user's lifecycle][4].
    
    
    
    The filesystem is backed up, so that it can be restored in case of a catastrophic failure resulting in significant data loss. However, this backup is not intended to restore old versions of user data or to restore (accidentally) deleted files.
    
    
    
    | HOME filesystem      |                 |
    | -------------------- | --------------- |
| Mountpoint           | /home/username  |
    | Capacity             | 26 TB           |
    | Throughput           | 1 GB/s          |
    | User space quota     | 24 GB           |
    | User inodes quota    | 500 k           |
    | Protocol             | NFS             |
    
    
    ### SCRATCH File System
    
    
The SCRATCH filesystem is realized as a Lustre parallel filesystem and is available from all login and compute nodes. There are 5 OSTs dedicated to the SCRATCH filesystem.
    
    
The SCRATCH filesystem is mounted in the /scratch directory. Users may freely create subdirectories and files on the filesystem. Accessible capacity is 282 TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 9.3 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and denying service to other users. Should 9.3 TB prove insufficient, contact [support][d]; the quota may be lifted upon request.
    
    
!!! note

    The SCRATCH filesystem is intended for temporary scratch data generated during calculations, as well as for high-performance access to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory.

    Users are advised to save the necessary data from the SCRATCH filesystem to the HOME filesystem after the calculation and to clean up the scratch files.
    
    !!! warning
        Files on the SCRATCH filesystem that are **not accessed for more than 90 days** will be automatically **deleted**.
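
A typical workflow (a sketch only; directory, file, and program names are illustrative) creates a job-specific working directory on the SCRATCH filesystem, runs the I/O intensive calculation there, copies the results back to the HOME filesystem, and cleans up:

```console
$ mkdir -p /scratch/projname/$USER/job_001
$ cd /scratch/projname/$USER/job_001
$ cp /home/$USER/inputs/input.dat .              # stage input data onto SCRATCH
$ ./my_simulation input.dat > output.log         # run the I/O intensive calculation
$ cp output.log /home/$USER/results/             # save the necessary results to HOME
$ cd && rm -rf /scratch/projname/$USER/job_001   # clean up the scratch files
```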
    
    
The default stripe size is 1 MB and the default stripe count is 1.
    
    
    !!! note
    
        Setting stripe size and stripe count correctly for your needs may significantly affect the I/O performance.
    
    
    
    | SCRATCH filesystem   |           |
    | -------------------- | --------- |
    | Mountpoint           | /scratch  |
    | Capacity             | 282 TB    |
    | Throughput           | 5 GB/s    |
    | Throughput [Burst]   | 38 GB/s   |
| User space quota     | 9.3 TB    |
    | User inodes quota    | 10 M      |
    | Default stripe size  | 1 MB      |
    | Default stripe count | 1         |
    | Number of OSTs       | 5         |
    
    
    ### PROJECT File System
    
    
    TBD
    
    
    ### Disk Usage and Quota Commands
    
    
    Disk usage and user quotas can be checked and reviewed using the following command:
    
    
    ```console
    $ it4i-disk-usage
    ```
    
    Example:
    
    ```console
    $ it4i-disk-usage -h
    # Using human-readable format
    # Using power of 1024 for space
    # Using power of 1000 for entries
    
    Filesystem:    /home
    Space used:    112G
    Space limit:   238G
    Entries:       15k
    Entries limit: 500k
    
    Filesystem:    /scratch
    Space used:    0
    Space limit:   93T
    Entries:       0
    Entries limit: 0
    ```
    
In this example, we view the current size limits and the space occupied on the /home and /scratch filesystems for the user executing the command.
    
Note that limits are also imposed on the number of objects (files, directories, links, etc.) that a user is allowed to create.
    
    
To have a better understanding of where exactly the space is used, you can use the following command:
    
    ```console
    $ du -hs dir
    ```
    
    Example for your HOME directory:
    
    ```console
    $ cd /home
    $ du -hs * .[a-zA-z0-9]* | grep -E "[0-9]*G|[0-9]*M" | sort -hr
    258M     cuda-samples
    15M      .cache
    13M      .mozilla
    5,5M     .eclipse
    2,7M     .idb_13.0_linux_intel64_app
    ```
    
    
This will list all directories consuming megabytes or gigabytes of space in your current directory (in this example, the HOME directory). The list is sorted in descending order, from the largest to the smallest files/directories.
    
To have a better understanding of the previous commands, you can read their man pages.
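
For example, assuming the `lfs` and `du` utilities discussed above are available on the node:

```console
$ man lfs
$ man du
```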
    
    
    ### Extended ACLs
    
    
Extended ACLs provide another security mechanism besides the standard POSIX ACLs, which are defined by three entries (for owner/group/others). Extended ACLs have more than the three basic entries. In addition, they also contain a mask entry and may contain any number of named user and named group entries.
    
    
ACLs on a Lustre file system work exactly like ACLs on any Linux file system. They are manipulated with the standard tools in the standard manner. Below, we use the NFSv4 ACL tools to view the ACL of a file and grant the group write and execute access:
    
    
    * [nfs4_setfacl][e]
    * [nfs4_getfacl][l]
    
    
```console
vop999@login1:~$ nfs4_getfacl test
# file: test
A::OWNER@:rwaxtTcCy
A::GROUP@:rwatcy
A::EVERYONE@:rtcy
vop999@login1:~$ nfs4_setfacl -a A::GROUP@:RWX test
vop999@login1:~$ nfs4_getfacl test
# file: test
A::OWNER@:rwaxtTcCy
A::GROUP@:rwaxtcy
A::EVERYONE@:rtcy
```
    
    
The default ACL mechanism can be used to replace setuid/setgid permissions on directories. Setting a default ACL on a directory will cause the ACL permissions to be inherited by any newly created file or subdirectory within the directory.
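
A minimal sketch using the NFSv4 ACL tools (the directory name `projdir` is hypothetical): the `f` (file inherit) and `d` (directory inherit) ACE flags make the group entry propagate to newly created files and subdirectories:

```console
$ nfs4_setfacl -a A:fd:GROUP@:RWX projdir
$ nfs4_getfacl projdir
```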
    
    
    ## Local Filesystems
    
    
    ### TMP
    
    
Each node is equipped with a local /tmp directory of a few GB capacity. The /tmp directory should be used to work with small temporary files. Old files in the /tmp directory are automatically purged.
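
For instance (a sketch; the directory name template and file name are arbitrary), a job can create its own private subdirectory in /tmp for its small temporary files and remove it when done:

```console
$ TMPWORK=$(mktemp -d /tmp/myjob.XXXXXX)   # create a private temporary directory
$ cp small_input.tmp $TMPWORK/             # work with small temporary files there
$ rm -rf $TMPWORK                          # clean up when done
```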
    
    ## Summary
    
    
| Mountpoint | Usage                 | Protocol | Net Capacity | Throughput                   | Limitations  | Access                  | Services                         |
| ---------- | --------------------- | -------- | ------------ | ---------------------------- | ------------ | ----------------------- | -------------------------------- |
| /home      | home directory        | NFS      | 26 TiB       | 1 GB/s                       | Quota 24 GB  | Compute and login nodes | backed up                        |
| /scratch   | scratch temporary     | Lustre   | 282 TiB      | 5 GB/s, 38 GB/s burst buffer | Quota 9.3 TB | Compute and login nodes | files older than 90 days removed |
| /lscratch  | local scratch ramdisk | tmpfs    | 180 GB       | 130 GB/s                     | none         | Node local              | auto purged                      |
    
    
    ## CESNET Data Storage
    
Do not use shared filesystems at IT4Innovations as a backup for large amounts of data or for long-term archiving purposes.
    
    !!! note
    
        IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use [CESNET Storage service][f].
    
    
    The CESNET Storage service can be used for research purposes, mainly by academic staff and students of research institutions in the Czech Republic.
    
    
Users of the CESNET data storage (DU) can be organizations or individuals who are in a current employment relationship (employees) or a current study relationship (students) with a legal entity (organization) that meets the “Principles for access to CESNET Large infrastructure (Access Policy)”.
    
    
    
Users may only use the CESNET data storage for data transfer and storage associated with activities in science, research, development, the spread of education, culture, and prosperity. For details, see the “Acceptable Use Policy CESNET Large Infrastructure (Acceptable Use Policy, AUP)”.
    
    
The service is documented [here][g]. For special requirements, contact the CESNET Storage Department directly via e-mail at [du-support(at)cesnet.cz][h].
    
    
    The procedure to obtain the CESNET access is quick and simple.
    
    
    ## CESNET Storage Access
    
    ### Understanding CESNET Storage
    
    !!! note
        It is very important to understand the CESNET storage before uploading data. [Read][i] first.
    
Once registered for CESNET Storage, you may [access the storage][j] in a number of ways. We recommend the SSHFS and RSYNC methods.
    
    ### SSHFS Access
    
    !!! note
        SSHFS: The storage will be mounted like a local hard drive
    
    
SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it were a local removable hard drive. Files can then be copied in and out in the usual fashion.
    
    First, create the mount point:
    
    
    ```console
    $ mkdir cesnet
    ```
    
    
Mount the storage. Note that you can choose between `ssh.du4.cesnet.cz` (Ostrava) and `ssh.du5.cesnet.cz` (Jihlava). Mount tier1_home **(only 5120 MB!)**:
    
    
```console
$ sshfs username@ssh.du4.cesnet.cz:. cesnet/
```
    
    
    For convenient future access from Barbora, install your public key:
    
    
    ```console
    $ cp .ssh/id_rsa.pub cesnet/.ssh/authorized_keys
    ```
    
    Mount tier1_cache_tape for the Storage VO:
    
```console
$ sshfs username@ssh.du4.cesnet.cz:/cache_tape/VO_storage/home/username cesnet/
```
    
    
    View the archive, copy the files and directories in and out:
    
    
    ```console
    $ ls cesnet/
    $ cp -a mydir cesnet/.
    $ cp cesnet/myfile .
    ```
    
    
    Once done, remember to unmount the storage:
    
    
    ```console
    $ fusermount -u cesnet
    ```
    
    ### RSYNC Access
    
    !!! info
    
        RSYNC provides delta transfer for best performance and can resume interrupted transfers.
    
    
    
    RSYNC is a fast and extraordinarily versatile file copying tool. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. RSYNC is widely used for backups and mirroring and as an improved copy command for everyday use.
    
    
    RSYNC finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time.  Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated.
    
    [More about RSYNC][k].
    
    
    Transfer large files to/from CESNET storage, assuming membership in the Storage VO:
    
    
    ```console
    $ rsync --progress datafile username@ssh.du1.cesnet.cz:VO_storage-cache_tape/.
    $ rsync --progress username@ssh.du1.cesnet.cz:VO_storage-cache_tape/datafile .
    ```
    
    
    Transfer large directories to/from CESNET storage, assuming membership in the Storage VO:
    
    
    ```console
    $ rsync --progress -av datafolder username@ssh.du1.cesnet.cz:VO_storage-cache_tape/.
    $ rsync --progress -av username@ssh.du1.cesnet.cz:VO_storage-cache_tape/datafolder .
    ```
    
    Transfer rates of about 28 MB/s can be expected.
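
Should a long transfer be interrupted, it can be resumed by rerunning the same command. The hedged sketch below additionally uses the standard `--partial` option, so that data already transferred for the interrupted file is kept and reused on the next run (the file name is illustrative):

```console
$ rsync --progress --partial datafile username@ssh.du1.cesnet.cz:VO_storage-cache_tape/.
```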
    
    
[1]: #home-file-system
[2]: #scratch-file-system
[3]: #cesnet-data-storage
[4]: ../general/obtaining-login-credentials/obtaining-login-credentials.md
[5]: #project-file-system

[a]: http://www.nas.nasa.gov
[b]: http://www.nas.nasa.gov/hecc/support/kb/Lustre_Basics_224.html#striping
[c]: http://doc.lustre.org/lustre_manual.xhtml#managingstripingfreespace
[d]: https://support.it4i.cz/rt
[e]: http://man7.org/linux/man-pages/man1/nfs4_setfacl.1.html
[f]: https://du.cesnet.cz/
[g]: https://du.cesnet.cz/en/start
[h]: mailto:du-support@cesnet.cz
[i]: https://du.cesnet.cz/en/navody/home-migrace-plzen/start
[j]: https://du.cesnet.cz/en/navody/faq/start
[k]: https://du.cesnet.cz/en/navody/rsync/start#pro_bezne_uzivatele
[l]: http://man7.org/linux/man-pages/man1/nfs4_getfacl.1.html