If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency so that all clients see consistent results.
There is a default stripe configuration for Salomon Lustre file systems. However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance (a short example follows the list):
1. stripe_size: The size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; default is 1 MB for all Salomon Lustre file systems.
1. stripe_count: The number of OSTs to stripe across; default is -1 for Salomon Lustre file systems, which means files are striped across all OSTs in the file system.
1. stripe_offset: The index of the OST where the first stripe is to be placed; default is -1, which results in random selection; using a non-default value is NOT recommended.
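As an illustration only, the following sketch sets all three parameters on a hypothetical directory using the current lfs setstripe options (-S for stripe size, -c for stripe count, -i for stripe offset; older Lustre releases used lowercase -s for the size):
```console
$ lfs setstripe -S 2m -c 4 -i -1 /scratch/username/mydir   # 2 MB stripes across 4 OSTs, first OST chosen by Lustre
$ lfs getstripe /scratch/username/mydir                     # verify the new layout
```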
!!! note
Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
Use the lfs getstripe command to view the stripe parameters. Use the lfs setstripe command to set the stripe parameters for optimal I/O performance. The correct stripe setting depends on your needs and file access patterns.
In this example, we view the current stripe setting of the /scratch/username/ directory, change the stripe count so that files are striped across all OSTs, and verify the change. All files written to this directory will then be striped over all (54) OSTs.
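A minimal sketch of those commands, assuming the standard lfs syntax (the directory path is the one used in the text):
```console
$ lfs getstripe /scratch/username/        # view the current stripe settings
$ lfs setstripe -c -1 /scratch/username/  # stripe new files across all OSTs
$ lfs getstripe /scratch/username/        # verify the change
```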
Use the lfs check osts command to see the number and status of active OSTs for each file system on Salomon. Learn more by reading the man page:
```console
$ lfs check osts
$ man lfs
```
### Hints on Lustre Striping
!!! note
Increase the stripe_count for parallel I/O to the same file.
When multiple processes write blocks of data to the same file in parallel, the I/O performance for large files improves when the stripe_count is set to a larger value. The stripe count sets the number of OSTs the file will be written to. By default, the stripe count is set to 1. While this default setting provides efficient access to metadata (for example, to support the ls -l command), large files should use stripe counts greater than 1. This increases the aggregate I/O bandwidth by using multiple OSTs in parallel instead of just one. A rule of thumb is to use a stripe count approximately equal to the number of gigabytes in the file.
Another good practice is to make the stripe count an integral factor of the number of processes performing the write in parallel, so that you achieve load balance among the OSTs. For example, set the stripe count to 16 instead of 15 when you have 64 processes performing the writes.
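For instance, following the two hints above, a directory intended for a roughly 16 GB file written in parallel by 64 processes could be given a stripe count of 16 (the directory name here is purely illustrative):
```console
$ lfs setstripe -c 16 /scratch/username/parallel_output   # 16 divides 64, so writer processes balance across OSTs
```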
!!! note
    Using a large stripe size can improve performance when accessing very large files.
A large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file.
Read more [here][b].
## Disk Usage and Quota Commands
Disk usage and user quotas can be checked and reviewed using the following command:
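As a sketch only, one way to query a per-user quota on a Lustre file system is the standard lfs quota utility; the username and mount point below are placeholders, not necessarily the exact command documented for Salomon:
```console
$ lfs quota -u username /scratch   # show block and inode usage and limits for the given user
```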
### Scratch
The SCRATCH file system is realized as a Lustre parallel file system and is available from all login and computational nodes. The default stripe size is 1 MB and the default stripe count is -1. There are 54 OSTs dedicated to the SCRATCH file system.
!!! note
Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 10 m inodes and 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire file system and denying service to other users. If 100 TB of space or 10 m inodes prove insufficient for a particular user, contact [support][d]; the quota may be lifted upon request.