diff --git a/docs.it4i/anselm/job-priority.md b/docs.it4i/anselm/job-priority.md
index 30eba2bd004ff45ec5532d06ea556bdd08b7f56c..09acc4cedc9a6cad4ac9b51fdc46ddac2ef91ad2 100644
--- a/docs.it4i/anselm/job-priority.md
+++ b/docs.it4i/anselm/job-priority.md
@@ -1,4 +1,4 @@
-# Job scheduling
+# Job Scheduling
 
 ## Job Execution Priority
 
@@ -54,7 +54,7 @@ Job execution priority (job sort formula) is calculated as:
 
 ---8<--- "job_sort_formula.md"
 
-### Job backfilling
+### Job Backfilling
 
 Anselm cluster uses job backfilling.
 
diff --git a/docs.it4i/anselm/remote-visualization.md b/docs.it4i/anselm/remote-visualization.md
index e5a439b4654da5342101d15287212501b87c0df9..93d5cd23b4fc8c2c6856a511dcb7a31cf4d4fb00 100644
--- a/docs.it4i/anselm/remote-visualization.md
+++ b/docs.it4i/anselm/remote-visualization.md
@@ -1,4 +1,4 @@
-# Remote visualization service
+# Remote Visualization Service
 
 ## Introduction
 
diff --git a/docs.it4i/anselm/resources-allocation-policy.md b/docs.it4i/anselm/resources-allocation-policy.md
index 7f06ab8c1b7cf71f12e423301e7fe91183f15fa9..34f1ee4186e532b7ee26ef93a30bed7f4229d452 100644
--- a/docs.it4i/anselm/resources-allocation-policy.md
+++ b/docs.it4i/anselm/resources-allocation-policy.md
@@ -1,6 +1,6 @@
 # Resources Allocation Policy
 
-## Job queue policies
+## Job Queue Policies
 
 The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. The Fair-share at Anselm ensures that individual users may consume approximately equal amount of resources per week. Detailed information in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. Following table provides the queue partitioning overview:
 
@@ -27,7 +27,7 @@ The resources are allocated to the job in a fair-share fashion, subject to const
 * **qnvidia**, qmic, qfat, the Dedicated queues: The queue qnvidia is dedicated to access the Nvidia accelerated nodes, the qmic to access MIC nodes and qfat the Fat nodes. It is required that active project with nonzero remaining resources is specified to enter these queues. 23 nvidia, 4 mic and 2 fat nodes are included. Full nodes, 16 cores per node are allocated. The queues run with very high priority, the jobs will be scheduled before the jobs coming from the qexp queue. An PI needs explicitly ask [support](https://support.it4i.cz/rt/) for authorization to enter the dedicated queues for all users associated to her/his Project.
 * **qfree**, The Free resource queue: The queue qfree is intended for utilization of free resources, after a Project exhausted all its allocated computational resources (Does not apply to DD projects by default. DD projects have to request for persmission on qfree after exhaustion of computational resources.). It is required that active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 16 cores per node are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
 
-## Queue notes
+## Queue Notes
 
 The job wall clock time defaults to **half the maximum time**, see table above. Longer wall time limits can be [set manually, see examples](job-submission-and-execution/).
 
@@ -35,7 +35,7 @@ Jobs that exceed the reserved wall clock time (Req'd Time) get killed automatica
 
 Anselm users may check current queue configuration at <https://extranet.it4i.cz/anselm/queues>.
 
-## Queue status
+## Queue Status
 
 !!! tip
     Check the status of jobs, queues and compute nodes at <https://extranet.it4i.cz/anselm/>
diff --git a/docs.it4i/anselm/storage.md b/docs.it4i/anselm/storage.md
index c7e40c458e9ed508d3c2e3701c5fb814e3b44f01..7009674731c0683cf75b46b807fa70d315938172 100644
--- a/docs.it4i/anselm/storage.md
+++ b/docs.it4i/anselm/storage.md
@@ -98,7 +98,7 @@ The architecture of Lustre on Anselm is composed of two metadata servers (MDS) a
 * 2 groups of 5 disks in RAID5
 * 2 hot-spare disks
 
-### HOME
+### HOME File System
 
 The HOME filesystem is mounted in directory /home. Users home directories /home/username reside on this filesystem. Accessible capacity is 320TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250GB per user. If 250GB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request.
 
@@ -127,7 +127,7 @@ Default stripe size is 1MB, stripe count is 1. There are 22 OSTs dedicated for t
 | Default stripe count | 1 |
 | Number of OSTs | 22 |
 
-### SCRATCH
+### SCRATCH File System
 
 The SCRATCH filesystem is mounted in directory /scratch. Users may freely create subdirectories and files on the filesystem. Accessible capacity is 146TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 100TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and deny service to other users. If 100TB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request.