## Resources Allocation Policy
### Job queue policies
The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and the resources available to the Project. The fair-share at Anselm ensures that individual users may consume an approximately equal amount of resources per week. Detailed information can be found in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:
* **qnvidia**, **qmic**, **qfat**, the Dedicated queues: The queue qnvidia is dedicated to accessing the Nvidia accelerated nodes, qmic to accessing the MIC nodes, and qfat the Fat nodes. An active project with nonzero remaining resources is required to enter these queues. 23 nvidia, 4 mic, and 2 fat nodes are included. Full nodes, 16 cores per node, are allocated. The queues run with very high priority; the jobs will be scheduled before the jobs coming from the qexp queue. A PI needs to explicitly ask [support](https://support.it4i.cz/rt/) for authorization to enter the dedicated queues for all users associated with her/his Project.
* **qfree**, the Free resource queue: The queue qfree is intended for utilization of free resources, after a Project has exhausted all of its allocated computational resources (this does not apply to DD projects by default; DD projects have to request permission to use qfree after exhaustion of computational resources). An active project must be specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerators may be accessed from this queue. Full nodes, 16 cores per node, are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours. A submission example is shown below.
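For illustration, a submission into one of these queues could look like the sketch below; the project ID `OPEN-0-0`, node count, and jobscript name are placeholders, not values taken from this page:

```console
$ # request 4 full Anselm nodes (16 cores each) in the production queue
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16 ./myjob.sh
```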
### Queue notes
The job wall clock time defaults to **half the maximum time**, see table above. Longer wall time limits can be [set manually, see examples](job-submission-and-execution/).
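As a minimal sketch (the authoritative examples are on the linked page), a longer wall time is requested at submission via the walltime resource; the 24-hour value and project ID here are illustrative:

```console
$ # override the default wall time limit for this job
$ qsub -A OPEN-0-0 -q qprod -l select=2:ncpus=16 -l walltime=24:00:00 ./myjob.sh
```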
Anselm users may check current queue configuration at <https://extranet.it4i.cz/anselm/queues>.
### Queue status
!!! tip
    Check the status of jobs, queues and compute nodes at <https://extranet.it4i.cz/anselm/>
  --incl-finished       Include finished jobs
```
---8<--- "resource_accounting.md"

---8<--- "mathjax.md"
# Resource Allocation and Job Execution
To run a [job](/#terminology-frequently-used-on-these-pages), [computational resources](/salomon/resources-allocation-policy/#resource-accounting-policy) for this particular job must be allocated. This is done via the PBS Pro job workload manager software, which distributes workloads across the supercomputer. Extensive information about PBS Pro can be found in the [PBS Pro User's Guide](/pbspro).
## Resources Allocation Policy
The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and the resources available to the Project. [The Fair-share](/salomon/job-priority/#fair-share-priority) ensures that individual users may consume an approximately equal amount of resources per week. The resources are accessible via queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following queues are the most important:
* **qexp**, the Express queue
* **qprod**, the Production queue
* **qlong**, the Long queue
* **qmpp**, the Massively parallel queue
* **qnvidia**, **qmic**, **qfat**, the Dedicated queues
* **qfree**, the Free resource utilization queue
!!! note
    Check the queue status at <https://extranet.it4i.cz/>
Read more on the [Resource Allocation Policy](/salomon/resources-allocation-policy) page.
## Job Submission and Execution
!!! note
    Use the **qsub** command to submit your jobs.
The qsub command submits the job into the queue, i.e. it creates a request to the PBS Job manager for allocation of the specified resources. The **smallest allocation unit is an entire node, 16 cores**, with the exception of the qexp queue. The resources will be allocated when available, subject to allocation policies and constraints. **After the resources are allocated, the jobscript or interactive shell is executed on the first of the allocated nodes.**
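A sketch of both submission modes follows; the project ID, queue, and script name are placeholders, and the linked page holds the authoritative examples:

```console
$ # batch mode: allocate one full node, run the jobscript on the first allocated node
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16 ./jobscript.sh

$ # interactive mode: open a shell on the first allocated node
$ qsub -I -A OPEN-0-0 -q qexp -l select=1:ncpus=16
```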
Read more on the [Job submission and execution](/salomon/job-submission-and-execution) page.
## Capacity Computing
!!! note
    Use Job arrays when running a huge number of jobs.
    Use GNU Parallel and/or Job arrays when running (many) single-core jobs.
In many cases, it is useful to submit a huge number (100+) of computational jobs into the PBS queue system. A huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving the best runtime, throughput, and computer utilization. In this chapter, we discuss the recommended way to run a huge number of jobs, including **ways to run a huge number of single-core jobs**.
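A minimal job-array sketch follows; the range, project ID, and script name are placeholders. PBS Pro expands one submission into many sub-jobs, each of which can read its own index from the PBS_ARRAY_INDEX environment variable:

```console
$ # submit 100 sub-jobs; each picks its input using $PBS_ARRAY_INDEX
$ qsub -A OPEN-0-0 -q qprod -J 1-100 ./jobscript.sh
```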
Read more on the [Capacity computing](/salomon/capacity-computing) page.
* **node:** a computer, interconnected by network to other computers - Computational nodes are powerful computers, designed and dedicated for executing demanding scientific computations.
* **core:** processor core, a unit of processor, executing computations
* **core-hour:** also normalized core-hour, NCH. A metric of computer utilization, [see definition](salomon/resources-allocation-policy/#normalized-core-hours-nch).
* **job:** a calculation running on the supercomputer - The job allocates and utilizes resources of the supercomputer for a certain time.
* **HPC:** High Performance Computing
* **HPC (computational) resources:** corehours, storage capacity, software licences
## Errata
Although we have taken every care to ensure the accuracy of the content, mistakes do happen.
If you find an inconsistency or error, please report it by visiting <http://support.it4i.cz/rt>, creating a new ticket, and entering the details.
By doing so, you can save other readers from frustration and help us improve.
We will fix the problem as soon as possible.
## Resources Allocation Policy
### Job queue policies
The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and the resources available to the Project. The fair-share system ensures that individual users may consume an approximately equal amount of resources per week. Detailed information can be found in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:
| **qviz** Visualization queue | yes | none required | 2 (with NVIDIA Quadro K5000) | 4 | 150 | no | 1 / 8h |
!!! note
    **The qfree queue is not free of charge**. [Normal accounting](#resource-accounting-policy) applies. However, it allows for utilization of free resources, once a Project has exhausted all of its allocated computational resources. This does not apply to Director's Discretion projects (DD projects) by default, but may be allowed upon request.
* **qexp**, the Express queue: This queue is dedicated to testing and running very small jobs. It is not required to specify a project to enter the qexp. There are 2 nodes always reserved for this queue (w/o accelerators); a maximum of 8 nodes are available via the qexp for a particular user. The nodes may be allocated on a per-core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
* **qprod**, the Production queue: This queue is intended for normal production runs. An active project with nonzero remaining resources must be specified to enter the qprod. All nodes may be accessed via the qprod queue, however only 86 per job. Full nodes, 24 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
!!! note
    To access a node with a Xeon Phi co-processor, the user needs to specify that in the [job submission select statement](job-submission-and-execution/).
### Queue notes
The job wall-clock time defaults to **half the maximum time**, see table above. Longer wall time limits can be [set manually, see examples](job-submission-and-execution/).
Jobs that exceed the reserved wall-clock time (Req'd Time) get killed automatically. The wall-clock time limit can be changed for queuing jobs (state Q) using the qalter command; however, it cannot be changed for a running job (state R).
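A minimal sketch of such a change for a queued job; the job ID and the new limit are placeholders:

```console
$ # raise the wall-clock limit of a job still waiting in the queue (state Q)
$ qalter -l walltime=12:00:00 <jobid>
```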
Salomon users may check current queue configuration at <https://extranet.it4i.cz/rsweb/salomon/queues>.
### Queue Status
!!! note
    Check the status of jobs, queues and compute nodes at [https://extranet.it4i.cz/rsweb/salomon/](https://extranet.it4i.cz/rsweb/salomon)
  --incl-finished       Include finished jobs
```
---8<--- "resource_accounting.md"

---8<--- "mathjax.md"
## Resource Accounting Policy
### Wall-clock Core-Hours (WCH)
The wall-clock core-hours (WCH) are the basic metric of computer utilization time.
1 wall-clock core-hour is defined as 1 processor core allocated for 1 hour of wall-clock time. Allocating a full node (16 cores Anselm, 24 cores Salomon)
for 1 hour amounts to 16 wall-clock core-hours (Anselm) or 24 wall-clock core-hours (Salomon).
### Normalized Core-Hours (NCH)
The resources subject to accounting are the normalized core-hours (NCH).
The normalized core-hours are obtained from WCH by applying a normalization factor:
$$
NCH = F \times WCH
$$
All jobs are accounted in normalized core-hours, using the factor F valid at the time of execution (a worked example follows the table):
| System | F | Validity |
| ------------------------------- | - | -------- |
| Salomon | 1.00 | 2017-09-11 to 2018-06-01 |
| Anselm | 0.65 | 2017-09-11 to 2018-06-01 |
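As a worked example using the factors above, a full 16-core Anselm node allocated for 5 hours of wall-clock time is accounted as:

$$
WCH = 16 \times 5 = 80, \qquad NCH = 0.65 \times 80 = 52
$$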
The accounting runs whenever the computational cores are allocated via the PBS Pro workload manager (the qsub command), regardless of whether
the cores are actually used for any calculation.
!!! note
    **The allocations are requested/granted in normalized core-hours NCH.**

!!! warning
    Whenever the term core-hour is used in this documentation, we mean the normalized core-hour, NCH.
The normalized core-hours were introduced to treat systems of different age on an equal footing.
The normalized core-hour is an accounting tool to discount the legacy systems. The past (before 2017-09-11) F factors are all 1.0.
In the future, the factors F will be updated as new systems are installed. The factors F are expected to only decrease over time.
See examples in the [Job submission and execution](job-submission-and-execution/) section.
### Consumed Resources
Check how many core-hours have been consumed. The it4ifree command is available on the clusters' login nodes.
```console
$ it4ifree
Projects I am participating in
==============================
PID Days left Total Used WCHs Used NCHs WCHs by me NCHs by me Free
---------- ----------- ------- ----------- ----------- ------------ ------------ -------
OPEN-XX-XX 323 0 5169947 5169947 50001 50001 1292555
Projects I am Primarily Investigating
=====================================
PID Login Used WCHs Used NCHs
---------- ---------- ----------- -----------
OPEN-XX-XX user1 376670 376670
user2 4793277 4793277
Legend
======
WCH = Wall-clock Core Hour
NCH = Normalized Core Hour
```
The **it4ifree** command is a part of it4i.portal.clients package, located here: <https://pypi.python.org/pypi/it4i.portal.clients>
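A typical installation is sketched below, assuming a standard Python environment with pip available:

```console
$ pip install it4i.portal.clients
```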
- X Window System: general/accessing-the-clusters/graphical-user-interface/x-window-system.md
- VNC: general/accessing-the-clusters/graphical-user-interface/vnc.md
- VPN Access: general/accessing-the-clusters/vpn-access.md
- Resource allocation and job execution: general/resource_allocation_and_job_execution.md
- Salomon Cluster:
- Introduction: salomon/introduction.md
- Hardware Overview: salomon/hardware-overview.md