Resources are allocated to jobs in a fair-share fashion,
subject to constraints set by the queue and the resources available to the project.
The fair-share system ensures that individual users may consume approximately equal amounts of resources per week.
Detailed information can be found in the [Job scheduling][1] section.
Resources are accessible via several queues to which jobs are submitted.
Queues provide prioritized and exclusive access to the computational resources.
Computational resources are subject to the [accounting policy][7].
!!! important
    Queues are divided by resource type: `qcpu_` for non-accelerated nodes and `qgpu_` for accelerated nodes. <br><br>
    On Karolina's `qgpu` queue, **you can now allocate 1/8 of the node - 1 GPU and 16 cores**. For more information, see [Allocation of vnodes on qgpu][4].
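A minimal submission sketch for such a 1/8-node chunk follows; the project ID `OPEN-00-00` and the job script are placeholders, and the exact select syntax is described in [Allocation of vnodes on qgpu][4]:

```console
$ qsub -A OPEN-00-00 -q qgpu -l select=1:ngpus=1:ncpus=16 ./job.sh
```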
### New Queues
| <div style="width:86px">Queue</div> | Description |
| -------------------------------- | ----------- |
| `qcpu` | Production queue for non-accelerated nodes intended for standard production runs. Requires an active project with nonzero remaining resources. Full nodes are allocated. Identical to `qprod`. |
| `qgpu` | Dedicated queue for accessing the NVIDIA accelerated nodes. Requires an active project with nonzero remaining resources. It utilizes 8x NVIDIA A100 GPUs with 320 GB of HBM2 memory per node. The PI needs to explicitly ask support for authorization to enter the queue for all users associated with their project. **On Karolina, you can allocate 1/8 of the node - 1 GPU and 16 cores**. For more information, see [Allocation of vnodes on qgpu][4]. |
| `qcpu_biz`<br>`qgpu_biz` | Commercial queues, slightly higher priority. |
| `qcpu_eurohpc`<br>`qgpu_eurohpc` | EuroHPC queues, slightly higher priority, **Karolina only**. |
| `qcpu_exp`<br>`qgpu_exp` | Express queues for testing and running very small jobs. Two nodes (without accelerators) are always reserved; a maximum of 8 nodes is available per user. The nodes may be allocated on a per-core basis. Per user, one job may run at a time and up to five jobs may wait in the queue. |
| `qcpu_free`<br>`qgpu_free` | Intended for utilization of free resources after a project has exhausted its allocated resources. Note that the queue is **not free of charge**; [normal accounting][2] applies. (Does not apply to DD projects by default; DD projects have to request permission after exhausting their computational resources.) Consumed resources will be accounted to the project. Access to the queue is removed if consumed resources exceed 150% of the allocation. Full nodes are allocated. |
| `qcpu_long` | Queue for long production runs. Requires an active project with nonzero remaining resources. Only 200 nodes without acceleration may be accessed. Full nodes are allocated. |
| `qcpu_preempt`<br>`qgpu_preempt` | Free queues with the lowest priority (LP). The queues require a project with an allocation of the respective resource type. There is no limit on resource overdraft. Jobs are killed if jobs with a higher priority (HP) request the nodes and no other nodes are available. LP jobs are automatically re-queued once HP jobs finish, so **make sure your jobs are re-runnable** (see the example below the table). |
| `qdgx` | Queue for DGX-2, accessible from Barbora. |
| `qfat` | Queue for the fat node. The PI must request authorization to enter the queue for all users associated with their project. |
| `qviz` | Visualization queue intended for pre-/post-processing using OpenGL accelerated graphics. Each user gets 8 CPU cores allocated (approx. 64 GB of RAM and 1/8 of the GPU capacity, the default "chunk"). If more GPU power or RAM is required, it is recommended to allocate more chunks (with 8 cores each), up to one whole node per user; this is also the maximum allowed allocation per user. One hour of work is allocated by default; the user may ask for two hours maximum. See the example below the table. |
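For illustration, an interactive session on `qviz` requesting two default chunks (16 cores, approx. 128 GB of RAM, and 1/4 of the GPU capacity) might be submitted as follows. This is a sketch only; the project ID `OPEN-00-00` is a placeholder:

```console
$ qsub -I -q qviz -A OPEN-00-00 -l select=2:ncpus=8 -l walltime=02:00:00
```

For the preempt queues, a job can be marked re-runnable with the standard PBS `-r y` option (again a sketch with a placeholder project ID and job script):

```console
$ qsub -A OPEN-00-00 -q qcpu_preempt -r y -l select=2 ./job.sh
```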
### Legacy Queues
Legacy queues stay in production until early 2023.
| Legacy queue | Replaced by |
| ------------ | ------------------------- |
| `qexp` | `qcpu_exp` & `qgpu_exp` |
| `qprod` | `qcpu` |
| `nvidia` | `qgpu`. Note that unlike the new queue, `nvidia` allocates only full nodes. |
| `qfree` | `qcpu_free` & `qgpu_free` |
See the following subsections for the list of queues:
* [Karolina queues][5]
* [Barbora queues][6]
The job wall clock time defaults to **half the maximum time**, see the queue pages linked above. Longer wall time limits can be [set manually, see examples][3].
Jobs that exceed the reserved wall clock time (Req'd Time) get killed automatically. The wall clock time limit can be changed for queued jobs (state Q) using the `qalter` command; however, it cannot be changed for a running job (state R).
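For instance, to raise the wall time limit of a queued job (a sketch; `123456` is a placeholder job ID, and the new value must stay within the queue's maximum):

```console
$ qalter -l walltime=48:00:00 123456
```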
## Queue Status
!!! tip
    Check the status of jobs, queues and compute nodes [here][c].

Display the queue status:
```console
$ qstat -q
```
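The output lists each queue with its limits and current job counts. An illustrative sketch (queue names taken from this page; limits and counts are placeholders, not actual values):

```console
Queue            Memory CPU Time Walltime Node   Run   Que   Lm  State
---------------- ------ -------- -------- ---- ----- ----- ----  -----
qcpu               --      --    48:00:00   --    10     2   --   E R
qgpu               --      --    48:00:00   --     5     1   --   E R
```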
The PBS allocation overview may also be obtained using the `rspbs` command:
```console
$ rspbs
Usage: rspbs [options]

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  --get-server-details  Print server
  --get-queues          Print queues
  --get-queues-details  Print queues details
  --get-reservations    Print reservations
  --get-reservations-details
                        Print reservations details
```
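For example, to print only the queue overview (one of the options listed above):

```console
$ rspbs --get-queues
```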
---8<--- "mathjax.md"
[1]: job-priority.md
[2]: #resource-accounting-policy
[3]: job-submission-and-execution.md
[4]: vnode-allocation.md
[5]: ./karolina-queues.md
[6]: ./barbora-queues.md
[7]: ./resource-accounting.md
[a]: https://support.it4i.cz/rt/
[c]: https://extranet.it4i.cz/rsweb