
Qfree update

Merged Jan Siwiec requested to merge qfree-update into master
5 files changed: +251 −157
@@ -4,16 +4,7 @@ To run a [job][1], computational resources for this particular job must be alloc
## Resources Allocation Policy
-Resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. [The Fair-share][3] ensures that individual users may consume approximately equal amount of resources per week. The resources are accessible via queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. Following queues are the most important:
+Resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. [The Fair-share][3] ensures that individual users may consume approximately equal amount of resources per week. The resources are accessible via queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources.
-* **qexp** - Express queue
-* **qprod** - Production queue
-* **qlong** - Long queue
-* **qmpp** - Massively parallel queue
-* **qnvidia**, **qfat** - Dedicated queues
-* **qcpu_biz**, **qgpu_biz** - Queues for commercial users
-* **qcpu_eurohpc**, **qgpu_eurohpc** - Queues for EuroHPC users
-* **qfree** - Free resource utilization queue
!!! note
    See the queue status for [Karolina][a] or [Barbora][c].
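For illustration, a queue is selected with the `-q` option at submission time. Below is a minimal sketch of a PBS Pro job script, not taken from this page: the project ID, node count, and program name are placeholders.

```shell
#!/bin/bash
#PBS -A PROJECT-ID                  # placeholder project allocation
#PBS -q qprod                       # e.g. the production queue
#PBS -l select=2,walltime=12:00:00  # hypothetical resource request
# Run the payload from the directory the job was submitted from.
cd "$PBS_O_WORKDIR"
./my_program
```

Submit it with `qsub myjob.sh`; see the [Job Submission and Execution][5] page for the supported resource specifications.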
@@ -38,7 +29,13 @@ Use GNU Parallel and/or Job arrays when running (many) single core jobs.
In many cases, it is useful to submit a huge (100+) number of computational jobs into the PBS queue system. A huge number of (small) jobs is one of the most effective ways to execute parallel calculations, achieving best runtime, throughput and computer utilization. In this chapter, we discuss the recommended way to run huge numbers of jobs, including **ways to run huge numbers of single core jobs**.
-Read more on [Capacity Computing][6] page.
+Read more on the [Capacity Computing][6] page.
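The job-array approach mentioned above can be sketched as follows. This is a hedged example under assumed names: the input/output file pattern, array range, and project ID are hypothetical, and the Capacity Computing page documents the supported patterns.

```shell
#!/bin/bash
#PBS -A PROJECT-ID    # placeholder project allocation
#PBS -q qprod
#PBS -J 1-100         # 100 sub-jobs, one per input file
# Each sub-job selects its own input via the PBS Pro array index.
cd "$PBS_O_WORKDIR"
./process "input.$PBS_ARRAY_INDEX" > "output.$PBS_ARRAY_INDEX"
```

A single `qsub` of this script enqueues all 100 sub-jobs, which the scheduler then runs as resources become available.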

## Vnode Allocation

The `qgpu` queue on Karolina takes advantage of the division of nodes into vnodes. An accelerated node equipped with two 64-core processors and eight GPU cards is treated as eight vnodes, each containing 16 CPU cores and 1 GPU card. Vnodes can be allocated to jobs individually: through a precise definition of the resource list at job submission, you may allocate a varying number of resources/GPU cards according to your needs.

Read more on the [Vnode Allocation][7] page.
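As a sketch only: allocating individual vnodes might look like the following, where the exact chunk specification (`ncpus`, `ngpus`) is an assumption to be checked against the Vnode Allocation page, and the project ID and program name are placeholders.

```shell
#!/bin/bash
#PBS -A PROJECT-ID                  # placeholder project allocation
#PBS -q qgpu
#PBS -l select=2:ncpus=16:ngpus=1   # two vnodes: 2 GPU cards, 16 cores each
# Hypothetical payload using the allocated GPU cards.
cd "$PBS_O_WORKDIR"
./gpu_program
```

Requesting vnodes rather than whole nodes lets a job take, for example, a quarter of an accelerated node instead of blocking all eight GPU cards.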
[1]: ../index.md#terminology-frequently-used-on-these-pages
[2]: ../pbspro.md
@@ -46,6 +43,7 @@ Read more on [Capacity Computing][6] page.
[4]: resources-allocation-policy.md
[5]: job-submission-and-execution.md
[6]: capacity-computing.md
 
[7]: vnode-allocation.md
[a]: https://extranet.it4i.cz/rsweb/karolina/queues
[b]: https://www.altair.com/pbs-works/