Commit 1614fc42 authored by Pavel Jirásek

Table formatting

parent 9b107193
@@ -5,10 +5,10 @@ Resources Allocation Policy
 ---------------------------
 The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and the resources available to the Project. The fair-share mechanism at Anselm ensures that individual users may consume an approximately equal amount of resources per week. Detailed information can be found in the [Job scheduling](job-priority/) section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides an overview of the queue partitioning:
-|queue |active project |project resources |nodes|min ncpus*|priority|authorization|walltime |
-| --- | --- |
-|**qexp** Express queue|no |none required |32 nodes, max 8 per user |24 |>150 |no |1 / 1h |
-|**qprod** Production queue|yes |> 0 |>1006 nodes, max 86 per job |24 |0 |no |24 / 48h |
+|queue |active project |project resources |nodes|min ncpus |priority|authorization|walltime |
+| --- | --- |--- |--- |--- |--- |--- |--- |
+|**qexp** Express queue|no |none required |32 nodes, max 8 per user |24 |150 |no |1 / 1h |
+|**qprod** Production queue|yes |> 0 |1006 nodes, max 86 per job |24 |0 |no |24 / 48h |
 |**qlong** Long queue |yes |> 0 |256 nodes, max 40 per job, only non-accelerated nodes allowed |24 |0 |no |72 / 144h |
 |**qmpp** Massive parallel queue |yes |> 0 |1006 nodes |24 |0 |yes |2 / 4h |
 |**qfat** UV2000 queue |yes |> 0 |1 (uv1) |8 |0 |yes |24 / 48h |
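For illustration, a minimal submission sketch that matches the queue limits in the table above, assuming the PBS Pro `qsub` interface used on this cluster; the project ID `OPEN-0-0`, the node counts, and the script name `./myjob.sh` are placeholders, not values taken from this page:

```bash
# Production run in qprod: requires an active project with resources,
# 24 cores per node, up to 86 nodes per job, default walltime 24 h.
qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=24 -l walltime=24:00:00 ./myjob.sh

# Quick test in qexp: no project resources required, max 8 nodes per user,
# walltime limited to 1 h.
qsub -q qexp -l select=1:ncpus=24 -l walltime=01:00:00 ./myjob.sh
```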