From 43767aed7be95ced33d12bc565598a8c5c6beed2 Mon Sep 17 00:00:00 2001
From: John Cawley <john.cawley@vsb.cz>
Date: Thu, 30 Nov 2017 13:56:37 +0100
Subject: [PATCH] Update resource-allocation-and-job-execution.md

PROOFREAD line 11; should that be 'regular'?
---
 .../anselm/resource-allocation-and-job-execution.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/docs.it4i/anselm/resource-allocation-and-job-execution.md b/docs.it4i/anselm/resource-allocation-and-job-execution.md
index 4585588dd..a24b81511 100644
--- a/docs.it4i/anselm/resource-allocation-and-job-execution.md
+++ b/docs.it4i/anselm/resource-allocation-and-job-execution.md
@@ -2,9 +2,9 @@
 
 To run a [job](job-submission-and-execution/), [computational resources](resources-allocation-policy/) for this particular job must be allocated. This is done via the PBS Pro job workload manager software, which efficiently distributes workloads across the supercomputer. Extensive information about PBS Pro can be found in the [official documentation here](../pbspro/), especially in the PBS Pro User's Guide.
 
-## Resources Allocation Policy
+## Resource Allocation Policy
 
-The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. [The Fair-share](job-priority/) at Anselm ensures that individual users may consume approximately equal amount of resources per week. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. Following queues are available to Anselm users:
+The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. [The Fair-share](job-priority/) system of Anselm ensures that individual users may consume approximately equal amount of resources per week. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following queues are available to Anselm users:
 
 * **qexp**, the Express queue
 * **qprod**, the Production queue
@@ -22,17 +22,17 @@ Read more on the [Resource AllocationPolicy](resources-allocation-policy/) page.
 !!! note
     Use the **qsub** command to submit your jobs.
 
-The qsub submits the job into the queue. The qsub command creates a request to the PBS Job manager for allocation of specified resources. The **smallest allocation unit is entire node, 16 cores**, with exception of the qexp queue. The resources will be allocated when available, subject to allocation policies and constraints. **After the resources are allocated the jobscript or interactive shell is executed on first of the allocated nodes.**
+The qsub submits the job into the queue. The qsub command creates a request to the PBS Job manager for allocation of specified resources. The **smallest allocation unit is an entire node, 16 cores**, with the exception of the qexp queue. The resources will be allocated when available, subject to allocation policies and constraints. **After the resources are allocated the jobscript or interactive shell is executed on the first of the allocated nodes.**
 
 Read more on the [Job submission and execution](job-submission-and-execution/) page.
 
 ## Capacity Computing
 
 !!! note
-    Use Job arrays when running huge number of jobs.
+    Use Job arrays when running a huge number of jobs.
     Use GNU Parallel and/or Job arrays when running (many) single core jobs.
 
-In many cases, it is useful to submit huge (100+) number of computational jobs into the PBS queue system. Huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving best runtime, throughput and computer utilization. In this chapter, we discuss the the recommended way to run huge number of jobs, including **ways to run huge number of single core jobs**.
+In many cases, it is useful to submit a huge (100+) number of computational jobs into the PBS queue system. A huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving the best runtime, throughput, and computer utilization. In this chapter, we discuss the recommended way to run a huge number of jobs, including **ways to run a huge number of single core jobs**.
 
-Read more on [Capacity computing](capacity-computing/) page.
+Read more on the [Capacity computing](capacity-computing/) page.
 
--
GitLab