Commit c786d577 authored by Pavel Jirásek's avatar Pavel Jirásek

Headers

parent 83791f1c
@@ -16,11 +16,13 @@ However, executing huge number of jobs via the PBS queue may strain the system.
Policy
------
1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
2. The array size is at most 1000 subjobs.
Job arrays
--------------
!!! Note "Note"
    A huge number of jobs may be easily submitted and managed as a job array.
@@ -221,7 +223,7 @@ In this example, we submit a job of 101 tasks. 16 input files will be processed
Please note the #PBS directives at the beginning of the jobscript file; don't forget to set your valid PROJECT_ID and desired queue.
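For illustration only (the queue name, job name, and resource request below are assumptions, not site defaults), such a jobscript header might look like:

```shell
#!/bin/bash
# Hypothetical jobscript header. PBS reads the #PBS comment directives at
# submit time; bash itself ignores them. Replace PROJECT_ID and the queue
# with your own valid values.
#PBS -A PROJECT_ID
#PBS -q qprod
#PBS -N EXAMPLE_JOB
#PBS -l select=1:ncpus=16,walltime=02:00:00

echo "job body starts here"
```

Because the directives are ordinary comments, the same script runs unchanged both under PBS and when tested interactively.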
Job arrays and GNU parallel
---------------------------
!!! Note "Note"
    Combine job arrays and GNU parallel for the best throughput of single-core jobs.
@@ -307,6 +309,7 @@ Please note the #PBS directives in the beginning of the jobscript file, dont' fo
Examples
--------
Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run a huge number of jobs. We recommend trying out the examples before using them for production jobs.

Unzip the archive in an empty directory on Anselm and follow the instructions in the README file.
...
@@ -3,6 +3,7 @@ Capacity computing
Introduction
------------
In many cases, it is useful to submit a huge number (100+) of computational jobs into the PBS queue system. A huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving the best runtime, throughput, and computer utilization.

However, executing a huge number of jobs via the PBS queue may strain the system. This strain may result in slow response to commands, inefficient scheduling, and overall degradation of performance and user experience for all users. For this reason, the number of jobs is **limited to 100 per user, 1500 per job array**.
@@ -16,11 +17,13 @@ However, executing huge number of jobs via the PBS queue may strain the system.
Policy
------
1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
2. The array size is at most 1000 subjobs.
Job arrays
--------------
!!! Note "Note"
    A huge number of jobs may be easily submitted and managed as a job array.
@@ -152,6 +155,7 @@ Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/)
GNU parallel
----------------
!!! Note "Note"
    Use GNU parallel to run many single-core tasks on one node.
@@ -223,6 +227,7 @@ Please note the #PBS directives in the beginning of the jobscript file, dont' fo
Job arrays and GNU parallel
-------------------------------
!!! Note "Note"
    Combine job arrays and GNU parallel for the best throughput of single-core jobs.
@@ -307,6 +312,7 @@ Please note the #PBS directives in the beginning of the jobscript file, dont' fo
Examples
--------
Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run a huge number of jobs. We recommend trying out the examples before using them for production jobs.

Unzip the archive in an empty directory on Anselm and follow the instructions in the README file.
...