From c786d5777e8017241abbd10f47ef113aec0b350e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Pavel=20Jir=C3=A1sek?= <pavel.jirasek@vsb.cz>
Date: Mon, 23 Jan 2017 10:48:47 +0100
Subject: [PATCH] Headers

---
 .../anselm-cluster-documentation/capacity-computing.md | 5 ++++-
 docs.it4i/salomon/capacity-computing.md                | 6 ++++++
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/docs.it4i/anselm-cluster-documentation/capacity-computing.md b/docs.it4i/anselm-cluster-documentation/capacity-computing.md
index 90b9934da..78350a812 100644
--- a/docs.it4i/anselm-cluster-documentation/capacity-computing.md
+++ b/docs.it4i/anselm-cluster-documentation/capacity-computing.md
@@ -16,11 +16,13 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 
 Policy
 ------
+
 1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
 2. The array size is at most 1000 subjobs.
 
 Job arrays
 --------------
+
 !!! Note "Note"
     Huge number of jobs may be easily submitted and managed as a job array.
 
@@ -221,7 +223,7 @@ In this example, we submit a job of 101 tasks. 16 input files will be processed
 Please note the #PBS directives in the beginning of the jobscript file, dont' forget to set your valid PROJECT_ID and desired queue.
 
 Job arrays and GNU parallel
--------------------------------
+---------------------------
 
 !!! Note "Note"
     Combine the Job arrays and GNU parallel for best throughput of single core jobs
@@ -307,6 +309,7 @@ Please note the #PBS directives in the beginning of the jobscript file, dont' fo
 
 Examples
 --------
+
 Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run huge number of jobs. We recommend to try out the examples, before using this for running production jobs.
 
 Unzip the archive in an empty directory on Anselm and follow the instructions in the README file

diff --git a/docs.it4i/salomon/capacity-computing.md b/docs.it4i/salomon/capacity-computing.md
index 28c57a64e..b405e5c3d 100644
--- a/docs.it4i/salomon/capacity-computing.md
+++ b/docs.it4i/salomon/capacity-computing.md
@@ -3,6 +3,7 @@ Capacity computing
 
 Introduction
 ------------
+
 In many cases, it is useful to submit huge (100+) number of computational jobs into the PBS queue system. Huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving best runtime, throughput and computer utilization.
 
 However, executing huge number of jobs via the PBS queue may strain the system. This strain may result in slow response to commands, inefficient scheduling and overall degradation of performance and user experience, for all users. For this reason, the number of jobs is **limited to 100 per user, 1500 per job array**
@@ -16,11 +17,13 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 
 Policy
 ------
+
 1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
 2. The array size is at most 1000 subjobs.
 
 Job arrays
 --------------
+
 !!! Note "Note"
     Huge number of jobs may be easily submitted and managed as a job array.
 
@@ -152,6 +155,7 @@ Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/)
 
 GNU parallel
 ----------------
+
 !!! Note "Note"
     Use GNU parallel to run many single core tasks on one node.
 
@@ -223,6 +227,7 @@ Please note the #PBS directives in the beginning of the jobscript file, dont' fo
 
 Job arrays and GNU parallel
 -------------------------------
+
 !!! Note "Note"
     Combine the Job arrays and GNU parallel for best throughput of single core jobs
 
@@ -307,6 +312,7 @@ Please note the #PBS directives in the beginning of the jobscript file, dont' fo
 
 Examples
 --------
+
 Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run huge number of jobs. We recommend to try out the examples, before using this for running production jobs.
 
 Unzip the archive in an empty directory on Anselm and follow the instructions in the README file
-- 
GitLab
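
The documentation sections this patch touches describe running many single-core tasks through PBS job arrays combined with GNU parallel. For readers reviewing the patch without the full files at hand, here is a minimal sketch of that combined pattern — not taken from the patched files themselves. It assumes PBS Pro's `qsub -J` array syntax and `PBS_ARRAY_INDEX` variable (both used on Anselm and Salomon), and hypothetical inputs: a `tasklist` file with one task per line and a `do_task.sh` worker script.

```bash
#!/bin/bash
#PBS -A PROJECT_ID          # placeholder: set your valid project ID
#PBS -q qprod               # placeholder: set your desired queue
#PBS -l select=1:ncpus=16   # one whole 16-core node per subjob

# Run from the directory the job was submitted from.
cd "$PBS_O_WORKDIR" || exit 1

# Each array subjob takes a 16-line chunk of the task list;
# PBS_ARRAY_INDEX is the subjob's index within the array (PBS Pro).
CHUNK=16
FIRST=$(( (PBS_ARRAY_INDEX - 1) * CHUNK + 1 ))
LAST=$(( PBS_ARRAY_INDEX * CHUNK ))

# GNU parallel fans the chunk's tasks out across the node's 16 cores.
# "tasklist" and "do_task.sh" are hypothetical names for illustration.
sed -n "${FIRST},${LAST}p" tasklist | parallel -j 16 ./do_task.sh {}
```

Submitted as, for example, `qsub -J 1-100 jobscript.sh`, a script along these lines would run 1600 single-core tasks as 100 subjobs of 16 tasks each — one whole-node subjob per PBS job, which is what keeps the per-user job count within the limits the patched policy sections state.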