# Job Arrays
    
    
A job array is a compact representation of many jobs, called tasks. Tasks share the same job script and have the same values for all attributes and resources, with the following exceptions:
    
* each task has a unique index, `$SLURM_ARRAY_TASK_ID`
* job identifiers of tasks differ only by their indices
* the state of tasks can differ
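
As a minimal illustration of these points, the sketch below (a hypothetical `hello-array.sh`, for demonstration only) prints the identifiers that distinguish one task from another; `$SLURM_ARRAY_JOB_ID` and `$SLURM_ARRAY_TASK_ID` are standard Slurm environment variables:

```bash
#!/bin/bash
#SBATCH -J hello-array

# Every task runs this very script; only the index differs.
# A task's full job identifier has the form <array job ID>_<task index>.
echo "Task ${SLURM_ARRAY_TASK_ID} of array job ${SLURM_ARRAY_JOB_ID}"
echo "Full job identifier: ${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}"
```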
    
All tasks within a job array have the same scheduling priority and are scheduled as independent jobs. The entire job array is submitted through a single `sbatch` command and may be managed by the `squeue`, `scancel`, and `scontrol` commands as a single job.
    
## Shared Jobscript
    
    
All tasks in a job array share a single jobscript. Each task runs its own instance of the jobscript; the instances execute different work, controlled by the `$SLURM_ARRAY_TASK_ID` variable.
    
Example:
    
    
Assume we have 900 input files, the name of each beginning with "file" (e.g. file001, ..., file900). We would like to use each of these input files with the myprog.x executable, each as a separate, single-node job running 128 threads.
    
First, we create a `tasklist` file, listing all tasks - all input files in our example:
    
```console
$ find . -name 'file*' > tasklist
```
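
The resulting `tasklist` holds one input file per line, as paths relative to the submit directory (the exact ordering of `find` output may vary), for example:

```console
$ head -n 3 tasklist
./file001
./file002
./file003
```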
    
Then we create a jobscript:
    
```bash
#!/bin/bash
#SBATCH -p qcpu
#SBATCH -A SERVICE
#SBATCH --nodes 1 --ntasks-per-node 1 --cpus-per-task 128
#SBATCH -t 02:00:00
#SBATCH -o /dev/null

# change to scratch directory
SCRDIR=/scratch/project/$SLURM_JOB_ACCOUNT/$SLURM_JOB_USER/$SLURM_JOB_ID
mkdir -p $SCRDIR
cd $SCRDIR || exit

# get individual task from tasklist with index from the Slurm job array
TASK=$(sed -n "${SLURM_ARRAY_TASK_ID}p" $SLURM_SUBMIT_DIR/tasklist)

# copy input file and executable to scratch
cp $SLURM_SUBMIT_DIR/$TASK input
cp $SLURM_SUBMIT_DIR/myprog.x .

# execute the calculation
./myprog.x < input > output

# copy output file to submit directory
cp output $SLURM_SUBMIT_DIR/$TASK.out
```
    
    
In this example, the submit directory contains the 900 input files, the myprog.x executable,
and the jobscript file. As an input for each run, we take the filename of the input file from the created
tasklist file. We copy the input file to the scratch directory `/scratch/project/$SLURM_JOB_ACCOUNT/$SLURM_JOB_USER/$SLURM_JOB_ID`,
execute myprog.x, and copy the output file back to the submit directory, under the `$TASK.out` name. The myprog.x executable runs on one node only and must use threads to run in parallel.
Be aware that if myprog.x **is not multithreaded or multi-process (MPI)**, then all the **jobs run as single-thread programs, wasting node resources**.
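
For example, assuming myprog.x is an OpenMP application (an assumption; adapt this to your program's parallelization model), the thread count can be derived from the Slurm allocation inside the jobscript:

```bash
# assumption: myprog.x is an OpenMP program;
# $SLURM_CPUS_PER_TASK matches the --cpus-per-task 128 directive above
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./myprog.x < input > output
```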
    
## Submitting Job Array
    
    
To submit the job array, use the `sbatch --array` command. The 900 tasks of the [example above][3] may be submitted like this:
    
```console
$ sbatch -J JOBNAME --array 1-900 ./jobscript
```
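
Slurm also accepts a limit on the number of simultaneously running tasks, via the `%` separator in the array specification. For example, to let at most 100 of the 900 tasks run at any one time:

```console
$ sbatch -J JOBNAME --array 1-900%100 ./jobscript
```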
    
    
In this example, we submit a job array of 900 tasks. Each task will run on one full node and is assumed to take less than 2 hours (note the `#SBATCH` directives at the beginning of the jobscript file; do not forget to set your valid project ID and desired queue).
    
## Managing Job Array
    
    
Check the status of the job array using the `squeue --me` command, or `squeue --me --array` to list the individual tasks.
    
```console
$ squeue --me --long
             JOBID PARTITION     NAME     USER    STATE       TIME TIME_LIMI  NODES NODELIST(REASON)
2499924_[5-101]      qcpu  myarray   jansik  PENDING       0:00      1:00      1 (Resources)
```
    
Pending tasks of the array are condensed into a single line, with the range of pending task indices shown in brackets (`2499924_[5-101]` in the example above). Tasks that are already running are listed individually, one line per task, with job IDs of the `<array job ID>_<task index>` form. To list every task on its own line regardless of its state, use:

```console
$ squeue --me --array
```
    
Delete the entire job array by referencing its array job ID. Running tasks will be killed, queued tasks will be deleted:

```console
$ scancel 2499924
```
    
Deleting large job arrays may take a while. A single task may be cancelled separately by appending its index to the array job ID:

```console
$ scancel 2499924_5
```

To display status information for all of your jobs, job arrays, and individual tasks, use the `squeue --me` and `squeue --me --array` commands shown above.
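
The whole array can likewise be held and released as a single job with the standard `scontrol` subcommands:

```console
$ scontrol hold 2499924
$ scontrol release 2499924
```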
    
For more information on job arrays, see the [Slurm job array documentation][1].
    
## Examples
    
Download the examples in [capacity.zip][2], illustrating the above listed ways to run a huge number of jobs. We recommend trying out the examples before using this approach for production jobs.
    
Unzip the archive in an empty directory on the cluster and follow the instructions in the README file.
    
```console
$ unzip capacity.zip
$ cat README
```
    
[1]: https://slurm.schedmd.com/job_array.html
[2]: capacity.zip
[3]: #shared-jobscript