# Job submission and execution

## Job Submission
    
When allocating computational resources for the job, please specify:

1. suitable queue for your job (default is qprod)
2. number of computational nodes required
3. number of cores per node required
4. maximum wall time allocated to your calculation; note that jobs exceeding the maximum wall time will be killed
5. Project ID
6. Jobscript or interactive switch
    
    
!!! Note

    Use the **qsub** command to submit your job to a queue for allocation of the computational resources.
    
Submit the job using the qsub command:

```bash
$ qsub -A Project_ID -q queue -l select=x:ncpus=y,walltime=[[hh:]mm:]ss[.ms] jobscript
```
    
The qsub command submits the job into the queue; in other words, it creates a request to the PBS job manager for allocation of the specified resources. The resources will be allocated when available, subject to the above-described policies and constraints. **After the resources are allocated, the jobscript or interactive shell is executed on the first of the allocated nodes.**
    
!!! Note

    The PBS node specification statement (qsub -l nodes=nodespec) is not supported on the Salomon cluster.
    
### Job Submission Examples
    
    
```bash
$ qsub -A OPEN-0-0 -q qprod -l select=64:ncpus=24,walltime=03:00:00 ./myjob
```
    
In this example, we allocate 64 nodes, 24 cores per node, for 3 hours. We allocate these resources via the qprod queue; consumed resources will be accounted to the project identified by Project ID OPEN-0-0. The jobscript myjob will be executed on the first node in the allocation.
    
```bash
$ qsub -q qexp -l select=4:ncpus=24 -I
```
    
In this example, we allocate 4 nodes, 24 cores per node, for 1 hour. We allocate these resources via the qexp queue. The resources will be available interactively.
    
```bash
$ qsub -A OPEN-0-0 -q qlong -l select=10:ncpus=24 ./myjob
```
    
In this example, we allocate 10 nodes, 24 cores per node, for 72 hours. We allocate these resources via the qlong queue. The jobscript myjob will be executed on the first node in the allocation.
    
```bash
$ qsub -A OPEN-0-0 -q qfree -l select=10:ncpus=24 ./myjob
```
    
In this example, we allocate 10 nodes, 24 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. The jobscript myjob will be executed on the first node in the allocation.
    
### Intel Xeon Phi Co-Processors
    
To allocate a node with a Xeon Phi co-processor, the user needs to specify that in the select statement. Currently, only allocation of whole nodes with both Phi cards as the smallest chunk is supported. The standard PBS Pro approach through the attributes "accelerator", "naccelerators" and "accelerator_model" is used. The "accelerator_model" attribute can be omitted, since only one accelerator model is available on Salomon.

There is no specialized queue for accessing the nodes with the cards, which means that the Phi cards can be utilized in any queue, including qexp for testing/experiments, qlong for longer jobs, qfree after the project resources have been spent, etc. The Phi cards are thus also available to PRACE users. There is no need to ask for permission to utilize the Phi cards in project proposals.
    
```bash
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=24:accelerator=True:naccelerators=2:accelerator_model=phi7120 ./myjob
```
    
In this example, we allocate 1 node, with 24 cores, with 2 Xeon Phi 7120p cards, running the batch job ./myjob. The default walltime for qprod is used, i.e. 24 hours.
    
```bash
$ qsub -A OPEN-0-0 -I -q qlong -l select=4:ncpus=24:accelerator=True:naccelerators=2 -l walltime=56:00:00
```
    
In this example, we allocate 4 nodes, with 24 cores per node (totalling 96 cores), with 2 Xeon Phi 7120p cards per node (totalling 8 Phi cards), running an interactive job for 56 hours. The accelerator model name was omitted.
    
### UV2000 SMP

!!! Note

    14 NUMA nodes available on UV2000
    Per NUMA node allocation.
    Jobs are isolated by cpusets.
    
The UV2000 (node uv1) offers 3328GB of RAM and 112 cores, distributed across 14 NUMA nodes. A NUMA node packs 8 cores and approx. 236GB RAM. In PBS, the UV2000 provides 14 chunks, one chunk per NUMA node (see [Resource allocation policy](resources-allocation-policy/)). Jobs on the UV2000 are isolated from each other by cpusets, so that a job by one user may not utilize CPU or memory allocated to a job by another user. Always, full chunks are allocated; a job may only use the resources of the NUMA nodes allocated to itself.
    
```bash
$ qsub -A OPEN-0-0 -q qfat -l select=14 ./myjob
```
    
In this example, we allocate all 14 NUMA nodes (corresponding to 14 chunks), i.e. all 112 cores of the SGI UV2000 node, for 72 hours. The jobscript myjob will be executed on the node uv1.
    
```bash
$ qsub -A OPEN-0-0 -q qfat -l select=1:mem=2000GB ./myjob
```
    
In this example, we allocate 2000GB of memory on the UV2000 for 72 hours. By requesting 2000GB of memory, 10 chunks are allocated. The jobscript myjob will be executed on the node uv1.
    
### Useful Tricks

All qsub options may be [saved directly into the jobscript](#example-jobscript-for-mpi-calculation-with-preloaded-inputs). In such a case, no options to qsub are needed.
    
```bash
$ qsub ./myjob
```
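
For illustration, such a jobscript might begin with a header like the following sketch (the values shown are placeholders, not recommendations; a complete example is given in the section linked above):

```bash
#!/bin/bash
#PBS -A OPEN-0-0                  # project ID the job is accounted to
#PBS -q qprod                     # queue
#PBS -l select=4:ncpus=24         # number of nodes and cores per node
#PBS -l walltime=03:00:00         # maximum wall time

# commands executing the calculation follow here
```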
    
By default, the PBS batch system sends an e-mail only when the job is aborted. Disabling mail events completely can be done like this:
    
```bash
$ qsub -m n
```
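
Conversely, if you do want to be notified, the standard PBS mail options can be combined; for example, to receive an e-mail when the job begins, ends or aborts (the address below is a placeholder):

```bash
$ qsub -m abe -M user@example.com ./myjob
```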
    
## Advanced Job Placement

### Placement by Name

!!! Note

    Not useful for ordinary computing, suitable for node testing/benchmarking and management tasks.

Specific nodes may be selected using the PBS resource attribute host (for hostnames):
    
```bash
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=24:host=r24u35n680+1:ncpus=24:host=r24u36n681 -I
```
    
Specific nodes may be selected using the PBS resource attribute cname (for short names in the cns[0-1]+ format):
    
```bash
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=24:host=cns680+1:ncpus=24:host=cns681 -I
```
    
In this example, we allocate nodes r24u35n680 and r24u36n681, all 24 cores per node, for 24 hours. Consumed resources will be accounted to the project identified by Project ID OPEN-0-0. The resources will be available interactively.
    
### Placement by Network Location

Network location of allocated nodes in the [InfiniBand network](network/) influences the efficiency of network communication between the nodes of a job. Nodes on the same InfiniBand switch communicate faster, with lower latency, than distant nodes. To improve the communication efficiency of jobs, the PBS scheduler on Salomon is configured to allocate nodes, from the currently available resources, which are as close as possible in the network topology.

For communication-intensive jobs it is possible to set a stricter requirement: to require nodes directly connected to the same InfiniBand switch, or nodes located in the same dimension group of the InfiniBand network.

### Placement by InfiniBand Switch

Nodes directly connected to the same InfiniBand switch can communicate most efficiently. Using the same switch prevents hops in the network and provides for unbiased, most efficient network communication. There are 9 nodes directly connected to every InfiniBand switch.
    
!!! Note

    We recommend allocating compute nodes of a single switch when the best possible computational network performance is required to run the job efficiently.

Nodes directly connected to one InfiniBand switch can be allocated using node grouping on the PBS resource attribute switch.
    
In this example, we request all 9 nodes directly connected to the same switch using node grouping placement.

```bash
$ qsub -A OPEN-0-0 -q qprod -l select=9:ncpus=24 -l place=group=switch ./myjob
```
    
### Placement by Specific InfiniBand Switch

!!! Note

    Not useful for ordinary computing, suitable for testing and management tasks.

Nodes directly connected to a specific InfiniBand switch can be selected using the PBS resource attribute _switch_.

In this example, we request all 9 nodes directly connected to the r4i1s0sw1 switch.
    
```bash
$ qsub -A OPEN-0-0 -q qprod -l select=9:ncpus=24:switch=r4i1s0sw1 ./myjob
```
    
List of all InfiniBand switches:

```bash
$ qmgr -c 'print node @a' | grep switch | awk '{print $6}' | sort -u
r1i0s0sw0
r1i0s0sw1
r1i1s0sw0
r1i1s0sw1
r1i2s0sw0
...
```
    
List of all nodes directly connected to a specific InfiniBand switch:

```bash
$ qmgr -c 'p n @d' | grep 'switch = r36sw3' | awk '{print $3}' | sort
r36u31n964
r36u32n965
r36u33n966
r36u34n967
r36u35n968
r36u36n969
r37u32n970
r37u33n971
r37u34n972
```
    
### Placement by Hypercube Dimension

Nodes located in the same dimension group may be allocated using node grouping on the PBS resource attribute ehc\_[1-7]d.

| Hypercube dimension | node_group_key | #nodes per group |
| ------------------- | -------------- | ---------------- |
| 1D                  | ehc_1d         | 18               |
| 2D                  | ehc_2d         | 36               |
| 3D                  | ehc_3d         | 72               |
| 4D                  | ehc_4d         | 144              |
| 5D                  | ehc_5d         | 144,288          |
| 6D                  | ehc_6d         | 432,576          |
| 7D                  | ehc_7d         | all              |

In this example, we allocate 16 nodes in the same [hypercube dimension](7d-enhanced-hypercube/) 1D group.
    
```bash
$ qsub -A OPEN-0-0 -q qprod -l select=16:ncpus=24 -l place=group=ehc_1d -I
```
    
For better understanding:

List of all groups in dimension 1:

```bash
$ qmgr -c 'p n @d' | grep ehc_1d | awk '{print $6}' | sort | uniq -c
     18 r1i0
     18 r1i1
     18 r1i2
     18 r1i3
...
```
    
List of all nodes in a specific dimension 1 group:

```bash
$ qmgr -c 'p n @d' | grep 'ehc_1d = r1i0' | awk '{print $3}' | sort
r1i0n0
r1i0n1
r1i0n10
r1i0n11
...
```
    
## Job Management

!!! Note

    Check status of your jobs using the **qstat** and **check-pbs-jobs** commands
    
```bash
$ qstat -a
$ qstat -a -u username
$ qstat -an -u username
$ qstat -f 12345.isrv5
```
    
Example:

```bash
$ qstat -a

srv11:
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
16287.isrv5     user1    qlong    job1         6183   4  64    --  144:0 R 38:25
16468.isrv5     user1    qlong    job2         8060   4  64    --  144:0 R 17:44
16547.isrv5     user2    qprod    job3x       13516   2  32    --  48:00 R 00:58
```
    
    
In this example, user1 and user2 are running jobs named job1, job2 and job3x. The jobs job1 and job2 are using 4 nodes, 16 cores per node each. The job1 has already been running for 38 hours and 25 minutes, and job2 for 17 hours and 44 minutes. The job1 has already consumed 64 x 38.41 = 2458.6 core hours, the job3x 32 x 0.97 = 30.93 core hours. These consumed core hours will be accounted on the respective project accounts, regardless of whether the allocated cores were actually used for computations.

Check the status of your jobs using the check-pbs-jobs command. It checks for the presence of the user's PBS job processes on the execution hosts, displays their load and processes, displays the job standard and error output, and can continuously display (tail -f) the job standard or error output.
    
```bash
$ check-pbs-jobs --check-all
$ check-pbs-jobs --print-load --print-processes
$ check-pbs-jobs --print-job-out --print-job-err
$ check-pbs-jobs --jobid JOBID --check-all --print-all
$ check-pbs-jobs --jobid JOBID --tailf-job-out
```
    
Examples:

```bash
$ check-pbs-jobs --check-all
JOB 35141.dm2, session_id 71995, user user2, nodes r3i6n2,r3i6n3
Check session id: OK
Check processes
r3i6n2: OK
r3i6n3: No process
```
    
In this example we see that job 35141.dm2 currently runs no process on the allocated node r3i6n3, which may indicate an execution error.
    
```bash
$ check-pbs-jobs --print-load --print-processes
JOB 35141.dm2, session_id 71995, user user2, nodes r3i6n2,r3i6n3
Print load
r3i6n2: LOAD: 16.01, 16.01, 16.00
r3i6n3: LOAD:  0.01,  0.00,  0.01
Print processes
       %CPU CMD
r3i6n2:  0.0 -bash
r3i6n2:  0.0 /bin/bash /var/spool/PBS/mom_priv/jobs/35141.dm2.SC
r3i6n2: 99.7 run-task
...
```
    
In this example we see that job 35141.dm2 currently runs the process run-task on node r3i6n2, using one thread only, while node r3i6n3 is empty, which may indicate an execution error.
    
```bash
$ check-pbs-jobs --jobid 35141.dm2 --print-job-out
JOB 35141.dm2, session_id 71995, user user2, nodes r3i6n2,r3i6n3
Print job standard output:
======================== Job start  ==========================
Started at    : Fri Aug 30 02:47:53 CEST 2013
Script name   : script
Run loop 1
Run loop 2
Run loop 3
```
    
In this example, we see the actual output (some iteration loops) of the job 35141.dm2.
    
!!! Note

    Manage your queued or running jobs using the **qhold**, **qrls**, **qdel**, **qsig** or **qalter** commands
    
You may release your allocation at any time, using the qdel command:

```bash
$ qdel 12345.isrv5
```
    
You may kill a running job by force, using the qsig command:

```bash
$ qsig -s 9 12345.isrv5
```
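
The other commands mentioned in the note above work analogously; for instance, a queued job may be put on hold and released again like this (shown with a placeholder job ID):

```bash
$ qhold 12345.isrv5
$ qrls 12345.isrv5
```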
    
Learn more by reading the pbs man page:

```bash
$ man pbs_professional
```
    
## Job Execution

### Jobscript

!!! Note

    Prepare the jobscript to run batch jobs in the PBS queue system

The jobscript is a user-made script controlling the sequence of commands for executing the calculation. It is often written in bash, but other scripting languages may be used as well. The jobscript is supplied to the PBS **qsub** command as an argument and executed by the PBS Professional workload manager.
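
As an orientation, a trivial jobscript may be as short as the sketch below (the module and program names are illustrative only; complete, realistic examples follow later in this section):

```bash
#!/bin/bash

# load the required environment and run the calculation
module load OpenMPI      # illustrative module name
./myprog.x               # illustrative executable
```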
    
!!! Note

    The jobscript or interactive shell is executed on the first of the allocated nodes.

```bash
$ qsub -q qexp -l select=4:ncpus=24 -N Name0 ./myjob
$ qstat -n -u username

isrv5:
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
15209.isrv5     username qexp     Name0        5530   4  96    --  01:00 R 00:00
   r21u01n577/0*24+r21u02n578/0*24+r21u03n579/0*24+r21u04n580/0*24
```
    
In this example, the nodes r21u01n577, r21u02n578, r21u03n579, r21u04n580 were allocated for 1 hour via the qexp queue. The jobscript myjob will be executed on the node r21u01n577, while the nodes r21u02n578, r21u03n579, r21u04n580 are available for use as well.
    
!!! Note

    The jobscript or interactive shell is by default executed in the home directory
    
```bash
$ qsub -q qexp -l select=4:ncpus=24 -I
qsub: waiting for job 15210.isrv5 to start
qsub: job 15210.isrv5 ready

$ pwd
/home/username
```
    
In this example, 4 nodes were allocated interactively for 1 hour via the qexp queue. The interactive shell is executed in the home directory.
    
!!! Note

    All nodes within the allocation may be accessed via ssh. Unallocated nodes are not accessible to the user.
    
The allocated nodes are accessible via ssh from login nodes. The nodes may access each other via ssh as well.
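
For instance, from a login node or from within the job you may log into another node of your allocation directly (the hostname below is illustrative, taken from the example further down):

```bash
$ ssh r4i6n13
```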
    
Calculations on allocated nodes may be executed remotely via MPI, ssh, pdsh or clush. You may find out which nodes belong to the allocation by reading the $PBS_NODEFILE file:
    
```bash
$ qsub -q qexp -l select=4:ncpus=24 -I
qsub: waiting for job 15210.isrv5 to start
qsub: job 15210.isrv5 ready

$ pwd
/home/username

$ sort -u $PBS_NODEFILE
r2i5n6.ib0.smc.salomon.it4i.cz
r4i6n13.ib0.smc.salomon.it4i.cz
r4i7n0.ib0.smc.salomon.it4i.cz
r4i7n2.ib0.smc.salomon.it4i.cz

$ pdsh -w r2i5n6,r4i6n13,r4i7n[0,2] hostname
r4i6n13: r4i6n13
r2i5n6: r2i5n6
r4i7n2: r4i7n2
r4i7n0: r4i7n0
```
    
In this example, the hostname program is executed via pdsh from the interactive shell. The execution runs on all four allocated nodes. The same result would be achieved if pdsh were called from any of the allocated nodes or from the login nodes.
    
### Example Jobscript for MPI Calculation

!!! Note

    Production jobs must use the /scratch directory for I/O
    
The recommended way to run production jobs is to change to the /scratch directory early in the jobscript, copy all inputs to /scratch, execute the calculations, and copy the outputs back to the home directory.
    
```bash
#!/bin/bash

# change to scratch directory, exit on failure
SCRDIR=/scratch/work/user/$USER/myjob
mkdir -p $SCRDIR
cd $SCRDIR || exit

# copy input file to scratch
cp $PBS_O_WORKDIR/input .
cp $PBS_O_WORKDIR/mympiprog.x .

# load the MPI module
module load OpenMPI

# execute the calculation
mpiexec -pernode ./mympiprog.x

# copy output file to home
cp output $PBS_O_WORKDIR/.

# exit
exit
```
    
In this example, some directory on /home holds the input file input and the executable mympiprog.x. We create the directory myjob on the /scratch filesystem, copy the input and executable files from the /home directory where qsub was invoked ($PBS_O_WORKDIR) to /scratch, execute the MPI program mympiprog.x, and copy the output file back to the /home directory. The mympiprog.x is executed as one process per node, on all allocated nodes.
    
    
!!! Note

    Consider preloading inputs and executables onto [shared scratch](storage/) before the calculation starts.

In some cases, it may be impractical to copy the inputs to scratch and the outputs to home. This is especially true when very large input and output files are expected, or when the files should be reused by a subsequent calculation. In such a case, it is the user's responsibility to preload the input files on the shared /scratch before job submission and to retrieve the outputs manually after all calculations are finished.
    
    
!!! Note

    Store the qsub options within the jobscript. Use the **mpiprocs** and **ompthreads** qsub options to control the MPI job execution.

### Example Jobscript for MPI Calculation With Preloaded Inputs

Example jobscript for an MPI job with preloaded inputs and executables; the options for qsub are stored within the script:
    
```bash
#!/bin/bash
#PBS -q qprod
#PBS -N MYJOB
#PBS -l select=100:ncpus=24:mpiprocs=1:ompthreads=24
#PBS -A OPEN-0-0

# change to scratch directory, exit on failure
SCRDIR=/scratch/work/user/$USER/myjob
cd $SCRDIR || exit

# load the MPI module
module load OpenMPI

# execute the calculation
mpiexec ./mympiprog.x

# exit
exit
```
    
In this example, the input and executable files are assumed to be preloaded manually in the /scratch/work/user/$USER/myjob directory. Note the **mpiprocs** and **ompthreads** qsub options, controlling the behavior of the MPI execution. The mympiprog.x is executed as one process per node, on all 100 allocated nodes. If mympiprog.x implements OpenMP threads, it will run 24 threads per node.
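
For comparison, a pure MPI variant of the same select statement, starting 24 MPI processes per node with one thread each, might look like this (a sketch only; adjust to your application):

```bash
#PBS -l select=100:ncpus=24:mpiprocs=24:ompthreads=1
```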
    
    
### Example Jobscript for Single Node Calculation

!!! Note

    The local scratch directory is often useful for single-node jobs. The local scratch will be deleted immediately after the job ends. Be very careful, use of the RAM disk filesystem is at the expense of operational memory.
    
Example jobscript for a single-node calculation, using [local scratch](storage/) on the node:
    
```bash
#!/bin/bash

# change to local scratch directory
cd /lscratch/$PBS_JOBID || exit

# copy input file to scratch
cp $PBS_O_WORKDIR/input .
cp $PBS_O_WORKDIR/myprog.x .

# execute the calculation
./myprog.x

# copy output file to home
cp output $PBS_O_WORKDIR/.

# exit
exit
```
    
In this example, some directory on /home holds the input file input and the executable myprog.x. We copy the input and executable files from the home directory where qsub was invoked ($PBS_O_WORKDIR) to the local scratch /lscratch/$PBS_JOBID, execute myprog.x, and copy the output file back to the /home directory. The myprog.x runs on one node only and may use threads.