Commit 0128bd83 authored by Roman Sliva

Update slurm-job-submission-and-execution.md
## Getting Partitions Information
Display partitions/queues on system:
```console
$ sinfo -s
```
On Barbora cluster, all queues/partitions provide full node allocation; whole nodes are allocated to a job.
On Complementary systems, only some queues/partitions provide full node allocation; see [Complementary systems documentation][2] for details.
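To check how a particular partition is configured, e.g. whether nodes are allocated exclusively, you can inspect its settings (the partition name `qcpu` is used here only as an example):

```console
$ scontrol show partition qcpu
```

The output lists the partition's properties, such as `MaxTime`, `Nodes`, and the `OverSubscribe` setting.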
## Getting Job Information
Show my jobs:

```console
$ squeue --me
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
               104      qcpu interact     user  R       1:48      2 cn[101-102]
```
Show job details for specific job:

```console
$ scontrol show job JOBID
```
Show job details for executing job from job session:

```console
$ scontrol show job $SLURM_JOBID
```
Show my jobs using long output format (includes time limit):

```console
$ squeue --me -l
```
Show all jobs on system:

```console
$ squeue
```
Show my jobs in running state:

```console
$ squeue --me -t running
```
Show my jobs in pending state:
```console
$ squeue --me -t pending
```
Show jobs for a given project (Slurm account):

```console
$ squeue -A PROJECT-ID
```
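The default `squeue` output can be tailored with a format string; for example, to show job ID, partition, job name, compact state, and elapsed time (this particular field selection is just an illustration):

```console
$ squeue --me -o "%.10i %.10P %.20j %.3t %.10M"
```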
## Running Interactive Jobs
Run interactive job - queue qcpu_exp, one node and one task by default:

```console
$ salloc -A PROJECT-ID -p qcpu_exp
```
Run interactive job on four nodes, 36 tasks per node (recommended value for the Barbora cluster CPU partition, based on node core count), with a two-hour time limit:

```console
$ salloc -A PROJECT-ID -p qcpu -N 4 --ntasks-per-node 36 -t 2:00:00
```
Run interactive job with X11 forwarding:

```console
$ salloc -A PROJECT-ID -p qcpu_exp --x11
```
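Within an interactive allocation, commands are launched on the allocated nodes via `srun`; for example, to verify which nodes were granted:

```console
$ srun hostname
```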
## Running Batch Jobs
Run batch job:

```console
$ sbatch script.sh
```
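A minimal `script.sh` might look as follows; the project ID, partition, and resource values are placeholders to adjust for your job:

```bash
#!/usr/bin/env bash
#SBATCH --job-name MyJobName
#SBATCH --account PROJECT-ID
#SBATCH --partition qcpu
#SBATCH --nodes 1
#SBATCH --ntasks-per-node 36
#SBATCH --time 1:00:00

# launch the parallel work on the allocated node(s)
srun hostname
```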
See relevant [Slurm srun documentation][3] for details.
Get job nodelist:

```
$ echo $SLURM_JOB_NODELIST
cn[101-102]
```
Expand nodelist to list of nodes:

```
$ scontrol show hostnames
cn101
cn102
```
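The expanded list can be used directly in shell loops inside a job session, e.g. to act on each node name in turn:

```console
$ for host in $(scontrol show hostnames); do echo "node: $host"; done
```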
## Modifying Jobs
In general:
```
$ scontrol update JobId=JOBID ATTR=VALUE
```
Modify job's time limit:

```
$ scontrol update JobId=JOBID TimeLimit=4:00:00
```
Set/modify job's comment:

```
$ scontrol update JobId=JOBID Comment='The best job ever'
```
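Pending jobs can also be held back from scheduling and later released again (this does not affect a job that is already running):

```console
$ scontrol hold JOBID
$ scontrol release JOBID
```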
## Deleting Jobs
Delete job by job id:

```
$ scancel JOBID
```
Delete all my jobs:

```
$ scancel --me
```
Delete all my jobs in interactive mode, confirming every action:

```
$ scancel --me -i
```
Delete all my running jobs:

```
$ scancel --me -t running
```
Delete all my pending jobs:

```
$ scancel --me -t pending
```
Delete all my pending jobs for project PROJECT-ID:

```
$ scancel --me -t pending -A PROJECT-ID
```
[1]: https://slurm.schedmd.com/
[2]: /cs/job-scheduling/#partitions
[3]: https://slurm.schedmd.com/srun.html#SECTION_OUTPUT-ENVIRONMENT-VARIABLES