[Slurm][1] workload manager is used to allocate and access Barbora cluster and Complementary systems resources. Karolina cluster coming soon...
A `man` page exists for all Slurm commands, and each command also provides a `--help` option with a brief summary of its options. Slurm [documentation][c] and [man pages][d] are also available online.
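For example, to open the manual page for `squeue` or print a brief option summary for `sinfo`:

```console
$ man squeue
$ sinfo --help
```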
## Getting Partitions Information
...
...
qdgx up 2-00:00:00 0/1/0/1 cn202
qviz up 8:00:00 0/2/0/2 vizserv[1-2]
```
The `NODES(A/I/O/T)` column summarizes node counts per state, where `A/I/O/T` stands for `allocated/idle/other/total`.
On the Barbora cluster, all queues/partitions provide full node allocation, i.e. whole nodes are allocated to jobs.
On Complementary systems, only some queues/partitions provide full node allocation; see the [Complementary systems documentation][2] for details.
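To query the node-state summary for a single partition, you can pass a custom output format to `sinfo`: `%P` prints the partition name and `%F` prints the `allocated/idle/other/total` counts. A minimal sketch, with the `qviz` partition and its counts taken from the listing above:

```console
$ sinfo -p qviz -o "%P %F"
PARTITION NODES(A/I/O/T)
qviz 0/2/0/2
```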
## Getting Job Information
Show all jobs on the system:
```console
$ squeue
```
Show my jobs:
```console
...
...
Show my jobs using the long output format (includes time limit):
```console
$ squeue --me -l
```
Show my jobs in running state:
```console
...
...
Run an interactive job with X11 forwarding:
```console
$ salloc -A PROJECT-ID -p qcpu_exp --x11
```
To finish the interactive job, use the `exit` command or the Ctrl+D (`^D`) control sequence.
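A complete interactive session might look as follows; this is a sketch, and the job ID, node name, and the commands run inside the allocation are illustrative only:

```console
$ salloc -A PROJECT-ID -p qcpu_exp
salloc: Granted job allocation 1234567
$ srun hostname
cn101
$ exit
salloc: Relinquishing job allocation 1234567
```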
!!! warning
    Do not use `srun` for initiating interactive jobs; subsequent `srun` or `mpirun` invocations would block forever.