Starting September 20, we are migrating Karolina's workload manager **from PBS to Slurm**.
For more information on how to submit jobs in Slurm, see the [Slurm Job Submission and Execution][8] section.
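As a rough illustration of Slurm submission, a minimal batch script might look as follows. This is a hedged sketch, not Karolina-specific guidance: the account, partition, and resource values are placeholders, and the options valid on Karolina are listed in the section referenced above.

```shell
#!/usr/bin/bash
# Minimal Slurm batch script sketch; all values below are placeholders.
#SBATCH --job-name=myjob
#SBATCH --account=PROJECT-ID     # your project ID (placeholder)
#SBATCH --partition=qcpu         # partition name (placeholder)
#SBATCH --nodes=1
#SBATCH --time=01:00:00

srun hostname                    # replace with your application
```

The script would then be submitted with `sbatch script.sh` and its state inspected with `squeue --me`.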
To run a [job][1], computational resources for this particular job must be allocated. This is done via the [PBS Pro][b] job workload manager software, which distributes workloads across the supercomputer. Extensive information about PBS Pro can be found in the [PBS Pro User's Guide][2].
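For illustration, a PBS Pro job is typically submitted with `qsub`. This is a sketch using standard PBS syntax; the project ID, queue name, and resource values below are placeholders, not Karolina-specific settings:

```shell
# Submit job.sh under project PROJECT-ID to queue qcpu (placeholders),
# requesting 2 chunks for 1 hour of walltime:
qsub -A PROJECT-ID -q qcpu -l select=2,walltime=01:00:00 job.sh
```

The state of a submitted job can then be checked with `qstat`.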
...
In many cases, it is useful to submit a huge (100+) number of computational jobs...
Read more on the [Capacity Computing][6] page.
## Vnode Allocation
The `qgpu` queue on Karolina takes advantage of the division of nodes into vnodes. An accelerated node equipped with two 64-core processors and eight GPU cards is treated as eight vnodes, each containing 16 CPU cores and one GPU card. Vnodes can be allocated to jobs individually: through a precise definition of the resource list at job submission, you may allocate a varying number of resources/GPU cards according to your needs.
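As a sketch of the idea, requesting `select=N` in the `qgpu` queue would allocate N vnodes, i.e. N GPU cards with 16 CPU cores each. The project ID and script name below are placeholders, and the exact select syntax should be checked against the Karolina queue documentation:

```shell
# Allocate 2 vnodes in the qgpu queue: 2 GPU cards and 2x16 CPU cores
# (PROJECT-ID and job.sh are placeholders):
qsub -A PROJECT-ID -q qgpu -l select=2 job.sh

# Allocate all 8 vnodes, i.e. one full accelerated node (8 GPUs, 128 cores):
qsub -A PROJECT-ID -q qgpu -l select=8 job.sh
```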