Commit 10dc4c5f authored by Roman Sliva's avatar Roman Sliva

Update karolina-slurm.md

parent 2dee3574
Pipeline #33897 passed with warnings
@@ -58,7 +58,7 @@ There is no need to specify the number of cores and memory size.
 ## Using GPU Queues
 Access [GPU accelerated nodes][5].
-Every GPU accelerated node is divided into eight parts, each part contains one GPU.
+Every GPU accelerated node is divided into eight parts; each part contains one GPU, 16 CPU cores, and corresponding memory.
 By default only one part, i.e. 1/8 of the node - one GPU and the corresponding CPU cores and memory - is allocated.
 There is no need to specify the number of cores and memory size; on the contrary, it is undesirable.
 Some restrictions are employed to ensure fair division and efficient use of resources.
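The per-GPU allocation described above can be sketched as a batch script. This is a minimal sketch, not taken from the commit: the partition name `qgpu` and the `PROJECT-ID` account are assumptions to be checked against the cluster documentation; the point is that only the GPU count is requested, while cores and memory follow automatically.

```shell
#!/usr/bin/env bash
# Sketch: request one GPU, i.e. 1/8 of a GPU accelerated node.
# Assumed names: partition "qgpu", account "PROJECT-ID" (placeholders).
#SBATCH --account=PROJECT-ID
#SBATCH --partition=qgpu
# Request one GPU; the matching 16 CPU cores and memory are allocated
# automatically, so do not set --cpus-per-task or --mem here.
#SBATCH --gpus=1
#SBATCH --time=01:00:00

# Show the GPU assigned to this job.
nvidia-smi
```

For an interactive session the same request can be made with `salloc` instead of `sbatch`; a whole node can be requested with the `--exclusive` option mentioned in the diff context.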
@@ -121,8 +121,8 @@ To allocate whole GPU accelerated node you can also use `--exclusive` option
 ## Using Fat Queue
 Access [data analytics aka fat node][6].
-Fat node is divided into 32 parts, one part per CPU.
-By default only one part, i.e. 1/32 of the node - one CPU and corresponding memory - is allocated.
+Fat node is divided into 32 parts; each part contains one processor (24 cores) and corresponding memory.
+By default only one part, i.e. 1/32 of the node - one processor and corresponding memory - is allocated.
 To allocate the requested memory, use the `--mem` option.
 The corresponding CPUs will be allocated. The fat node has about 23 TB of memory available for jobs.
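The memory-driven allocation on the fat node can be sketched as follows. This is an illustrative sketch only: the partition name `qfat` and the `PROJECT-ID` account are assumed placeholders, while `--mem` with a size suffix is standard Slurm syntax.

```shell
# Sketch: interactive allocation on the fat node driven by memory size.
# Assumed names: partition "qfat", account "PROJECT-ID" (placeholders).
# Request 2 TB of memory; Slurm accepts K/M/G/T suffixes with --mem.
# The corresponding processors are allocated automatically.
salloc --account=PROJECT-ID --partition=qfat --mem=2T
```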
......