From b589b2773f25e8e246df5a1343a93b98aa6a471b Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Roman=20Sl=C3=ADva?= <roman.sliva@vsb.cz>
Date: Tue, 5 Sep 2023 08:11:38 +0200
Subject: [PATCH] Update karolina-slurm.md

---
 docs.it4i/general/karolina-slurm.md | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/docs.it4i/general/karolina-slurm.md b/docs.it4i/general/karolina-slurm.md
index 7bfe2459f..5e03d09a1 100644
--- a/docs.it4i/general/karolina-slurm.md
+++ b/docs.it4i/general/karolina-slurm.md
@@ -4,10 +4,11 @@
 
 [Slurm][1] workload manager is used to allocate and access Karolina cluster's resources.
 General information about Slurm use at IT4Innovations can be found at [Slurm Job Submission and Execution][2].
+This page describes Slurm settings and usage specific to the Karolina cluster.
 
-## Getting Partition Information
+## Partition Information
 
-Display partitions/queues on system:
+Partitions/queues on the system:
 
 ```console
 $ sinfo -s
@@ -27,6 +28,8 @@ qfat            up 2-00:00:00          0/1/0/1 sdf1
 qviz            up    8:00:00          0/2/0/2 viz[1-2]
 ```
 
+For more information about Karolina's queues, see [this page][8].
+
 Graphical representation of cluster usage, partitions, nodes, and jobs could be found
 at [https://extranet.it4i.cz/rsweb/karolina][3]
 
@@ -61,7 +64,7 @@ Access [GPU accelerated nodes][5].
 Every GPU accelerated node is divided into eight parts, each part contains one GPU, 16 CPU cores and corresponding memory.
 By default only one part i.e. 1/8 of the node - one GPU and corresponding CPU cores and memory is allocated.
 There is no need to specify the number of cores and memory size, on the contrary, it is undesirable.
-There are emloyed some restrictions which aim to provide fair division and efficient use of resources.
+Some restrictions are employed to provide fair division and efficient use of node resources.
 
 ```console
 #!/usr/bin/bash
@@ -75,13 +78,13 @@ There are emloyed some restrictions which aim to provide fair division and effic
 To allocate more GPUs use `--gpus` option.
 The default behavior is to allocate enough nodes to satisfy the requested resources as expressed by --gpus option and without delaying the initiation of the job.
 
-Following code requests four gpus, scheduler can allocate one to four nodes depending on cluster state to fulfil the request.
+The following code requests four GPUs; depending on the actual cluster state, the scheduler can allocate from one up to four nodes to fulfil the request.
 
 ```console
 #SBATCH --gpus 4
 ```
 
-Following code requests 16 gpus, scheduler can allocate two to sixteen nodes depending on cluster state to fulfil the request.
+The following code requests 16 GPUs; depending on the actual cluster state, the scheduler can allocate from two up to sixteen nodes to fulfil the request.
 
 ```console
 #SBATCH --gpus 16
@@ -121,7 +124,7 @@ To allocate whole GPU accelerated node you can also use `--exclusive` option
 ## Using Fat Queue
 
 Access [data analytics aka fat node][6].
-Fat node is divided into 32 parts, each part contains one processor (24 cores) and corresponding memory.
+The fat node is divided into 32 parts; each part contains one socket/processor (24 cores) and corresponding memory.
 By default only one part i.e. 1/32 of the node - one processor and corresponding memory is allocated.
 
 To allocate requested memory use `--mem` option.
@@ -137,6 +140,8 @@ Corresponding CPUs wil be allocated. Fat node has about 23TB of memory available
 ...
 ```
 
+You can also specify CPU-oriented options (such as `--cpus-per-task`); the appropriate memory will then be allocated to the job.
+
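+A minimal sketch (the value 24, i.e. one fat-node socket, is only an illustrative choice): requesting CPU cores this way allocates the corresponding memory automatically.
+
+```console
+# illustrative value: 24 cores correspond to one socket/processor of the fat node
+#SBATCH --cpus-per-task 24
+```
+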
 To allocate whole fat node use `--exclusive` option
 
 ```console
@@ -166,3 +171,4 @@ $ salloc -A PROJECT-ID -p qviz --exclusive
 [5]: /karolina/compute-nodes/#compute-nodes-with-a-gpu-accelerator
 [6]: /karolina/compute-nodes/#data-analytics-compute-node
 [7]: /karolina/visualization/
+[8]: /general/karolina-queues
-- 
GitLab