diff --git a/docs.it4i/cs/job-scheduling.md b/docs.it4i/cs/job-scheduling.md
index 83938169bf1dc4d7d84cfa5df68c407d54e3bf69..a8e93d75802c8b749b202878e73364a7bba5689c 100644
--- a/docs.it4i/cs/job-scheduling.md
+++ b/docs.it4i/cs/job-scheduling.md
@@ -304,14 +304,16 @@ $ scontrol -d show node p02-intel02 | grep ActiveFeatures
    ActiveFeatures=x86_64,intel,icelake,ib,fpga,bitware,nvdimm,noht
 ```
 
-## Resources
+## Resources (GRES)
 
 Slurm supports the definition and scheduling of arbitrary resources, called Generic RESources (GRES) in Slurm's terminology. We use GRES for scheduling/allocating GPGPUs and FPGAs.
 
 !!! warning
    Use only allocated GPGPUs and FPGAs. Resource separation is not enforced. If you use non-allocated resources, you may observe strange behaviour and run into trouble.
 
-Get information about GRES on node:
+### Node resources
+
+Get information about GRES on a node:
 
 ```
 $ scontrol -d show node p02-intel01 | grep Gres=
@@ -324,6 +326,10 @@ $ scontrol -d show node p03-amd02 | grep Gres=
    Gres=gpgpu:amd_mi100:4,fpga:xilinx_alveo_u280:2
 ```
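+
+GRES for all nodes can also be listed at once with `sinfo`. A minimal sketch (the format string is illustrative, not taken from the original docs): `%N` prints node names and `%G` the generic resources configured on them.
+
+```
+$ sinfo -N -o "%N %G"
+```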
 
+### Request GPGPUs or FPGAs
+
+To allocate the required resources, use the `--gres` option of `salloc`/`srun`.
+
 Example: Allocate one FPGA
 ```
 $ salloc -A PROJECT-ID -p p03-amd --gres fpga:1
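+
+# Illustrative variant, not taken from the original docs: the same --gres syntax
+# also selects GPGPUs by GRES name, type, and count, e.g. two MI100 GPGPUs:
+$ salloc -A PROJECT-ID -p p03-amd --gres gpgpu:amd_mi100:2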