diff --git a/docs.it4i/cs/job-scheduling.md b/docs.it4i/cs/job-scheduling.md
index 1becfa65308a044b7fec6556387854d93a620a53..0eb40966fde36c843d69edb98b1db7ceb0347465 100644
--- a/docs.it4i/cs/job-scheduling.md
+++ b/docs.it4i/cs/job-scheduling.md
@@ -105,7 +105,7 @@ sbatch -A PROJECT-ID -p p01-arm -N=8 ./script.sh
 
 FPGAs are treated as resources. See below for more details about resources.
 
-Partial allocation - per FPGA, resource separation is not enforced. 
+Partial allocation is possible per FPGA; however, resource separation is not enforced.
 
 One FPGA:
 
@@ -225,8 +225,14 @@ Users can select nodes based on the feature tags using --constraint option.
 | gpgpu | equipped with GPGPU |
 | fpga | equipped with FPGA |
 | nvdimm | equipped with NVDIMMs |
-| ht | Hyperthreading enabled | 
-| noht | Hyperthreading disabled | 
+| ht | Hyperthreading enabled |
+| noht | Hyperthreading disabled |
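+
+For example, to allocate a node with hyperthreading disabled (the p02-intel partition name is illustrative):
+
+```
+$ salloc -A PROJECT-ID -p p02-intel --constraint=noht
+```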
 
 ```
 $ sinfo -o '%16N %f'
@@ -255,6 +261,7 @@ $ scontrol -d show node p02-intel02 | grep ActiveFeatures
 Slurm supports the ability to define and schedule arbitrary resources - Generic RESources (GRES) in Slurm's terminology. We use GRES for scheduling/allocating GPGPUs and FPGAs.
 
-Get information about GRES on node:
+Get information about GRES on a node:
+
 ```
 $ scontrol -d show node p03-amd01 | grep Gres=
    Gres=gpgpu:amd_mi100:4,fpga:xilinx_alveo_u250:2
@@ -263,6 +270,7 @@ $ scontrol -d show node p03-amd02 | grep Gres=
 ```
 
-Request specified GRES. GRES entry is using format "name[[:type]:count", in the following example name is fpga, type is xilinx_alveo_u280, and count is count 2.
+Request a specific GRES. A GRES entry uses the format "name[[:type]:count]"; in the following example, the name is fpga, the type is xilinx_alveo_u280, and the count is 2.
+
 ```
 $ salloc -A PROJECT-ID -p p03-amd --gres=fpga:xilinx_alveo_u280:2
 salloc: Granted job allocation XXX