diff --git a/docs.it4i/general/slurmtopbs.md b/docs.it4i/general/slurmtopbs.md
index 810bd5af141163fcd2a74cf27a3d606eeb909fed..fc2588da5df277bb60a83509352f1d5253e4baa8 100644
--- a/docs.it4i/general/slurmtopbs.md
+++ b/docs.it4i/general/slurmtopbs.md
@@ -1,4 +1,4 @@
-# Migrating From Slurm
+# Migrating From SLURM
 
 SLURM-optimized parallel jobs will not run under PBS out of the box.
 Conversion to PBS standards is necessary. Here we provide hints on how to proceed.
@@ -6,21 +6,22 @@ Conversion to PBS standards is necessary. Here we provide hints on how to procee
 Note that `mpirun` is used here as an alternative to SLURM's `srun`. The `-n` flag regulates the number of tasks spawned by MPI. The path to the script being run by MPI must be absolute, and the script permissions must allow reading and execution.
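+
+As a minimal illustration (the script path and task count are hypothetical placeholders), a SLURM launch such as `srun -n 4 ./myscript.sh` would become:
+
+```
+# make sure the script is readable and executable
+chmod +rx /home/user/myscript.sh
+
+# launch 4 MPI tasks; note the absolute path to the script
+mpirun -n 4 /home/user/myscript.sh
+```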
 
 PBS provides some useful variables that may be used in jobscripts,
-`PBS_O_WORKDIR` and `PBS_JOBID`. For example:  
-  
-The `PBS_O_WORKDIR` returns the directory, where the `qsub` command was submitted.  
+for example `PBS_O_WORKDIR` and `PBS_JOBID`:
+
+The `PBS_O_WORKDIR` returns the directory from which the `qsub` command was submitted.
 The `PBS_JOBID` returns the numerical identifier of the job.
 Note that `qsub` always starts execution in the `$HOME` directory.
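+
+A minimal jobscript sketch using these variables (the queue name, resource selection, and script path below are hypothetical placeholders):
+
+```
+#!/bin/bash
+#PBS -q qexample
+#PBS -l select=1
+
+# qsub starts execution in $HOME, so change to the submission directory
+cd $PBS_O_WORKDIR
+
+# use the job ID to keep output files from different runs apart
+echo "Job $PBS_JOBID submitted from $PBS_O_WORKDIR"
+
+# $PBS_O_WORKDIR is absolute, so the script path passed to mpirun is absolute
+mpirun -n 4 $PBS_O_WORKDIR/myscript.sh > result.$PBS_JOBID.txt
+```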
 
-## Migrating PyTorch from SLURM
+## Migrating PyTorch From SLURM
 
 Intel MPI provides some useful variables that may be used in the scripts executed via MPI.
-these include `PMI_RANK`,`PMI_SIZE` and `MPI_LOCALRANKID`.  
+These include `PMI_RANK`, `PMI_SIZE`, and `MPI_LOCALRANKID`.
 
-- The `PMI_RANK` and `MPI_LOCALRANKID` returns the process rank within the MPI_COMM_WORLD communicator - the process number 
-- The `PMI_SIZE` returns the process rank within the MPI_COMM_WORLD communicator - the number of processes  
+- `PMI_RANK` returns the process rank within the MPI_COMM_WORLD communicator - the global process number; `MPI_LOCALRANKID` returns the rank of the process within its node
+- `PMI_SIZE` returns the size of the MPI_COMM_WORLD communicator - the total number of processes
 
 For example:
+
 ```
 $ mpirun -n 4 /bin/bash -c 'echo $PMI_SIZE'
 4
@@ -28,6 +29,7 @@ $ mpirun -n 4 /bin/bash -c 'echo $PMI_SIZE'
 4
 4
 ```
+
 In a typical multi-GPU, multi-node setting using PyTorch, one needs to know:
 
 - World-size - i.e. the total number of GPUs in the system
@@ -43,6 +45,6 @@ The required changes are:
 - To get the local GPU ID on the node (which can be used to manually set `CUDA_VISIBLE_DEVICES`), access the `MPI_LOCALRANKID` variable; see the sketch below.
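+
+A minimal sketch of this mapping, assuming a hypothetical wrapper script `run_train.sh` launched by `mpirun` (one process per GPU), that `MASTER_ADDR` and `MASTER_PORT` are exported in the jobscript (e.g. taken from the first host in `$PBS_NODEFILE`) and propagated by `mpirun`, and that the training script reads `RANK`, `WORLD_SIZE`, and `LOCAL_RANK` from the environment:
+
+```
+#!/bin/bash
+# run_train.sh - hypothetical per-process wrapper launched by mpirun
+export WORLD_SIZE=$PMI_SIZE                    # total number of processes
+export RANK=$PMI_RANK                          # global rank of this process
+export LOCAL_RANK=$MPI_LOCALRANKID             # rank of this process on its node
+export CUDA_VISIBLE_DEVICES=$MPI_LOCALRANKID   # pin this process to a single GPU
+
+# MASTER_ADDR and MASTER_PORT are assumed to be set in the jobscript and
+# propagated by mpirun; the training script is expected to read the
+# variables above (e.g. for torch.distributed env:// initialization)
+python /absolute/path/to/train.py
+```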
 
 !!! hint
-    Some jobs may greatly benefit from [mixed precision training][1] since the new NVIDIA A100 GPUs have excellent support for it. 
+    Some jobs may greatly benefit from [mixed precision training][1] since the new NVIDIA A100 GPUs have excellent support for it.
 
 [1]: https://pytorch.org/docs/stable/amp.html