diff --git a/docs.it4i/general/resource_allocation_and_job_execution.md b/docs.it4i/general/resource_allocation_and_job_execution.md
index 306588afb9bcc2b343a0736231b04f7abc0f79d4..e021017e232b65993c4b5cd7b10c497dbb6eb5c7 100644
--- a/docs.it4i/general/resource_allocation_and_job_execution.md
+++ b/docs.it4i/general/resource_allocation_and_job_execution.md
@@ -2,12 +2,12 @@
     We migrated workload managers of all clusters (including Barbora and Karolina) **from PBS to Slurm**!
     For more information on how to submit jobs in Slurm, see the [Job Submission and Execution][5] section.
 
-# Resource Allocation
-
-To run a [job][1], computational resources for this particular job must be allocated. This is done via the [Slurm][a] job workload manager software, which distributes workloads across the supercomputer.
+# Introduction to Running Jobs
 
 ## Job Submission and Execution
 
+To run a [job][1], computational resources must first be allocated for it. This is done via the [Slurm][a] workload manager, which distributes workloads across the supercomputer.
+
 The `sbatch` or `salloc` command creates a request to the Slurm job manager for allocation of specified resources.
 The resources will be allocated when available, subject to allocation policies and constraints.
 **After the resources are allocated, the jobscript or interactive shell is executed on the first of the allocated nodes.**