diff --git a/docs.it4i/software/mpi/running-mpich2.md b/docs.it4i/software/mpi/running-mpich2.md
index 1478fe1111a2a39bf872b8e55f7fd9818fa14743..0e39af3644c04a767e58adc214975012ad368de3 100644
--- a/docs.it4i/software/mpi/running-mpich2.md
+++ b/docs.it4i/software/mpi/running-mpich2.md
@@ -2,7 +2,7 @@
 
 ## MPICH2 Program Execution
 
-The MPICH2 programs use mpd daemon or ssh connection to spawn processes, no PBS support is needed. However the PBS allocation is required to access compute nodes. On Anselm, the **Intel MPI** and **mpich2 1.9** are MPICH2 based MPI implementations.
+MPICH2 programs use the MPD daemon or an SSH connection to spawn processes; no PBS support is needed. However, a PBS allocation is required to access the compute nodes. On Anselm, **Intel MPI** and **mpich2 1.9** are the MPICH2-based MPI implementations.
 
 ### Basic Usage
 
@@ -26,7 +26,7 @@ $ mpirun -ppn 1 -hostfile $PBS_NODEFILE ./helloworld_mpi.x
-In this example, we allocate 4 nodes via the express queue interactively. We set up the intel MPI environment and interactively run the helloworld_mpi.x program. We request MPI to spawn 1 process per node.
-Note that the executable helloworld_mpi.x must be available within the same path on all nodes. This is automatically fulfilled on the /home and /scratch filesystem.
+In this example, we allocate 4 nodes via the express queue interactively. We set up the Intel MPI environment and run the helloworld_mpi.x program interactively, requesting MPI to spawn 1 process per node.
+Note that the executable helloworld_mpi.x must be available within the same path on all nodes. This is automatically fulfilled on the /home and /scratch filesystems.
 
-You need to preload the executable, if running on the local scratch /lscratch filesystem
+You need to preload the executable if running on the local scratch /lscratch filesystem:
 
 ```console
 $ pwd
@@ -39,7 +39,7 @@ $ mpirun -ppn 1 -hostfile $PBS_NODEFILE ./helloworld_mpi.x
     Hello world! from rank 3 of 4 on host cn110
 ```
 
-In this example, we assume the executable helloworld_mpi.x is present on shared home directory. We run the cp command via mpirun, copying the executable from shared home to local scratch . Second mpirun will execute the binary in the /lscratch/15210.srv11 directory on nodes cn17, cn108, cn109 and cn110, one process per node.
+In this example, we assume the executable helloworld_mpi.x is present in the shared home directory. We run the cp command via mpirun, copying the executable from the shared home to the local scratch. The second mpirun then executes the binary in the /lscratch/15210.srv11 directory on nodes cn17, cn108, cn109, and cn110, one process per node.
 
 !!! note
     MPI process mapping may be controlled by PBS parameters.
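+
+As a brief, hedged illustration (the select parameters below are illustrative values for 16-core Anselm nodes, not taken from this page), the number of MPI processes per node can be requested already at allocation time:
+
+```console
+$ qsub -q qexp -l select=4:ncpus=16:mpiprocs=1 -I
+```
+
+With mpiprocs=1, PBS lists each allocated node once in $PBS_NODEFILE, so an mpirun using that hostfile starts one process per node.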