From fdabc6bc2d48046bfe841d31196f78e08a8de7ae Mon Sep 17 00:00:00 2001
From: Lukáš Krupčík <lukas.krupcik@vsb.cz>
Date: Wed, 7 Mar 2018 13:45:09 +0100
Subject: [PATCH] fix

---
 docs.it4i/software/tools/singularity-it4i.md | 45 ++++++++++++--------
 1 file changed, 27 insertions(+), 18 deletions(-)

diff --git a/docs.it4i/software/tools/singularity-it4i.md b/docs.it4i/software/tools/singularity-it4i.md
index 1b64a4d96..3c375c1e4 100644
--- a/docs.it4i/software/tools/singularity-it4i.md
+++ b/docs.it4i/software/tools/singularity-it4i.md
@@ -1,6 +1,6 @@
 # Singularity on IT4Innovations
 
-On clusters we have different versions of operating systems images. Below you see the available operating systems images.
+On our clusters, we provide Singularity images with different versions of operating systems. The available Singularity images are listed below.
 
 ```console
    Salomon                      Anselm
@@ -16,23 +16,17 @@ On clusters we have different versions of operating systems images. Below you se
           └── 16.04                   └── 16.04
 ```
 
-For current information on available images, refer to the `ml av` and see statement the `OS`
+For current information on the available Singularity images, run `ml av` and check the `OS` section of the output.
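+
+For example, to check which versions of a particular image are available, filter the `ml av` output by name:
+
+```console
+$ ml av CentOS
+```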
 
 ## Available Operating Systems Images
 
-On IT4Innovations clusters we support different operating system.
-
-**Supported OS:**
-
 * CentOS
 * Debian
 * Ubuntu
 * Fedora
 
-**Supported features:**
-
-* GPU - graphics cards
-* MIC - Intel Xeon Phi cards
+!!! note
+    We support graphics cards in the Anselm Singularity images and Intel Xeon Phi cards in the Salomon images (module naming scheme: OS/Version-[none|GPU|MIC]).
 
 ## IT4Innovations Wrappers
 
@@ -57,7 +51,7 @@ CentOS Linux release 7.3.1708 (Core)
 
 **image-mpi**
 
-MPI wrapper. More in the chapter [Examples MPI](#MPI)
+MPI wrapper. For more, see the chapter [Examples MPI](#mpi).
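+
+A hypothetical invocation (assuming the wrapper forwards its arguments to `mpirun` inside the loaded image; the program name is a placeholder):
+
+```console
+$ image-mpi -np 24 ./mpi_program
+```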
 
 **image-run**
 
@@ -97,12 +91,10 @@ New version is ready. (/home/login/.singularity/images/CentOS-6.9_20180220092823
 
 ## Examples
 
-In next examples, we will be using Singularity images on clusters.
+In the following examples, we will be using Singularity images on the IT4Innovations clusters.
 
 ### Load Image
 
-For classic image
-
 ```console
 $ ml CentOS/6.9
 Your image of CentOS/6.9 is at location: /home/login/.singularity/images/CentOS-6.9_20180220133305.img
@@ -111,20 +103,37 @@ Your image of CentOS/6.9 is at location: /home/login/.singularity/images/CentOS-
 !!! note
     On first use, the image is copied from /apps/all/OS/... to your /home (.singularity/images).
 
-For special image (GPU, MIC)
+For the GPU and MIC images:
 
 ```console
 $ ml CentOS/6.9-GPU
 ```
 
+```console
+$ ml CentOS/6.9-MIC
+```
+
 !!! note
-    For GPU image, you must allocate node with GPU card and for MIC image, you must allocate node with MIC cards.
+    For the GPU image, you must allocate a node with a GPU card; for the MIC image, you must allocate a node with Intel Xeon Phi cards.
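+
+For instance, an interactive job on an Anselm GPU node could be requested like this (queue name and resource values are illustrative; check the cluster documentation for the current ones):
+
+```console
+$ qsub -A PROJECT -q qnvidia -l select=1:ncpus=16 -I
+```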
 
 ### MPI
 
-Submited job `qsub -A PROJECT -q qprod -l select=2:mpiprocs=24 -l walltime=00:30:00 -I`
+For example, submit an interactive job: `qsub -A PROJECT -q qprod -l select=2:mpiprocs=24 -l walltime=00:30:00 -I`
+
+!!! note
+    We have seen no major performance impact from running a job in a Singularity container.
+
+With Singularity, the MPI usage model is to call `mpirun` from outside the container and reference the container in your `mpirun` command. Usage looks like this:
+
+```console
+$ mpirun -np 20 singularity exec container.img /path/to/contained_mpi_prog
+```
+
+By calling `mpirun` outside the container, we solve several very complicated workflow aspects. For example, if `mpirun` is called from within the container, it must have a method for spawning processes on remote nodes. Historically, `ssh` is used for this, which means that there must be an `sshd` running within the container on the remote nodes, and this `sshd` process must not conflict with the `sshd` running on that host. It is also possible for the resource manager to launch the job and (in Open MPI's case) the `orted` processes on the remote system, but that then requires resource manager modification and container awareness.
+
+In the end, we do not gain anything by calling `mpirun` from within the container except for increased complexity and possibly losing out on performance (e.g., if the container was not built with the same OFED stack as the host).
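+
+Putting it together with the interactive job above (2 nodes x 24 ranks; `hello_mpi` is a placeholder for your own MPI binary):
+
+```console
+$ ml CentOS/6.9
+$ mpirun -np 48 singularity exec /home/login/.singularity/images/CentOS-6.9_20180220133305.img ./hello_mpi
+```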
 
-### MPI Into Image
+### MPI Inside Image
 
 ```console
 $ ml CentOS/6.9
-- 
GitLab