# Singularity on IT4Innovations
On our clusters, Singularity images of the main Linux distributions are prepared. List of available Singularity images (05.04.2018):
```console
Salomon Anselm
...
└── 16.04-GPU
```
Current information about available Singularity images can be obtained by the `ml av` command. The images are listed in the `OS` section.
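For illustration, the listing might look like this (a sketch; the exact module names and versions depend on the cluster and date):

```console
$ ml av
...
------------------------------ OS ------------------------------
   CentOS/6.9    Ubuntu/16.04
...
```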
The bootstrap scripts, wrappers, features, etc. are located [here](https://code.it4i.cz/sccs/it4i-singularity).
## IT4Innovations Singularity Wrappers
For a better user experience with Singularity containers, we prepared several wrappers:
* image-exec
* image-mpi
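For example, `image-exec` runs a single command inside the image (a minimal sketch; the printed release string is illustrative):

```console
$ ml CentOS/6.9
$ image-exec cat /etc/centos-release
CentOS release 6.9 (Final)
```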
In the following example, we are using a job submitted by the command: `qsub -A PROJECT -q qnvidia -l select=1:ncpus=16:mpiprocs=16 -l walltime=01:00:00 -I`
!!! note
    The GPU image was prepared only for the Anselm cluster.
**Checking the NVIDIA driver inside the image**
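A minimal sketch of such a check, assuming the GPU image module is loaded on a GPU node; `nvidia-smi` prints the driver version visible inside the container:

```console
$ image-exec nvidia-smi
```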
!!! note
    We have seen no major performance impact from running a job in a Singularity container.
With Singularity, the MPI usage model is to call `mpirun` from outside the container and reference the container from your `mpirun` command. Usage would look like this:
```console
$ mpirun -np 24 singularity exec container.img /path/to/contained_mpi_prog
```
By calling `mpirun` outside of the container, we solve several very complicated workflow aspects. For example, if `mpirun` is called from within the container, it must have a method for spawning processes on remote nodes. Historically, SSH has been used for this, which means that there must be an `sshd` running within the container on the remote nodes, and this `sshd` process must not conflict with the `sshd` running on that host! It is also possible for the resource manager to launch the job and (in Open MPI's case) the `orted` processes on the remote system, but that then requires resource manager modification and container awareness.
In the end, we do not gain anything by calling `mpirun` from within the container except for increasing the complexity levels and possibly losing out on some added performance benefits (e.g., if a container was not built with the same OFED as the host).
#### MPI Inside Singularity Image
```console
$ ml CentOS/6.9
Singularity CentOS-6.9_20180220092823.img:~> mpirun hostname | wc -l
24
```
!!! warning
    As you can see in this example, we allocated two nodes, but MPI can use only one node (24 processes) when it is used inside the Singularity image.
#### MPI Outside Singularity Image
```console
$ ml CentOS/6.9
$ image-mpi hostname | wc -l
48
```
In this case, the MPI wrapper behaves like the `mpirun` command. `mpirun` is called outside the container, and the communication between nodes is propagated into the container automatically.
## How to Use Your Own Image on the Cluster?
* Prepare the image on your computer
* Transfer the image to your `/home` directory on the cluster (for example, to `~/.singularity/image`)
```console
local:$ scp container.img login@login4.salomon.it4i.cz:~/.singularity/image/container.img
```
* Load module Singularity (`ml Singularity`)
* Use your image
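A minimal sketch of these steps, assuming the image was copied to `~/.singularity/image` as above:

```console
$ ml Singularity
$ singularity shell ~/.singularity/image/container.img
Singularity container.img:~>
```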
!!! note
    If you want to use the Singularity wrappers with your own images, load the module `Singularity-wrappers/master` and set the environment variable `IMAGE_PATH_LOCAL=/path/to/container.img`.
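For example (a sketch; use whatever image path you chose above):

```console
$ ml Singularity-wrappers/master
$ export IMAGE_PATH_LOCAL=~/.singularity/image/container.img
$ image-exec hostname
```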
## How to Edit an IT4Innovations Image?
* Transfer the image to your computer
```console
local:$ scp login@login4.salomon.it4i.cz:~/.singularity/image/container.img container.img
```
* Modify the image
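One possible way to do this, assuming Singularity is installed on your computer and you have root access there (a sketch; the installed package is just an example):

```console
local:$ sudo singularity shell --writable container.img
Singularity container.img:~> yum install -y wget
```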
* Transfer the image from your computer to your `/home` directory on the cluster
```console
local:$ scp container.img login@login4.salomon.it4i.cz:~/.singularity/image/container.img
```
* Load module Singularity (`ml Singularity`)
* Use your image