Commit 1987f993 authored by David Hrbáč

Corrections

parent fab36ff2
@@ -34,25 +34,19 @@ For better user experience with Singularity containers we prepared several wrapp
* image-shell
* image-update
The listed wrappers help you use the prepared Singularity images loaded as modules. You can load a Singularity image like any other module on the cluster with the `ml OS/version` command. When the module is loaded for the first time, the prepared image is copied into your home folder and is ready for use. When you load the module again, the version of the image is checked and an update (if one exists) is offered. You can then update your local copy of the image with the `image-update` command.
!!! warning
    With an image update, all user changes to the image will be overwritten.
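A first session with a prepared image might look like the following sketch. The module name `CentOS/7.3` is used here only for illustration; pick whichever prepared image module you need, and note that the exact messages printed by the wrappers may differ.

```console
$ ml CentOS/7.3      # the first load copies the prepared image into your home folder
$ image-update       # later, refresh your local copy if a newer image is offered
```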
The runscript inside the Singularity image can be run with the `image-run` command. This command automatically mounts the `/scratch` and `/apps` storage and invokes the image as writable, so user changes can be made.
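A minimal sketch of such a run (assuming a prepared CentOS module is loaded; what gets printed depends on the runscript defined in the image):

```console
$ ml CentOS/7.3
$ image-run
```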
Very similar to `image-run` is the `image-exec` command. The only difference is that `image-exec` runs a user-defined command instead of the runscript. In this case, the command to be run is specified as a parameter.
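For example, printing the OS release of the image could look like this sketch (the command and its output are illustrative):

```console
$ image-exec cat /etc/centos-release
CentOS Linux release 7.3.1708 (Core)
```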
For development, it is very useful to use an interactive shell inside the Singularity container. In this interactive shell, you can make any changes to the image you want, but be aware that you cannot use `sudo` privileged commands directly on the cluster. To invoke the interactive shell easily, just use the `image-shell` command.
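A short interactive session might look like the following sketch; the image name shown in the prompt is an example and will match whichever image module you have loaded:

```console
$ image-shell
Singularity CentOS-7.3_20180220104046.img:~> cat /etc/centos-release
CentOS Linux release 7.3.1708 (Core)
Singularity CentOS-7.3_20180220104046.img:~> exit
```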
Another useful feature of Singularity is the direct support of OpenMPI. For MPI to work properly, you have to install the same version of OpenMPI inside the image as you use on the cluster. OpenMPI/2.1.1 is installed in the prepared images. The MPI must be started outside the container. The easiest way to start MPI is to use the `image-mpi` command. This command takes the same parameters as `mpirun`, so there is no difference between running a normal MPI application and an MPI application in a Singularity container.
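For illustration, launching an MPI program through the wrapper could look like this sketch; `./contained_mpi_prog` stands for your own MPI binary built against the same OpenMPI version, and the process count must match your job allocation:

```console
$ ml CentOS/6.9
$ image-mpi -np 24 ./contained_mpi_prog
```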
## Examples
@@ -83,7 +77,7 @@ CentOS Linux release 7.3.1708 (Core)
**image-mpi**
MPI wrapper - see more in the chapter [Examples MPI](#mpi).
**image-run**
@@ -103,7 +97,7 @@ Singularity CentOS-7.3_20180220104046.img:~>
### Update Image
This command is for updating your local copy of the Singularity image. The local copy is overwritten in this case.
```console
$ ml CentOS/6.9
@@ -126,9 +120,9 @@ New version is ready. (/home/login/.singularity/images/CentOS-6.9_20180220092823
In the following example, we are using a job submitted by the command: `qsub -A PROJECT -q qprod -l select=1:mpiprocs=24:accelerator=true -I`
!!! info
    The MIC image was prepared only for the Salomon cluster.
**Code for the Offload Test**
```c
#include <stdio.h>
@@ -151,7 +145,7 @@ int main() {
}
```
**Compile and Run**
```console
[login@r38u03n975 ~]$ ml CentOS/6.9-MIC
@@ -176,9 +170,9 @@ Hello world from MIC, #of cores: 244
In the following example, we are using a job submitted by the command: `qsub -A PROJECT -q qnvidia -l select=1:ncpus=16:mpiprocs=16 -l walltime=01:00:00 -I`
!!! note
    The GPU image was prepared only for the Anselm cluster.
**Checking NVIDIA Driver Inside Image**
```console
[login@cn199.anselm ~]$ image-shell
@@ -211,7 +205,7 @@ Mon Mar 12 07:07:53 2018
In the following example, we are using a job submitted by the command: `qsub -A PROJECT -q qprod -l select=2:mpiprocs=24 -l walltime=00:30:00 -I`
!!! note
    We have seen no major performance impact for a job running in a Singularity container.
With Singularity, the MPI usage model is to call `mpirun` from outside the container, and reference the container from your `mpirun` command. Usage would look like this:
@@ -219,10 +213,7 @@ With Singularity, the MPI usage model is to call `mpirun` from outside the conta
$ mpirun -np 24 singularity exec container.img /path/to/contained_mpi_prog
```
By calling `mpirun` outside of the container, we solve several very complicated workflow aspects. For example, if `mpirun` is called from within the container, it must have a method for spawning processes on remote nodes. Historically, SSH is used for this, which means that there must be an `sshd` running within the container on the remote nodes, and this `sshd` process must not conflict with the `sshd` running on that host. It is also possible for the resource manager to launch the job and (in OpenMPI's case) the `orted` processes on the remote system, but that then requires resource manager modification and container awareness.
In the end, we do not gain anything by calling `mpirun` from within the container except for increasing the complexity levels and possibly losing out on some added performance benefits (e.g. if the container was not built with the proper OFED stack matching the host).
@@ -238,7 +229,7 @@ Singularity CentOS-6.9_20180220092823.img:~> mpirun hostname | wc -l
24
```
As you can see in this example, we allocated two nodes, but MPI can use only one node (24 processes) when used inside the Singularity image.
#### MPI Outside Singularity Image
@@ -249,7 +240,7 @@ $ image-mpi hostname | wc -l
48
```
In this case, the MPI wrapper behaves like the `mpirun` command. The `mpirun` is called outside the container, and the communication between nodes is propagated into the container automatically.
## How to Use Your Own Image on the Cluster?