# Singularity on IT4Innovations
On the clusters, we provide different versions of operating system Singularity images. The available images are listed below.
```console
Salomon                     Anselm
   ...                         ...
   └── 16.04                   └── 16.04
```
For current information on the available Singularity images, refer to `ml av` and check the `OS` entries.
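As an illustrative sketch (the module tree path and the exact listing are assumptions, not actual cluster output), checking the available images with Lmod might look like:

```console
$ ml av CentOS

------------------- /apps/modules/OS -------------------
   CentOS/6.9    CentOS/6.9-GPU    CentOS/6.9-MIC
```

The `-GPU` and `-MIC` suffixes mark images built with accelerator support, following the OS/Version-[none|GPU|MIC] naming scheme described below.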
## Available Operating Systems Images
On the IT4Innovations clusters, we support several operating systems.
**Supported OS:**
* CentOS
* Debian
* Ubuntu
* Fedora
**Supported features:**
* GPU - graphics cards
* MIC - Intel Xeon Phi cards
!!! note
    GPU cards are supported on the Anselm Singularity images and Intel Xeon Phi cards on the Salomon images (OS/Version-[none|GPU|MIC]).
## IT4Innovations Wrappers
**image-mpi**
MPI wrapper. More in the chapter [Examples MPI](#mpi).
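A hypothetical usage sketch (the exact arguments accepted by `image-mpi` are an assumption; the wrapper is assumed to forward them to `mpirun` against the currently loaded image):

```console
$ ml CentOS/6.9
$ image-mpi -np 24 ./hello_mpi
```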
**image-run**
## Examples
In the next examples, we will use Singularity images on the IT4Innovations clusters.
### Load Image
For the classic image:
```console
$ ml CentOS/6.9
Your image of CentOS/6.9 is at location: /home/login/.singularity/images/CentOS-6.9_20180220133305.img
!!! note
    On first use, the image is copied from /apps/all/OS/... to your /home directory (.singularity/images).
For the GPU and MIC images:
```console
$ ml CentOS/6.9-GPU
```
```console
$ ml CentOS/6.9-MIC
```
!!! note
    For the GPU image, you must allocate a node with a GPU card; for the MIC image, you must allocate a node with Intel Xeon Phi cards.
### MPI
For example, submit an interactive job: `qsub -A PROJECT -q qprod -l select=2:mpiprocs=24 -l walltime=00:30:00 -I`
!!! note
We have seen no major performance impact from running a job in a Singularity container.
With Singularity, the MPI usage model is to call `mpirun` from outside the container and reference the container from your `mpirun` command. Usage would look like this:
```console
$ mpirun -np 20 singularity exec container.img /path/to/contained_mpi_prog
```
By calling `mpirun` outside the container, we avoid several very complicated workflow aspects. For example, if `mpirun` is called from within the container, it must have a method for spawning processes on remote nodes. Historically, SSH is used for this, which means that an sshd must be running within the container on the remote nodes, and this sshd process must not conflict with the sshd running on the host. It is also possible for the resource manager to launch the job and (in Open MPI's case) the orted processes on the remote system, but that requires resource manager modification and container awareness.
In the end, we do not gain anything by calling `mpirun` from within the container except increased complexity and possibly losing some performance benefits (e.g. if the container was not built with the same OFED stack as the host).
### MPI Inside Image
```console
$ ml CentOS/6.9
```