On our clusters, we provide Singularity images of several operating systems in various versions. The available images are listed below.

The bootstrap definitions used to build the images, together with the wrappers and additional features, are available [here](https://code.it4i.cz/sccs/it4i-singularity).
```console
   Salomon                     Anselm
   ├── CentOS                  ├── CentOS
   │   ├── 6.9                 │   ├── 6.9
   │   ├── 6.9-MIC             │   ├── 6.9-GPU
```
For current information on the available Singularity images, run `ml av` and check the `OS` section of the output.

The Anselm images provide support for NVIDIA GPU cards and the Salomon images provide support for Intel Xeon Phi (MIC) cards; the variant is encoded in the image name (OS/Version[none|-GPU|-MIC]).

Each image comes with the following wrapper scripts:
* image-exec
* image-mpi
* image-run
* image-shell
* image-update
The latest version of the scripts is available [here](https://code.it4i.cz/sccs/it4i-singularity/tree/master/bin).
**image-exec**

Executes a given command inside the image.

```console
$ ml CentOS/7.3
Your image of CentOS/7.3 is at location: /home/login/.singularity/images/CentOS-7.3_20180220104046.img
$ image-exec cat /etc/redhat-release
CentOS Linux release 7.3.1708 (Core)
```
**image-mpi**

MPI wrapper for the image; see the MPI example below.

**image-shell**

Invokes an interactive shell inside the image.
```console
$ ml CentOS/7.3
$ image-shell
Singularity: Invoking an interactive shell within container...
Singularity CentOS-7.3_20180220104046.img:~>
```
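The wrappers are convenience scripts; if you prefer to work with the image file directly, the same can be achieved with the standard `singularity` commands (a minimal sketch, assuming the wrappers ultimately operate on the image path printed by the module):

```console
$ singularity shell /home/login/.singularity/images/CentOS-7.3_20180220104046.img
$ singularity exec /home/login/.singularity/images/CentOS-7.3_20180220104046.img cat /etc/redhat-release
```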
**image-update**
Checks whether a newer version of the image is available; if so, it informs the user and offers to update the local copy.
```console
$ ml CentOS/6.9
New version of CentOS image was found. (New: CentOS-6.9_20180220092823.img Old: CentOS-6.9_20170220092823.img)
For updating image use: image-update
Your image of CentOS/6.9 is at location: /home/login/.singularity/images/CentOS-6.9_20170220092823.img
$ image-update
New version of CentOS image was found. (New: CentOS-6.9_20180220092823.img Old: CentOS-6.9_20170220092823.img)
Do you want to update local copy? (WARNING all user modification will be deleted) [y/N]: y
Updating image CentOS-6.9_20180220092823.img
2.71G 100% 199.49MB/s 0:00:12 (xfer#1, to-check=0/1)
sent 2.71G bytes received 31 bytes 163.98M bytes/sec
total size is 2.71G speedup is 1.00
New version is ready. (/home/login/.singularity/images/CentOS-6.9_20180220092823.img)
```
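Because the update overwrites the local copy (and with it any user modifications, as the warning above notes), you may want to back the image up first; a minimal sketch using the paths from the example above:

```console
$ cp /home/login/.singularity/images/CentOS-6.9_20170220092823.img ~/CentOS-6.9_backup.img
$ image-update
```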
## Examples
The following examples demonstrate the use of the Singularity images on the IT4Innovations clusters.
```console
$ ml CentOS/6.9
Your image of CentOS/6.9 is at location: /home/login/.singularity/images/CentOS-6.9_20180220133305.img
```
!!! tip
    The first time you load a module with a Singularity image, the image is copied from /apps/all/OS/... to your /home directory (.singularity/images).
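The local copies are several gigabytes each (the update example above transfers about 2.7 GB), so make sure there is enough room in your `/home` quota; a quick way to see what is already stored locally:

```console
$ ls -lh ~/.singularity/images/
$ du -sh ~/.singularity/images/
```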
### Intel Xeon Phi Cards - MIC
For example, submit an interactive job: `qsub -A PROJECT -q qprod -l select=1:mpiprocs=24:accelerator=true -I`

!!! info
    The MIC image is available only on the Salomon cluster.
**Code for testing offload**
```cpp
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <thread>

int main() {
    // Hostname is retrieved here but not printed in this example
    char hostname[1024];
    gethostname(hostname, 1024);

    // Report the number of hardware threads available on the host
    unsigned int nthreads = std::thread::hardware_concurrency();
    printf("Hello world, #of cores: %u\n", nthreads);

    // Offload the same query to the Intel Xeon Phi coprocessor
    #pragma offload target(mic)
    {
        nthreads = std::thread::hardware_concurrency();
        printf("Hello world from MIC, #of cores: %u\n", nthreads);
    }
}
```
**Compile and run**
```console
[login@r38u03n975 ~]$ ml CentOS/6.9-MIC
Your image of CentOS/6.9-MIC is at location: /home/login/.singularity/images/CentOS-6.9-MIC_20180220112004.img
[login@r38u03n975 ~]$ image-shell
Singularity: Invoking an interactive shell within container...
Singularity CentOS-6.9-MIC_20180220112004.img:~> ml intel/2017a
Singularity CentOS-6.9-MIC_20180220112004.img:~> ml
Currently Loaded Modules:
1) GCCcore/6.3.0 3) icc/2017.1.132-GCC-6.3.0-2.27 5) iccifort/2017.1.132-GCC-6.3.0-2.27 7) iimpi/2017a 9) intel/2017a
2) binutils/2.27-GCCcore-6.3.0 4) ifort/2017.1.132-GCC-6.3.0-2.27 6) impi/2017.1.132-iccifort-2017.1.132-GCC-6.3.0-2.27 8) imkl/2017.1.132-iimpi-2017a
Singularity CentOS-6.9-MIC_20180220112004.img:~> icpc -std=gnu++11 -qoffload=optional hello.c -o hello-host
Singularity CentOS-6.9-MIC_20180220112004.img:~> ./hello-host
Hello world, #of cores: 24
Hello world from MIC, #of cores: 244
```
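To verify that the `#pragma offload` region really runs on the coprocessor, the Intel offload runtime can print a transfer/execution report; a sketch, assuming the runtime inside the image honors the standard `OFFLOAD_REPORT` environment variable:

```console
Singularity CentOS-6.9-MIC_20180220112004.img:~> export OFFLOAD_REPORT=2
Singularity CentOS-6.9-MIC_20180220112004.img:~> ./hello-host
```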
### NVIDIA Graphics Cards - GPU

For example, submit an interactive job: `qsub -A PROJECT -q qnvidia -l select=1:ncpus=16:mpiprocs=16 -l walltime=01:00:00 -I`

!!! info
    The GPU image is available only on the Anselm cluster.

**Checking the NVIDIA driver inside the image**
```console
[login@cn187.anselm ~]$ ml CentOS/6.9-GPU
Your image of CentOS/6.9-GPU is at location: /home/login/.singularity/images/CentOS-6.9-GPU_20171024164243.img
[login@cn187.anselm ~]$ image-shell
Singularity: Invoking an interactive shell within container...
Singularity CentOS-6.9-GPU_20171024164243.img:~> nvidia-smi
+-----------------------------------------------------------------------------+
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K20m          Off  | 00000000:02:00.0 Off |                    0 |
| N/A   29C    P0    51W / 225W |      0MiB /  4743MiB |     94%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```
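If you work with the image file directly rather than through the wrappers, recent Singularity versions can bind the host NVIDIA driver into the container with the `--nv` flag; a sketch using the image path from the example above:

```console
$ singularity exec --nv /home/login/.singularity/images/CentOS-6.9-GPU_20171024164243.img nvidia-smi
```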
### MPI

For example, submit an interactive job: `qsub -A PROJECT -q qprod -l select=2:mpiprocs=24 -l walltime=00:30:00 -I`

!!! note
    We have seen no major performance impact from running a job in a Singularity container.

With Singularity, the MPI usage model is to call `mpirun` from outside the container and to reference the container in the `mpirun` command. Usage looks like this:
```console
$ mpirun -np 20 singularity exec container.img /path/to/contained_mpi_prog
```
By calling `mpirun` outside the container, we sidestep several very complicated workflow aspects. For example, if `mpirun` is called from within the container, it must have a method for spawning processes on the remote nodes. Historically, SSH is used for this, which means that an sshd must be running within the container on the remote nodes, and this sshd process must not conflict with the sshd running on the host. It is also possible for the resource manager to launch the job and (in Open MPI's case) the orted processes on the remote system, but that requires resource manager modification and container awareness.

In the end, we do not gain anything by calling `mpirun` from within the container except increased complexity and possibly the loss of some performance benefits (e.g. if the container was not built with the same OFED stack as the host).
Running `mpirun` from inside the container, for comparison:

```console
$ ml CentOS/6.9
Your image of CentOS/6.9 is at location: /home/login/.singularity/images/CentOS-6.9_20180220092823.img
$ image-shell
Singularity: Invoking an interactive shell within container...

Singularity CentOS-6.9_20180220092823.img:~> mpirun hostname | wc -l
24
```
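As a concrete (illustrative) instance of the recommended "mpirun outside the container" model, the same `hostname` test can be launched on the host using the image path from the example above (assuming a host-side MPI and the Singularity module are loaded):

```console
$ mpirun -np 24 singularity exec /home/login/.singularity/images/CentOS-6.9_20180220092823.img hostname | wc -l
```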
## How to Use Your Own Image on the Cluster

The typical workflow is as follows (a command sketch follows the list):

1. Prepare the image on your computer
1. Transfer the images to your `/home` directory on the cluster (for example `.singularity/image`)
1. Load module Singularity (`ml Singularity`)
1. Use your image
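A minimal sketch of these steps (the image name, the Docker Hub source, and the cluster address are illustrative placeholders; building an image requires root on your own machine and Singularity 2.4 or newer for `singularity build`):

```console
# On your computer: build an image, e.g. from Docker Hub (placeholder source)
$ sudo singularity build my-centos.img docker://centos:7

# Transfer it to the cluster (placeholder address)
$ scp my-centos.img login@cluster.example.com:~/.singularity/image/

# On the cluster: load the Singularity module and use the image
$ ml Singularity
$ singularity shell ~/.singularity/image/my-centos.img
```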
## How to Edit an Image

To modify an existing image, the workflow is (a command sketch follows the list):

1. Transfer the image to your computer
1. Modify the image
1. Transfer the image from your computer to your `/home` directory on the cluster
1. Load module Singularity (`ml Singularity`)
1. Use your image
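A corresponding sketch for editing an image (the file name and cluster address are placeholders; modifying an image requires root on your own machine and a writable image):

```console
# Transfer the image from the cluster to your computer (placeholder address)
$ scp login@cluster.example.com:~/.singularity/images/CentOS-6.9_20180220092823.img .

# Modify it locally; --writable requires root and a writable image format
$ sudo singularity shell --writable CentOS-6.9_20180220092823.img

# Transfer the modified image back to the cluster
$ scp CentOS-6.9_20180220092823.img login@cluster.example.com:~/.singularity/images/
```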