On our clusters, several versions of operating systems are available as Singularity images. The available images are listed below.
The bootstrap definitions, wrappers, and features of our images are maintained [here](https://code.it4i.cz/sccs/it4i-singularity).
```console
...
```

For current information on available singularity images, refer to `ml av`.

* Fedora
!!! note
    We support graphics cards in the Anselm singularity images and Intel Xeon Phi cards in the Salomon images (OS/Version[none|-GPU|-MIC]).
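
For example, the module list can be filtered by OS name to see which variants exist; the module names in the comment below are only illustrative and may differ from what is currently installed:

```console
$ ml av CentOS       # list the available CentOS singularity image modules
# names follow the OS/Version[-GPU|-MIC] pattern, e.g. CentOS/6.9, CentOS/6.9-GPU, CentOS/6.9-MIC, CentOS/7.3
```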
## IT4Innovations Wrappers
To use our singularity images, we have prepared special wrappers:
* image-exec
* image-mpi
...
...
Opens the image and runs the given command inside it.
```console
$ ml CentOS/7.3
Your image of CentOS/7.3 is at location: /home/login/.singularity/images/CentOS-7.3_20180220104046.img
$ image-exec cat /etc/centos-release
CentOS Linux release 7.3.1708 (Core)
```
...
...
MPI wrapper; for more information, see the [Examples MPI](#mpi) section.
**image-run**
Runs the given script inside the image.
**image-shell**
Starts a shell inside the image.
```console
$ ml CentOS/7.3
...
...
```console
$ ml CentOS/6.9
Your image of CentOS/6.9 is at location: /home/login/.singularity/images/CentOS-6.9_20180220133305.img
```
For the GPU and MIC images:
```console
$ ml CentOS/6.9-GPU
```
```console
$ ml CentOS/6.9-MIC
```
!!! note
    For the GPU image, you must allocate a node with a GPU card; for the MIC image, you must allocate a node with Intel Xeon Phi cards.
!!! tip
    On first use of a module with a singularity image, the image is copied from /apps/all/OS/... to your /home directory (.singularity/images).
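
To illustrate the note above, a GPU session might look like the following sketch (the queue name `qnvidia`, the project ID, and the availability of `nvidia-smi` inside the image are assumptions; use the queue and project that apply to you):

```console
$ qsub -q qnvidia -A PROJECT-ID -I   # interactive job on a node with a GPU card; queue name and project ID are placeholders
$ ml CentOS/6.9-GPU
$ image-exec nvidia-smi              # verify that the GPU is visible from inside the image
```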
### Intel Xeon Phi Cards - MIC
...
...
nthreads=std::thread::hardware_concurrency();
printf("Hello world from MIC, #of cores: %d\n",nthreads);
}
}
```
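
To make the fragment above easier to try out, the session below is a minimal sketch of building and running such a core-counting program inside the MIC image (the file name `hello_mic.cpp` and the availability of a C++11-capable `g++` inside the image are assumptions):

```console
$ ml CentOS/6.9-MIC
$ image-exec g++ -std=c++11 -pthread hello_mic.cpp -o hello_mic   # compile inside the image
$ image-exec ./hello_mic                                          # prints the "Hello world from MIC" line with the detected core count
```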
...
...
By calling `mpirun` outside the container, we solve several very complicated workflow aspects.
In the end, we do not gain anything by calling `mpirun` from within the container except for increasing the complexity levels and possibly losing out on some added performance benefits (e.g. if the container was not built with the same OFED version as the host).
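
As a concrete illustration of this hybrid model, the launch below is a sketch of running an MPI program with the host `mpirun` while the binary itself lives inside the image (the program name `./mpi-hello` and the process count are placeholders; the `image-mpi` wrapper mentioned above presumably hides exactly this pattern):

```console
$ ml CentOS/6.9
# a host MPI module matching the MPI stack inside the image must be loaded
$ mpirun -np 24 singularity exec /home/login/.singularity/images/CentOS-6.9_20180220133305.img ./mpi-hello
```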