Commit 58de8b15 authored by Jan Siwiec
Apptainer singularity

### Job Execution
The DGX-2 machine runs only a bare-bones, minimal operating system. Users are expected to run
**[Apptainer/Singularity][1]** containers in order to enrich the environment according to the needs.
Containers (Docker images) optimized for DGX-2 may be downloaded from
[NVIDIA GPU Cloud][2]. Select the code of interest and
copy the docker nvcr.io link from the Pull Command section. This link may be directly used
to download the container via Apptainer/Singularity, see the example below:
#### Example - Apptainer/Singularity Run TensorFlow
```console
[kru0052@login2.barbora ~]$ qsub -q qdgx -l walltime=01:00:00 -I
...
```
## Running Containers on DGX-2
NVIDIA expects usage of Docker as a containerization tool, but Docker is not a suitable solution in a multiuser environment. For this reason, the [Apptainer/Singularity container][b] solution is used.
Apptainer/Singularity can be used similarly to Docker; just change the image URL. For example, the original Docker command `docker run -it nvcr.io/nvidia/theano:18.08` becomes `singularity shell docker://nvcr.io/nvidia/theano:18.08`. More about Apptainer/Singularity [here][1].
For fast container deployment, all images are cached in the *lscratch* directory after the first use. This behavior can be changed via the *SINGULARITY_CACHEDIR* environment variable, but the container start time will increase significantly.
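The Docker-to-Apptainer/Singularity URL mapping and the cache override described above can be sketched as follows; this is a minimal illustration assuming a POSIX shell, and the cache path is purely illustrative:

```shell
# Build the Apptainer/Singularity URL from a Docker image reference
DOCKER_IMAGE="nvcr.io/nvidia/theano:18.08"   # as copied from the NGC Pull Command section
SINGULARITY_URL="docker://${DOCKER_IMAGE}"   # just add the docker:// prefix
echo "$SINGULARITY_URL"

# Optional: relocate the image cache (the default lscratch cache is faster);
# the path below is illustrative.
export SINGULARITY_CACHEDIR="/tmp/${USER:-user}/singularity-cache"
mkdir -p "$SINGULARITY_CACHEDIR"
```

The resulting URL can then be passed to `singularity shell` or `singularity exec` as shown above.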
Currently, three types of container base images can be specified:
* **localimage:*path***: the location of an existing container image file
* **docker:*name***: the name of a Docker container image (to be downloaded from [Docker Hub][a])
* **shub:*name***: the name of an Apptainer/Singularity container image (to be downloaded from [Singularity Hub][b])
## Building Container Images
To instruct EasyBuild to also build a container image from the generated container recipe, use the `--container-build-image` configuration option.
EasyBuild will leverage functionality provided by the container software of choice (see containers_cfg_image_type) to build the container image.
For example, in the case of Apptainer/Singularity, EasyBuild will run `sudo /path/to/singularity build` on the generated container recipe.
The container image will be placed in the location specified by the `--containerpath` configuration option (see Location for generated container recipes & images (`--containerpath`)), next to the generated container recipe that was used to build the image.
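For example, the output location could be overridden on the command line; a sketch assuming the options described above, with an illustrative path:

```console
$ eb GCC-6.4.0-2.28.eb --containerize --container-base localimage:/tmp/example.simg \
    --containerpath $HOME/containers --experimental
```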
To let EasyBuild generate a container recipe for GCC 6.4.0 + binutils 2.28:

```console
eb GCC-6.4.0-2.28.eb --containerize --container-base localimage:/tmp/example.simg --experimental
```
With other configuration options left to default (see the output of `eb --show-config`), this will result in an Apptainer/Singularity container recipe using example.simg as a base image, which will be stored in `$HOME/.local/easybuild/containers`:
```console
$ eb GCC-6.4.0-2.28.eb --containerize --container-base localimage:/tmp/example.simg --experimental
...
```

Below is an example of a container recipe generated by EasyBuild, using the following command:

```console
eb Python-3.6.4-foss-2018a.eb OpenMPI-2.1.2-GCC-6.4.0-2.28.eb -C --container-base shub:shahzebsiddiqui/eb-singularity:centos-7.4.1708 --experimental
```
It uses the *shahzebsiddiqui/eb-singularity:centos-7.4.1708* base container image, which is available from Singularity Hub ([see this webpage][c]).
```
Bootstrap: shub
...
```

Use `eb --show-full-config | grep containerpath` to determine the currently active setting.
The format for container images that EasyBuild produces via the functionality provided by the container software can be controlled via the `--container-image-format` configuration setting.
For Apptainer/Singularity containers (see Type of container recipe/image to generate (`--container-type`)), three image formats are supported:
* squashfs (default): compressed images using squashfs read-only file system
* ext3: writable image file using ext3 file system
* sandbox: container image in a regular directory
See also official user guide on [Image Mounts format][d] and [Building a Container][e].
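A non-default format could be selected like this; a sketch reusing the example command from these docs, with the format values as listed above:

```console
$ eb GCC-6.4.0-2.28.eb --containerize --container-base localimage:/tmp/example.simg \
    --container-image-format=ext3 --experimental
```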
## Name for Container Recipe & Image
The container software that EasyBuild leverages to build container images may need a location for temporary files.
You can instruct EasyBuild to pass an alternate location via the `--container-tmpdir` configuration setting.
For Apptainer/Singularity, the default is to use `/tmp` ([see][f]). If `--container-tmpdir` is specified, the `$SINGULARITY_TMPDIR` environment variable will be defined accordingly to let Apptainer/Singularity use that location instead.
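The same effect can be achieved by setting the variable directly; a minimal sketch assuming a POSIX shell, with a purely illustrative path:

```shell
# Point Apptainer/Singularity builds at a roomier temporary area;
# the path below is illustrative - use a location with enough free space.
export SINGULARITY_TMPDIR="/tmp/${USER:-builder}/singularity-tmp"
mkdir -p "$SINGULARITY_TMPDIR"
echo "temporary build area: $SINGULARITY_TMPDIR"
```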
## Type of Container Recipe/Image to Generate (`--container-type`)
With the `--container-type` configuration option, you can specify what type of container recipe/image EasyBuild should generate. Possible values are:

* singularity (default)
* docker

For detailed documentation, see the [webpage][i].
[a]: https://hub.docker.com/
[b]: https://singularity-hub.org/
[c]: https://singularity-hub.org/collections/143
[d]: https://apptainer.org/docs/user/latest/bind_paths_and_mounts.html#image-mounts
[e]: https://apptainer.org/docs/user/latest/build_a_container.html
[f]: https://apptainer.org/docs/user/latest/build_env.html#temporary-folders
[g]: https://singularity.lbl.gov
[h]: https://docs.docker.com/
[i]: http://easybuild.readthedocs.io/en/latest/Containers.html
# Apptainer/Singularity on IT4Innovations
!!! note "Singularity name change"
    On November 30th, 2021, the Singularity project officially moved into the Linux Foundation. As part of this move, and to differentiate it from other like-named projects and commercial products, the project was renamed "Apptainer".
On our clusters, Apptainer/Singularity images of the main Linux distributions are prepared.
```console
Barbora
├── CentOS
│ ├── 6
│ └── 7
├── Debian
│ └── latest
├── Fedora
│ └── latest
└── Ubuntu
└── latest
```
!!! info
    Current information about the available Apptainer/Singularity images can be obtained by the `ml av` command. The images are listed in the `OS` section.
The bootstrap scripts, wrappers, features, etc. are located on the [it4i-singularity GitLab page][a].
## IT4Innovations Apptainer/Singularity Wrappers
For better user experience with Apptainer/Singularity containers, we prepared several wrappers:
* image-exec
* image-mpi
* image-shell
* image-update
Listed wrappers help you to use prepared Apptainer/Singularity images loaded as modules.
You can easily load an Apptainer/Singularity image like any other module on the cluster by the `ml OS/version` command.
After the module is loaded for the first time, the prepared image is copied into your home folder and is ready for use.
When you load the module next time, the version of the image is checked and an image update (if one exists) is offered.
Then you can update your copy of the image by the `image-update` command.
!!! warning
    With an image update, all user changes to the image will be overwritten.
The runscript inside the Apptainer/Singularity image can be run by the `image-run` command.
This command automatically mounts the `/scratch` and `/apps` storage and invokes the image as writable, so user changes can be made.
Very similar to `image-run` is the `image-exec` command.
The only difference is that `image-exec` runs a user-defined command instead of a runscript.
In this case, the command to be run is specified as a parameter.
Using the interactive shell inside the Apptainer/Singularity container is very useful for development.
In this interactive shell, you can make any changes to the image you want,
but be aware that you cannot use `sudo` privileged commands directly on the cluster.
To invoke an interactive shell, use the `image-shell` command.
Another useful feature of Apptainer/Singularity is its direct support of OpenMPI.
For proper MPI function, you have to install the same version of OpenMPI inside the image as you use on the cluster.
OpenMPI/3.1.4 is installed in prepared images.
The MPI must be started outside the container.
The easiest way to start the MPI is to use the `image-mpi` command.
This command has the same parameters as `mpirun`, so there is no difference between running a normal MPI application
and an MPI application in an Apptainer/Singularity container.
## Examples
In the examples, we will use prepared Apptainer/Singularity images.
### Load Image
**image-exec**
Executes the given command inside the Apptainer/Singularity image. In this case, the container is started, the given command is executed, and then the container is stopped.
```console
$ ml CentOS/7
...
```

**image-mpi**

MPI wrapper - see more in the [Examples MPI][1] section.
**image-run**
This command runs the runscript inside the Apptainer/Singularity image. Note that the prepared images do not contain a runscript.
**image-shell**
Invokes an interactive shell inside the Apptainer/Singularity image.
```console
$ ml CentOS/7
Singularity CentOS-7_20180220104046.img:~>
```
### Update Image
This command updates your local Apptainer/Singularity image copy.
The local copy is overwritten in this case.
```console
$ ml CentOS/6
...
```
### MPI
In the following example, we are using a job submitted by the command:
`qsub -A PROJECT -q qprod -l select=2:mpiprocs=24 -l walltime=00:30:00 -I`
!!! note
    We have seen no major performance impact for a job running in an Apptainer/Singularity container.
With Apptainer/Singularity, the MPI usage model is to call `mpirun` from outside the container
and reference the container from your `mpirun` command.
Usage would look like this:
```console
$ mpirun -np 24 singularity exec container.img /path/to/contained_mpi_prog
```
By calling `mpirun` outside of the container, we solve several very complicated work-flow aspects.
For example, if `mpirun` is called from within the container, it must have a method for spawning processes on remote nodes.
Historically, SSH has been used for this, which means that there must be an `sshd` running within the container on the remote nodes,
and this `sshd` process must not conflict with the `sshd` running on that host.
It is also possible for the resource manager to launch the job
and (in OpenMPI’s case) the Orted (Open RTE User-Level Daemon) processes on the remote system,
but that then requires resource manager modification and container awareness.
In the end, we do not gain anything by calling `mpirun` from within the container
except for increasing the complexity levels and possibly losing out on some added
performance benefits (e.g., if a container was not built with the same OFED stack as the host).
#### MPI Inside Apptainer/Singularity Image
```console
$ ml CentOS/7
Singularity CentOS-7_20180220092823.img:~> mpirun hostname | wc -l
24
```
As you can see in this example, we allocated two nodes, but MPI can use only one node (24 processes) when used inside the Apptainer/Singularity image.
#### MPI Outside Apptainer/Singularity Image
```console
$ ml CentOS/7
$ image-mpi hostname | wc -l
48
```
In this case, the MPI wrapper behaves like the `mpirun` command.
The `mpirun` command is called outside the container,
and the communication between nodes is propagated into the container automatically.
## How to Use Your Own Image on the Cluster?
* Transfer the image to your `/home` directory on the cluster (for example, `.singularity/image`)
```console
local:$ scp container.img login@login4.clustername.it4i.cz:~/.singularity/image/container.img
```
* Load the Apptainer/Singularity module (`ml Singularity`)
* Use your image
!!! note
    If you want to use the Apptainer/Singularity wrappers with your own images, load the `Singularity-wrappers/master` module and set the environment variable `IMAGE_PATH_LOCAL=/path/to/container.img`.
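Putting the note above together, a hypothetical session could look like this; the module and wrapper names are those documented above, and the image path is illustrative:

```console
$ ml Singularity-wrappers/master
$ export IMAGE_PATH_LOCAL=$HOME/.singularity/image/container.img
$ image-shell
```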
## How to Edit an IT4Innovations Image?
* Transfer the image to your computer
```console
local:$ scp login@login4.clustername.it4i.cz:/home/login/.singularity/image/container.img container.img
```
* Modify the image
* Transfer the image from your computer to your `/home` directory on the cluster
```console
local:$ scp container.img login@login4.clustername.it4i.cz:/home/login/.singularity/image/container.img
```
* Load the Apptainer/Singularity module (`ml Singularity`)
* Use your image
[1]: #mpi
nav:
- LMGC90: salomon/software/phys/LMGC90.md
- PragTic: salomon/software/phys/PragTic.md
- Tools:
- Apptainer/Singularity: software/tools/singularity-it4i.md
- ANSYS:
- Introduction: software/tools/ansys/ansys.md
- ANSYS CFX: software/tools/ansys/ansys-cfx.md
- EasyBuild:
- Introduction: software/tools/easybuild.md
- Generating Container Recipes and Images: software/tools/easybuild-images.md
- Spack: software/tools/spack.md
- Virtualization: software/tools/virtualization.md
- Visualization: