Commit 3d4c1886 authored by Lukáš Krupčík

fix

parent fdabc6bc
@@ -129,9 +129,9 @@ With Singularity, the MPI usage model is to call ‘mpirun’ from outside the c
$ mpirun -np 20 singularity exec container.img /path/to/contained_mpi_prog
```
By calling `mpirun` outside the container, we solve several very complicated workflow aspects. For example, if `mpirun` is called from within the container, it must have a method for spawning processes on remote nodes. Historically, ssh is used for this, which means that there must be an sshd running within the container on the remote nodes, and this sshd process must not conflict with the sshd running on that host! It is also possible for the resource manager to launch the job and (in Open MPI's case) the Orted processes on the remote system, but that then requires resource manager modification and container awareness.
In the end, we gain nothing by calling `mpirun` from within the container except increased complexity and the possible loss of performance benefits (e.g., if a container was not built with the same OFED stack as the host).
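As an illustration of this model, here is a minimal batch-script sketch; the scheduler directives, queue name, and module name are assumptions for illustration, not taken from this documentation:

```
#!/bin/bash
#PBS -q qprod                          # hypothetical queue name
#PBS -l select=2:ncpus=10:mpiprocs=10  # assumed layout: two nodes, 10 ranks each

ml OpenMPI   # hypothetical module; the host MPI should match the MPI inside the image
mpirun -np 20 singularity exec container.img /path/to/contained_mpi_prog
```

Note that `mpirun` runs on the host here, so the resource manager's allocation (two nodes) is visible to it.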
### MPI Inside Image
@@ -145,7 +145,7 @@ Singularity CentOS-6.9_20180220092823.img:~> mpirun hostname | wc -l
```
!!! warning
    You allocate two nodes, but MPI inside the image uses only one node.
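To illustrate the warning, a hedged sketch (assuming a 16-core node; the image name and output are illustrative, not measured):

```
$ singularity shell container.img
Singularity container.img:~> mpirun hostname | wc -l
16
```

The hypothetical count of 16 corresponds to the cores of the single local node only, even though two nodes were allocated.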
### MPI Outside Image
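By contrast, a hedged sketch of the outside-the-image model (same placeholder image; the module name and output are illustrative):

```
$ ml OpenMPI
$ mpirun -np 20 singularity exec container.img hostname | wc -l
20
```

Because `mpirun` runs on the host, it can spawn ranks on both allocated nodes, so all 20 requested processes start.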