# MPI4Py (MPI for Python)

## Introduction
MPI for Python provides bindings of the Message Passing Interface (MPI) standard for the Python programming language, allowing any Python program to exploit multiple processors.
This package is constructed on top of the MPI-1/2 specifications and provides an object-oriented interface which closely follows the MPI-2 C++ bindings. It supports point-to-point (sends, receives) and collective (broadcasts, scatters, gathers) communication of any picklable Python object, as well as optimized communication of Python objects exposing the single-segment buffer interface (NumPy arrays, built-in bytes/string/array objects).

On Anselm, MPI4Py is available as part of the standard Python modules.

## Modules

MPI4Py is built for OpenMPI. Before you start using MPI4Py, you need to load the Python and OpenMPI modules:

```bash
$ module load python
$ module load openmpi
```
## Execution

You need to import MPI into your Python program. Include the following line in your Python script:

```python
from mpi4py import MPI
```

MPI4Py-enabled Python programs [execute as any other OpenMPI](Running_OpenMPI/) code. The simplest way is to run:

```bash
$ mpiexec python <script>.py
```

For example:

```bash
$ mpiexec python hello_world.py
```
## Examples

### Hello world!

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD

print("Hello! I'm rank %d of %d running in total..." % (comm.rank, comm.size))

comm.Barrier()   # wait for everybody to synchronize
```
### Collective Communication with NumPy arrays

```python
from __future__ import division  # __future__ imports must precede any other import
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD

print("-" * 78)
print(" Running on %d cores" % comm.size)
print("-" * 78)

comm.Barrier()

# Prepare a vector of N=5 elements to be broadcast...
N = 5
if comm.rank == 0:
    A = np.arange(N, dtype=np.float64)    # rank 0 has proper data
else:
    A = np.empty(N, dtype=np.float64)     # all others just an empty array

# Broadcast A from rank 0 to everybody
comm.Bcast([A, MPI.DOUBLE])

# Everybody should now have the same...
print("[%02d] %s" % (comm.rank, A))
```

Execute the above code as:

```bash
$ qsub -q qexp -l select=4:ncpus=16:mpiprocs=16:ompthreads=1 -I

$ module load python openmpi

$ mpiexec -bycore -bind-to-core python hello_world.py
```
In this example, we run the MPI4Py-enabled code on 4 nodes with 16 cores per node (64 processes in total), with each Python process bound to a different core. More examples and documentation can be found on the [MPI for Python webpage](https://pypi.python.org/pypi/mpi4py).