# OpenMPI Sample Applications
Sample MPI applications are provided both as a trivial primer to MPI and as simple tests to ensure that your Open MPI installation is working properly.
## Examples
There are two example programs, each implemented using one of six different MPI interfaces, plus OpenSHMEM versions in C and Fortran:
**Hello world** (a minimal C sketch follows this list)
* C: [hello_c.c](../../src/ompi/hello_c.c)
* C++: [hello_cxx.cc](../../src/ompi/hello_cxx.cc)
* Fortran mpif.h: [hello_mpifh.f](../../src/ompi/hello_mpifh.f)
* Fortran use mpi: [hello_usempi.f90](../../src/ompi/hello_usempi.f90)
* Fortran use mpi_f08: [hello_usempif08.f90](../../src/ompi/hello_usempif08.f90)
* Java: [Hello.java](../../src/ompi/Hello.java)
* C shmem.h: [hello_oshmem_c.c](../../src/ompi/hello_oshmem_c.c)
* Fortran shmem.fh: [hello_oshmemfh.f90](../../src/ompi/hello_oshmemfh.f90)
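For orientation, the C version boils down to the minimal sketch below; the shipped hello_c.c follows the same structure, though its output text may differ.

```c
/* Minimal MPI hello world in C: every process reports its rank.
 * This is a sketch of the pattern used by hello_c.c, not a verbatim copy. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    printf("Hello, world, I am %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```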
**Send a trivial message around in a ring** (sketched in C after this list)
* C: [ring_c.c](../../src/ompi/ring_c.c)
* C++: [ring_cxx.cc](../../src/ompi/ring_cxx.cc)
* Fortran mpif.h: [ring_mpifh.f](../../src/ompi/ring_mpifh.f)
* Fortran use mpi: [ring_usempi.f90](../../src/ompi/ring_usempi.f90)
* Fortran use mpi_f08: [ring_usempif08.f90](../../src/ompi/ring_usempif08.f90)
* Java: [Ring.java](../../src/ompi/Ring.java)
* C shmem.h: [ring_oshmem_c.c](../../src/ompi/ring_oshmem_c.c)
* Fortran shmem.fh: [ring_oshmemfh.f90](../../src/ompi/ring_oshmemfh.f90)
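The ring pattern is also compact: rank 0 injects an integer that circulates through all processes and is decremented on each pass through rank 0; when it reaches zero, everyone exits. A sketch of the idea (the trip count and message tag are illustrative):

```c
/* Sketch of the ring example: pass a counter around all ranks until it
 * reaches zero. Mirrors the logic of ring_c.c; details may differ. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, next, prev, message;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    next = (rank + 1) % size;           /* neighbor we send to */
    prev = (rank + size - 1) % size;    /* neighbor we receive from */

    if (rank == 0) {
        message = 10;                   /* number of trips around the ring */
        MPI_Send(&message, 1, MPI_INT, next, 201, MPI_COMM_WORLD);
    }

    while (1) {
        MPI_Recv(&message, 1, MPI_INT, prev, 201, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        if (rank == 0) {
            --message;                  /* only rank 0 decrements */
            printf("Process 0 decremented value: %d\n", message);
        }
        MPI_Send(&message, 1, MPI_INT, next, 201, MPI_COMM_WORLD);
        if (message == 0) {
            printf("Process %d exiting\n", rank);
            break;
        }
    }

    /* rank 0 drains the final zero still circulating in the ring */
    if (rank == 0)
        MPI_Recv(&message, 1, MPI_INT, prev, 201, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}
```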
Additionally, there's one further example application, but this one only uses the MPI C bindings:
**Test the connectivity between all processes** (a simplified C sketch follows)
* C: [connectivity_c.c](../../src/ompi/connectivity_c.c)
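The idea behind the connectivity test can be sketched as follows: every pair of ranks exchanges one small message, with the lower rank initiating so the sends and receives pair up without deadlock. This simplified version omits the reporting options of the real connectivity_c.c:

```c
/* Simplified all-pairs connectivity check: every pair of ranks exchanges
 * one integer. Illustrates the idea only; connectivity_c.c is more thorough. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, peer, token;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (peer = 0; peer < size; ++peer) {
        if (peer == rank)
            continue;
        if (rank < peer) {              /* lower rank sends first... */
            MPI_Send(&rank, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);
            MPI_Recv(&token, 1, MPI_INT, peer, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {                        /* ...higher rank echoes back */
            MPI_Recv(&token, 1, MPI_INT, peer, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&rank, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);
        }
    }

    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0)
        printf("Connectivity test on %d processes PASSED\n", size);

    MPI_Finalize();
    return 0;
}
```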
## Build Examples
Download [examples](../../src/ompi/ompi.tar.gz).
The Makefile in this directory will build the examples for the supported languages (e.g., if you do not have the Fortran "use mpi" bindings compiled as part of Open MPI, those examples will be skipped).
The Makefile assumes that the wrapper compilers mpicc, mpic++, and mpifort (and, for the OpenSHMEM examples, shmemcc and shmemfort) are in your path.
Although the Makefile is tailored for Open MPI (e.g., it checks the *ompi_info* command to see if you have support for C++, mpif.h, use mpi, and use mpi_f08), all of the example programs are pure MPI, and therefore not specific to Open MPI. Hence, you can use a different MPI implementation to compile and run these programs if you wish.
```console
[login@cn204.anselm ]$ tar xvf ompi.tar.gz
./
./connectivity_c.c
./Hello.java
./ring_mpifh.f
./hello_oshmem_cxx.cc
...
...
./hello_cxx.cc
[login@cn204.anselm ]$ ml OpenMPI/2.1.1-GCC-6.3.0-2.27
[login@cn204.anselm ]$ make
mpicc -g hello_c.c -o hello_c
mpicc -g ring_c.c -o ring_c
mpicc -g connectivity_c.c -o connectivity_c
mpicc -g spc_example.c -o spc_example
mpic++ -g hello_cxx.cc -o hello_cxx
mpic++ -g ring_cxx.cc -o ring_cxx
mpifort -g hello_mpifh.f -o hello_mpifh
mpifort -g ring_mpifh.f -o ring_mpifh
mpifort -g hello_usempi.f90 -o hello_usempi
mpifort -g ring_usempi.f90 -o ring_usempi
mpifort -g hello_usempif08.f90 -o hello_usempif08
mpifort -g ring_usempif08.f90 -o ring_usempif08
mpijavac Hello.java
mpijavac Ring.java
shmemcc -g hello_oshmem_c.c -o hello_oshmem
shmemc++ -g hello_oshmem_cxx.cc -o hello_oshmemcxx
shmemcc -g ring_oshmem_c.c -o ring_oshmem
shmemcc -g oshmem_shmalloc.c -o oshmem_shmalloc
shmemcc -g oshmem_circular_shift.c -o oshmem_circular_shift
shmemcc -g oshmem_max_reduction.c -o oshmem_max_reduction
shmemcc -g oshmem_strided_puts.c -o oshmem_strided_puts
shmemcc -g oshmem_symmetric_data.c -o oshmem_symmetric_data
shmemfort -g hello_oshmemfh.f90 -o hello_oshmemfh
shmemfort -g ring_oshmemfh.f90 -o ring_oshmemfh
[login@cn204.anselm ]$ find . -executable -type f
./hello_oshmem
./dtrace/myppriv.sh
./dtrace/partrace.sh
./oshmem_shmalloc
./ring_cxx
./ring_usempi
./hello_mpifh
./hello_cxx
./oshmem_max_reduction
./oshmem_symmetric_data
./oshmem_strided_puts
./hello_usempif08
./ring_usempif08
./spc_example
./hello_oshmemfh
./ring_oshmem
./oshmem_circular_shift
./hello_c
./ring_c
./hello_usempi
./ring_oshmemfh
./connectivity_c
./ring_mpifh
```
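Once built, the binaries can be launched with *mpirun*; the process count below is arbitrary, and on a production cluster you would normally run inside a job allocation rather than on a login node:

```console
[login@cn204.anselm ]$ mpirun -np 4 ./hello_c
[login@cn204.anselm ]$ mpirun -np 4 ./ring_c
[login@cn204.anselm ]$ mpirun -np 4 ./connectivity_c
```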