diff --git a/docs.it4i/anselm/software/mpi/mpi.md b/docs.it4i/anselm/software/mpi/mpi.md
index 4313bf513d5262a4b3eba0f1ef10380142f3a2ef..c8e9c7d6d1f26fa8fd7347795c8b57188742aacb 100644
--- a/docs.it4i/anselm/software/mpi/mpi.md
+++ b/docs.it4i/anselm/software/mpi/mpi.md
@@ -70,31 +70,31 @@ $ mpif90 -v
 Example program:
 
 ```cpp
-    // helloworld_mpi.c
-    #include <stdio.h>
+// helloworld_mpi.c
+#include <stdio.h>
 
-    #include<mpi.h>
+#include <mpi.h>
 
-    int main(int argc, char **argv) {
+int main(int argc, char **argv) {
 
-    int len;
-    int rank, size;
-    char node[MPI_MAX_PROCESSOR_NAME];
+    int len;
+    int rank, size;
+    char node[MPI_MAX_PROCESSOR_NAME];
 
-    // Initiate MPI
-    MPI_Init(&argc, &argv);
-    MPI_Comm_rank(MPI_COMM_WORLD,&rank);
-    MPI_Comm_size(MPI_COMM_WORLD,&size);
+    // Initialize MPI
+    MPI_Init(&argc, &argv);
+    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
+    MPI_Comm_size(MPI_COMM_WORLD, &size);
 
-    // Get hostame and print
-    MPI_Get_processor_name(node,&len);
-    printf("Hello world! from rank %d of %d on host %sn",rank,size,node);
+    // Get hostname and print
+    MPI_Get_processor_name(node, &len);
+    printf("Hello world! from rank %d of %d on host %s\n", rank, size, node);
 
-    // Finalize and exit
-    MPI_Finalize();
+    // Finalize and exit
+    MPI_Finalize();
 
-    return 0;
-    }
+    return 0;
+}
 ```
 
 Compile the above example with
diff --git a/docs.it4i/salomon/software/mpi/mpi.md b/docs.it4i/salomon/software/mpi/mpi.md
index 99f8745aca779ad71a3ab5322499aa9e8bc9fd25..536335b3db1ce6ac972844e1f6e6c26108cd62f8 100644
--- a/docs.it4i/salomon/software/mpi/mpi.md
+++ b/docs.it4i/salomon/software/mpi/mpi.md
@@ -71,31 +71,31 @@ Wrappers mpif90, mpif77 that are provided by Intel MPI are designed for gcc and
 Example program:
 
 ```cpp
-    // helloworld_mpi.c
-    #include <stdio.h>
+// helloworld_mpi.c
+#include <stdio.h>
 
-    #include<mpi.h>
+#include <mpi.h>
 
-    int main(int argc, char **argv) {
+int main(int argc, char **argv) {
 
-    int len;
-    int rank, size;
-    char node[MPI_MAX_PROCESSOR_NAME];
+    int len;
+    int rank, size;
+    char node[MPI_MAX_PROCESSOR_NAME];
 
-    // Initiate MPI
-    MPI_Init(&argc, &argv);
-    MPI_Comm_rank(MPI_COMM_WORLD,&rank);
-    MPI_Comm_size(MPI_COMM_WORLD,&size);
+    // Initialize MPI
+    MPI_Init(&argc, &argv);
+    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
+    MPI_Comm_size(MPI_COMM_WORLD, &size);
 
-    // Get hostame and print
-    MPI_Get_processor_name(node,&len);
-    printf("Hello world! from rank %d of %d on host %sn",rank,size,node);
+    // Get hostname and print
+    MPI_Get_processor_name(node, &len);
+    printf("Hello world! from rank %d of %d on host %s\n", rank, size, node);
 
-    // Finalize and exit
-    MPI_Finalize();
+    // Finalize and exit
+    MPI_Finalize();
 
-    return 0;
-    }
+    return 0;
+}
 ```
 
 Compile the above example with
@@ -117,10 +117,42 @@ The MPI program executable must be available within the same path on all nodes.
 
-Optimal way to run an MPI program depends on its memory requirements, memory access pattern and communication pattern.
+The optimal way to run an MPI program depends on its memory requirements, memory access pattern, and communication pattern.
 
-Consider these ways to run an MPI program:
-1\. One MPI process per node, 24 threads per process
-2\. Two MPI processes per node, 12 threads per process
-3\. 24 MPI processes per node, 1 thread per process.
+!!! note
+    Consider these ways to run an MPI program:
+
+    1. One MPI process per node, 24 threads per process
+    2. Two MPI processes per node, 12 threads per process
+    3. 24 MPI processes per node, 1 thread per process.
 
-**One MPI** process per node, using 24 threads, is most useful for memory demanding applications, that make good use of processor cache memory and are not memory bound.
-This is also a preferred way for communication intensive applications as one process per node enjoys full bandwidth access to the network interface.
+**One MPI** process per node, using 24 threads, is most useful for memory-demanding applications that make good use of processor cache memory and are not memory bound.
+This is also a preferred way for communication-intensive applications, as one process per node enjoys full bandwidth access to the network interface.
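+
+For illustration, a minimal hybrid MPI+OpenMP sketch of the first layout (one MPI process per node, OpenMP threads inside each process) might look like this; the file name `hybrid_hello.c` and the thread count are only examples, not a prescribed implementation:
+
+```cpp
+// hybrid_hello.c - illustrative sketch: one MPI process per node,
+// with OMP_NUM_THREADS (e.g. 24) OpenMP threads inside each process
+#include <stdio.h>
+#include <mpi.h>
+#include <omp.h>
+
+int main(int argc, char **argv) {
+    int provided, rank;
+
+    // Request MPI_THREAD_FUNNELED: only the main thread calls MPI
+    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
+    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
+
+    // Each OpenMP thread reports itself; real work would be split here
+    #pragma omp parallel
+    {
+        printf("rank %d, thread %d of %d\n",
+               rank, omp_get_thread_num(), omp_get_num_threads());
+    }
+
+    MPI_Finalize();
+    return 0;
+}
+```
+
+Such a sketch would typically be compiled with an OpenMP flag (e.g. `mpicc -fopenmp hybrid_hello.c -o hybrid_hello.x` for GCC-based wrappers) and launched with one process per node and `OMP_NUM_THREADS=24`.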