Commit 0130adc1 authored by Pavel Gajdušek

mdlint, tabs

parent 2f2b76e7
@@ -70,31 +70,31 @@ $ mpif90 -v
Example program:
```cpp
// helloworld_mpi.c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int len;
    int rank, size;
    char node[MPI_MAX_PROCESSOR_NAME];

    // Initialize MPI
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Get hostname and print
    MPI_Get_processor_name(node, &len);
    printf("Hello world! from rank %d of %d on host %s\n", rank, size, node);

    // Finalize and exit
    MPI_Finalize();

    return 0;
}
```
Compile the above example with the MPI compiler wrapper.
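For example (a sketch, assuming the OpenMPI module is loaded so that its compiler wrappers are on the PATH):

```console
$ mpicc helloworld_mpi.c -o helloworld_mpi.x
```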
@@ -71,31 +71,31 @@ Wrappers mpif90, mpif77 that are provided by Intel MPI are designed for gcc and
Example program:
```cpp
// helloworld_mpi.c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int len;
    int rank, size;
    char node[MPI_MAX_PROCESSOR_NAME];

    // Initialize MPI
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Get hostname and print
    MPI_Get_processor_name(node, &len);
    printf("Hello world! from rank %d of %d on host %s\n", rank, size, node);

    // Finalize and exit
    MPI_Finalize();

    return 0;
}
```
Compile the above example with the MPI compiler wrapper.
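For example (a sketch, assuming the Intel MPI module is loaded; as noted above, Intel MPI's wrappers drive the GNU compilers):

```console
$ mpicc helloworld_mpi.c -o helloworld_mpi.x
```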
@@ -117,10 +117,12 @@ The MPI program executable must be available within the same path on all nodes.
The optimal way to run an MPI program depends on its memory requirements, memory access pattern, and communication pattern.
!!! note
    Consider these ways to run an MPI program:

    1. One MPI process per node, 24 threads per process
    2. Two MPI processes per node, 12 threads per process
    3. 24 MPI processes per node, 1 thread per process
**One MPI** process per node, using 24 threads, is most useful for memory-demanding applications that make good use of processor cache memory and are not memory bound. This is also the preferred way for communication-intensive applications, as one process per node enjoys full bandwidth access to the network interface.
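As an illustration of the first mode (a minimal sketch, assuming an Open MPI style launcher, a hypothetical allocation of 4 nodes with 24 cores each, and an OpenMP-threaded executable):

```console
$ # one rank per node, 24 OpenMP threads inside each rank (hypothetical 4-node job)
$ export OMP_NUM_THREADS=24
$ mpirun -np 4 --map-by node ./helloworld_mpi.x
```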