
Gajdusek cleaning

Merged Pavel Gajdušek requested to merge gajdusek_clean into master
@@ -20,7 +20,7 @@ $ ml av Python/
Python/2.7.11-foss-2016a Python/3.5.2-foss-2016a Python/3.5.1
Python/2.7.9-foss-2015g Python/3.4.3-intel-2015b Python/2.7.9
Python/2.7.11-intel-2015b Python/3.5.2
$ ml av OpenMPI/
--------------------------------------- /apps/modules/mpi --------------------------
OpenMPI/1.8.6-GCC-4.4.7-system OpenMPI/1.8.8-GNU-4.9.3-2.25 OpenMPI/1.10.1-GCC-4.9.3-2.25
@@ -28,7 +28,8 @@ OpenMPI/1.8.6-GNU-5.1.0-2.25 OpenMPI/1.8.8-GNU-5.1.0-2.25 OpenMPI/1.10.1-GN
OpenMPI/1.8.8-iccifort-2015.3.187-GNU-4.9.3-2.25 OpenMPI/2.0.2-GCC-6.3.0-2.27
```
!!! Warning "Flavours"
 
* modules Python/x.x.x-intel... - Intel MPI
* modules Python/x.x.x-foss... - OpenMPI
* modules Python/x.x.x - without MPI
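
If you are unsure which MPI flavour the loaded Python module actually uses, a minimal check is sketched below (assuming the MPI library implements MPI-3, as recent OpenMPI and Intel MPI releases do):

```python
from mpi4py import MPI

# Report the MPI implementation mpi4py was linked against (MPI-3 call).
print(MPI.Get_library_version())

# mpi4py also exposes the detected vendor and its version as a tuple,
# e.g. ('Open MPI', (1, 10, 1)).
print(MPI.get_vendor())
```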
@@ -37,8 +38,8 @@ OpenMPI/1.8.6-GNU-5.1.0-2.25 OpenMPI/1.8.8-GNU-5.1.0-2.25 OpenMPI/1.10.1-GN
You need to import MPI into your Python program. Include the following line in the Python script:
```python
from mpi4py import MPI
```
The MPI4Py enabled Python programs [execute as any other OpenMPI](Running_OpenMPI/) code. The simplest way is to run
@@ -57,43 +58,43 @@ $ mpiexec python hello_world.py
### Hello World!
```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
print("Hello! I'm rank %d from %d running in total..." % (comm.rank, comm.size))
comm.Barrier()   # wait for everybody to synchronize
```
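
Besides collective operations (shown in the next section), the communicator also provides point-to-point messaging. A minimal sketch using the pickle-based lowercase `send`/`recv` API (not part of the original examples; it needs at least two ranks):

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD

# The lowercase send/recv calls pickle arbitrary Python objects.
if comm.rank == 0:
    comm.send({"msg": "hello", "data": [1, 2, 3]}, dest=1, tag=11)
elif comm.rank == 1:
    obj = comm.recv(source=0, tag=11)
    print("rank 1 received: %s" % obj)
```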
### Collective Communication With NumPy Arrays
```python
from __future__ import division   # __future__ imports must come first

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD

print("-"*78)
print(" Running on %d cores" % comm.size)
print("-"*78)

comm.Barrier()

# Prepare a vector of N=5 elements to be broadcasted...
N = 5
if comm.rank == 0:
    A = np.arange(N, dtype=np.float64)    # rank 0 has proper data
else:
    A = np.empty(N, dtype=np.float64)     # all others just an empty array

# Broadcast A from rank 0 to everybody
comm.Bcast([A, MPI.DOUBLE])

# Everybody should now have the same...
print("[%02d] %s" % (comm.rank, A))
```
Execute the above code as:
@@ -106,3 +107,64 @@ $ mpiexec -bycore -bind-to-core python hello_world.py
```
In this example, we run MPI4Py-enabled code on 4 nodes, 16 cores per node (64 processes in total); each Python process is bound to a different core. More examples and documentation can be found on the [MPI for Python webpage](https://pypi.python.org/pypi/mpi4py).
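
To check how the launched processes are actually distributed over the nodes, a small sketch like the following can be used (this snippet is an illustration, not part of the original example):

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD

# Every rank reports the node it runs on; rank 0 gathers and prints the layout.
names = comm.gather(MPI.Get_processor_name(), root=0)
if comm.rank == 0:
    for node in sorted(set(names)):
        print("%s: %d ranks" % (node, names.count(node)))
```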
 
 
### Adding Numbers
 
 
Task: compute the sum of the numbers from 1 to 1 000 000. (There is a simple closed-form formula for the sum of an arithmetic sequence, n(n+1)/2, which gives 500 000 500 000 directly, but here we demonstrate the MPI solution, adding the numbers one by one.)
 
 
 
```python
#!/usr/bin/python

import numpy
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

a = 1
b = 1000000

# Each rank sums its own contiguous block of perrank numbers.
perrank = b//size
summ = numpy.zeros(1)

comm.Barrier()
start_time = time.time()

temp = 0
for i in range(a + rank*perrank, a + (rank+1)*perrank):
    temp = temp + i

summ[0] = temp

if rank == 0:
    total = numpy.zeros(1)
else:
    total = None

comm.Barrier()
# Collect the partial results into the total sum on rank 0.
comm.Reduce(summ, total, op=MPI.SUM, root=0)

stop_time = time.time()

if rank == 0:
    # Add the remaining numbers up to 1 000 000 that did not divide evenly between the ranks.
    for i in range(a + size*perrank, b+1):
        total[0] = total[0] + i
    print("The sum of numbers from 1 to 1 000 000: ", int(total[0]))
    print("time spent with ", size, " processes in milliseconds")
    print("-----", int((stop_time - start_time)*1000), "-----")
```
 
 
Execute the code above as:
 
```console
$ qsub -I -q qexp -l select=4:ncpus=16,walltime=00:30:00
$ ml Python/3.5.2-intel-2017.00
$ mpirun -n 2 python myprogram.py
```
 
You can increase the number of processes (`-n`) and watch the reported time decrease.
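
The example above uses the buffer-based `Reduce` with NumPy arrays. For a single scalar value, the pickle-based lowercase `reduce` is often simpler; the following is a simplified sketch of the same task (an alternative illustration, not the version from this page):

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n = 1000000

# Each rank sums a strided slice of 1..n, so the numbers are split evenly
# between the ranks with no leftover block to handle on the root.
partial = sum(range(rank + 1, n + 1, size))

# The lowercase reduce works on plain Python objects; the result is
# returned on the root rank only (the other ranks get None).
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print("The sum of numbers from 1 to 1 000 000:", total)
```

It is executed the same way, e.g. `$ mpirun -n 4 python myprogram.py`.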