Commit 80ab54f4 authored by David Hrbáč

Links OK

parent 44cc1ab2
Pipeline #5209 failed with stages in 1 minute and 11 seconds
@@ -412,6 +412,6 @@ Further jobscript examples may be found in the software section and the [Capacit
[2]: network.md
[3]: hardware-overview.md
[4]: storage.md
-[5]: ../software/mpi/Running_OpenMPI.md
+[5]: ../software/mpi/running_openmpi.md
[6]: ../software/mpi/running-mpich2.md
[7]: capacity-computing.md
@@ -71,7 +71,7 @@ Refer to [CUBE documentation][7] on usage of the GUI viewer.
[1]: ../../modules-matrix.md
[2]: ../compilers.md
-[3]: ../mpi/Running_OpenMPI.md
+[3]: ../mpi/running_openmpi.md
[4]: ../mpi/running-mpich2.md
[5]: score-p.md
[6]: ../../salomon/storage.md
@@ -119,7 +119,7 @@ The directives are ignored if the program is compiled without Score-P. Again, re
[1]: scalasca.md
[2]: ../../modules-matrix.md
[3]: ../compilers.md
-[4]: ../mpi/Running_OpenMPI.md
+[4]: ../mpi/running_openmpi.md
[5]: ../mpi/running-mpich2.md
[a]: http://www.vi-hps.org/projects/score-p/
@@ -141,7 +141,7 @@ The [OpenMPI 1.8.6][a] is based on OpenMPI. Read more on [how to run OpenMPI][2]
The Intel MPI may run on the [Intel Xeon Phi][3] accelerators as well. Read more on [how to run Intel MPI on accelerators][3].
[1]: ../../modules-matrix.md
-[2]: Running_OpenMPI.md
+[2]: running_openmpi.md
[3]: ../intel/intel-xeon-phi-salomon.md
[a]: http://www.open-mpi.org/
@@ -170,6 +170,6 @@ $ mpirun -n 2 python myprogram.py
You can increase n and watch the execution time decrease.
-[1]: Running_OpenMPI.md
+[1]: running_openmpi.md
[a]: https://pypi.python.org/pypi/mpi4py
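
For illustration, here is a minimal sketch of what the `myprogram.py` invoked above might contain, written against the standard mpi4py API. The file itself is not part of this commit, so its contents here are purely an assumption:

```python
# myprogram.py -- hypothetical sketch; the actual file is not shown in this commit.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # index of this process, 0 .. n-1
size = comm.Get_size()  # n, the value passed to `mpirun -n`

# Each rank sums a disjoint slice of the range, so the per-rank work
# (and with it the wall time) shrinks as n grows.
partial = sum(range(rank, 10_000_000, size))
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} ranks, total = {total}")
```

Launching it with `mpirun -n 2 python myprogram.py` and then raising `-n` spreads the same range over more ranks, which is the timing effect described above.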
@@ -124,4 +124,4 @@ $ mpiexec -np 4 ./synchronization_test.x
**-np 4** is the number of images to run. The parameters of **cafrun** and **mpiexec** are the same.
-For more information about running CAF programs, see [Running OpenMPI - Salomon](../mpi/Running_OpenMPI.md)
+For more information about running CAF programs, see [Running OpenMPI - Salomon](../mpi/running_openmpi.md)
@@ -144,7 +144,7 @@ Every evaluation of the integrand function runs in parallel on a different process.
Package Rmpi provides an interface (wrapper) to MPI APIs.
-It also provides an interactive R slave environment. On the cluster, Rmpi provides an interface to the [OpenMPI](software/mpi/Running_OpenMPI/).
+It also provides an interactive R slave environment. On the cluster, Rmpi provides an interface to the [OpenMPI](software/mpi/running_openmpi/).
Read more on Rmpi at <http://cran.r-project.org/web/packages/Rmpi/>; the reference manual is available [here](http://cran.r-project.org/web/packages/Rmpi/Rmpi.pdf).