diff --git a/docs.it4i/anselm/job-submission-and-execution.md b/docs.it4i/anselm/job-submission-and-execution.md
index ef21fdde26f69b3f2a2a2b1c503bc2679d4372fe..3739fd0bed5e1fb9e70b4bd0c2bbc5d1774a9c95 100644
--- a/docs.it4i/anselm/job-submission-and-execution.md
+++ b/docs.it4i/anselm/job-submission-and-execution.md
@@ -412,6 +412,6 @@ Further jobscript examples may be found in the software section and the [Capacit
 [2]: network.md
 [3]: hardware-overview.md
 [4]: storage.md
-[5]: ../software/mpi/Running_OpenMPI.md
+[5]: ../software/mpi/running_openmpi.md
 [6]: ../software/mpi/running-mpich2.md
 [7]: capacity-computing.md
diff --git a/docs.it4i/software/debuggers/scalasca.md b/docs.it4i/software/debuggers/scalasca.md
index 113d3bd98234bed6d7df35d2062144d809a86a90..c2b0141bc8af21ca279837d069703ba8d5fa5678 100644
--- a/docs.it4i/software/debuggers/scalasca.md
+++ b/docs.it4i/software/debuggers/scalasca.md
@@ -71,7 +71,7 @@ Refer to [CUBE documentation][7] on usage of the GUI viewer.
 
 [1]: ../../modules-matrix.mdl
 [2]: ../compilers.md
-[3]: ../mpi/Running_OpenMPI.md
+[3]: ../mpi/running_openmpi.md
 [4]: ../mpi/running-mpich2.md
 [5]: score-p.md
 [6]: ../../salomon/storage.md
diff --git a/docs.it4i/software/debuggers/score-p.md b/docs.it4i/software/debuggers/score-p.md
index 6f8ade48e14b334ae863d9d10a5205b07d482681..6315249f4993dab841bf2e2fb6214a5d478b7bde 100644
--- a/docs.it4i/software/debuggers/score-p.md
+++ b/docs.it4i/software/debuggers/score-p.md
@@ -119,7 +119,7 @@ The directives are ignored if the program is compiled without Score-P. Again, re
 [1]: scalasca.md
 [2]: ../../modules-matrix.md
 [3]: ../compilers.md
-[4]: ../mpi/Running_OpenMPI.md
+[4]: ../mpi/running_openmpi.md
 [5]: ../mpi/running-mpich2.md
 
 [a]: http://www.vi-hps.org/projects/score-p/
diff --git a/docs.it4i/software/mpi/mpi.md b/docs.it4i/software/mpi/mpi.md
index 6325291ac13a5296f3bfd006e0ee368673e138e6..43e8c61d073024b7f8ef54a4ee556b7832a0759b 100644
--- a/docs.it4i/software/mpi/mpi.md
+++ b/docs.it4i/software/mpi/mpi.md
@@ -141,7 +141,7 @@ The [OpenMPI 1.8.6][a] is based on OpenMPI. Read more on [how to run OpenMPI][2]
 The Intel MPI may run on the [Intel Xeon Ph][3] accelerators as well. Read more on [how to run Intel MPI on accelerators][3].
 
 [1]: ../../modules-matrix.md
-[2]: Running_OpenMPI.md
+[2]: running_openmpi.md
 [3]: ../intel/intel-xeon-phi-salomon.md
 
 [a]: http://www.open-mpi.org/
diff --git a/docs.it4i/software/mpi/mpi4py-mpi-for-python.md b/docs.it4i/software/mpi/mpi4py-mpi-for-python.md
index 1a906ff1b954ec33fe440c08f6be00fa2ab26033..08257005720a14d1431dfbb8871d38fcc469cd96 100644
--- a/docs.it4i/software/mpi/mpi4py-mpi-for-python.md
+++ b/docs.it4i/software/mpi/mpi4py-mpi-for-python.md
@@ -170,6 +170,6 @@ $ mpirun -n 2 python myprogram.py
 
 You can increase n and watch time lowering.
 
-[1]: Running_OpenMPI.md
+[1]: running_openmpi.md
 
 [a]: https://pypi.python.org/pypi/mpi4py
diff --git a/docs.it4i/software/numerical-languages/opencoarrays.md b/docs.it4i/software/numerical-languages/opencoarrays.md
index 2f08da82bdb61afc49408e13511bf69d12f2ed3f..c909838d95e84e5361d03c1554ec4f3870c56b4f 100644
--- a/docs.it4i/software/numerical-languages/opencoarrays.md
+++ b/docs.it4i/software/numerical-languages/opencoarrays.md
@@ -124,4 +124,4 @@ $ mpiexec -np 4 ./synchronization_test.x
 
 **-np 4** is number of images to run. The parameters of **cafrun** and **mpiexec** are the same.
 
-For more information about running CAF program follow [Running OpenMPI - Salomon](../mpi/Running_OpenMPI.md)
+For more information about running CAF program follow [Running OpenMPI - Salomon](../mpi/running_openmpi.md)
diff --git a/docs.it4i/software/numerical-languages/r.md b/docs.it4i/software/numerical-languages/r.md
index 81ba42081f1261d525dec8bb4c9bf5c7b17ce842..27b2d550d81509f9e0056426bf3d19c9c2877d62 100644
--- a/docs.it4i/software/numerical-languages/r.md
+++ b/docs.it4i/software/numerical-languages/r.md
@@ -144,7 +144,7 @@ Every evaluation of the integrad function runs in parallel on different process.
 
 package Rmpi provides an interface (wrapper) to MPI APIs.
 
-It also provides interactive R slave environment. On the cluster, Rmpi provides interface to the [OpenMPI](software/mpi/Running_OpenMPI/).
+It also provides interactive R slave environment. On the cluster, Rmpi provides interface to the [OpenMPI](software/mpi/running_openmpi/).
 
 Read more on Rmpi at <http://cran.r-project.org/web/packages/Rmpi/>, reference manual is available at [here](http://cran.r-project.org/web/packages/Rmpi/Rmpi.pdf)