From 80ab54f4900c03fc87245c944e326fcfa080ebf3 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?David=20Hrb=C3=A1=C4=8D?= <david@hrbac.cz>
Date: Thu, 1 Nov 2018 21:33:43 +0100
Subject: [PATCH] Links OK

---
 docs.it4i/anselm/job-submission-and-execution.md       | 2 +-
 docs.it4i/software/debuggers/scalasca.md               | 2 +-
 docs.it4i/software/debuggers/score-p.md                | 2 +-
 docs.it4i/software/mpi/mpi.md                          | 2 +-
 docs.it4i/software/mpi/mpi4py-mpi-for-python.md        | 2 +-
 docs.it4i/software/numerical-languages/opencoarrays.md | 2 +-
 docs.it4i/software/numerical-languages/r.md            | 2 +-
 7 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/docs.it4i/anselm/job-submission-and-execution.md b/docs.it4i/anselm/job-submission-and-execution.md
index ef21fdde2..3739fd0be 100644
--- a/docs.it4i/anselm/job-submission-and-execution.md
+++ b/docs.it4i/anselm/job-submission-and-execution.md
@@ -412,6 +412,6 @@ Further jobscript examples may be found in the software section and the [Capacit
 [2]: network.md
 [3]: hardware-overview.md
 [4]: storage.md
-[5]: ../software/mpi/Running_OpenMPI.md
+[5]: ../software/mpi/running_openmpi.md
 [6]: ../software/mpi/running-mpich2.md
 [7]: capacity-computing.md
diff --git a/docs.it4i/software/debuggers/scalasca.md b/docs.it4i/software/debuggers/scalasca.md
index 113d3bd98..c2b0141bc 100644
--- a/docs.it4i/software/debuggers/scalasca.md
+++ b/docs.it4i/software/debuggers/scalasca.md
@@ -71,7 +71,7 @@ Refer to [CUBE documentation][7] on usage of the GUI viewer.
 
 [1]: ../../modules-matrix.mdl
 [2]: ../compilers.md
-[3]: ../mpi/Running_OpenMPI.md
+[3]: ../mpi/running_openmpi.md
 [4]: ../mpi/running-mpich2.md
 [5]: score-p.md
 [6]: ../../salomon/storage.md
diff --git a/docs.it4i/software/debuggers/score-p.md b/docs.it4i/software/debuggers/score-p.md
index 6f8ade48e..6315249f4 100644
--- a/docs.it4i/software/debuggers/score-p.md
+++ b/docs.it4i/software/debuggers/score-p.md
@@ -119,7 +119,7 @@ The directives are ignored if the program is compiled without Score-P. Again, re
 
 [1]: scalasca.md
 [2]: ../../modules-matrix.md
 [3]: ../compilers.md
-[4]: ../mpi/Running_OpenMPI.md
+[4]: ../mpi/running_openmpi.md
 [5]: ../mpi/running-mpich2.md
 [a]: http://www.vi-hps.org/projects/score-p/
diff --git a/docs.it4i/software/mpi/mpi.md b/docs.it4i/software/mpi/mpi.md
index 6325291ac..43e8c61d0 100644
--- a/docs.it4i/software/mpi/mpi.md
+++ b/docs.it4i/software/mpi/mpi.md
@@ -141,7 +141,7 @@ The [OpenMPI 1.8.6][a] is based on OpenMPI. Read more on [how to run OpenMPI][2]
 The Intel MPI may run on the [Intel Xeon Ph][3] accelerators as well. Read more on [how to run Intel MPI on accelerators][3].
 
 [1]: ../../modules-matrix.md
-[2]: Running_OpenMPI.md
+[2]: running_openmpi.md
 [3]: ../intel/intel-xeon-phi-salomon.md
 
 [a]: http://www.open-mpi.org/
diff --git a/docs.it4i/software/mpi/mpi4py-mpi-for-python.md b/docs.it4i/software/mpi/mpi4py-mpi-for-python.md
index 1a906ff1b..082570057 100644
--- a/docs.it4i/software/mpi/mpi4py-mpi-for-python.md
+++ b/docs.it4i/software/mpi/mpi4py-mpi-for-python.md
@@ -170,6 +170,6 @@ $ mpirun -n 2 python myprogram.py
 
 You can increase n and watch time lowering.
 
-[1]: Running_OpenMPI.md
+[1]: running_openmpi.md
 
 [a]: https://pypi.python.org/pypi/mpi4py
diff --git a/docs.it4i/software/numerical-languages/opencoarrays.md b/docs.it4i/software/numerical-languages/opencoarrays.md
index 2f08da82b..c909838d9 100644
--- a/docs.it4i/software/numerical-languages/opencoarrays.md
+++ b/docs.it4i/software/numerical-languages/opencoarrays.md
@@ -124,4 +124,4 @@ $ mpiexec -np 4 ./synchronization_test.x
 
 **-np 4** is number of images to run. The parameters of **cafrun** and **mpiexec** are the same.
 
-For more information about running CAF program follow [Running OpenMPI - Salomon](../mpi/Running_OpenMPI.md)
+For more information about running CAF program follow [Running OpenMPI - Salomon](../mpi/running_openmpi.md)
diff --git a/docs.it4i/software/numerical-languages/r.md b/docs.it4i/software/numerical-languages/r.md
index 81ba42081..27b2d550d 100644
--- a/docs.it4i/software/numerical-languages/r.md
+++ b/docs.it4i/software/numerical-languages/r.md
@@ -144,7 +144,7 @@ Every evaluation of the integrad function runs in parallel on different process.
 
 package Rmpi provides an interface (wrapper) to MPI APIs.
 
-It also provides interactive R slave environment. On the cluster, Rmpi provides interface to the [OpenMPI](software/mpi/Running_OpenMPI/).
+It also provides interactive R slave environment. On the cluster, Rmpi provides interface to the [OpenMPI](software/mpi/running_openmpi/).
 
 Read more on Rmpi at <http://cran.r-project.org/web/packages/Rmpi/>, reference manual is available at [here](http://cran.r-project.org/web/packages/Rmpi/Rmpi.pdf)
-- 
GitLab