Commit 80ab54f4 authored by David Hrbáč

Links OK

parent 44cc1ab2
@@ -412,6 +412,6 @@ Further jobscript examples may be found in the software section and the [Capacit
 [2]: network.md
 [3]: hardware-overview.md
 [4]: storage.md
-[5]: ../software/mpi/Running_OpenMPI.md
+[5]: ../software/mpi/running_openmpi.md
 [6]: ../software/mpi/running-mpich2.md
 [7]: capacity-computing.md
@@ -71,7 +71,7 @@ Refer to [CUBE documentation][7] on usage of the GUI viewer.
 [1]: ../../modules-matrix.mdl
 [2]: ../compilers.md
-[3]: ../mpi/Running_OpenMPI.md
+[3]: ../mpi/running_openmpi.md
 [4]: ../mpi/running-mpich2.md
 [5]: score-p.md
 [6]: ../../salomon/storage.md
...
@@ -119,7 +119,7 @@ The directives are ignored if the program is compiled without Score-P. Again, re
 [1]: scalasca.md
 [2]: ../../modules-matrix.md
 [3]: ../compilers.md
-[4]: ../mpi/Running_OpenMPI.md
+[4]: ../mpi/running_openmpi.md
 [5]: ../mpi/running-mpich2.md
 [a]: http://www.vi-hps.org/projects/score-p/
...
@@ -141,7 +141,7 @@ The [OpenMPI 1.8.6][a] is based on OpenMPI. Read more on [how to run OpenMPI][2]
 The Intel MPI may run on the [Intel Xeon Phi][3] accelerators as well. Read more on [how to run Intel MPI on accelerators][3].
 [1]: ../../modules-matrix.md
-[2]: Running_OpenMPI.md
+[2]: running_openmpi.md
 [3]: ../intel/intel-xeon-phi-salomon.md
 [a]: http://www.open-mpi.org/
@@ -170,6 +170,6 @@ $ mpirun -n 2 python myprogram.py
 You can increase n and watch the time decrease.
-[1]: Running_OpenMPI.md
+[1]: running_openmpi.md
 [a]: https://pypi.python.org/pypi/mpi4py
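The hunk above times an unspecified `myprogram.py` under `mpirun`. For orientation only, a minimal mpi4py sketch consistent with that invocation might look like the following; the script body is an assumption, not part of this commit:

```python
# myprogram.py -- hypothetical stand-in for the script named in the hunk above.
# Splits a large sum across MPI ranks, so elapsed time drops as -n grows.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 10**8  # total amount of work, shared among all ranks (assumed value)

start = MPI.Wtime()

# Each rank sums its own interleaved slice of the range.
local_sum = sum(range(rank, N, size))

# Combine the partial sums on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

elapsed = MPI.Wtime() - start
if rank == 0:
    print("sum = %d, elapsed = %.2f s on %d ranks" % (total, elapsed, size))
```

Run it as in the hunk, `$ mpirun -n 2 python myprogram.py`, and increase `-n` to watch the elapsed time fall.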
@@ -124,4 +124,4 @@ $ mpiexec -np 4 ./synchronization_test.x
 **-np 4** is the number of images to run. The parameters of **cafrun** and **mpiexec** are the same.
-For more information about running CAF program follow [Running OpenMPI - Salomon](../mpi/Running_OpenMPI.md)
+For more information about running CAF program follow [Running OpenMPI - Salomon](../mpi/running_openmpi.md)
@@ -144,7 +144,7 @@ Every evaluation of the integrand function runs in parallel on a different process.
 package Rmpi provides an interface (wrapper) to MPI APIs.
-It also provides interactive R slave environment. On the cluster, Rmpi provides interface to the [OpenMPI](software/mpi/Running_OpenMPI/).
+It also provides interactive R slave environment. On the cluster, Rmpi provides interface to the [OpenMPI](software/mpi/running_openmpi/).
 Read more on Rmpi at <http://cran.r-project.org/web/packages/Rmpi/>; the reference manual is available [here](http://cran.r-project.org/web/packages/Rmpi/Rmpi.pdf)
...