Commit 44cc1ab2 authored by David Hrbáč

Link test

parent 9a7dc58e
@@ -16,12 +16,15 @@ Test module:
$ ml Tensorflow
```
Read more about available versions at the [TensorFlow page](software/machine-learning/tensorflow/).
Read more about available versions at the [TensorFlow page][1].
## Theano
Read more about [available versions](modules-matrix/).
Read more about [available versions][2].
## Keras
Read more about [available versions](modules-matrix/).
Read more about [available versions][2].
[1]: tensorflow.md
[2]: ../../modules-matrix.md
# Intel Xeon Phi Environment
Intel Xeon Phi (so-called MIC) accelerator can be used in several modes ([Offload](software/intel/intel-xeon-phi-salomon/#offload-mode) and [Native](#native-mode)). The default mode on the cluster is offload mode, but all modes described in this document are supported.
The Intel Xeon Phi (so-called MIC) accelerator can be used in several modes ([Offload][1] and [Native][2]). The default mode on the cluster is the offload mode, but all modes described in this document are supported.
See sections below for more details.
## Intel Utilities for Xeon Phi
Continue [here](software/intel/intel-xeon-phi-salomon/)
Continue [here][3].
## GCC With [KNC](https://en.wikipedia.org/wiki/Xeon_Phi) Support
## GCC With KNC Support
On Salomon cluster we have module `GCC/5.1.1-knc` with cross-compiled (offload) support. (gcc, g++ and gfortran)
On the Salomon cluster, the `GCC/5.1.1-knc` module provides cross-compiled (offload) support for [KNC][a] (gcc, g++, and gfortran).
!!! warning
Only Salomon cluster.
Salomon cluster only.
To compile code using the GCC compiler, run the following commands
@@ -205,7 +205,7 @@ Test passed
```
!!! tip
Or use the procedure from the chapter [Devel Environment](#devel-environment).
Or use the procedure from the chapter [Devel Environment][4].
## Only Intel Xeon Phi Cards
@@ -434,4 +434,12 @@ Configure step (for `configure`,`make` and `make install` software)
Modulefile and Lmod
* Read [Lmod](software/modules/lmod/)
* Read [Lmod][5]
[1]: ../intel/intel-xeon-phi-salomon.md#offload-mode
[2]: #native-mode
[3]: ../intel/intel-xeon-phi-salomon.md
[4]: #devel-environment
[5]: ../modules/lmod.md
[a]: https://en.wikipedia.org/wiki/Xeon_Phi
@@ -2,7 +2,7 @@
Lmod is a modules tool, a modern alternative to the outdated and no longer actively maintained Tcl-based environment modules tool.
Detailed documentation on Lmod is available [here](http://lmod.readthedocs.io).
Detailed documentation on Lmod is available [here][a].
## Benefits
@@ -40,7 +40,7 @@ Currently Loaded Modules:
```
!!! tip
For more details on sticky modules, see the section on [ml purge](#resetting-by-unloading-all-modules).
For more details on sticky modules, see the section on [ml purge][1].
## Searching for Available Modules
@@ -317,3 +317,7 @@ Named collection list:
To inspect a collection, use `ml describe`.
To remove a module collection, remove the corresponding entry in `$HOME/.lmod.d`.
[1]: #resetting-by-unloading-all-modules
[a]: http://lmod.readthedocs.io
@@ -39,7 +39,7 @@ Examples:
$ ml gompi/2015b
```
In this example, we activate the latest OpenMPI with latest GNU compilers (OpenMPI 1.8.6 and GCC 5.1). Please see more information about toolchains in section [Environment and Modules](../../modules-matrix/) .
In this example, we activate the latest OpenMPI with the latest GNU compilers (OpenMPI 1.8.6 and GCC 5.1). For more information about toolchains, see the [Environment and Modules][1] section.
To use OpenMPI with the intel compiler suite, use
@@ -136,6 +136,12 @@ In the previous two cases with one or two MPI processes per node, the operating
### Running OpenMPI
The [**OpenMPI 1.8.6**](http://www.open-mpi.org/) is based on OpenMPI. Read more on [how to run OpenMPI](software/mpi/Running_OpenMPI/) based MPI.
The available [OpenMPI 1.8.6][a] is based on the OpenMPI project. Read more on [how to run OpenMPI][2]-based MPI.
The Intel MPI may run on the [Intel Xeon Ph](software/intel/intel-xeon-phi-salomon/) accelerators as well. Read more on [how to run Intel MPI on accelerators](software/intel/intel-xeon-phi-salomon/).
Intel MPI may also run on the [Intel Xeon Phi][3] accelerators. Read more on [how to run Intel MPI on accelerators][3].
[1]: ../../modules-matrix.md
[2]: Running_OpenMPI.md
[3]: ../intel/intel-xeon-phi-salomon.md
[a]: http://www.open-mpi.org/
@@ -42,7 +42,7 @@ You need to import MPI to your python program. Include the following line to the
from mpi4py import MPI
```
The MPI4Py enabled python programs [execute as any other OpenMPI](salomon/mpi/Running_OpenMPI/) code.The simpliest way is to run
MPI4Py-enabled Python programs [execute as any other OpenMPI][1] code. The simplest way is to run
```console
$ mpiexec python <script>.py
@@ -106,7 +106,7 @@ $ ml OpenMPI
$ mpiexec -bycore -bind-to-core python hello_world.py
```
In this example, we run MPI4Py enabled code on 4 nodes, 16 cores per node (total of 64 processes), each python process is bound to a different core. More examples and documentation can be found on [MPI for Python webpage](https://pypi.python.org/pypi/mpi4py).
In this example, we run MPI4Py-enabled code on 4 nodes, 16 cores per node (64 processes in total); each Python process is bound to a different core. More examples and documentation can be found on the [MPI for Python webpage][a].
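As a point of reference, a minimal sketch of what such a `hello_world.py` might contain is shown below; the actual script is not reproduced on this page, so treat its contents as an illustrative assumption.

```python
# hello_world.py -- illustrative MPI4Py sketch (not the page's original script)
from mpi4py import MPI

comm = MPI.COMM_WORLD              # communicator spanning all launched processes
rank = comm.Get_rank()             # rank (ID) of this process
size = comm.Get_size()             # total number of processes
node = MPI.Get_processor_name()    # name of the node this rank runs on

print(f"Hello from rank {rank} of {size} on node {node}")
```

Launched with `mpiexec` as shown above, each process prints one line identifying its rank and node.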
### Adding Numbers
@@ -169,3 +169,7 @@ $ mpirun -n 2 python myprogram.py
```
You can increase n and watch the execution time decrease.
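The `myprogram.py` used above is not shown in this excerpt; purely as an illustration of the pattern (parallel partial sums combined by a reduction), a hedged sketch could look like the following. The range size, chunking scheme, and timing are assumptions, not the page's original program.

```python
# myprogram.py (hypothetical sketch) -- sum 1..N in parallel and time it
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N = 10_000_000
# each rank sums a disjoint, interleaved slice of 1..N
chunk = np.arange(rank + 1, N + 1, size, dtype=np.int64)

t0 = MPI.Wtime()
local_sum = chunk.sum()
total = comm.reduce(local_sum, op=MPI.SUM, root=0)   # combine partial sums on rank 0
elapsed = MPI.Wtime() - t0

if rank == 0:
    print(f"sum = {total}, elapsed = {elapsed:.4f} s on {size} processes")
```

With more processes, each rank handles a smaller chunk, which is why the measured time drops as n grows.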
[1]: Running_OpenMPI.md
[a]: https://pypi.python.org/pypi/mpi4py
@@ -139,7 +139,7 @@ In this example, we see that ranks have been mapped on nodes according to the or
### Process Binding
The Intel MPI automatically binds each process and its threads to the corresponding portion of cores on the processor socket of the node, no options needed. The binding is primarily controlled by environment variables. Read more about mpi process binding on [Intel website](https://software.intel.com/sites/products/documentation/hpc/ics/impi/41/lin/Reference_Manual/Environment_Variables_Process_Pinning.htm). The MPICH2 uses the -bind-to option Use -bind-to numa or -bind-to core to bind the process on single core or entire socket.
Intel MPI automatically binds each process and its threads to the corresponding portion of cores on the processor socket of the node; no options are needed. The binding is primarily controlled by environment variables. Read more about MPI process binding on the [Intel website][a]. MPICH2 uses the `-bind-to` option: use `-bind-to numa` or `-bind-to core` to bind a process to an entire socket or a single core, respectively.
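Besides the verification command shown in the next section, a short mpi4py script can reveal which cores each rank was actually pinned to. The sketch below is our own illustration (assuming mpi4py is available); it relies on `os.sched_getaffinity`, which is Linux-only.

```python
# affinity_check.py (hypothetical helper) -- print each rank's CPU affinity
import os
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
host = MPI.Get_processor_name()
cores = sorted(os.sched_getaffinity(0))   # cores this process is allowed to run on

print(f"rank {rank} on {host}: pinned to cores {cores}")
```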
### Bindings Verification
@@ -152,4 +152,8 @@ $ mpirun -bindto numa echo $OMP_NUM_THREADS
## Intel MPI on Xeon Phi
The [MPI section of Intel Xeon Phi chapter](software/intel/intel-xeon-phi-salomon/) provides details on how to run Intel MPI code on Xeon Phi architecture.
The [MPI section of Intel Xeon Phi chapter][1] provides details on how to run Intel MPI code on Xeon Phi architecture.
[1]: ../intel/intel-xeon-phi-salomon.md
[a]: https://software.intel.com/sites/products/documentation/hpc/ics/impi/41/lin/Reference_Manual/Environment_Variables_Process_Pinning.htm
@@ -147,7 +147,7 @@ nav:
- Introduction: software/mpi/mpi.md
- OpenMPI Examples: software/mpi/ompi-examples.md
- MPI4Py (MPI for Python): software/mpi/mpi4py-mpi-for-python.md
- Running Open MPI: software/mpi/Running_OpenMPI.md
- Running Open MPI: software/mpi/running_openmpi.md
- Running MPICH2: software/mpi/running-mpich2.md
- Numerical Languages:
- Introduction: software/numerical-languages/introduction.md