Commit 01cd1f05 authored by Lukáš Krupčík

fix links

parent bd89c66c
Showing 71 additions and 71 deletions
@@ -2,7 +2,7 @@
## Intel Threading Building Blocks
Intel Threading Building Blocks (Intel TBB) is a library that supports scalable parallel programming using standard ISO C++ code. It does not require special languages or compilers. To use the library, you specify tasks, not threads, and let the library map tasks onto threads in an efficient manner. The tasks are executed by a runtime scheduler and may be offloaded to [MIC accelerator](../intel-xeon-phi-salomon/).
Intel Threading Building Blocks (Intel TBB) is a library that supports scalable parallel programming using standard ISO C++ code. It does not require special languages or compilers. To use the library, you specify tasks, not threads, and let the library map tasks onto threads in an efficient manner. The tasks are executed by a runtime scheduler and may be offloaded to the [MIC accelerator](software/intel/intel-xeon-phi-salomon/).
Intel TBB is available on the cluster.
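To illustrate the task model, here is a minimal sketch (illustrative only, not taken from the cluster documentation) that sums squares with `tbb::parallel_for`; it compiles the same way as the example below, i.e. with `-ltbb`:

```cpp
// Minimal TBB sketch: parallel sum of squares (file and names are illustrative).
// Compile e.g.: icc -O2 -o sum.x sum.cpp -ltbb
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <atomic>
#include <cstdio>

int main() {
    const std::size_t n = 1000000;
    std::atomic<long long> sum{0};
    // We only describe the work (tasks); TBB maps the iteration ranges onto threads.
    tbb::parallel_for(tbb::blocked_range<std::size_t>(0, n),
        [&](const tbb::blocked_range<std::size_t>& r) {
            long long local = 0;
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                local += static_cast<long long>(i) * i;
            sum += local;   // one atomic update per range, not per element
        });
    std::printf("sum = %lld\n", static_cast<long long>(sum));
    return 0;
}
```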
@@ -37,4 +37,4 @@ $ icc -O2 -o primes.x main.cpp primes.cpp -Wl,-rpath=$LIBRARY_PATH -ltbb
## Further Reading
Read more on Intel website, <http://software.intel.com/sites/products/documentation/doclib/tbb_sa/help/index.htm>
Read more on the Intel website: [http://software.intel.com/sites/products/documentation/doclib/tbb_sa/help/index.htm](http://software.intel.com/sites/products/documentation/doclib/tbb_sa/help/index.htm)
@@ -21,7 +21,7 @@ The trace will be saved in file myapp.stf in the current directory.
## Viewing Traces
To view and analyze the trace, open ITAC GUI in a [graphical environment](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/):
To view and analyze the trace, open ITAC GUI in a [graphical environment](general/accessing-the-clusters/graphical-user-interface/x-window-system/):
```console
$ ml itac/9.1.2.024
@@ -30,7 +30,7 @@ $ traceanalyzer
The GUI will launch and you can open the produced `*.stf` file.
![](../../img/Snmekobrazovky20151204v15.35.12.png)
![](img/Snmekobrazovky20151204v15.35.12.png)
Please refer to the Intel documentation for usage of the GUI tool.
@@ -15,7 +15,7 @@ If an ISV application was purchased for educational (research) purposes and also
### Web Interface
For each license there is a table, which provides the information about the name, number of available (purchased/licensed), number of used and number of free license features <https://extranet.it4i.cz/anselm/licenses>
For each license, there is a table which provides the name, the number of available (purchased/licensed), used, and free license features: [https://extranet.it4i.cz/anselm/licenses](https://extranet.it4i.cz/anselm/licenses)
### Text Interface
@@ -68,7 +68,7 @@ Names of applications (APP):
matlab-edu
```
To get the FEATUREs of a license take a look into the corresponding state file ([see above](isv_licenses/#Licence)), or use:
To get the FEATUREs of a license, take a look into the corresponding state file ([see above](software/isv_licenses/#Licence)), or use:
### Application and List of Provided Features
@@ -22,7 +22,7 @@ $ javac -version
$ which javac
```
Java applications may use MPI for inter-process communication, in conjunction with OpenMPI. Read more on <http://www.open-mpi.org/faq/?category=java>. This functionality is currently not supported on Anselm cluster. In case you require the java interface to MPI, contact [cluster support](https://support.it4i.cz/rt/).
Java applications may use MPI for inter-process communication, in conjunction with OpenMPI. Read more [here](http://www.open-mpi.org/faq/?category=java). This functionality is currently not supported on the Anselm cluster. In case you require the Java interface to MPI, contact [cluster support](https://support.it4i.cz/rt/).
## Java With OpenMPI
@@ -16,12 +16,12 @@ Test module:
$ ml Tensorflow
```
Read more about available versions at the [TensorFlow page](tensorflow/).
Read more about available versions at the [TensorFlow page](software/machine-learning/tensorflow/).
## Theano
Read more about [available versions](../../modules-matrix/).
Read more about [available versions](modules-matrix/).
## Keras
Read more about [available versions](../../modules-matrix/).
Read more about [available versions](modules-matrix/).
# Intel Xeon Phi Environment
Intel Xeon Phi (so-called MIC) accelerator can be used in several modes ([Offload](../intel/intel-xeon-phi-salomon/#offload-mode) and [Native](#native-mode)). The default mode on the cluster is offload mode, but all modes described in this document are supported.
The Intel Xeon Phi (so-called MIC) accelerator can be used in several modes ([Offload](software/intel/intel-xeon-phi-salomon/#offload-mode) and [Native](#native-mode)). The default mode on the cluster is offload mode, but all modes described in this document are supported.
See sections below for more details.
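For orientation, in offload mode the host program runs on the CPU and annotated regions are executed on the MIC card. A hedged sketch using the Intel compiler's offload pragma (illustrative only; see the Intel Xeon Phi page referenced below for the cluster-supported workflow and compiler flags):

```cpp
// Offload-mode sketch (illustrative): the marked block runs on the MIC card.
// Requires an Intel compiler with MIC offload support; names/flags are assumptions.
#include <cstdio>

int main() {
    const int n = 1000;
    static double a[1000], b[1000];
    for (int i = 0; i < n; ++i) a[i] = i;

    // Arrays 'a' and 'b' are copied to/from the coprocessor around the block.
    #pragma offload target(mic) in(a) out(b)
    {
        for (int i = 0; i < n; ++i)
            b[i] = 2.0 * a[i];
    }

    std::printf("b[42] = %f\n", b[42]);
    return 0;
}
```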
## Intel Utilities for Xeon Phi
Continue [here](../intel/intel-xeon-phi-salomon/)
Continue [here](software/intel/intel-xeon-phi-salomon/)
## GCC With [KNC](https://en.wikipedia.org/wiki/Xeon_Phi) Support
@@ -434,4 +434,4 @@ Configure step (for `configure`, `make` and `make install` software)
Modulefile and Lmod
* Read [Lmod](../modules/lmod/)
* Read [Lmod](software/modules/lmod/)
@@ -136,6 +136,6 @@ In the previous two cases with one or two MPI processes per node, the operating
### Running OpenMPI
The [**OpenMPI 1.8.6**](http://www.open-mpi.org/) is based on OpenMPI. Read more on [how to run OpenMPI](Running_OpenMPI/) based MPI.
[**OpenMPI 1.8.6**](http://www.open-mpi.org/) is an open-source MPI implementation. Read more on [how to run OpenMPI](software/mpi/Running_OpenMPI/)-based MPI programs.
The Intel MPI may run on the [Intel Xeon Ph](../intel/intel-xeon-phi-salomon/) accelerators as well. Read more on [how to run Intel MPI on accelerators](../intel/intel-xeon-phi-salomon/).
Intel MPI may run on the [Intel Xeon Phi](software/intel/intel-xeon-phi-salomon/) accelerators as well. Read more on [how to run Intel MPI on accelerators](software/intel/intel-xeon-phi-salomon/).
@@ -42,7 +42,7 @@ You need to import MPI to your python program. Include the following line to the
from mpi4py import MPI
```
The MPI4Py enabled python programs [execute as any other OpenMPI](Running_OpenMPI/) code.The simpliest way is to run
MPI4Py-enabled Python programs [execute as any other OpenMPI](salomon/mpi/Running_OpenMPI/) code. The simplest way is to run
```console
$ mpiexec python <script>.py
@@ -178,35 +178,35 @@ class Hello {
}
```
* C: [hello_c.c](../../src/ompi/hello_c.c)
* C++: [hello_cxx.cc](../../src/ompi/hello_cxx.cc)
* Fortran mpif.h: [hello_mpifh.f](../../src/ompi/hello_mpifh.f)
* Fortran use mpi: [hello_usempi.f90](../../src/ompi/hello_usempi.f90)
* Fortran use mpi_f08: [hello_usempif08.f90](../../src/ompi/hello_usempif08.f90)
* Java: [Hello.java](../../src/ompi/Hello.java)
* C shmem.h: [hello_oshmem_c.c](../../src/ompi/hello_oshmem_c.c)
* Fortran shmem.fh: [hello_oshmemfh.f90](../../src/ompi/hello_oshmemfh.f90)
* C: [hello_c.c](src/ompi/hello_c.c)
* C++: [hello_cxx.cc](src/ompi/hello_cxx.cc)
* Fortran mpif.h: [hello_mpifh.f](src/ompi/hello_mpifh.f)
* Fortran use mpi: [hello_usempi.f90](src/ompi/hello_usempi.f90)
* Fortran use mpi_f08: [hello_usempif08.f90](src/ompi/hello_usempif08.f90)
* Java: [Hello.java](src/ompi/Hello.java)
* C shmem.h: [hello_oshmem_c.c](src/ompi/hello_oshmem_c.c)
* Fortran shmem.fh: [hello_oshmemfh.f90](src/ompi/hello_oshmemfh.f90)
### Send a Trivial Message Around in a Ring
* C: [ring_c.c](../../src/ompi/ring_c.c)
* C++: [ring_cxx.cc](../../src/ompi/ring_cxx.cc)
* Fortran mpif.h: [ring_mpifh.f](../../src/ompi/ring_mpifh.f)
* Fortran use mpi: [ring_usempi.f90](../../src/ompi/ring_usempi.f90)
* Fortran use mpi_f08: [ring_usempif08.f90](../../src/ompi/ring_usempif08.f90)
* Java: [Ring.java](../../src/ompi/Ring.java)
* C shmem.h: [ring_oshmem_c.c](../../src/ompi/ring_oshmem_c.c)
* Fortran shmem.fh: [ring_oshmemfh.f90](../../src/ompi/ring_oshmemfh.f90)
* C: [ring_c.c](src/ompi/ring_c.c)
* C++: [ring_cxx.cc](src/ompi/ring_cxx.cc)
* Fortran mpif.h: [ring_mpifh.f](src/ompi/ring_mpifh.f)
* Fortran use mpi: [ring_usempi.f90](src/ompi/ring_usempi.f90)
* Fortran use mpi_f08: [ring_usempif08.f90](src/ompi/ring_usempif08.f90)
* Java: [Ring.java](src/ompi/Ring.java)
* C shmem.h: [ring_oshmem_c.c](src/ompi/ring_oshmem_c.c)
* Fortran shmem.fh: [ring_oshmemfh.f90](src/ompi/ring_oshmemfh.f90)
Additionally, there's one further example application, but this one only uses the MPI C bindings:
### Test the Connectivity Between All Processes
* C: [connectivity_c.c](../../src/ompi/connectivity_c.c)
* C: [connectivity_c.c](src/ompi/connectivity_c.c)
## Build Examples
Download [examples](../../src/ompi/ompi.tar.gz).
Download [examples](src/ompi/ompi.tar.gz).
The Makefile in this directory will build the examples for the supported languages (e.g., if you do not have the Fortran "use mpi" bindings compiled as part of OpenMPI, those examples will be skipped).
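For orientation, a minimal MPI "hello" program in the spirit of the `hello_c.c` example above might look as follows (a sketch, not the file shipped in the archive):

```cpp
// Minimal MPI hello sketch (illustrative; not the archived hello_c.c).
// Compile with the MPI wrapper, e.g.: mpic++ hello.cpp -o hello.x
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);                 // start the MPI runtime
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // id of this process
    MPI_Comm_size(MPI_COMM_WORLD, &size);   // total number of processes
    std::printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                         // shut the runtime down
    return 0;
}
```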
@@ -152,4 +152,4 @@ $ mpirun -bindto numa echo $OMP_NUM_THREADS
## Intel MPI on Xeon Phi
The[MPI section of Intel Xeon Phi chapter](../intel/intel-xeon-phi-salomon/) provides details on how to run Intel MPI code on Xeon Phi architecture.
The [MPI section of Intel Xeon Phi chapter](software/intel/intel-xeon-phi-salomon/) provides details on how to run Intel MPI code on Xeon Phi architecture.
@@ -15,7 +15,7 @@ $ ml MATLAB
$ matlab
```
Read more at the [Matlab page](matlab/).
Read more at the [Matlab page](software/numerical-languages/matlab/).
## Octave
@@ -26,7 +26,7 @@ $ ml Octave
$ octave
```
Read more at the [Octave page](octave/).
Read more at the [Octave page](software/numerical-languages/octave/).
## R
@@ -37,4 +37,4 @@ $ ml R
$ R
```
Read more at the [R page](r/).
Read more at the [R page](software/numerical-languages/r/).
@@ -21,9 +21,9 @@ $ ml av MATLAB
If you need to use the Matlab GUI to prepare your Matlab programs, you can use Matlab directly on the login nodes. For all computations, however, use Matlab on the compute nodes via the PBS Pro scheduler.
If you require the Matlab GUI, follow the general information about [running graphical applications](../../general/accessing-the-clusters/graphical-user-interface/x-window-system/).
If you require the Matlab GUI, follow the general information about [running graphical applications](general/accessing-the-clusters/graphical-user-interface/x-window-system/).
Matlab GUI is quite slow using the X forwarding built in the PBS (qsub -X), so using X11 display redirection either via SSH or directly by xauth (see the "GUI Applications on Compute Nodes over VNC" part [here](../../general/accessing-the-clusters/graphical-user-interface/x-window-system/)) is recommended.
The Matlab GUI is quite slow using the X forwarding built into PBS (qsub -X), so X11 display redirection, either via SSH or directly by xauth (see the "GUI Applications on Compute Nodes over VNC" part [here](general/accessing-the-clusters/graphical-user-interface/x-window-system/)), is recommended.
To run Matlab with GUI, use
@@ -68,7 +68,7 @@ With the new mode, MATLAB itself launches the workers via PBS, so you can either
### Parallel Matlab Interactive Session
Following example shows how to start interactive session with support for Matlab GUI. For more information about GUI based applications on Anselm see [this page](../../general/accessing-the-clusters/graphical-user-interface/x-window-system/).
The following example shows how to start an interactive session with support for the Matlab GUI. For more information about GUI-based applications on Anselm, see [this page](general/accessing-the-clusters/graphical-user-interface/x-window-system/).
```console
$ xhost +
@@ -218,7 +218,7 @@ This method is a "hack" invented by us to emulate the mpiexec functionality found
!!! warning
This method is experimental.
For this method, you need to use SalomonDirect profile, import it using [the same way as SalomonPBSPro](matlab.md#running-parallel-matlab-using-distributed-computing-toolbox---engine)
For this method, you need to use the SalomonDirect profile; import it [the same way as SalomonPBSPro](#running-parallel-matlab-using-distributed-computing-toolbox---engine).
This is an example of m-script using direct mode:
@@ -249,11 +249,11 @@ delete(pool)
### Non-Interactive Session and Licenses
If you want to run batch jobs with Matlab, be sure to request appropriate license features with the PBS Pro scheduler, at least the `-l __feature__matlab__MATLAB=1` for EDU variant of Matlab. More information about how to check the license features states and how to request them with PBS Pro, [look here](../isv_licenses/).
If you want to run batch jobs with Matlab, be sure to request the appropriate license features with the PBS Pro scheduler, at least `-l __feature__matlab__MATLAB=1` for the EDU variant of Matlab. For more information about how to check the license feature states and how to request them with PBS Pro, [look here](software/isv_licenses/).
The licensing feature of PBS is currently disabled.
In case of non-interactive session read the [following information](../isv_licenses/) on how to modify the qsub command to test for available licenses prior getting the resource allocation.
In case of a non-interactive session, read the [following information](software/isv_licenses/) on how to modify the qsub command to test for available licenses prior to getting the resource allocation.
### Matlab Distributed Computing Engines Start Up Time
@@ -278,4 +278,4 @@ Since this is a SMP machine, you can completely avoid using Parallel Toolbox and
### Local Cluster Mode
You can also use Parallel Toolbox on UV2000. Use [local cluster mode](matlab/#parallel-matlab-batch-job-in-local-mode), "SalomonPBSPro" profile will not work.
You can also use the Parallel Toolbox on UV2000. Use [local cluster mode](#parallel-matlab-batch-job-in-local-mode); the "SalomonPBSPro" profile will not work.
@@ -3,7 +3,7 @@
## Introduction
!!! note
This document relates to the old versions R2013 and R2014. For MATLAB 2015 use [this documentation instead](matlab/).
This document relates to the old versions R2013 and R2014. For MATLAB 2015 use [this documentation instead](software/numerical-languages/matlab/).
Matlab is available in the latest stable version. There are always two variants of the release:
@@ -46,7 +46,7 @@ Plots, images, etc... will be still available.
The recommended parallel mode for running parallel Matlab on Anselm is the MPIEXEC mode. In this mode, the user allocates resources through PBS prior to starting Matlab. Once resources are granted, the main Matlab instance is started on the first compute node assigned to the job by PBS, and workers are started on all remaining nodes. Both interactive and non-interactive PBS sessions can be used. This mode guarantees that data processing is not performed on the login nodes; all processing is done on the compute nodes.
![Parallel Matlab](../../img/Matlab.png "Parallel Matlab")
![Parallel Matlab](img/Matlab.png)
For performance reasons, Matlab should use the system MPI. On Anselm, the supported MPI implementation for Matlab is Intel MPI. To switch to the system MPI, the user has to override the default Matlab setting by creating a new configuration file in their home directory. The path and file name have to be exactly the same as in the following listing:
@@ -190,9 +190,9 @@ You can copy and paste the example in a .m file and execute. Note that the matla
### Non-Interactive Session and Licenses
If you want to run batch jobs with Matlab, be sure to request appropriate license features with the PBS Pro scheduler, at least the ` -l __feature__matlab__MATLAB=1` for EDU variant of Matlab. More information about how to check the license features states and how to request them with PBS Pro, [look here](../isv_licenses/).
If you want to run batch jobs with Matlab, be sure to request the appropriate license features with the PBS Pro scheduler, at least `-l __feature__matlab__MATLAB=1` for the EDU variant of Matlab. For more information about how to check the license feature states and how to request them with PBS Pro, [look here](software/isv_licenses/).
In case of non-interactive session read the [following information](../isv_licenses/) on how to modify the qsub command to test for available licenses prior getting the resource allocation.
In case of a non-interactive session, read the [following information](software/isv_licenses/) on how to modify the qsub command to test for available licenses prior to getting the resource allocation.
### Matlab Distributed Computing Engines Start Up Time
@@ -2,7 +2,7 @@
## Introduction
GNU Octave is a high-level interpreted language, primarily intended for numerical computations. It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. It also provides extensive graphics capabilities for data visualization and manipulation. Octave is normally used through its interactive command line interface, but it can also be used to write non-interactive programs. The Octave language is quite similar to Matlab so that most programs are easily portable. Read more on <http://www.gnu.org/software/octave/>
GNU Octave is a high-level interpreted language, primarily intended for numerical computations. It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. It also provides extensive graphics capabilities for data visualization and manipulation. Octave is normally used through its interactive command line interface, but it can also be used to write non-interactive programs. The Octave language is quite similar to Matlab so that most programs are easily portable. Read more on [http://www.gnu.org/software/octave/](http://www.gnu.org/software/octave/)
To look for available modules, type:
@@ -60,11 +60,11 @@ Octave may use MPI for interprocess communication. This functionality is currentl
## Xeon Phi Support
Octave may take advantage of the Xeon Phi accelerators. This will only work on the [Intel Xeon Phi](../intel/intel-xeon-phi-salomon/) [accelerated nodes](../../salomon/compute-nodes/).
Octave may take advantage of the Xeon Phi accelerators. This will only work on the [Intel Xeon Phi](software/intel/intel-xeon-phi-salomon/) [accelerated nodes](salomon/compute-nodes/).
### Automatic Offload Support
Octave can accelerate BLAS type operations (in particular the Matrix Matrix multiplications] on the Xeon Phi accelerator, via [Automatic Offload using the MKL library](../intel/intel-xeon-phi-salomon/)
Octave can accelerate BLAS-type operations (in particular, matrix-matrix multiplication) on the Xeon Phi accelerator via [Automatic Offload using the MKL library](software/intel/intel-xeon-phi-salomon/).
Example
@@ -88,7 +88,7 @@ In this example, the calculation was automatically divided among the CPU cores a
### Native Support
A version of [native](../intel/intel-xeon-phi-salomon/) Octave is compiled for Xeon Phi accelerators. Some limitations apply for this version:
A version of [native](software/intel/intel-xeon-phi-salomon/) Octave is compiled for Xeon Phi accelerators. Some limitations apply for this version:
* Only command line support. GUI, graph plotting etc. is not supported.
* Command history in interactive mode is not supported.
@@ -11,7 +11,7 @@ The variable syntax of Fortran language is extended with indexes in square brack
By default, CAF uses the Message Passing Interface (MPI) for lower-level communication, so there are some similarities with MPI.
Read more on <http://www.opencoarrays.org/>
Read more on [http://www.opencoarrays.org/](http://www.opencoarrays.org/)
## Coarray Basics
@@ -70,7 +70,7 @@ end program synchronization_test
```
* sync all - Synchronize all images between each other
* sync images(*) - Synchronize this image to all other
* sync images(\*) - Synchronize this image to all other
* sync images(*index*) - Synchronize this image to image with *index*
!!! note
@@ -10,7 +10,7 @@ Another convenience is the ease with which the C code or third party libraries m
Extensive support for parallel computing is available within R.
Read more on <http://www.r-project.org/>, <http://cran.r-project.org/doc/manuals/r-release/R-lang.html>
Read more on [http://www.r-project.org/](http://www.r-project.org/), [http://cran.r-project.org/doc/manuals/r-release/R-lang.html](http://cran.r-project.org/doc/manuals/r-release/R-lang.html)
## Modules
@@ -66,11 +66,11 @@ cp routput.out $PBS_O_WORKDIR/.
exit
```
This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in rscript.R file, outputs in routput.out file. See the single node jobscript example in the [Job execution section - Anselm](../../anselm/job-submission-and-execution/).
This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the rscript.R file, the outputs in the routput.out file. See the single node jobscript example in the [Job execution section - Anselm](anselm/job-submission-and-execution/).
## Parallel R
Parallel execution of R may be achieved in many ways. One approach is the implied parallelization due to linked libraries or specially enabled functions, as [described above](r/#interactive-execution). In the following sections, we focus on explicit parallelization, where parallel constructs are directly stated within the R script.
Parallel execution of R may be achieved in many ways. One approach is the implied parallelization due to linked libraries or specially enabled functions, as [described above](#interactive-execution). In the following sections, we focus on explicit parallelization, where parallel constructs are directly stated within the R script.
## Package Parallel
@@ -144,9 +144,9 @@ Every evaluation of the integrand function runs in parallel on a different process.
The Rmpi package provides an interface (wrapper) to MPI APIs.
It also provides interactive R slave environment. On the cluster, Rmpi provides interface to the [OpenMPI](../mpi/Running_OpenMPI/).
It also provides an interactive R slave environment. On the cluster, Rmpi provides an interface to [OpenMPI](software/mpi/Running_OpenMPI/).
Read more on Rmpi at <http://cran.r-project.org/web/packages/Rmpi/>, reference manual is available at <http://cran.r-project.org/web/packages/Rmpi/Rmpi.pdf>
Read more on Rmpi at <http://cran.r-project.org/web/packages/Rmpi/>; the reference manual is available [here](http://cran.r-project.org/web/packages/Rmpi/Rmpi.pdf).
When using the Rmpi package, both the openmpi and R modules must be loaded.
@@ -345,7 +345,7 @@ while (TRUE)
mpi.quit()
```
The above is the mpi.apply MPI example for calculating the number π. Only the slave processes carry out the calculation. Note the **mpi.parSapply()**, function call. The package parallel [example](r/#package-parallel) [above](r/#package-parallel) may be trivially adapted (for much better performance) to this structure using the mclapply() in place of mpi.parSapply().
The above is the mpi.apply MPI example for calculating the number π. Only the slave processes carry out the calculation. Note the **mpi.parSapply()** function call. The package parallel [example above](#package-parallel) may be trivially adapted (for much better performance) to this structure using the mclapply() in place of mpi.parSapply().
Execute the example as:
@@ -361,7 +361,7 @@ Currently, the two packages can not be combined for hybrid calculations.
The R parallel jobs are executed via the PBS queue system exactly as any other parallel jobs. The user must create an appropriate jobscript and submit it via **qsub**.
Example jobscript for [static Rmpi](r/#static-rmpi) parallel R execution, running 1 process per core:
Example jobscript for [static Rmpi](#static-rmpi) parallel R execution, running 1 process per core:
```bash
#!/bin/bash
@@ -390,7 +390,7 @@ cp routput.out $PBS_O_WORKDIR/.
exit
```
For more information about jobscripts and MPI execution refer to the [Job submission](../../anselm/job-submission-and-execution/) and general [MPI](../mpi/mpi/) sections.
For more information about jobscripts and MPI execution refer to the [Job submission](anselm/job-submission-and-execution/) and general [MPI](software/mpi/mpi/) sections.
## Xeon Phi Offload
@@ -400,4 +400,4 @@ By leveraging MKL, R can accelerate certain computations, most notably linear al
$ export MKL_MIC_ENABLE=1
```
[Read more about automatic offload](../intel/intel-xeon-phi-salomon/)
[Read more about automatic offload](software/intel/intel-xeon-phi-salomon/)
@@ -68,6 +68,6 @@ $ ml fftw3-mpi
$ mpicc testfftw3mpi.c -o testfftw3mpi.x -Wl,-rpath=$LIBRARY_PATH -lfftw3_mpi
```
Run the example as [Intel MPI program](../mpi/running-mpich2/).
Run the example as an [Intel MPI program](software/mpi/running-mpich2/).
Read more on FFTW usage on the [FFTW website](http://www.fftw.org/fftw3_doc/).
@@ -84,6 +84,6 @@ $ ml hdf5-parallel
$ mpicc hdf5test.c -o hdf5test.x -Wl,-rpath=$LIBRARY_PATH $HDF5_INC $HDF5_SHLIB
```
Run the example as [Intel MPI program](../mpi/running-mpich2/).
Run the example as an [Intel MPI program](software/mpi/running-mpich2/).
For further information, see the website: <http://www.hdfgroup.org/HDF5/>
For further information, see the website: [http://www.hdfgroup.org/HDF5/](http://www.hdfgroup.org/HDF5/)
@@ -10,7 +10,7 @@ Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, e
$ ml mkl **or** ml imkl
```
Read more at the [Intel MKL](../intel/intel-suite/intel-mkl/) page.
Read more at the [Intel MKL](software/intel/intel-suite/intel-mkl/) page.
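For orientation, MKL exposes the standard BLAS/LAPACK interfaces. A hedged sketch of a dense matrix-matrix multiply through the CBLAS interface (the `-mkl` compile flag and file name are assumptions, not taken from this page):

```cpp
// Dense matrix multiply via MKL's CBLAS interface (illustrative sketch).
// Compile (assumption): icc -O2 -mkl dgemm_demo.cpp -o dgemm_demo.x
#include <mkl.h>
#include <vector>
#include <cstdio>

int main() {
    const int n = 512;
    std::vector<double> A(n * n, 1.0), B(n * n, 2.0), C(n * n, 0.0);

    // C = 1.0 * A * B + 0.0 * C, row-major layout
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0, A.data(), n, B.data(), n, 0.0, C.data(), n);

    std::printf("C[0] = %f\n", C[0]);  // expect 2 * n = 1024.0
    return 0;
}
```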
## Intel Integrated Performance Primitives
@@ -20,7 +20,7 @@ Intel Integrated Performance Primitives, version 7.1.1, compiled for AVX is avai
$ ml ipp
```
Read more at the [Intel IPP](../intel/intel-suite/intel-integrated-performance-primitives/) page.
Read more at the [Intel IPP](software/intel/intel-suite/intel-integrated-performance-primitives/) page.
## Intel Threading Building Blocks
@@ -30,4 +30,4 @@ Intel Threading Building Blocks (Intel TBB) is a library that supports scalable
$ ml tbb
```
Read more at the [Intel TBB](../intel/intel-suite/intel-tbb/) page.
Read more at the [Intel TBB](software/intel/intel-suite/intel-tbb/) page.
@@ -73,4 +73,4 @@ See more details at [MAGMA home page](http://icl.cs.utk.edu/magma/).
## References
[1] MAGMA MIC: Linear Algebra Library for Intel Xeon Phi Coprocessors, Jack Dongarra et. al, <http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf>
[1] MAGMA MIC: Linear Algebra Library for Intel Xeon Phi Coprocessors, Jack Dongarra et al., [http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf](http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf)