Commit 01cd1f05 authored by Lukáš Krupčík

fix links

parent bd89c66c
Showing 60 additions and 60 deletions
......@@ -33,4 +33,4 @@ mpirun nwchem h2o.nw
Refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and set the following directives in the input file:
* MEMORY: controls the amount of memory NWChem will use
* SCRATCH_DIR: set this to a directory in [SCRATCH filesystem - Salomon](../../salomon/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct"
* SCRATCH_DIR: set this to a directory in [SCRATCH filesystem - Salomon](salomon/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct"
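For orientation, a minimal sketch of how these directives might appear at the top of an input file (the memory size and scratch path are placeholders, not recommended values):

```
# placeholder values - adjust the memory and point scratch_dir to your own SCRATCH directory
memory total 1000 mb
scratch_dir /scratch/work/user/myusername/nwchem

# force "direct" mode to reduce I/O in the SCF step
scf
  direct
end
```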
......@@ -2,7 +2,7 @@
## Introduction
This GPL software calculates phonon-phonon interactions via the third-order force constants. It allows one to obtain the lattice thermal conductivity, phonon lifetimes/linewidths, the imaginary part of the self-energy at the lowest order, the joint density of states (JDOS), and the weighted-JDOS. For details see Phys. Rev. B 91, 094306 (2015) and <http://atztogo.github.io/phono3py/index.html>
This GPL software calculates phonon-phonon interactions via the third-order force constants. It allows one to obtain the lattice thermal conductivity, phonon lifetimes/linewidths, the imaginary part of the self-energy at the lowest order, the joint density of states (JDOS), and the weighted-JDOS. For details see Phys. Rev. B 91, 094306 (2015) and [http://atztogo.github.io/phono3py/index.html](http://atztogo.github.io/phono3py/index.html)
Available modules
......@@ -61,7 +61,7 @@ POSCAR-00006 POSCAR-00015 POSCAR-00024 POSCAR-00033 POSCAR-00042 POSCAR-00051
POSCAR-00007 POSCAR-00016 POSCAR-00025 POSCAR-00034 POSCAR-00043 POSCAR-00052 POSCAR-00061 POSCAR-00070 POSCAR-00079 POSCAR-00088 POSCAR-00097 POSCAR-00106
```
For each displacement, the forces need to be calculated, i.e., in the form of the VASP output file (vasprun.xml). For a single VASP calculation one needs [KPOINTS](KPOINTS), [POTCAR](POTCAR), and [INCAR](INCAR) in the case directory (where the POSCARs are); the 111 displacement calculations can be generated by the [prepare.sh](prepare.sh) script. Each of the 111 single calculations ([run.sh](run.sh)) is then submitted by [submit.sh](submit.sh).
For each displacement, the forces need to be calculated, i.e., in the form of the VASP output file (vasprun.xml). For a single VASP calculation one needs [KPOINTS](software/chemistry/KPOINTS), [POTCAR](software/chemistry/POTCAR), and [INCAR](software/chemistry/INCAR) in the case directory (where the POSCARs are); the 111 displacement calculations can be generated by the [prepare.sh](software/chemistry/prepare.sh) script. Each of the 111 single calculations ([run.sh](software/chemistry/run.sh)) is then submitted by [submit.sh](software/chemistry/submit.sh).
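The linked prepare.sh performs this preparation; a minimal sketch of the kind of loop such a script typically contains (the per-displacement directory naming is an assumption, not the actual script):

```bash
#!/bin/bash
# Create one run directory per displaced POSCAR and copy in the common VASP inputs.
for f in POSCAR-*; do
  d="disp-${f#POSCAR-}"            # e.g. disp-00001 (naming is an assumption)
  mkdir -p "$d"
  cp INCAR KPOINTS POTCAR "$d"/
  cp "$f" "$d"/POSCAR
done
```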
```console
$ ./prepare.sh
......@@ -155,7 +155,7 @@ one finds which grid points needed to be calculated, for instance using followin
$ phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" -c POSCAR --sigma 0.1 --br --write-gamma --gp="0 1 2"
```
one calculates grid points 0, 1, and 2. To automate this, one can use, for instance, a script that submits 5 points in series; see [gofree-cond1.sh](gofree-cond1.sh)
one calculates grid points 0, 1, and 2. To automate this, one can use, for instance, a script that submits 5 points in series; see [gofree-cond1.sh](software/chemistry/gofree-cond1.sh)
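The linked script is submitted with qsub below; a minimal sketch of what such a batch script might contain (the PBS resources and the module name are assumptions, not the actual gofree-cond1.sh):

```bash
#!/bin/bash
#PBS -q qprod
#PBS -l select=1:ncpus=24
cd "$PBS_O_WORKDIR"
ml phono3py                        # module name is an assumption
# compute five grid points in series
phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" -c POSCAR \
         --sigma 0.1 --br --write-gamma --gp="0 1 2 3 4"
```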
```console
$ qsub gofree-cond1.sh
......
......@@ -24,7 +24,7 @@ Commercial licenses:
## Intel Compilers
For information about the usage of Intel Compilers and other Intel products, read the [Intel Parallel Studio](intel-suite/intel-compilers/) page.
For information about the usage of Intel Compilers and other Intel products, read the [Intel Parallel Studio](software/intel-suite/intel-compilers/) page.
## PGI Compilers (Only on Salomon)
......@@ -187,8 +187,8 @@ For more information see the man pages.
## Java
For information on how to use Java (runtime and/or compiler), read the [Java page](java/).
For information on how to use Java (runtime and/or compiler), read the [Java page](software/java/).
## NVIDIA CUDA
For information on how to work with NVIDIA CUDA, read the [NVIDIA CUDA page](../anselm/software/nvidia-cuda/).
For information on how to work with NVIDIA CUDA, read the [NVIDIA CUDA page](anselm/software/nvidia-cuda/).
......@@ -15,7 +15,7 @@ $ ml intel
$ idb
```
Read more at the [Intel Debugger](../intel/intel-suite/intel-debugger/) page.
Read more at the [Intel Debugger](software/intel/intel-suite/intel-debugger/) page.
## Allinea Forge (DDT/MAP)
......@@ -26,7 +26,7 @@ $ ml Forge
$ forge
```
Read more at the [Allinea DDT](allinea-ddt/) page.
Read more at the [Allinea DDT](software/debuggers/allinea-ddt/) page.
## Allinea Performance Reports
......@@ -37,7 +37,7 @@ $ ml PerformanceReports/6.0
$ perf-report mpirun -n 64 ./my_application argument01 argument02
```
Read more at the [Allinea Performance Reports](allinea-performance-reports/) page.
Read more at the [Allinea Performance Reports](software/debuggers/allinea-performance-reports/) page.
## RogueWave TotalView
......@@ -48,7 +48,7 @@ $ ml TotalView/8.15.4-6-linux-x86-64
$ totalview
```
Read more at the [Totalview](total-view/) page.
Read more at the [Totalview](software/debuggers/total-view/) page.
## Vampir Trace Analyzer
......@@ -59,4 +59,4 @@ Vampir is a GUI trace analyzer for traces in OTF format.
$ vampir
```
Read more at the [Vampir](vampir/) page.
Read more at the [Vampir](software/debuggers/vampir/) page.
......@@ -3,7 +3,7 @@
* Aislinn is a dynamic verifier for MPI programs. For a fixed input, it covers all possible runs with respect to the nondeterminism introduced by MPI. This allows it to reliably detect bugs that occur only very rarely in normal runs.
* Aislinn detects problems like invalid memory accesses, deadlocks, misuse of MPI, and resource leaks.
* Aislinn is open-source software; you can use it without any licensing limitations.
* Web page of the project: <http://verif.cs.vsb.cz/aislinn/>
* Web page of the project: [http://verif.cs.vsb.cz/aislinn/](http://verif.cs.vsb.cz/aislinn/)
!!! note
Aislinn is software developed at IT4Innovations and some parts are still considered experimental. If you have any questions or experience any problems, contact the author: <mailto:stanislav.bohm@vsb.cz>.
......@@ -79,7 +79,7 @@ $ firefox report.html
At the beginning of the report there are some basic summaries of the verification. In the second part (depicted in the following picture), the error is described.
![](../../img/report.png)
![](img/report.png)
It shows us:
......
......@@ -59,7 +59,7 @@ Be sure to log in with an X window forwarding enabled. This could mean using the
$ ssh -X username@anselm.it4i.cz
```
Another option is to access the login node using VNC. See the detailed information on how to [use the graphical user interface on Anselm](/general/accessing-the-clusters/graphical-user-interface/x-window-system/)
Another option is to access the login node using VNC. See the detailed information on how to [use the graphical user interface on Anselm](general/accessing-the-clusters/graphical-user-interface/x-window-system/)
From the login node an interactive session **with X window forwarding** (-X option) can be started by the following command:
......@@ -75,7 +75,7 @@ $ ddt test_debug
The submission window that appears has a prefilled path to the executable to debug. You can select the number of MPI processes and/or OpenMP threads on which to run, then press Run. Command line arguments to the program can be entered in the "Arguments" box.
![](../../img/ddt1.png)
![](img/ddt1.png)
To start debugging directly without the submission window, the user can specify the debugging and execution parameters on the command line. For example, the number of MPI processes is set by the "-np 4" option, and the dialog is skipped with the "-start" option. To see the list of "ddt" command line parameters, run "ddt --help".
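For example, a direct start of the debugger on four MPI processes might look like this (reusing the test_debug executable from above):

```console
$ ddt -start -np 4 ./test_debug
```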
......
......@@ -22,13 +22,13 @@ The module sets up environment variables, required for using the Allinea Perform
Use the perf-report wrapper on your (MPI) program.
Instead of [running your MPI program the usual way](../mpi/mpi/), use the perf-report wrapper:
Instead of [running your MPI program the usual way](software/mpi/mpi/), use the perf-report wrapper:
```console
$ perf-report mpirun ./mympiprog.x
```
The MPI program will run as usual. The perf-report creates two additional files, in \*.txt and \*.html format, containing the performance report. Note that demanding MPI codes should be run within [the queue system](../../salomon/job-submission-and-execution/).
The MPI program will run as usual. The perf-report creates two additional files, in \*.txt and \*.html format, containing the performance report. Note that demanding MPI codes should be run within [the queue system](salomon/job-submission-and-execution/).
## Example
......@@ -56,4 +56,4 @@ Now lets profile the code:
$ perf-report mpirun ./mympiprog.x
```
Performance report files [mympiprog_32p\*.txt](mympiprog_32p_2014-10-15_16-56.txt) and [mympiprog_32p\*.html](mympiprog_32p_2014-10-15_16-56.html) were created. We can see that the code is very efficient on MPI and is CPU bound.
Performance report files [mympiprog_32p\*.txt](software/debuggers/mympiprog_32p_2014-10-15_16-56.txt) and [mympiprog_32p\*.html](software/debuggers/mympiprog_32p_2014-10-15_16-56.html) were created. We can see that the code is very efficient on MPI and is CPU bound.
......@@ -10,7 +10,7 @@ CUBE is a graphical performance report explorer for displaying data from Score-P
Each dimension is organized in a tree; for example, the time performance metric is divided into Execution time and Overhead time, the call path dimension is organized by files and routines in your source code, etc.
![](../../img/Snmekobrazovky20141204v12.56.36.png)
![](img/Snmekobrazovky20141204v12.56.36.png)
\*Figure 1. Screenshot of CUBE displaying data from Scalasca.\*
......@@ -18,7 +18,7 @@ Each node in the tree is colored by severity (the color scheme is displayed at t
## Installed Versions
Currently, there are two builds of CUBE 4.2.3 available as [modules](../../modules-matrix/):
Currently, there are two builds of CUBE 4.2.3 available as [modules](modules-matrix/):
* cube/4.2.3-gcc, compiled with GCC
* cube/4.2.3-icc, compiled with Intel compiler
......@@ -33,4 +33,4 @@ CUBE is a graphical application. Refer to Graphical User Interface documentation
After loading the appropriate module, simply launch the cube command, or alternatively use the scalasca -examine command to launch the GUI. Note that for Scalasca data sets, if you do not analyze the data with `scalasca -examine` before opening them with CUBE, not all performance data will be available.
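For instance, a Score-P profile could be opened like this (the experiment directory name is an assumption; profile.cubex is the usual Score-P profile file name):

```console
$ ml cube/4.2.3-icc
$ cube scorep_myapp_sum/profile.cubex
```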
References
1\. <http://www.scalasca.org/software/cube-4.x/download.html>
1\. [http://www.scalasca.org/software/cube-4.x/download.html](http://www.scalasca.org/software/cube-4.x/download.html)
......@@ -2,11 +2,11 @@
## Introduction
Intel PCM (Performance Counter Monitor) is a tool to monitor performance hardware counters on Intel® processors, similar to [PAPI](papi/). The difference between PCM and PAPI is that PCM supports only Intel hardware, but PCM can also monitor uncore metrics, like memory controllers and QuickPath Interconnect links.
Intel PCM (Performance Counter Monitor) is a tool to monitor performance hardware counters on Intel® processors, similar to [PAPI](software/debuggers/papi/). The difference between PCM and PAPI is that PCM supports only Intel hardware, but PCM can also monitor uncore metrics, like memory controllers and QuickPath Interconnect links.
## Installed Version
The currently installed version is 2.6. To load the [module](../../modules-matrix/), issue:
The currently installed version is 2.6. To load the [module](modules-matrix/), issue:
```console
$ ml intelpcm
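# The module provides the PCM command line utilities; the exact tool names depend on
# the installation (pcm.x is an assumption), e.g. sampling counters every second:
$ pcm.x 1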
......@@ -276,6 +276,6 @@ $ ./matrix
## References
1. <https://software.intel.com/en-us/articles/intel-performance-counter-monitor-a-better-way-to-measure-cpu-utilization>
1. <https://software.intel.com/sites/default/files/m/3/2/2/xeon-e5-2600-uncore-guide.pdf> Intel® Xeon® Processor E5-2600 Product Family Uncore Performance Monitoring Guide.
1. <http://intel-pcm-api-documentation.github.io/classPCM.html> API Documentation
1. [https://software.intel.com/en-us/articles/intel-performance-counter-monitor-a-better-way-to-measure-cpu-utilization](https://software.intel.com/en-us/articles/intel-performance-counter-monitor-a-better-way-to-measure-cpu-utilization)
1. [Intel® Xeon® Processor E5-2600 Product Family Uncore Performance Monitoring Guide](https://software.intel.com/sites/default/files/m/3/2/2/xeon-e5-2600-uncore-guide.pdf)
1. [API Documentation](http://intel-pcm-api-documentation.github.io/classPCM.html)
......@@ -9,7 +9,7 @@ Intel *®* VTune™ Amplifier, part of Intel Parallel studio, is a GUI profiling
* Low level specific counters, such as branch analysis and memory bandwidth
* Power usage analysis - frequency and sleep states.
![](../../img/vtune-amplifier.png)
![](img/vtune-amplifier.png)
## Usage
......
......@@ -10,7 +10,7 @@ PAPI can be used with parallel as well as serial programs.
## Usage
To use PAPI, load the PAPI [module](../../environment-and-modules/):
To use PAPI, load the PAPI [module](environment-and-modules/):
```console
$ ml papi
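# The PAPI module also provides command line utilities; papi_avail lists
# the preset events supported on the current node:
$ papi_avail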
......@@ -193,7 +193,7 @@ $ ./matrix
!!! note
PAPI currently supports only a subset of counters on the Intel Xeon Phi processor compared to Intel Xeon; for example, the floating point operations counter is missing.
To use PAPI in [Intel Xeon Phi](../intel/intel-xeon-phi-salomon/) native applications, you need to load a module with the "-mic" suffix, for example "papi/5.3.2-mic":
To use PAPI in [Intel Xeon Phi](software/intel/intel-xeon-phi-salomon/) native applications, you need to load a module with the "-mic" suffix, for example "papi/5.3.2-mic":
```console
$ ml papi/5.3.2-mic
......
......@@ -8,10 +8,10 @@ Scalasca supports profiling of MPI, OpenMP and hybrid MPI+OpenMP applications.
## Installed Versions
There are currently two Scalasca 2.0 [modules](../../modules-matrix/) installed on Anselm:
There are currently two Scalasca 2.0 [modules](modules-matrix/) installed on Anselm:
* scalasca2/2.0-gcc-openmpi, for usage with [GNU Compiler](../compilers/) and [OpenMPI](../mpi/Running_OpenMPI/),
* scalasca2/2.0-icc-impi, for usage with [Intel Compiler](../compilers/) and [Intel MPI](../mpi/running-mpich2/).
* scalasca2/2.0-gcc-openmpi, for usage with [GNU Compiler](software/compilers/) and [OpenMPI](software/mpi/Running_OpenMPI/),
* scalasca2/2.0-icc-impi, for usage with [Intel Compiler](software/compilers/) and [Intel MPI](software/mpi/running-mpich2/).
## Usage
......@@ -23,7 +23,7 @@ Profiling a parallel application with Scalasca consists of three steps:
### Instrumentation
Instrumentation via `scalasca -instrument` is discouraged. Use [Score-P instrumentation](score-p/).
Instrumentation via `scalasca -instrument` is discouraged. Use [Score-P instrumentation](software/debuggers/score-p/).
### Runtime Measurement
......@@ -43,11 +43,11 @@ Some notable Scalasca options are:
* **-e &lt;directory>** Specify a directory to save the collected data to. By default, Scalasca saves the data to a directory with the prefix scorep\_, followed by the name of the executable and the launch configuration.
!!! note
Scalasca can generate a huge amount of data, especially if tracing is enabled. Please consider saving the data to a [scratch directory](../../salomon/storage/).
Scalasca can generate a huge amount of data, especially if tracing is enabled. Please consider saving the data to a [scratch directory](salomon/storage/).
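For illustration, a measurement run combining the options above might look like this (the application name, process count, and scratch path are assumptions):

```console
$ scalasca -analyze -e /scratch/$USER/scorep_myapp mpirun -np 24 ./myapp
```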
### Analysis of Reports
For the analysis, you must have the [Score-P](score-p/) and [CUBE](cube/) modules loaded. The analysis is done in two steps: first the data is preprocessed, and then the CUBE GUI tool is launched.
For the analysis, you must have the [Score-P](software/debuggers/score-p/) and [CUBE](software/debuggers/cube/) modules loaded. The analysis is done in two steps: first the data is preprocessed, and then the CUBE GUI tool is launched.
To launch the analysis, run:
......@@ -63,8 +63,8 @@ scalasca -examine -s <experiment_directory>
Alternatively, you can open CUBE and load the data directly from there. Keep in mind that in that case the preprocessing is not done and not all metrics will be shown in the viewer.
Refer to [CUBE documentation](cube/) on usage of the GUI viewer.
Refer to [CUBE documentation](software/debuggers/cube/) on usage of the GUI viewer.
## References
1. <http://www.scalasca.org/>
1. [http://www.scalasca.org/](http://www.scalasca.org/)
......@@ -4,14 +4,14 @@
The [Score-P measurement infrastructure](http://www.vi-hps.org/projects/score-p/) is a highly scalable and easy-to-use tool suite for profiling, event tracing, and online analysis of HPC applications.
Score-P can be used as an instrumentation tool for [Scalasca](scalasca/).
Score-P can be used as an instrumentation tool for [Scalasca](software/debuggers/scalasca/).
## Installed Versions
There are currently two Score-P 1.2.6 [modules](../../modules-matrix/) installed on Anselm:
There are currently two Score-P 1.2.6 [modules](modules-matrix/) installed on Anselm:
* scorep/1.2.3-gcc-openmpi, for usage with [GNU Compiler](../compilers/) and [OpenMPI](../mpi/Running_OpenMPI/)
* scorep/1.2.3-icc-impi, for usage with [Intel Compiler](../compilers/) and [Intel MPI](../mpi/running-mpich2/).
* scorep/1.2.3-gcc-openmpi, for usage with [GNU Compiler](software/compilers/) and [OpenMPI](software/mpi/Running_OpenMPI/)
* scorep/1.2.3-icc-impi, for usage with [Intel Compiler](software/compilers/) and [Intel MPI](software/mpi/running-mpich2/).
## Instrumentation
......
......@@ -140,11 +140,11 @@ $ mpirun -tv -n 5 ./test_debug
When the following dialog appears, click "Yes":
![](../../img/totalview1.png)
![](img/totalview1.png)
At this point the main TotalView GUI window will appear and you can insert the breakpoints and start debugging:
![](../../img/totalview2.png)
![](img/totalview2.png)
### Debugging a Parallel Code - Option 2
......
......@@ -22,7 +22,7 @@ The main tools available in Valgrind are :
There are two versions of Valgrind available on Anselm.
* Version 3.6.0, installed by the operating system vendor in /usr/bin/valgrind. This version is available by default, without the need to load any module; however, it does not provide additional MPI support.
* Version 3.9.0 with support for Intel MPI, available in [module](../../modules-matrix/) valgrind/3.9.0-impi. After loading the module, this version replaces the default valgrind.
* Version 3.9.0 with support for Intel MPI, available in [module](modules-matrix/) valgrind/3.9.0-impi. After loading the module, this version replaces the default valgrind.
There are two versions of Valgrind available on the Salomon.
......
# Vampir
Vampir is a commercial trace analysis and visualization tool. It can work with traces in OTF and OTF2 formats. It does not have the functionality to collect traces; you need to use a trace collection tool (such as [Score-P](score-p/)) first to collect the traces.
Vampir is a commercial trace analysis and visualization tool. It can work with traces in OTF and OTF2 formats. It does not have the functionality to collect traces; you need to use a trace collection tool (such as [Score-P](software/debuggers/score-p/)) first to collect the traces.
![](../../img/Snmekobrazovky20160708v12.33.35.png)
![](img/Snmekobrazovky20160708v12.33.35.png)
## Installed Versions
......@@ -21,4 +21,4 @@ You can find the detailed user manual in PDF format in $EBROOTVAMPIR/doc/vampir-
## References
1. <https://www.vampir.eu>
1. [https://www.vampir.eu](https://www.vampir.eu)
......@@ -32,5 +32,5 @@ Read more at <https://software.intel.com/en-us/intel-cplusplus-compiler-16.0-use
Anselm nodes are currently equipped with Sandy Bridge CPUs, while Salomon compute nodes are equipped with Haswell-based CPUs. The UV1 SMP compute server has Ivy Bridge CPUs, which are equivalent to Sandy Bridge (only a smaller manufacturing process). The new processors are backward compatible with the Sandy Bridge nodes, so all programs that ran on the Sandy Bridge processors should also run on the new Haswell nodes. To get optimal performance out of the Haswell processors, a program should make use of the special AVX2 instructions for this processor. One can do this by recompiling codes with the compiler flags designated to invoke these instructions. For the Intel compiler suite, there are two ways of doing this:
* Using compiler flag (both for Fortran and C): -xCORE-AVX2. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge/Ivy Bridge nodes.
* Using compiler flags (both for Fortran and C): -xAVX -axCORE-AVX2. This will generate multiple, feature-specific auto-dispatch code paths for Intel® processors, if there is a performance benefit, so this binary will run on both Sandy Bridge/Ivy Bridge and Haswell processors. At runtime it will be decided which path to follow, depending on which processor you are running on. In general this will result in larger binaries.
* Using compiler flag (both for Fortran and C): **-xCORE-AVX2**. This will create a binary with AVX2 instructions, specifically for the Haswell processors. Note that the executable will not run on Sandy Bridge/Ivy Bridge nodes.
* Using compiler flags (both for Fortran and C): **-xAVX -axCORE-AVX2**. This will generate multiple, feature-specific auto-dispatch code paths for Intel® processors, if there is a performance benefit, so this binary will run on both Sandy Bridge/Ivy Bridge and Haswell processors. At runtime it will be decided which path to follow, depending on which processor you are running on. In general this will result in larger binaries.
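For illustration, the two approaches translate to command lines like these (source and binary names are placeholders):

```console
$ icc -O3 -xCORE-AVX2 -o myprog.x myprog.c          # Haswell-only binary
$ icc -O3 -xAVX -axCORE-AVX2 -o myprog.x myprog.c   # multi-path binary (Sandy Bridge/Ivy Bridge + Haswell)
```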
......@@ -4,7 +4,7 @@ IDB is no longer available since Intel Parallel Studio 2015
## Debugging Serial Applications
The Intel debugger is available via the module intel/13.5.192. The debugger works for applications compiled with the C and C++ compilers and the ifort Fortran 77/90/95 compiler. The debugger provides a Java GUI environment. Use an [X display](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/) for running the GUI.
The Intel debugger is available via the module intel/13.5.192. The debugger works for applications compiled with the C and C++ compilers and the ifort Fortran 77/90/95 compiler. The debugger provides a Java GUI environment. Use an [X display](general/accessing-the-clusters/graphical-user-interface/x-window-system/) for running the GUI.
```console
$ ml intel/13.5.192
......@@ -18,7 +18,7 @@ The debugger may run in text mode. To debug in text mode, use
$ idbc
```
To debug on the compute nodes, the intel module must be loaded. The GUI on compute nodes may be accessed in the same way as described in [the GUI section](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/)
To debug on the compute nodes, the intel module must be loaded. The GUI on compute nodes may be accessed in the same way as described in [the GUI section](general/accessing-the-clusters/graphical-user-interface/x-window-system/)
Example:
......@@ -40,7 +40,7 @@ In this example, we allocate 1 full compute node, compile program myprog.c with
### Small Number of MPI Ranks
For debugging a small number of MPI ranks, you may execute and debug each rank in a separate xterm terminal (do not forget the [X display](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/)). Using Intel MPI, this may be done in the following way:
For debugging a small number of MPI ranks, you may execute and debug each rank in a separate xterm terminal (do not forget the [X display](general/accessing-the-clusters/graphical-user-interface/x-window-system/)). Using Intel MPI, this may be done in the following way:
```console
$ qsub -q qexp -l select=2:ncpus=24 -X -I
......
......@@ -37,7 +37,7 @@ Intel MKL library provides number of interfaces. The fundamental once are the LP
### Linking
Linking Intel MKL libraries may be complex. Intel [mkl link line advisor](http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor) helps. See also [examples](intel-mkl/#examples) below.
Linking Intel MKL libraries may be complex. Intel [mkl link line advisor](http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor) helps. See also [examples](#examples) below.
You will need the mkl module loaded to run the MKL-enabled executable. This may be avoided by compiling the library search paths into the executable. Include rpath on the compile line:
......@@ -109,7 +109,7 @@ In this example, we compile, link and run the cblas_dgemm example, using LP64 in
## MKL and MIC Accelerators
The Intel MKL is capable of automatically offloading the computations to the MIC accelerator. See the [Intel Xeon Phi](../intel-xeon-phi-salomon/) section for details.
The Intel MKL is capable of automatically offloading the computations to the MIC accelerator. See the [Intel Xeon Phi](software/intel/intel-xeon-phi-salomon/) section for details.
## LAPACKE C Interface
......
......@@ -23,7 +23,7 @@ $ icc -v
$ ifort -v
```
Read more at the [Intel Compilers](intel-compilers/) page.
Read more at the [Intel Compilers](software/intel/intel-suite/intel-compilers/) page.
## Intel Debugger
......@@ -36,7 +36,7 @@ $ ml intel
$ idb
```
Read more at the [Intel Debugger](intel-debugger/) page.
Read more at the [Intel Debugger](software/intel/intel-suite/intel-debugger/) page.
## Intel Math Kernel Library
......@@ -46,7 +46,7 @@ Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, e
$ ml imkl
```
Read more at the [Intel MKL](intel-mkl/) page.
Read more at the [Intel MKL](software/intel/intel-suite/intel-mkl/) page.
## Intel Integrated Performance Primitives
......@@ -56,7 +56,7 @@ Intel Integrated Performance Primitives, version 7.1.1, compiled for AVX is avai
$ ml ipp
```
Read more at the [Intel IPP](intel-integrated-performance-primitives/) page.
Read more at the [Intel IPP](software/intel/intel-suite/intel-integrated-performance-primitives/) page.
## Intel Threading Building Blocks
......@@ -66,4 +66,4 @@ Intel Threading Building Blocks (Intel TBB) is a library that supports scalable
$ ml tbb
```
Read more at the [Intel TBB](intel-tbb/) page.
Read more at the [Intel TBB](software/intel/intel-suite/intel-tbb/) page.