From e52e62134b0b780259473e03bbaa126bb85663cf Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Luk=C3=A1=C5=A1=20Krup=C4=8D=C3=ADk?= <lukas.krupcik@vsb.cz>
Date: Wed, 31 Aug 2016 12:55:33 +0200
Subject: [PATCH] repair external and internal links

---
 .../software/chemistry/molpro.md              |  3 +-
 .../software/debuggers/allinea-ddt.md         |  6 ++--
 .../debuggers/allinea-performance-reports.md  |  3 +-
 .../intel-performance-counter-monitor.md      |  6 ++--
 .../debuggers/intel-vtune-amplifier.md        | 11 ++++---
 .../software/debuggers/scalasca.md            |  3 +-
 .../software/debuggers/total-view.md          | 16 +++++-----
 ...intel-integrated-performance-primitives.md |  3 +-
 .../software/intel-suite/intel-mkl.md         |  6 ++--
 .../software/intel-suite/intel-tbb.md         |  3 +-
 .../software/isv_licenses.md                  |  3 +-
 .../software/kvirtualization.md               | 13 +++++---
 .../software/mpi/Running_OpenMPI.md           | 15 ++++++---
 .../software/mpi/mpi.md                       | 21 ++++++++-----
 .../software/mpi/mpi4py-mpi-for-python.md     |  2 --
 .../software/mpi/running-mpich2.md            |  9 ++++--
 .../numerical-languages/matlab 2013-2014.md   | 11 ++++---
 .../software/numerical-languages/matlab.md    |  9 ++++--
 .../software/numerical-languages/octave.md    |  3 +-
 .../software/numerical-languages/r.md         | 18 ++++++-----
 .../software/numerical-libraries/hdf5.md      |  6 ++--
 .../magma-for-intel-xeon-phi.md               | 31 ++++++++++++-------
 .../software/nvidia-cuda.md                   |  8 +++--
 .../omics-master/diagnostic-component-team.md |  3 +-
 .../priorization-component-bierapp.md         |  3 +-
 .../software/openfoam.md                      |  6 ++--
 .../software/paraview.md                      |  3 --
 27 files changed, 137 insertions(+), 87 deletions(-)

diff --git a/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md b/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md
index 5d0b5aec4..15c62090c 100644
--- a/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md
+++ b/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md
@@ -33,7 +33,8 @@ Running
 ------
 Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html)![external](../../../img/external.png) for more details.
 
->The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend to use MPI parallelization only. This can be achieved by passing option  mpiprocs=16:ompthreads=1 to PBS.
+!!! Note "Note"
+	The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend using MPI parallelization only. This can be achieved by passing the option mpiprocs=16:ompthreads=1 to PBS.
 
 You are advised to use the -d option to point to a directory in [SCRATCH filesystem](../../storage/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch filesystem.
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
index 53ebc20d4..0667baee8 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
@@ -49,10 +49,10 @@ $ mpif90 -g -O0 -o test_debug test.f
 
 Before debugging, you need to compile your code with theses flags:
 
->- **g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for
-GNU and INTEL C/C++ and Fortran compilers.
+!!! Note "Note"
+	- **-g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for GNU and Intel C/C++ and Fortran compilers.
 
->- **O0** : Suppress all optimizations.
+	- **-O0** : Suppresses all optimizations.
 
 Starting a Job with DDT
 -----------------------
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md
index 8ae1a2f06..34307fa3c 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md
@@ -23,7 +23,8 @@ The module sets up environment variables, required for using the Allinea Perform
 
 Usage
 -----
->Use the the perf-report wrapper on your (MPI) program.
+!!! Note "Note"
+	Use the perf-report wrapper on your (MPI) program.
 
 Instead of [running your MPI program the usual way](../mpi/), use the the perf report wrapper:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md
index a5ff9fde5..7eca1cd62 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md
@@ -193,7 +193,8 @@ API
 ---
 In a similar fashion to PAPI, PCM provides a C++ API to access the performance counter from within your application. Refer to the [doxygen documentation](http://intel-pcm-api-documentation.github.io/classPCM.html)![external](../../../img/external.png) for details of the API.
 
->Due to security limitations, using PCM API to monitor your applications is currently not possible on Anselm. (The application must be run as root user)
+!!! Note "Note"
+	Due to security limitations, using the PCM API to monitor your applications is currently not possible on Anselm (the application must be run as the root user).
 
 Sample program using the API :
 
@@ -278,5 +279,4 @@ References
 ----------
 1.  <https://software.intel.com/en-us/articles/intel-performance-counter-monitor-a-better-way-to-measure-cpu-utilization>![external](../../../img/external.png)
 2.  <https://software.intel.com/sites/default/files/m/3/2/2/xeon-e5-2600-uncore-guide.pdf>![external](../../../img/external.png) Intel® Xeon® Processor E5-2600 Product Family Uncore Performance Monitoring Guide.
-3.  <http://intel-pcm-api-documentation.github.io/classPCM.html>![external](../../../img/external.png) API Documentation
-
+3.  <http://intel-pcm-api-documentation.github.io/classPCM.html>![external](../../../img/external.png) API Documentation
\ No newline at end of file
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
index 0966634cb..4876db7c3 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
@@ -3,7 +3,7 @@ Intel VTune Amplifier
 
 Introduction
 ------------
-Intel*® *VTune™ >Amplifier, part of Intel Parallel studio, is a GUI profiling tool designed for Intel processors. It offers a graphical performance analysis of single core and multithreaded applications. A highlight of the features:
+Intel® VTune™ Amplifier, part of Intel Parallel Studio, is a GUI profiling tool designed for Intel processors. It offers a graphical performance analysis of single core and multithreaded applications. Highlights of the features:
 
 -   Hotspot analysis
 -   Locks and waits analysis
@@ -27,7 +27,8 @@ and launch the GUI :
     $ amplxe-gui
 ```
 
->To profile an application with VTune Amplifier, special kernel modules need to be loaded. The modules are not loaded on Anselm login nodes, thus direct profiling on login nodes is not possible. Use VTune on compute nodes and refer to the documentation on using GUI applications.
+!!! Note "Note"
+	To profile an application with VTune Amplifier, special kernel modules need to be loaded. The modules are not loaded on Anselm login nodes, thus direct profiling on login nodes is not possible. Use VTune on compute nodes and refer to the documentation on using GUI applications.
 
 The GUI will open in new window. Click on "*New Project...*" to create a new project. After clicking *OK*, a new window with project properties will appear.  At "*Application:*", select the bath to your binary you want to profile (the binary should be compiled with -g flag). Some additional options such as command line arguments can be selected. At "*Managed code profiling mode:*" select "*Native*" (unless you want to profile managed mode .NET/Mono applications). After clicking *OK*, your project is created.
 
@@ -47,7 +48,8 @@ Copy the line to clipboard and then you can paste it in your jobscript or in com
 
 Xeon Phi
 --------
->This section is outdated. It will be updated with new information soon.
+!!! Note "Note"
+	This section is outdated. It will be updated with new information soon.
 
 It is possible to analyze both native and offload Xeon Phi applications. For offload mode, just specify the path to the binary. For native mode, you need to specify in project properties:
 
@@ -57,7 +59,8 @@ Application parameters:  mic0 source ~/.profile && /path/to/your/bin
 
 Note that we include  source ~/.profile in the command to setup environment paths [as described here](../intel-xeon-phi/).
 
->If the analysis is interrupted or aborted, further analysis on the card might be impossible and you will get errors like "ERROR connecting to MIC card". In this case please contact our support to reboot the MIC card.
+!!! Note "Note"
+	If the analysis is interrupted or aborted, further analysis on the card might be impossible and you will get errors like "ERROR connecting to MIC card". In this case, please contact our support to reboot the MIC card.
 
 You may also use remote analysis to collect data from the MIC and then analyze it in the GUI later :
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md
index 615eb9ee4..d14499831 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md
@@ -43,7 +43,8 @@ Some notable Scalsca options are:
 **-t Enable trace data collection. By default, only summary data are collected.**
 **-e &lt;directory&gt; Specify a directory to save the collected data to. By default, Scalasca saves the data to a directory with prefix scorep_, followed by name of the executable and launch configuration.**
 
->Scalasca can generate a huge amount of data, especially if tracing is enabled. Please consider saving the data to a [scratch directory](../../storage/storage/).
+!!! Note "Note"
+	Scalasca can generate a huge amount of data, especially if tracing is enabled. Please consider saving the data to a [scratch directory](../../storage/storage/).
 
 ### Analysis of reports
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md
index 782aff8f3..2cc11cd85 100644
--- a/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md
+++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md
@@ -58,9 +58,10 @@ Compile the code:
 
 Before debugging, you need to compile your code with theses flags:
 
->**-g** : Generates extra debugging information usable by GDB. **-g3** includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
+!!! Note "Note"
+	**-g** : Generates extra debugging information usable by GDB. **-g3** includes even more debugging information. This option is available for GNU and INTEL C/C++ and Fortran compilers.
 
->**-O0** : Suppress all optimizations.
+	**-O0** : Suppresses all optimizations.
 
 Starting a Job with TotalView
 -----------------------------
@@ -92,7 +93,8 @@ To debug a serial code use:
 
 To debug a parallel code compiled with **OpenMPI** you need to setup your TotalView environment:
 
->**Please note:** To be able to run parallel debugging procedure from the command line without stopping the debugger in the mpiexec source code you have to add the following function to your **~/.tvdrc** file:
+!!! Note "Note"
+	To be able to run the parallel debugging procedure from the command line without stopping the debugger in the mpiexec source code, you have to add the following function to your **~/.tvdrc** file:
 
 ```bash
     proc mpi_auto_run_starter {loaded_id} {
@@ -119,8 +121,9 @@ The source code of this function can be also found in
     /apps/mpi/openmpi/intel/1.6.5/etc/openmpi-totalview.tcl
 ```
 
->You can also add only following line to you ~/.tvdrc file instead of the entire function:
-**source /apps/mpi/openmpi/intel/1.6.5/etc/openmpi-totalview.tcl**
+!!! Note "Note"
+	You can also add only the following line to your ~/.tvdrc file instead of the entire function:
+	**source /apps/mpi/openmpi/intel/1.6.5/etc/openmpi-totalview.tcl**
 
 You need to do this step only once.
 
@@ -156,5 +159,4 @@ More information regarding the command line parameters of the TotalView can be f
 
 Documentation
 -------------
-[1] The [TotalView documentation](http://www.roguewave.com/support/product-documentation/totalview-family.aspx#totalview)![external](../../../img/external.png) web page is a good resource for learning more about some of the advanced TotalView features.
-
+[1] The [TotalView documentation](http://www.roguewave.com/support/product-documentation/totalview-family.aspx#totalview)![external](../../../img/external.png) web page is a good resource for learning more about some of the advanced TotalView features.
\ No newline at end of file
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md
index 05839d105..f45028438 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md
@@ -5,7 +5,8 @@ Intel Integrated Performance Primitives
 ---------------------------------------
 Intel Integrated Performance Primitives, version 7.1.1, compiled for AVX vector instructions is available, via module ipp. The IPP is a very rich library of highly optimized algorithmic building blocks for media and data applications. This includes signal, image and frame processing algorithms, such as FFT, FIR, Convolution, Optical Flow, Hough transform, Sum, MinMax, as well as cryptographic functions, linear algebra functions and many more.
 
->Check out IPP before implementing own math functions for data processing, it is likely already there.
+!!! Note "Note"
+	Check out IPP before implementing your own math functions for data processing; it is likely already there.
 
 ```bash
     $ module load ipp
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md
index eb51c824f..b8ea29b8f 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md
@@ -24,7 +24,8 @@ Intel MKL version 13.5.192 is available on Anselm
 
 The module sets up environment variables, required for linking and running mkl enabled applications. The most important variables are the $MKLROOT, $MKL_INC_DIR, $MKL_LIB_DIR and $MKL_EXAMPLES
 
->The MKL library may be linked using any compiler. With intel compiler use -mkl option to link default threaded MKL.
+!!! Note "Note"
+	The MKL library may be linked using any compiler. With the Intel compiler, use the -mkl option to link the default threaded MKL.
 
 ### Interfaces
 
@@ -47,7 +48,8 @@ You will need the mkl module loaded to run the mkl enabled executable. This may
 
 ### Threading
 
->Advantage in using the MKL library is that it brings threaded parallelization to applications that are otherwise not parallel.
+!!! Note "Note"
+	The advantage of using the MKL library is that it brings threaded parallelization to applications that are otherwise not parallel.
 
 For this to work, the application must link the threaded MKL library (default). Number and behaviour of MKL threads may be controlled via the OpenMP environment variables, such as OMP_NUM_THREADS and KMP_AFFINITY. MKL_NUM_THREADS takes precedence over OMP_NUM_THREADS
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md
index a6ef96ba0..75238c7da 100644
--- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md
+++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-tbb.md
@@ -14,7 +14,8 @@ Intel TBB version 4.1 is available on Anselm
 
 The module sets up environment variables, required for linking and running tbb enabled applications.
 
->Link the tbb library, using -ltbb
+!!! Note "Note"
+	Link the tbb library using -ltbb.
 
 Examples
 --------
diff --git a/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md b/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
index 7d165b422..c9ae13dfd 100644
--- a/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
+++ b/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
@@ -11,7 +11,8 @@ If an ISV application was purchased for educational (research) purposes and also
 
 Overview of the licenses usage
 ------------------------------
->The overview is generated every minute and is accessible from web or command line interface.
+!!! Note "Note"
+	The overview is generated every minute and is accessible from the web or the command line interface.
 
 ### Web interface
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md b/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md
index c28561ceb..976850735 100644
--- a/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md
+++ b/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md
@@ -28,18 +28,20 @@ Virtualization has also some drawbacks, it is not so easy to setup efficient sol
 
 Solution described in chapter [HOWTO](virtualization/#howto)  is suitable for single node tasks, does not introduce virtual machine clustering.
 
->Please consider virtualization as last resort solution for your needs.
+!!! Note "Note"
+	Please consider virtualization as a last resort solution for your needs.
 
->Please consult use of virtualization with IT4Innovation's support.
+	Please consult IT4Innovations support before using virtualization.
 
->For running Windows application (when source code and Linux native application are not available) consider use of Wine, Windows compatibility layer. Many Windows applications can be run using Wine with less effort and better performance than when using virtualization.
+	For running Windows applications (when the source code and a native Linux application are not available), consider using Wine, the Windows compatibility layer. Many Windows applications can be run using Wine with less effort and better performance than with virtualization.
 
 Licensing
 ---------
 
 IT4Innovations does not provide any licenses for operating systems and software of virtual machines. Users are ( in accordance with [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf)![external](../../img/external.png)) fully responsible for licensing all software running in virtual machines on Anselm. Be aware of complex conditions of licensing software in virtual environments.
 
->Users are responsible for licensing OS e.g. MS Windows and all software running in their virtual machines.
+!!! Note "Note"
+	Users are responsible for licensing the OS (e.g. MS Windows) and all software running in their virtual machines.
 
  HOWTO
 ----------
@@ -248,7 +250,8 @@ Run virtual machine using optimized devices, user network backend with sharing a
 
 Thanks to port forwarding you can access virtual machine via SSH (Linux) or RDP (Windows) connecting to IP address of compute node (and port 2222 for SSH). You must use [VPN network](../../accessing-the-cluster/vpn-access/).
 
->Keep in mind, that if you use virtio devices, you must have virtio drivers installed on your virtual machine.
+!!! Note "Note"
+	Keep in mind that if you use virtio devices, you must have the virtio drivers installed on your virtual machine.
 
 ### Networking and data sharing
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md b/docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md
index e00940965..ac353f47e 100644
--- a/docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md
+++ b/docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md
@@ -7,7 +7,8 @@ The OpenMPI programs may be executed only via the PBS Workload manager, by enter
 
 ### Basic usage
 
->Use the mpiexec to run the OpenMPI code.
+!!! Note "Note"
+	Use mpiexec to run the OpenMPI code.
 
 Example:
 
@@ -27,7 +28,8 @@ Example:
     Hello world! from rank 3 of 4 on host cn110
 ```
 
->Please be aware, that in this example, the directive **-pernode** is used to run only **one task per node**, which is normally an unwanted behaviour (unless you want to run hybrid code with just one MPI and 16 OpenMP tasks per node). In normal MPI programs **omit the -pernode directive** to run up to 16 MPI tasks per each node.
+!!! Note "Note"
+	Please be aware that in this example the directive **-pernode** is used to run only **one task per node**, which is normally unwanted behaviour (unless you want to run hybrid code with just one MPI process and 16 OpenMP threads per node). In normal MPI programs, **omit the -pernode directive** to run up to 16 MPI tasks per node.
 
 In this example, we allocate 4 nodes via the express queue interactively. We set up the openmpi environment and interactively run the helloworld_mpi.x program. Note that the executable helloworld_mpi.x must be available within the
 same path on all nodes. This is automatically fulfilled on the /home and /scratch filesystem.
@@ -47,7 +49,8 @@ You need to preload the executable, if running on the local scratch /lscratch fi
 
 In this example, we assume the executable helloworld_mpi.x is present on compute node cn17 on local scratch. We call the mpiexec whith the **--preload-binary** argument (valid for openmpi). The mpiexec will copy the executable from cn17 to the /lscratch/15210.srv11 directory on cn108, cn109 and cn110 and execute the program.
 
->MPI process mapping may be controlled by PBS parameters.
+!!! Note "Note"
+	MPI process mapping may be controlled by PBS parameters.
 
 The mpiprocs and ompthreads parameters allow for selection of number of running MPI processes per node as well as number of OpenMP threads per MPI process.
 
@@ -95,7 +98,8 @@ In this example, we demonstrate recommended way to run an MPI application, using
 
 ### OpenMP thread affinity
 
->Important!  Bind every OpenMP thread to a core!
+!!! Note "Note"
+	Important! Bind every OpenMP thread to a core!
 
 In the previous two examples with one or two MPI processes per node, the operating system might still migrate OpenMP threads between cores. You might want to avoid this by setting these environment variable for GCC OpenMP:
 
@@ -149,7 +153,8 @@ In this example, we see that ranks have been mapped on nodes according to the or
 
 Exact control of MPI process placement and resource binding is provided by specifying a rankfile
 
->Appropriate binding may boost performance of your application.
+!!! Note "Note"
+	Appropriate binding may boost the performance of your application.
 
 Example rankfile
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md
index 4beed6d6e..b7179fa47 100644
--- a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md
+++ b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md
@@ -61,7 +61,8 @@ In this example, the openmpi 1.6.5 using intel compilers is activated
 
 Compiling MPI Programs
 ----------------------
->After setting up your MPI environment, compile your program using one of the mpi wrappers
+!!! Note "Note"
+	After setting up your MPI environment, compile your program using one of the MPI wrappers:
 
 ```bash
     $ mpicc -v
@@ -107,8 +108,9 @@ Compile the above example with
 
 Running MPI Programs
 --------------------
->The MPI program executable must be compatible with the loaded MPI module.
-Always compile and execute using the very same MPI module.
+!!! Note "Note"
+	The MPI program executable must be compatible with the loaded MPI module.
+	Always compile and execute using the very same MPI module.
 
 It is strongly discouraged to mix mpi implementations. Linking an application with one MPI implementation and running mpirun/mpiexec form other implementation may result in unexpected errors.
 
@@ -118,16 +120,19 @@ The MPI program executable must be available within the same path on all nodes.
 
 Optimal way to run an MPI program depends on its memory requirements, memory access pattern and communication pattern.
 
->Consider these ways to run an MPI program:
-1. One MPI process per node, 16 threads per process
-2. Two MPI processes per node, 8 threads per process
-3. 16 MPI processes per node, 1 thread per process.
+!!! Note "Note"
+	Consider these ways to run an MPI program:
+
+	1. One MPI process per node, 16 threads per process
+	2. Two MPI processes per node, 8 threads per process
+	3. 16 MPI processes per node, 1 thread per process.
 
 **One MPI** process per node, using 16 threads, is most useful for memory demanding applications, that make good use of processor cache memory and are not memory bound.  This is also a preferred way for communication intensive applications as one process per node enjoys full bandwidth access to the network interface.
 
 **Two MPI** processes per node, using 8 threads each, bound to processor socket is most useful for memory bandwidth bound applications such as BLAS1 or FFT, with scalable memory demand. However, note that the two processes will share access to the network interface. The 8 threads and socket binding should ensure maximum memory access bandwidth and minimize communication, migration and numa effect overheads.
 
->Important!  Bind every OpenMP thread to a core!
+!!! Note "Note"
+	Important! Bind every OpenMP thread to a core!
 
 In the previous two cases with one or two MPI processes per node, the operating system might still migrate OpenMP threads between cores. You want to avoid this by setting the KMP_AFFINITY or GOMP_CPU_AFFINITY environment variables.
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md
index 504dca09b..c4955467b 100644
--- a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md
+++ b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md
@@ -1,8 +1,6 @@
 MPI4Py (MPI for Python)
 =======================
 
-OpenMPI interface to Python
-
 Introduction
 ------------
 MPI for Python provides bindings of the Message Passing Interface (MPI) standard for the Python programming language, allowing any Python program to exploit multiple processors.
diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md b/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md
index 96e731183..7045165e8 100644
--- a/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md
+++ b/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md
@@ -7,7 +7,8 @@ The MPICH2 programs use mpd daemon or ssh connection to spawn processes, no PBS
 
 ### Basic usage
 
->Use the mpirun to execute the MPICH2 code.
+!!! Note "Note"
+	Use mpirun to execute the MPICH2 code.
 
 Example:
 
@@ -43,7 +44,8 @@ You need to preload the executable, if running on the local scratch /lscratch fi
 
 In this example, we assume the executable helloworld_mpi.x is present on shared home directory. We run the cp command via mpirun, copying the executable from shared home to local scratch . Second  mpirun will execute the binary in the /lscratch/15210.srv11 directory on nodes cn17, cn108, cn109 and cn110, one process per node.
 
->MPI process mapping may be controlled by PBS parameters.
+!!! Note "Note"
+	MPI process mapping may be controlled by PBS parameters.
 
 The mpiprocs and ompthreads parameters allow for selection of number of running MPI processes per node as well as number of OpenMP threads per MPI process.
 
@@ -91,7 +93,8 @@ In this example, we demonstrate recommended way to run an MPI application, using
 
 ### OpenMP thread affinity
 
->Important!  Bind every OpenMP thread to a core!
+!!! Note "Note"
+	Important! Bind every OpenMP thread to a core!
 
 In the previous two examples with one or two MPI processes per node, the operating system might still migrate OpenMP threads between cores. You might want to avoid this by setting these environment variable for GCC OpenMP:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab 2013-2014.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab 2013-2014.md
index ad7f4980f..7f54854ea 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab 2013-2014.md	
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab 2013-2014.md	
@@ -3,7 +3,8 @@ Matlab 2013-2014
 
 Introduction
 ------------
->This document relates to the old versions R2013 and R2014. For MATLAB 2015, please use [this documentation instead](matlab/).
+!!! Note "Note"
+	This document relates to the old versions R2013 and R2014. For MATLAB 2015, please use [this documentation instead](matlab/).
 
 Matlab is available in the latest stable version. There are always two variants of the release:
 
@@ -69,7 +70,8 @@ For the performance reasons Matlab should use system MPI. On Anselm the supporte
 
 System MPI library allows Matlab to communicate through 40Gbps Infiniband QDR interconnect instead of slower 1Gb ethernet network.
 
->Please note: The path to MPI library in "mpiLibConf.m" has to match with version of loaded Intel MPI module. In this example the version 4.1.1.036 of Iintel MPI is used by Matlab and therefore module impi/4.1.1.036  has to be loaded prior to starting Matlab.
+!!! Note "Note"
+	The path to the MPI library in "mpiLibConf.m" has to match the version of the loaded Intel MPI module. In this example, version 4.1.1.036 of Intel MPI is used by Matlab, therefore the module impi/4.1.1.036 has to be loaded prior to starting Matlab.
 
 ### Parallel Matlab interactive session
 
@@ -141,7 +143,8 @@ The last part of the configuration is done directly in the user Matlab script be
 
 This script creates scheduler object "sched" of type "mpiexec" that starts workers using mpirun tool. To use correct version of mpirun, the second line specifies the path to correct version of system Intel MPI library.
 
->Please note: Every Matlab script that needs to initialize/use matlabpool has to contain these three lines prior to calling matlabpool(sched, ...) function.
+!!! Note "Note"
+	Every Matlab script that needs to initialize/use matlabpool has to contain these three lines prior to calling the matlabpool(sched, ...) function.
 
 The last step is to start matlabpool with "sched" object and correct number of workers. In this case qsub asked for total number of 32 cores, therefore the number of workers is also set to 32.
 
@@ -202,4 +205,4 @@ Starting Matlab workers is an expensive process that requires certain amount of
   |16|256|1008|
   |8|128|534|
   |4|64|333|
-  |2|32|210|
\ No newline at end of file
+  |2|32|210|
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
index e9a817d67..1e863daf7 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
@@ -42,7 +42,8 @@ plots, images, etc... will be still available.
 
 Running parallel Matlab using Distributed Computing Toolbox / Engine
 ------------------------------------------------------------------------
->Distributed toolbox is available only for the EDU variant
+!!! Note "Note"
+	The Distributed Computing Toolbox is available only for the EDU variant.
 
 The MPIEXEC mode available in previous versions is no longer available in MATLAB 2015. Also, the programming interface has changed. Refer to [Release Notes](http://www.mathworks.com/help/distcomp/release-notes.html#buanp9e-1)![external](../../../img/external.png).
 
@@ -64,7 +65,8 @@ Or in the GUI, go to tab HOME -&gt; Parallel -&gt; Manage Cluster Profiles..., c
 
 With the new mode, MATLAB itself launches the workers via PBS, so you can either use interactive mode or a batch mode on one node, but the actual parallel processing will be done in a separate job started by MATLAB itself. Alternatively, you can use "local" mode to run parallel code on just a single node.
 
->The profile is confusingly named Salomon, but you can use it also on Anselm.
+!!! Note "Note"
+	The profile is confusingly named Salomon, but you can also use it on Anselm.
 
 ### Parallel Matlab interactive session
 
@@ -132,7 +134,8 @@ The last part of the configuration is done directly in the user Matlab script be
 
 This script creates scheduler object "cluster" of type "local" that starts workers locally.
 
->Please note: Every Matlab script that needs to initialize/use matlabpool has to contain these three lines prior to calling parpool(sched, ...) function.
+!!! Note "Note"
+	Every Matlab script that needs to initialize/use matlabpool has to contain these three lines prior to calling the parpool(sched, ...) function.
 
 The last step is to start matlabpool with "cluster" object and correct number of workers. We have 24 cores per node, so we start 24 workers.
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
index ea4edf294..330f10ad9 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
@@ -98,7 +98,8 @@ A version of [native](../intel-xeon-phi/#section-4) Octave is compiled for Xeon
 Octave is linked with parallel Intel MKL, so it best suited for batch processing of tasks that utilize BLAS, LAPACK and FFT operations. By default, number of threads is set to 120, you can control this with > OMP_NUM_THREADS environment
 variable.
 
->Calculations that do not employ parallelism (either by using parallel MKL eg. via matrix operations,  fork() function, [parallel package](http://octave.sourceforge.net/parallel/)![external](../../../img/external.png) or other mechanism) will actually run slower than on host CPU.
+!!! Note "Note"
+	Calculations that do not employ parallelism (either by using parallel MKL, e.g. via matrix operations, the fork() function, the [parallel package](http://octave.sourceforge.net/parallel/)![external](../../../img/external.png) or another mechanism) will actually run slower than on the host CPU.
 
 To use Octave on a node with Xeon Phi:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md
index 5557ce019..35661aad7 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md
@@ -96,10 +96,12 @@ Download the package [parallell](package-parallel-vignette.pdf) vignette.
 
 The forking is the most simple to use. Forking family of functions provide parallelized, drop in replacement for the serial apply() family of functions.
 
->Forking via package parallel provides functionality similar to OpenMP construct
->omp parallel for
->
->Only cores of single node can be utilized this way!
+!!! Note "Note"
+	Forking via the package parallel provides functionality similar to the OpenMP construct
+
+	omp parallel for
+
+	Only cores of a single node can be utilized this way!
 
 Forking example:
 
@@ -145,7 +147,8 @@ Every evaluation of the integrad function runs in parallel on different process.
 
 Package Rmpi
 ------------
->package Rmpi provides an interface (wrapper) to MPI APIs.
+!!! Note "Note"
+	The package Rmpi provides an interface (wrapper) to MPI APIs.
 
 It also provides interactive R slave environment. On Anselm, Rmpi provides interface to the [OpenMPI](../mpi-1/Running_OpenMPI/).
 
@@ -293,7 +296,8 @@ Execute the example as:
 
 mpi.apply is a specific way of executing Dynamic Rmpi programs.
 
->mpi.apply() family of functions provide MPI parallelized, drop in replacement for the serial apply() family of functions.
+!!! Note "Note"
+	The mpi.apply() family of functions provides an MPI-parallelized, drop-in replacement for the serial apply() family of functions.
 
 Execution is identical to other dynamic Rmpi programs.
 
@@ -394,4 +398,4 @@ Example jobscript for [static Rmpi](r/#static-rmpi) parallel R execution, runnin
     exit
 ```
 
-For more information about jobscripts and MPI execution refer to the [Job submission](../../resource-allocation-and-job-execution/job-submission-and-execution/) and general [MPI](../mpi/) sections.
\ No newline at end of file
+For more information about jobscripts and MPI execution refer to the [Job submission](../../resource-allocation-and-job-execution/job-submission-and-execution/) and general [MPI](../mpi/mpi/) sections.
\ No newline at end of file
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/hdf5.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/hdf5.md
index a92c3a689..53b95fe59 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/hdf5.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/hdf5.md
@@ -23,8 +23,10 @@ Versions **1.8.11** and **1.8.13** of HDF5 library are available on Anselm, comp
 
 The module sets up environment variables, required for linking and running HDF5 enabled applications. Make sure that the choice of HDF5 module is consistent with your choice of MPI library. Mixing MPI of different implementations may have unpredictable results.
 
->Be aware, that GCC version of **HDF5 1.8.11** has serious performance issues, since it's compiled with -O0 optimization flag. This version is provided only for testing of code compiled only by GCC and IS NOT recommended for production computations. For more informations, please see: <http://www.hdfgroup.org/ftp/HDF5/prev-releases/ReleaseFiles/release5-1811>![external](../../../img/external.png)
-All GCC versions of **HDF5 1.8.13** are not affected by the bug, are compiled with -O3 optimizations and are recommended for production computations.
+!!! Note "Note"
+	Be aware that the GCC version of **HDF5 1.8.11** has serious performance issues, since it is compiled with the -O0 optimization flag. This version is provided only for testing of code compiled only by GCC and IS NOT recommended for production computations. For more information, please see: <http://www.hdfgroup.org/ftp/HDF5/prev-releases/ReleaseFiles/release5-1811>![external](../../../img/external.png)
+
+	GCC versions of **HDF5 1.8.13** are not affected by the bug; they are compiled with -O3 optimizations and are recommended for production computations.
 
 Example
 -------
diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md
index 05aaf37a9..32558f85f 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md
@@ -13,9 +13,11 @@ To be able to compile and link code with MAGMA library user has to load followin
 
 To make compilation more user friendly module also sets these two environment variables:
 
->MAGMA_INC - contains paths to the MAGMA header files (to be used for compilation step)
+!!! Note "Note"
+	MAGMA_INC - contains paths to the MAGMA header files (to be used in the compilation step)
 
->MAGMA_LIBS - contains paths to MAGMA libraries (to be used for linking step).
+!!! Note "Note"
+	MAGMA_LIBS - contains paths to the MAGMA libraries (to be used in the linking step).
 
 Compilation example:
 
@@ -29,14 +31,17 @@ Compilation example:
 
 MAGMA implementation for Intel MIC requires a MAGMA server running on accelerator prior to executing the user application. The server can be started and stopped using following scripts:
 
->To start MAGMA server use:
-**$MAGMAROOT/start_magma_server**
+!!! Note "Note"
+	To start the MAGMA server, use:
+	**$MAGMAROOT/start_magma_server**
 
->To stop the server use:
-**$MAGMAROOT/stop_magma_server**
+!!! Note "Note"
+	To stop the server, use:
+	**$MAGMAROOT/stop_magma_server**
 
->For deeper understanding how the MAGMA server is started, see the following script:
-**$MAGMAROOT/launch_anselm_from_mic.sh**
+!!! Note "Note"
+	For a deeper understanding of how the MAGMA server is started, see the following script:
+	**$MAGMAROOT/launch_anselm_from_mic.sh**
 
 To test if the MAGMA server runs properly we can run one of examples that are part of the MAGMA installation:
 
@@ -62,11 +67,13 @@ To test if the MAGMA server runs properly we can run one of examples that are pa
     10304 10304     ---   (  ---  )    500.70 (   1.46)     ---
 ```
 
->Please note: MAGMA contains several benchmarks and examples that can be found in:
-**$MAGMAROOT/testing/**
+!!! Note "Note"
+	MAGMA contains several benchmarks and examples that can be found in:
+	**$MAGMAROOT/testing/**
 
->MAGMA relies on the performance of all CPU cores as well as on the performance of the accelerator. Therefore on Anselm number of CPU OpenMP threads has to be set to 16:
-**export OMP_NUM_THREADS=16**
+!!! Note "Note"
+	MAGMA relies on the performance of all CPU cores as well as on the performance of the accelerator. Therefore, on Anselm the number of CPU OpenMP threads has to be set to 16:
+	**export OMP_NUM_THREADS=16**
 
 
 See more details at [MAGMA home page](http://icl.cs.utk.edu/magma/)![external](../../../img/external.png).
diff --git a/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md b/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md
index f1ef09a3e..2bd26525c 100644
--- a/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md
+++ b/docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md
@@ -282,9 +282,11 @@ SAXPY function multiplies the vector x by the scalar alpha and adds it to the ve
     }
 ```
 
->Please note: cuBLAS has its own function for data transfers between CPU and GPU memory:
- - [cublasSetVector](http://docs.nvidia.com/cuda/cublas/index.html#cublassetvector)![external](../../img/external.png) - transfers data from CPU to GPU memory
- - [cublasGetVector](http://docs.nvidia.com/cuda/cublas/index.html#cublasgetvector)![external](../../img/external.png) - transfers data from GPU to CPU memory
+!!! Note "Note"
+	cuBLAS has its own functions for data transfers between CPU and GPU memory:
+
+	- [cublasSetVector](http://docs.nvidia.com/cuda/cublas/index.html#cublassetvector)![external](../../img/external.png) - transfers data from CPU to GPU memory
+	- [cublasGetVector](http://docs.nvidia.com/cuda/cublas/index.html#cublasgetvector)![external](../../img/external.png) - transfers data from GPU to CPU memory
 
 To compile the code using NVCC compiler a "-lcublas" compiler flag has to be specified:
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/omics-master/diagnostic-component-team.md b/docs.it4i/anselm-cluster-documentation/software/omics-master/diagnostic-component-team.md
index b94fdf6f6..610e6a072 100644
--- a/docs.it4i/anselm-cluster-documentation/software/omics-master/diagnostic-component-team.md
+++ b/docs.it4i/anselm-cluster-documentation/software/omics-master/diagnostic-component-team.md
@@ -5,7 +5,8 @@ Diagnostic component (TEAM)
 
 TEAM is available at the following address: <http://omics.it4i.cz/team/>![external](../../../img/external.png)
 
->The address is accessible only via [VPN. ](../../accessing-the-cluster/vpn-access/)
+!!! Note "Note"
+	The address is accessible only via [VPN](../../accessing-the-cluster/vpn-access/).
 
 ### Diagnostic component (TEAM)
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/omics-master/priorization-component-bierapp.md b/docs.it4i/anselm-cluster-documentation/software/omics-master/priorization-component-bierapp.md
index 0c6c893b5..5bffe9eda 100644
--- a/docs.it4i/anselm-cluster-documentation/software/omics-master/priorization-component-bierapp.md
+++ b/docs.it4i/anselm-cluster-documentation/software/omics-master/priorization-component-bierapp.md
@@ -5,7 +5,8 @@ Priorization component (BiERApp)
 
 BiERApp is available at the following address: <http://omics.it4i.cz/bierapp/>
 
->The address is accessible onlyvia [VPN. ](../../accessing-the-cluster/vpn-access/)
+!!! Note "Note"
+	The address is accessible only via [VPN](../../accessing-the-cluster/vpn-access/).
 
 ###BiERApp
 
diff --git a/docs.it4i/anselm-cluster-documentation/software/openfoam.md b/docs.it4i/anselm-cluster-documentation/software/openfoam.md
index 108ab8da2..ed28468da 100644
--- a/docs.it4i/anselm-cluster-documentation/software/openfoam.md
+++ b/docs.it4i/anselm-cluster-documentation/software/openfoam.md
@@ -59,7 +59,8 @@ To create OpenFOAM environment on ANSELM give the commands:
     $ source $FOAM_BASHRC
 ```
 
->Please load correct module with your requirements “compiler - GCC/ICC, precision - DP/SP”.
+!!! Note "Note"
+	Please load the correct module according to your requirements (compiler - GCC/ICC, precision - DP/SP).
 
 Create a project directory within the $HOME/OpenFOAM directory named >&lt;USER&gt;-&lt;OFversion&gt; and create a directory named run within it, e.g. by typing:
 
@@ -121,7 +122,8 @@ Run the second case for example external incompressible turbulent flow - case -
 
 First we must run serial application bockMesh and decomposePar for preparation of parallel computation.
 
->Create a Bash scrip test.sh:
+!!! Note "Note"
+	Create a Bash script test.sh:
 
 ```bash
     #!/bin/bash
diff --git a/docs.it4i/anselm-cluster-documentation/software/paraview.md b/docs.it4i/anselm-cluster-documentation/software/paraview.md
index 26a137165..f7235ff47 100644
--- a/docs.it4i/anselm-cluster-documentation/software/paraview.md
+++ b/docs.it4i/anselm-cluster-documentation/software/paraview.md
@@ -60,11 +60,8 @@ replace  username with your login and cn77 with the name of compute node your P
 Now launch ParaView client installed on your desktop PC. Select File-&gt;Connect..., click Add Server. Fill in the following :
 
 Name : Anselm tunnel
-
 Server Type : Client/Server
-
 Host : localhost
-
 Port : 12345
 
 Click Configure, Save, the configuration is now saved for later use. Now click Connect to connect to the ParaView server. In your terminal where you have interactive session with ParaView server launched, you should see:
-- 
GitLab