diff --git a/README.md b/README.md
deleted file mode 100644
index 2e112b1059fa8c2e6a5e24fe1423dbd05cfcb694..0000000000000000000000000000000000000000
--- a/README.md
+++ /dev/null
@@ -1,9 +0,0 @@
-# Conversion of HTML documentation to md format
-
-html_md.sh             ... converts html to md and creates an info folder that tracks links to the files in the subfolders
-
-html_md.sh -d -html    ... removes all html files
-
-html_md.sh -d -md      ... removes all md files
-
-filter.txt             ... filtered text
diff --git a/docs.it4i.cz/anselm-cluster-documentation.md b/docs.it4i.cz/anselm-cluster-documentation.md
index 7d1bde3e9229026bb5cd0d102710e86979641d53..cf3bfcbbadff97a674e71305f57127f8ec033092 100644
--- a/docs.it4i.cz/anselm-cluster-documentation.md
+++ b/docs.it4i.cz/anselm-cluster-documentation.md
@@ -1,5 +1,6 @@
 Introduction 
 ============
+
   
 Welcome to Anselm supercomputer cluster. The Anselm cluster consists of
 209 compute nodes, totaling 3344 compute cores with 15TB RAM and giving
diff --git a/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster.md b/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster.md
index f94bc1f2798e621608141717061437963f15a297..5d19caf1d27cce06f26e230ca6b5ce16bee31e54 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster.md
@@ -1,5 +1,6 @@
 Shell access and data transfer 
 ==============================
+
   
 Interactive Login
 -----------------
@@ -7,7 +8,7 @@ The Anselm cluster is accessed by SSH protocol via login nodes login1
 and login2 at address anselm.it4i.cz. The login nodes may be addressed
 specifically, by prepending the login node name to the address.
   Login address           Port   Protocol   Login node
-   ------ ---------- 
+  ----------------------- ------ ---------- ----------------------------------------------
   anselm.it4i.cz          22     ssh        round-robin DNS record for login1 and login2
   login1.anselm.it4i.cz   22     ssh        login1
   login2.anselm.it4i.cz   22     ssh        login2
@@ -20,12 +21,12 @@ d4:6f:5c:18:f4:3f:70:ef:bc:fc:cc:2b:fd:13:36:b7 (RSA)</span>
  
 Private keys authentication:
 On **Linux** or **Mac**, use
-``` {.prettyprint .lang-sh}
+``` 
 local $ ssh -i /path/to/id_rsa username@anselm.it4i.cz
 ```
 If you see the warning message "UNPROTECTED PRIVATE KEY FILE!", use this
 command to set more restrictive permissions on the private key file.
-``` {.prettyprint .lang-sh}
+``` 
 local $ chmod 600 /path/to/id_rsa
 ```
 On **Windows**, use [PuTTY ssh
@@ -51,7 +52,7 @@ protocols. <span class="discreet">(Not available yet.) In case large
 volumes of data are transferred, use dedicated data mover node
 dm1.anselm.it4i.cz for increased performance.</span>
   Address                                            Port                               Protocol
-  ---- ----------- ------------------
+  -------------------------------------------------- ---------------------------------- -----------------------------------------
   anselm.it4i.cz                                     22                                 scp, sftp
   login1.anselm.it4i.cz                              22                                 scp, sftp
   login2.anselm.it4i.cz                              22                                 scp, sftp
@@ -67,26 +68,26 @@ be expected.  Fast cipher (aes128-ctr) should be used.
 If you experience degraded data transfer performance, consult your local
 network provider.
 On Linux or Mac, use an scp or sftp client to transfer the data to Anselm:
-``` {.prettyprint .lang-sh}
+``` 
 local $ scp -i /path/to/id_rsa my-local-file username@anselm.it4i.cz:directory/file
 ```
-``` {.prettyprint .lang-sh}
+``` 
 local $ scp -i /path/to/id_rsa -r my-local-dir username@anselm.it4i.cz:directory
 ```
 > or
-``` {.prettyprint .lang-sh}
+``` 
 local $ sftp -o IdentityFile=/path/to/id_rsa username@anselm.it4i.cz
 ```
 A very convenient way to transfer files in and out of the Anselm computer
 is via the FUSE filesystem
 [sshfs](http://linux.die.net/man/1/sshfs)
-``` {.prettyprint .lang-sh}
+``` 
 local $ sshfs -o IdentityFile=/path/to/id_rsa username@anselm.it4i.cz:. mountpoint
 ```
 Using sshfs, the user's Anselm home directory will be mounted on your
 local computer, just like an external disk.
 Learn more about ssh, scp and sshfs by reading the manpages
-``` {.prettyprint .lang-sh}
+``` 
 $ man ssh
 $ man scp
 $ man sshfs
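
The `chmod 600` fix described above can be sanity-checked locally before touching a real key; a minimal sketch (the key path is a throwaway stand-in, not an actual key):

``` 
# Create a stand-in "private key" file for illustration only
touch /tmp/demo_id_rsa
chmod 644 /tmp/demo_id_rsa          # too permissive: ssh would reject it

# Restrict the file to owner read/write, as the text instructs
chmod 600 /tmp/demo_id_rsa

# Verify the resulting mode (GNU stat prints the octal mode)
stat -c %a /tmp/demo_id_rsa         # -> 600
rm -f /tmp/demo_id_rsa
```

ssh refuses any identity file readable by group or others, so 600 (owner read/write only) is the loosest mode it will accept.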
diff --git a/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/outgoing-connections.md b/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/outgoing-connections.md
index 1e00ffd80682c075f62a187afed313ff757d94c5..b4d232d2a1f1982a172c6313ec8efe078dc2349e 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/outgoing-connections.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/outgoing-connections.md
@@ -1,7 +1,9 @@
 Outgoing connections 
 ====================
+
   
 Connection restrictions
+-----------------------
 Outgoing connections, from Anselm Cluster login nodes to the outside
 world, are restricted to following ports:
   Port   Protocol
diff --git a/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/shell-and-data-access/shell-and-data-access.md b/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/shell-and-data-access/shell-and-data-access.md
index f94bc1f2798e621608141717061437963f15a297..5d19caf1d27cce06f26e230ca6b5ce16bee31e54 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/shell-and-data-access/shell-and-data-access.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/shell-and-data-access/shell-and-data-access.md
@@ -1,5 +1,6 @@
 Shell access and data transfer 
 ==============================
+
   
 Interactive Login
 -----------------
@@ -7,7 +8,7 @@ The Anselm cluster is accessed by SSH protocol via login nodes login1
 and login2 at address anselm.it4i.cz. The login nodes may be addressed
 specifically, by prepending the login node name to the address.
   Login address           Port   Protocol   Login node
-   ------ ---------- 
+  ----------------------- ------ ---------- ----------------------------------------------
   anselm.it4i.cz          22     ssh        round-robin DNS record for login1 and login2
   login1.anselm.it4i.cz   22     ssh        login1
   login2.anselm.it4i.cz   22     ssh        login2
@@ -20,12 +21,12 @@ d4:6f:5c:18:f4:3f:70:ef:bc:fc:cc:2b:fd:13:36:b7 (RSA)</span>
  
 Private keys authentication:
 On **Linux** or **Mac**, use
-``` {.prettyprint .lang-sh}
+``` 
 local $ ssh -i /path/to/id_rsa username@anselm.it4i.cz
 ```
 If you see the warning message "UNPROTECTED PRIVATE KEY FILE!", use this
 command to set more restrictive permissions on the private key file.
-``` {.prettyprint .lang-sh}
+``` 
 local $ chmod 600 /path/to/id_rsa
 ```
 On **Windows**, use [PuTTY ssh
@@ -51,7 +52,7 @@ protocols. <span class="discreet">(Not available yet.) In case large
 volumes of data are transferred, use dedicated data mover node
 dm1.anselm.it4i.cz for increased performance.</span>
   Address                                            Port                               Protocol
-  ---- ----------- ------------------
+  -------------------------------------------------- ---------------------------------- -----------------------------------------
   anselm.it4i.cz                                     22                                 scp, sftp
   login1.anselm.it4i.cz                              22                                 scp, sftp
   login2.anselm.it4i.cz                              22                                 scp, sftp
@@ -67,26 +68,26 @@ be expected.  Fast cipher (aes128-ctr) should be used.
 If you experience degraded data transfer performance, consult your local
 network provider.
 On Linux or Mac, use an scp or sftp client to transfer the data to Anselm:
-``` {.prettyprint .lang-sh}
+``` 
 local $ scp -i /path/to/id_rsa my-local-file username@anselm.it4i.cz:directory/file
 ```
-``` {.prettyprint .lang-sh}
+``` 
 local $ scp -i /path/to/id_rsa -r my-local-dir username@anselm.it4i.cz:directory
 ```
 > or
-``` {.prettyprint .lang-sh}
+``` 
 local $ sftp -o IdentityFile=/path/to/id_rsa username@anselm.it4i.cz
 ```
 A very convenient way to transfer files in and out of the Anselm computer
 is via the FUSE filesystem
 [sshfs](http://linux.die.net/man/1/sshfs)
-``` {.prettyprint .lang-sh}
+``` 
 local $ sshfs -o IdentityFile=/path/to/id_rsa username@anselm.it4i.cz:. mountpoint
 ```
 Using sshfs, the user's Anselm home directory will be mounted on your
 local computer, just like an external disk.
 Learn more about ssh, scp and sshfs by reading the manpages
-``` {.prettyprint .lang-sh}
+``` 
 $ man ssh
 $ man scp
 $ man sshfs
diff --git a/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/storage-1.md b/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/storage-1.md
index 8fca8932e4dd28e96ea3b96c0c2383987a62c404..167b6fa6399a542db4304c83a3a2de37ce109472 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/storage-1.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/storage-1.md
@@ -1,5 +1,6 @@
 Storage 
 =======
+
   
 There are two main shared file systems on Anselm cluster, the
 [HOME](#home) and [SCRATCH](#scratch). All
 Use the lfs getstripe command to read the stripe parameters. Use the lfs
 setstripe command to set the stripe parameters for optimal I/O
 performance. The correct stripe setting depends on your needs and file
 access patterns.
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs getstripe dir|filename 
 $ lfs setstripe -s stripe_size -c stripe_count -o stripe_offset dir|filename 
 ```
 Example:
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs getstripe /scratch/username/
 /scratch/username/
 stripe_count  1 stripe_size   1048576 stripe_offset -1
@@ -84,7 +85,7 @@ and verified. All files written to this directory will be striped over
 10 OSTs
 Use lfs check OSTs to see the number and status of active OSTs for each
 filesystem on Anselm. Learn more by reading the man page
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs check osts
 $ man lfs
 ```
@@ -223,11 +224,11 @@ Number of OSTs
 ### <span>Disk usage and quota commands</span>
 <span>User quotas on the file systems can be checked and reviewed using
 following command:</span>
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs quota dir
 ```
 Example for Lustre HOME directory:
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs quota /home
 Disk quotas for user user001 (uid 1234):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
@@ -239,7 +240,7 @@ Disk quotas for group user001 (gid 1234):
 In this example, we see a current quota limit of 250GB, with 300MB
 currently used by user001.
 Example for Lustre SCRATCH directory:
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs quota /scratch
 Disk quotas for user user001 (uid 1234):
      Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
@@ -253,11 +254,11 @@ currently used by user001.
  
 To better understand where exactly the space is used, you
 can use the following command.
-``` {.prettyprint .lang-sh}
+``` 
 $ du -hs dir
 ```
 Example for your HOME directory:
-``` {.prettyprint .lang-sh}
+``` 
 $ cd /home
 $ du -hs * .[a-zA-z0-9]* | grep -E "[0-9]*G|[0-9]*M" | sort -hr
 258M     cuda-samples
@@ -272,10 +273,10 @@ is sorted in descending order from largest to smallest
 files/directories.
 <span>To better understand the previous commands, you can read the
 manpages.</span>
-``` {.prettyprint .lang-sh}
+``` 
 $ man lfs
 ```
-``` {.prettyprint .lang-sh}
+``` 
 $ man du 
 ```
 ### Extended ACLs
@@ -287,7 +288,7 @@ number of named user and named group entries.
 ACLs on a Lustre file system work exactly like ACLs on any Linux file
 system. They are manipulated with the standard tools in the standard
 manner. Below, we create a directory and allow a specific user access.
-``` {.prettyprint .lang-sh}
+``` 
 [vop999@login1.anselm ~]$ umask 027
 [vop999@login1.anselm ~]$ mkdir test
 [vop999@login1.anselm ~]$ ls -ld test
@@ -388,7 +389,7 @@ files in /tmp directory are automatically purged.
 **
 ----------
   Mountpoint                                 Usage                       Protocol   Net Capacity     Throughput   Limitations   Access                    Services
-  ------------------- ---- ---------- ---------------- ------------ ------------- -- ------
+  ------------------------------------------ --------------------------- ---------- ---------------- ------------ ------------- ------------------------- -----------------------------
   <span class="monospace">/home</span>       home directory              Lustre     320 TiB          2 GB/s       Quota 250GB   Compute and login nodes   backed up
   <span class="monospace">/scratch</span>    cluster shared jobs' data   Lustre     146 TiB          6 GB/s       Quota 100TB   Compute and login nodes   files older 90 days removed
   <span class="monospace">/lscratch</span>   node local jobs' data       local      330 GB           100 MB/s     none          Compute nodes             purged after job ends
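
The `du -hs * | sort -hr` pipeline above can be tried safely in a throwaway directory before running it against /home; a minimal sketch with made-up directory names:

``` 
# Build a small test tree (paths are examples, not cluster paths)
mkdir -p /tmp/du_demo/big /tmp/du_demo/small
dd if=/dev/zero of=/tmp/du_demo/big/data bs=1M count=5 2>/dev/null
dd if=/dev/zero of=/tmp/du_demo/small/data bs=1K count=10 2>/dev/null

# Per-directory usage, human readable, largest first
cd /tmp/du_demo && du -hs * | sort -hr

# Clean up the demo tree
rm -rf /tmp/du_demo
```

`sort -hr` understands the human-readable suffixes that `du -h` emits (K, M, G), which is why the listing comes out largest-first as described in the text.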
diff --git a/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/vpn-access.md b/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/vpn-access.md
index 6dc9839525ee85673376471ea0b5c566e4ce6f64..bd8d529ed4a9fbcc1c015073d5f1275f20091a69 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/vpn-access.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/vpn-access.md
@@ -1,8 +1,9 @@
 VPN Access 
 ==========
+
   
 Accessing IT4Innovations internal resources via VPN
------
+---------------------------------------------------
 **Failed to initialize connection subsystem Win 8.1 - 02-10-15 MS
 patch**
 A workaround can be found at
@@ -20,7 +21,7 @@ the following operating systems:
 -   <span>MacOS</span>
 It is not possible to connect to the VPN from other operating systems.
 <span>VPN client installation</span>
--------------
+------------------------------------
 You can install VPN client from web interface after successful login
 with LDAP credentials on address <https://vpn1.it4i.cz/anselm>
 ![](https://docs.it4i.cz/anselm-cluster-documentation/login.jpg/@@images/30271119-b392-4db9-a212-309fb41925d6.jpeg)
@@ -48,6 +49,7 @@ successfull](https://docs.it4i.cz/anselm-cluster-documentation/downloadfilesucce
 After successfully downloading the installation file, you have to execute this
 tool with administrator's rights and install the VPN client manually.
 Working with VPN client
+-----------------------
 You can use graphical user interface or command line interface to run
 VPN client on all supported operating systems. We suggest using GUI.
 ![Icon](https://docs.it4i.cz/anselm-cluster-documentation/icon.jpg "Icon")
diff --git a/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/x-window-and-vnc.md b/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/x-window-and-vnc.md
index ff3738063f55085dad7cad70901c9102e176ce07..bf8c88bdf23764386bc0098a609c08830258c367 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/x-window-and-vnc.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/x-window-and-vnc.md
@@ -1,5 +1,6 @@
 Graphical User Interface 
 ========================
+
   
 X Window System
 ---------------
diff --git a/docs.it4i.cz/anselm-cluster-documentation/compute-nodes.md b/docs.it4i.cz/anselm-cluster-documentation/compute-nodes.md
index f1d9d48a85f85f77d6f576d9833258ec4cbc3ca6..da695ce55077469163d8a7218ef3fa08a75d8a71 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/compute-nodes.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/compute-nodes.md
@@ -1,5 +1,6 @@
 Compute Nodes 
 =============
+
   
 Nodes Configuration
 -------------------
@@ -100,7 +101,7 @@ nodes.****
 **Figure: Anselm bullx B510 servers**
 ### Compute Nodes Summary
   Node type                    Count   Range           Memory   Cores         [Access](https://docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy)
-  ----- ------- --------------- -------- ------------- -----
+  ---------------------------- ------- --------------- -------- ------------- -----------------------------------------------------------------------------------------------------------------------------------------------
   Nodes without accelerator    180     cn[1-180]     64GB     16 @ 2.4Ghz   qexp, qprod, qlong, qfree
   Nodes with GPU accelerator   23      cn[181-203]   96GB     16 @ 2.3Ghz   qgpu, qprod
   Nodes with MIC accelerator   4       cn[204-207]   96GB     16 @ 2.3GHz   qmic, qprod
@@ -137,7 +138,7 @@ with accelerator). Processors support Advanced Vector Extensions (AVX)
 Nodes equipped with Intel Xeon E5-2665 CPU have set PBS resource
 attribute cpu_freq = 24, nodes equipped with Intel Xeon E5-2470 CPU
 have set PBS resource attribute cpu_freq = 23.
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16:cpu_freq=24 -I
 ```
 In this example, we allocate 4 nodes, 16 cores at 2.4GHz per node.
diff --git a/docs.it4i.cz/anselm-cluster-documentation/environment-and-modules.md b/docs.it4i.cz/anselm-cluster-documentation/environment-and-modules.md
index e2a6e796a7dd361342c819e11d5b9e3dcb04c626..2b16f47ed8e8263ec06616b3d5ad85bb08a9d851 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/environment-and-modules.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/environment-and-modules.md
@@ -1,11 +1,12 @@
 Environment and Modules 
 =======================
+
   
 ### Environment Customization
 After logging in, you may want to configure the environment. Write your
 preferred path definitions, aliases, functions and module loads in the
 .bashrc file
-``` {.prettyprint .lang-sh}
+``` 
 # ~/.bashrc
 # Source global definitions
 if [ -f /etc/bashrc ]; then
@@ -39,25 +40,25 @@ Modules Path Expansion](#EasyBuild).
 The modules may be loaded, unloaded and switched, according to momentary
 needs.
 To check available modules use
-``` {.prettyprint .lang-sh}
+``` 
 $ module avail
 ```
 To load a module, for example the octave module  use
-``` {.prettyprint .lang-sh}
+``` 
 $ module load octave
 ```
 Loading the octave module will set up paths and environment variables of
 your active shell so that you are ready to run the octave software.
 To check loaded modules use
-``` {.prettyprint .lang-sh}
+``` 
 $ module list
 ```
  To unload a module, for example the octave module use
-``` {.prettyprint .lang-sh}
+``` 
 $ module unload octave
 ```
 Learn more on modules by reading the module man page
-``` {.prettyprint .lang-sh}
+``` 
 $ man module
 ```
 Following modules set up the development environment
@@ -72,7 +73,7 @@ using tool called
 If you want to use applications that are already built by
 EasyBuild, you have to modify your MODULEPATH environment
 variable.
-``` {.prettyprint .lang-sh}
+``` 
 export MODULEPATH=$MODULEPATH:/apps/easybuild/modules/all/
 ```
 This command expands your searched paths to modules. You can also add
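
The MODULEPATH expansion above is easy to verify interactively; a minimal sketch (the EasyBuild path is the one named in the text; adjust it if your site differs):

``` 
# Append the EasyBuild module tree to the module search path
export MODULEPATH=$MODULEPATH:/apps/easybuild/modules/all/

# Confirm the new component was appended (last colon-separated entry)
echo "$MODULEPATH" | tr ':' '\n' | tail -1
# -> /apps/easybuild/modules/all/
```

To make the change permanent, place the export line in your .bashrc as described in the Environment Customization section above.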
diff --git a/docs.it4i.cz/anselm-cluster-documentation/hardware-overview.md b/docs.it4i.cz/anselm-cluster-documentation/hardware-overview.md
index aff57803f79184d3669e2092a495ac0591ed6e38..684e1cde8c1384e4765a3d3ee2f27a2e2cf1c0b1 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/hardware-overview.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/hardware-overview.md
@@ -1,5 +1,6 @@
 Hardware Overview 
 =================
+
   
 The Anselm cluster consists of 209 computational nodes named cn[1-209]
 of which 180 are regular compute nodes, 23 GPU Kepler K20 accelerated
@@ -353,7 +354,7 @@ Total max. LINPACK performance  (Rmax)
 Total amount of RAM
 15.136 TB
   Node               Processor                               Memory   Accelerator
-  ------------------ ---------------- -------- ----------------------
+  ------------------ --------------------------------------- -------- ----------------------
   w/o accelerator    2x Intel Sandy Bridge E5-2665, 2.4GHz   64GB     -
   GPU accelerated    2x Intel Sandy Bridge E5-2470, 2.3GHz   96GB     NVIDIA Kepler K20
   MIC accelerated    2x Intel Sandy Bridge E5-2470, 2.3GHz   96GB     Intel Xeon Phi P5110
diff --git a/docs.it4i.cz/anselm-cluster-documentation/introduction.md b/docs.it4i.cz/anselm-cluster-documentation/introduction.md
index 7d1bde3e9229026bb5cd0d102710e86979641d53..cf3bfcbbadff97a674e71305f57127f8ec033092 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/introduction.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/introduction.md
@@ -1,5 +1,6 @@
 Introduction 
 ============
+
   
 Welcome to Anselm supercomputer cluster. The Anselm cluster consists of
 209 compute nodes, totaling 3344 compute cores with 15TB RAM and giving
diff --git a/docs.it4i.cz/anselm-cluster-documentation/network.md b/docs.it4i.cz/anselm-cluster-documentation/network.md
index 33ccce93e0eea6756576de1fe33f9cd4a5f51c7e..836d156647917cd60b51759123775e7fd794ad10 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/network.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/network.md
@@ -1,5 +1,6 @@
 Network 
 =======
+
   
 All compute and login nodes of Anselm are interconnected by
 [Infiniband](http://en.wikipedia.org/wiki/InfiniBand)
@@ -29,7 +30,7 @@ aliases cn1-cn209.
 The network provides **114MB/s** transfer rates via the TCP connection.
 Example
 -------
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob
 $ qstat -n -u username
                                                             Req'd  Req'd   Elap
diff --git a/docs.it4i.cz/anselm-cluster-documentation/prace.md b/docs.it4i.cz/anselm-cluster-documentation/prace.md
index 8e0d19d89e3595aceb42675ca183d571af75a7a6..466b4a140893cb2e515574b5abbdf8dd36f79808 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/prace.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/prace.md
@@ -1,5 +1,6 @@
 PRACE User Support 
 ==================
+
   
 Intro
 -----
@@ -31,7 +32,7 @@ to access the web interface of the local (IT4Innovations) request
 tracker and thus a new ticket should be created by sending an e-mail to
 support[at]it4i.cz.
 Obtaining Login Credentials
-----
+---------------------------
 In general, PRACE users already have a PRACE account set up through their
 HOMESITE (institution from their country) as a result of an awarded PRACE
 project proposal. This includes signed PRACE AuP, generated and
@@ -80,7 +81,7 @@ class="monospace">anselm-prace.it4i.cz</span> which is distributed
 between the two login nodes. If needed, user can login directly to one
 of the login nodes. The addresses are:
   Login address                 Port   Protocol   Login node
-  ------ ------ ---------- ------------------
+  ----------------------------- ------ ---------- ------------------
   anselm-prace.it4i.cz          2222   gsissh     login1 or login2
   login1-prace.anselm.it4i.cz   2222   gsissh     login1
   login2-prace.anselm.it4i.cz   2222   gsissh     login2
@@ -96,7 +97,7 @@ class="monospace">anselm.it4i.cz</span> which is distributed between the
 two login nodes. If needed, user can login directly to one of the login
 nodes. The addresses are:
   Login address           Port   Protocol   Login node
-   ------ ---------- ------------------
+  ----------------------- ------ ---------- ------------------
   anselm.it4i.cz          2222   gsissh     login1 or login2
   login1.anselm.it4i.cz   2222   gsissh     login1
   login2.anselm.it4i.cz   2222   gsissh     login2
@@ -145,7 +146,7 @@ There's one control server and three backend servers for striping and/or
 backup in case one of them would fail.
 **Access from PRACE network:**
   Login address                  Port   Node role
-  ------- ------ ------
+  ------------------------------ ------ -----------------------------
   gridftp-prace.anselm.it4i.cz   2812   Front end /control server
   login1-prace.anselm.it4i.cz    2813   Backend / data mover server
   login2-prace.anselm.it4i.cz    2813   Backend / data mover server
@@ -162,7 +163,7 @@ Or by using <span class="monospace">prace_service</span> script:
  
 **Access from public Internet:**
   Login address            Port   Node role
-  - ------ ------
+  ------------------------ ------ -----------------------------
   gridftp.anselm.it4i.cz   2812   Front end /control server
   login1.anselm.it4i.cz    2813   Backend / data mover server
   login2.anselm.it4i.cz    2813   Backend / data mover server
@@ -179,7 +180,7 @@ Or by using <span class="monospace">prace_service</span> script:
  
 Generally both shared file systems are available through GridFTP:
   File system mount point   Filesystem   Comment
-  -- ------------ ------------------
+  ------------------------- ------------ ----------------------------------------------------------------
   /home                     Lustre       Default HOME directories of users in format /home/prace/login/
   /scratch                  Lustre       Shared SCRATCH mounted on the whole cluster
 More information about the shared file systems is available
@@ -212,7 +213,7 @@ execution is in this [section of general
 documentation](https://docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/introduction).
 For PRACE users, the default production run queue is "qprace". PRACE
 users can also use two other queues "qexp" and "qfree".
-  ------
+  -------------------------------------------------------------------------------------------------------------------------
   queue                 Active project   Project resources   Nodes                 priority   authorization   walltime
                                                                                                               default/max
   --------------------- ---------------- ------------------- --------------------- ---------- --------------- -------------
@@ -223,7 +224,7 @@ users can also use two other queues "qexp" and "qfree".
                                                                                                               
   **qfree**            yes              none required       178 w/o accelerator   very low   no              12 / 12h
   Free resource queue                                                                                         
-  ------
+  -------------------------------------------------------------------------------------------------------------------------
 **qprace**, the PRACE Production queue. This queue is intended for
 normal production runs. It is required that active project with nonzero
 remaining resources is specified to enter the qprace. The queue runs
diff --git a/docs.it4i.cz/anselm-cluster-documentation/remote-visualization.md b/docs.it4i.cz/anselm-cluster-documentation/remote-visualization.md
index f001452634262bd0c4404b4055a0616ea5463b64..96651c373d3c85d411392225235ae487c187c790 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/remote-visualization.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/remote-visualization.md
@@ -1,6 +1,6 @@
 Remote visualization service 
 ============================
-Introduction {#schematic-overview}
+Introduction 
 ------------
 The goal of this service is to provide users GPU-accelerated use
 of OpenGL applications, especially for pre- and post-processing work,
@@ -28,7 +28,7 @@ Schematic overview
 ------------------
 ![rem_vis_scheme](https://docs.it4i.cz/anselm-cluster-documentation/scheme.png "rem_vis_scheme")
 ![rem_vis_legend](https://docs.it4i.cz/anselm-cluster-documentation/legend.png "rem_vis_legend")
-How to use the service {#setup-and-start-your-own-turbovnc-server}
+How to use the service 
 ----------------------
 ### Setup and start your own TurboVNC server.
 TurboVNC is designed and implemented for cooperation with VirtualGL and
@@ -48,7 +48,7 @@ Otherwise only the geometry (desktop size) definition is needed.
 *At first VNC server run you need to define a password.*
 This example defines desktop with dimensions 1200x700 pixels and 24 bit
 color depth.
-``` {.code .highlight .white .shell}
+``` 
 $ module load turbovnc/1.2.2 
 $ vncserver -geometry 1200x700 -depth 24 
 Desktop 'TurboVNClogin2:1 (username)' started on display login2:1 
@@ -56,7 +56,7 @@ Starting applications specified in /home/username/.vnc/xstartup.turbovnc
 Log file is /home/username/.vnc/login2:1.log 
 ```
 #### 3. Remember which display number your VNC server runs (you will need it in the future to stop the server). {#3-remember-which-display-number-your-vnc-server-runs-you-will-need-it-in-the-future-to-stop-the-server}
-``` {.code .highlight .white .shell}
+``` 
 $ vncserver -list 
 TurboVNC server sessions
 X DISPLAY # PROCESS ID 
@@ -64,21 +64,21 @@ X DISPLAY # PROCESS ID
 ```
 In this example the VNC server runs on display **:1**.
 #### 4. Remember the exact login node, where your VNC server runs. {#4-remember-the-exact-login-node-where-your-vnc-server-runs}
-``` {.code .highlight .white .shell}
+``` 
 $ uname -n
 login2 
 ```
 In this example the VNC server runs on **login2**.
 #### 5. Remember on which TCP port your own VNC server is running. {#5-remember-on-which-tcp-port-your-own-vnc-server-is-running}
 To get the port, look into the log file of your VNC server.
-``` {.code .highlight .white .shell}
+``` 
 $ grep -E "VNC.*port" /home/username/.vnc/login2:1.log 
 20/02/2015 14:46:41 Listening for VNC connections on TCP port 5901 
 ```
 In this example the VNC server listens on TCP port **5901**.
 #### 6. Connect to the login node where your VNC server runs with SSH to tunnel your VNC session. {#6-connect-to-the-login-node-where-your-vnc-server-runs-with-ssh-to-tunnel-your-vnc-session}
 Tunnel the TCP port on which your VNC server is listening.
-``` {.code .highlight .white .shell}
+``` 
 $ ssh login2.anselm.it4i.cz -L 5901:localhost:5901 
 ```
 *If you use Windows and Putty, please refer to port forwarding setup
@@ -89,7 +89,7 @@ Get it from<http://sourceforge.net/projects/turbovnc/>
 #### 8. Run TurboVNC Viewer from your workstation. {#8-run-turbovnc-viewer-from-your-workstation}
 Mind that you should connect through the SSH tunneled port. In this
 example it is 5901 on your workstation (localhost).
-``` {.code .highlight .white .shell}
+``` 
 $ vncviewer localhost:5901 
 ```
 *If you use Windows version of TurboVNC Viewer, just run the Viewer and
@@ -100,11 +100,11 @@ workstation.*
 #### 10. After you end your visualization session. {#10-after-you-end-your-visualization-session}
 *Don't forget to correctly shutdown your own VNC server on the login
 node!*
-``` {.code .highlight .white .shell}
+``` 
 $ vncserver -kill :1 
 ```
 Access the visualization node
-------
+-----------------------------
 To access the node use a dedicated PBS Professional scheduler queue
 **qviz**. The queue has the following properties:
 <table>
@@ -154,12 +154,12 @@ hours maximum.*
 To access the visualization node, follow these steps:
 #### 1. In your VNC session, open a terminal and allocate a node using PBSPro qsub command. {#1-in-your-vnc-session-open-a-terminal-and-allocate-a-node-using-pbspro-qsub-command}
 *This step is necessary to allow you to proceed with the next steps.*
-``` {.code .highlight .white .shell}
+``` 
 $ qsub -I -q qviz -A PROJECT_ID 
 ```
 In this example the default values for CPU cores and usage time are
 used.
-``` {.code .highlight .white .shell}
+``` 
 $ qsub -I -q qviz -A PROJECT_ID -l select=1:ncpus=16 -l walltime=02:00:00 
 ```
 *Substitute **PROJECT_ID** with the assigned project identification
@@ -167,7 +167,7 @@ string.*
 In this example a whole node for 2 hours is requested.
 If there are free resources for your request, you will have a shell
 running on an assigned node. Please remember the name of the node.
-``` {.code .highlight .white .shell}
+``` 
 $ uname -n
 srv8 
 ```
@@ -175,24 +175,24 @@ In this example the visualization session was assigned to node **srv8**.
 #### 2. In your VNC session open another terminal (keep the one with interactive PBSPro job open). {#2-in-your-vnc-session-open-another-terminal-keep-the-one-with-interactive-pbspro-job-open}
 Setup the VirtualGL connection to the node, which PBSPro allocated for
 your job.
-``` {.code .highlight .white .shell}
+``` 
 $ vglconnect srv8 
 ```
 You will be connected through the created VirtualGL tunnel to the
 visualization node, where you will get a shell.
 #### 3. Load the VirtualGL module. {#3-load-the-virtualgl-module}
-``` {.code .highlight .white .shell}
+``` 
 $ module load virtualgl/2.4 
 ```
 #### 4. Run your desired OpenGL accelerated application using VirtualGL script "vglrun". {#4-run-your-desired-opengl-accelerated-application-using-virtualgl-script-vglrun}
-``` {.code .highlight .white .shell}
+``` 
 $ vglrun glxgears 
 ```
 Please note that if you want to run an OpenGL application that is
 available through modules, you first need to load the respective module.
 E.g., to run the **Mentat** OpenGL application from the **MARC** software
 package, use:
-``` {.code .highlight .white .shell}
+``` 
 $ module load marc/2013.1 
 $ vglrun mentat 
 ```
diff --git a/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution.md b/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution.md
index c0a7da2ce7d24a1e84a791be4fcaf30966945bd2..1037208fbf221561e0292ea20a238c8f80786629 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution.md
@@ -1,5 +1,6 @@
 Resource Allocation and Job Execution 
 =====================================
+
   
 To run a
 [job](https://docs.it4i.cz/anselm-cluster-documentation/introduction),
@@ -13,7 +14,7 @@ here](https://docs.it4i.cz/pbspro-documentation),
 especially in the [PBS Pro User's
 Guide](https://docs.it4i.cz/pbspro-documentation/pbspro-users-guide).
 Resources Allocation Policy
-----
+---------------------------
 The resources are allocated to the job in a fairshare fashion, subject
 to constraints set by the queue and resources available to the Project.
 [The
@@ -33,7 +34,7 @@ Read more on the [Resource Allocation
 Policy](https://docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy)
 page.
 Job submission and execution
------
+----------------------------
 Use the **qsub** command to submit your jobs.
 The qsub submits the job into the queue. The qsub command creates a
 request to the PBS Job manager for allocation of specified resources. 
diff --git a/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/capacity-computing.md b/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/capacity-computing.md
index c26533581b5d3f9a1b1c44a851860f2642874dcd..c5863818d519c0f22d98ca2fd8926a1eb146cc15 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/capacity-computing.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/capacity-computing.md
@@ -1,5 +1,6 @@
 Capacity computing 
 ==================
+
   
 Introduction
 ------------
@@ -52,11 +53,11 @@ file001, ..., file900). Assume we would like to use each of these input
 files with program executable myprog.x, each as a separate job.
 First, we create a tasklist file (or subjobs list), listing all tasks
 (subjobs) - all input files in our example:
-``` {.prettyprint .lang-sh}
+``` 
 $ find . -name 'file*' > tasklist
 ```
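Each task line is later retrieved by its line number with `sed` inside the jobscript. A minimal, self-contained sketch of that lookup, with a literal index standing in for the `$PBS_ARRAY_INDEX` variable that PBS sets per subjob:

```shell
# build a small tasklist and pick the 3rd entry, exactly as the jobscript does
printf 'file001\nfile002\nfile003\nfile004\n' > tasklist
PBS_ARRAY_INDEX=3    # normally set by PBS for each subjob; hard-coded here
TASK=$(sed -n "${PBS_ARRAY_INDEX}p" tasklist)
echo "$TASK"         # prints file003
```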
 Then we create jobscript:
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 #PBS -A PROJECT_ID
 #PBS -q qprod
@@ -65,7 +66,7 @@ Then we create jobscript:
 SCR=/lscratch/$PBS_JOBID
 mkdir -p $SCR ; cd $SCR || exit
 # get individual tasks from tasklist with index from PBS JOB ARRAY
-TASK=$(sed -n "${PBS_ARRAY_INDEX}p" $PBS_O_WORKDIR/tasklist)  
+TASK=$(sed -n "${PBS_ARRAY_INDEX}p" $PBS_O_WORKDIR/tasklist)
 # copy input file and executable to scratch 
 cp $PBS_O_WORKDIR/$TASK input ; cp $PBS_O_WORKDIR/myprog.x .
 # execute the calculation
@@ -94,7 +95,7 @@ run has to be used properly.
 To submit the job array, use the qsub -J command. The 900 jobs of the
 [example above](#array_example) may be submitted like
 this:
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -N JOBNAME -J 1-900 jobscript
 12345[].dm2
 ```
@@ -104,14 +105,14 @@ the #PBS directives in the beginning of the jobscript file, dont'
 forget to set your valid PROJECT_ID and desired queue).
 Sometimes, for testing purposes, you may need to submit a one-element
 array. This is not allowed by PBSPro, but there is a workaround:
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -N JOBNAME -J 9-10:2 jobscript
 ```
 This will only choose the lower index (9 in this example) for
 submitting/running your job.
 ### Manage the job array
 Check status of the job array by the qstat command.
-``` {.prettyprint .lang-sh}
+``` 
 $ qstat -a 12345[].dm2
 dm2:
                                                             Req'd  Req'd   Elap
@@ -121,7 +122,7 @@ Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
 ```
 The status B means that some subjobs are already running.
 Check status of the first 100 subjobs by the qstat command.
-``` {.prettyprint .lang-sh}
+``` 
 $ qstat -a 12345[1-100].dm2
 dm2:
                                                             Req'd  Req'd   Elap
@@ -137,16 +138,16 @@ Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
 ```
 Delete the entire job array. Running subjobs will be killed, queued
 subjobs will be deleted.
-``` {.prettyprint .lang-sh}
+``` 
 $ qdel 12345[].dm2
 ```
 Deleting large job arrays may take a while.
 Display status information for all user's jobs, job arrays, and subjobs.
-``` {.prettyprint .lang-sh}
+``` 
 $ qstat -u $USER -t
 ```
 Display status information for all user's subjobs.
-``` {.prettyprint .lang-sh}
+``` 
 $ qstat -u $USER -tJ
 ```
 Read more on job arrays in the [PBSPro Users
@@ -159,7 +160,7 @@ more computers. A job can be a single command or a small script that has
 to be run for each of the lines in the input. GNU parallel is most
 useful for running single core jobs via the queue system on Anselm.
 For more information and examples see the parallel man page:
-``` {.prettyprint .lang-sh}
+``` 
 $ module add parallel
 $ man parallel
 ```
@@ -174,11 +175,11 @@ files with program executable myprog.x, each as a separate single core
 job. We call these single core jobs tasks.
 First, we create a tasklist file, listing all tasks - all input files in
 our example:
-``` {.prettyprint .lang-sh}
+``` 
 $ find . -name 'file*' > tasklist
 ```
 Then we create jobscript:
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 #PBS -A PROJECT_ID
 #PBS -q qprod
@@ -209,7 +210,7 @@ $TASK.out name. 
 ### Submit the job
 To submit the job, use the qsub command. The 101 tasks' job of the
 [example above](#gp_example) may be submitted like this:
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -N JOBNAME jobscript
 12345.dm2
 ```
@@ -219,7 +220,7 @@ complete in less than 2 hours.
 Please note the #PBS directives at the beginning of the jobscript file;
 don't forget to set your valid PROJECT_ID and desired queue.
 Job arrays and GNU parallel
---------
+-------------------------------
 Combine the Job arrays and GNU parallel for best throughput of single
 core jobs
 While job arrays are able to utilize all available computational nodes,
@@ -241,16 +242,16 @@ files with program executable myprog.x, each as a separate single core
 job. We call these single core jobs tasks.
 First, we create a tasklist file, listing all tasks - all input files in
 our example:
-``` {.prettyprint .lang-sh}
+``` 
 $ find . -name 'file*' > tasklist
 ```
 Next, we create a file controlling how many tasks will be executed in
 one subjob:
-``` {.prettyprint .lang-sh}
+``` 
 $ seq 32 > numtasks
 ```
 Then we create jobscript:
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 #PBS -A PROJECT_ID
 #PBS -q qprod
@@ -262,7 +263,7 @@ SCR=/lscratch/$PBS_JOBID/$PARALLEL_SEQ
 mkdir -p $SCR ; cd $SCR || exit
 # get individual task from tasklist with index from PBS JOB ARRAY and index from Parallel
 IDX=$(($PBS_ARRAY_INDEX + $PARALLEL_SEQ - 1))
-TASK=$(sed -n "${IDX}p" $PBS_O_WORKDIR/tasklist)
+TASK=$(sed -n "${IDX}p" $PBS_O_WORKDIR/tasklist)
 [ -z "$TASK" ] && exit
 # copy input file and executable to scratch 
 cp $PBS_O_WORKDIR/$TASK input 
@@ -291,7 +292,7 @@ Select  subjob walltime and number of tasks per subjob  carefully
 To submit the job array, use the qsub -J command. The 992 tasks' job of
 the [example above](#combined_example) may be submitted
 like this:
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -N JOBNAME -J 1-992:32 jobscript
 12345[].dm2
 ```
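The `-J 1-992:32` range works together with the `IDX=$(($PBS_ARRAY_INDEX + $PARALLEL_SEQ - 1))` arithmetic in the jobscript above. A quick sketch of how a subjob index and a GNU parallel sequence number combine into a task number (values are illustrative):

```shell
# subjob 33 is the second subjob of -J 1-992:32; this is its 2nd parallel task
PBS_ARRAY_INDEX=33   # normally set by PBS
PARALLEL_SEQ=2       # normally set by GNU parallel
IDX=$(($PBS_ARRAY_INDEX + $PARALLEL_SEQ - 1))
echo $IDX            # prints 34, i.e. the 34th line of tasklist
```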
@@ -311,7 +312,7 @@ recommend to try out the examples, before using this for running
 production jobs.
 Unzip the archive in an empty directory on Anselm and follow the
 instructions in the README file
-``` {.prettyprint .lang-sh}
+``` 
 $ unzip capacity.zip
 $ cat README
 ```
diff --git a/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/introduction.md b/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/introduction.md
index c0a7da2ce7d24a1e84a791be4fcaf30966945bd2..1037208fbf221561e0292ea20a238c8f80786629 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/introduction.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/introduction.md
@@ -1,5 +1,6 @@
 Resource Allocation and Job Execution 
 =====================================
+
   
 To run a
 [job](https://docs.it4i.cz/anselm-cluster-documentation/introduction),
@@ -13,7 +14,7 @@ here](https://docs.it4i.cz/pbspro-documentation),
 especially in the [PBS Pro User's
 Guide](https://docs.it4i.cz/pbspro-documentation/pbspro-users-guide).
 Resources Allocation Policy
-----
+---------------------------
 The resources are allocated to the job in a fairshare fashion, subject
 to constraints set by the queue and resources available to the Project.
 [The
@@ -33,7 +34,7 @@ Read more on the [Resource Allocation
 Policy](https://docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy)
 page.
 Job submission and execution
------
+----------------------------
 Use the **qsub** command to submit your jobs.
 The qsub submits the job into the queue. The qsub command creates a
 request to the PBS Job manager for allocation of specified resources. 
diff --git a/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/job-submission-and-execution.md b/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/job-submission-and-execution.md
index e461362995d2f3cbe41560a58de9a1e73626c0ab..ebf37320bb8b98a085d86171f900dd076658cffe 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/job-submission-and-execution.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/job-submission-and-execution.md
@@ -1,5 +1,6 @@
 Job submission and execution 
 ============================
+
   
 Job Submission
 --------------
@@ -14,7 +15,7 @@ When allocating computational resources for the job, please specify
 Use the **qsub** command to submit your job to a queue for allocation of
 the computational resources.
 Submit the job using the qsub command:
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -A Project_ID -q queue -l select=x:ncpus=y,walltime=[[hh:]mm:]ss[.ms] jobscript
 ```
 The qsub submits the job into the queue; in other words, the qsub
@@ -24,7 +25,7 @@ subject to above described policies and constraints. **After the
 resources are allocated, the jobscript or interactive shell is executed
 on the first of the allocated nodes.**
 ### Job Submission Examples
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -A OPEN-0-0 -q qprod -l select=64:ncpus=16,walltime=03:00:00 ./myjob
 ```
 In this example, we allocate 64 nodes, 16 cores per node, for 3 hours.
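Since core-hours are accounted on a wall-clock basis (see the Resources Accounting Policy), a quick sketch of what this example allocation consumes (illustrative arithmetic only):

```shell
# 64 nodes x 16 cores per node x 3 hours of walltime
nodes=64; ncpus=16; hours=3
corehours=$(( nodes * ncpus * hours ))
echo "$corehours core-hours"   # prints 3072 core-hours
```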
@@ -32,21 +33,21 @@ We allocate these resources via the qprod queue, consumed resources will
 be accounted to the Project identified by Project ID OPEN-0-0. Jobscript
 myjob will be executed on the first node in the allocation.
  
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -q qexp -l select=4:ncpus=16 -I
 ```
 In this example, we allocate 4 nodes, 16 cores per node, for 1 hour. We
 allocate these resources via the qexp queue. The resources will be
 available interactively
  
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -A OPEN-0-0 -q qnvidia -l select=10:ncpus=16 ./myjob
 ```
 In this example, we allocate 10 NVIDIA accelerated nodes, 16 cores per
 node, for 24 hours. We allocate these resources via the qnvidia queue.
 Jobscript myjob will be executed on the first node in the allocation.
  
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -A OPEN-0-0 -q qfree -l select=10:ncpus=16 ./myjob
 ```
 In this example, we allocate 10 nodes, 16 cores per node, for 12 hours.
@@ -58,20 +59,20 @@ the first node in the allocation.
 All qsub options may be [saved directly into the
 jobscript](#PBSsaved). In such a case, no options to qsub
 are needed.
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub ./myjob
 ```
  
 By default, the PBS batch system sends an e-mail only when the job is
 aborted. Disabling mail events completely can be done like this:
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -m n
 ```
 Advanced job placement
 ----------------------
 ### Placement by name
 Specific nodes may be allocated via the PBS
-``` {.prettyprint .lang-sh}
+``` 
 qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16:host=cn171+1:ncpus=16:host=cn172 -I
 ```
 In this example, we allocate nodes cn171 and cn172, all 16 cores per
@@ -85,7 +86,7 @@ Nodes equipped with Intel Xeon E5-2665 CPU have base clock frequency
 via the PBS resource attribute
 cpu_freq.
   CPU Type             base freq.   Nodes                        cpu_freq attribute
-  -------------------- ------------ ----- ---------------------
+  -------------------- ------------ ---------------------------- ---------------------
   Intel Xeon E5-2665   2.4GHz       cn[1-180], cn[208-209]       24
   Intel Xeon E5-2470   2.3GHz       cn[181-207]                  23
  
@@ -143,14 +144,14 @@ Job Management
 --------------
 Check status of your jobs using the **qstat** and **check-pbs-jobs**
 commands
-``` {.prettyprint .lang-sh}
+``` 
 $ qstat -a
 $ qstat -a -u username
 $ qstat -an -u username
 $ qstat -f 12345.srv11
 ```
 Example:
-``` {.prettyprint .lang-sh}
+``` 
 $ qstat -a
 srv11:
                                                             Req'd  Req'd   Elap
@@ -171,7 +172,7 @@ Check status of your jobs using check-pbs-jobs command. Check presence
 of user's PBS jobs' processes on execution hosts. Display load,
 processes. Display job standard and error output. Continuously display
 (tail -f) job standard or error output.
-``` {.prettyprint .lang-sh}
+``` 
 $ check-pbs-jobs --check-all
 $ check-pbs-jobs --print-load --print-processes
 $ check-pbs-jobs --print-job-out --print-job-err
@@ -179,7 +180,7 @@ $ check-pbs-jobs --jobid JOBID --check-all --print-all
 $ check-pbs-jobs --jobid JOBID --tailf-job-out
 ```
 Examples:
-``` {.prettyprint .lang-sh}
+``` 
 $ check-pbs-jobs --check-all
 JOB 35141.dm2, session_id 71995, user user2, nodes cn164,cn165
 Check session id: OK
 cn165: No process
 ```
 In this example we see that job 35141.dm2 currently runs no process on
 allocated node cn165, which may indicate an execution error.
-``` {.prettyprint .lang-sh}
+``` 
 $ check-pbs-jobs --print-load --print-processes
 JOB 35141.dm2, session_id 71995, user user2, nodes cn164,cn165
 Print load
 cn164: 99.7 run-task
 In this example we see that job 35141.dm2 currently runs process
 run-task on node cn164, using one thread only, while node cn165 is
 empty, which may indicate an execution error.
-``` {.prettyprint .lang-sh}
+``` 
 $ check-pbs-jobs --jobid 35141.dm2 --print-job-out
 JOB 35141.dm2, session_id 71995, user user2, nodes cn164,cn165
 Print job standard output:
@@ -221,15 +222,15 @@ In this example, we see actual output (some iteration loops) of the job
 Manage your queued or running jobs using the **qhold**, **qrls**,
 **qdel**, **qsig** or **qalter** commands.
 You may release your allocation at any time using the qdel command:
-``` {.prettyprint .lang-sh}
+``` 
 $ qdel 12345.srv11
 ```
 You may kill a running job by force using the qsig command:
-``` {.prettyprint .lang-sh}
+``` 
 $ qsig -s 9 12345.srv11
 ```
 Learn more by reading the pbs man page
-``` {.prettyprint .lang-sh}
+``` 
 $ man pbs_professional
 ```
 Job Execution
@@ -243,7 +244,7 @@ command as an argument and executed by the PBS Professional workload
 manager.
 The jobscript or interactive shell is executed on the first of the
 allocated nodes.
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob
 $ qstat -n -u username
 srv11:
@@ -259,7 +260,7 @@ the node cn17, while the nodes cn108, cn109 and cn110 are available for
 use as well.
 The jobscript or interactive shell is by default executed in the home
 directory.
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -q qexp -l select=4:ncpus=16 -I
 qsub: waiting for job 15210.srv11 to start
 qsub: job 15210.srv11 ready
@@ -275,7 +276,7 @@ may access each other via ssh as well.
 Calculations on allocated nodes may be executed remotely via the MPI,
 ssh, pdsh or clush. You may find out which nodes belong to the
 allocation by reading the $PBS_NODEFILE file
-``` {.prettyprint .lang-sh}
+``` 
 qsub -q qexp -l select=4:ncpus=16 -I
 qsub: waiting for job 15210.srv11 to start
 qsub: job 15210.srv11 ready
@@ -302,7 +303,7 @@ Production jobs must use the /scratch directory for I/O
 The recommended way to run production jobs is to change to the /scratch
 directory early in the jobscript, copy all inputs to /scratch, execute
 the calculations, and copy outputs to the home directory.
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 # change to scratch directory, exit on failure
 SCRDIR=/scratch/$USER/myjob
@@ -341,7 +342,7 @@ Use **mpiprocs** and **ompthreads** qsub options to control the MPI job
 execution.
 Example jobscript for an MPI job with preloaded inputs and executables;
 options for qsub are stored within the script:
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 #PBS -q qprod
 #PBS -N MYJOB
@@ -374,7 +375,7 @@ scratch will be deleted immediately after the job ends.
 Example jobscript for single node calculation, using [local
 scratch](https://docs.it4i.cz/anselm-cluster-documentation/storage-1/storage)
 on the node:
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 # change to local scratch directory
 cd /lscratch/$PBS_JOBID || exit
diff --git a/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy.md b/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy.md
index 1af2116e2f60086a993276503144204d33b74874..f37f4f14f3e7918f577adbbb52b18f23cb866429 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy.md
@@ -1,8 +1,9 @@
 Resources Allocation Policy 
 ===========================
+
   
-Resources Allocation Policy {#resources-allocation-policy}
-----
+Resources Allocation Policy 
+---------------------------
 The resources are allocated to the job in a fairshare fashion, subject
 to constraints set by the queue and resources available to the Project.
 The Fairshare at Anselm ensures that individual users may consume
@@ -181,12 +182,12 @@ Check the status of jobs, queues and compute nodes at
 ![rspbs web
 interface](https://docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/rsweb.png)
 Display the queue status on Anselm:
-``` {.prettyprint .lang-sh}
+``` 
 $ qstat -q
 ```
 The PBS allocation overview may also be obtained using the rspbs
 command.
-``` {.prettyprint .lang-sh}
+``` 
 $ rspbs
 Usage: rspbs [options]
 Options:
@@ -241,7 +242,7 @@ Options:
   --incl-finished       Include finished jobs
 ```
 Resources Accounting Policy
---------
+-------------------------------
 ### The Core-Hour
 The resources that are currently subject to accounting are the
 core-hours. The core-hours are accounted on the wall clock basis. The
@@ -260,7 +261,7 @@ located here:
 Users may check at any time how many core-hours they and their
 projects have consumed. The command is available on the
 clusters' login nodes.
-``` {.prettyprint .lang-sh}
+``` 
 $ it4ifree
 Password:
      PID    Total   Used   ...by me Free
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/anselm-cluster-documentation/software/mpi-1/running-mpich2.md b/docs.it4i.cz/anselm-cluster-documentation/software/anselm-cluster-documentation/software/mpi-1/running-mpich2.md
index c8d0b2abc90013cd168d0571b9fa0eaaf9d14368..303c0b50ffa13933c63f1e0f1d44a9dfd1b963ac 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/anselm-cluster-documentation/software/mpi-1/running-mpich2.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/anselm-cluster-documentation/software/mpi-1/running-mpich2.md
@@ -1,8 +1,9 @@
 Running MPICH2 
 ==============
+
   
 MPICH2 program execution
--
+------------------------
 The MPICH2 programs use the mpd daemon or an ssh connection to spawn
 processes; no PBS support is needed. However, a PBS allocation is required to
 access compute nodes. On Anselm, the **Intel MPI** and **mpich2 1.9**
@@ -90,7 +91,7 @@ later) the following variables may be used for Intel or GCC:
     $ export OMP_PLACES=cores 
  
 MPICH2 Process Mapping and Binding
------------
+----------------------------------
 The mpirun allows for precise selection of how the MPI processes will be
 mapped to the computational nodes and how these processes will bind to
 particular processor sockets and cores.
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/ansys.md b/docs.it4i.cz/anselm-cluster-documentation/software/ansys.md
index c8e08155b7b965077a12d582b462273bf2fee7c2..884bef9353c3dde2040ae848945b12af1d3ffb7a 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/ansys.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/ansys.md
@@ -6,7 +6,7 @@ Republic provided all ANSYS licenses for ANSELM cluster and supports of
 all ANSYS Products (Multiphysics, Mechanical, MAPDL, CFX, Fluent,
 Maxwell, LS-DYNA...) to IT staff and ANSYS users. If you encounter a
 problem with ANSYS functionality, please contact
-please [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM){.email-link}
+[hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM).
 Anselm provides both commercial and academic variants. Academic variants
 are distinguished by the word "**Academic...**" in the license name or
 by the two-letter prefix "**aa_**" in the license feature name. Change
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-cfx.md b/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-cfx.md
index 90931cac41dfd242affef7b9414133c47df1afbf..da54968c816b2d2fa76ec707918590df94238831 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-cfx.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-cfx.md
@@ -15,7 +15,7 @@ automation using session files, scripting and a powerful expression
 language.
 <span>To run ANSYS CFX in batch mode you can utilize/modify the default
 cfx.pbs script and execute it via the qsub command.</span>
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 #PBS -l nodes=2:ppn=16
 #PBS -q qprod
@@ -42,7 +42,7 @@ for host in `cat $PBS_NODEFILE`
 do
  if [ "$hl" = "" ]
  then hl="$host:$procs_per_host"
- else hl="${hl}:$host:$procs_per_host"
+ else hl="${hl}:$host:$procs_per_host"
  fi
 done
 echo Machines: $hl
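The loop above builds a colon-separated machine list for the solver. A minimal, self-contained sketch of the same list-building loop, with hard-coded host names standing in for the contents of `$PBS_NODEFILE`:

```shell
# illustrative host names stand in for the contents of $PBS_NODEFILE
procs_per_host=16
hl=""
for host in cn171 cn172
do
 if [ "$hl" = "" ]
 then hl="$host:$procs_per_host"
 else hl="${hl}:$host:$procs_per_host"
 fi
done
echo "$hl"   # prints cn171:16:cn172:16
```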
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-fluent.md b/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-fluent.md
index b8b7b424fe32b2abb0fc95bd409bd79835becc8c..0d71ac3a6d68d7acc4c7eac6e051b21ecd247c86 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-fluent.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-fluent.md
@@ -11,10 +11,10 @@ treatment plants. Special models that give the software the ability to
 model in-cylinder combustion, aeroacoustics, turbomachinery, and
 multiphase systems have served to broaden its reach.
 <span>1. Common way to run Fluent over pbs file</span>
---------
+------------------------------------------------------
 <span>To run ANSYS Fluent in batch mode you can utilize/modify the
 default fluent.pbs script and execute it via the qsub command.</span>
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 #PBS -S /bin/bash
 #PBS -l nodes=2:ppn=16
@@ -64,8 +64,8 @@ structure:
 <span>The appropriate dimension of the problem has to be set by
 parameter (2d/3d). </span>
 <span>2. Fast way to run Fluent from command line</span>
-----------
-``` {.prettyprint .lang-sh}
+--------------------------------------------------------
+``` 
 fluent solver_version [FLUENT_options] -i journal_file -pbs
 ```
 This syntax will start the ANSYS FLUENT job under PBS Professional using
@@ -80,14 +80,14 @@ working directory, and all output will be written to the file <span
 class="monospace">fluent.o</span><span> </span><span
 class="emphasis">*job_ID*</span>.       
 3. Running Fluent via user's config file
------------------
+----------------------------------------
 The sample script uses a configuration file called <span
 class="monospace">pbs_fluent.conf</span>  if no command line arguments
 are present. This configuration file should be present in the directory
 from which the jobs are submitted (which is also the directory in which
 the jobs are executed). The following is an example of what the content
 of <span class="monospace">pbs_fluent.conf</span> can be:
-``` {.screen}
+``` 
 input="example_small.flin"
 case="Small-1.65m.cas"
 fluent_args="3d -pmyrinet"
@@ -116,7 +116,7 @@ execute the job across multiple processors.               
 <span>To run ANSYS Fluent in batch mode with user's config file you can
 utilize/modify the following script and execute it via the qsub
 command.</span>
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/sh
 #PBS -l nodes=2:ppn=4
 #PBS -q qprod
@@ -141,7 +141,7 @@ command.</span>
      cpus=`expr $num_nodes \* $NCPUS`
      #Default arguments for mpp jobs, these should be changed to suit your
      #needs.
-     fluent_args="-t${cpus} $fluent_args -cnf=$PBS_NODEFILE"
+     fluent_args="-t${cpus} $fluent_args -cnf=$PBS_NODEFILE"
      ;;
    *)
      #SMP case
@@ -164,12 +164,12 @@ command.</span>
 <span>It runs the jobs out of the directory from which they are
 submitted (PBS_O_WORKDIR).</span>
 4. Running Fluent in parallel
-------
+-----------------------------
 Fluent can be run in parallel only under an Academic Research license. To
 do so, the ANSYS Academic Research license must be placed before the ANSYS
 CFD license in the user preferences. To make this change, the anslic_admin
 utility should be run:
-``` {.prettyprint .lang-sh}
+``` 
 /ansys_inc/shared_les/licensing/lic_admin/anslic_admin
 ```
 The ANSLIC_ADMIN utility will start.
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md b/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md
index 736449e2dd3068d85f1db94b6618d088097fd66c..1fb00523e087054511e0417833543317ad8dc73a 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md
@@ -19,7 +19,7 @@ available within a single fully interactive modern  graphical user
 environment.</span></span>
 <span>To run ANSYS LS-DYNA in batch mode you can utilize/modify the
 default ansysdyna.pbs script and execute it via the qsub command.</span>
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 #PBS -l nodes=2:ppn=16
 #PBS -q qprod
@@ -49,7 +49,7 @@ for host in `cat $PBS_NODEFILE`
 do
  if [ "$hl" = "" ]
  then hl="$host:$procs_per_host"
- else hl="${hl}:$host:$procs_per_host"
+ else hl="${hl}:$host:$procs_per_host"
  fi
 done
 echo Machines: $hl
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md b/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
index c29d2617a494e5fa3bff3f9b47f0e01344c6dc6e..9c4dc43aa566c19b99e1d42202c665e1f6114662 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
@@ -10,7 +10,7 @@ physics problems including direct coupled-field elements and the ANSYS
 multi-field solver.</span>
 <span>To run ANSYS MAPDL in batch mode you can utilize/modify the
 default mapdl.pbs script and execute it via the qsub command.</span>
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 #PBS -l nodes=2:ppn=16
 #PBS -q qprod
@@ -37,7 +37,7 @@ for host in `cat $PBS_NODEFILE`
 do
  if [ "$hl" = "" ]
  then hl="$host:$procs_per_host"
- else hl="${hl}:$host:$procs_per_host"
+ else hl="${hl}:$host:$procs_per_host"
  fi
 done
 echo Machines: $hl
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-products-mechanical-fluent-cfx-mapdl.md b/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-products-mechanical-fluent-cfx-mapdl.md
index c8e08155b7b965077a12d582b462273bf2fee7c2..884bef9353c3dde2040ae848945b12af1d3ffb7a 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-products-mechanical-fluent-cfx-mapdl.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ansys-products-mechanical-fluent-cfx-mapdl.md
@@ -6,7 +6,7 @@ Republic provided all ANSYS licenses for ANSELM cluster and supports of
 all ANSYS Products (Multiphysics, Mechanical, MAPDL, CFX, Fluent,
 Maxwell, LS-DYNA...) to IT staff and ANSYS users. If you encounter a
 problem with ANSYS functionality, please contact
-please [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM){.email-link}
+[hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM).
 Anselm provides both commercial and academic variants. Academic variants
 are distinguished by the word "**Academic...**" in the license name or
 by the two-letter prefix "**aa_**" in the license feature name. Change
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ls-dyna.md b/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ls-dyna.md
index ca41e014650174b552c26e88f9166f24e0e8beb8..74aa5752b7c9c3f2278765419b22aaddae347457 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ls-dyna.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/ansys/ls-dyna.md
@@ -21,7 +21,7 @@ support now. </span>
 <span><span>To run LS-DYNA in batch mode you can utilize/modify the
 default lsdyna.pbs script and execute it via the qsub
 command.</span></span>
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 #PBS -l nodes=1:ppn=16
 #PBS -q qprod
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/chemistry/molpro.md b/docs.it4i.cz/anselm-cluster-documentation/software/chemistry/molpro.md
index 73d81b27948e1ab1045bd290d9ef492ac4424827..d31000f53e1e8bcd7ab6287b43eaad8f501fe398 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/chemistry/molpro.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/chemistry/molpro.md
@@ -23,7 +23,7 @@ Currently on Anselm is installed version 2010.1, patch level 45,
 parallel version compiled with Intel compilers and Intel MPI.
 Compilation parameters are default :
   Parameter                                         Value
-  --- ------
+  ------------------------------------------------- -----------------------------
   <span>max number of atoms</span>                  200
   <span>max number of valence orbitals</span>       300
   <span>max number of basis functions</span>        4095
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/chemistry/nwchem.md b/docs.it4i.cz/anselm-cluster-documentation/software/chemistry/nwchem.md
index c6626f97404950e2e7e155b71ef5aa19cf3cc284..cc8e9683578d47926fc0ea7a674f32ccfeb03e05 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/chemistry/nwchem.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/chemistry/nwchem.md
@@ -2,7 +2,7 @@ NWChem
 ======
 High-Performance Computational Chemistry
 <span>Introduction</span>
---
+-------------------------
 <span>NWChem aims to provide its users with computational chemistry
 tools that are scalable both in their ability to treat large scientific
 computational chemistry problems efficiently, and in their use of
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/compilers.md b/docs.it4i.cz/anselm-cluster-documentation/software/compilers.md
index f3f1272e9af9b3bb42921eefd97ddf9c6e4acceb..fea65453620e8ccbc108f7f091523fc0f6e516c8 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/compilers.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/compilers.md
@@ -1,6 +1,7 @@
 Compilers 
 =========
 Available compilers, including GNU, INTEL and UPC compilers
+
   
 Currently there are several compilers for different programming
 languages available on the Anselm cluster:
@@ -19,7 +20,7 @@ products, please read the [Intel Parallel
 studio](https://docs.it4i.cz/anselm-cluster-documentation/software/intel-suite)
 page.
 GNU C/C++ and Fortran Compilers
---------
+-------------------------------
 For compatibility reasons there are still available the original (old
 4.4.6-4) versions of GNU compilers as part of the OS. These are
 accessible in the search path  by default.
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/comsol.md b/docs.it4i.cz/anselm-cluster-documentation/software/comsol.md
index 7364eb191f81b7cebaf64e46bd505365a8842ca6..9a4c4b0b0ac0f68d8949c1c1c0461e4dd94e1b1c 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/comsol.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/comsol.md
@@ -1,9 +1,10 @@
 COMSOL Multiphysics® 
 ====================
+
   
 <span><span>Introduction
 </span></span>
---
+-------------------------
 <span><span>[COMSOL](http://www.comsol.com)</span></span><span><span>
 is a powerful environment for modelling and solving various engineering
 and scientific problems based on partial differential equations. COMSOL
@@ -52,14 +53,14 @@ stable version. There are two variants of the release:</span></span>
     soon</span>.</span></span>
     </span></span>
 <span><span>To load the of COMSOL load the module</span></span>
-``` {.prettyprint .lang-sh}
+```bash
 $ module load comsol
 ```
 <span><span>By default the </span></span><span><span>**EDU
 variant**</span></span><span><span> will be loaded. If user needs other
 version or variant, load the particular version. To obtain the list of
 available versions use</span></span>
-``` {.prettyprint .lang-sh}
+```bash
 $ module avail comsol
 ```
 <span><span>If user needs to prepare COMSOL jobs in the interactive mode
@@ -67,7 +68,7 @@ it is recommend to use COMSOL on the compute nodes via PBS Pro
 scheduler. In order run the COMSOL Desktop GUI on Windows is recommended
 to use the [Virtual Network Computing
 (VNC)](resolveuid/11e53ad0d2fd4c5187537f4baeedff33).</span></span>
-``` {.prettyprint .lang-sh}
+```bash
 $ xhost +
 $ qsub -I -X -A PROJECT_ID -q qprod -l select=1:ncpus=16
 $ module load comsol
@@ -76,7 +77,7 @@ $ comsol
 <span><span>To run COMSOL in batch mode, without the COMSOL Desktop GUI
 environment, user can utilized the default (comsol.pbs) job script and
 execute it via the qsub command.</span></span>
-``` {.prettyprint .lang-sh}
+```bash
 #!/bin/bash
 #PBS -l select=3:ncpus=16
 #PBS -q qprod
@@ -92,7 +93,7 @@ text_nodes < cat $PBS_NODEFILE
 module load comsol
 # module load comsol/43b-COM
 ntask=$(wc -l $PBS_NODEFILE)
-comsol -nn ${ntask} batch -configuration /tmp –mpiarg –rmk –mpiarg pbs -tmpdir /scratch/$USER/ -inputfile name_input_f.mph -outputfile name_output_f.mph -batchlog name_log_f.log
+comsol -nn ${ntask} batch -configuration /tmp -mpiarg -rmk -mpiarg pbs -tmpdir /scratch/$USER/ -inputfile name_input_f.mph -outputfile name_output_f.mph -batchlog name_log_f.log
 ```
 <span><span>Working directory has to be created before sending the
 (comsol.pbs) job script into the queue. Input file (name_input_f.mph)
@@ -100,7 +101,7 @@ has to be in working directory or full path to input file has to be
 specified. The appropriate path to the temp directory of the job has to
 be set by command option (-tmpdir).</span></span>
 LiveLink™* *for MATLAB^®^
---
+-------------------------
 <span><span>COMSOL is the software package for the numerical solution of
 the partial differential equations. LiveLink for MATLAB allows
 connection to the
@@ -120,7 +121,7 @@ of LiveLink for MATLAB (please see the [ISV
 Licenses](https://docs.it4i.cz/anselm-cluster-documentation/software/isv_licenses))
 are available. Following example shows how to start COMSOL model from
 MATLAB via LiveLink in the interactive mode.</span></span>
-``` {.prettyprint .lang-sh}
+```bash
 $ xhost +
 $ qsub -I -X -A PROJECT_ID -q qexp -l select=1:ncpus=16
 $ module load matlab
@@ -133,7 +134,7 @@ requested and this information is not requested again.</span></span>
 <span><span>To run LiveLink for MATLAB in batch mode with
 (comsol_matlab.pbs) job script you can utilize/modify the following
 script and execute it via the qsub command.</span></span>
-``` {.prettyprint .lang-sh}
+```bash
 #!/bin/bash
 #PBS -l select=3:ncpus=16
 #PBS -q qprod
@@ -149,7 +150,7 @@ text_nodes < cat $PBS_NODEFILE
 module load matlab
 module load comsol/43b-EDU
 ntask=$(wc -l $PBS_NODEFILE)
-comsol -nn ${ntask} server -configuration /tmp -mpiarg -rmk -mpiarg pbs -tmpdir /scratch/$USER &
+comsol -nn ${ntask} server -configuration /tmp -mpiarg -rmk -mpiarg pbs -tmpdir /scratch/$USER &
 cd /apps/engineering/comsol/comsol43b/mli
 matlab -nodesktop -nosplash -r "mphstart; addpath /scratch/$USER; test_job"
 ```
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/comsol/comsol-multiphysics.md b/docs.it4i.cz/anselm-cluster-documentation/software/comsol/comsol-multiphysics.md
index 7364eb191f81b7cebaf64e46bd505365a8842ca6..9a4c4b0b0ac0f68d8949c1c1c0461e4dd94e1b1c 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/comsol/comsol-multiphysics.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/comsol/comsol-multiphysics.md
@@ -1,9 +1,10 @@
 COMSOL Multiphysics® 
 ====================
+
   
 <span><span>Introduction
 </span></span>
---
+-------------------------
 <span><span>[COMSOL](http://www.comsol.com)</span></span><span><span>
 is a powerful environment for modelling and solving various engineering
 and scientific problems based on partial differential equations. COMSOL
@@ -52,14 +53,14 @@ stable version. There are two variants of the release:</span></span>
     soon</span>.</span></span>
     </span></span>
 <span><span>To load the of COMSOL load the module</span></span>
-``` {.prettyprint .lang-sh}
+```bash
 $ module load comsol
 ```
 <span><span>By default the </span></span><span><span>**EDU
 variant**</span></span><span><span> will be loaded. If user needs other
 version or variant, load the particular version. To obtain the list of
 available versions use</span></span>
-``` {.prettyprint .lang-sh}
+```bash
 $ module avail comsol
 ```
 <span><span>If user needs to prepare COMSOL jobs in the interactive mode
@@ -67,7 +68,7 @@ it is recommend to use COMSOL on the compute nodes via PBS Pro
 scheduler. In order run the COMSOL Desktop GUI on Windows is recommended
 to use the [Virtual Network Computing
 (VNC)](resolveuid/11e53ad0d2fd4c5187537f4baeedff33).</span></span>
-``` {.prettyprint .lang-sh}
+```bash
 $ xhost +
 $ qsub -I -X -A PROJECT_ID -q qprod -l select=1:ncpus=16
 $ module load comsol
@@ -76,7 +77,7 @@ $ comsol
 <span><span>To run COMSOL in batch mode, without the COMSOL Desktop GUI
 environment, user can utilized the default (comsol.pbs) job script and
 execute it via the qsub command.</span></span>
-``` {.prettyprint .lang-sh}
+```bash
 #!/bin/bash
 #PBS -l select=3:ncpus=16
 #PBS -q qprod
@@ -92,7 +93,7 @@ text_nodes < cat $PBS_NODEFILE
 module load comsol
 # module load comsol/43b-COM
 ntask=$(wc -l $PBS_NODEFILE)
-comsol -nn ${ntask} batch -configuration /tmp –mpiarg –rmk –mpiarg pbs -tmpdir /scratch/$USER/ -inputfile name_input_f.mph -outputfile name_output_f.mph -batchlog name_log_f.log
+comsol -nn ${ntask} batch -configuration /tmp -mpiarg -rmk -mpiarg pbs -tmpdir /scratch/$USER/ -inputfile name_input_f.mph -outputfile name_output_f.mph -batchlog name_log_f.log
 ```
 <span><span>Working directory has to be created before sending the
 (comsol.pbs) job script into the queue. Input file (name_input_f.mph)
@@ -100,7 +101,7 @@ has to be in working directory or full path to input file has to be
 specified. The appropriate path to the temp directory of the job has to
 be set by command option (-tmpdir).</span></span>
 LiveLink™* *for MATLAB^®^
---
+-------------------------
 <span><span>COMSOL is the software package for the numerical solution of
 the partial differential equations. LiveLink for MATLAB allows
 connection to the
@@ -120,7 +121,7 @@ of LiveLink for MATLAB (please see the [ISV
 Licenses](https://docs.it4i.cz/anselm-cluster-documentation/software/isv_licenses))
 are available. Following example shows how to start COMSOL model from
 MATLAB via LiveLink in the interactive mode.</span></span>
-``` {.prettyprint .lang-sh}
+```bash
 $ xhost +
 $ qsub -I -X -A PROJECT_ID -q qexp -l select=1:ncpus=16
 $ module load matlab
@@ -133,7 +134,7 @@ requested and this information is not requested again.</span></span>
 <span><span>To run LiveLink for MATLAB in batch mode with
 (comsol_matlab.pbs) job script you can utilize/modify the following
 script and execute it via the qsub command.</span></span>
-``` {.prettyprint .lang-sh}
+```bash
 #!/bin/bash
 #PBS -l select=3:ncpus=16
 #PBS -q qprod
@@ -149,7 +150,7 @@ text_nodes < cat $PBS_NODEFILE
 module load matlab
 module load comsol/43b-EDU
 ntask=$(wc -l $PBS_NODEFILE)
-comsol -nn ${ntask} server -configuration /tmp -mpiarg -rmk -mpiarg pbs -tmpdir /scratch/$USER &
+comsol -nn ${ntask} server -configuration /tmp -mpiarg -rmk -mpiarg pbs -tmpdir /scratch/$USER &
 cd /apps/engineering/comsol/comsol43b/mli
 matlab -nodesktop -nosplash -r "mphstart; addpath /scratch/$USER; test_job"
 ```
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers.1.md b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers.1.md
index 07f6e5ff872baf1df1c5ad1d4ae89a44a0080de5..eee5b70eab354088d337bdf368762f220866a505 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers.1.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers.1.md
@@ -1,5 +1,6 @@
 Debuggers and profilers summary 
 ===============================
+
   
 Introduction
 ------------
@@ -21,6 +22,7 @@ Read more at the [Intel
 Debugger](https://docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-debugger)
 page.
 Allinea Forge (DDT/MAP)
+-----------------------
 Allinea DDT, is a commercial debugger primarily for debugging parallel
 MPI or OpenMP programs. It also has a support for GPU (CUDA) and Intel
 Xeon Phi accelerators. DDT provides all the standard debugging features
@@ -34,7 +36,7 @@ Read more at the [Allinea
 DDT](https://docs.it4i.cz/anselm-cluster-documentation/software/debuggers/allinea-ddt)
 page.
 Allinea Performance Reports
-----
+---------------------------
 Allinea Performance Reports characterize the performance of HPC
 application runs. After executing your application through the tool, a
 synthetic HTML report is generated automatically, containing information
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/allinea-ddt.md b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
index e36f9ca3e98e9a56193cec41fd8dce738d94d3da..45874f5214b6dd1f779686725effeeba691d6064 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/allinea-ddt.md
@@ -1,5 +1,6 @@
 Allinea Forge (DDT,MAP) 
 =======================
+
   
 Allinea Forge consist of two tools - debugger DDT and profiler MAP.
 Allinea DDT, is a commercial debugger primarily for debugging parallel
@@ -12,7 +13,7 @@ implementation.
 Allinea MAP is a profiler for C/C++/Fortran HPC codes. It is designed
 for profiling parallel code, which uses pthreads, OpenMP or MPI.
 License and Limitations for Anselm Users
------------------
+----------------------------------------
 On Anselm users can debug OpenMP or MPI code that runs up to 64 parallel
 processes. In case of debugging GPU or Xeon Phi accelerated codes the
 limit is 8 accelerators. These limitation means that:
@@ -22,7 +23,7 @@ In case of debugging on accelerators:
 -   1 user can debug on up to 8 accelerators, or 
 -   8 users can debug on single accelerator. 
 Compiling Code to run with DDT
--------
+------------------------------
 ### Modules
 Load all necessary modules to compile the code. For example: 
     $ module load intel
@@ -43,6 +44,7 @@ GNU and INTEL C/C++ and Fortran compilers.
 **-O0** Suppress all optimizations.
  
 Starting a Job with DDT
+-----------------------
 Be sure to log in with an <span class="internal-link">X window
 forwarding</span> enabled. This could mean using the -X in the ssh 
     $ ssh -X username@anselm.it4i.cz 
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md
index bdae63a557958b04aa504e2feca9db309b7c0558..d81c5248f9c86ec3c28f9f7e69a87c772dd6994f 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md
@@ -1,6 +1,7 @@
 Allinea Performance Reports 
 ===========================
 quick application profiling
+
   
 Introduction
 ------------
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md
index 57d8f1b4065ce3a983da772618cef9c1eb4614ef..4732b9d6c77626f4f93652af88ddfc4a0b527ff5 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md
@@ -9,7 +9,7 @@ The difference between PCM and PAPI is that PCM supports only Intel
 hardware, but PCM can monitor also uncore metrics, like memory
 controllers and <span>QuickPath Interconnect links.</span>
 <span>Installed version</span>
--------
+------------------------------
 Currently installed version 2.6. To load the
 [module](https://docs.it4i.cz/anselm-cluster-documentation/environment-and-modules),
 issue :
@@ -25,13 +25,13 @@ Specify either a delay of updates in seconds or an external program to
 monitor. If you get an error about PMU in use, respond "y" and relaunch
 the program.
 Sample output:
-    ----------------||----------------
+    ---------------------------------------||---------------------------------------
     --             Socket 0              --||--             Socket 1              --
-    ----------------||----------------
-    ----------------||----------------
-    ----------------||----------------
+    ---------------------------------------||---------------------------------------
+    ---------------------------------------||---------------------------------------
+    ---------------------------------------||---------------------------------------
     --   Memory Performance Monitoring   --||--   Memory Performance Monitoring   --
-    ----------------||----------------
+    ---------------------------------------||---------------------------------------
     --  Mem Ch 0Reads (MB/s)   2.44  --||--  Mem Ch 0Reads (MB/s)   0.26  --
     --            Writes(MB/s)   2.16  --||--            Writes(MB/s)   0.08  --
     --  Mem Ch 1Reads (MB/s)   0.35  --||--  Mem Ch 1Reads (MB/s)   0.78  --
@@ -44,11 +44,11 @@ Sample output:
     -- NODE0 Mem Write (MB/s)    2.55  --||-- NODE1 Mem Write (MB/s)    0.88  --
     -- NODE0 P. Write (T/s)     31506  --||-- NODE1 P. Write (T/s)      9099  --
     -- NODE0 Memory (MB/s)       6.02  --||-- NODE1 Memory (MB/s)       2.33  --
-    ----------------||----------------
+    ---------------------------------------||---------------------------------------
     --                   System Read Throughput(MB/s)     4.93                  --
     --                  System Write Throughput(MB/s)     3.43                  --
     --                 System Memory Throughput(MB/s)     8.35                  --
-    ----------------||---------------- 
+    ---------------------------------------||--------------------------------------- 
 ### pcm-msr
 Command <span class="monospace">pcm-msr.x</span> can be used to
 read/write model specific registers of the CPU.
@@ -117,10 +117,10 @@ Sample output :
       13    1     0.00   0.22   0.00    1.26     336      581      0.42    0.04    0.44    0.06     N/A     N/A     69
       14    1     0.00   0.22   0.00    1.25     314      565      0.44    0.06    0.43    0.07     N/A     N/A     69
       15    1     0.00   0.29   0.00    1.19    2815     6926      0.59    0.39    0.29    0.08     N/A     N/A     69
-    
+    -------------------------------------------------------------------------------------------------------------------
      SKT    0     0.00   0.46   0.00    0.79      11 K     21 K    0.47    0.10    0.38    0.07    0.00    0.00     65
      SKT    1     0.29   1.79   0.16    1.29     190 K     15 M    0.99    0.59    0.05    0.70    0.01    0.01     61
-    
+    -------------------------------------------------------------------------------------------------------------------
      TOTAL  *     0.14   1.78   0.08    1.28     201 K     15 M    0.99    0.59    0.05    0.70    0.01    0.01     N/A
      Instructions retired1345 M ; Active cycles 755 M ; Time (TSC) 582 Mticks ; C0 (active,non-halted) core residency6.30 %
      C1 core residency0.14 %; C3 core residency0.20 %; C6 core residency0.00 %; C7 core residency93.36 %;
@@ -129,27 +129,27 @@ Sample output :
      Instructions per nominal CPU cycle0.14 => corresponds to 3.60 % core utilization over time interval
     Intel(r) QPI data traffic estimation in bytes (data traffic coming to CPU/socket through QPI links):
                    QPI0     QPI1    |  QPI0   QPI1  
-    --
+    ----------------------------------------------------------------------------------------------
      SKT    0        0        0     |    0%     0%   
      SKT    1        0        0     |    0%     0%   
-    --
+    ----------------------------------------------------------------------------------------------
     Total QPI incoming data traffic   0       QPI data traffic/Memory controller traffic0.00
     Intel(r) QPI traffic estimation in bytes (data and non-data traffic outgoing from CPU/socket through QPI links):
                    QPI0     QPI1    |  QPI0   QPI1  
-    --
+    ----------------------------------------------------------------------------------------------
      SKT    0        0        0     |    0%     0%   
      SKT    1        0        0     |    0%     0%   
-    --
+    ----------------------------------------------------------------------------------------------
     Total QPI outgoing data and non-data traffic   0  
-    --
+    ----------------------------------------------------------------------------------------------
      SKT    0 package consumed 4.06 Joules
      SKT    1 package consumed 9.40 Joules
-    --
+    ----------------------------------------------------------------------------------------------
      TOTAL                   13.46 Joules
-    --
+    ----------------------------------------------------------------------------------------------
      SKT    0 DIMMs consumed 4.18 Joules
      SKT    1 DIMMs consumed 4.28 Joules
-    --
+    ----------------------------------------------------------------------------------------------
      TOTAL                 8.47 Joules
     Cleaning up
  
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
index d52a512bd6d478fc366ede166a30384c72db5443..e7d1bac74ea9b7eff41e2fde19a0cf6e744f9f9b 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
@@ -1,5 +1,6 @@
 Intel VTune Amplifier 
 =====================
+
   
 Introduction
 ------------
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/papi.md b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/papi.md
index a7dd24df39c17f5cba66da984927f6d19501e837..79d5e3e430c57c688066734d799b9a1e53cbbf73 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/papi.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/papi.md
@@ -1,5 +1,6 @@
 PAPI 
 ====
+
   
 Introduction
 ------------
@@ -32,7 +33,7 @@ column indicated whether the preset event is available on the current
 CPU.
     $ papi_avail
     Available events and hardware information.
-    -----------
+    --------------------------------------------------------------------------------
     PAPI Version 5.3.2.0
     Vendor string and code GenuineIntel (1)
     Model string and code Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz (45)
@@ -49,7 +50,7 @@ CPU.
     Running in a VM no
     Number Hardware Counters 11
     Max Multiplex Counters 32
-    -----------
+    --------------------------------------------------------------------------------
     Name Code Avail Deriv Description (Note)
     PAPI_L1_DCM 0x80000000 Yes No Level 1 data cache misses
     PAPI_L1_ICM 0x80000001 Yes No Level 1 instruction cache misses
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/scalasca.md b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/scalasca.md
index b9db567ced9160a5fe4ed7c16b113eb13bbcfe97..de5b33926daf8da281537a02a5c29fe3c8552ed7 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/scalasca.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/scalasca.md
@@ -1,7 +1,7 @@
 Scalasca 
 ========
 <span>Introduction</span>
---
+-------------------------
 [Scalasca](http://www.scalasca.org/) is a software tool
 that supports the performance optimization of parallel programs by
 measuring and analyzing their runtime behavior. The analysis identifies
@@ -67,11 +67,11 @@ and
 modules loaded. The analysis is done in two steps, first, the data is
 preprocessed and then CUBE GUI tool is launched.
 To launch the analysis, run :
-``` {.fragment}
+```bash
 scalasca -examine [options] <experiment_directory>
 ```
 If you do not wish to launch the GUI tool, use the "-s" option :
-``` {.fragment}
+```bash
 scalasca -examine -s <experiment_directory>
 ```
 Alternatively you can open CUBE and load the data directly from here.
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/score-p.md b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/score-p.md
index c9a5716413433d3d73384855fd9a738c3771528d..d39b68ac1d0816fc181747d42631d3dfe2f65a79 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/score-p.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/score-p.md
@@ -36,13 +36,13 @@ regions of your code, consider using the manual instrumentation methods.
 To use automated instrumentation, simply prepend <span
 class="monospace">scorep</span> to your compilation command. For
 example, replace : 
-``` {.fragment}
+```bash
 $ mpif90 -c foo.f90
 $ mpif90 -c bar.f90
 $ mpif90 -o myapp foo.o bar.o
 ```
 with :
-``` {.fragment}
+```bash
 $ scorep  mpif90 -c foo.f90
 $ scorep  mpif90 -c bar.f90
 $ scorep  mpif90 -o myapp foo.o bar.o
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/summary.md b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/summary.md
index 07f6e5ff872baf1df1c5ad1d4ae89a44a0080de5..eee5b70eab354088d337bdf368762f220866a505 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/summary.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/summary.md
@@ -1,5 +1,6 @@
 Debuggers and profilers summary 
 ===============================
+
   
 Introduction
 ------------
@@ -21,6 +22,7 @@ Read more at the [Intel
 Debugger](https://docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-debugger)
 page.
 Allinea Forge (DDT/MAP)
+-----------------------
 Allinea DDT, is a commercial debugger primarily for debugging parallel
 MPI or OpenMP programs. It also has a support for GPU (CUDA) and Intel
 Xeon Phi accelerators. DDT provides all the standard debugging features
@@ -34,7 +36,7 @@ Read more at the [Allinea
 DDT](https://docs.it4i.cz/anselm-cluster-documentation/software/debuggers/allinea-ddt)
 page.
 Allinea Performance Reports
-----
+---------------------------
 Allinea Performance Reports characterize the performance of HPC
 application runs. After executing your application through the tool, a
 synthetic HTML report is generated automatically, containing information
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/total-view.md b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/total-view.md
index 2ca6487cfa891f3bc61b145d79cc2b462d60c6f6..f001c125937316b2feb92904b4b012b770679e42 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/total-view.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/total-view.md
@@ -3,7 +3,7 @@ Total View
 TotalView is a GUI-based source code multi-process, multi-thread
 debugger.
 License and Limitations for Anselm Users
------------------
+----------------------------------------
 On Anselm users can debug OpenMP or MPI code that runs up to 64 parallel
 processes. These limitation means that:
     1 user can debug up 64 processes, or
@@ -12,14 +12,14 @@ Debugging of GPU accelerated codes is also supported.
 You can check the status of the licenses here:
     cat /apps/user/licenses/totalview_features_state.txt
     # totalview
-    # ---
+    # -------------------------------------------------
     # FEATURE                       TOTAL   USED  AVAIL
-    # ---
+    # -------------------------------------------------
     TotalView_Team                     64      0     64
     Replay                             64      0     64
     CUDA                               64      0     64
 Compiling Code to run with TotalView
--------------
+------------------------------------
 ### Modules
 Load all necessary modules to compile the code. For example:
     module load intel
@@ -36,7 +36,7 @@ includes even more debugging information. This option is available for
 GNU and INTEL C/C++ and Fortran compilers.
 **-O0** Suppress all optimizations.
 Starting a Job with TotalView
-------
+-----------------------------
 Be sure to log in with an X window forwarding enabled. This could mean
 using the -X in the ssh: 
     ssh -X username@anselm.it4i.cz 
@@ -57,8 +57,8 @@ to setup your TotalView environment: 
 **Please note:** To be able to run parallel debugging procedure from the
 command line without stopping the debugger in the mpiexec source code
 you have to add the following function to your **~/.tvdrc** file:
-    proc mpi_auto_run_starter {loaded_id} {
-        set starter_programs {mpirun mpiexec orterun}
+    proc mpi_auto_run_starter {loaded_id} {
+        set starter_programs {mpirun mpiexec orterun}
         set executable_name [TV::symbol get $loaded_id full_pathname]
         set file_component [file tail $executable_name]
         if {[lsearch -exact $starter_programs $file_component] != -1} {
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/valgrind.md b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/valgrind.md
index b0e22272d083cd2fd93a7cf41bc3e094162918cc..fde929dbb06b17af31c9659fd0a64a79e89f6e9e 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/valgrind.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/valgrind.md
@@ -134,7 +134,7 @@ with <span class="monospace">--leak-check=full</span> option :
 Now we can see that the memory leak is due to the <span
 class="monospace">malloc()</span> at line 6.
 <span>Usage with MPI</span>
-----
+---------------------------
 Although Valgrind is not primarily a parallel debugger, it can be used
 to debug parallel applications as well. When launching your parallel
 applications, prepend the valgrind command. For example :
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/vampir.md b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/vampir.md
index 15d35afbb16fa342102847726baf14e7d1dd8f70..8a390da33d956147c437a8b944f685bbc114bc88 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/vampir.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/debuggers/vampir.md
@@ -7,7 +7,7 @@ functionality to collect traces, you need to use a trace collection tool
 as [Score-P](https://docs.it4i.cz/salomon/software/debuggers/score-p))
 first to collect the traces.
 ![Vampir screenshot](https://docs.it4i.cz/salomon/software/debuggers/Snmekobrazovky20160708v12.33.35.png/@@images/42d90ce5-8468-4edb-94bb-4009853d9f65.png "Vampir screenshot")
-------
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Installed versions
 ------------------
 Version 8.5.0 is currently installed as module <span
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/gpi2.md b/docs.it4i.cz/anselm-cluster-documentation/software/gpi2.md
index e4992de3684e26e66ead401dc1f46793f45441ad..f7c31802f486f2f071973ae685f857eef2c9ae41 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/gpi2.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/gpi2.md
@@ -1,6 +1,7 @@
 GPI-2 
 =====
 A library that implements the GASPI specification
+
   
 Introduction
 ------------
@@ -38,6 +39,7 @@ infinband communication library ibverbs.
     $ module load gpi2
     $ gcc myprog.c -o myprog.x -Wl,-rpath=$LIBRARY_PATH -lGPI2 -libverbs
 Running the GPI-2 codes
+-----------------------
 <span>gaspi_run</span>
 gaspi_run starts the GPI-2 application
 The gaspi_run utility is used to start and run GPI-2 applications:
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite.md b/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite.md
index e387bae02e0c30aae5145d01995cb806bbeffb10..d57899b1d0484b1e16abb0c2cfee09cf320c6b9d 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite.md
@@ -1,10 +1,11 @@
 Intel Parallel Studio 
 =====================
+
   
 The Anselm cluster provides the following elements of the Intel Parallel
 Studio XE
   Intel Parallel Studio XE
-  ---
+  -------------------------------------------------
   Intel Compilers
   Intel Debugger
   Intel MKL Library
@@ -35,7 +36,7 @@ Read more at the [Intel
 Debugger](https://docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-debugger)
 page.
 Intel Math Kernel Library
---
+-------------------------
 Intel Math Kernel Library (Intel MKL) is a library of math kernel
 subroutines, extensively threaded and optimized for maximum performance.
 Intel MKL unites and provides these basic components: BLAS, LAPACK,
@@ -46,7 +47,7 @@ Read more at the [Intel
 MKL](https://docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-mkl)
 page.
 Intel Integrated Performance Primitives
-----------------
+---------------------------------------
 Intel Integrated Performance Primitives, version 7.1.1, compiled for AVX
 is available via module ipp. The IPP is a library of highly optimized
 algorithmic building blocks for media and data applications. This
@@ -58,7 +59,7 @@ Read more at the [Intel
 IPP](https://docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives)
 page.
 Intel Threading Building Blocks
---------
+-------------------------------
 Intel Threading Building Blocks (Intel TBB) is a library that supports
 scalable parallel programming using standard ISO C++ code. It does not
 require special languages or compilers. It is designed to promote
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-compilers.md b/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-compilers.md
index 67e32b6aaa6cf13e77d66f70e8f5ddd31aa92714..8dc1704c18f0838cc0d917dfbdbb23dd8ba30137 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-compilers.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-compilers.md
@@ -1,5 +1,6 @@
 Intel Compilers 
 ===============
+
   
 The Intel compilers version 13.1.1 are available via module intel. The
 compilers include the icc C and C++ compiler and the ifort Fortran
@@ -25,7 +26,7 @@ parallelization by the **-openmp** compiler switch.
 Read more at
 <http://software.intel.com/sites/products/documentation/doclib/stdxe/2013/composerxe/compiler/cpp-lin/index.htm>
 Sandy Bridge/Haswell binary compatibility
-------------------
+-----------------------------------------
 Anselm nodes are currently equipped with Sandy Bridge CPUs, while
 Salomon will use Haswell architecture. <span>The new processors are
 backward compatible with the Sandy Bridge nodes, so all programs that
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-debugger.md b/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-debugger.md
index ce6bd5acf2a3efc049b1336fc2e191e7f5815e49..a843dc0cbcd755f995a7f468712f6000a7a75f79 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-debugger.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-debugger.md
@@ -1,8 +1,9 @@
 Intel Debugger 
 ==============
+
   
 Debugging serial applications
-------
+-----------------------------
 The Intel debugger version 13.0 is available via module intel. The
 debugger works for applications compiled with the C and C++ compiler and the
 ifort Fortran 77/90/95 compiler. The debugger provides a Java GUI
@@ -30,7 +31,7 @@ myprog.c with debugging options -O0 -g and run the idb debugger
 interactively on the myprog.x executable. The GUI access is via X11 port
 forwarding provided by the PBS workload manager.
 Debugging parallel applications
---------
+-------------------------------
 Intel debugger is capable of debugging multithreaded and MPI parallel
 programs as well.
 ### Small number of MPI ranks
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md b/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md
index 4f2eeb0c93cb6fb9b81311703649b2f3c7f0ede8..1d97531adaf3bbe6480650f084f26b5799b3d792 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives.md
@@ -1,8 +1,9 @@
 Intel IPP 
 =========
+
   
 Intel Integrated Performance Primitives
-----------------
+---------------------------------------
 Intel Integrated Performance Primitives, version 7.1.1, compiled for AVX
 vector instructions is available via module ipp. The IPP is a very rich
 library of highly optimized algorithmic building blocks for media and
@@ -62,7 +63,7 @@ executable
     $ module load ipp
     $ icc testipp.c -o testipp.x -Wl,-rpath=$LIBRARY_PATH -lippi -lipps -lippcore
 Code samples and documentation
--------
+------------------------------
 Intel provides a number of [Code Samples for
 IPP](https://software.intel.com/en-us/articles/code-samples-for-intel-integrated-performance-primitives-library),
 illustrating use of IPP.
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-mkl.md b/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-mkl.md
index ea084f26977a515c746dd04567da9540d28d6c6d..737295ce3d8094f03ba619b89cfd219cf1d22191 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-mkl.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-mkl.md
@@ -1,8 +1,9 @@
 Intel MKL 
 =========
+
   
 Intel Math Kernel Library
---
+-------------------------
 Intel Math Kernel Library (Intel MKL) is a library of math kernel
 subroutines, extensively threaded and optimized for maximum performance.
 Intel MKL provides these basic math kernels:
@@ -61,7 +62,7 @@ type (necessary for indexing large arrays, with more than 2^31^-1
 elements), whereas the LP64 libraries index arrays with the 32-bit
 integer type.
   Interface   Integer type
-  ----------- -
+  ----------- -----------------------------------------------
   LP64        32-bit, int, integer(kind=4), MPI_INT
   ILP64       64-bit, long int, integer(kind=8), MPI_INT64
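The practical difference between the interfaces is the 2^31^-1 element limit of the LP64 indices; the boundary can be checked with plain bash arithmetic (shown only for illustration):

```shell
# Largest array index the 32-bit LP64 interface can express:
echo $(( (1 << 31) - 1 ))
# Arrays with more elements than this require the ILP64 interface.
```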
 ### Linking
@@ -124,7 +125,7 @@ LP64 interface to threaded MKL and Intel OMP threads implementation.
 In this example, we compile, link and run the cblas_dgemm  example,
 using LP64 interface to threaded MKL and gnu OMP threads implementation.
 MKL and MIC accelerators
--
+------------------------
 The MKL is capable of automatically offloading the computations to the MIC
 accelerator. See section [Intel Xeon
 Phi](https://docs.it4i.cz/anselm-cluster-documentation/software/intel-xeon-phi)
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-parallel-studio-introduction.md b/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-parallel-studio-introduction.md
index e387bae02e0c30aae5145d01995cb806bbeffb10..d57899b1d0484b1e16abb0c2cfee09cf320c6b9d 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-parallel-studio-introduction.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-parallel-studio-introduction.md
@@ -1,10 +1,11 @@
 Intel Parallel Studio 
 =====================
+
   
 The Anselm cluster provides the following elements of the Intel Parallel
 Studio XE
   Intel Parallel Studio XE
-  ---
+  -------------------------------------------------
   Intel Compilers
   Intel Debugger
   Intel MKL Library
@@ -35,7 +36,7 @@ Read more at the [Intel
 Debugger](https://docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-debugger)
 page.
 Intel Math Kernel Library
---
+-------------------------
 Intel Math Kernel Library (Intel MKL) is a library of math kernel
 subroutines, extensively threaded and optimized for maximum performance.
 Intel MKL unites and provides these basic components: BLAS, LAPACK,
@@ -46,7 +47,7 @@ Read more at the [Intel
 MKL](https://docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-mkl)
 page.
 Intel Integrated Performance Primitives
-----------------
+---------------------------------------
 Intel Integrated Performance Primitives, version 7.1.1, compiled for AVX
 is available via module ipp. The IPP is a library of highly optimized
 algorithmic building blocks for media and data applications. This
@@ -58,7 +59,7 @@ Read more at the [Intel
 IPP](https://docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives)
 page.
 Intel Threading Building Blocks
---------
+-------------------------------
 Intel Threading Building Blocks (Intel TBB) is a library that supports
 scalable parallel programming using standard ISO C++ code. It does not
 require special languages or compilers. It is designed to promote
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-tbb.md b/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-tbb.md
index 093b2b3e1a0af3377b99051bf8d4c0a708beb2f6..1c22dcb86e4100cb25bcf5f24f7f619b6db9cef9 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-tbb.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-tbb.md
@@ -1,8 +1,9 @@
 Intel TBB 
 =========
+
   
 Intel Threading Building Blocks
---------
+-------------------------------
 Intel Threading Building Blocks (Intel TBB) is a library that supports
 scalable parallel programming using standard ISO C++ code. It does not
 require special languages or compilers.  To use the library, you specify
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/intel-xeon-phi.md b/docs.it4i.cz/anselm-cluster-documentation/software/intel-xeon-phi.md
index 74fa9d5388346178c96416d70733c8aa0aecb733..15c82f520cee42819eb6c2100b140a5116aab05a 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/intel-xeon-phi.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/intel-xeon-phi.md
@@ -1,12 +1,13 @@
 Intel Xeon Phi 
 ==============
 A guide to Intel Xeon Phi usage
+
   
 Intel Xeon Phi can be programmed in several modes. The default mode on
 Anselm is offload mode, but all modes described in this document are
 supported.
 Intel Utilities for Xeon Phi
------
+----------------------------
 To get access to a compute node with Intel Xeon Phi accelerator, use the
 PBS interactive session
     $ qsub -I -q qmic -A NONE-0-0
@@ -180,7 +181,7 @@ Performance optimization
   xhost - FOR HOST ONLY - to generate AVX (Advanced Vector Extensions)
 instructions.
 Automatic Offload using Intel MKL Library
-------------------
+-----------------------------------------
 Intel MKL includes an Automatic Offload (AO) feature that enables
 computationally intensive MKL functions called in user code to benefit
 from attached Intel Xeon Phi coprocessors automatically and
@@ -684,11 +685,11 @@ accelerators on different nodes uses 1Gb Ethernet only.
 PBS also generates a set of node-files that can be used instead of
 manually creating a new one every time. Three node-files are generated:
 **Host only node-file:**
- - /lscratch/${PBS_JOBID}/nodefile-cn
+ - /lscratch/${PBS_JOBID}/nodefile-cn
 **MIC only node-file**:
- - /lscratch/${PBS_JOBID}/nodefile-mic
+ - /lscratch/$/nodefile-mic
 **Host and MIC node-file**:
- - /lscratch/${PBS_JOBID}/nodefile-mix
+ - /lscratch/${PBS_JOBID}/nodefile-mix
 Please note each host or accelerator is listed only once per file. The user has
 to specify how many processes should be executed per node using the "-n"
 parameter of the mpirun command.
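As a sketch of how the per-job node-file paths above expand (the job id below is hypothetical; inside a real job, PBS sets PBS_JOBID automatically):

```shell
# Hypothetical job id, for illustration only; PBS provides the real value.
PBS_JOBID=123456.dm2
# The host+MIC node-file generated for this job would then live at:
echo "/lscratch/${PBS_JOBID}/nodefile-mix"
```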
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/isv_licenses.md b/docs.it4i.cz/anselm-cluster-documentation/software/isv_licenses.md
index 70c075bf959a429b4bd023484ad27dee3ed69a03..7e2a7763507f73a3b1ddd28e0d1a54f4815f827e 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/isv_licenses.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/isv_licenses.md
@@ -1,6 +1,7 @@
 ISV Licenses 
 ============
 A guide to managing Independent Software Vendor licences
+
   
 The Anselm cluster also has commercial software
 applications installed, known as ISV (Independent Software Vendor) software, which are
@@ -16,7 +17,7 @@ and also for commercial purposes, then there are always two separate
 versions maintained and suffix "edu" is used in the name of the
 non-commercial version.
 Overview of the licenses usage
--------
+------------------------------
 The overview is generated every minute and is accessible from the web or
 the command line interface.
 ### Web interface
@@ -30,7 +31,7 @@ information about the name, number of available (purchased/licensed),
 number of used and number of free license features. The text files are
 accessible from the Anselm command prompt.
   Product      File with license state                               Note
-  ------------ ------- ---------------------
+  ------------ ----------------------------------------------------- ---------------------
   ansys        /apps/user/licenses/ansys_features_state.txt        Commercial
   comsol       /apps/user/licenses/comsol_features_state.txt       Commercial
   comsol-edu   /apps/user/licenses/comsol-edu_features_state.txt   Non-commercial only
@@ -42,9 +43,9 @@ the file via a script.
 Example of the Commercial Matlab license state:
     $ cat /apps/user/licenses/matlab_features_state.txt
     # matlab
-    # ---
+    # -------------------------------------------------
     # FEATURE                       TOTAL   USED  AVAIL
-    # ---
+    # -------------------------------------------------
     MATLAB                              1      1      0
     SIMULINK                            1      0      1
     Curve_Fitting_Toolbox               1      0      1
@@ -57,7 +58,7 @@ Example of the Commercial Matlab license state:
     Signal_Toolbox                      1      0      1
     Statistics_Toolbox                  1      0      1
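Since the state files follow the fixed FEATURE/TOTAL/USED/AVAIL layout shown above, the number of free licenses for a feature can be extracted with a short awk sketch (the embedded sample lines stand in for the real file):

```shell
# Sample lines in the format of the license state files shown above
state='MATLAB                              1      1      0
Statistics_Toolbox                  1      0      1'
# Print the AVAIL column (4th field) for one feature
printf '%s\n' "$state" | awk '$1 == "Statistics_Toolbox" { print $4 }'
```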
 License tracking in PBS Pro scheduler and users usage
--------
+-----------------------------------------------------
 Each feature of each license is accounted and checked by the scheduler
 of PBS Pro. If you ask for certain licenses, the scheduler won't start
 the job until the requested licenses are free (available). This prevents to
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/java.md b/docs.it4i.cz/anselm-cluster-documentation/software/java.md
index 88c1c3746d34f2401ce6d816e0a2935f6b50ef63..87313f27a1ee003ad946fa038a49944dd9b01796 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/java.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/java.md
@@ -1,6 +1,7 @@
 Java 
 ====
 Java on ANSELM
+
   
 Java is available on the Anselm cluster. Activate Java by loading the java
 module
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1.md b/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1.md
index f44150a184df1262a5ddaf82e065dca4a97df385..86df2ccbc62084ce59a545f60d19fadc731d6a81 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1.md
@@ -1,8 +1,9 @@
 MPI 
 ===
+
   
 Setting up MPI Environment
----
+--------------------------
 The Anselm cluster provides several implementations of the MPI library:
 <table>
 <colgroup>
@@ -41,7 +42,7 @@ The Anselm cluster provides several implementations of the MPI library:
 MPI libraries are activated via the environment modules.
 Look up section modulefiles/mpi in module avail
     $ module avail
-    -- /opt/modules/modulefiles/mpi --
+    ------------------------- /opt/modules/modulefiles/mpi -------------------------
     bullxmpi/bullxmpi-1.2.4.1  mvapich2/1.9-icc
     impi/4.0.3.008             openmpi/1.6.5-gcc(default)
     impi/4.1.0.024             openmpi/1.6.5-gcc46
@@ -55,7 +56,7 @@ implementation. The defaults may be changed, the MPI libraries may be
 used in conjunction with any compiler.
 The defaults are selected via the modules in the following way
   Module         MPI                Compiler suite
-  -------------- ------------------ -----------
+  -------------- ------------------ --------------------------------------------------------------------------------
   PrgEnv-gnu     bullxmpi-1.2.4.1   bullx GNU 4.4.6
   PrgEnv-intel   Intel MPI 4.1.1    Intel 13.1.1
   bullxmpi       bullxmpi-1.2.4.1   none, select via module
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/Running_OpenMPI.md b/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/Running_OpenMPI.md
index e26dc86330c5c1b0b7e313d63791183b5478145f..bbcde5b0938b47ad0bb552fd31006e4a7779d7bd 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/Running_OpenMPI.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/Running_OpenMPI.md
@@ -1,8 +1,9 @@
 Running OpenMPI 
 ===============
+
   
 OpenMPI program execution
---
+-------------------------
 The OpenMPI programs may be executed only via the PBS Workload manager,
 by entering an appropriate queue. On Anselm, the **bullxmpi-1.2.4.1**
 and **OpenMPI 1.6.5** are OpenMPI based MPI implementations.
@@ -94,7 +95,7 @@ later) the following variables may be used for Intel or GCC:
     $ export OMP_PROC_BIND=true
     $ export OMP_PLACES=cores 
 <span>OpenMPI Process Mapping and Binding</span>
---
+------------------------------------------------
 The mpiexec allows for precise selection of how the MPI processes will
 be mapped to the computational nodes and how these processes will bind
 to particular processor sockets and cores.
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/mpi.md b/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/mpi.md
index f44150a184df1262a5ddaf82e065dca4a97df385..86df2ccbc62084ce59a545f60d19fadc731d6a81 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/mpi.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/mpi.md
@@ -1,8 +1,9 @@
 MPI 
 ===
+
   
 Setting up MPI Environment
----
+--------------------------
 The Anselm cluster provides several implementations of the MPI library:
 <table>
 <colgroup>
@@ -41,7 +42,7 @@ The Anselm cluster provides several implementations of the MPI library:
 MPI libraries are activated via the environment modules.
 Look up section modulefiles/mpi in module avail
     $ module avail
-    -- /opt/modules/modulefiles/mpi --
+    ------------------------- /opt/modules/modulefiles/mpi -------------------------
     bullxmpi/bullxmpi-1.2.4.1  mvapich2/1.9-icc
     impi/4.0.3.008             openmpi/1.6.5-gcc(default)
     impi/4.1.0.024             openmpi/1.6.5-gcc46
@@ -55,7 +56,7 @@ implementation. The defaults may be changed, the MPI libraries may be
 used in conjunction with any compiler.
 The defaults are selected via the modules in the following way
   Module         MPI                Compiler suite
-  -------------- ------------------ -----------
+  -------------- ------------------ --------------------------------------------------------------------------------
   PrgEnv-gnu     bullxmpi-1.2.4.1   bullx GNU 4.4.6
   PrgEnv-intel   Intel MPI 4.1.1    Intel 13.1.1
   bullxmpi       bullxmpi-1.2.4.1   none, select via module
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/mpi4py-mpi-for-python.md b/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/mpi4py-mpi-for-python.md
index fbc70cadd6edb3b6a71904354cf015f4687ac694..18c7db1c2d83ac1ae0eba33b770fca89384cc2a8 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/mpi4py-mpi-for-python.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/mpi4py-mpi-for-python.md
@@ -1,6 +1,7 @@
 MPI4Py (MPI for Python) 
 =======================
 OpenMPI interface to Python
+
   
 Introduction
 ------------
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/running-mpich2.md b/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/running-mpich2.md
index c8d0b2abc90013cd168d0571b9fa0eaaf9d14368..303c0b50ffa13933c63f1e0f1d44a9dfd1b963ac 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/running-mpich2.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/mpi-1/running-mpich2.md
@@ -1,8 +1,9 @@
 Running MPICH2 
 ==============
+
   
 MPICH2 program execution
--
+------------------------
 The MPICH2 programs use the mpd daemon or ssh connection to spawn processes;
 no PBS support is needed. However, the PBS allocation is required to
 access compute nodes. On Anselm, the **Intel MPI** and **mpich2 1.9**
@@ -90,7 +91,7 @@ later) the following variables may be used for Intel or GCC:
     $ export OMP_PLACES=cores 
  
 MPICH2 Process Mapping and Binding
------------
+----------------------------------
 The mpirun allows for precise selection of how the MPI processes will be
 mapped to the computational nodes and how these processes will bind to
 particular processor sockets and cores.
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages.1.md b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages.1.md
index d21af4a17880d87385405d699e9a2b6f0922528e..a74e0be4de1aa5f499917947c9c406cd9a707bf5 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages.1.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages.1.md
@@ -1,6 +1,7 @@
 Numerical languages 
 ===================
 Interpreted languages for numerical computations and analysis
+
   
 Introduction
 ------------
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/copy_of_matlab.md b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/copy_of_matlab.md
index d65abbc583fe9dc933ede89a50d669c7597aa16a..c122022064ae5e570db77cc4d83758548ea62f66 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/copy_of_matlab.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/copy_of_matlab.md
@@ -1,5 +1,6 @@
 Matlab 
 ======
+
   
 Introduction
 ------------
@@ -36,7 +37,7 @@ use
     $ matlab -nodesktop -nosplash
 plots, images, etc. will still be available.
 Running parallel Matlab using Distributed Computing Toolbox / Engine
----
+------------------------------------------------------------------------
 Distributed toolbox is available only for the EDU variant
 The MPIEXEC mode available in previous versions is no longer available
 in MATLAB 2015. Also, the programming interface has changed. Refer
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/introduction.md b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/introduction.md
index d21af4a17880d87385405d699e9a2b6f0922528e..a74e0be4de1aa5f499917947c9c406cd9a707bf5 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/introduction.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/introduction.md
@@ -1,6 +1,7 @@
 Numerical languages 
 ===================
 Interpreted languages for numerical computations and analysis
+
   
 Introduction
 ------------
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/matlab.md b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/matlab.md
index 833a63de9913ede2ed8f5e4109dd6ae0c078b94a..16c6a121e977c52680a8c923747879bbfb3eb823 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/matlab.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/matlab.md
@@ -1,5 +1,6 @@
 Matlab 2013-2014 
 ================
+
   
 Introduction
 ------------
@@ -39,7 +40,7 @@ use
     $ matlab -nodesktop -nosplash
 plots, images, etc. will still be available.
 Running parallel Matlab using Distributed Computing Toolbox / Engine
-----------------------
+--------------------------------------------------------------------
 The recommended parallel mode for running parallel Matlab on Anselm is
 the MPIEXEC mode. In this mode the user allocates resources through PBS prior to
 starting Matlab. Once resources are granted, the main Matlab instance is
@@ -63,7 +64,7 @@ be exactly the same as in the following listing:
     lib = strcat(mpich, 'libmpich.so');
     mpl = strcat(mpich, 'libmpl.so');
     opa = strcat(mpich, 'libopa.so');
-    extras = {};
+    extras = {};
 The system MPI library allows Matlab to communicate through the 40Gbps
 InfiniBand QDR interconnect instead of the slower 1Gb Ethernet network.
 Please note: The path to the MPI library in "mpiLibConf.m" has to match with
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/octave.md b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/octave.md
index c14965c877a4019eb8854ed5cda16bc244959710..9da7e8dd434a17025f55e2f8d8e7962647793308 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/octave.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/octave.md
@@ -1,5 +1,6 @@
 Octave 
 ======
+
   
 Introduction
 ------------
@@ -15,7 +16,7 @@ so that most programs are easily portable. Read more on
 Several versions of Octave are available on Anselm, via modules:
   Version                                                     module
-  ------------- ----
+  ----------------------------------------------------------- ---------------------------
   Octave 3.8.2, compiled with GCC and Multithreaded MKL       Octave/3.8.2-gimkl-2.11.5
   Octave 4.0.1, compiled with GCC and Multithreaded MKL       Octave/4.0.1-gimkl-2.11.5
   Octave 4.0.0, compiled with <span>GCC and OpenBLAS</span>   Octave/4.0.0-foss-2015g
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/r.md b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/r.md
index f7abed65a21defa132f7f5a40afcf21bcd204443..2e59d5ade0fabaea7b178255df2c36a713bd3707 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/r.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/r.md
@@ -1,7 +1,8 @@
 R 
 =
+
   
-Introduction {#parent-fieldname-title}
+Introduction 
 ------------
 R is a language and environment for statistical computing and
 graphics.  R provides a wide variety of statistical (linear and
@@ -282,7 +283,7 @@ using the mclapply() in place of mpi.parSapply().
 Execute the example as:
     $ R --slave --no-save --no-restore -f pi3parSapply.R
 Combining parallel and Rmpi
-----
+---------------------------
 Currently, the two packages cannot be combined for hybrid calculations.
 Parallel execution
 ------------------
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/fftw.md b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/fftw.md
index 3d4d6b8663976cc25fb80bc5624b7feef64efef7..8533170010342069f4882c892699e193ee5195ec 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/fftw.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/fftw.md
@@ -1,6 +1,7 @@
 FFTW 
 ====
 The discrete Fourier transform in one or more dimensions, MPI parallel
+
   
  
 FFTW is a C subroutine library for computing the discrete Fourier
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/gsl.md b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/gsl.md
index a0ea6c83b62bd602e811237f32f3b8f6ea4eac4b..05d4dd354f8712918b859a6f7eb30455023ae361 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/gsl.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/gsl.md
@@ -2,6 +2,7 @@ GSL
 ===
 The GNU Scientific Library. Provides a wide range of mathematical
 routines.
+
   
 Introduction
 ------------
@@ -13,7 +14,7 @@ Applications Programming Interface (API) for C programmers, allowing
 wrappers to be written for very high level languages.
 The library covers a wide range of topics in numerical computing.
 Routines are available for the following areas:
-  - - -
+  ------------------------ ------------------------ ------------------------
                            Complex Numbers          Roots of Polynomials
                            Special Functions        Vectors and Matrices
                            Permutations             Combinations
@@ -33,13 +34,13 @@ Routines are available for the following areas:
                            Least-Squares Fitting    Minimization
                            IEEE Floating-Point      Physical Constants
                            Basis Splines            Wavelets
-  - - -
+  ------------------------ ------------------------ ------------------------
 Modules
 -------
 The GSL 1.16 is available on Anselm, compiled for the GNU and Intel
 compilers. These variants are available via modules:
   Module                  Compiler
-   -----------
+  ----------------------- -----------
   gsl/1.16-gcc            gcc 4.8.6
   gsl/1.16-icc(default)   icc
      $ module load gsl
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/intel-numerical-libraries.md b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/intel-numerical-libraries.md
index 1aa7b9f1fb0be0070badc613e430479cafe1ad65..fbbcaa4c18f1ca5ca7269ba5346a25b6b2da235c 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/intel-numerical-libraries.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/intel-numerical-libraries.md
@@ -1,9 +1,10 @@
 Intel numerical libraries 
 =========================
 Intel libraries for high performance in numerical computing
+
   
 Intel Math Kernel Library
---
+-------------------------
 Intel Math Kernel Library (Intel MKL) is a library of math kernel
 subroutines, extensively threaded and optimized for maximum performance.
 Intel MKL unites and provides these basic components: BLAS, LAPACK,
@@ -14,7 +15,7 @@ Read more at the [Intel
 MKL](https://docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-mkl)
 page.
 Intel Integrated Performance Primitives
-----------------
+---------------------------------------
 Intel Integrated Performance Primitives, version 7.1.1, compiled for AVX
 is available via module ipp. The IPP is a library of highly optimized
 algorithmic building blocks for media and data applications. This
@@ -26,7 +27,7 @@ Read more at the [Intel
 IPP](https://docs.it4i.cz/anselm-cluster-documentation/software/intel-suite/intel-integrated-performance-primitives)
 page.
 Intel Threading Building Blocks
---------
+-------------------------------
 Intel Threading Building Blocks (Intel TBB) is a library that supports
 scalable parallel programming using standard ISO C++ code. It does not
 require special languages or compilers. It is designed to promote
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/petsc.md b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/petsc.md
index 8c2bbdd271f08197dd6eb462706d8e28eaac8850..ab74091f00e9e25a50113ab9c90190ebbad9d0f0 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/petsc.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/numerical-libraries/petsc.md
@@ -4,6 +4,7 @@ PETSc is a suite of building blocks for the scalable solution of
 scientific and engineering applications modelled by partial differential
 equations. It supports MPI, shared memory, and GPUs through CUDA or
 OpenCL, as well as hybrid MPI-shared memory or MPI-GPU parallelism.
+
   
 Introduction
 ------------
@@ -36,7 +37,7 @@ names obey this pattern:
     # module load petsc/version-compiler-mpi-blas-variant, e.g.
       module load petsc/3.4.4-icc-impi-mkl-opt
 where `variant` is replaced by one of
-`{dbg, opt, threads-dbg, threads-opt}`. The `opt` variant is compiled
+`{dbg, opt, threads-dbg, threads-opt}`. The `opt` variant is compiled
 without debugging information (no `-g` option) and with aggressive
 compiler optimizations (`-O3 -xAVX`). This variant is suitable for
 performance measurements and production runs. In all other cases use the
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/nvidia-cuda.md b/docs.it4i.cz/anselm-cluster-documentation/software/nvidia-cuda.md
index e8f6dc8e2d65fd507e268852d45103f100ae0764..efcaf3f18cf845ffe0b974b00725dc940031b03b 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/nvidia-cuda.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/nvidia-cuda.md
@@ -1,9 +1,10 @@
 nVidia CUDA 
 ===========
 A guide to nVidia CUDA programming and GPU usage
+
   
 CUDA Programming on Anselm
----
+--------------------------
 The default programming model for GPU accelerators on Anselm is Nvidia
 CUDA. To set up the environment for CUDA use
     $ module load cuda
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/diagnostic-component-team.md b/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/diagnostic-component-team.md
index c58203dfe8ab675cb388815d043503d7033b3f8d..703e1b3e032988e1ca146bd0283574dc281a2876 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/diagnostic-component-team.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/diagnostic-component-team.md
@@ -1,12 +1,13 @@
 Diagnostic component (TEAM) 
 ===========================
+
   
 ### Access
 TEAM is available at the following address: <http://omics.it4i.cz/team/>
 The address is accessible only via
 [VPN. ](https://docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster/vpn-access)
-### Diagnostic component (TEAM) {#diagnostic-component-team}
+### Diagnostic component (TEAM) 
 VCF files are scanned by this diagnostic tool for known diagnostic
 disease-associated variants. When no diagnostic mutation is found, the
 file can be sent to the disease-causing gene discovery tool to see
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/overview.md b/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/overview.md
index 926745ff46a7d8354021fa192b0aa33a7cd0b5ed..25118238f1c7553c686612551e9dd07cd4308a0a 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/overview.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/overview.md
@@ -1,6 +1,7 @@
 Overview 
 ========
 The human NGS data processing solution
+
   
 Introduction
 ------------
@@ -477,7 +478,7 @@ If we want to re-launch the pipeline from stage 4 until stage 20 we
 should use the next command:
     $ ngsPipeline -i /scratch/$USER/omics/sample_data/data -o /scratch/$USER/omics/results -p /scratch/$USER/omics/sample_data/data/file.ped -s 4 -e 20 --project OPEN-0-0 --queue qprod 
 <span>Details on the pipeline</span>
--------------
+------------------------------------
 <span>The pipeline calls the following tools:</span>
 -   <span>[fastqc](http://www.bioinformatics.babraham.ac.uk/projects/fastqc/),
     a<span> quality control tool for high throughput
@@ -552,7 +553,7 @@ This listing show which tools are used in each step of the pipeline :
 -   <span>stage-27snpEff</span>
 -   <span>stage-28hpg-variant</span>
 <span>Interpretation</span>
-----
+---------------------------
 The output folder contains all the subfolders with the intermediate
 data. This folder contains the final VCF with all the variants. This
 file can be uploaded into
@@ -628,6 +629,7 @@ associated to the phenotype: large intestine tumor.***
 <span>
 </span>
 <span>References</span>
+-----------------------
 1.  <span class="discreet">Heng Li, Bob Handsaker, Alec Wysoker, Tim
     Fennell, Jue Ruan, Nils Homer, Gabor Marth, Goncalo Abecasis,
     Richard Durbin and 1000 Genome Project Data Processing Subgroup. The
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/openfoam.md b/docs.it4i.cz/anselm-cluster-documentation/software/openfoam.md
index 64dd985ceb86a9863c14c4d0ca002912af8bfba1..24c7ad2811eb9e04e4217a6098ffbd60f6dea760 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/openfoam.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/openfoam.md
@@ -1,6 +1,7 @@
 OpenFOAM 
 ========
 A free, open source CFD software package
+
   
 **Introduction**
 ----------------
@@ -36,7 +37,7 @@ To check available modules use
     $ module avail
 In /opt/modules/modulefiles/engineering you can see installed
 engineering softwares:
-    ------------- /opt/modules/modulefiles/engineering ---------------
+    ------------------------------------ /opt/modules/modulefiles/engineering -------------------------------------------------------------
     ansys/14.5.x               matlab/R2013a-COM                                openfoam/2.2.1-icc-impi4.1.1.036-DP
     comsol/43b-COM             matlab/R2013a-EDU                                openfoam/2.2.1-icc-openmpi1.6.5-DP
     comsol/43b-EDU             openfoam/2.2.1-gcc481-openmpi1.6.5-DP            paraview/4.0.1-gcc481-bullxmpi1.2.4.1-osmesa10.0
@@ -65,7 +66,7 @@ the run directory:
 Now you can run the first case for example incompressible laminar flow
 in a cavity.
 **Running Serial Applications**
---------
+-------------------------------
 <span>Create a Bash script </span><span>test.sh</span> 
 <span></span>
 <span> </span>
@@ -86,7 +87,7 @@ in a cavity.
 <span> </span>For information about job submission please [look
 here](https://docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/job-submission-and-execution "Job submission").
 **<span>Running applications in parallel</span>**
----
+-------------------------------------------------
 <span>Run the second case for example external incompressible turbulent
 flow - case - motorBike.</span>
 <span>First we must run serial application blockMesh and decomposePar for
@@ -125,9 +126,9 @@ testParallel.pbs</span>:</span></span>
     source $FOAM_BASHRC
     cd $FOAM_RUN/tutorials/incompressible/simpleFoam/motorBike
     nproc=32
-    mpirun -hostfile ${PBS_NODEFILE} -np $nproc snappyHexMesh -overwrite -parallel | tee snappyHexMesh.log
-    mpirun -hostfile ${PBS_NODEFILE} -np $nproc potentialFoam -noFunctionObject-writep -parallel | tee potentialFoam.log
-    mpirun -hostfile ${PBS_NODEFILE} -np $nproc simpleFoam -parallel | tee simpleFoam.log 
+    mpirun -hostfile ${PBS_NODEFILE} -np $nproc snappyHexMesh -overwrite -parallel | tee snappyHexMesh.log
+    mpirun -hostfile ${PBS_NODEFILE} -np $nproc potentialFoam -noFunctionObject-writep -parallel | tee potentialFoam.log
+    mpirun -hostfile ${PBS_NODEFILE} -np $nproc simpleFoam -parallel | tee simpleFoam.log 
 <span> </span>
 <span>nproc – number of subdomains</span>
 <span>Job submission</span>
@@ -135,7 +136,7 @@ testParallel.pbs</span>:</span></span>
     $ qsub testParallel.pbs
 <span> </span>
 **<span>Compile your own solver</span>**
------------------
+----------------------------------------
 <span>Initialize OpenFOAM environment before compiling your solver
 </span>
 <span> </span>
@@ -166,7 +167,7 @@ testParallel.pbs</span>:</span></span>
 <span>In directory My_icoFoam give the compilation command:</span>
 <span> </span>
     $ wmake
----
+------------------------------------------------------------------------
  
 **Have fun with OpenFOAM :)**
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/operating-system.md b/docs.it4i.cz/anselm-cluster-documentation/software/operating-system.md
index 02a808b1ba8b4f39005e9b431e5208a50e513088..2e07818ec1bf79b4d5f7eaac03cc9028df065839 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/operating-system.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/operating-system.md
@@ -1,6 +1,7 @@
 Operating System 
 ================
 The operating system, deployed on ANSELM
+
   
 The operating system on Anselm is Linux - bullx Linux Server release
 6.3.
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/paraview.md b/docs.it4i.cz/anselm-cluster-documentation/software/paraview.md
index 52fae86d68e3144b3f13a03e66c7bad528343097..dce0e27819d807c80137520253b0a4c0c6377d8e 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/paraview.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/paraview.md
@@ -2,6 +2,7 @@ ParaView
 ========
 An open-source, multi-platform data analysis and visualization
 application
+
   
 Introduction
 ------------
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/virtualization.md b/docs.it4i.cz/anselm-cluster-documentation/software/virtualization.md
index 88fa64f1d31f9802f03d328b24d140e65f8d7bbe..1ddb57a7dd18b78d7375e0e3ef8f4c6f54a3f6c4 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/virtualization.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/virtualization.md
@@ -1,6 +1,7 @@
 Virtualization 
 ==============
 Running virtual machines on compute nodes
+
   
 Introduction
 ------------
@@ -169,30 +170,30 @@ class="hps trans-target-highlight"></span></span>[Virtual Machine Job
 Workflow](#virtual-machine-job-workflow).
 Example job for Windows virtual machine:
     #!/bin/sh
-    JOB_DIR=/scratch/$USER/win/${PBS_JOBID}
+    JOB_DIR=/scratch/$USER/win/${PBS_JOBID}
     #Virtual machine settings
     VM_IMAGE=~/work/img/win.img
     VM_MEMORY=49152
     VM_SMP=16
     # Prepare job dir
-    mkdir -p ${JOB_DIR} && cd ${JOB_DIR} || exit 1
+    mkdir -p ${JOB_DIR} && cd ${JOB_DIR} || exit 1
     ln -s ~/work/win .
     ln -s /scratch/$USER/data .
     ln -s ~/work/win/script/run/run-appl.bat run.bat
     # Run virtual machine
-    export TMPDIR=/lscratch/${PBS_JOBID}
+    export TMPDIR=/lscratch/${PBS_JOBID}
     module add qemu
     qemu-system-x86_64 
       -enable-kvm 
       -cpu host 
-      -smp ${VM_SMP} 
-      -m ${VM_MEMORY} 
+      -smp ${VM_SMP} 
+      -m ${VM_MEMORY} 
       -vga std 
       -localtime 
       -usb -usbdevice tablet 
       -device virtio-net-pci,netdev=net0 
-      -netdev user,id=net0,smb=${JOB_DIR},hostfwd=tcp::3389-:3389 
-      -drive file=${VM_IMAGE},media=disk,if=virtio 
+      -netdev user,id=net0,smb=${JOB_DIR},hostfwd=tcp::3389-:3389 
+      -drive file=${VM_IMAGE},media=disk,if=virtio 
       -snapshot 
       -nographic
 Job script links application data (win), input data (data) and run
@@ -341,7 +342,7 @@ In snapshot mode image is not written, changes are written to temporary
 file (and discarded after virtual machine exits). **It is strongly
 recommended mode for running your jobs.** Set TMPDIR environment
 variable to local scratch directory for placement temporary files.
-    $ export TMPDIR=/lscratch/${PBS_JOBID}
+    $ export TMPDIR=/lscratch/${PBS_JOBID}
     $ qemu-system-x86_64 ... -snapshot
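As a side note on the `${PBS_JOBID}` syntax used throughout these scripts: the braces let the shell separate the variable name from adjacent path text. A minimal sketch with hypothetical stand-in values (inside a real job, PBS sets `PBS_JOBID` itself and `USER` is your login name):

```shell
# Hypothetical values for illustration only; PBS provides PBS_JOBID
# automatically inside a running job.
PBS_JOBID=123456.dm2
USER=user001
# Braces delimit the variable name from the text that follows it
JOB_DIR=/scratch/$USER/win/${PBS_JOBID}
echo "$JOB_DIR"
```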
 ### Windows guests
 For Windows guests we recommend these options, life will be easier:
diff --git a/docs.it4i.cz/anselm-cluster-documentation/software/virtualization/virtualization.md b/docs.it4i.cz/anselm-cluster-documentation/software/virtualization/virtualization.md
index 88fa64f1d31f9802f03d328b24d140e65f8d7bbe..1ddb57a7dd18b78d7375e0e3ef8f4c6f54a3f6c4 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/software/virtualization/virtualization.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/software/virtualization/virtualization.md
@@ -1,6 +1,7 @@
 Virtualization 
 ==============
 Running virtual machines on compute nodes
+
   
 Introduction
 ------------
@@ -169,30 +170,30 @@ class="hps trans-target-highlight"></span></span>[Virtual Machine Job
 Workflow](#virtual-machine-job-workflow).
 Example job for Windows virtual machine:
     #!/bin/sh
-    JOB_DIR=/scratch/$USER/win/${PBS_JOBID}
+    JOB_DIR=/scratch/$USER/win/${PBS_JOBID}
     #Virtual machine settings
     VM_IMAGE=~/work/img/win.img
     VM_MEMORY=49152
     VM_SMP=16
     # Prepare job dir
-    mkdir -p ${JOB_DIR} && cd ${JOB_DIR} || exit 1
+    mkdir -p ${JOB_DIR} && cd ${JOB_DIR} || exit 1
     ln -s ~/work/win .
     ln -s /scratch/$USER/data .
     ln -s ~/work/win/script/run/run-appl.bat run.bat
     # Run virtual machine
-    export TMPDIR=/lscratch/${PBS_JOBID}
+    export TMPDIR=/lscratch/${PBS_JOBID}
     module add qemu
     qemu-system-x86_64 
       -enable-kvm 
       -cpu host 
-      -smp ${VM_SMP} 
-      -m ${VM_MEMORY} 
+      -smp ${VM_SMP} 
+      -m ${VM_MEMORY} 
       -vga std 
       -localtime 
       -usb -usbdevice tablet 
       -device virtio-net-pci,netdev=net0 
-      -netdev user,id=net0,smb=${JOB_DIR},hostfwd=tcp::3389-:3389 
-      -drive file=${VM_IMAGE},media=disk,if=virtio 
+      -netdev user,id=net0,smb=${JOB_DIR},hostfwd=tcp::3389-:3389 
+      -drive file=${VM_IMAGE},media=disk,if=virtio 
       -snapshot 
       -nographic
 Job script links application data (win), input data (data) and run
@@ -341,7 +342,7 @@ In snapshot mode image is not written, changes are written to temporary
 file (and discarded after virtual machine exits). **It is strongly
 recommended mode for running your jobs.** Set TMPDIR environment
 variable to local scratch directory for placement temporary files.
-    $ export TMPDIR=/lscratch/${PBS_JOBID}
+    $ export TMPDIR=/lscratch/${PBS_JOBID}
     $ qemu-system-x86_64 ... -snapshot
 ### Windows guests
 For Windows guests we recommend these options, life will be easier:
diff --git a/docs.it4i.cz/anselm-cluster-documentation/storage-1.md b/docs.it4i.cz/anselm-cluster-documentation/storage-1.md
index 8fca8932e4dd28e96ea3b96c0c2383987a62c404..167b6fa6399a542db4304c83a3a2de37ce109472 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/storage-1.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/storage-1.md
@@ -1,5 +1,6 @@
 Storage 
 =======
+
   
 There are two main shared file systems on Anselm cluster, the
 [HOME](#home) and [SCRATCH](#scratch). All
@@ -64,12 +65,12 @@ Use the lfs getstripe for getting the stripe parameters. Use the lfs
 setstripe command for setting the stripe parameters to get optimal I/O
 performance The correct stripe setting depends on your needs and file
 access patterns. 
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs getstripe dir|filename 
 $ lfs setstripe -s stripe_size -c stripe_count -o stripe_offset dir|filename 
 ```
 Example:
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs getstripe /scratch/username/
 /scratch/username/
 stripe_count  1 stripe_size   1048576 stripe_offset -1
@@ -84,7 +85,7 @@ and verified. All files written to this directory will be striped over
 10 OSTs
 Use lfs check OSTs to see the number and status of active OSTs for each
 filesystem on Anselm. Learn more by reading the man page
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs check osts
 $ man lfs
 ```
@@ -223,11 +224,11 @@ Number of OSTs
 ### <span>Disk usage and quota commands</span>
 <span>User quotas on the file systems can be checked and reviewed using
 following command:</span>
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs quota dir
 ```
 Example for Lustre HOME directory:
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs quota /home
 Disk quotas for user user001 (uid 1234):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
@@ -239,7 +240,7 @@ Disk quotas for group user001 (gid 1234):
 In this example, we view current quota size limit of 250GB and 300MB
 currently used by user001.
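As a quick sanity check on the units: `lfs quota` reports sizes in kbytes, so the 250GB limit from the example above corresponds to the following conversion (values fabricated to match the example):

```shell
# 250 GB expressed in kbytes, as `lfs quota` would report it
limit_kbytes=262144000
# Convert back to gigabytes (1 GB = 1024 * 1024 kbytes here)
limit_gb=$(( limit_kbytes / 1024 / 1024 ))
echo "${limit_gb} GB"
```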
 Example for Lustre SCRATCH directory:
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs quota /scratch
 Disk quotas for user user001 (uid 1234):
      Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
@@ -253,11 +254,11 @@ currently used by user001.
  
 To have a better understanding of where the space is exactly used, you
 can use following command to find out.
-``` {.prettyprint .lang-sh}
+``` 
 $ du -hs dir
 ```
 Example for your HOME directory:
-``` {.prettyprint .lang-sh}
+``` 
 $ cd /home
 $ du -hs * .[a-zA-Z0-9]* | grep -E "[0-9]*G|[0-9]*M" | sort -hr
 258M     cuda-samples
@@ -272,10 +273,10 @@ is sorted in descending order from largest to smallest
 files/directories.
 <span>To have a better understanding of previous commands, you can read
 manpages.</span>
-``` {.prettyprint .lang-sh}
+``` 
 $ man lfs
 ```
-``` {.prettyprint .lang-sh}
+``` 
 $ man du 
 ```
 ### Extended ACLs
@@ -287,7 +288,7 @@ number of named user and named group entries.
 ACLs on a Lustre file system work exactly like ACLs on any Linux file
 system. They are manipulated with the standard tools in the standard
 manner. Below, we create a directory and allow a specific user access.
-``` {.prettyprint .lang-sh}
+``` 
 [vop999@login1.anselm ~]$ umask 027
 [vop999@login1.anselm ~]$ mkdir test
 [vop999@login1.anselm ~]$ ls -ld test
@@ -388,7 +389,7 @@ files in /tmp directory are automatically purged.
 **
 ----------
   Mountpoint                                 Usage                       Protocol   Net Capacity     Throughput   Limitations   Access                    Services
-  ------------------- ---- ---------- ---------------- ------------ ------------- -- ------
+  ------------------------------------------ --------------------------- ---------- ---------------- ------------ ------------- ------------------------- -----------------------------
   <span class="monospace">/home</span>       home directory              Lustre     320 TiB          2 GB/s       Quota 250GB   Compute and login nodes   backed up
   <span class="monospace">/scratch</span>    cluster shared jobs' data   Lustre     146 TiB          6 GB/s       Quota 100TB   Compute and login nodes   files older 90 days removed
   <span class="monospace">/lscratch</span>   node local jobs' data       local      330 GB           100 MB/s     none          Compute nodes             purged after job ends
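The `du | grep | sort` pipeline shown earlier in this file can be tried on fabricated `du`-style output; the sizes and directory names below are made up for illustration:

```shell
# Fabricated du-style lines stand in for a real home directory listing;
# grep keeps the M/G-sized entries and sort -hr orders them largest first
printf '15M\t.cache\n258M\tcuda-samples\n13M\t.mozilla\n' \
  | grep -E "[0-9]*G|[0-9]*M" | sort -hr
```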
diff --git a/docs.it4i.cz/anselm-cluster-documentation/storage-1/cesnet-data-storage.md b/docs.it4i.cz/anselm-cluster-documentation/storage-1/cesnet-data-storage.md
index 51eb708c8ba89f571f846712728ebbb61fbf8ad8..2f2c5a878f90f24358789ea4cc48115f6cba8576 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/storage-1/cesnet-data-storage.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/storage-1/cesnet-data-storage.md
@@ -1,5 +1,6 @@
 CESNET Data Storage 
 ===================
+
   
 Introduction
 ------------
@@ -25,7 +26,7 @@ Policy, AUP)”.
 The service is documented at
 <https://du.cesnet.cz/wiki/doku.php/en/start>. For special requirements
 please contact directly CESNET Storage Department via e-mail
-[du-support(at)cesnet.cz](mailto:du-support@cesnet.cz){.email-link}.
+[du-support(at)cesnet.cz](mailto:du-support@cesnet.cz).
 The procedure to obtain the CESNET access is quick and trouble-free.
 (source
 [https://du.cesnet.cz/](https://du.cesnet.cz/wiki/doku.php/en/start "CESNET Data Storage"))
diff --git a/docs.it4i.cz/anselm-cluster-documentation/storage-1/storage.md b/docs.it4i.cz/anselm-cluster-documentation/storage-1/storage.md
index 8fca8932e4dd28e96ea3b96c0c2383987a62c404..167b6fa6399a542db4304c83a3a2de37ce109472 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/storage-1/storage.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/storage-1/storage.md
@@ -1,5 +1,6 @@
 Storage 
 =======
+
   
 There are two main shared file systems on Anselm cluster, the
 [HOME](#home) and [SCRATCH](#scratch). All
@@ -64,12 +65,12 @@ Use the lfs getstripe for getting the stripe parameters. Use the lfs
 setstripe command for setting the stripe parameters to get optimal I/O
 performance The correct stripe setting depends on your needs and file
 access patterns. 
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs getstripe dir|filename 
 $ lfs setstripe -s stripe_size -c stripe_count -o stripe_offset dir|filename 
 ```
 Example:
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs getstripe /scratch/username/
 /scratch/username/
 stripe_count  1 stripe_size   1048576 stripe_offset -1
@@ -84,7 +85,7 @@ and verified. All files written to this directory will be striped over
 10 OSTs
 Use lfs check OSTs to see the number and status of active OSTs for each
 filesystem on Anselm. Learn more by reading the man page
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs check osts
 $ man lfs
 ```
@@ -223,11 +224,11 @@ Number of OSTs
 ### <span>Disk usage and quota commands</span>
 <span>User quotas on the file systems can be checked and reviewed using
 following command:</span>
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs quota dir
 ```
 Example for Lustre HOME directory:
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs quota /home
 Disk quotas for user user001 (uid 1234):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
@@ -239,7 +240,7 @@ Disk quotas for group user001 (gid 1234):
 In this example, we view current quota size limit of 250GB and 300MB
 currently used by user001.
 Example for Lustre SCRATCH directory:
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs quota /scratch
 Disk quotas for user user001 (uid 1234):
      Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
@@ -253,11 +254,11 @@ currently used by user001.
  
 To have a better understanding of where the space is exactly used, you
 can use following command to find out.
-``` {.prettyprint .lang-sh}
+``` 
 $ du -hs dir
 ```
 Example for your HOME directory:
-``` {.prettyprint .lang-sh}
+``` 
 $ cd /home
 $ du -hs * .[a-zA-Z0-9]* | grep -E "[0-9]*G|[0-9]*M" | sort -hr
 258M     cuda-samples
@@ -272,10 +273,10 @@ is sorted in descending order from largest to smallest
 files/directories.
 <span>To have a better understanding of previous commands, you can read
 manpages.</span>
-``` {.prettyprint .lang-sh}
+``` 
 $ man lfs
 ```
-``` {.prettyprint .lang-sh}
+``` 
 $ man du 
 ```
 ### Extended ACLs
@@ -287,7 +288,7 @@ number of named user and named group entries.
 ACLs on a Lustre file system work exactly like ACLs on any Linux file
 system. They are manipulated with the standard tools in the standard
 manner. Below, we create a directory and allow a specific user access.
-``` {.prettyprint .lang-sh}
+``` 
 [vop999@login1.anselm ~]$ umask 027
 [vop999@login1.anselm ~]$ mkdir test
 [vop999@login1.anselm ~]$ ls -ld test
@@ -388,7 +389,7 @@ files in /tmp directory are automatically purged.
 **
 ----------
   Mountpoint                                 Usage                       Protocol   Net Capacity     Throughput   Limitations   Access                    Services
-  ------------------- ---- ---------- ---------------- ------------ ------------- -- ------
+  ------------------------------------------ --------------------------- ---------- ---------------- ------------ ------------- ------------------------- -----------------------------
   <span class="monospace">/home</span>       home directory              Lustre     320 TiB          2 GB/s       Quota 250GB   Compute and login nodes   backed up
   <span class="monospace">/scratch</span>    cluster shared jobs' data   Lustre     146 TiB          6 GB/s       Quota 100TB   Compute and login nodes   files older 90 days removed
   <span class="monospace">/lscratch</span>   node local jobs' data       local      330 GB           100 MB/s     none          Compute nodes             purged after job ends
diff --git a/docs.it4i.cz/anselm-cluster-documentation/storage.md b/docs.it4i.cz/anselm-cluster-documentation/storage.md
index 8fca8932e4dd28e96ea3b96c0c2383987a62c404..167b6fa6399a542db4304c83a3a2de37ce109472 100644
--- a/docs.it4i.cz/anselm-cluster-documentation/storage.md
+++ b/docs.it4i.cz/anselm-cluster-documentation/storage.md
@@ -1,5 +1,6 @@
 Storage 
 =======
+
   
 There are two main shared file systems on Anselm cluster, the
 [HOME](#home) and [SCRATCH](#scratch). All
@@ -64,12 +65,12 @@ Use the lfs getstripe for getting the stripe parameters. Use the lfs
 setstripe command for setting the stripe parameters to get optimal I/O
 performance The correct stripe setting depends on your needs and file
 access patterns. 
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs getstripe dir|filename 
 $ lfs setstripe -s stripe_size -c stripe_count -o stripe_offset dir|filename 
 ```
 Example:
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs getstripe /scratch/username/
 /scratch/username/
 stripe_count  1 stripe_size   1048576 stripe_offset -1
@@ -84,7 +85,7 @@ and verified. All files written to this directory will be striped over
 10 OSTs
 Use lfs check OSTs to see the number and status of active OSTs for each
 filesystem on Anselm. Learn more by reading the man page
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs check osts
 $ man lfs
 ```
@@ -223,11 +224,11 @@ Number of OSTs
 ### <span>Disk usage and quota commands</span>
 <span>User quotas on the file systems can be checked and reviewed using
 following command:</span>
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs quota dir
 ```
 Example for Lustre HOME directory:
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs quota /home
 Disk quotas for user user001 (uid 1234):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
@@ -239,7 +240,7 @@ Disk quotas for group user001 (gid 1234):
 In this example, we view current quota size limit of 250GB and 300MB
 currently used by user001.
 Example for Lustre SCRATCH directory:
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs quota /scratch
 Disk quotas for user user001 (uid 1234):
      Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
@@ -253,11 +254,11 @@ currently used by user001.
  
 To have a better understanding of where the space is exactly used, you
 can use following command to find out.
-``` {.prettyprint .lang-sh}
+``` 
 $ du -hs dir
 ```
 Example for your HOME directory:
-``` {.prettyprint .lang-sh}
+``` 
 $ cd /home
 $ du -hs * .[a-zA-Z0-9]* | grep -E "[0-9]*G|[0-9]*M" | sort -hr
 258M     cuda-samples
@@ -272,10 +273,10 @@ is sorted in descending order from largest to smallest
 files/directories.
 <span>To have a better understanding of previous commands, you can read
 manpages.</span>
-``` {.prettyprint .lang-sh}
+``` 
 $ man lfs
 ```
-``` {.prettyprint .lang-sh}
+``` 
 $ man du 
 ```
 ### Extended ACLs
@@ -287,7 +288,7 @@ number of named user and named group entries.
 ACLs on a Lustre file system work exactly like ACLs on any Linux file
 system. They are manipulated with the standard tools in the standard
 manner. Below, we create a directory and allow a specific user access.
-``` {.prettyprint .lang-sh}
+``` 
 [vop999@login1.anselm ~]$ umask 027
 [vop999@login1.anselm ~]$ mkdir test
 [vop999@login1.anselm ~]$ ls -ld test
@@ -388,7 +389,7 @@ files in /tmp directory are automatically purged.
 **
 ----------
   Mountpoint                                 Usage                       Protocol   Net Capacity     Throughput   Limitations   Access                    Services
-  ------------------- ---- ---------- ---------------- ------------ ------------- -- ------
+  ------------------------------------------ --------------------------- ---------- ---------------- ------------ ------------- ------------------------- -----------------------------
   <span class="monospace">/home</span>       home directory              Lustre     320 TiB          2 GB/s       Quota 250GB   Compute and login nodes   backed up
   <span class="monospace">/scratch</span>    cluster shared jobs' data   Lustre     146 TiB          6 GB/s       Quota 100TB   Compute and login nodes   files older 90 days removed
   <span class="monospace">/lscratch</span>   node local jobs' data       local      330 GB           100 MB/s     none          Compute nodes             purged after job ends
diff --git a/docs.it4i.cz/anselm.md b/docs.it4i.cz/anselm.md
index 7d1bde3e9229026bb5cd0d102710e86979641d53..cf3bfcbbadff97a674e71305f57127f8ec033092 100644
--- a/docs.it4i.cz/anselm.md
+++ b/docs.it4i.cz/anselm.md
@@ -1,5 +1,6 @@
 Introduction 
 ============
+
   
 Welcome to Anselm supercomputer cluster. The Anselm cluster consists of
 209 compute nodes, totaling 3344 compute cores with 15TB RAM and giving
diff --git a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters.md b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters.md
index f918d611f7785269ff28f4a45f3bb4999e19d455..1e3972c628d29ede9e92aa13be79c1e8d5149793 100644
--- a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters.md
+++ b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters.md
@@ -10,6 +10,6 @@ pages.
 ### PuTTY
 On **Windows**, use [PuTTY ssh
 client](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty).
-### SSH keys {#parent-fieldname-title}
+### SSH keys 
 Read more about [SSH keys
 management](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys).
diff --git a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface.md b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface.md
index ff3738063f55085dad7cad70901c9102e176ce07..bf8c88bdf23764386bc0098a609c08830258c367 100644
--- a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface.md
+++ b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface.md
@@ -1,5 +1,6 @@
 Graphical User Interface 
 ========================
+
   
 X Window System
 ---------------
diff --git a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md
index ff3738063f55085dad7cad70901c9102e176ce07..bf8c88bdf23764386bc0098a609c08830258c367 100644
--- a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md
+++ b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface.md
@@ -1,5 +1,6 @@
 Graphical User Interface 
 ========================
+
   
 X Window System
 ---------------
diff --git a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/vnc.md b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/vnc.md
index 639e7df17d9fb68eae74bd555db95cd03c5a5262..e04f8ace26a0c15a2618bfefe1ae30f1d932e46e 100644
--- a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/vnc.md
+++ b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/vnc.md
@@ -1,5 +1,6 @@
 VNC 
 ===
+
   
 The **Virtual Network Computing** (**VNC**) is a graphical [desktop
 sharing](http://en.wikipedia.org/wiki/Desktop_sharing "Desktop sharing")
@@ -10,9 +11,9 @@ remotely control another
 transmits the
 [keyboard](http://en.wikipedia.org/wiki/Computer_keyboard "Computer keyboard")
 and
-[mouse](http://en.wikipedia.org/wiki/Computer_mouse "Computer mouse"){.mw-redirect}
+[mouse](http://en.wikipedia.org/wiki/Computer_mouse "Computer mouse")
 events from one computer to another, relaying the graphical
-[screen](http://en.wikipedia.org/wiki/Computer_screen "Computer screen"){.mw-redirect}
+[screen](http://en.wikipedia.org/wiki/Computer_screen "Computer screen")
 updates back in the other direction, over a
 [network](http://en.wikipedia.org/wiki/Computer_network "Computer network").
 The recommended clients are
@@ -84,7 +85,7 @@ tcp        0      0 127.0.0.1:5961          0.0.0.0:*   
 tcp6       0      0 ::1:5961                :::*                    LISTEN      2022/ssh 
 ```
 Or on Mac OS use this command:
-``` {.prettyprint .lang-sh}
+``` 
 local-mac $ lsof -n -i4TCP:5961 | grep LISTEN
 ssh 75890 sta545 7u IPv4 0xfb062b5c15a56a3b 0t0 TCP 127.0.0.1:5961 (LISTEN)
 ```
@@ -166,7 +167,7 @@ Or this way:
 [username@login2 .vnc]$  pkill vnc
 ```
 GUI applications on compute nodes over VNC
--------------------
+------------------------------------------
 The very [same methods as described
 above](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-and-vnc#gui-applications-on-compute-nodes)
 may be used to run the GUI applications on compute nodes. However, for
diff --git a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.1.md b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.1.md
index bc1a29316f043b469e4b5c1a5ae8a65a3d6ce2be..e6ceaf2d13c9352ed4d6387c387d371cac4524e2 100644
--- a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.1.md
+++ b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.1.md
@@ -1,12 +1,13 @@
 X Window System 
 ===============
+
   
 The X Window system is a principal way to get GUI access to the
 clusters. The **X Window System** (commonly known as **X11**, based on
 its current major version being 11, or shortened to simply **X**, and
 sometimes informally **X-Windows**) is a computer software system and
 network
-[protocol](http://en.wikipedia.org/wiki/Protocol_%28computing%29 "Protocol (computing)"){.mw-redirect}
+[protocol](http://en.wikipedia.org/wiki/Protocol_%28computing%29 "Protocol (computing)")
 that provides a basis for [graphical user
 interfaces](http://en.wikipedia.org/wiki/Graphical_user_interface "Graphical user interface")
 (GUIs) and rich input device capability for [networked
@@ -17,7 +18,7 @@ client side
 In order to display the graphical user interface (GUI) of various software
 tools, you need to enable X display forwarding. On Linux and Mac,
 log in using the -X option to the ssh client:
-``` {.prettyprint .lang-sh}
+``` 
  local $ ssh -X username@cluster-name.it4i.cz
 ```
 ### X Display Forwarding on Windows
@@ -25,11 +26,11 @@ On Windows use the PuTTY client to enable X11 forwarding.   In PuTTY
 menu, go to Connection->SSH->X11, mark the Enable X11 forwarding
 checkbox before logging in. Then log in as usual.
 To verify the forwarding, type
-``` {.prettyprint .lang-sh}
+``` 
 $ echo $DISPLAY
 ```
 if you receive something like
-``` {.prettyprint .lang-sh}
+``` 
 localhost:10.0
 ```
 then the X11 forwarding is enabled.
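 The check above can be wrapped in a small helper; a minimal sketch (the
 localhost:10.0 value is just the typical form a forwarded display takes):
 ```
 # Report whether an X11 display is available in the current session.
 check_display() {
     if [ -n "${DISPLAY:-}" ]; then
         echo "X11 display: $DISPLAY"
     else
         echo "no X11 display; forwarding is not active"
     fi
 }
 check_display
 ```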
@@ -79,18 +80,18 @@ Read more on
 Make sure that X forwarding is activated and the X server is running.
 Then launch the application as usual. Use the & to run the application
 in background.
-``` {.prettyprint .lang-sh}
+``` 
 $ module load intel    # idb and gvim not installed yet
 $ gvim &
 ```
-``` {.prettyprint .lang-sh}
+``` 
 $ xterm
 ```
 In this example, we activate the Intel programming environment tools,
 then start the graphical gvim editor.
 ### []()GUI Applications on Compute Nodes
 Allocate the compute nodes using -X option on the qsub command
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -q qexp -l select=2:ncpus=24 -X -I
 ```
 In this example, we allocate 2 nodes via qexp queue, interactively. We
@@ -98,7 +99,7 @@ request X11 forwarding with the -X option. It will be possible to run
 the GUI enabled applications directly on the first compute node.
 **Better performance** is obtained by logging on the allocated compute
 node via ssh, using the -X option.
-``` {.prettyprint .lang-sh}
+``` 
 $ ssh -X r24u35n680
 ```
 In this example, we log in on the r24u35n680 compute node, with the X11
@@ -114,7 +115,7 @@ need to install Xephyr. Ubuntu package is <span
 class="monospace">xserver-xephyr</span>, on OS X it is part of
 [XQuartz](http://xquartz.macosforge.org/landing/).
 First, launch Xephyr on local machine:
-``` {.prettyprint .lang-sh}
+``` 
 local $ Xephyr -ac -screen 1024x768 -br -reset -terminate :1 &
 ```
 This will open a new X window with size 1024x768 at DISPLAY :1. Next,
@@ -139,7 +140,7 @@ Use Xlaunch to start the Xming server or run the XWin.exe. Select the
 "One window" mode.
 Log in to the cluster, using PuTTY. On the cluster, run the
 gnome-session command.
-``` {.prettyprint .lang-sh}
+``` 
 $ gnome-session &
 ```
 In this way, we run remote gnome session on the cluster, displaying it
diff --git a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/cygwin-and-x11-forwarding.md b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/cygwin-and-x11-forwarding.md
index 355972864bb013b33ff322dbcbc308557f0d6c78..61eaaf137f7977ea1203cc8b3267e0c580636b50 100644
--- a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/cygwin-and-x11-forwarding.md
+++ b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/cygwin-and-x11-forwarding.md
@@ -1,7 +1,7 @@
 Cygwin and X11 forwarding 
 =========================
 ### If <span style="text-align: left; float: none;">not able to forward X11 using PuTTY to Cygwin/X</span>
-``` {.prettyprint .lang-sh}
+``` 
 [username@login1.anselm ~]$ gnome-session &
 [1] 23691
 [username@login1.anselm ~]$ PuTTY X11 proxy: unable to connect to forwarded X server: Network error: Connection refused
diff --git a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/cygwin-and-x11-forwarding/cygwin-and-x11-forwarding.md b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/cygwin-and-x11-forwarding/cygwin-and-x11-forwarding.md
index 355972864bb013b33ff322dbcbc308557f0d6c78..61eaaf137f7977ea1203cc8b3267e0c580636b50 100644
--- a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/cygwin-and-x11-forwarding/cygwin-and-x11-forwarding.md
+++ b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/cygwin-and-x11-forwarding/cygwin-and-x11-forwarding.md
@@ -1,7 +1,7 @@
 Cygwin and X11 forwarding 
 =========================
 ### If <span style="text-align: left; float: none;">not able to forward X11 using PuTTY to Cygwin/X</span>
-``` {.prettyprint .lang-sh}
+``` 
 [username@login1.anselm ~]$ gnome-session &
 [1] 23691
 [username@login1.anselm ~]$ PuTTY X11 proxy: unable to connect to forwarded X server: Network error: Connection refused
diff --git a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.md b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.md
index bc1a29316f043b469e4b5c1a5ae8a65a3d6ce2be..e6ceaf2d13c9352ed4d6387c387d371cac4524e2 100644
--- a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.md
+++ b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc.md
@@ -1,12 +1,13 @@
 X Window System 
 ===============
+
   
 The X Window system is a principal way to get GUI access to the
 clusters. The **X Window System** (commonly known as **X11**, based on
 its current major version being 11, or shortened to simply **X**, and
 sometimes informally **X-Windows**) is a computer software system and
 network
-[protocol](http://en.wikipedia.org/wiki/Protocol_%28computing%29 "Protocol (computing)"){.mw-redirect}
+[protocol](http://en.wikipedia.org/wiki/Protocol_%28computing%29 "Protocol (computing)")
 that provides a basis for [graphical user
 interfaces](http://en.wikipedia.org/wiki/Graphical_user_interface "Graphical user interface")
 (GUIs) and rich input device capability for [networked
@@ -17,7 +18,7 @@ client side
 In order to display the graphical user interface (GUI) of various software
 tools, you need to enable X display forwarding. On Linux and Mac,
 log in using the -X option to the ssh client:
-``` {.prettyprint .lang-sh}
+``` 
  local $ ssh -X username@cluster-name.it4i.cz
 ```
 ### X Display Forwarding on Windows
@@ -25,11 +26,11 @@ On Windows use the PuTTY client to enable X11 forwarding.   In PuTTY
 menu, go to Connection->SSH->X11, mark the Enable X11 forwarding
 checkbox before logging in. Then log in as usual.
 To verify the forwarding, type
-``` {.prettyprint .lang-sh}
+``` 
 $ echo $DISPLAY
 ```
 if you receive something like
-``` {.prettyprint .lang-sh}
+``` 
 localhost:10.0
 ```
 then the X11 forwarding is enabled.
@@ -79,18 +80,18 @@ Read more on
 Make sure that X forwarding is activated and the X server is running.
 Then launch the application as usual. Use the & to run the application
 in background.
-``` {.prettyprint .lang-sh}
+``` 
 $ module load intel    # idb and gvim not installed yet
 $ gvim &
 ```
-``` {.prettyprint .lang-sh}
+``` 
 $ xterm
 ```
 In this example, we activate the Intel programming environment tools,
 then start the graphical gvim editor.
 ### []()GUI Applications on Compute Nodes
 Allocate the compute nodes using -X option on the qsub command
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -q qexp -l select=2:ncpus=24 -X -I
 ```
 In this example, we allocate 2 nodes via qexp queue, interactively. We
@@ -98,7 +99,7 @@ request X11 forwarding with the -X option. It will be possible to run
 the GUI enabled applications directly on the first compute node.
 **Better performance** is obtained by logging on the allocated compute
 node via ssh, using the -X option.
-``` {.prettyprint .lang-sh}
+``` 
 $ ssh -X r24u35n680
 ```
 In this example, we log in on the r24u35n680 compute node, with the X11
@@ -114,7 +115,7 @@ need to install Xephyr. Ubuntu package is <span
 class="monospace">xserver-xephyr</span>, on OS X it is part of
 [XQuartz](http://xquartz.macosforge.org/landing/).
 First, launch Xephyr on local machine:
-``` {.prettyprint .lang-sh}
+``` 
 local $ Xephyr -ac -screen 1024x768 -br -reset -terminate :1 &
 ```
 This will open a new X window with size 1024x768 at DISPLAY :1. Next,
@@ -139,7 +140,7 @@ Use Xlaunch to start the Xming server or run the XWin.exe. Select the
 "One window" mode.
 Log in to the cluster, using PuTTY. On the cluster, run the
 gnome-session command.
-``` {.prettyprint .lang-sh}
+``` 
 $ gnome-session &
 ```
 In this way, we run remote gnome session on the cluster, displaying it
diff --git a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/accessing-the-clusters.md b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/accessing-the-clusters.md
index f918d611f7785269ff28f4a45f3bb4999e19d455..1e3972c628d29ede9e92aa13be79c1e8d5149793 100644
--- a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/accessing-the-clusters.md
+++ b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/accessing-the-clusters.md
@@ -10,6 +10,6 @@ pages.
 ### PuTTY
 On **Windows**, use [PuTTY ssh
 client](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty).
-### SSH keys {#parent-fieldname-title}
+### SSH keys 
 Read more about [SSH keys
 management](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys).
diff --git a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.1.md b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.1.md
index 61311b4df8d471a634be11891872d209f9fc1fa9..612d53fa59edff1f95d55ff551ccf106d81e58c3 100644
--- a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.1.md
+++ b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.1.md
@@ -1,8 +1,9 @@
 PuTTY 
 =====
+
   
 PuTTY - before we start SSH connection {#putty---before-we-start-ssh-connection style="text-align: start;"}
-------------
+---------------------------------------------------------------------------------
 ### Windows PuTTY Installer
 We recommend downloading "**A Windows installer for everything
 except PuTTYtel**" with ***Pageant*** (SSH authentication agent) and
@@ -26,7 +27,7 @@ if needed.
 holds your private key in memory without needing to retype a passphrase
 on every login. We recommend its usage.
 []()PuTTY - how to connect to the IT4Innovations cluster
-----------
+--------------------------------------------------------
 -   Run PuTTY
 -   Enter Host name and Save session fields with [Login
     address](https://docs.it4i.cz/salomon/accessing-the-cluster/shell-and-data-access/shell-and-data-access)
diff --git a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/pageant.md b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/pageant.md
index 03cd2bbce4d30e078c087bcb5f72181ccc4a013c..2e15cc77b6b19702115d6294b5cc538f86133d1d 100644
--- a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/pageant.md
+++ b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/pageant.md
@@ -1,5 +1,6 @@
 Pageant SSH agent 
 =================
+
   
 Pageant holds your private key in memory without needing to retype a
 passphrase on every login.
diff --git a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty.md b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty.md
index 61311b4df8d471a634be11891872d209f9fc1fa9..612d53fa59edff1f95d55ff551ccf106d81e58c3 100644
--- a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty.md
+++ b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty.md
@@ -1,8 +1,9 @@
 PuTTY 
 =====
+
   
 PuTTY - before we start SSH connection {#putty---before-we-start-ssh-connection style="text-align: start;"}
-------------
+---------------------------------------------------------------------------------
 ### Windows PuTTY Installer
 We recommend downloading "**A Windows installer for everything
 except PuTTYtel**" with ***Pageant*** (SSH authentication agent) and
@@ -26,7 +27,7 @@ if needed.
 holds your private key in memory without needing to retype a passphrase
 on every login. We recommend its usage.
 []()PuTTY - how to connect to the IT4Innovations cluster
-----------
+--------------------------------------------------------
 -   Run PuTTY
 -   Enter Host name and Save session fields with [Login
     address](https://docs.it4i.cz/salomon/accessing-the-cluster/shell-and-data-access/shell-and-data-access)
diff --git a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/puttygen.md b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/puttygen.md
index 7716b5024c56dd5ecb97dae506452a006dc20905..c21d211b929601154e700db239eb557d06e237ce 100644
--- a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/puttygen.md
+++ b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/puttygen.md
@@ -1,5 +1,6 @@
 PuTTY key generator 
 ===================
+
   
 PuTTYgen is the PuTTY key generator. You can load in an existing private
 key and change your passphrase or generate a new public/private key
diff --git a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
index 38516c776631b328200c599013ba3715fdd2c36e..09b61542e1a939bc862b355d50a0906f18082ce7 100644
--- a/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
+++ b/docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md
@@ -1,8 +1,9 @@
 SSH keys 
 ========
+
   
 <span id="Key_management" class="mw-headline">Key management</span>
----------------------
+-------------------------------------------------------------------
 After logging in, you can see .ssh/ directory with SSH keys and
 authorized_keys file:
     $ cd /home/username/
diff --git a/docs.it4i.cz/get-started-with-it4innovations/applying-for-resources.md b/docs.it4i.cz/get-started-with-it4innovations/applying-for-resources.md
index 373340cca4967fdfe27d4c79164c8f04ac36c535..61fe19e28e6a7df6af8840607291aa8a1299bfaf 100644
--- a/docs.it4i.cz/get-started-with-it4innovations/applying-for-resources.md
+++ b/docs.it4i.cz/get-started-with-it4innovations/applying-for-resources.md
@@ -1,5 +1,6 @@
 Applying for Resources 
 ======================
+
   
 Computational resources may be allocated by any of the following
 [Computing resources
diff --git a/docs.it4i.cz/get-started-with-it4innovations/introduction.md b/docs.it4i.cz/get-started-with-it4innovations/introduction.md
index 2fadd6e75b803ba9888700875e5380c7026bf290..7ec0bf31b6f3fb347648c8693454039dfaaa4f29 100644
--- a/docs.it4i.cz/get-started-with-it4innovations/introduction.md
+++ b/docs.it4i.cz/get-started-with-it4innovations/introduction.md
@@ -1,5 +1,6 @@
 Documentation 
 =============
+
   
 Welcome to IT4Innovations documentation pages. The IT4Innovations
 national supercomputing center operates supercomputers
@@ -12,7 +13,7 @@ industrial community worldwide. The purpose of these pages is to provide
 a comprehensive documentation on hardware, software and usage of the
 computers.
 <span class="link-external"><span class="WYSIWYG_LINK">How to read the documentation</span></span>
-------
+--------------------------------------------------------------------------------------------------
 1.  Read the list in the left column. Select the subject of interest.
     Alternatively, use the Search box in the upper right corner.
 2.  Read the CONTENTS in the upper right corner.
@@ -26,9 +27,9 @@ The call-out.   Focus on the call-outs before reading full details.
     [Changelog](https://docs.it4i.cz/get-started-with-it4innovations/changelog)
     to keep up to date.
 Getting Help and Support
--
+------------------------
 Contact [support [at]
-it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz){.email-link} for help and
+it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz) for help and
 support regarding the cluster technology at IT4Innovations.
 Please use **Czech**, **Slovak** or **English** language for
 communication with us.
@@ -62,7 +63,7 @@ Proficiency in MPI, OpenMP, CUDA, UPC or GPI2 programming may be gained
 via the [training provided by
 IT4Innovations.](http://prace.it4i.cz)
 Terminology Frequently Used on These Pages
--------------------
+------------------------------------------
 -   **node:** a computer, interconnected by network to other computers -
     Computational nodes are powerful computers, designed and dedicated
     for executing demanding scientific computations.
@@ -94,11 +95,11 @@ Conventions
 In this documentation, you will find a number of pages containing
 examples. We use the following conventions:
  Cluster command prompt
-``` {.prettyprint .lang-sh}
+``` 
 $
 ```
 Your local linux host command prompt
-``` {.prettyprint .lang-sh}
+``` 
 local $
 ```
  Errata
diff --git a/docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md b/docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
index ff56a00ae127392d17f469d57f5f1b8992972f9a..0974a027e4f9b6aea33c3bd9595d775ead4412a5 100644
--- a/docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
+++ b/docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq.md
@@ -1,9 +1,10 @@
 Certificates FAQ 
 ================
 FAQ about certificates in general
+
   
 Q: What are certificates?
---
+-------------------------
 IT4Innovations employs X.509 certificates for secure communication
 (e.g. credentials exchange) and for grid services related to PRACE, as they
 present a single method of authentication for all PRACE services, where
@@ -22,7 +23,7 @@ However, users need only manage User and CA certificates. Note that your
 user certificate is protected by an associated private key, and this
 **private key must never be disclosed**.
 Q: Which X.509 certificates are recognised by IT4Innovations?
----------------
+-------------------------------------------------------------
 Any certificate that has been issued by a Certification Authority (CA)
 from a member of the IGTF ([http://www.igtf.net](http://www.igtf.net/)) is
 recognised by IT4Innovations. European certificates are issued by
 authentication within Europe. Further, the Czech *"Qualified certificate"*
 (<http://www.ica.cz/Kvalifikovany-certifikat.aspx>), which is used in
 electronic contact with Czech public authorities, is accepted.
 Q: How do I get a User Certificate that can be used with IT4Innovations?
----
+------------------------------------------------------------------------
 To get a certificate, you must make a request to your local, IGTF
 approved, Certificate Authority (CA). Usually you then must visit, in
 person, your nearest Registration Authority (RA) to verify your
@@ -45,18 +46,18 @@ locate your trusted CA via <http://www.eugridpma.org/members/worldmap>.
 In some countries certificates can also be retrieved using the TERENA
 Certificate Service, see the FAQ below for the link.
 Q: Does IT4Innovations support short-lived certificates (SLCS)?
------------------
+---------------------------------------------------------------
 Yes, provided that the CA which provides this service is also a member
 of IGTF.
 Q: Does IT4Innovations support the TERENA certificate service?
-----------------
+--------------------------------------------------------------
 Yes, IT4Innovations supports TERENA eScience personal certificates. For
 more information, please visit
 [https://tcs-escience-portal.terena.org](https://tcs-escience-portal.terena.org/){.spip_url
 .spip_out}, where you can also find out whether your organisation/country
 can use this service.
 Q: What format should my certificate take?
--------------------
+------------------------------------------
 User Certificates come in many formats, the three most common being the
 'PKCS12', 'PEM' and 'JKS' formats.
 The PKCS12 (often abbreviated to 'p12') format stores your user
@@ -78,7 +79,7 @@ UNICORE6.
 To convert your Certificate from p12 to JKS, IT4Innovations recommends
 using the keytool utility (see separate FAQ entry).
 Q: What are CA certificates?
------
+----------------------------
 Certification Authority (CA) certificates are used to verify the link
 between your user certificate and the authority which issued it. They
 are also used to verify the link between the host certificate of a
@@ -106,7 +107,7 @@ then the certificates will be installed into
 $HOME/.globus/certificates. For Globus, you can download the
 globuscerts.tar.gz packet from <http://winnetou.sara.nl/prace/certs/>.
 Q: What is a DN and how do I find mine?
-----------------
+---------------------------------------
 DN stands for Distinguished Name and is part of your user certificate.
 IT4Innovations needs to know your DN to enable your account to use the
 grid services. You may use openssl (see below) to determine your DN or,
@@ -120,7 +121,7 @@ For users running Firefox under Windows, the DN is referred to as the
 Tools->Options->Advanced->Encryption->View Certificates.
 Highlight your name and then click View->Details->Subject.
 Q: How do I use the openssl tool?
-----------
+---------------------------------
 The following examples are for Unix/Linux operating systems only.
 To convert from PEM to p12, enter the following command:
     openssl pkcs12 -export -in usercert.pem -inkey userkey.pem -out
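 As a sketch of the two openssl operations discussed in this FAQ (finding
 your DN and converting PEM to p12), end to end. File names and the subject
 are illustrative, and a throwaway self-signed certificate stands in for a
 real CA-issued one:
 ```
 # Create a throwaway self-signed certificate and key pair to work with
 # (in practice usercert.pem/userkey.pem come from your CA).
 openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
         -subj "/O=Example/CN=Jane Doe" \
         -keyout userkey.pem -out usercert.pem

 # Determine the DN (subject) of the certificate:
 openssl x509 -in usercert.pem -noout -subject

 # Convert the PEM certificate + key into a single PKCS#12 (p12) bundle;
 # -passout avoids the interactive export-password prompt.
 openssl pkcs12 -export -in usercert.pem -inkey userkey.pem \
         -out usercert.p12 -passout pass:demo
 ```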
@@ -140,6 +141,7 @@ To download openssl for both Linux and Windows, please visit
 <http://www.openssl.org/related/binaries.html>. On Mac OS X
 computers, openssl is pre-installed and can be used immediately.
 Q: How do I create and then manage a keystore?
+----------------------------------------------
 IT4innovations recommends the java based keytool utility to create and
 manage keystores, which themselves are stores of keys and certificates.
 For example if you want to convert your pkcs12 formatted key pair into a
@@ -159,6 +161,7 @@ where $mydomain.crt is the certificate of a trusted signing authority
 More information on the tool can be found
 at:<http://docs.oracle.com/javase/7/docs/technotes/tools/solaris/keytool.html>
 Q: How do I use my certificate to access the different grid Services?
+---------------------------------------------------------------------
 Most grid services require the use of your certificate; however, the
 format of your certificate depends on the grid Service you wish to
 employ.
@@ -184,7 +187,7 @@ directory is either $HOME/.globus/certificates or
 prace-ri.eu then you do not have to create the .globus directory nor
 install CA certificates to use this tool alone).
 Q: How do I manually import my certificate into my browser?
--------------
+-----------------------------------------------------------
 If you employ the Firefox browser, then you can import your certificate
 by first choosing the "Preferences" window. For Windows, this is
 Tools->Options. For Linux, this is Edit->Preferences. For Mac,
@@ -206,7 +209,7 @@ for your password. In the "Certificate File To Import" box, type the
 filename of the certificate you wish to import, and then click OK. Click
 Close, and then click OK.
 Q: What is a proxy certificate?
---------
+-------------------------------
 A proxy certificate is a short-lived certificate which may be employed
 by UNICORE and the Globus services. The proxy certificate consists of a
 new user certificate and a newly generated proxy private key. This proxy
@@ -215,7 +218,7 @@ allows a limited delegation of rights. Its default location, for
 Unix/Linux, is /tmp/x509_u*uid* but can be set via the
 $X509_USER_PROXY environment variable.
 Q: What is the MyProxy service?
---------
+-------------------------------
 [The MyProxy Service](http://grid.ncsa.illinois.edu/myproxy/){.spip_in
 .external-link} can be employed by gsissh-term and Globus tools, and is
 an online repository that allows users to store long lived proxy
@@ -226,7 +229,7 @@ to carry their private keys and certificates when travelling; nor do
 users have to install private keys and certificates on possibly insecure
 computers.
 Q: Someone may have copied or had access to the private key of my certificate either in a separate file or in the browser. What should I do?
---
+--------------------------------------------------------------------------------------------------------------------------------------------
 Please ask the CA that issued your certificate to revoke this certificate
 and to supply you with a new one. In addition, please report this to
 IT4Innovations by contacting [the support
diff --git a/docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md b/docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
index b5da24e46f2786c2b817b3855b760c5a85fa7dc1..060f1713ea008cf338f3040bb929479051709d37 100644
--- a/docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
+++ b/docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
@@ -1,7 +1,9 @@
 Obtaining Login Credentials 
 ===========================
+
   
 Obtaining Authorization
+-----------------------
 The computational resources of IT4I  are allocated by the Allocation
 Committee to a
 [Project](https://docs.it4i.cz/get-started-with-it4innovations/introduction),
@@ -68,7 +70,7 @@ be** digitally signed. Read more on [digital
 signatures](#the-certificates-for-digital-signatures)
 below.
 []()The Login Credentials
---
+-------------------------
 Once authorized by PI, every person (PI or Collaborator) wishing to
 access the clusters, should contact the [IT4I
 support](https://support.it4i.cz/rt/) (E-mail: [support
@@ -134,7 +136,7 @@ Generator](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-cl
 Change password in your user profile at 
 <https://extranet.it4i.cz/user/>
 []()The Certificates for Digital Signatures
---------------------
+-------------------------------------------
 We accept personal certificates issued by any widely respected
 certification authority (CA). This includes certificates by CAs
 organized in International Grid Trust Federation
@@ -153,7 +155,7 @@ Certificate generation process is well-described here:
 A FAQ about certificates can be found here: <span>[Certificates
 FAQ](https://docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq).</span>
 []()Alternative Way to Personal Certificate
---------------------
+-------------------------------------------
 Follow these steps **only** if you can not obtain your certificate in a
 standard way.
 In case you choose this procedure, please attach a **scan of photo ID**
@@ -177,7 +179,7 @@ credentials](#the-login-credentials).
     -   Simultaneously you'll get an e-mail with a link to
         the certificate.
 Installation of the Certificate Into Your Mail Client
--------
+-----------------------------------------------------
 The procedure is similar to the following guides:
 -   MS Outlook 2010
     -   [How to Remove, Import, and Export Digital
@@ -190,7 +192,7 @@ The procedure is similar to the following guides:
     -   [Importing a PKCS #12 certificate
         (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-imp)
 End of User Account Lifecycle
-------
+-----------------------------
 User accounts are supported by membership in active Project(s) or by
 affiliation to IT4Innovations. User accounts that lose this support
 (meaning, are not attached to an active project and are not affiliated
diff --git a/docs.it4i.cz/heartbleed-bug.md b/docs.it4i.cz/heartbleed-bug.md
index 04dcdb6db0002f21d5597606595417fce82bfb6b..752b9978ef611cfc80027bd133b42c734525900e 100644
--- a/docs.it4i.cz/heartbleed-bug.md
+++ b/docs.it4i.cz/heartbleed-bug.md
@@ -1,5 +1,6 @@
 Heartbleed bug 
 ==============
+
   
 Last update: 2014-04-16 12:20:32
 Introduction
@@ -46,9 +47,9 @@ credential in a secure way.
 In case that you're using a new digital signature since getting access
 to Anselm, please let us know by writing to <span
 class="moz-txt-link-abbreviated">[support [at]
-it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz){.email-link}</span> .
+it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz)</span> .
 Don't forget to digitally sign your message and please include the
 string "**Digital Signature Change**" in the subject line.
 We'll be collecting new digital signatures till Friday, 18th April.
-<span>[](http://heartbleed.com/){.moz-txt-link-freetext}</span>
+<span>[](http://heartbleed.com/)</span>
  
diff --git a/docs.it4i.cz/index.md b/docs.it4i.cz/index.md
index 2fadd6e75b803ba9888700875e5380c7026bf290..7ec0bf31b6f3fb347648c8693454039dfaaa4f29 100644
--- a/docs.it4i.cz/index.md
+++ b/docs.it4i.cz/index.md
@@ -1,5 +1,6 @@
 Documentation 
 =============
+
   
 Welcome to IT4Innovations documentation pages. The IT4Innovations
 national supercomputing center operates supercomputers
@@ -12,7 +13,7 @@ industrial community worldwide. The purpose of these pages is to provide
 a comprehensive documentation on hardware, software and usage of the
 computers.
 <span class="link-external"><span class="WYSIWYG_LINK">How to read the documentation</span></span>
-------
+--------------------------------------------------------------------------------------------------
 1.  Read the list in the left column. Select the subject of interest.
     Alternatively, use the Search box in the upper right corner.
 2.  Read the CONTENTS in the upper right corner.
@@ -26,9 +27,9 @@ The call-out.   Focus on the call-outs before reading full details.
     [Changelog](https://docs.it4i.cz/get-started-with-it4innovations/changelog)
     to keep up to date.
 Getting Help and Support
--
+------------------------
 Contact [support [at]
-it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz){.email-link} for help and
+it4i.cz](mailto:support%20%5Bat%5D%20it4i.cz) for help and
 support regarding the cluster technology at IT4Innovations.
 Please use **Czech**, **Slovak** or **English** language for
 communication with us.
@@ -62,7 +63,7 @@ Proficiency in MPI, OpenMP, CUDA, UPC or GPI2 programming may be gained
 via the [training provided by
 IT4Innovations.](http://prace.it4i.cz)
 Terminology Frequently Used on These Pages
--------------------
+------------------------------------------
 -   **node:** a computer, interconnected by a network with other computers.
     Computational nodes are powerful computers, designed and dedicated
     to executing demanding scientific computations.
@@ -94,11 +95,11 @@ Conventions
 In this documentation, you will find a number of pages containing
 examples. We use the following conventions:
  Cluster command prompt
-``` {.prettyprint .lang-sh}
+``` 
 $
 ```
 Your local Linux host command prompt
-``` {.prettyprint .lang-sh}
+``` 
 local $
 ```
  Errata
diff --git a/docs.it4i.cz/links.md b/docs.it4i.cz/links.md
index 45bd97354f9de1bea558c8b300d7f30d0a8f8538..4624a3bafee5d6be09ae2f1e3ef09e19d2347fde 100644
--- a/docs.it4i.cz/links.md
+++ b/docs.it4i.cz/links.md
@@ -1,5 +1,6 @@
 Links 
 =====
+
   
 -   <span
     class="WYSIWYG_LINK">[IT4Innovations](http://www.it4i.cz/?lang=en)</span>
diff --git a/docs.it4i.cz/salomon/accessing-the-cluster.md b/docs.it4i.cz/salomon/accessing-the-cluster.md
index db82c44a3fd9611f54757a8dde3e776d15d7e67f..1df98a694e67214dca3d64365da66d4e571d11a9 100644
--- a/docs.it4i.cz/salomon/accessing-the-cluster.md
+++ b/docs.it4i.cz/salomon/accessing-the-cluster.md
@@ -1,5 +1,6 @@
 Shell access and data transfer 
 ==============================
+
   
 Interactive Login
 -----------------
@@ -11,7 +12,7 @@ The alias <span>salomon.it4i.cz is currently not available through VPN
 connection. Please use loginX.salomon.it4i.cz when connected to
 VPN.</span>
   Login address            Port   Protocol   Login node
-  - ------ ---------- ------------------
+  ------------------------ ------ ---------- -----------------------------------------
   salomon.it4i.cz          22     ssh        round-robin DNS record for login[1-4]
   login1.salomon.it4i.cz   22     ssh        login1
   login2.salomon.it4i.cz   22     ssh        login2
@@ -26,12 +27,12 @@ f6:28:98:e4:f9:b2:a6:8f:f2:f4:2d:0a:09:67:69:80 (DSA)
  
 Private key authentication:
 On **Linux** or **Mac**, use
-``` {.prettyprint .lang-sh}
+``` 
 local $ ssh -i /path/to/id_rsa username@salomon.it4i.cz
 ```
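To compare the host key fingerprints published above with what your client reports, `ssh-keygen` can print the fingerprint of any public key file. A minimal local sketch, assuming OpenSSH's ssh-keygen is available and using a throwaway key (the path is illustrative only):

```shell
# Generate a throwaway RSA key pair (illustrative path, no passphrase).
ssh-keygen -t rsa -b 2048 -N '' -f /tmp/demo_key -q
# Print the fingerprint of the public key; compare output like this
# against the fingerprints published in the documentation.
ssh-keygen -lf /tmp/demo_key.pub
# Clean up the throwaway key pair.
rm -f /tmp/demo_key /tmp/demo_key.pub
```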
 If you see the warning message "UNPROTECTED PRIVATE KEY FILE!", use this
 command to restrict the permissions on your private key file.
-``` {.prettyprint .lang-sh}
+``` 
 local $ chmod 600 /path/to/id_rsa
 ```
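The effect of the chmod fix can be checked locally before connecting; a short sketch using a scratch file to stand in for the key (the file is hypothetical, `stat -c` assumes GNU coreutils):

```shell
# Create a scratch file standing in for the private key.
tmpkey=$(mktemp)
chmod 644 "$tmpkey"      # group/world readable -- ssh rejects such keys
chmod 600 "$tmpkey"      # owner read/write only, as ssh requires
stat -c '%a' "$tmpkey"   # prints 600
rm -f "$tmpkey"
```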
 On **Windows**, use [PuTTY ssh
@@ -60,7 +61,7 @@ nodes cedge[1-3].salomon.it4i.cz for increased performance.
  
 HTML commented section #1 (removed cedge servers from the table)
   Address                                                Port   Protocol
-  -------- ------ ------------------
+  ------------------------------------------------------ ------ -----------------------------------------
   salomon.it4i.cz                                        22     scp, sftp
   login1.salomon.it4i.cz                                 22     scp, sftp
   login2.salomon.it4i.cz                                 22     scp, sftp
@@ -71,26 +72,26 @@ key](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters
 HTML commented section #2 (ssh transfer performance data need to be
 verified)
 On Linux or Mac, use an scp or sftp client to transfer the data to Salomon:
-``` {.prettyprint .lang-sh}
+``` 
 local $ scp -i /path/to/id_rsa my-local-file username@salomon.it4i.cz:directory/file
 ```
-``` {.prettyprint .lang-sh}
+``` 
 local $ scp -i /path/to/id_rsa -r my-local-dir username@salomon.it4i.cz:directory
 ```
 or
-``` {.prettyprint .lang-sh}
+``` 
 local $ sftp -o IdentityFile=/path/to/id_rsa username@salomon.it4i.cz
 ```
 A very convenient way to transfer files in and out of the Salomon computer
 is via the FUSE filesystem
 [sshfs](http://linux.die.net/man/1/sshfs)
-``` {.prettyprint .lang-sh}
+``` 
 local $ sshfs -o IdentityFile=/path/to/id_rsa username@salomon.it4i.cz:. mountpoint
 ```
 Using sshfs, the user's Salomon home directory will be mounted on your
 local computer, just like an external disk.
 Learn more about ssh, scp and sshfs by reading their man pages
-``` {.prettyprint .lang-sh}
+``` 
 $ man ssh
 $ man scp
 $ man sshfs
diff --git a/docs.it4i.cz/salomon/accessing-the-cluster/graphical-user-interface.md b/docs.it4i.cz/salomon/accessing-the-cluster/graphical-user-interface.md
index ff3738063f55085dad7cad70901c9102e176ce07..bf8c88bdf23764386bc0098a609c08830258c367 100644
--- a/docs.it4i.cz/salomon/accessing-the-cluster/graphical-user-interface.md
+++ b/docs.it4i.cz/salomon/accessing-the-cluster/graphical-user-interface.md
@@ -1,5 +1,6 @@
 Graphical User Interface 
 ========================
+
   
 X Window System
 ---------------
diff --git a/docs.it4i.cz/salomon/accessing-the-cluster/graphical-user-interface/vnc.md b/docs.it4i.cz/salomon/accessing-the-cluster/graphical-user-interface/vnc.md
index 639e7df17d9fb68eae74bd555db95cd03c5a5262..e04f8ace26a0c15a2618bfefe1ae30f1d932e46e 100644
--- a/docs.it4i.cz/salomon/accessing-the-cluster/graphical-user-interface/vnc.md
+++ b/docs.it4i.cz/salomon/accessing-the-cluster/graphical-user-interface/vnc.md
@@ -1,5 +1,6 @@
 VNC 
 ===
+
   
 The **Virtual Network Computing** (**VNC**) is a graphical [desktop
 sharing](http://en.wikipedia.org/wiki/Desktop_sharing "Desktop sharing")
@@ -10,9 +11,9 @@ remotely control another
 transmits the
 [keyboard](http://en.wikipedia.org/wiki/Computer_keyboard "Computer keyboard")
 and
-[mouse](http://en.wikipedia.org/wiki/Computer_mouse "Computer mouse"){.mw-redirect}
+[mouse](http://en.wikipedia.org/wiki/Computer_mouse "Computer mouse")
 events from one computer to another, relaying the graphical
-[screen](http://en.wikipedia.org/wiki/Computer_screen "Computer screen"){.mw-redirect}
+[screen](http://en.wikipedia.org/wiki/Computer_screen "Computer screen")
 updates back in the other direction, over a
 [network](http://en.wikipedia.org/wiki/Computer_network "Computer network").^[<span>[</span>1<span>]</span>](http://en.wikipedia.org/wiki/Virtual_Network_Computing#cite_note-1)^
 The recommended clients are
@@ -84,7 +85,7 @@ tcp        0      0 127.0.0.1:5961          0.0.0.0:*   
 tcp6       0      0 ::1:5961                :::*                    LISTEN      2022/ssh 
 ```
 Or on Mac OS use this command:
-``` {.prettyprint .lang-sh}
+``` 
 local-mac $ lsof -n -i4TCP:5961 | grep LISTEN
 ssh 75890 sta545 7u IPv4 0xfb062b5c15a56a3b 0t0 TCP 127.0.0.1:5961 (LISTEN)
 ```
@@ -166,7 +167,7 @@ Or this way:
 [username@login2 .vnc]$  pkill vnc
 ```
 GUI applications on compute nodes over VNC
--------------------
+------------------------------------------
 The very [same methods as described
 above](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-and-vnc#gui-applications-on-compute-nodes)
 may be used to run the GUI applications on compute nodes. However, for
diff --git a/docs.it4i.cz/salomon/accessing-the-cluster/outgoing-connections.md b/docs.it4i.cz/salomon/accessing-the-cluster/outgoing-connections.md
index 1f497c593dfc58be589659a3781714ef1df145a1..88c9c84887eab6f463c3659c7573ae1ed60ba713 100644
--- a/docs.it4i.cz/salomon/accessing-the-cluster/outgoing-connections.md
+++ b/docs.it4i.cz/salomon/accessing-the-cluster/outgoing-connections.md
@@ -1,7 +1,9 @@
 Outgoing connections 
 ====================
+
   
 Connection restrictions
+-----------------------
 Outgoing connections from Salomon Cluster login nodes to the outside
 world are restricted to the following ports:
   Port   Protocol
diff --git a/docs.it4i.cz/salomon/accessing-the-cluster/shell-and-data-access/shell-and-data-access.md b/docs.it4i.cz/salomon/accessing-the-cluster/shell-and-data-access/shell-and-data-access.md
index db82c44a3fd9611f54757a8dde3e776d15d7e67f..1df98a694e67214dca3d64365da66d4e571d11a9 100644
--- a/docs.it4i.cz/salomon/accessing-the-cluster/shell-and-data-access/shell-and-data-access.md
+++ b/docs.it4i.cz/salomon/accessing-the-cluster/shell-and-data-access/shell-and-data-access.md
@@ -1,5 +1,6 @@
 Shell access and data transfer 
 ==============================
+
   
 Interactive Login
 -----------------
@@ -11,7 +12,7 @@ The alias <span>salomon.it4i.cz is currently not available through VPN
 connection. Please use loginX.salomon.it4i.cz when connected to
 VPN.</span>
   Login address            Port   Protocol   Login node
-  - ------ ---------- ------------------
+  ------------------------ ------ ---------- -----------------------------------------
   salomon.it4i.cz          22     ssh        round-robin DNS record for login[1-4]
   login1.salomon.it4i.cz   22     ssh        login1
   login2.salomon.it4i.cz   22     ssh        login2
@@ -26,12 +27,12 @@ f6:28:98:e4:f9:b2:a6:8f:f2:f4:2d:0a:09:67:69:80 (DSA)
  
 Private key authentication:
 On **Linux** or **Mac**, use
-``` {.prettyprint .lang-sh}
+``` 
 local $ ssh -i /path/to/id_rsa username@salomon.it4i.cz
 ```
 If you see the warning message "UNPROTECTED PRIVATE KEY FILE!", use this
 command to restrict the permissions on your private key file.
-``` {.prettyprint .lang-sh}
+``` 
 local $ chmod 600 /path/to/id_rsa
 ```
 On **Windows**, use [PuTTY ssh
@@ -60,7 +61,7 @@ nodes cedge[1-3].salomon.it4i.cz for increased performance.
  
 HTML commented section #1 (removed cedge servers from the table)
   Address                                                Port   Protocol
-  -------- ------ ------------------
+  ------------------------------------------------------ ------ -----------------------------------------
   salomon.it4i.cz                                        22     scp, sftp
   login1.salomon.it4i.cz                                 22     scp, sftp
   login2.salomon.it4i.cz                                 22     scp, sftp
@@ -71,26 +72,26 @@ key](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters
 HTML commented section #2 (ssh transfer performance data need to be
 verified)
 On Linux or Mac, use an scp or sftp client to transfer the data to Salomon:
-``` {.prettyprint .lang-sh}
+``` 
 local $ scp -i /path/to/id_rsa my-local-file username@salomon.it4i.cz:directory/file
 ```
-``` {.prettyprint .lang-sh}
+``` 
 local $ scp -i /path/to/id_rsa -r my-local-dir username@salomon.it4i.cz:directory
 ```
 or
-``` {.prettyprint .lang-sh}
+``` 
 local $ sftp -o IdentityFile=/path/to/id_rsa username@salomon.it4i.cz
 ```
 A very convenient way to transfer files in and out of the Salomon computer
 is via the FUSE filesystem
 [sshfs](http://linux.die.net/man/1/sshfs)
-``` {.prettyprint .lang-sh}
+``` 
 local $ sshfs -o IdentityFile=/path/to/id_rsa username@salomon.it4i.cz:. mountpoint
 ```
 Using sshfs, the user's Salomon home directory will be mounted on your
 local computer, just like an external disk.
 Learn more about ssh, scp and sshfs by reading their man pages
-``` {.prettyprint .lang-sh}
+``` 
 $ man ssh
 $ man scp
 $ man sshfs
diff --git a/docs.it4i.cz/salomon/accessing-the-cluster/vpn-access.md b/docs.it4i.cz/salomon/accessing-the-cluster/vpn-access.md
index 5f114d10c967459e662bfc58ca6e4cb8cde9b759..9542e5d3b326d08123c614397831c240a47ec3e3 100644
--- a/docs.it4i.cz/salomon/accessing-the-cluster/vpn-access.md
+++ b/docs.it4i.cz/salomon/accessing-the-cluster/vpn-access.md
@@ -1,8 +1,9 @@
 VPN Access 
 ==========
+
   
 Accessing IT4Innovations internal resources via VPN
------
+---------------------------------------------------
 To use resources and licenses located on the IT4Innovations local
 network, it is necessary to connect to this network via VPN.
 We use Cisco AnyConnect Secure Mobility Client, which is supported on
@@ -15,7 +16,7 @@ the following operating systems:
 -   <span>MacOS</span>
 It is not possible to connect to the VPN from other operating systems.
 <span>VPN client installation</span>
--------------
+------------------------------------
 You can install the VPN client from the web interface after a successful
 login with LDAP credentials at <https://vpn.it4i.cz/user>
 [![VPN
@@ -43,6 +44,7 @@ Install](https://docs.it4i.cz/salomon/vpn_web_download_2.png/@@images/3358d2ce-f
 After successfully downloading the installation file, you have to run it
 with administrator rights and install the VPN client manually.
 Working with VPN client
+-----------------------
 You can use either the graphical user interface or the command line
 interface to run the VPN client on all supported operating systems. We
 suggest using the GUI.
 ![Icon](https://docs.it4i.cz/anselm-cluster-documentation/icon.jpg/@@images/56ee3417-80b8-4988-b9d5-8cda3f894963.jpeg "Icon")
diff --git a/docs.it4i.cz/salomon/compute-nodes.md b/docs.it4i.cz/salomon/compute-nodes.md
index 0ba4cc01e03adb5117df3b66423a48939d89cd32..1988e6b6bb782989f07baff384219de00078139a 100644
--- a/docs.it4i.cz/salomon/compute-nodes.md
+++ b/docs.it4i.cz/salomon/compute-nodes.md
@@ -1,5 +1,6 @@
 Compute Nodes 
 =============
+
   
 Nodes Configuration
 -------------------
diff --git a/docs.it4i.cz/salomon/environment-and-modules.md b/docs.it4i.cz/salomon/environment-and-modules.md
index 6df4d9bb6ad575fa3f2d975da1b7a90f949847c5..2788e79d1eeddddc8f5a3ad01c4d8c6647b3345f 100644
--- a/docs.it4i.cz/salomon/environment-and-modules.md
+++ b/docs.it4i.cz/salomon/environment-and-modules.md
@@ -1,11 +1,12 @@
 Environment and Modules 
 =======================
+
   
 ### Environment Customization
 After logging in, you may want to configure the environment. Write your
 preferred path definitions, aliases, functions and module loads in the
 .bashrc file
-``` {.prettyprint .lang-sh}
+``` 
 # ~/.bashrc
 # Source global definitions
 if [ -f /etc/bashrc ]; then
@@ -32,7 +33,7 @@ Salomon we use Module package interface.
 Application modules on Salomon cluster are built using
 [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). The
 modules are divided into the following structure:
-``` {.prettyprint}
+``` 
 base    Default module class
 bio     Bioinformatics, biology and biomedical
 cae     Computer Aided Engineering (incl. CFD)
@@ -60,25 +61,25 @@ variables for running particular application.
 The modules may be loaded, unloaded and switched, according to momentary
 needs.
 To check available modules use
-``` {.prettyprint .lang-sh}
+``` 
 $ module avail
 ```
 To load a module, for example the OpenMPI module, use
-``` {.prettyprint .lang-sh}
+``` 
 $ module load OpenMPI
 ```
 Loading the OpenMPI module will set up paths and environment variables
 of your active shell such that you are ready to run the OpenMPI software.
 To check loaded modules use
-``` {.prettyprint .lang-sh}
+``` 
 $ module list
 ```
 To unload a module, for example the OpenMPI module, use
-``` {.prettyprint .lang-sh}
+``` 
 $ module unload OpenMPI
 ```
 Learn more about modules by reading the module man page
-``` {.prettyprint .lang-sh}
+``` 
 $ man module
 ```
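Under the hood, loading a module mainly amounts to editing environment variables such as PATH in the current shell. A minimal illustration of that effect, using a hypothetical install prefix and no module tool at all:

```shell
# Prepend a hypothetical tool prefix to PATH, much as `module load` would.
export PATH="/apps/all/OpenMPI/bin:$PATH"
# The new prefix is now searched first:
echo "$PATH" | cut -d: -f1   # prints /apps/all/OpenMPI/bin
```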
 ### EasyBuild Toolchains
@@ -97,8 +98,8 @@ specified in some way.
 The EasyBuild framework prepares the build environment for the different
 toolchain components, by loading their respective modules and defining
 environment variables to specify compiler commands (e.g.,
-via `$F90`{.docutils .literal}), compiler and linker options (e.g.,
-via `$CFLAGS`{.docutils .literal} and `$LDFLAGS`{.docutils .literal}),
+via `$F90`), compiler and linker options (e.g.,
+via `$CFLAGS` and `$LDFLAGS`),
 the list of library names to supply to the linker (via `$LIBS`),
 etc. This enables making easyblocks
 largely toolchain-agnostic since they can simply rely on these
@@ -114,7 +115,7 @@ for:
  
 On Salomon, we currently have the following toolchains installed:
   Toolchain            Module(s)
-  -------------------- --
+  -------------------- ------------------------------------------------
   GCC                  GCC
   ictce                icc, ifort, imkl, impi
   intel                GCC, icc, ifort, imkl, impi
diff --git a/docs.it4i.cz/salomon/hardware-overview-1.1.md b/docs.it4i.cz/salomon/hardware-overview-1.1.md
index 298cb25728bbd82d988d06f51b4e0056ea4a4d5b..6e3489de732af2a4cc622d026c4d73ba787081a3 100644
--- a/docs.it4i.cz/salomon/hardware-overview-1.1.md
+++ b/docs.it4i.cz/salomon/hardware-overview-1.1.md
@@ -1,5 +1,6 @@
 Hardware Overview 
 =================
+
   
 Introduction
 ------------
@@ -51,17 +52,17 @@ Total amount of RAM
 Compute nodes
 -------------
   Node              Count   Processor                          Cores   Memory   Accelerator
-  ----------------- ------- ----------- ------- -------- ---------------------
+  ----------------- ------- ---------------------------------- ------- -------- --------------------------------------------
   w/o accelerator   576     2x Intel Xeon E5-2680v3, 2.5GHz    24      128GB    -
   MIC accelerated   432     2x Intel Xeon E5-2680v3, 2.5GHz    24      128GB    2x Intel Xeon Phi 7120P, 61cores, 16GB RAM
 For more details please refer to the [Compute
 nodes](https://docs.it4i.cz/salomon/compute-nodes).
 Remote visualization nodes
----
+--------------------------
 For remote visualization, two nodes with NICE DCV software are
 available, each configured as follows:
   Node            Count   Processor                         Cores   Memory   GPU Accelerator
-  --------------- ------- ---------- ------- -------- -------
+  --------------- ------- --------------------------------- ------- -------- ------------------------------
   visualization   2       2x Intel Xeon E5-2695v3, 2.3GHz   28      512GB    NVIDIA QUADRO K5000, 4GB RAM
 SGI UV 2000
 -----------
diff --git a/docs.it4i.cz/salomon/hardware-overview-1/hardware-overview.md b/docs.it4i.cz/salomon/hardware-overview-1/hardware-overview.md
index 298cb25728bbd82d988d06f51b4e0056ea4a4d5b..6e3489de732af2a4cc622d026c4d73ba787081a3 100644
--- a/docs.it4i.cz/salomon/hardware-overview-1/hardware-overview.md
+++ b/docs.it4i.cz/salomon/hardware-overview-1/hardware-overview.md
@@ -1,5 +1,6 @@
 Hardware Overview 
 =================
+
   
 Introduction
 ------------
@@ -51,17 +52,17 @@ Total amount of RAM
 Compute nodes
 -------------
   Node              Count   Processor                          Cores   Memory   Accelerator
-  ----------------- ------- ----------- ------- -------- ---------------------
+  ----------------- ------- ---------------------------------- ------- -------- --------------------------------------------
   w/o accelerator   576     2x Intel Xeon E5-2680v3, 2.5GHz    24      128GB    -
   MIC accelerated   432     2x Intel Xeon E5-2680v3, 2.5GHz    24      128GB    2x Intel Xeon Phi 7120P, 61cores, 16GB RAM
 For more details please refer to the [Compute
 nodes](https://docs.it4i.cz/salomon/compute-nodes).
 Remote visualization nodes
----
+--------------------------
 For remote visualization, two nodes with NICE DCV software are
 available, each configured as follows:
   Node            Count   Processor                         Cores   Memory   GPU Accelerator
-  --------------- ------- ---------- ------- -------- -------
+  --------------- ------- --------------------------------- ------- -------- ------------------------------
   visualization   2       2x Intel Xeon E5-2695v3, 2.3GHz   28      512GB    NVIDIA QUADRO K5000, 4GB RAM
 SGI UV 2000
 -----------
diff --git a/docs.it4i.cz/salomon/list_of_modules.md b/docs.it4i.cz/salomon/list_of_modules.md
index 121d2a3a3533e55cb0b06d79814cd06cdaed0679..c9175253de8cd8c6eae2f629f2011d2d7bdfd3ea 100644
--- a/docs.it4i.cz/salomon/list_of_modules.md
+++ b/docs.it4i.cz/salomon/list_of_modules.md
@@ -23,8 +23,8 @@ Categories
 -   [toolchain](#toolchain)
 -   [tools](#tools)
 -   [vis](#vis)
-[base](#categories "Go to list of categories..."){.tooltip}
--------------
+[base](#categories "Go to list of categories...")
+-----------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -59,8 +59,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[bio](#categories "Go to list of categories..."){.tooltip}
-------------
+[bio](#categories "Go to list of categories...")
+----------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -197,8 +197,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[cae](#categories "Go to list of categories..."){.tooltip}
-------------
+[cae](#categories "Go to list of categories...")
+----------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -284,8 +284,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[chem](#categories "Go to list of categories..."){.tooltip}
--------------
+[chem](#categories "Go to list of categories...")
+-----------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -388,8 +388,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[compiler](#categories "Go to list of categories..."){.tooltip}
------------------
+[compiler](#categories "Go to list of categories...")
+---------------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -511,8 +511,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[data](#categories "Go to list of categories..."){.tooltip}
--------------
+[data](#categories "Go to list of categories...")
+-----------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -586,8 +586,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[debugger](#categories "Go to list of categories..."){.tooltip}
------------------
+[debugger](#categories "Go to list of categories...")
+---------------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -657,8 +657,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[devel](#categories "Go to list of categories..."){.tooltip}
---------------
+[devel](#categories "Go to list of categories...")
+------------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -1061,8 +1061,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[geo](#categories "Go to list of categories..."){.tooltip}
-------------
+[geo](#categories "Go to list of categories...")
+----------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -1103,8 +1103,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[lang](#categories "Go to list of categories..."){.tooltip}
--------------
+[lang](#categories "Go to list of categories...")
+-----------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -1334,8 +1334,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[lib](#categories "Go to list of categories..."){.tooltip}
-------------
+[lib](#categories "Go to list of categories...")
+----------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -1621,8 +1621,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[math](#categories "Go to list of categories..."){.tooltip}
--------------
+[math](#categories "Go to list of categories...")
+-----------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -1764,8 +1764,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[mpi](#categories "Go to list of categories..."){.tooltip}
-------------
+[mpi](#categories "Go to list of categories...")
+----------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -1851,8 +1851,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[numlib](#categories "Go to list of categories..."){.tooltip}
----------------
+[numlib](#categories "Go to list of categories...")
+-------------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -1980,8 +1980,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[perf](#categories "Go to list of categories..."){.tooltip}
--------------
+[perf](#categories "Go to list of categories...")
+-----------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -2087,8 +2087,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[phys](#categories "Go to list of categories..."){.tooltip}
--------------
+[phys](#categories "Go to list of categories...")
+-----------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -2121,8 +2121,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[system](#categories "Go to list of categories..."){.tooltip}
----------------
+[system](#categories "Go to list of categories...")
+-------------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -2160,8 +2160,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[toolchain](#categories "Go to list of categories..."){.tooltip}
-------------------
+[toolchain](#categories "Go to list of categories...")
+----------------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -2282,8 +2282,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[tools](#categories "Go to list of categories..."){.tooltip}
---------------
+[tools](#categories "Go to list of categories...")
+------------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
@@ -2607,8 +2607,8 @@ Categories
 </tr>
 </tbody>
 </table>
-[vis](#categories "Go to list of categories..."){.tooltip}
-------------
+[vis](#categories "Go to list of categories...")
+----------------------------------------------------------
 <table>
 <colgroup>
 <col width="33%" />
diff --git a/docs.it4i.cz/salomon/network-1.md b/docs.it4i.cz/salomon/network-1.md
index 51e23f72489530d4d3eb5683323a9138453c3b05..e6f11e73d55724c3f0f8453cd5a1959a4e798bf8 100644
--- a/docs.it4i.cz/salomon/network-1.md
+++ b/docs.it4i.cz/salomon/network-1.md
@@ -1,5 +1,6 @@
 Network 
 =======
+
   
 All compute and login nodes of Salomon are interconnected by 7D Enhanced
 hypercube
@@ -33,7 +34,7 @@ The network provides **2170MB/s** transfer rates via the TCP connection
  
 Example
 -------
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob
 $ qstat -n -u username
                                                             Req'd  Req'd   Elap
@@ -44,12 +45,12 @@ Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
 ```
 In this example, we access the node r4i1n0 over the InfiniBand network
 via the ib0 interface.
-``` {.prettyprint .lang-sh}
+``` 
 $ ssh 10.17.35.19
 ```
 In this example, we get information about the InfiniBand network.
-``` {.prettyprint .lang-sh}
+``` 
 $ ifconfig
 ....
 inet addr:10.17.35.19....
diff --git a/docs.it4i.cz/salomon/network-1/7d-enhanced-hypercube.md b/docs.it4i.cz/salomon/network-1/7d-enhanced-hypercube.md
index bdd229b2043d711e9af22ca5a8b34b004806d60b..c39e1421d95d9fcaf61c0f06448400aa453e60af 100644
--- a/docs.it4i.cz/salomon/network-1/7d-enhanced-hypercube.md
+++ b/docs.it4i.cz/salomon/network-1/7d-enhanced-hypercube.md
@@ -4,7 +4,7 @@
 dimension.](https://docs.it4i.cz/salomon/resource-allocation-and-job-execution/job-submission-and-execution)
 Nodes may be selected via the PBS resource attribute ehc_[1-7]d.
   Hypercube dimension   <span class="pun">node_group_key</span>
-  --------------------- --------------------
+  --------------------- -------------------------------------------
   1D                    ehc_1d
   2D                    ehc_2d
   3D                    ehc_3d
@@ -16,11 +16,11 @@ Nodes may be selected via the PBS resource attribute ehc_[1-7]d .
 topology represents <span class="internal-link">hypercube
 dimension</span>
 0](https://docs.it4i.cz/salomon/network-1/ib-single-plane-topology).
-### 7D Enhanced Hypercube {#d-enhanced-hypercube}
+### 7D Enhanced Hypercube 
 [![7D_Enhanced_hypercube.png](https://docs.it4i.cz/salomon/network-1/7D_Enhanced_hypercube.png/@@images/03693a91-beec-4d04-bf11-7d89c20420a6.png "7D_Enhanced_hypercube.png")](https://docs.it4i.cz/salomon/network-1/7D_Enhanced_hypercube.png)
  
   Node type                              Count   Short name         Long name                  Rack
-  --------------- ------- ------------------ --- -------
+  -------------------------------------- ------- ------------------ -------------------------- -------
   M-Cell compute nodes w/o accelerator   576     cns1 - cns576      r1i0n0 - r4i7n17           1-4
   compute nodes MIC accelerated          432     cns577 - cns1008   r21u01n577 - r37u31n1008   21-38
 ###  IB Topology
diff --git a/docs.it4i.cz/salomon/network-1/ib-single-plane-topology.md b/docs.it4i.cz/salomon/network-1/ib-single-plane-topology.md
index dcd472760ef118f528d79f5ad668e2905bcb4e92..0796d1ec34d8154b446b5552eddd0973ccc474f3 100644
--- a/docs.it4i.cz/salomon/network-1/ib-single-plane-topology.md
+++ b/docs.it4i.cz/salomon/network-1/ib-single-plane-topology.md
@@ -1,5 +1,6 @@
 IB single-plane topology 
 ========================
+
   
 A complete M-Cell assembly consists of four compute racks. Each rack
 contains 4 physical IRUs (Independent Rack Units). Using one dual
diff --git a/docs.it4i.cz/salomon/network-1/ib-single-plane-topology/schematic-representation-of-the-salomon-cluster-ib-single-plain-topology-hypercube-dimension-0.md b/docs.it4i.cz/salomon/network-1/ib-single-plane-topology/schematic-representation-of-the-salomon-cluster-ib-single-plain-topology-hypercube-dimension-0.md
index dcd472760ef118f528d79f5ad668e2905bcb4e92..0796d1ec34d8154b446b5552eddd0973ccc474f3 100644
--- a/docs.it4i.cz/salomon/network-1/ib-single-plane-topology/schematic-representation-of-the-salomon-cluster-ib-single-plain-topology-hypercube-dimension-0.md
+++ b/docs.it4i.cz/salomon/network-1/ib-single-plane-topology/schematic-representation-of-the-salomon-cluster-ib-single-plain-topology-hypercube-dimension-0.md
@@ -1,5 +1,6 @@
 IB single-plane topology 
 ========================
+
   
 A complete M-Cell assembly consists of four compute racks. Each rack
 contains 4 physical IRUs (Independent Rack Units). Using one dual
diff --git a/docs.it4i.cz/salomon/network-1/network.md b/docs.it4i.cz/salomon/network-1/network.md
index 51e23f72489530d4d3eb5683323a9138453c3b05..e6f11e73d55724c3f0f8453cd5a1959a4e798bf8 100644
--- a/docs.it4i.cz/salomon/network-1/network.md
+++ b/docs.it4i.cz/salomon/network-1/network.md
@@ -1,5 +1,6 @@
 Network 
 =======
+
   
 All compute and login nodes of Salomon are interconnected by 7D Enhanced
 hypercube
@@ -33,7 +34,7 @@ The network provides **2170MB/s** transfer rates via the TCP connection
  
 Example
 -------
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob
 $ qstat -n -u username
                                                             Req'd  Req'd   Elap
@@ -44,12 +45,12 @@ Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
 ```
 In this example, we access the node r4i1n0 over the InfiniBand network
 via the ib0 interface.
-``` {.prettyprint .lang-sh}
+``` 
 $ ssh 10.17.35.19
 ```
 In this example, we get information about the InfiniBand network.
-``` {.prettyprint .lang-sh}
+``` 
 $ ifconfig
 ....
 inet addr:10.17.35.19....
diff --git a/docs.it4i.cz/salomon/prace.md b/docs.it4i.cz/salomon/prace.md
index f02965f8a44160a665567d0735818a719bba2b33..4a6adfcd47fe2c8797161b0d272627c82636d111 100644
--- a/docs.it4i.cz/salomon/prace.md
+++ b/docs.it4i.cz/salomon/prace.md
@@ -1,5 +1,6 @@
 PRACE User Support 
 ==================
+
   
 Intro
 -----
@@ -19,7 +20,7 @@ All general [PRACE User
 Documentation](http://www.prace-ri.eu/user-documentation/)
 should be read before continuing reading the local documentation here.
 []()[]()Help and Support
--
+------------------------
 If you have any troubles, need information, request support or want to
 install additional software, please use [PRACE
 Helpdesk](http://www.prace-ri.eu/helpdesk-guide264/).
@@ -31,7 +32,7 @@ to access the web interface of the local (IT4Innovations) request
 tracker and thus a new ticket should be created by sending an e-mail to
 support[at]it4i.cz.
 Obtaining Login Credentials
-----
+---------------------------
 In general PRACE users already have a PRACE account setup through their
 HOMESITE (institution from their country) as a result of rewarded PRACE
 project proposal. This includes signed PRACE AuP, generated and
@@ -80,7 +81,7 @@ class="monospace">salomon-prace.it4i.cz</span> which is distributed
 between the two login nodes. If needed, user can login directly to one
 of the login nodes. The addresses are:
   Login address                  Port   Protocol   Login node
-  ------- ------ ---------- -----------
+  ------------------------------ ------ ---------- ----------------------------------
   salomon-prace.it4i.cz          2222   gsissh     login1, login2, login3 or login4
   login1-prace.salomon.it4i.cz   2222   gsissh     login1
   login2-prace.salomon.it4i.cz   2222   gsissh     login2
@@ -98,7 +99,7 @@ class="monospace">salomon.it4i.cz</span> which is distributed between
 the two login nodes. If needed, user can login directly to one of the
 login nodes. The addresses are:
   Login address            Port   Protocol   Login node
-  - ------ ---------- -----------
+  ------------------------ ------ ---------- ----------------------------------
   salomon.it4i.cz          2222   gsissh     login1, login2, login3 or login4
   login1.salomon.it4i.cz   2222   gsissh     login1
   login2.salomon.it4i.cz   2222   gsissh     login2
@@ -149,7 +150,7 @@ There's one control server and three backend servers for striping and/or
 backup in case one of them would fail.
 **Access from PRACE network:**
   Login address                   Port   Node role
-  -------- ------ ------
+  ------------------------------- ------ -----------------------------
   gridftp-prace.salomon.it4i.cz   2812   Front end /control server
   lgw1-prace.salomon.it4i.cz      2813   Backend / data mover server
   lgw2-prace.salomon.it4i.cz      2813   Backend / data mover server
@@ -166,7 +167,7 @@ Or by using <span class="monospace">prace_service</span> script:
  
 **Access from public Internet:**
   Login address             Port   Node role
-  -- ------ ------
+  ------------------------- ------ -----------------------------
   gridftp.salomon.it4i.cz   2812   Front end /control server
   lgw1.salomon.it4i.cz      2813   Backend / data mover server
   lgw2.salomon.it4i.cz      2813   Backend / data mover server
@@ -183,7 +184,7 @@ Or by using <span class="monospace">prace_service</span> script:
  
 Generally both shared file systems are available through GridFTP:
   File system mount point   Filesystem   Comment
-  -- ------------ ------------------
+  ------------------------- ------------ ----------------------------------------------------------------
   /home                     Lustre       Default HOME directories of users in format /home/prace/login/
   /scratch                  Lustre       Shared SCRATCH mounted on the whole cluster
 More information about the shared file systems is available
@@ -191,7 +192,7 @@ More information about the shared file systems is available
 Please note, that for PRACE users a "prace" directory is used also on
 the SCRATCH file system.
   Data type                      Default path
-  ------- ----------
+  ------------------------------ ---------------------------------
   large project files            /scratch/work/user/prace/login/
   large scratch/temporary data   /scratch/temp/
 Usage of the cluster
@@ -222,10 +223,10 @@ execution is in this [section of general
 documentation](https://docs.it4i.cz/salomon/resource-allocation-and-job-execution/introduction).
 For PRACE users, the default production run queue is "qprace". PRACE
 users can also use two other queues "qexp" and "qfree".
-  ---
+  ---------------------------------------------------------------------------------------------------------------------------------------------
   queue                 Active project   Project resources   Nodes                                     priority   authorization   walltime
                                                                                                                                   default/max
-  --------------------- ---------------- ------------------- ------------------ ---------- --------------- -------------
+  --------------------- ---------------- ------------------- ----------------------------------------- ---------- --------------- -------------
   **qexp**             no               none required       32 nodes, max 8 per user                  150        no              1 / 1h
   Express queue                                                                                                                   
   **qprace**           yes             &gt; 0              <span>1006 nodes, max 86 per job</span>   0          no              24 / 48h
@@ -233,7 +234,7 @@ users can also use two other queues "qexp" and "qfree".
                                                                                                                                   
   **qfree**            yes              none required       752 nodes, max 86 per job                 -1024      no              12 / 12h
   Free resource queue                                                                                                             
-  ---
+  ---------------------------------------------------------------------------------------------------------------------------------------------
 **qprace**, the PRACE Production queue: This queue is intended for
 normal production runs. It is required that active project with nonzero
 remaining resources is specified to enter the qprace. The queue runs
diff --git a/docs.it4i.cz/salomon/resource-allocation-and-job-execution.md b/docs.it4i.cz/salomon/resource-allocation-and-job-execution.md
index ca4160e21fe25241cca4ca2ec5d350132d9a665d..42375d5673906c73b7fc5b05cd8f60c815706fcd 100644
--- a/docs.it4i.cz/salomon/resource-allocation-and-job-execution.md
+++ b/docs.it4i.cz/salomon/resource-allocation-and-job-execution.md
@@ -1,5 +1,6 @@
 Resource Allocation and Job Execution 
 =====================================
+
   
 To run a
 [job](https://docs.it4i.cz/salomon/resource-allocation-and-job-execution/job-submission-and-execution),
@@ -13,7 +14,7 @@ here](https://docs.it4i.cz/pbspro-documentation),
 especially in the [PBS Pro User's
 Guide](https://docs.it4i.cz/pbspro-documentation/pbspro-users-guide).
 Resources Allocation Policy
-----
+---------------------------
 The resources are allocated to the job in a fairshare fashion, subject
 to constraints set by the queue and resources available to the Project.
 [The
@@ -34,7 +35,7 @@ Read more on the [Resource Allocation
 Policy](https://docs.it4i.cz/salomon/resource-allocation-and-job-execution/resources-allocation-policy)
 page.
 Job submission and execution
------
+----------------------------
 Use the **qsub** command to submit your jobs.
 The qsub submits the job into the queue. The qsub command creates a
 request to the PBS Job manager for allocation of specified resources. 
diff --git a/docs.it4i.cz/salomon/resource-allocation-and-job-execution/capacity-computing.md b/docs.it4i.cz/salomon/resource-allocation-and-job-execution/capacity-computing.md
index 8bd8c2272a5756bee74cdfbfdec47e0928992583..693d152c779e870dce9059a94cb948bd8c8249a8 100644
--- a/docs.it4i.cz/salomon/resource-allocation-and-job-execution/capacity-computing.md
+++ b/docs.it4i.cz/salomon/resource-allocation-and-job-execution/capacity-computing.md
@@ -1,5 +1,6 @@
 Capacity computing 
 ==================
+
   
 Introduction
 ------------
@@ -52,11 +53,11 @@ file001, ..., file900). Assume we would like to use each of these input
 files with program executable myprog.x, each as a separate job.
 First, we create a tasklist file (or subjobs list), listing all tasks
 (subjobs) - all input files in our example:
-``` {.prettyprint .lang-sh}
+``` 
 $ find . -name 'file*' > tasklist
 ```
 Then we create jobscript:
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 #PBS -A PROJECT_ID
 #PBS -q qprod
@@ -65,7 +66,7 @@ Then we create jobscript:
 SCR=/scratch/work/user/$USER/$PBS_JOBID
 mkdir -p $SCR ; cd $SCR || exit
 # get individual tasks from tasklist with index from PBS JOB ARRAY
-TASK=$(sed -n "${PBS_ARRAY_INDEX}p" $PBS_O_WORKDIR/tasklist)  
+TASK=$(sed -n "${PBS_ARRAY_INDEX}p" $PBS_O_WORKDIR/tasklist)  
 # copy input file and executable to scratch 
 cp $PBS_O_WORKDIR/$TASK input ; cp $PBS_O_WORKDIR/myprog.x .
 # execute the calculation
@@ -94,7 +95,7 @@ run has to be used properly.
 To submit the job array, use the qsub -J command. The 900 jobs of the
 [example above](#array_example) may be submitted like
 this:
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -N JOBNAME -J 1-900 jobscript
 506493[].isrv5
 ```
the #PBS directives in the beginning of the jobscript file, don't
 forget to set your valid PROJECT_ID and desired queue).
 Sometimes for testing purposes, you may need to submit only one-element
 array. This is not allowed by PBSPro, but there's a workaround:
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -N JOBNAME -J 9-10:2 jobscript
 ```
 This will only choose the lower index (9 in this example) for
 submitting/running your job.
 ### Manage the job array
 Check status of the job array by the qstat command.
-``` {.prettyprint .lang-sh}
+``` 
 $ qstat -a 506493[].isrv5
 isrv5:
                                                             Req'd  Req'd   Elap
@@ -121,7 +122,7 @@ Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
 ```
 The status B means that some subjobs are already running.
 Check status of the first 100 subjobs by the qstat command.
-``` {.prettyprint .lang-sh}
+``` 
 $ qstat -a 12345[1-100].isrv5
 isrv5:
                                                             Req'd  Req'd   Elap
@@ -137,16 +138,16 @@ Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
 ```
 Delete the entire job array. Running subjobs will be killed, queueing
 subjobs will be deleted.
-``` {.prettyprint .lang-sh}
+``` 
 $ qdel 12345[].isrv5
 ```
 Deleting large job arrays may take a while.
 Display status information for all user's jobs, job arrays, and subjobs.
-``` {.prettyprint .lang-sh}
+``` 
 $ qstat -u $USER -t
 ```
 Display status information for all user's subjobs.
-``` {.prettyprint .lang-sh}
+``` 
 $ qstat -u $USER -tJ
 ```
 Read more on job arrays in the [PBSPro Users
@@ -159,7 +160,7 @@ more computers. A job can be a single command or a small script that has
 to be run for each of the lines in the input. GNU parallel is most
 useful in running single core jobs via the queue system on  Anselm.
 For more information and examples see the parallel man page:
-``` {.prettyprint .lang-sh}
+``` 
 $ module add parallel
 $ man parallel
 ```
@@ -174,11 +175,11 @@ files with program executable myprog.x, each as a separate single core
 job. We call these single core jobs tasks.
 First, we create a tasklist file, listing all tasks - all input files in
 our example:
-``` {.prettyprint .lang-sh}
+``` 
 $ find . -name 'file*' > tasklist
 ```
 Then we create jobscript:
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 #PBS -A PROJECT_ID
 #PBS -q qprod
@@ -209,7 +210,7 @@ $TASK.out name. 
 ### Submit the job
 To submit the job, use the qsub command. The 101 tasks' job of the
 [example above](#gp_example) may be submitted like this:
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -N JOBNAME jobscript
 12345.dm2
 ```
@@ -219,7 +220,7 @@ complete in less than 2 hours.
 Please note the #PBS directives in the beginning of the jobscript file,
don't forget to set your valid PROJECT_ID and desired queue.
 []()Job arrays and GNU parallel
---------
+-------------------------------
 Combine the Job arrays and GNU parallel for best throughput of single
 core jobs
 While job arrays are able to utilize all available computational nodes,
@@ -241,16 +242,16 @@ files with program executable myprog.x, each as a separate single core
 job. We call these single core jobs tasks.
 First, we create a tasklist file, listing all tasks - all input files in
 our example:
-``` {.prettyprint .lang-sh}
+``` 
 $ find . -name 'file*' > tasklist
 ```
 Next we create a file, controlling how many tasks will be executed in
 one subjob
-``` {.prettyprint .lang-sh}
+``` 
 $ seq 32 > numtasks
 ```
 Then we create jobscript:
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 #PBS -A PROJECT_ID
 #PBS -q qprod
@@ -262,7 +263,7 @@ SCR=/scratch/work/user/$USER/$PBS_JOBID/$PARALLEL_SEQ
 mkdir -p $SCR ; cd $SCR || exit
 # get individual task from tasklist with index from PBS JOB ARRAY and index form Parallel
 IDX=$(($PBS_ARRAY_INDEX + $PARALLEL_SEQ - 1))
-TASK=$(sed -n "${IDX}p" $PBS_O_WORKDIR/tasklist)
+TASK=$(sed -n "${IDX}p" $PBS_O_WORKDIR/tasklist)
 [ -z "$TASK" ] && exit
 # copy input file and executable to scratch 
 cp $PBS_O_WORKDIR/$TASK input 
@@ -291,7 +292,7 @@ Select  subjob walltime and number of tasks per subjob  carefully
 To submit the job array, use the qsub -J command. The 992 tasks' job of
 the [example above](#combined_example) may be submitted
 like this:
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -N JOBNAME -J 1-992:32 jobscript
 12345[].dm2
 ```
@@ -311,7 +312,7 @@ recommend to try out the examples, before using this for running
 production jobs.
 Unzip the archive in an empty directory on Anselm and follow the
 instructions in the README file
-``` {.prettyprint .lang-sh}
+``` 
 $ unzip capacity.zip
 $ cd capacity
 $ cat README
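The array jobscripts in this file select their task with `sed` indexed by `PBS_ARRAY_INDEX`. That lookup can be dry-run locally without PBS; a minimal sketch, with the index set by hand and an invented `tasklist` standing in for the `find` output:

```shell
#!/bin/bash
# Local dry-run of the tasklist lookup used in the job-array example.
# PBS_ARRAY_INDEX is set by hand here; under PBS each subjob gets its own value.
set -e
workdir=$(mktemp -d)
cd "$workdir"
printf 'file%03d\n' $(seq 1 9) > tasklist   # stand-in for: find . -name 'file*' > tasklist
PBS_ARRAY_INDEX=3
# The braces matter: "$PBS_ARRAY_INDEXp" would expand a (nonexistent) variable PBS_ARRAY_INDEXp.
TASK=$(sed -n "${PBS_ARRAY_INDEX}p" tasklist)
echo "$TASK"   # -> file003
```

Note that `${PBS_ARRAY_INDEX}` must keep its braces inside the `sed` expression, because the trailing `p` would otherwise be taken as part of the variable name.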
diff --git a/docs.it4i.cz/salomon/resource-allocation-and-job-execution/introduction.md b/docs.it4i.cz/salomon/resource-allocation-and-job-execution/introduction.md
index ca4160e21fe25241cca4ca2ec5d350132d9a665d..42375d5673906c73b7fc5b05cd8f60c815706fcd 100644
--- a/docs.it4i.cz/salomon/resource-allocation-and-job-execution/introduction.md
+++ b/docs.it4i.cz/salomon/resource-allocation-and-job-execution/introduction.md
@@ -1,5 +1,6 @@
 Resource Allocation and Job Execution 
 =====================================
+
   
 To run a
 [job](https://docs.it4i.cz/salomon/resource-allocation-and-job-execution/job-submission-and-execution),
@@ -13,7 +14,7 @@ here](https://docs.it4i.cz/pbspro-documentation),
 especially in the [PBS Pro User's
 Guide](https://docs.it4i.cz/pbspro-documentation/pbspro-users-guide).
 Resources Allocation Policy
-----
+---------------------------
 The resources are allocated to the job in a fairshare fashion, subject
 to constraints set by the queue and resources available to the Project.
 [The
@@ -34,7 +35,7 @@ Read more on the [Resource Allocation
 Policy](https://docs.it4i.cz/salomon/resource-allocation-and-job-execution/resources-allocation-policy)
 page.
 Job submission and execution
------
+----------------------------
 Use the **qsub** command to submit your jobs.
 The qsub submits the job into the queue. The qsub command creates a
 request to the PBS Job manager for allocation of specified resources. 
diff --git a/docs.it4i.cz/salomon/resource-allocation-and-job-execution/job-submission-and-execution.md b/docs.it4i.cz/salomon/resource-allocation-and-job-execution/job-submission-and-execution.md
index ca28db65094cf965efd6d3b0881e5eae154eef41..7b9d0d3cea124c13d18b8a2225cfddc0023f07e9 100644
--- a/docs.it4i.cz/salomon/resource-allocation-and-job-execution/job-submission-and-execution.md
+++ b/docs.it4i.cz/salomon/resource-allocation-and-job-execution/job-submission-and-execution.md
@@ -1,5 +1,6 @@
 Job submission and execution 
 ============================
+
   
 Job Submission
 --------------
@@ -14,7 +15,7 @@ When allocating computational resources for the job, please specify
 Use the **qsub** command to submit your job to a queue for allocation of
 the computational resources.
 Submit the job using the qsub command:
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -A Project_ID -q queue -l select=x:ncpus=y,walltime=[[hh:]mm:]ss[.ms] jobscript
 ```
 The qsub submits the job into the queue, in another words the qsub
@@ -27,7 +28,7 @@ PBS statement nodes (qsub -l nodes=nodespec) is not supported on Salomon
 cluster.**
 **
 ### Job Submission Examples
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -A OPEN-0-0 -q qprod -l select=64:ncpus=24,walltime=03:00:00 ./myjob
 ```
 In this example, we allocate 64 nodes, 24 cores per node, for 3 hours.
@@ -35,21 +36,21 @@ We allocate these resources via the qprod queue, consumed resources will
 be accounted to the Project identified by Project ID OPEN-0-0. Jobscript
 myjob will be executed on the first node in the allocation.
  
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -q qexp -l select=4:ncpus=24 -I
 ```
 In this example, we allocate 4 nodes, 24 cores per node, for 1 hour. We
 allocate these resources via the qexp queue. The resources will be
 available interactively
  
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -A OPEN-0-0 -q qlong -l select=10:ncpus=24 ./myjob
 ```
 In this example, we allocate 10 nodes, 24 cores per node, for  72 hours.
 We allocate these resources via the qlong queue. Jobscript myjob will be
 executed on the first node in the allocation.
  
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -A OPEN-0-0 -q qfree -l select=10:ncpus=24 ./myjob
 ```
 In this example, we allocate 10  nodes, 24 cores per node, for 12 hours.
@@ -70,13 +71,13 @@ for testing/experiments, qlong for longer jobs, qfree after the project
 resources have been spent, etc. The Phi cards are thus also available to
 PRACE users. There's no need to ask for permission to utilize the Phi
 cards in project proposals.
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub  -A OPEN-0-0 -I -q qprod -l select=1:ncpus=24:accelerator=True:naccelerators=2:accelerator_model=phi7120 ./myjob
 ```
 In this example, we allocate 1 node, with 24 cores, with 2 Xeon Phi
 7120p cards, running batch job ./myjob. The default time for qprod is
 used, e. g. 24 hours.
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub  -A OPEN-0-0 -I -q qlong -l select=4:ncpus=24:accelerator=True:naccelerators=2 -l walltime=56:00:00 -I
 ```
 In this example, we allocate 4 nodes, with 24 cores per node (totalling
@@ -96,13 +97,13 @@ The jobs on UV2000 are isolated from each other by cpusets, so that a
 job by one user may not utilize CPU or memory allocated to a job by
 other user. Always, full chunks are allocated, a job may only use
 resources of  the NUMA nodes allocated to itself.
-``` {.prettyprint .lang-sh}
+``` 
  $ qsub -A OPEN-0-0 -q qfat -l select=14 ./myjob
 ```
 In this example, we allocate all 14 NUMA nodes (corresponds to 14
 chunks), 112 cores of the SGI UV2000 node  for 72 hours. Jobscript myjob
 will be executed on the node uv1.
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -A OPEN-0-0 -q qfat -l select=1:mem=2000GB ./myjob
 ```
 In this example, we allocate 2000GB of memory on the UV2000 for 72
@@ -112,24 +113,24 @@ Jobscript myjob will be executed on the node uv1.
 All qsub options may be [saved directly into the
 jobscript](#PBSsaved). In such a case, no options to qsub
 are needed.
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub ./myjob
 ```
  
 By default, the PBS batch system sends an e-mail only when the job is
 aborted. Disabling mail events completely can be done like this:
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -m n
 ```
 []()Advanced job placement
----
+--------------------------
 ### Placement by name
 Specific nodes may be allocated via the PBS
-``` {.prettyprint .lang-sh}
+``` 
 qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=24:host=r24u35n680+1:ncpus=24:host=r24u36n681 -I
 ```
 Or using short names
-``` {.prettyprint .lang-sh}
+``` 
 qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=24:host=cns680+1:ncpus=24:host=cns681 -I
 ```
 In this example, we allocate nodes r24u35n680 and r24u36n681, all 24
@@ -139,7 +140,7 @@ available interactively.
 ### Placement by Hypercube dimension
 Nodes may be selected via the PBS resource attribute ehc_[1-7]d .
   Hypercube dimension   <span class="pun">node_group_key</span>
-  --------------------- --------------------
+  --------------------- -------------------------------------------
   1D                    ehc_1d
   2D                    ehc_2d
   3D                    ehc_3d
@@ -165,7 +166,7 @@ same switch prevents hops in the network and provides for unbiased, most
 efficient network communication.
 There are at most 9 nodes sharing the same Infiniband switch.
 Infiniband switch list:
-``` {.prettyprint .lang-sh}
+``` 
 $ qmgr -c "print node @a" | grep switch
 set node r4i1n11 resources_available.switch = r4i1s0sw1
 set node r2i0n0 resources_available.switch = r2i0s0sw1
@@ -173,7 +174,7 @@ set node r2i0n1 resources_available.switch = r2i0s0sw1
 ...
 ```
 List of all nodes per Infiniband switch:
-``` {.prettyprint .lang-sh}
+``` 
 $ qmgr -c "print node @a" | grep r36sw3
 set node r36u31n964 resources_available.switch = r36sw3
 set node r36u32n965 resources_available.switch = r36sw3
@@ -190,12 +191,12 @@ attribute switch.
 We recommend allocating compute nodes of a single switch when best
 possible computational network performance is required to run the job
 efficiently:
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -A OPEN-0-0 -q qprod -l select=9:ncpus=24:switch=r4i1s0sw1 ./myjob
 ```
 In this example, we request all the 9 nodes sharing the r4i1s0sw1 switch
 for 24 hours.
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -A OPEN-0-0 -q qprod -l select=9:ncpus=24 -l place=group=switch ./myjob
 ```
 In this example, we request 9 nodes placed on the same switch using node
@@ -205,14 +206,14 @@ Job Management
 --------------
 Check status of your jobs using the **qstat** and **check-pbs-jobs**
 commands
-``` {.prettyprint .lang-sh}
+``` 
 $ qstat -a
 $ qstat -a -u username
 $ qstat -an -u username
 $ qstat -f 12345.isrv5
 ```
 []()Example:
-``` {.prettyprint .lang-sh}
+``` 
 $ qstat -a
 srv11:
                                                             Req'd  Req'd   Elap
@@ -233,7 +234,7 @@ Check status of your jobs using check-pbs-jobs command. Check presence
 of user's PBS jobs' processes on execution hosts. Display load,
 processes. Display job standard and error output. Continuously display
 (tail -f) job standard or error output.
-``` {.prettyprint .lang-sh}
+``` 
 $ check-pbs-jobs --check-all
 $ check-pbs-jobs --print-load --print-processes
 $ check-pbs-jobs --print-job-out --print-job-err
@@ -241,7 +242,7 @@ $ check-pbs-jobs --jobid JOBID --check-all --print-all
 $ check-pbs-jobs --jobid JOBID --tailf-job-out
 ```
 Examples:
-``` {.prettyprint .lang-sh}
+``` 
 $ check-pbs-jobs --check-all
 JOB 35141.dm2, session_id 71995, user user2, nodes r3i6n2,r3i6n3
 Check session idOK
@@ -251,7 +252,7 @@ r3i6n3No process
 ```
 In this example we see that job 35141.dm2 currently runs no process on
 allocated node r3i6n2, which may indicate an execution error.
-``` {.prettyprint .lang-sh}
+``` 
 $ check-pbs-jobs --print-load --print-processes
 JOB 35141.dm2, session_id 71995, user user2, nodes r3i6n2,r3i6n3
 Print load
@@ -267,7 +268,7 @@ r3i6n299.7 run-task
 In this example we see that job 35141.dm2 currently runs process
 run-task on node r3i6n2, using one thread only, while node r3i6n3 is
 empty, which may indicate an execution error.
-``` {.prettyprint .lang-sh}
+``` 
 $ check-pbs-jobs --jobid 35141.dm2 --print-job-out
 JOB 35141.dm2, session_id 71995, user user2, nodes r3i6n2,r3i6n3
 Print job standard output:
@@ -283,15 +284,15 @@ In this example, we see actual output (some iteration loops) of the job
 Manage your queued or running jobs, using the **qhold**, **qrls**,
 **qdel,** **qsig** or **qalter** commands
 You may release your allocation at any time, using qdel command
-``` {.prettyprint .lang-sh}
+``` 
 $ qdel 12345.isrv5
 ```
 You may kill a running job by force, using qsig command
-``` {.prettyprint .lang-sh}
+``` 
 $ qsig -s 9 12345.isrv5
 ```
 Learn more by reading the pbs man page
-``` {.prettyprint .lang-sh}
+``` 
 $ man pbs_professional
 ```
 Job Execution
@@ -305,7 +306,7 @@ command as an argument and executed by the PBS Professional workload
 manager.
 The jobscript or interactive shell is executed on first of the allocated
 nodes.
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -q qexp -l select=4:ncpus=24 -N Name0 ./myjob
 $ qstat -n -u username
 isrv5:
@@ -321,7 +322,7 @@ myjob will be executed on the node r21u01n577, while the
 nodes r21u02n578, r21u03n579, r21u04n580 are available for use as well.
 The jobscript or interactive shell is by default executed in home
 directory
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub -q qexp -l select=4:ncpus=24 -I
 qsubwaiting for job 15210.isrv5 to start
 qsubjob 15210.isrv5 ready
@@ -337,7 +338,7 @@ may access each other via ssh as well.
 Calculations on allocated nodes may be executed remotely via the MPI,
 ssh, pdsh or clush. You may find out which nodes belong to the
 allocation by reading the $PBS_NODEFILE file
-``` {.prettyprint .lang-sh}
+``` 
 qsub -q qexp -l select=2:ncpus=24 -I
 qsubwaiting for job 15210.isrv5 to start
 qsubjob 15210.isrv5 ready
@@ -364,7 +365,7 @@ Production jobs must use the /scratch directory for I/O
 The recommended way to run production jobs is to change to /scratch
 directory early in the jobscript, copy all inputs to /scratch, execute
 the calculations and copy outputs to home directory.
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 # change to scratch directory, exit on failure
 SCRDIR=/scratch/work/user/$USER/myjob
@@ -403,7 +404,7 @@ Use **mpiprocs** and **ompthreads** qsub options to control the MPI job
 execution.
 Example jobscript for an MPI job with preloaded inputs and executables,
 options for qsub are stored within the script :
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 #PBS -q qprod
 #PBS -N MYJOB
@@ -434,7 +435,7 @@ operational memory.
 Example jobscript for single node calculation, using [local
 scratch](https://docs.it4i.cz/salomon/storage) on the
 node:
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 # change to local scratch directory
 cd /lscratch/$PBS_JOBID || exit
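The jobscripts above all start by creating and entering a per-job scratch directory and bailing out if that fails. A local sketch of that skeleton, with the cluster's scratch root replaced by a temporary directory and `PBS_JOBID` given a stand-in value:

```shell
#!/bin/bash
# Per-job scratch skeleton from the jobscript examples, runnable without PBS.
PBS_JOBID=${PBS_JOBID:-12345.isrv5}   # set by PBS on the cluster; stand-in default here
SCRROOT=$(mktemp -d)                  # stand-in for /scratch/work/user/$USER
SCRDIR=$SCRROOT/myjob/$PBS_JOBID
# change to scratch directory, exit on failure
mkdir -p "$SCRDIR"
cd "$SCRDIR" || exit 1
echo "running in $PWD"
```

The `|| exit` guard is the important part: if the scratch filesystem is unavailable, the job stops immediately instead of running the calculation in the wrong directory.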
diff --git a/docs.it4i.cz/salomon/resource-allocation-and-job-execution/resources-allocation-policy.md b/docs.it4i.cz/salomon/resource-allocation-and-job-execution/resources-allocation-policy.md
index 75f0aa81b34b92eb1811c290ea7eb3de94aef169..ecfb16a7d31da174c3da3c597aa6e1117ae19ae6 100644
--- a/docs.it4i.cz/salomon/resource-allocation-and-job-execution/resources-allocation-policy.md
+++ b/docs.it4i.cz/salomon/resource-allocation-and-job-execution/resources-allocation-policy.md
@@ -1,8 +1,9 @@
 Resources Allocation Policy 
 ===========================
+
   
-Resources Allocation Policy {#resources-allocation-policy}
-----
+Resources Allocation Policy 
+---------------------------
 The resources are allocated to the job in a fairshare fashion, subject
 to constraints set by the queue and resources available to the Project.
 The Fairshare at Anselm ensures that individual users may consume
@@ -213,12 +214,12 @@ Check the status of jobs, queues and compute nodes at
 Salomon](https://docs.it4i.cz/salomon/resource-allocation-and-job-execution/rswebsalomon.png "RSWEB Salomon")
  
 Display the queue status on Salomon:
-``` {.prettyprint .lang-sh}
+``` 
 $ qstat -q
 ```
 The PBS allocation overview may be obtained also using the rspbs
 command.
-``` {.prettyprint .lang-sh}
+``` 
 $ rspbs
 Usagerspbs [options]
 Options:
@@ -274,7 +275,7 @@ Options:
   --incl-finished       Include finished jobs
 ```
 []()Resources Accounting Policy
---------
+-------------------------------
 ### The Core-Hour
 The resources that are currently subject to accounting are the
 core-hours. The core-hours are accounted on the wall clock basis. The
@@ -293,7 +294,7 @@ located here:
 User may check at any time, how many core-hours have been consumed by
 himself/herself and his/her projects. The command is available on
 clusters' login nodes.
-``` {.prettyprint .lang-sh}
+``` 
 $ it4ifree
 Password:
      PID    Total   Used   ...by me Free
diff --git a/docs.it4i.cz/salomon/software/ansys.md b/docs.it4i.cz/salomon/software/ansys.md
index 7b9535924fdf95994646696efcf45adcb8f0bfc9..8a999cee5faa39bbccd69acd1243dcb08cd6fe97 100644
--- a/docs.it4i.cz/salomon/software/ansys.md
+++ b/docs.it4i.cz/salomon/software/ansys.md
@@ -6,7 +6,7 @@ Republic provided all ANSYS licenses for all clusters and supports of
 all ANSYS Products (Multiphysics, Mechanical, MAPDL, CFX, Fluent,
 Maxwell, LS-DYNA...) to IT staff and ANSYS users. If you encounter a
 problem with ANSYS functionality, please contact
-please [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM){.email-link}
+[hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM)
The clusters provide both commercial and academic variants. Academic
variants are distinguished by the word "**Academic...**" in the name of
the license or by the two-letter prefix "**aa_**" in the license feature
diff --git a/docs.it4i.cz/salomon/software/ansys/ansys-cfx.md b/docs.it4i.cz/salomon/software/ansys/ansys-cfx.md
index 2038509e37819e5e190cffeb0b8a95faa1478b0a..9965e5fc78e9ee00e218f79fc7d43e88d3ce206a 100644
--- a/docs.it4i.cz/salomon/software/ansys/ansys-cfx.md
+++ b/docs.it4i.cz/salomon/software/ansys/ansys-cfx.md
@@ -40,7 +40,7 @@ cfx.pbs script and execute it via the qsub command.</span>
     do
      if [ "$hl" = "" ]
      then hl="$host:$procs_per_host"
-     else hl="${hl}:$host:$procs_per_host"
+     else hl="${hl}:$host:$procs_per_host"
      fi
     done
     echo Machines$hl
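The loop above builds the `host:procs:host:procs` machine list by appending to `hl`; the `${hl}` braces make the expansion explicit before the following colon. A runnable sketch with an invented node file standing in for `$PBS_NODEFILE` (deduplication via `sort -u` is an assumption here, since the script's full node-file handling is not shown):

```shell
#!/bin/bash
# Build a CFX-style machine list (host:procs:host:procs...) as in the cfx.pbs loop.
procs_per_host=24
nodefile=$(mktemp)                        # stand-in for $PBS_NODEFILE
printf 'r1i0n0\nr1i0n0\nr1i0n1\n' > "$nodefile"
hl=""
for host in $(sort -u "$nodefile")
do
 if [ "$hl" = "" ]
 then hl="$host:$procs_per_host"
 else hl="${hl}:$host:$procs_per_host"    # append, keeping the existing list
 fi
done
echo "Machines: $hl"   # -> Machines: r1i0n0:24:r1i0n1:24
```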
diff --git a/docs.it4i.cz/salomon/software/ansys/ansys-fluent.md b/docs.it4i.cz/salomon/software/ansys/ansys-fluent.md
index be67f0a505a3e61547154402eaf221327c48362b..122eacdf15a3b6b3f4f011b4f9b8c261f368fef9 100644
--- a/docs.it4i.cz/salomon/software/ansys/ansys-fluent.md
+++ b/docs.it4i.cz/salomon/software/ansys/ansys-fluent.md
@@ -11,7 +11,7 @@ treatment plants. Special models that give the software the ability to
 model in-cylinder combustion, aeroacoustics, turbomachinery, and
 multiphase systems have served to broaden its reach.
 <span>1. Common way to run Fluent over pbs file</span>
---------
+------------------------------------------------------
 <span>To run ANSYS Fluent in batch mode you can utilize/modify the
 default fluent.pbs script and execute it via the qsub command.</span>
     #!/bin/bash
@@ -62,7 +62,7 @@ structure:
 <span>The appropriate dimension of the problem has to be set by
 parameter (2d/3d). </span>
 <span>2. Fast way to run Fluent from command line</span>
-----------
+--------------------------------------------------------
     fluent solver_version [FLUENT_options] -i journal_file -pbs
 This syntax will start the ANSYS FLUENT job under PBS Professional using
 the <span class="monospace">qsub</span> command in a batch manner. When
@@ -76,14 +76,14 @@ working directory, and all output will be written to the file <span
 class="monospace">fluent.o</span><span> </span><span
 class="emphasis">*job_ID*</span>.       
 3. Running Fluent via user's config file
------------------
+----------------------------------------
 The sample script uses a configuration file called <span
 class="monospace">pbs_fluent.conf</span>  if no command line arguments
 are present. This configuration file should be present in the directory
 from which the jobs are submitted (which is also the directory in which
 the jobs are executed). The following is an example of what the content
 of <span class="monospace">pbs_fluent.conf</span> can be:
-``` {.screen}
+``` 
   input="example_small.flin"
   case="Small-1.65m.cas"
   fluent_args="3d -pmyrinet"
@@ -112,7 +112,7 @@ execute the job across multiple processors.               
 <span>To run ANSYS Fluent in batch mode with user's config file you can
 utilize/modify the following script and execute it via the qsub
 command.</span>
-``` {.screen}
+``` 
 #!/bin/sh
 #PBS -l nodes=2:ppn=24
 #PBS -q qprod
@@ -137,7 +137,7 @@ command.</span>
     cpus=`expr $num_nodes \* $NCPUS`
      #Default arguments for mpp jobs, these should be changed to suit your
      #needs.
-     fluent_args="-t${cpus} $fluent_args -cnf=$PBS_NODEFILE"
+     fluent_args="-t${cpus} $fluent_args -cnf=$PBS_NODEFILE"
      ;;
    *)
      #SMP case
@@ -160,7 +160,7 @@ command.</span>
 <span>It runs the jobs out of the directory from which they are
 submitted (PBS_O_WORKDIR).</span>
 4. Running Fluent in parallel
-------
+-----------------------------
 Fluent can be run in parallel only under the Academic Research
 license. To do so, the ANSYS Academic Research license must be placed
 before the ANSYS CFD license in user preferences. To make this change
diff --git a/docs.it4i.cz/salomon/software/ansys/ansys-ls-dyna.md b/docs.it4i.cz/salomon/software/ansys/ansys-ls-dyna.md
index 04670fd4ea88d9937256eee5a4fdb773d4f614f9..6e39bf24438987808bda2b7807f8bf644fc2b844 100644
--- a/docs.it4i.cz/salomon/software/ansys/ansys-ls-dyna.md
+++ b/docs.it4i.cz/salomon/software/ansys/ansys-ls-dyna.md
@@ -44,7 +44,7 @@ default ansysdyna.pbs script and execute it via the qsub command.</span>
     do
      if [ "$hl" = "" ]
      then hl="$host:$procs_per_host"
-     else hl="${hl}:$host:$procs_per_host"
+     else hl="${hl}:$host:$procs_per_host"
      fi
     done
     echo Machines$hl
diff --git a/docs.it4i.cz/salomon/software/ansys/ansys-mechanical-apdl.md b/docs.it4i.cz/salomon/software/ansys/ansys-mechanical-apdl.md
index 442d38b9b591128ad0d163503da84af566ff0160..8f72959ac5c1c04c35c0a01b60e055c09f78c9f4 100644
--- a/docs.it4i.cz/salomon/software/ansys/ansys-mechanical-apdl.md
+++ b/docs.it4i.cz/salomon/software/ansys/ansys-mechanical-apdl.md
@@ -35,7 +35,7 @@ default mapdl.pbs script and execute it via the qsub command.</span>
     do
      if [ "$hl" = "" ]
      then hl="$host:$procs_per_host"
-     else hl="${hl}:$host:$procs_per_host"
+     else hl="${hl}:$host:$procs_per_host"
      fi
     done
     echo Machines$hl
diff --git a/docs.it4i.cz/salomon/software/ansys/ansys-products-mechanical-fluent-cfx-mapdl.md b/docs.it4i.cz/salomon/software/ansys/ansys-products-mechanical-fluent-cfx-mapdl.md
index 7b9535924fdf95994646696efcf45adcb8f0bfc9..8a999cee5faa39bbccd69acd1243dcb08cd6fe97 100644
--- a/docs.it4i.cz/salomon/software/ansys/ansys-products-mechanical-fluent-cfx-mapdl.md
+++ b/docs.it4i.cz/salomon/software/ansys/ansys-products-mechanical-fluent-cfx-mapdl.md
@@ -6,7 +6,7 @@ Republic provided all ANSYS licenses for all clusters and supports of
 all ANSYS Products (Multiphysics, Mechanical, MAPDL, CFX, Fluent,
 Maxwell, LS-DYNA...) to IT staff and ANSYS users. If you encounter a
 problem with ANSYS functionality,
-please contact [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM){.email-link}
+please contact [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM)
 The clusters provide both commercial and academic variants. Academic
 variants are distinguished by the word "**Academic...**" in the license
 name, or by the two-letter prefix "**aa_**" in the license feature
diff --git a/docs.it4i.cz/salomon/software/ansys/licensing.md b/docs.it4i.cz/salomon/software/ansys/licensing.md
index 475c2915e3bcea08d71d228dfb39c997082c4a1e..707f337779da1c8831f0106c8d84e0e526ebd89a 100644
--- a/docs.it4i.cz/salomon/software/ansys/licensing.md
+++ b/docs.it4i.cz/salomon/software/ansys/licensing.md
@@ -1,7 +1,7 @@
 Licensing and Available Versions 
 ================================
 ANSYS licence can be used by:
-------
+-----------------------------
 -   all persons carrying out the CE IT4Innovations Project (In
     addition to the primary licensee, which is VSB - Technical
     University of Ostrava, users are CE IT4Innovations third parties -
@@ -14,6 +14,7 @@ ANSYS licence can be used by:
 -   <span id="result_box" class="short_text"><span class="hps">students
     of</span> <span class="hps">the Technical University</span></span>
 ANSYS Academic Research
+-----------------------
 The licence is intended for science and research, publications,
 students’ projects (academic licence).
 ANSYS COM
diff --git a/docs.it4i.cz/salomon/software/chemistry/molpro.md b/docs.it4i.cz/salomon/software/chemistry/molpro.md
index cd7dbf7c8ca72508fdb3449a46f52800d0b48964..ec14070d5aa17688543e6b37c87181405c947a5b 100644
--- a/docs.it4i.cz/salomon/software/chemistry/molpro.md
+++ b/docs.it4i.cz/salomon/software/chemistry/molpro.md
@@ -23,7 +23,7 @@ Currently on Salomon is installed version 2010.1, patch level 57,
 parallel version compiled with Intel compilers and Intel MPI.
 Compilation parameters are the defaults:
   Parameter                                         Value
-  --- ------
+  ------------------------------------------------- -----------------------------
   <span>max number of atoms</span>                  200
   <span>max number of valence orbitals</span>       300
   <span>max number of basis functions</span>        4095
diff --git a/docs.it4i.cz/salomon/software/chemistry/nwchem.md b/docs.it4i.cz/salomon/software/chemistry/nwchem.md
index 625eb93bcd905b4e3b758bacc030e0f7f5613a6a..6e26309aa353e2b9323dcd5ded0340b1e88dd466 100644
--- a/docs.it4i.cz/salomon/software/chemistry/nwchem.md
+++ b/docs.it4i.cz/salomon/software/chemistry/nwchem.md
@@ -2,7 +2,7 @@ NWChem
 ======
 High-Performance Computational Chemistry
 <span>Introduction</span>
---
+-------------------------
 <span>NWChem aims to provide its users with computational chemistry
 tools that are scalable both in their ability to treat large scientific
 computational chemistry problems efficiently, and in their use of
diff --git a/docs.it4i.cz/salomon/software/chemistry/phono3py.md b/docs.it4i.cz/salomon/software/chemistry/phono3py.md
index 5c34f9576f597ddd06c2cbf12bf7c0e18b8903eb..e4cbb61a8c146fe258409ee956a2f189e90f4f15 100644
--- a/docs.it4i.cz/salomon/software/chemistry/phono3py.md
+++ b/docs.it4i.cz/salomon/software/chemistry/phono3py.md
@@ -1,5 +1,6 @@
 Phono3py 
 ========
+
   
  Introduction
 -------------
@@ -10,18 +11,18 @@ order, joint density of states (JDOS) and weighted-JDOS. For details see
 Phys. Rev. B 91, 094306 (2015) and
 http://atztogo.github.io/phono3py/index.html
 Load the phono3py/0.9.14-ictce-7.3.5-Python-2.7.9 module
-``` {.prettyprint .lang-sh}
+``` 
 $ module load phono3py/0.9.14-ictce-7.3.5-Python-2.7.9
 ```
 Example of calculating thermal conductivity of Si using VASP code.
---------------------
+------------------------------------------------------------------
 ### Calculating force constants
 One needs to calculate second order and third order force constants
 using the diamond structure of silicon stored in
 [POSCAR](https://docs.it4i.cz/salomon/software/chemistry/phono3py-input/poscar-si) 
 (the same form as in VASP) using single-displacement calculations within
 a supercell.
-``` {.prettyprint .lang-sh}
+``` 
 $ cat POSCAR
  Si
    1.0
@@ -41,14 +42,14 @@ Direct
    0.6250000000000000  0.6250000000000000  0.1250000000000000
 ```
 ### Generating displacement using 2x2x2 supercell for both second and third order force constants
-``` {.prettyprint .lang-sh}
+``` 
 $ phono3py -d --dim="2 2 2" -c POSCAR
 ```
 <span class="n">111 displacements are created and stored in <span
 class="n">disp_fc3.yaml</span>, and the structure input files with these
 displacements are POSCAR-00XXX, where XXX runs from 001 to 111.
 </span>
-``` {.prettyprint .lang-sh}
+``` 
 disp_fc3.yaml  POSCAR-00008  POSCAR-00017  POSCAR-00026  POSCAR-00035  POSCAR-00044  POSCAR-00053  POSCAR-00062  POSCAR-00071  POSCAR-00080  POSCAR-00089  POSCAR-00098  POSCAR-00107
 POSCAR         POSCAR-00009  POSCAR-00018  POSCAR-00027  POSCAR-00036  POSCAR-00045  POSCAR-00054  POSCAR-00063  POSCAR-00072  POSCAR-00081  POSCAR-00090  POSCAR-00099  POSCAR-00108
 POSCAR-00001   POSCAR-00010  POSCAR-00019  POSCAR-00028  POSCAR-00037  POSCAR-00046  POSCAR-00055  POSCAR-00064  POSCAR-00073  POSCAR-00082  POSCAR-00091  POSCAR-00100  POSCAR-00109
@@ -72,7 +73,7 @@ script. Then each of the single 111 calculations is submitted
 [run.sh](https://docs.it4i.cz/salomon/software/chemistry/phono3py-input/run.sh)
 by
 [submit.sh](https://docs.it4i.cz/salomon/software/chemistry/phono3py-input/submit.sh).</span>
-``` {.prettyprint .lang-sh}
+``` 
 $ ./prepare.sh
 $ ls
 disp-00001  disp-00009  disp-00017  disp-00025  disp-00033  disp-00041  disp-00049  disp-00057  disp-00065  disp-00073  disp-00081  disp-00089  disp-00097  disp-00105     INCAR
@@ -87,34 +88,34 @@ disp-00008  disp-00016  disp-00024  disp-00032  disp-00040  disp-00048  di
 <span class="n">Tailor your run.sh script to fit your project and
 other needs, and submit all 111 calculations using the submit.sh
 script</span>
-``` {.prettyprint .lang-sh}
+``` 
 $ ./submit.sh
 ```
 <span class="n">Collecting results and post-processing with phono3py</span>
-------
+---------------------------------------------------------------------------
 <span class="n">Once all jobs are finished and vasprun.xml is created in
 each disp-XXXXX directory, the collection is done by </span>
-``` {.prettyprint .lang-sh}
+``` 
 $ phono3py --cf3 disp-{00001..00111}/vasprun.xml
 ```
 <span class="n"><span class="n">and
-`disp_fc2.yaml, FORCES_FC2`{.docutils .literal}, `FORCES_FC3`{.docutils
-.literal}</span> and disp_fc3.yaml should appear and put into the hdf
+`disp_fc2.yaml, FORCES_FC2`, `FORCES_FC3`</span> and disp_fc3.yaml
+should appear and are put into the hdf
 format by </span>
-``` {.prettyprint .lang-sh}
+``` 
 $ phono3py --dim="2 2 2" -c POSCAR
 ```
-resulting in `fc2.hdf5`{.docutils .literal} and `fc3.hdf5`{.docutils
-.literal}
+resulting in `fc2.hdf5` and `fc3.hdf5`
 ### Thermal conductivity
 <span class="pre">The phonon lifetime calculation takes some time;
 however, it is independent for each grid point, so it can be split:
 </span>
-``` {.prettyprint .lang-sh}
+``` 
 $ phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" --sigma 0.1 --wgp
 ```
 ### <span class="n">Inspecting ir_grid_points.yaml</span>
-``` {.prettyprint .lang-sh}
+``` 
 $ grep grid_point ir_grid_points.yaml
 num_reduced_ir_grid_points: 35
 ir_grid_points:  # [address, weight]
@@ -156,17 +157,17 @@ ir_grid_points:  # [address, weight]
 ```
 one finds which grid points need to be calculated, for instance using
 the following
-``` {.prettyprint .lang-sh}
+``` 
 $ phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" -c POSCAR --sigma 0.1 --br --write-gamma --gp="0 1 2"
 ```
 <span class="n">one calculates grid points 0, 1, 2. To automate this,
 one can use scripts that submit 5 points in series, for instance see
 [gofree-cond1.sh](https://docs.it4i.cz/salomon/software/chemistry/phono3py-input/gofree-cond1.sh)</span>
-``` {.prettyprint .lang-sh}
+``` 
 $ qsub gofree-cond1.sh
 ```
 <span class="n">Finally, the thermal conductivity result is produced by
 grouping the per-grid-point conductivity calculations using </span>
-``` {.prettyprint .lang-sh}
+``` 
 $ phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" --br --read_gamma
 ```
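The batching described above (5 grid points per submission, gofree-cond1.sh style) can be sketched as below. The qsub line is a placeholder only; the actual interface of gofree-cond1.sh may differ:

```shell
# Sketch: group grid points into batches of 5, one submission per batch.
# 13 mock grid points stand in for the list from ir_grid_points.yaml;
# the echoed qsub command is illustrative, not the script's real interface.
njobs=0
chunk=""
count=0
for p in $(seq 0 12)
do
  chunk="$chunk$p "
  count=$((count + 1))
  if [ "$count" -eq 5 ]
  then
    echo "qsub gofree-cond1.sh  # --gp=\"${chunk% }\""
    njobs=$((njobs + 1))
    chunk=""
    count=0
  fi
done
# Submit the remaining partial batch, if any.
if [ -n "$chunk" ]
then
  echo "qsub gofree-cond1.sh  # --gp=\"${chunk% }\""
  njobs=$((njobs + 1))
fi
echo "submitted $njobs jobs"   # 13 points in batches of 5 -> 3 jobs
```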
diff --git a/docs.it4i.cz/salomon/software/compilers.md b/docs.it4i.cz/salomon/software/compilers.md
index 3cf8ab3113fe1bc198e8ccddda4846af8a66b058..71e0d9099fea84503eb9deb2f0fb9a405a97e55e 100644
--- a/docs.it4i.cz/salomon/software/compilers.md
+++ b/docs.it4i.cz/salomon/software/compilers.md
@@ -1,6 +1,7 @@
 Compilers 
 =========
 Available compilers, including GNU, INTEL and UPC compilers
+
   
 There are several compilers for different programming languages
 available on the cluster:
diff --git a/docs.it4i.cz/salomon/software/comsol.md b/docs.it4i.cz/salomon/software/comsol.md
index 5a7ac7c3614a0c8b1017864d3dedb6ece1b65275..3c2ea61a831d1c0fdce7d2e0d58b269e9f2250a8 100644
--- a/docs.it4i.cz/salomon/software/comsol.md
+++ b/docs.it4i.cz/salomon/software/comsol.md
@@ -1,9 +1,10 @@
 COMSOL Multiphysics® 
 ====================
+
   
 <span><span>Introduction
 </span></span>
---
+-------------------------
 <span><span>[COMSOL](http://www.comsol.com)</span></span><span><span>
 is a powerful environment for modelling and solving various engineering
 and scientific problems based on partial differential equations. COMSOL
@@ -52,14 +53,14 @@ version. There are two variants of the release:</span></span>
     soon</span>.</span></span>
     </span></span>
 <span><span>To load COMSOL, load the module</span></span>
-``` {.prettyprint .lang-sh}
+``` 
 $ module load COMSOL/51-EDU
 ```
 <span><span>By default the </span></span><span><span>**EDU
 variant**</span></span><span><span> will be loaded. If the user needs
 another version or variant, load that particular version. To obtain the list of
 available versions use</span></span>
-``` {.prettyprint .lang-sh}
+``` 
 $ module avail COMSOL
 ```
 <span><span>If the user needs to prepare COMSOL jobs in the interactive mode,
 it is recommended to use COMSOL on the compute nodes via the PBS Pro
 scheduler. To run the COMSOL Desktop GUI on Windows, it is recommended
 to use the [Virtual Network Computing
 (VNC)](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc).</span></span>
-``` {.prettyprint .lang-sh}
+``` 
 $ xhost +
 $ qsub -I -X -A PROJECT_ID -q qprod -l select=1:ppn=24
 $ module load COMSOL
@@ -76,7 +77,7 @@ $ comsol
 <span><span>To run COMSOL in batch mode, without the COMSOL Desktop GUI
 environment, the user can utilize the default (comsol.pbs) job script and
 execute it via the qsub command.</span></span>
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 #PBS -l select=3:ppn=24
 #PBS -q qprod
@@ -92,7 +93,7 @@ text_nodes < cat $PBS_NODEFILE
 module load COMSOL
 # module load COMSOL/51-EDU
 ntask=$(wc -l < $PBS_NODEFILE)
-comsol -nn ${ntask} batch -configuration /tmp –mpiarg –rmk –mpiarg pbs -tmpdir /scratch/$USER/ -inputfile name_input_f.mph -outputfile name_output_f.mph -batchlog name_log_f.log
+comsol -nn ${ntask} batch -configuration /tmp -mpiarg -rmk -mpiarg pbs -tmpdir /scratch/$USER/ -inputfile name_input_f.mph -outputfile name_output_f.mph -batchlog name_log_f.log
 ```
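A note on counting tasks: `wc -l $PBS_NODEFILE` prints the file name along with the count, while reading the file on stdin yields just the number that `-nn` expects. A minimal sketch with a mock nodefile (the temp path stands in for the real `$PBS_NODEFILE`):

```shell
# Sketch: count allocated tasks from a (mock) PBS nodefile.
# mktemp path is a stand-in for $PBS_NODEFILE.
nodefile=$(mktemp)
printf 'cn001\ncn001\ncn002\n' > "$nodefile"
# "<" keeps the file name out of wc's output; tr strips any padding
# some wc implementations add.
ntask=$(wc -l < "$nodefile" | tr -d ' ')
echo "ntask=$ntask"   # -> ntask=3
rm -f "$nodefile"
```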
 <span><span>Working directory has to be created before sending the
 (comsol.pbs) job script into the queue. Input file (name_input_f.mph)
@@ -100,7 +101,7 @@ has to be in working directory or full path to input file has to be
 specified. The appropriate path to the temp directory of the job has to
 be set by command option (-tmpdir).</span></span>
 LiveLink™ for MATLAB^®^
---
+-------------------------
 <span><span>COMSOL is a software package for the numerical solution of
 partial differential equations. LiveLink for MATLAB allows
 connection to the
@@ -120,7 +121,7 @@ of LiveLink for MATLAB (please see the [ISV
 Licenses](https://docs.it4i.cz/salomon/software/isv_licenses))
 are available. The following example shows how to start a COMSOL model from
 MATLAB via LiveLink in the interactive mode.</span></span>
-``` {.prettyprint .lang-sh}
+``` 
 $ xhost +
 $ qsub -I -X -A PROJECT_ID -q qexp -l select=1:ppn=24
 $ module load MATLAB
@@ -133,7 +134,7 @@ requested and this information is not requested again.</span></span>
 <span><span>To run LiveLink for MATLAB in batch mode with
 (comsol_matlab.pbs) job script you can utilize/modify the following
 script and execute it via the qsub command.</span></span>
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 #PBS -l select=3:ppn=24
 #PBS -q qprod
@@ -149,7 +150,7 @@ text_nodes < cat $PBS_NODEFILE
 module load MATLAB
 module load COMSOL/51-EDU
 ntask=$(wc -l < $PBS_NODEFILE)
-comsol -nn ${ntask} server -configuration /tmp -mpiarg -rmk -mpiarg pbs -tmpdir /scratch/work/user/$USER/work &
+comsol -nn ${ntask} server -configuration /tmp -mpiarg -rmk -mpiarg pbs -tmpdir /scratch/work/user/$USER/work &
 cd /apps/cae/COMSOL/51/mli
 matlab -nodesktop -nosplash -r "mphstart; addpath /scratch/work/user/$USER/work; test_job"
 ```
diff --git a/docs.it4i.cz/salomon/software/comsol/comsol-multiphysics.md b/docs.it4i.cz/salomon/software/comsol/comsol-multiphysics.md
index 5a7ac7c3614a0c8b1017864d3dedb6ece1b65275..3c2ea61a831d1c0fdce7d2e0d58b269e9f2250a8 100644
--- a/docs.it4i.cz/salomon/software/comsol/comsol-multiphysics.md
+++ b/docs.it4i.cz/salomon/software/comsol/comsol-multiphysics.md
@@ -1,9 +1,10 @@
 COMSOL Multiphysics® 
 ====================
+
   
 <span><span>Introduction
 </span></span>
---
+-------------------------
 <span><span>[COMSOL](http://www.comsol.com)</span></span><span><span>
 is a powerful environment for modelling and solving various engineering
 and scientific problems based on partial differential equations. COMSOL
@@ -52,14 +53,14 @@ version. There are two variants of the release:</span></span>
     soon</span>.</span></span>
     </span></span>
 <span><span>To load COMSOL, load the module</span></span>
-``` {.prettyprint .lang-sh}
+``` 
 $ module load COMSOL/51-EDU
 ```
 <span><span>By default the </span></span><span><span>**EDU
 variant**</span></span><span><span> will be loaded. If the user needs
 another version or variant, load that particular version. To obtain the list of
 available versions use</span></span>
-``` {.prettyprint .lang-sh}
+``` 
 $ module avail COMSOL
 ```
 <span><span>If the user needs to prepare COMSOL jobs in the interactive mode,
 it is recommended to use COMSOL on the compute nodes via the PBS Pro
 scheduler. To run the COMSOL Desktop GUI on Windows, it is recommended
 to use the [Virtual Network Computing
 (VNC)](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc).</span></span>
-``` {.prettyprint .lang-sh}
+``` 
 $ xhost +
 $ qsub -I -X -A PROJECT_ID -q qprod -l select=1:ppn=24
 $ module load COMSOL
@@ -76,7 +77,7 @@ $ comsol
 <span><span>To run COMSOL in batch mode, without the COMSOL Desktop GUI
 environment, the user can utilize the default (comsol.pbs) job script and
 execute it via the qsub command.</span></span>
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 #PBS -l select=3:ppn=24
 #PBS -q qprod
@@ -92,7 +93,7 @@ text_nodes < cat $PBS_NODEFILE
 module load COMSOL
 # module load COMSOL/51-EDU
 ntask=$(wc -l < $PBS_NODEFILE)
-comsol -nn ${ntask} batch -configuration /tmp –mpiarg –rmk –mpiarg pbs -tmpdir /scratch/$USER/ -inputfile name_input_f.mph -outputfile name_output_f.mph -batchlog name_log_f.log
+comsol -nn ${ntask} batch -configuration /tmp -mpiarg -rmk -mpiarg pbs -tmpdir /scratch/$USER/ -inputfile name_input_f.mph -outputfile name_output_f.mph -batchlog name_log_f.log
 ```
 <span><span>Working directory has to be created before sending the
 (comsol.pbs) job script into the queue. Input file (name_input_f.mph)
@@ -100,7 +101,7 @@ has to be in working directory or full path to input file has to be
 specified. The appropriate path to the temp directory of the job has to
 be set by command option (-tmpdir).</span></span>
 LiveLink™ for MATLAB^®^
---
+-------------------------
 <span><span>COMSOL is a software package for the numerical solution of
 partial differential equations. LiveLink for MATLAB allows
 connection to the
@@ -120,7 +121,7 @@ of LiveLink for MATLAB (please see the [ISV
 Licenses](https://docs.it4i.cz/salomon/software/isv_licenses))
 are available. The following example shows how to start a COMSOL model from
 MATLAB via LiveLink in the interactive mode.</span></span>
-``` {.prettyprint .lang-sh}
+``` 
 $ xhost +
 $ qsub -I -X -A PROJECT_ID -q qexp -l select=1:ppn=24
 $ module load MATLAB
@@ -133,7 +134,7 @@ requested and this information is not requested again.</span></span>
 <span><span>To run LiveLink for MATLAB in batch mode with
 (comsol_matlab.pbs) job script you can utilize/modify the following
 script and execute it via the qsub command.</span></span>
-``` {.prettyprint .lang-sh}
+``` 
 #!/bin/bash
 #PBS -l select=3:ppn=24
 #PBS -q qprod
@@ -149,7 +150,7 @@ text_nodes < cat $PBS_NODEFILE
 module load MATLAB
 module load COMSOL/51-EDU
 ntask=$(wc -l < $PBS_NODEFILE)
-comsol -nn ${ntask} server -configuration /tmp -mpiarg -rmk -mpiarg pbs -tmpdir /scratch/work/user/$USER/work &
+comsol -nn ${ntask} server -configuration /tmp -mpiarg -rmk -mpiarg pbs -tmpdir /scratch/work/user/$USER/work &
 cd /apps/cae/COMSOL/51/mli
 matlab -nodesktop -nosplash -r "mphstart; addpath /scratch/work/user/$USER/work; test_job"
 ```
diff --git a/docs.it4i.cz/salomon/software/comsol/licensing-and-available-versions.md b/docs.it4i.cz/salomon/software/comsol/licensing-and-available-versions.md
index 66e530b4bcbddfa9b7bbb75ab4db713edddb34bd..22792c3c1bbf16e56365878b1a476b582dec2ae4 100644
--- a/docs.it4i.cz/salomon/software/comsol/licensing-and-available-versions.md
+++ b/docs.it4i.cz/salomon/software/comsol/licensing-and-available-versions.md
@@ -1,7 +1,7 @@
 Licensing and Available Versions 
 ================================
 Comsol licence can be used by:
--------
+------------------------------
 -   all persons carrying out the CE IT4Innovations Project (In
     addition to the primary licensee, which is VSB - Technical
     University of Ostrava, users are CE IT4Innovations third parties -
@@ -14,11 +14,11 @@ Comsol licence can be used by:
 -   <span id="result_box" class="short_text"><span class="hps">students
     of</span> <span class="hps">the Technical University</span></span>
 Comsol EDU Network Licence
----
+--------------------------
 The licence is intended for science and research, publications,
 students’ projects, teaching (academic licence).
 Comsol COM Network Licence
----
+--------------------------
 The licence is intended for science and research, publications,
 students’ projects, commercial research with no commercial use
 restrictions. <span id="result_box"><span class="hps">E</span><span
diff --git a/docs.it4i.cz/salomon/software/debuggers.1.md b/docs.it4i.cz/salomon/software/debuggers.1.md
index ec7667feb2fd66fa8ca0de417ec15619311ed117..b29b9eee3fca5a4720634016085157be34b817df 100644
--- a/docs.it4i.cz/salomon/software/debuggers.1.md
+++ b/docs.it4i.cz/salomon/software/debuggers.1.md
@@ -1,5 +1,6 @@
 Debuggers and profilers summary 
 ===============================
+
   
 Introduction
 ------------
@@ -22,6 +23,7 @@ Read more at the [Intel
 Debugger](https://docs.it4i.cz/salomon/software/intel-suite/intel-debugger)
 page.
 Allinea Forge (DDT/MAP)
+-----------------------
 Allinea DDT is a commercial debugger primarily for debugging parallel
 MPI or OpenMP programs. It also has support for GPU (CUDA) and Intel
 Xeon Phi accelerators. DDT provides all the standard debugging features
@@ -35,7 +37,7 @@ Read more at the [Allinea
 DDT](https://docs.it4i.cz/salomon/software/debuggers/allinea-ddt)
 page.
 Allinea Performance Reports
-----
+---------------------------
 Allinea Performance Reports characterize the performance of HPC
 application runs. After executing your application through the tool, a
 synthetic HTML report is generated automatically, containing information
diff --git a/docs.it4i.cz/salomon/software/debuggers/aislinn.md b/docs.it4i.cz/salomon/software/debuggers/aislinn.md
index b9ce1b6c76a6eed04de7c99aeffaeec05b436a88..6f94f8b4fd45f5c349c20511e17c467c2287591b 100644
--- a/docs.it4i.cz/salomon/software/debuggers/aislinn.md
+++ b/docs.it4i.cz/salomon/software/debuggers/aislinn.md
@@ -16,7 +16,7 @@ problems, please contact the author<stanislav.bohm@vsb.cz>.
 ### Usage
 Let us consider the following program, which contains a bug that is not
 manifested in all runs:
-``` {.literal-block}
+``` 
 #include <mpi.h>
 #include <stdlib.h>
 int main(int argc, char **argv) {
@@ -49,25 +49,25 @@ from process 1 is received first, then the run does not expose the
 error. If a message from process 2 is received first, then an invalid
 memory write occurs at line 16.
 To verify this program by Aislinn, we first load Aislinn itself:
-``` {.literal-block}
+``` 
 $ module load aislinn
 ```
 Now we compile the program by Aislinn implementation of MPI. There are
-`mpicc`{.docutils .literal} for C programs and `mpicxx`{.docutils
-.literal} for C++ programs. Only MPI parts of the verified application
+`mpicc` for C programs and `mpicxx` for C++ programs. Only the MPI
+parts of the verified application
 have to be recompiled; non-MPI parts may remain untouched. Let us assume
-that our program is in `test.cpp`{.docutils .literal}.
-``` {.literal-block}
+that our program is in `test.cpp`.
+``` 
 $ mpicc -g test.cpp -o test
 ```
-The `-g`{.docutils .literal} flag is not necessary, but it puts more
+The `-g` flag is not necessary, but it puts more
 debugging information into the program, hence Aislinn may provide a
 more detailed report. The command produces the executable
 file `test`.
-Now we run the Aislinn itself. The argument `-p 3`{.docutils .literal}
+Now we run Aislinn itself. The argument `-p 3`
 specifies that we want to verify our program for the case of three MPI
 processes
-``` {.literal-block}
+``` 
 $ aislinn -p 3 ./test
 ==AN== INFO: Aislinn v0.3.0
 ==AN== INFO: Found error 'Invalid write'
@@ -115,6 +115,6 @@ will be removed in the future:
     communication, and communicator management. Unfortunately, MPI-IO
     and one-side communication is not implemented yet.
 -   Each MPI process can use only one thread (if you use OpenMP, set
-    `OMP_NUM_THREADS`{.docutils .literal} to 1).
+    `OMP_NUM_THREADS` to 1).
 -   There are some limitations for using files, but if the program just
     reads inputs and writes results, it is ok.
diff --git a/docs.it4i.cz/salomon/software/debuggers/allinea-performance-reports.md b/docs.it4i.cz/salomon/software/debuggers/allinea-performance-reports.md
index 3b0b5db6a349278f03a1699529825f293f90335f..4246ace8ef4906778ac095aa395018dfa54ab95c 100644
--- a/docs.it4i.cz/salomon/software/debuggers/allinea-performance-reports.md
+++ b/docs.it4i.cz/salomon/software/debuggers/allinea-performance-reports.md
@@ -1,6 +1,7 @@
 Allinea Performance Reports 
 ===========================
 quick application profiling
+
   
 Introduction
 ------------
diff --git a/docs.it4i.cz/salomon/software/debuggers/intel-vtune-amplifier.md b/docs.it4i.cz/salomon/software/debuggers/intel-vtune-amplifier.md
index 6b53ac22565b09b1ab906298c2cf0085f6472a23..34f85f1d14b66718c2c240a2f34f7cae935ec14f 100644
--- a/docs.it4i.cz/salomon/software/debuggers/intel-vtune-amplifier.md
+++ b/docs.it4i.cz/salomon/software/debuggers/intel-vtune-amplifier.md
@@ -1,5 +1,6 @@
 Intel VTune Amplifier XE 
 ========================
+
   
 Introduction
 ------------
diff --git a/docs.it4i.cz/salomon/software/debuggers/summary.md b/docs.it4i.cz/salomon/software/debuggers/summary.md
index ec7667feb2fd66fa8ca0de417ec15619311ed117..b29b9eee3fca5a4720634016085157be34b817df 100644
--- a/docs.it4i.cz/salomon/software/debuggers/summary.md
+++ b/docs.it4i.cz/salomon/software/debuggers/summary.md
@@ -1,5 +1,6 @@
 Debuggers and profilers summary 
 ===============================
+
   
 Introduction
 ------------
@@ -22,6 +23,7 @@ Read more at the [Intel
 Debugger](https://docs.it4i.cz/salomon/software/intel-suite/intel-debugger)
 page.
 Allinea Forge (DDT/MAP)
+-----------------------
 Allinea DDT is a commercial debugger primarily for debugging parallel
 MPI or OpenMP programs. It also has support for GPU (CUDA) and Intel
 Xeon Phi accelerators. DDT provides all the standard debugging features
@@ -35,7 +37,7 @@ Read more at the [Allinea
 DDT](https://docs.it4i.cz/salomon/software/debuggers/allinea-ddt)
 page.
 Allinea Performance Reports
-----
+---------------------------
 Allinea Performance Reports characterize the performance of HPC
 application runs. After executing your application through the tool, a
 synthetic HTML report is generated automatically, containing information
diff --git a/docs.it4i.cz/salomon/software/debuggers/total-view.md b/docs.it4i.cz/salomon/software/debuggers/total-view.md
index 09090e2a4425b05fba843532ef4403e41091e544..c096fe34b058ed947585ea08752b80c51e03fbb6 100644
--- a/docs.it4i.cz/salomon/software/debuggers/total-view.md
+++ b/docs.it4i.cz/salomon/software/debuggers/total-view.md
@@ -3,7 +3,7 @@ Total View
 TotalView is a GUI-based source code multi-process, multi-thread
 debugger.
 License and Limitations for cluster Users
-------------------
+-----------------------------------------
 On the cluster users can debug OpenMP or MPI code that runs up to 64
 parallel processes. These limitations mean that:
     1 user can debug up to 64 processes, or
@@ -12,7 +12,7 @@ Debugging of GPU accelerated codes is also supported.
 You can check the status of the licenses
 [here](https://extranet.it4i.cz/rsweb/anselm/license/totalview).
 Compiling Code to run with TotalView
--------------
+------------------------------------
 ### Modules
 Load all necessary modules to compile the code. For example:
     module load intel
@@ -29,7 +29,7 @@ includes even more debugging information. This option is available for
 GNU and INTEL C/C++ and Fortran compilers.
 **-O0** Suppress all optimizations.
 Starting a Job with TotalView
-------
+-----------------------------
 Be sure to log in with X window forwarding enabled. This could mean
 using the -X option in ssh: 
     ssh -X username@salomon.it4i.cz 
@@ -50,8 +50,8 @@ to setup your TotalView environment: 
 **Please note:** To be able to run the parallel debugging procedure from
 the command line without stopping the debugger in the mpiexec source code,
 you have to add the following function to your **~/.tvdrc** file:
-    proc mpi_auto_run_starter {loaded_id} {
-        set starter_programs {mpirun mpiexec orterun}
+    proc mpi_auto_run_starter {loaded_id} {
+        set starter_programs {mpirun mpiexec orterun}
         set executable_name [TV::symbol get $loaded_id full_pathname]
         set file_component [file tail $executable_name]
         if {[lsearch -exact $starter_programs $file_component] != -1} {
diff --git a/docs.it4i.cz/salomon/software/debuggers/valgrind.md b/docs.it4i.cz/salomon/software/debuggers/valgrind.md
index 2fef5871c7e9450843cf5aa82893a396cab38f62..b698ba1eb948841b131076a6b79ed3286be64f76 100644
--- a/docs.it4i.cz/salomon/software/debuggers/valgrind.md
+++ b/docs.it4i.cz/salomon/software/debuggers/valgrind.md
@@ -139,7 +139,7 @@ with <span class="monospace">--leak-check=full</span> option :
 Now we can see that the memory leak is due to the <span
 class="monospace">malloc()</span> at line 6.
 <span>Usage with MPI</span>
-----
+---------------------------
 Although Valgrind is not primarily a parallel debugger, it can be used
 to debug parallel applications as well. When launching your parallel
 applications, prepend the valgrind command. For example:
diff --git a/docs.it4i.cz/salomon/software/debuggers/vampir.md b/docs.it4i.cz/salomon/software/debuggers/vampir.md
index 363d24bec314eaee26b69cb81f03118c73674bb4..c021c6a3abe7cd5324a0056677c0cb7a2cf5d979 100644
--- a/docs.it4i.cz/salomon/software/debuggers/vampir.md
+++ b/docs.it4i.cz/salomon/software/debuggers/vampir.md
@@ -7,7 +7,7 @@ functionality to collect traces, you need to use a trace collection tool
 [Score-P](https://docs.it4i.cz/salomon/software/debuggers/score-p))
 first to collect the traces.
 ![Vampir screenshot](https://docs.it4i.cz/salomon/software/debuggers/Snmekobrazovky20160708v12.33.35.png/@@images/42d90ce5-8468-4edb-94bb-4009853d9f65.png "Vampir screenshot")
-------
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Installed versions
 ------------------
 Version 8.5.0 is currently installed as module <span
diff --git a/docs.it4i.cz/salomon/software/intel-suite.md b/docs.it4i.cz/salomon/software/intel-suite.md
index 01daba67ed047a3ccdd229242906476624a594da..ecac6bba170048c58b23538e78dba230a1f02daa 100644
--- a/docs.it4i.cz/salomon/software/intel-suite.md
+++ b/docs.it4i.cz/salomon/software/intel-suite.md
@@ -1,10 +1,11 @@
 Intel Parallel Studio 
 =====================
+
   
 The Salomon cluster provides the following elements of the Intel Parallel
 Studio XE:
   Intel Parallel Studio XE
-  ---
+  -------------------------------------------------
   Intel Compilers
   Intel Debugger
   Intel MKL Library
@@ -39,7 +40,7 @@ Read more at the [Intel
 Debugger](https://docs.it4i.cz/salomon/software/intel-suite/intel-debugger)
 page.
 Intel Math Kernel Library
---
+-------------------------
 Intel Math Kernel Library (Intel MKL) is a library of math kernel
 subroutines, extensively threaded and optimized for maximum performance.
 Intel MKL unites and provides these basic components: BLAS, LAPACK,
@@ -50,7 +51,7 @@ Read more at the [Intel
 MKL](https://docs.it4i.cz/salomon/software/intel-suite/intel-mkl)
 page.
 Intel Integrated Performance Primitives
-----------------
+---------------------------------------
 Intel Integrated Performance Primitives, version 7.1.1, compiled for AVX
 is available via the module ipp. The IPP is a library of highly optimized
 algorithmic building blocks for media and data applications. This
@@ -62,7 +63,7 @@ Read more at the [Intel
 IPP](https://docs.it4i.cz/salomon/software/intel-suite/intel-integrated-performance-primitives)
 page.
 Intel Threading Building Blocks
---------
+-------------------------------
 Intel Threading Building Blocks (Intel TBB) is a library that supports
 scalable parallel programming using standard ISO C++ code. It does not
 require special languages or compilers. It is designed to promote
diff --git a/docs.it4i.cz/salomon/software/intel-suite/intel-advisor.md b/docs.it4i.cz/salomon/software/intel-suite/intel-advisor.md
index 043b023a1a868cfc03575cbf891fa7abc0f1fa02..8e8d7da74f4361580edb6413713fe44a50e0b253 100644
--- a/docs.it4i.cz/salomon/software/intel-suite/intel-advisor.md
+++ b/docs.it4i.cz/salomon/software/intel-suite/intel-advisor.md
@@ -7,10 +7,10 @@ parallelism.
 Installed versions
 ------------------
 The following versions are currently available on Salomon as modules:
-  --------------- 
+  --------------- -----------------------
   **Version**     **Module**
   2016 Update 2   Advisor/2016_update2
-  --------------- 
+  --------------- -----------------------
 Usage
 -----
 Your program should be compiled with the -g switch to include symbol names.
diff --git a/docs.it4i.cz/salomon/software/intel-suite/intel-compilers.md b/docs.it4i.cz/salomon/software/intel-suite/intel-compilers.md
index 6017e153ae7c241fb61e7ab6948e5da5bd044ca2..2cf0227db0cf847244f26aeba7d64b0c83c39f31 100644
--- a/docs.it4i.cz/salomon/software/intel-suite/intel-compilers.md
+++ b/docs.it4i.cz/salomon/software/intel-suite/intel-compilers.md
@@ -1,5 +1,6 @@
 Intel Compilers 
 ===============
+
   
 Multiple versions of the Intel compilers are available via the module
 intel. The compilers include the icc C and C++ compiler and the ifort
@@ -26,7 +27,7 @@ parallelization by the **-openmp** compiler switch.
 Read more
 at <https://software.intel.com/en-us/intel-cplusplus-compiler-16.0-user-and-reference-guide>
 Sandy Bridge/Ivy Bridge/Haswell binary compatibility
-------
+----------------------------------------------------
 Anselm nodes are currently equipped with Sandy Bridge CPUs, while
 Salomon compute nodes are equipped with Haswell based architecture. The
 UV1 SMP compute server has Ivy Bridge CPUs, which are equivalent to
diff --git a/docs.it4i.cz/salomon/software/intel-suite/intel-debugger.md b/docs.it4i.cz/salomon/software/intel-suite/intel-debugger.md
index 020094925f0cf72345b4b2076152d02b46d6ba92..231f58263a7d8ae4c9b5580d409e815f9847901d 100644
--- a/docs.it4i.cz/salomon/software/intel-suite/intel-debugger.md
+++ b/docs.it4i.cz/salomon/software/intel-suite/intel-debugger.md
@@ -1,9 +1,10 @@
 Intel Debugger 
 ==============
+
   
 IDB is no longer available since Intel Parallel Studio 2015
 Debugging serial applications
-------
+-----------------------------
 The Intel debugger version 13.0 is available via the module intel. The
 debugger works for applications compiled with the C and C++ compilers and
 the ifort Fortran 77/90/95 compiler. The debugger provides a Java GUI
@@ -32,7 +33,7 @@ myprog.c with debugging options -O0 -g and run the idb debugger
 interactively on the myprog.x executable. The GUI access is via X11 port
 forwarding provided by the PBS workload manager.
 Debugging parallel applications
---------
+-------------------------------
 Intel debugger is capable of debugging multithreaded and MPI parallel
 programs as well.
 ### Small number of MPI ranks
diff --git a/docs.it4i.cz/salomon/software/intel-suite/intel-inspector.md b/docs.it4i.cz/salomon/software/intel-suite/intel-inspector.md
index eb960e36d3926c6ff025e21b7f462f9af870a207..24be3ea473f13afb0bb78d54ec6fd44b45794f89 100644
--- a/docs.it4i.cz/salomon/software/intel-suite/intel-inspector.md
+++ b/docs.it4i.cz/salomon/software/intel-suite/intel-inspector.md
@@ -7,10 +7,10 @@ conditions, deadlocks etc.
 Installed versions
 ------------------
 The following versions are currently available on Salomon as modules:
-  --------------- --
+  --------------- -------------------------
   **Version**     **Module**
   2016 Update 1   Inspector/2016_update1
-  --------------- --
+  --------------- -------------------------
 Usage
 -----
 Your program should be compiled with the -g switch to include symbol names.
diff --git a/docs.it4i.cz/salomon/software/intel-suite/intel-integrated-performance-primitives.md b/docs.it4i.cz/salomon/software/intel-suite/intel-integrated-performance-primitives.md
index 19091f58369e2eb8b0adb5880a8ffd98e3e05e22..4e1a08d353fdfb57a6d72a67afce7280e0990bad 100644
--- a/docs.it4i.cz/salomon/software/intel-suite/intel-integrated-performance-primitives.md
+++ b/docs.it4i.cz/salomon/software/intel-suite/intel-integrated-performance-primitives.md
@@ -1,8 +1,9 @@
 Intel IPP 
 =========
+
   
 Intel Integrated Performance Primitives
-----------------
+---------------------------------------
 Intel Integrated Performance Primitives, version 9.0.1, compiled for
 AVX2 vector instructions, is available via the module ipp. The IPP is a very
 rich library of highly optimized algorithmic building blocks for media
@@ -62,7 +63,7 @@ executable
     $ module load ipp
     $ icc testipp.c -o testipp.x -Wl,-rpath=$LIBRARY_PATH -lippi -lipps -lippcore
 Code samples and documentation
--------
+------------------------------
 Intel provides a number of [Code Samples for
 IPP](https://software.intel.com/en-us/articles/code-samples-for-intel-integrated-performance-primitives-library),
 illustrating use of IPP.
diff --git a/docs.it4i.cz/salomon/software/intel-suite/intel-mkl.md b/docs.it4i.cz/salomon/software/intel-suite/intel-mkl.md
index 65c1148e0db98f4570ab59a7dbd54742e2f32bb0..492a3af6f3b817af90cb8b1181593a4e0577c2cd 100644
--- a/docs.it4i.cz/salomon/software/intel-suite/intel-mkl.md
+++ b/docs.it4i.cz/salomon/software/intel-suite/intel-mkl.md
@@ -1,8 +1,9 @@
 Intel MKL 
 =========
+
   
 Intel Math Kernel Library
---
+-------------------------
 Intel Math Kernel Library (Intel MKL) is a library of math kernel
 subroutines, extensively threaded and optimized for maximum performance.
 Intel MKL provides these basic math kernels:
@@ -61,7 +62,7 @@ integer type (necessary for indexing large arrays, with more than
 2^31^-1 elements), whereas the LP64 libraries index arrays with the
 32-bit integer type.
   Interface   Integer type
-  ----------- -
+  ----------- -----------------------------------------------
   LP64        32-bit, int, integer(kind=4), MPI_INT
   ILP64       64-bit, long int, integer(kind=8), MPI_INT64
 ### Linking
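The LP64/ILP64 distinction above is purely about index width. The 32-bit limit quoted in the table can be checked with plain shell arithmetic (an illustration, not an MKL command):

```shell
# Largest element count addressable through LP64's 32-bit signed indices:
limit=$(( (1 << 31) - 1 ))
echo "$limit"   # prints 2147483647, i.e. 2^31 - 1
```

Arrays with more elements than this must be indexed through the ILP64 interface.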
@@ -124,7 +125,7 @@ LP64 interface to threaded MKL and Intel OMP threads implementation.
 In this example, we compile, link and run the cblas_dgemm  example,
 using LP64 interface to threaded MKL and gnu OMP threads implementation.
 MKL and MIC accelerators
--
+------------------------
 The Intel MKL is capable of automatically offloading the computations to the
 MIC accelerator. See section [Intel Xeon
 Phi](https://docs.it4i.cz/salomon/software/intel-xeon-phi)
diff --git a/docs.it4i.cz/salomon/software/intel-suite/intel-parallel-studio-introduction.md b/docs.it4i.cz/salomon/software/intel-suite/intel-parallel-studio-introduction.md
index 01daba67ed047a3ccdd229242906476624a594da..ecac6bba170048c58b23538e78dba230a1f02daa 100644
--- a/docs.it4i.cz/salomon/software/intel-suite/intel-parallel-studio-introduction.md
+++ b/docs.it4i.cz/salomon/software/intel-suite/intel-parallel-studio-introduction.md
@@ -1,10 +1,11 @@
 Intel Parallel Studio 
 =====================
+
   
 The Salomon cluster provides the following elements of the Intel Parallel
 Studio XE:
   Intel Parallel Studio XE
-  ---
+  -------------------------------------------------
   Intel Compilers
   Intel Debugger
   Intel MKL Library
@@ -39,7 +40,7 @@ Read more at the [Intel
 Debugger](https://docs.it4i.cz/salomon/software/intel-suite/intel-debugger)
 page.
 Intel Math Kernel Library
---
+-------------------------
 Intel Math Kernel Library (Intel MKL) is a library of math kernel
 subroutines, extensively threaded and optimized for maximum performance.
 Intel MKL unites and provides these basic components: BLAS, LAPACK,
@@ -50,7 +51,7 @@ Read more at the [Intel
 MKL](https://docs.it4i.cz/salomon/software/intel-suite/intel-mkl)
 page.
 Intel Integrated Performance Primitives
-----------------
+---------------------------------------
 Intel Integrated Performance Primitives, version 7.1.1, compiled for AVX
 is available via the module ipp. The IPP is a library of highly optimized
 algorithmic building blocks for media and data applications. This
@@ -62,7 +63,7 @@ Read more at the [Intel
 IPP](https://docs.it4i.cz/salomon/software/intel-suite/intel-integrated-performance-primitives)
 page.
 Intel Threading Building Blocks
---------
+-------------------------------
 Intel Threading Building Blocks (Intel TBB) is a library that supports
 scalable parallel programming using standard ISO C++ code. It does not
 require special languages or compilers. It is designed to promote
diff --git a/docs.it4i.cz/salomon/software/intel-suite/intel-tbb.md b/docs.it4i.cz/salomon/software/intel-suite/intel-tbb.md
index e9d6a07e7defce10acf7cdfeb98eb020ade671c6..338093bfd7b035b1136b5ce62f2b0a966aff55fb 100644
--- a/docs.it4i.cz/salomon/software/intel-suite/intel-tbb.md
+++ b/docs.it4i.cz/salomon/software/intel-suite/intel-tbb.md
@@ -1,8 +1,9 @@
 Intel TBB 
 =========
+
   
 Intel Threading Building Blocks
---------
+-------------------------------
 Intel Threading Building Blocks (Intel TBB) is a library that supports
 scalable parallel programming using standard ISO C++ code. It does not
 require special languages or compilers.  To use the library, you specify
diff --git a/docs.it4i.cz/salomon/software/intel-xeon-phi.md b/docs.it4i.cz/salomon/software/intel-xeon-phi.md
index 7dce1db70db512be0404dfff974fd27c6b55e2e9..346f667ce8d4a63521b079bb6dd2d33184603acd 100644
--- a/docs.it4i.cz/salomon/software/intel-xeon-phi.md
+++ b/docs.it4i.cz/salomon/software/intel-xeon-phi.md
@@ -1,12 +1,13 @@
 Intel Xeon Phi 
 ==============
 A guide to Intel Xeon Phi usage
+
   
 The Intel Xeon Phi accelerator can be programmed in several modes. The
 default mode on the cluster is offload mode, but all modes described in
 this document are supported.
 Intel Utilities for Xeon Phi
------
+----------------------------
 To get access to a compute node with an Intel Xeon Phi accelerator, use a
 PBS interactive session:
     $ qsub -I -q qprod -l select=1:ncpus=24:accelerator=True:naccelerators=2:accelerator_model=phi7120 -A NONE-0-0
@@ -225,7 +226,7 @@ Performance optimization
   xhost - FOR HOST ONLY - to generate AVX (Advanced Vector Extensions)
 instructions.
 Automatic Offload using Intel MKL Library
-------------------
+-----------------------------------------
 Intel MKL includes an Automatic Offload (AO) feature that enables
 computationally intensive MKL functions called in user code to benefit
 from attached Intel Xeon Phi coprocessors automatically and
 and load the "intel" module, which automatically loads the "mkl" module as well.
  The code can be copied to a file and compiled without any
 modification. 
     $ vim sgemm-ao-short.c
-``` {.prettyprint .lang-cpp}
+``` 
 #include <stdio.h>
 #include <stdlib.h>
 #include <malloc.h>
@@ -542,7 +543,7 @@ Or, if you are using Fortran :
 A basic MPI "hello world" example in C, which can be executed on both the
 host and the Xeon Phi, is shown below (it can be copied directly into a
 .c file):
-``` {.prettyprint .lang-cpp}
+``` 
 #include <stdio.h>
 #include <mpi.h>
 int main (argc, argv)
@@ -789,11 +790,11 @@ hosts and two accelerators is:
 PBS also generates a set of node-files that can be used instead of
 manually creating a new one every time. Three node-files are generated:
 **Host only node-file:**
- - /lscratch/${PBS_JOBID}/nodefile-cn
+ - /lscratch/${PBS_JOBID}/nodefile-cn
 **MIC only node-file**:
- - /lscratch/${PBS_JOBID}/nodefile-mic
+ - /lscratch/${PBS_JOBID}/nodefile-mic
 **Host and MIC node-file**:
- - /lscratch/${PBS_JOBID}/nodefile-mix
+ - /lscratch/${PBS_JOBID}/nodefile-mix
 Please note that each host or accelerator is listed only once per file. The
 user has to specify how many processes should be executed per node using
 the "-n" parameter of the mpirun command.
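The node-file paths above can be sketched with a hypothetical job id to show how the ${PBS_JOBID} component expands (the value below is made up for illustration):

```shell
PBS_JOBID="1234567.isrv5"    # hypothetical; PBS sets this inside a real job
echo "/lscratch/${PBS_JOBID}/nodefile-mix"
# prints /lscratch/1234567.isrv5/nodefile-mix
```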
diff --git a/docs.it4i.cz/salomon/software/java.md b/docs.it4i.cz/salomon/software/java.md
index f691a0796ac6ba0def45375392cb1fb47a973a42..22141cc389fb4413ed42bef8256351ccce352076 100644
--- a/docs.it4i.cz/salomon/software/java.md
+++ b/docs.it4i.cz/salomon/software/java.md
@@ -1,6 +1,7 @@
 Java 
 ====
 Java on the cluster
+
   
 Java is available on the cluster. Activate java by loading the Java
 module
diff --git a/docs.it4i.cz/salomon/software/mpi-1.md b/docs.it4i.cz/salomon/software/mpi-1.md
index 553d8c78eb2098d52573a463319358226f1359db..71d25a3ab8252af8309215af5b8bc0c738997cd4 100644
--- a/docs.it4i.cz/salomon/software/mpi-1.md
+++ b/docs.it4i.cz/salomon/software/mpi-1.md
@@ -1,12 +1,13 @@
 MPI 
 ===
+
   
 Setting up MPI Environment
----
+--------------------------
 The Salomon cluster provides several implementations of the MPI library:
-  ----
+  -------------------------------------------------------------------------
   MPI Library                          Thread support
-  ------------- -------------
+  ------------------------------------ ------------------------------------
   **Intel MPI 4.1**                    Full thread support up to
                                        MPI_THREAD_MULTIPLE
   **Intel MPI 5.0**                    Full thread support up to
@@ -15,11 +16,11 @@ The Salomon cluster provides several implementations of the MPI library:
                                        MPI_THREAD_MULTIPLE, MPI-3.0
                                        support
   SGI MPT 2.12                         
-  ----
+  -------------------------------------------------------------------------
 MPI libraries are activated via the environment modules.
 Look up section modulefiles/mpi in module avail
     $ module avail
-    ------- /apps/modules/mpi --------
+    ------------------------------ /apps/modules/mpi -------------------------------
     impi/4.1.1.036-iccifort-2013.5.192
     impi/4.1.1.036-iccifort-2013.5.192-GCC-4.8.3
     impi/5.0.3.048-iccifort-2015.3.187
@@ -30,14 +31,14 @@ There are default compilers associated with any particular MPI
 implementation. The defaults may be changed; the MPI libraries may be
 used in conjunction with any compiler.
 The defaults are selected via the modules in the following way:
-  -----
+  --------------------------------------------------------------------------
   Module                   MPI                      Compiler suite
-  - - -
+  ------------------------ ------------------------ ------------------------
   impi-5.0.3.048-iccifort- Intel MPI 5.0.3          
   2015.3.187                                        
   OpenMP-1.8.6-GNU-5.1.0-2 OpenMPI 1.8.6            
   .25                                               
-  -----
+  --------------------------------------------------------------------------
 Examples:
     $ module load gompi/2015b
 In this example, we activate the latest OpenMPI with the latest GNU
diff --git a/docs.it4i.cz/salomon/software/mpi-1/Running_OpenMPI.md b/docs.it4i.cz/salomon/software/mpi-1/Running_OpenMPI.md
index 552565ad82993be6157db2d712f2e60bc0d1b84a..5ad34ff0cdd7d3b16ef30252473a1a9d84b0f9d8 100644
--- a/docs.it4i.cz/salomon/software/mpi-1/Running_OpenMPI.md
+++ b/docs.it4i.cz/salomon/software/mpi-1/Running_OpenMPI.md
@@ -1,8 +1,9 @@
 Running OpenMPI 
 ===============
+
   
 OpenMPI program execution
---
+-------------------------
 The OpenMPI programs may be executed only via the PBS Workload manager,
 by entering an appropriate queue. On the cluster, **OpenMPI 1.8.6** is
 the OpenMPI-based MPI implementation.
@@ -93,7 +94,7 @@ later) the following variables may be used for Intel or GCC:
     $ export OMP_PROC_BIND=true
     $ export OMP_PLACES=cores 
 <span>OpenMPI Process Mapping and Binding</span>
---
+------------------------------------------------
 The mpiexec command allows precise control of how the MPI processes will
 be mapped to the computational nodes and how these processes will bind
 to particular processor sockets and cores.
diff --git a/docs.it4i.cz/salomon/software/mpi-1/mpi.md b/docs.it4i.cz/salomon/software/mpi-1/mpi.md
index 553d8c78eb2098d52573a463319358226f1359db..71d25a3ab8252af8309215af5b8bc0c738997cd4 100644
--- a/docs.it4i.cz/salomon/software/mpi-1/mpi.md
+++ b/docs.it4i.cz/salomon/software/mpi-1/mpi.md
@@ -1,12 +1,13 @@
 MPI 
 ===
+
   
 Setting up MPI Environment
----
+--------------------------
 The Salomon cluster provides several implementations of the MPI library:
-  ----
+  -------------------------------------------------------------------------
   MPI Library                          Thread support
-  ------------- -------------
+  ------------------------------------ ------------------------------------
   **Intel MPI 4.1**                    Full thread support up to
                                        MPI_THREAD_MULTIPLE
   **Intel MPI 5.0**                    Full thread support up to
@@ -15,11 +16,11 @@ The Salomon cluster provides several implementations of the MPI library:
                                        MPI_THREAD_MULTIPLE, MPI-3.0
                                        support
   SGI MPT 2.12                         
-  ----
+  -------------------------------------------------------------------------
 MPI libraries are activated via the environment modules.
 Look up section modulefiles/mpi in module avail
     $ module avail
-    ------- /apps/modules/mpi --------
+    ------------------------------ /apps/modules/mpi -------------------------------
     impi/4.1.1.036-iccifort-2013.5.192
     impi/4.1.1.036-iccifort-2013.5.192-GCC-4.8.3
     impi/5.0.3.048-iccifort-2015.3.187
@@ -30,14 +31,14 @@ There are default compilers associated with any particular MPI
 implementation. The defaults may be changed; the MPI libraries may be
 used in conjunction with any compiler.
 The defaults are selected via the modules in the following way:
-  -----
+  --------------------------------------------------------------------------
   Module                   MPI                      Compiler suite
-  - - -
+  ------------------------ ------------------------ ------------------------
   impi-5.0.3.048-iccifort- Intel MPI 5.0.3          
   2015.3.187                                        
   OpenMP-1.8.6-GNU-5.1.0-2 OpenMPI 1.8.6            
   .25                                               
-  -----
+  --------------------------------------------------------------------------
 Examples:
     $ module load gompi/2015b
 In this example, we activate the latest OpenMPI with the latest GNU
diff --git a/docs.it4i.cz/salomon/software/mpi-1/mpi4py-mpi-for-python.md b/docs.it4i.cz/salomon/software/mpi-1/mpi4py-mpi-for-python.md
index cba79e3e2113440f9543221ad5e990cdbf5b7781..e7600d70273bc373b38274eeb569f053d69c9117 100644
--- a/docs.it4i.cz/salomon/software/mpi-1/mpi4py-mpi-for-python.md
+++ b/docs.it4i.cz/salomon/software/mpi-1/mpi4py-mpi-for-python.md
@@ -1,6 +1,7 @@
 MPI4Py (MPI for Python) 
 =======================
 OpenMPI interface to Python
+
   
 Introduction
 ------------
diff --git a/docs.it4i.cz/salomon/software/numerical-languages.1.md b/docs.it4i.cz/salomon/software/numerical-languages.1.md
index df50e0ee07651bcd6b2d1eac7e1f7be7cd2b99c0..b65255ca5c77f76124162d26eb5be04deb03fb8d 100644
--- a/docs.it4i.cz/salomon/software/numerical-languages.1.md
+++ b/docs.it4i.cz/salomon/software/numerical-languages.1.md
@@ -1,6 +1,7 @@
 Numerical languages 
 ===================
 Interpreted languages for numerical computations and analysis
+
   
 Introduction
 ------------
diff --git a/docs.it4i.cz/salomon/software/numerical-languages/introduction.md b/docs.it4i.cz/salomon/software/numerical-languages/introduction.md
index df50e0ee07651bcd6b2d1eac7e1f7be7cd2b99c0..b65255ca5c77f76124162d26eb5be04deb03fb8d 100644
--- a/docs.it4i.cz/salomon/software/numerical-languages/introduction.md
+++ b/docs.it4i.cz/salomon/software/numerical-languages/introduction.md
@@ -1,6 +1,7 @@
 Numerical languages 
 ===================
 Interpreted languages for numerical computations and analysis
+
   
 Introduction
 ------------
diff --git a/docs.it4i.cz/salomon/software/numerical-languages/matlab.md b/docs.it4i.cz/salomon/software/numerical-languages/matlab.md
index 9b0f238bbfbc8a754ca300b05bf5b6f41366e098..2cbe9421a5c2864eb4ca7a65849254545de6a8cc 100644
--- a/docs.it4i.cz/salomon/software/numerical-languages/matlab.md
+++ b/docs.it4i.cz/salomon/software/numerical-languages/matlab.md
@@ -1,5 +1,6 @@
 Matlab 
 ======
+
   
 Introduction
 ------------
@@ -36,7 +37,7 @@ use
     $ matlab -nodesktop -nosplash
 plots, images, etc... will be still available.
 []()Running parallel Matlab using Distributed Computing Toolbox / Engine
----
+------------------------------------------------------------------------
 The Distributed Computing Toolbox is available only in the EDU variant.
 The MPIEXEC mode available in previous versions is no longer available
 in MATLAB 2015. Also, the programming interface has changed. Refer
diff --git a/docs.it4i.cz/salomon/software/numerical-languages/octave.md b/docs.it4i.cz/salomon/software/numerical-languages/octave.md
index bdcf76803f143fe5cea2267a3d8190f9d75885a9..d994c8895e82d5a3eb8d5cb29e84967e4feb3990 100644
--- a/docs.it4i.cz/salomon/software/numerical-languages/octave.md
+++ b/docs.it4i.cz/salomon/software/numerical-languages/octave.md
@@ -1,5 +1,6 @@
 Octave 
 ======
+
   
 GNU Octave is a high-level interpreted language, primarily intended for
 numerical computations. It provides capabilities for the numerical
diff --git a/docs.it4i.cz/salomon/software/numerical-languages/r.md b/docs.it4i.cz/salomon/software/numerical-languages/r.md
index 86ab42f4ecf6b6d4cc1a642fd415efd40bf0f6d1..e6b8d039e4c5a0ec6e0ededeefcddede4b79263f 100644
--- a/docs.it4i.cz/salomon/software/numerical-languages/r.md
+++ b/docs.it4i.cz/salomon/software/numerical-languages/r.md
@@ -1,7 +1,8 @@
 R 
 =
+
   
-Introduction {#parent-fieldname-title}
+Introduction 
 ------------
 R is a language and environment for statistical computing and
 graphics.  R provides a wide variety of statistical (linear and
@@ -288,7 +289,7 @@ using the mclapply() in place of mpi.parSapply().
 Execute the example as:
     $ mpirun -np 1 R --slave --no-save --no-restore -f pi3parSapply.R
 Combining parallel and Rmpi
-----
+---------------------------
 Currently, the two packages cannot be combined for hybrid calculations.
 Parallel execution
 ------------------
diff --git a/docs.it4i.cz/salomon/software/operating-system.md b/docs.it4i.cz/salomon/software/operating-system.md
index 93d1c728811df217b1224f1f93e11596f651424a..16b7044b3fda943b54868ed1c6297056440021fb 100644
--- a/docs.it4i.cz/salomon/software/operating-system.md
+++ b/docs.it4i.cz/salomon/software/operating-system.md
@@ -1,6 +1,7 @@
 Operating System 
 ================
 The operating system, deployed on Salomon cluster
+
   
 The operating system on Salomon is Linux - CentOS 6.6.
 <span>The CentOS Linux distribution is a stable, predictable, manageable
diff --git a/docs.it4i.cz/salomon/storage.md b/docs.it4i.cz/salomon/storage.md
index 838cb9c58c384a05029c5e3b7e24a6d2a38659c4..28e2a498b80ac274910242a9a2b0e5f1cc9f71e0 100644
--- a/docs.it4i.cz/salomon/storage.md
+++ b/docs.it4i.cz/salomon/storage.md
@@ -1,5 +1,6 @@
 Storage 
 =======
+
   
 Introduction
 ------------
@@ -106,12 +107,12 @@ Use the lfs getstripe command to get the stripe parameters. Use the lfs
 setstripe command to set the stripe parameters for optimal I/O
 performance. The correct stripe setting depends on your needs and file
 access patterns. 
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs getstripe dir|filename 
 $ lfs setstripe -s stripe_size -c stripe_count -o stripe_offset dir|filename 
 ```
 Example:
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs getstripe /scratch/work/user/username
 /scratch/work/user/username
 stripe_count  1 stripe_size   1048576 stripe_offset -1
@@ -126,7 +127,7 @@ and verified. All files written to this directory will be striped over
 all (54) OSTs.
 Use lfs check OSTs to see the number and status of active OSTs for each
 filesystem on Salomon. Learn more by reading the man page
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs check osts
 $ man lfs
 ```
@@ -155,14 +156,14 @@ on a single-stripe file.
 Read more on
 <http://wiki.lustre.org/manual/LustreManual20_HTML/ManagingStripingFreeSpace.html>
 <span>Disk usage and quota commands</span>
--------------------
+------------------------------------------
 <span>User quotas on the Lustre file systems (SCRATCH) can be checked
 and reviewed using the following command:</span>
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs quota dir
 ```
 Example for Lustre SCRATCH directory:
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs quota /scratch
 Disk quotas for user user001 (uid 1234):
      Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
@@ -184,11 +185,11 @@ Example output:
                          28       0 250000000              10     0  500000
 To have a better understanding of where exactly the space is used, you
 can use the following command to find out.
-``` {.prettyprint .lang-sh}
+``` 
 $ du -hs dir
 ```
 Example for your HOME directory:
-``` {.prettyprint .lang-sh}
+``` 
 $ cd /home
$ du -hs * .[a-zA-Z0-9]* | grep -E "[0-9]*G|[0-9]*M" | sort -hr
 258M     cuda-samples
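To try the du pipeline above without touching /home, a throwaway directory works just as well (the directory and file names here are made up):

```shell
mkdir -p demo_du/a                                # hypothetical scratch tree
head -c 1048576 /dev/zero > demo_du/a/file.bin    # one 1 MiB file
du -hs demo_du/* | sort -hr                       # largest entries first
rm -r demo_du                                     # clean up
```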
@@ -203,14 +204,14 @@ is sorted in descending order from largest to smallest
 files/directories.
 <span>To have a better understanding of previous commands, you can read
 manpages.</span>
-``` {.prettyprint .lang-sh}
+``` 
 $ man lfs
 ```
-``` {.prettyprint .lang-sh}
+``` 
 $ man du 
 ```
 Extended Access Control List (ACL)
------------
+----------------------------------
 Extended ACLs provide another security mechanism besides the standard
 POSIX ACLs which are defined by three entries (for
 owner/group/others). Extended ACLs have more than the three basic
@@ -219,7 +220,7 @@ number of named user and named group entries.
 ACLs on a Lustre file system work exactly like ACLs on any Linux file
 system. They are manipulated with the standard tools in the standard
 manner. Below, we create a directory and allow a specific user access.
-``` {.prettyprint .lang-sh}
+``` 
 [vop999@login1.salomon ~]$ umask 027
 [vop999@login1.salomon ~]$ mkdir test
 [vop999@login1.salomon ~]$ ls -ld test
@@ -396,13 +397,13 @@ none
 **Summary
 **
 ----------
-  -------
+  -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
   Mountpoint                                     Usage                            Protocol      Net Capacity   Throughput   Limitations   Access                    Services
-   --------- ------------- -------------- ------------ ------------- -- ------
+  ---------------------------------------------- -------------------------------- ------------- -------------- ------------ ------------- ------------------------- -----------------------------
   <span class="monospace">/home</span>           home directory                   NFS, 2-Tier   0.5 PB         6 GB/s       Quota 250GB   Compute and login nodes   backed up
   <span class="monospace">/scratch/work</span>   large project files              Lustre        1.69 PB        30 GB/s      Quota        Compute and login nodes   none
                                                                                                                             1TB                                     
   <span class="monospace">/scratch/temp</span>   job temporary data               Lustre        1.69 PB        30 GB/s      Quota 100TB   Compute and login nodes   files older 90 days removed
   <span class="monospace">/ramdisk</span>        job temporary data, node local   local         120GB          90 GB/s      none          Compute nodes             purged after job ends
-  -------
+  -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  
diff --git a/docs.it4i.cz/salomon/storage/cesnet-data-storage.md b/docs.it4i.cz/salomon/storage/cesnet-data-storage.md
index 51eb708c8ba89f571f846712728ebbb61fbf8ad8..2f2c5a878f90f24358789ea4cc48115f6cba8576 100644
--- a/docs.it4i.cz/salomon/storage/cesnet-data-storage.md
+++ b/docs.it4i.cz/salomon/storage/cesnet-data-storage.md
@@ -1,5 +1,6 @@
 CESNET Data Storage 
 ===================
+
   
 Introduction
 ------------
@@ -25,7 +26,7 @@ Policy, AUP)”.
 The service is documented at
 <https://du.cesnet.cz/wiki/doku.php/en/start>. For special requirements
 please contact the CESNET Storage Department directly via e-mail
-[du-support(at)cesnet.cz](mailto:du-support@cesnet.cz){.email-link}.
+[du-support(at)cesnet.cz](mailto:du-support@cesnet.cz).
 The procedure to obtain CESNET access is quick and trouble-free.
 (source
 [https://du.cesnet.cz/](https://du.cesnet.cz/wiki/doku.php/en/start "CESNET Data Storage"))
diff --git a/docs.it4i.cz/salomon/storage/storage.md b/docs.it4i.cz/salomon/storage/storage.md
index 838cb9c58c384a05029c5e3b7e24a6d2a38659c4..28e2a498b80ac274910242a9a2b0e5f1cc9f71e0 100644
--- a/docs.it4i.cz/salomon/storage/storage.md
+++ b/docs.it4i.cz/salomon/storage/storage.md
@@ -1,5 +1,6 @@
 Storage 
 =======
+
   
 Introduction
 ------------
@@ -106,12 +107,12 @@ Use the lfs getstripe command to get the stripe parameters. Use the lfs
 setstripe command to set the stripe parameters for optimal I/O
 performance. The correct stripe setting depends on your needs and file
 access patterns. 
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs getstripe dir|filename 
 $ lfs setstripe -s stripe_size -c stripe_count -o stripe_offset dir|filename 
 ```
 Example:
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs getstripe /scratch/work/user/username
 /scratch/work/user/username
 stripe_count  1 stripe_size   1048576 stripe_offset -1
@@ -126,7 +127,7 @@ and verified. All files written to this directory will be striped over
 all (54) OSTs.
 Use lfs check OSTs to see the number and status of active OSTs for each
 filesystem on Salomon. Learn more by reading the man page
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs check osts
 $ man lfs
 ```
@@ -155,14 +156,14 @@ on a single-stripe file.
 Read more on
 <http://wiki.lustre.org/manual/LustreManual20_HTML/ManagingStripingFreeSpace.html>
 <span>Disk usage and quota commands</span>
--------------------
+------------------------------------------
 <span>User quotas on the Lustre file systems (SCRATCH) can be checked
 and reviewed using following command:</span>
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs quota dir
 ```
 Example for Lustre SCRATCH directory:
-``` {.prettyprint .lang-sh}
+``` 
 $ lfs quota /scratch
 Disk quotas for user user001 (uid 1234):
      Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
@@ -184,11 +185,11 @@ Example output:
                          28       0 250000000              10     0  500000
 To have a better understanding of where exactly the space is used, you
 can use the following command to find out.
-``` {.prettyprint .lang-sh}
+``` 
 $ du -hs dir
 ```
 Example for your HOME directory:
-``` {.prettyprint .lang-sh}
+``` 
 $ cd /home
 $ du -hs * .[a-zA-Z0-9]* | grep -E "[0-9]*G|[0-9]*M" | sort -hr
 258M     cuda-samples
@@ -203,14 +204,14 @@ is sorted in descending order from largest to smallest
 files/directories.
 <span>To have a better understanding of previous commands, you can read
 manpages.</span>
-``` {.prettyprint .lang-sh}
+``` 
 $ man lfs
 ```
-``` {.prettyprint .lang-sh}
+``` 
 $ man du 
 ```
 Extended Access Control List (ACL)
------------
+----------------------------------
 Extended ACLs provide another security mechanism besides the standard
 POSIX ACLs which are defined by three entries (for
 owner/group/others). Extended ACLs have more than the three basic
@@ -219,7 +220,7 @@ number of named user and named group entries.
 ACLs on a Lustre file system work exactly like ACLs on any Linux file
 system. They are manipulated with the standard tools in the standard
 manner. Below, we create a directory and allow a specific user access.
-``` {.prettyprint .lang-sh}
+``` 
 [vop999@login1.salomon ~]$ umask 027
 [vop999@login1.salomon ~]$ mkdir test
 [vop999@login1.salomon ~]$ ls -ld test
@@ -396,13 +397,13 @@ none
 **Summary
 **
 ----------
-  -------
+  -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
   Mountpoint                                     Usage                            Protocol      Net Capacity   Throughput   Limitations   Access                    Services
-   --------- ------------- -------------- ------------ ------------- -- ------
+  ---------------------------------------------- -------------------------------- ------------- -------------- ------------ ------------- ------------------------- -----------------------------
   <span class="monospace">/home</span>           home directory                   NFS, 2-Tier   0.5 PB         6 GB/s       Quota 250GB   Compute and login nodes   backed up
   <span class="monospace">/scratch/work</span>   large project files              Lustre        1.69 PB        30 GB/s      Quota        Compute and login nodes   none
                                                                                                                             1TB                                     
   <span class="monospace">/scratch/temp</span>   job temporary data               Lustre        1.69 PB        30 GB/s      Quota 100TB   Compute and login nodes   files older 90 days removed
   <span class="monospace">/ramdisk</span>        job temporary data, node local   local         120GB          90 GB/s      none          Compute nodes             purged after job ends
-  -------
+  -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
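The du pipeline documented in the storage.md hunks above can be sanity-checked off-cluster. A minimal sketch, assuming GNU grep and sort; the sample sizes and names are made up:

```shell
# feed synthetic du-style output through the same filter/sort as the docs:
# keep only entries reported in M or G, then sort human-numerically, descending
printf '258M\tcuda-samples\n15M\t.cache\n13M\t.mozilla\n4.0K\tnotes\n' \
  | grep -E "[0-9]*G|[0-9]*M" \
  | sort -hr > sizes.txt
cat sizes.txt
```

The `4.0K` entry is dropped by the grep, and `sort -hr` orders the remaining human-readable sizes from largest to smallest, matching the example output in the docs.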
  
diff --git a/docs.it4i.cz/whats-new.md b/docs.it4i.cz/whats-new.md
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..8b137891791fe96927ad78e64b0aad7bded08bdc 100644
--- a/docs.it4i.cz/whats-new.md
+++ b/docs.it4i.cz/whats-new.md
@@ -0,0 +1 @@
+
diff --git a/docs.it4i.cz/whats-new/news-feed/added-basic-documentation-for-intel-advisor-and-intel-inspector.md b/docs.it4i.cz/whats-new/news-feed/added-basic-documentation-for-intel-advisor-and-intel-inspector.md
index 01daba67ed047a3ccdd229242906476624a594da..ecac6bba170048c58b23538e78dba230a1f02daa 100644
--- a/docs.it4i.cz/whats-new/news-feed/added-basic-documentation-for-intel-advisor-and-intel-inspector.md
+++ b/docs.it4i.cz/whats-new/news-feed/added-basic-documentation-for-intel-advisor-and-intel-inspector.md
@@ -1,10 +1,11 @@
 Intel Parallel Studio 
 =====================
+
   
 The Salomon cluster provides the following elements of the Intel Parallel
 Studio XE
   Intel Parallel Studio XE
-  ---
+  -------------------------------------------------
   Intel Compilers
   Intel Debugger
   Intel MKL Library
@@ -39,7 +40,7 @@ Read more at the [Intel
 Debugger](https://docs.it4i.cz/salomon/software/intel-suite/intel-debugger)
 page.
 Intel Math Kernel Library
---
+-------------------------
 Intel Math Kernel Library (Intel MKL) is a library of math kernel
 subroutines, extensively threaded and optimized for maximum performance.
 Intel MKL unites and provides these basic components: BLAS, LAPACK,
@@ -50,7 +51,7 @@ Read more at the [Intel
 MKL](https://docs.it4i.cz/salomon/software/intel-suite/intel-mkl)
 page.
 Intel Integrated Performance Primitives
-----------------
+---------------------------------------
 Intel Integrated Performance Primitives, version 7.1.1, compiled for AVX
 is available via the ipp module. The IPP is a library of highly optimized
 algorithmic building blocks for media and data applications. This
@@ -62,7 +63,7 @@ Read more at the [Intel
 IPP](https://docs.it4i.cz/salomon/software/intel-suite/intel-integrated-performance-primitives)
 page.
 Intel Threading Building Blocks
---------
+-------------------------------
 Intel Threading Building Blocks (Intel TBB) is a library that supports
 scalable parallel programming using standard ISO C++ code. It does not
 require special languages or compilers. It is designed to promote
diff --git a/docs.it4i.cz/whats-new/news-feed/allinea-forge-documentation-updated.md b/docs.it4i.cz/whats-new/news-feed/allinea-forge-documentation-updated.md
index c48c182fe3c2c928d01951b9581cbe278b4196ff..f4ed11136419f9e3c32b9baf9044ae1743d5f38e 100644
--- a/docs.it4i.cz/whats-new/news-feed/allinea-forge-documentation-updated.md
+++ b/docs.it4i.cz/whats-new/news-feed/allinea-forge-documentation-updated.md
@@ -1,5 +1,6 @@
 Allinea Forge (DDT,MAP) 
 =======================
+
   
 Allinea Forge consists of two tools: the debugger DDT and the profiler MAP.
 Allinea DDT is a commercial debugger primarily for debugging parallel
@@ -12,6 +13,7 @@ implementation.
 Allinea MAP is a profiler for C/C++/Fortran HPC codes. It is designed
 for profiling parallel code, which uses pthreads, OpenMP or MPI.
 License and Limitations for the clusters Users
+----------------------------------------------
 On the clusters users can debug OpenMP or MPI code that runs up to 64
 parallel processes. In case of debugging GPU or Xeon Phi accelerated
 codes the limit is 8 accelerators. These limitations mean that:
@@ -21,7 +23,7 @@ In case of debugging on accelerators:
 -   1 user can debug on up to 8 accelerators, or 
 -   8 users can debug on a single accelerator. 
 Compiling Code to run with Forge
----------
+--------------------------------
 ### Modules
 Load all necessary modules to compile the code. For example: 
     $ module load intel
@@ -29,7 +31,7 @@ Load all necessary modules to compile the code. For example: 
 Load the Allinea DDT module:
     $ module load Forge
 Compile the code:
-``` {.code-basic style="text-alignstart; "}
+``` 
 $ mpicc -g -O0 -o test_debug test.c
 $ mpif90 -g -O0 -o test_debug test.f
 ```
@@ -41,7 +43,7 @@ GNU and INTEL C/C++ and Fortran compilers.
 **-O0** Suppress all optimizations.
  
 Direct starting a Job with Forge
----------
+--------------------------------
 Be sure to log in with [<span class="internal-link">X window
 forwarding</span>
 enabled](https://docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc).
diff --git a/exceptions_filter_auto b/exceptions_filter_auto
new file mode 100644
index 0000000000000000000000000000000000000000..272081f16bf26ed8e50f20a1c2528a5576afbc10
--- /dev/null
+++ b/exceptions_filter_auto
@@ -0,0 +1,16 @@
+00001..00111
+#10-after-you-end-your-visualization-session
+#1-connect-to-a-login-node
+#1-in-your-vnc-session-open-a-terminal-and-allocate-a-node-using-pbspro-qsub-command
+#2-in-your-vnc-session-open-another-terminal-keep-the-one-with-interactive-pbspro-job-open
+#2-run-your-own-instance-of-turbovnc-server
+#3-load-the-virtualgl-module
+#3-remember-which-display-number-your-vnc-server-runs-you-will-need-it-in-the-future-to-stop-the-server
+#4-remember-the-exact-login-node-where-your-vnc-server-runs
+#4-run-your-desired-opengl-accelerated-application-using-virtualgl-script-vglrun
+#5-after-you-end-your-work-with-the-opengl-application
+#5-remember-on-which-tcp-port-your-own-vnc-server-is-running
+#6-connect-to-the-login-node-where-your-vnc-server-runs-with-ssh-to-tunnel-your-vnc-session
+#7-if-you-don-t-have-turbo-vnc-installed-on-your-workstation
+#8-run-turbovnc-viewer-from-your-workstation
+#9-proceed-to-the-chapter-access-the-visualization-node
diff --git a/filter.txt b/filter.txt
deleted file mode 100644
index c8050f2e678937d45fb404999b904322664b9d8a..0000000000000000000000000000000000000000
--- a/filter.txt
+++ /dev/null
@@ -1,8 +0,0 @@
- {#parent-fieldname-title .documentFirstHeading}
-{.internal-link}
-{.anchor-link}
-{.external-link}
-{.image-inline}
-{.prettyprint .lang-sh .prettyprinted}
------------------------
-Obsah
diff --git a/filter_other b/filter_other
new file mode 100644
index 0000000000000000000000000000000000000000..2006b6f31117c822f68a169373aa01e2c6644360
--- /dev/null
+++ b/filter_other
@@ -0,0 +1 @@
+^Obsah
diff --git a/html_md.sh b/html_md.sh
index 5c80c21ad50dd5bafb0f1b5feeb2cf1ef1892516..e598e89319bd5db4ab959983c9ee290c777797fb 100755
--- a/html_md.sh
+++ b/html_md.sh
@@ -70,20 +70,48 @@ else
 
 		# info folder: file structure, list of all files and their paths within folders
 		echo "${i%.*}" >> ./info/files_md.txt;
+
+		# create filter_auto
+		cat "${i%.*}.md" | grep -o -P '(?<={).*(?=})' | sort -u | sed '/{/d' | sed '/\$/d' >> filter_auto;
+		sort -u filter_auto -o filter_auto; 
+
+		# apply exceptions to filter_auto
+		cat exceptions_filter_auto | 
+		while read y; 
+		do 
+			# search and delete according to filter_auto
+			cat filter_auto | sed -e 's/'"$y"'//g' > filter_autoTMP;
+			cat filter_autoTMP > filter_auto;
+		done
 	
-		# text filtering of html, css, ..
-		echo "\t\tfiltering text..."
-		cat filter.txt | 
+		# text filtering of html, css, ...
+		echo "\t\tautomatic filter..."
+		cat filter_auto | 
 		while read y; 
 		do 
-			# search and delete according with filter
-			cat "${i%.*}.md" | sed -e 's/'"$y"'//g' | sed -e 's/\\//g' | sed -e 's/: //g' | sed -e 's/<\/div>//g' | sed '/^<div/d'  | sed '/^$/d' > "${i%.*}TMP.md";
+			# search and delete according to filter_auto
+			cat "${i%.*}.md" | sed -e 's/{'"$y"'}//g' | sed -e 's/\\//g' | sed -e 's/: //g' | sed -e 's/<\/div>//g' | sed '/^<div/d'  | sed '/^$/d' > "${i%.*}TMP.md";
+			cat "${i%.*}TMP.md" > "${i%.*}.md";
+		done
+
+		echo "\t\tother filter..."
+		cat filter_other | 
+		while read a; 
+		do 
+			# search and delete according to filter_other
+			cat "${i%.*}.md" | sed -e 's/'"$a"'//g'  > "${i%.*}TMP.md";
 			cat "${i%.*}TMP.md" > "${i%.*}.md";
 		done
 
 		# delete temporary files
 		rm "${i%.*}TMP.md";
+		
 	done
+	rm filter_autoTMP
+	rm filter_auto
 fi
 
 
+
+
+
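The filter_auto logic added to html_md.sh above can be exercised in isolation. A minimal sketch against a hypothetical `sample.md`, assuming GNU grep with `-P` support; note that, as in the script, the collected annotation text is used as an unescaped sed pattern, so its dots act as regex wildcards:

```shell
# a markdown fragment with pandoc-style {...} attribute annotations left by the converter
printf 'Heading {.documentFirstHeading}\nSome [link](x){.email-link}\n' > sample.md
# collect the unique annotation bodies (text between { and }), as html_md.sh does
grep -o -P '(?<={).*(?=})' sample.md | sort -u > filter_auto_demo
# strip each collected annotation back out of the file
while read -r y; do
  sed -e "s/{$y}//g" sample.md > sample.tmp && mv sample.tmp sample.md
done < filter_auto_demo
cat sample.md
```

After the loop, both `{.documentFirstHeading}` and `{.email-link}` are gone, which is exactly the effect the generated filter_auto file has on the converted docs.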
diff --git a/info/files_md.txt b/info/files_md.txt
index 85d02eef7edf1eb302d89bc3d9483274690e7766..1f7b8446817e887f84c3992b11d3bff7cba5139a 100644
--- a/info/files_md.txt
+++ b/info/files_md.txt
@@ -67,6 +67,75 @@
 ./docs.it4i.cz/salomon/accessing-the-cluster/graphical-user-interface
 ./docs.it4i.cz/salomon/software/debuggers/summary
 ./docs.it4i.cz/salomon/software/debuggers/valgrind
+./docs.it4i.cz/whats-new
+./docs.it4i.cz/salomon
+./docs.it4i.cz/whats-new/downtimes_history
+./docs.it4i.cz/whats-new/news-feed.1
+./docs.it4i.cz/whats-new/news-feed/salomon-pbs-changes
+./docs.it4i.cz/whats-new/news-feed/new-method-to-execute-parallel-matlab-jobs
+./docs.it4i.cz/whats-new/news-feed/new-versions-of-allinea-forge-and-performance-version
+./docs.it4i.cz/whats-new/news-feed/new-bioinformatic-tools-installed-fastqc-0-11-3-gatk-3-5-java-1-7-0_79-picard-2-1-0-samtools-1-3-foss-2015g-snpeff-4-1_g-trimmomatic-0-35-java-1-7.0_79
+./docs.it4i.cz/whats-new/news-feed/new-modules-for-parallel-programming-in-modern-fortran-course
+./docs.it4i.cz/whats-new/news-feed/allinea-forge-5-1-installed-on-anselm
+./docs.it4i.cz/whats-new/news-feed/intel-vtune-is-working
+./docs.it4i.cz/whats-new/news-feed/vampir-installed
+./docs.it4i.cz/whats-new/news-feed/issue-with-intel-mpi-4-1-1-on-salomon
+./docs.it4i.cz/whats-new/news-feed/added-basic-documentation-for-intel-advisor-and-intel-inspector
+./docs.it4i.cz/whats-new/news-feed/ansys-17-0-installed
+./docs.it4i.cz/whats-new/news-feed/allinea-forge-documentation-updated
+./docs.it4i.cz/whats-new/news-feed/intel-vtune-amplifier-support-for-xeon-phi-on-salomon
+./docs.it4i.cz/whats-new/news-feed/anselm-downtime-has-been-extended-to-feb-26th
+./docs.it4i.cz/whats-new/news-feed/octave-updated-to-4-0-1-on-anselm
+./docs.it4i.cz/whats-new/news-feed/allinea-tools-updated-to-6-0.6
+./docs.it4i.cz/whats-new/news-feed/mono-4-2-2-and-mpi-net-1-2-on-salomon
+./docs.it4i.cz/whats-new/news-feed/allinea-forge-6.0
+./docs.it4i.cz/whats-new/news-feed/cuda-7-5-is-now-installed-on-anselm
+./docs.it4i.cz/whats-new/news-feed/intel-parallel-studio-2016-update-3
+./docs.it4i.cz/whats-new/news-feed/matlab-2015b
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/vnc
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.1
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/graphical-user-interface
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/cygwin-and-x11-forwarding
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/x-window-and-vnc
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/cygwin-and-x11-forwarding/cygwin-and-x11-forwarding
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty.1
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/accessing-the-clusters
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/pageant
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/puttygen
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/putty
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface
+./docs.it4i.cz/get-started-with-it4innovations/changelog
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters
+./docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/certificates-faq
+./docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials
+./docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials
+./docs.it4i.cz/get-started-with-it4innovations/applying-for-resources
+./docs.it4i.cz/get-started-with-it4innovations/introduction
+./docs.it4i.cz/salomon/hardware-overview-1.1
+./docs.it4i.cz/salomon/resource-allocation-and-job-execution/job-priority
+./docs.it4i.cz/salomon/resource-allocation-and-job-execution/resources-allocation-policy
+./docs.it4i.cz/salomon/resource-allocation-and-job-execution/job-submission-and-execution
+./docs.it4i.cz/salomon/resource-allocation-and-job-execution/introduction
+./docs.it4i.cz/salomon/resource-allocation-and-job-execution/capacity-computing
+./docs.it4i.cz/salomon/storage
+./docs.it4i.cz/salomon/compute-nodes
+./docs.it4i.cz/salomon/resource-allocation-and-job-execution
+./docs.it4i.cz/salomon/network-1
+./docs.it4i.cz/salomon/prace
+./docs.it4i.cz/salomon/network-1/IB single-plane topology - Accelerated nodes.pdf/view
+./docs.it4i.cz/salomon/network-1/ib-single-plane-topology/IB single-plane topology - ICEX Mcell.pdf/view
+./docs.it4i.cz/salomon/network-1/ib-single-plane-topology/schematic-representation-of-the-salomon-cluster-ib-single-plain-topology-hypercube-dimension-0
+./docs.it4i.cz/salomon/network-1/ib-single-plane-topology
+./docs.it4i.cz/salomon/network-1/network
+./docs.it4i.cz/salomon/network-1/7d-enhanced-hypercube
+./docs.it4i.cz/salomon/accessing-the-cluster/vpn-access
+./docs.it4i.cz/salomon/accessing-the-cluster/graphical-user-interface/vnc
+./docs.it4i.cz/salomon/accessing-the-cluster/shell-and-data-access/shell-and-data-access
+./docs.it4i.cz/salomon/accessing-the-cluster/outgoing-connections
+./docs.it4i.cz/salomon/accessing-the-cluster/graphical-user-interface
+./docs.it4i.cz/salomon/software/debuggers/summary
+./docs.it4i.cz/salomon/software/debuggers/valgrind
 ./docs.it4i.cz/salomon/software/debuggers/total-view
 ./docs.it4i.cz/salomon/software/debuggers/allinea-performance-reports
 ./docs.it4i.cz/salomon/software/debuggers/mympiprog_32p_2014-10-15_16-56
@@ -286,6 +355,18 @@
 ./docs.it4i.cz/whats-new/news-feed/allinea-forge-5-1-installed-on-anselm
 ./docs.it4i.cz/whats-new/news-feed/intel-vtune-is-working
 ./docs.it4i.cz/whats-new/news-feed/vampir-installed
+./docs.it4i.cz/whats-new
+./docs.it4i.cz/salomon
+./docs.it4i.cz/whats-new/downtimes_history
+./docs.it4i.cz/whats-new/news-feed.1
+./docs.it4i.cz/whats-new/news-feed/salomon-pbs-changes
+./docs.it4i.cz/whats-new/news-feed/new-method-to-execute-parallel-matlab-jobs
+./docs.it4i.cz/whats-new/news-feed/new-versions-of-allinea-forge-and-performance-version
+./docs.it4i.cz/whats-new/news-feed/new-bioinformatic-tools-installed-fastqc-0-11-3-gatk-3-5-java-1-7-0_79-picard-2-1-0-samtools-1-3-foss-2015g-snpeff-4-1_g-trimmomatic-0-35-java-1-7.0_79
+./docs.it4i.cz/whats-new/news-feed/new-modules-for-parallel-programming-in-modern-fortran-course
+./docs.it4i.cz/whats-new/news-feed/allinea-forge-5-1-installed-on-anselm
+./docs.it4i.cz/whats-new/news-feed/intel-vtune-is-working
+./docs.it4i.cz/whats-new/news-feed/vampir-installed
 ./docs.it4i.cz/whats-new/news-feed/issue-with-intel-mpi-4-1-1-on-salomon
 ./docs.it4i.cz/whats-new/news-feed/added-basic-documentation-for-intel-advisor-and-intel-inspector
 ./docs.it4i.cz/whats-new/news-feed/ansys-17-0-installed
diff --git a/info/list_jpg.txt b/info/list_jpg.txt
index 59a233ee796753049ff3bc4543171874ade4fd02..2e86dd926f0233e1d5fa145db48e2e1aa51886d1 100644
--- a/info/list_jpg.txt
+++ b/info/list_jpg.txt
@@ -22,3 +22,75 @@
 ./docs.it4i.cz/anselm-cluster-documentation/firstrun.jpg
 ./docs.it4i.cz/anselm-cluster-documentation/login.jpg
 ./docs.it4i.cz/anselm-cluster-documentation/successfullconnection.jpg
+./docs.it4i.cz/salomon/gnome_screen.jpg
+./docs.it4i.cz/salomon/software/ansys/Fluent_Licence_2.jpg
+./docs.it4i.cz/salomon/software/ansys/Fluent_Licence_4.jpg
+./docs.it4i.cz/salomon/software/ansys/Fluent_Licence_1.jpg
+./docs.it4i.cz/salomon/software/ansys/Fluent_Licence_3.jpg
+./docs.it4i.cz/anselm-cluster-documentation/Anselmprofile.jpg
+./docs.it4i.cz/anselm-cluster-documentation/loginwithprofile.jpg
+./docs.it4i.cz/anselm-cluster-documentation/instalationfile.jpg
+./docs.it4i.cz/anselm-cluster-documentation/anyconnecticon.jpg
+./docs.it4i.cz/anselm-cluster-documentation/successfullinstalation.jpg
+./docs.it4i.cz/anselm-cluster-documentation/anyconnectcontextmenu.jpg
+./docs.it4i.cz/anselm-cluster-documentation/logingui.jpg
+./docs.it4i.cz/anselm-cluster-documentation/java_detection.jpg
+./docs.it4i.cz/anselm-cluster-documentation/executionaccess.jpg
+./docs.it4i.cz/anselm-cluster-documentation/icon.jpg
+./docs.it4i.cz/anselm-cluster-documentation/downloadfilesuccessfull.jpg
+./docs.it4i.cz/anselm-cluster-documentation/software/ansys/Fluent_Licence_2.jpg
+./docs.it4i.cz/anselm-cluster-documentation/software/ansys/Fluent_Licence_4.jpg
+./docs.it4i.cz/anselm-cluster-documentation/software/ansys/Fluent_Licence_1.jpg
+./docs.it4i.cz/anselm-cluster-documentation/software/ansys/Fluent_Licence_3.jpg
+./docs.it4i.cz/anselm-cluster-documentation/executionaccess2.jpg
+./docs.it4i.cz/anselm-cluster-documentation/firstrun.jpg
+./docs.it4i.cz/anselm-cluster-documentation/login.jpg
+./docs.it4i.cz/anselm-cluster-documentation/successfullconnection.jpg
+./docs.it4i.cz/salomon/gnome_screen.jpg
+./docs.it4i.cz/salomon/software/ansys/Fluent_Licence_2.jpg
+./docs.it4i.cz/salomon/software/ansys/Fluent_Licence_4.jpg
+./docs.it4i.cz/salomon/software/ansys/Fluent_Licence_1.jpg
+./docs.it4i.cz/salomon/software/ansys/Fluent_Licence_3.jpg
+./docs.it4i.cz/anselm-cluster-documentation/Anselmprofile.jpg
+./docs.it4i.cz/anselm-cluster-documentation/loginwithprofile.jpg
+./docs.it4i.cz/anselm-cluster-documentation/instalationfile.jpg
+./docs.it4i.cz/anselm-cluster-documentation/anyconnecticon.jpg
+./docs.it4i.cz/anselm-cluster-documentation/successfullinstalation.jpg
+./docs.it4i.cz/anselm-cluster-documentation/anyconnectcontextmenu.jpg
+./docs.it4i.cz/anselm-cluster-documentation/logingui.jpg
+./docs.it4i.cz/anselm-cluster-documentation/java_detection.jpg
+./docs.it4i.cz/anselm-cluster-documentation/executionaccess.jpg
+./docs.it4i.cz/anselm-cluster-documentation/icon.jpg
+./docs.it4i.cz/anselm-cluster-documentation/downloadfilesuccessfull.jpg
+./docs.it4i.cz/anselm-cluster-documentation/software/ansys/Fluent_Licence_2.jpg
+./docs.it4i.cz/anselm-cluster-documentation/software/ansys/Fluent_Licence_4.jpg
+./docs.it4i.cz/anselm-cluster-documentation/software/ansys/Fluent_Licence_1.jpg
+./docs.it4i.cz/anselm-cluster-documentation/software/ansys/Fluent_Licence_3.jpg
+./docs.it4i.cz/anselm-cluster-documentation/executionaccess2.jpg
+./docs.it4i.cz/anselm-cluster-documentation/firstrun.jpg
+./docs.it4i.cz/anselm-cluster-documentation/login.jpg
+./docs.it4i.cz/anselm-cluster-documentation/successfullconnection.jpg
+./docs.it4i.cz/salomon/gnome_screen.jpg
+./docs.it4i.cz/salomon/software/ansys/Fluent_Licence_2.jpg
+./docs.it4i.cz/salomon/software/ansys/Fluent_Licence_4.jpg
+./docs.it4i.cz/salomon/software/ansys/Fluent_Licence_1.jpg
+./docs.it4i.cz/salomon/software/ansys/Fluent_Licence_3.jpg
+./docs.it4i.cz/anselm-cluster-documentation/Anselmprofile.jpg
+./docs.it4i.cz/anselm-cluster-documentation/loginwithprofile.jpg
+./docs.it4i.cz/anselm-cluster-documentation/instalationfile.jpg
+./docs.it4i.cz/anselm-cluster-documentation/anyconnecticon.jpg
+./docs.it4i.cz/anselm-cluster-documentation/successfullinstalation.jpg
+./docs.it4i.cz/anselm-cluster-documentation/anyconnectcontextmenu.jpg
+./docs.it4i.cz/anselm-cluster-documentation/logingui.jpg
+./docs.it4i.cz/anselm-cluster-documentation/java_detection.jpg
+./docs.it4i.cz/anselm-cluster-documentation/executionaccess.jpg
+./docs.it4i.cz/anselm-cluster-documentation/icon.jpg
+./docs.it4i.cz/anselm-cluster-documentation/downloadfilesuccessfull.jpg
+./docs.it4i.cz/anselm-cluster-documentation/software/ansys/Fluent_Licence_2.jpg
+./docs.it4i.cz/anselm-cluster-documentation/software/ansys/Fluent_Licence_4.jpg
+./docs.it4i.cz/anselm-cluster-documentation/software/ansys/Fluent_Licence_1.jpg
+./docs.it4i.cz/anselm-cluster-documentation/software/ansys/Fluent_Licence_3.jpg
+./docs.it4i.cz/anselm-cluster-documentation/executionaccess2.jpg
+./docs.it4i.cz/anselm-cluster-documentation/firstrun.jpg
+./docs.it4i.cz/anselm-cluster-documentation/login.jpg
+./docs.it4i.cz/anselm-cluster-documentation/successfullconnection.jpg
diff --git a/info/list_png.txt b/info/list_png.txt
index b7c02a0b4b17b59ddcbad6b8ad681b5b2cea1dda..efc16bacc52d8b7832df42fe4210c1253e952b67 100644
--- a/info/list_png.txt
+++ b/info/list_png.txt
@@ -113,3 +113,348 @@
 ./docs.it4i.cz/portal_css/Sunburst Theme/pb_close.png
 ./docs.it4i.cz/touch_icon.png
 ./docs.it4i.cz/sh.png
+./docs.it4i.cz/download_icon.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/TightVNC_login.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/putty-tunnel.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/gnome-terminal.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/gdmscreensaver.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/gdmscreensaver.png/@@images/44048cfa-e854-4cb4-902b-c173821c2db1.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/gnome-compute-nodes-over-vnc.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/gdmdisablescreensaver.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/cygwin-and-x11-forwarding/cygwinX11forwarding.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/cygwin-and-x11-forwarding/cygwinX11forwarding.png/@@images/0f5b58e3-253c-4f87-a3b2-16f75cbf090f.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/cygwin-and-x11-forwarding/XWinlistentcp.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuttyKeygenerator_004V.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/20150312_143443.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PageantV.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuttyKeygenerator_001V.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuTTY_host_Salomon.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuTTY_keyV.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuttyKeygenerator_005V.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuTTY_save_Salomon.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuttyKeygeneratorV.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuTTY_open_Salomon.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuttyKeygenerator_002V.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuttyKeygenerator_006V.png
+./docs.it4i.cz/salomon/copy_of_vpn_web_install_3.png
+./docs.it4i.cz/salomon/vpn_contacting.png
+./docs.it4i.cz/salomon/resource-allocation-and-job-execution/rswebsalomon.png
+./docs.it4i.cz/salomon/vpn_successfull_connection.png
+./docs.it4i.cz/salomon/vpn_web_install_2.png
+./docs.it4i.cz/salomon/vpn_web_login_2.png
+./docs.it4i.cz/salomon/gnome_screen.jpg/@@images/7758b792-24eb-48dc-bf72-618cda100fda.png
+./docs.it4i.cz/salomon/network-1/ib-single-plane-topology/IBsingleplanetopologyAcceleratednodessmall.png
+./docs.it4i.cz/salomon/network-1/ib-single-plane-topology/IBsingleplanetopologyICEXMcellsmall.png
+./docs.it4i.cz/salomon/network-1/Salomon_IB_topology.png
+./docs.it4i.cz/salomon/network-1/7D_Enhanced_hypercube.png
+./docs.it4i.cz/salomon/vpn_web_login.png
+./docs.it4i.cz/salomon/vpn_login.png
+./docs.it4i.cz/salomon/software/debuggers/totalview2.png
+./docs.it4i.cz/salomon/software/debuggers/Snmekobrazovky20160211v14.27.45.png
+./docs.it4i.cz/salomon/software/debuggers/Snmekobrazovky20160211v14.27.45.png/@@images/3550e4ae-2eab-4571-8387-11a112dd6ca8.png
+./docs.it4i.cz/salomon/software/debuggers/ddt1.png
+./docs.it4i.cz/salomon/software/debuggers/totalview1.png
+./docs.it4i.cz/salomon/software/debuggers/Snmekobrazovky20160708v12.33.35.png
+./docs.it4i.cz/salomon/software/debuggers/Snmekobrazovky20160708v12.33.35.png/@@images/42d90ce5-8468-4edb-94bb-4009853d9f65.png
+./docs.it4i.cz/salomon/software/intel-suite/Snmekobrazovky20151204v15.35.12.png
+./docs.it4i.cz/salomon/software/intel-suite/Snmekobrazovky20151204v15.35.12.png/@@images/fb3b3ac2-a88f-4e55-a25e-23f1da2200cb.png
+./docs.it4i.cz/salomon/software/ansys/AMsetPar1.png
+./docs.it4i.cz/salomon/software/ansys/AMsetPar1.png/@@images/a34a45cc-9385-4f05-b12e-efadf1bd93bb.png
+./docs.it4i.cz/salomon/vpn_contacting_https_cluster.png
+./docs.it4i.cz/salomon/vpn_web_download.png
+./docs.it4i.cz/salomon/vpn_web_download_2.png
+./docs.it4i.cz/salomon/vpn_contacting_https.png
+./docs.it4i.cz/salomon/vpn_web_install_4.png
+./docs.it4i.cz/anselm-cluster-documentation/vncviewer.png
+./docs.it4i.cz/anselm-cluster-documentation/vncviewer.png/@@images/bb4cedff-4cb6-402b-ac79-039186fe5df3.png
+./docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/job_sort_formula.png
+./docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/fairshare_formula.png
+./docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/rsweb.png
+./docs.it4i.cz/anselm-cluster-documentation/quality2.png
+./docs.it4i.cz/anselm-cluster-documentation/turbovncclientsetting.png
+./docs.it4i.cz/anselm-cluster-documentation/Authorization_chain.png
+./docs.it4i.cz/anselm-cluster-documentation/scheme.png
+./docs.it4i.cz/anselm-cluster-documentation/quality3.png
+./docs.it4i.cz/anselm-cluster-documentation/legend.png
+./docs.it4i.cz/anselm-cluster-documentation/bullxB510.png
+./docs.it4i.cz/anselm-cluster-documentation/software/debuggers/vtune-amplifier/@@images/3d4533af-8ce5-4aed-9bac-09fbbcd2650a.png
+./docs.it4i.cz/anselm-cluster-documentation/software/debuggers/totalview2.png
+./docs.it4i.cz/anselm-cluster-documentation/software/debuggers/Snmekobrazovky20141204v12.56.36.png
+./docs.it4i.cz/anselm-cluster-documentation/software/debuggers/ddt1.png
+./docs.it4i.cz/anselm-cluster-documentation/software/debuggers/totalview1.png
+./docs.it4i.cz/anselm-cluster-documentation/software/numerical-languages/Matlab.png
+./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig2.png
+./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig5.png
+./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig6.png
+./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig3.png
+./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig7.png
+./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig1.png
+./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig8.png
+./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/table1.png
+./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig4.png
+./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig7x.png
+./docs.it4i.cz/anselm-cluster-documentation/software/omics-master-1/images/fig9.png
+./docs.it4i.cz/anselm-cluster-documentation/quality1.png
+./docs.it4i.cz/logo.png
+./docs.it4i.cz/pdf.png
+./docs.it4i.cz/++resource++jquery-ui-themes/sunburst/images/ui-bg_flat_75_aaaaaa_40x100.png
+./docs.it4i.cz/++resource++jquery-ui-themes/sunburst/images/ui-icons_205c90_256x240.png
+./docs.it4i.cz/++resource++jquery-ui-themes/sunburst/images/ui-bg_flat_50_75ad0a_40x100.png
+./docs.it4i.cz/++resource++jquery-ui-themes/sunburst/images/ui-icons_ffffff_256x240.png
+./docs.it4i.cz/++resource++jquery-ui-themes/sunburst/images/ui-bg_flat_100_ffffff_40x100.png
+./docs.it4i.cz/++resource++jquery-ui-themes/sunburst/images/ui-bg_flat_45_ffddcc_40x100.png
+./docs.it4i.cz/++resource++jquery-ui-themes/sunburst/images/ui-icons_444444_256x240.png
+./docs.it4i.cz/++resource++jquery-ui-themes/sunburst/images/ui-bg_flat_45_205c90_40x100.png
+./docs.it4i.cz/++resource++jquery-ui-themes/sunburst/images/ui-bg_flat_55_ffdd77_40x100.png
+./docs.it4i.cz/++resource++jquery-ui-themes/sunburst/images/ui-icons_dd0000_256x240.png
+./docs.it4i.cz/++resource++jquery-ui-themes/sunburst/images/ui-icons_dd8800_256x240.png
+./docs.it4i.cz/++resource++jquery-ui-themes/sunburst/images/ui-bg_flat_55_999999_40x100.png
+./docs.it4i.cz/++resource++jquery-ui-themes/sunburst/images/ui-bg_flat_75_dddddd_40x100.png
+./docs.it4i.cz/background.png
+./docs.it4i.cz/vpnuiV.png
+./docs.it4i.cz/png.png
+./docs.it4i.cz/search_icon.png
+./docs.it4i.cz/application.png
+./docs.it4i.cz/portal_css/Sunburst Theme/polaroid-multi.png
+./docs.it4i.cz/portal_css/Sunburst Theme/arrowRight.png
+./docs.it4i.cz/portal_css/Sunburst Theme/contenttypes-sprite.png
+./docs.it4i.cz/portal_css/Sunburst Theme/treeCollapsed.png
+./docs.it4i.cz/portal_css/Sunburst Theme/treeExpanded.png
+./docs.it4i.cz/portal_css/Sunburst Theme/link_icon_external.png
+./docs.it4i.cz/portal_css/Sunburst Theme/link_icon.png
+./docs.it4i.cz/portal_css/Sunburst Theme/polaroid-single.png
+./docs.it4i.cz/portal_css/Sunburst Theme/arrowDown.png
+./docs.it4i.cz/portal_css/Sunburst Theme/required.png
+./docs.it4i.cz/portal_css/Sunburst Theme/pb_close.png
+./docs.it4i.cz/touch_icon.png
+./docs.it4i.cz/sh.png
+./docs.it4i.cz/download_icon.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/TightVNC_login.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/putty-tunnel.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/gnome-terminal.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/gdmscreensaver.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/gdmscreensaver.png/@@images/44048cfa-e854-4cb4-902b-c173821c2db1.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/gnome-compute-nodes-over-vnc.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc/gdmdisablescreensaver.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/cygwin-and-x11-forwarding/cygwinX11forwarding.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/cygwin-and-x11-forwarding/cygwinX11forwarding.png/@@images/0f5b58e3-253c-4f87-a3b2-16f75cbf090f.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/cygwin-and-x11-forwarding/XWinlistentcp.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuttyKeygenerator_004V.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/20150312_143443.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PageantV.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuttyKeygenerator_001V.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuTTY_host_Salomon.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuTTY_keyV.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuttyKeygenerator_005V.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuTTY_save_Salomon.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuttyKeygeneratorV.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuTTY_open_Salomon.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuttyKeygenerator_002V.png
+./docs.it4i.cz/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/putty/PuttyKeygenerator_006V.png
+./docs.it4i.cz/salomon/copy_of_vpn_web_install_3.png
+./docs.it4i.cz/salomon/vpn_contacting.png
+./docs.it4i.cz/salomon/resource-allocation-and-job-execution/rswebsalomon.png
+./docs.it4i.cz/salomon/vpn_successfull_connection.png
+./docs.it4i.cz/salomon/vpn_web_install_2.png
+./docs.it4i.cz/salomon/vpn_web_login_2.png
+./docs.it4i.cz/salomon/gnome_screen.jpg/@@images/7758b792-24eb-48dc-bf72-618cda100fda.png
+./docs.it4i.cz/salomon/network-1/ib-single-plane-topology/IBsingleplanetopologyAcceleratednodessmall.png
+./docs.it4i.cz/salomon/network-1/ib-single-plane-topology/IBsingleplanetopologyICEXMcellsmall.png
+./docs.it4i.cz/salomon/network-1/Salomon_IB_topology.png
+./docs.it4i.cz/salomon/network-1/7D_Enhanced_hypercube.png
+./docs.it4i.cz/salomon/vpn_web_login.png
+./docs.it4i.cz/salomon/vpn_login.png
+./docs.it4i.cz/salomon/software/debuggers/totalview2.png
+./docs.it4i.cz/salomon/software/debuggers/Snmekobrazovky20160211v14.27.45.png
+./docs.it4i.cz/salomon/software/debuggers/Snmekobrazovky20160211v14.27.45.png/@@images/3550e4ae-2eab-4571-8387-11a112dd6ca8.png
+./docs.it4i.cz/salomon/software/debuggers/ddt1.png
+./docs.it4i.cz/salomon/software/debuggers/totalview1.png
+./docs.it4i.cz/salomon/software/debuggers/Snmekobrazovky20160708v12.33.35.png
+./docs.it4i.cz/salomon/software/debuggers/Snmekobrazovky20160708v12.33.35.png/@@images/42d90ce5-8468-4edb-94bb-4009853d9f65.png
+./docs.it4i.cz/salomon/software/intel-suite/Snmekobrazovky20151204v15.35.12.png
+./docs.it4i.cz/salomon/software/intel-suite/Snmekobrazovky20151204v15.35.12.png/@@images/fb3b3ac2-a88f-4e55-a25e-23f1da2200cb.png
+./docs.it4i.cz/salomon/software/ansys/AMsetPar1.png
+./docs.it4i.cz/salomon/software/ansys/AMsetPar1.png/@@images/a34a45cc-9385-4f05-b12e-efadf1bd93bb.png
+./docs.it4i.cz/salomon/vpn_contacting_https_cluster.png
+./docs.it4i.cz/salomon/vpn_web_download.png
+./docs.it4i.cz/salomon/vpn_web_download_2.png
+./docs.it4i.cz/salomon/vpn_contacting_https.png
+./docs.it4i.cz/salomon/vpn_web_install_4.png