diff --git a/docs.it4i/software/compilers.md b/docs.it4i/software/compilers.md
new file mode 100644
index 0000000000000000000000000000000000000000..8e0d8dee43fd99bb058dc3d5f8b125c1f74891a8
--- /dev/null
+++ b/docs.it4i/software/compilers.md
@@ -0,0 +1,194 @@
+# Compilers
+
+## Available compilers, including GNU, Intel, and UPC compilers
+
+There are several compilers for different programming languages available on the clusters:
+
+* C/C++
+* Fortran 77/90/95/HPF
+* Unified Parallel C
+* Java
+* NVIDIA CUDA (only on Anselm)
+
+The C/C++ and Fortran compilers are provided by:
+
+Opensource:
+
+* GNU GCC
+* Clang/LLVM
+
+Commercial licenses:
+
+* Intel
+* PGI
+
+## Intel Compilers
+
+For information about the usage of the Intel compilers and other Intel products, please read the [Intel Parallel Studio](intel-suite/intel-compilers/) page.
+
+## PGI Compilers (only on Salomon)
+
+The Portland Group Cluster Development Kit (PGI CDK) is available on Salomon.
+
+```console
+$ module load PGI
+$ pgcc -v
+$ pgc++ -v
+$ pgf77 -v
+$ pgf90 -v
+$ pgf95 -v
+$ pghpf -v
+```
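+
+A minimal compile line with the PGI C compiler might look like the following sketch; `myprog.c` is a hypothetical source file, and `-fast` selects a set of common optimizations:
+
+```console
+$ pgcc -fast -o myprog.x myprog.c
+```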
+
+The PGI CDK also includes tools for debugging and profiling.
+
+The PGDBG OpenMP/MPI debugger and the PGPROF OpenMP/MPI profiler are available:
+
+```console
+$ module load PGI
+$ module load Java
+$ pgdbg &
+$ pgprof &
+```
+
+For more information, see the [PGI page](http://www.pgroup.com/products/pgicdk.htm).
+
+## GNU
+
+For compatibility reasons, the original (old 4.4.7-11) versions of the GNU compilers are still available as part of the OS and are in the default search path.
+
+It is strongly recommended to use the up-to-date versions provided by the module GCC:
+
+```console
+$ module load GCC
+$ gcc -v
+$ g++ -v
+$ gfortran -v
+```
+
+With the module loaded, two environment variables are predefined: one for maximum optimization on the cluster's architecture, and one for debugging purposes:
+
+```console
+$ echo $OPTFLAGS
+-O3 -march=native
+
+$ echo $DEBUGFLAGS
+-O0 -g
+```
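+
+For illustration, these variables can be passed straight to the compiler; this is a sketch, with `myprog.c` as a hypothetical source file:
+
+```console
+$ gcc $OPTFLAGS -o myprog.x myprog.c        # optimized production build
+$ gcc $DEBUGFLAGS -o myprog-dbg.x myprog.c  # build for debugging
+```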
+
+For more information about the possibilities of the compilers, please see the man pages.
+
+## Unified Parallel C
+
+UPC is supported by two compiler/runtime implementations:
+
+* GNU - SMP/multi-threading support only
+* Berkeley - multi-node support as well as SMP/multi-threading support
+
+### GNU UPC Compiler
+
+To use the GNU UPC compiler and run the compiled binaries, use the module gupc:
+
+```console
+$ module add gupc
+$ gupc -v
+$ g++ -v
+```
+
+A simple program to test the compiler:
+
+```cpp
+$ cat count.upc
+
+/* count.upc - a simple UPC example */
+#include <upc.h>
+#include <stdio.h>
+
+int main() {
+  if (MYTHREAD == 0) {
+    printf("Welcome to GNU UPC!!!n");
+  }
+  upc_barrier;
+  printf(" - Hello from thread %in", MYTHREAD);
+  return 0;
+}
+```
+
+To compile the example, use:
+
+```console
+$ gupc -o count.upc.x count.upc
+```
+
+To run the example with 5 threads, issue:
+
+```console
+$ ./count.upc.x -fupc-threads-5
+```
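+
+Alternatively, the number of threads can be fixed at compile time; this is a sketch using the GNU UPC static-threads option:
+
+```console
+$ gupc -fupc-threads-5 -o count.upc.x count.upc
+$ ./count.upc.x
+```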
+
+For more information, see the man pages.
+
+### Berkeley UPC Compiler
+
+To use the Berkeley UPC compiler and runtime environment to run the binaries, use the module BerkeleyUPC (bupc on Anselm):
+
+```console
+$ module add BerkeleyUPC/2.16.2-gompi-2015b   # on Anselm: ml bupc
+$ upcc -version
+```
+
+By default, the "smp" UPC network is used. This is a quick and easy way to test and debug, but it is limited to a single node.
+
+For production runs, it is recommended to use the native InfiniBand implementation of the UPC network, "ibv". For testing/debugging across multiple nodes, the "mpi" UPC network is recommended.
+
+!!! warning
+    Selection of the network is done at compile time, not at runtime (as one might expect)!
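+
+For illustration, the same source can be compiled once per network; this is a sketch using the `hello.upc` example shown below:
+
+```console
+$ upcc -network=smp -o hello-smp.upc.x hello.upc   # single-node testing
+$ upcc -network=mpi -o hello-mpi.upc.x hello.upc   # multi-node testing/debugging
+$ upcc -network=ibv -o hello-ibv.upc.x hello.upc   # production runs over InfiniBand
+```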
+
+Example UPC code:
+
+```cpp
+$ cat hello.upc
+
+/* hello.upc - a simple UPC example */
+#include <upc.h>
+#include <stdio.h>
+
+int main() {
+  if (MYTHREAD == 0) {
+    printf("Welcome to Berkeley UPC!!!n");
+  }
+  upc_barrier;
+  printf(" - Hello from thread %in", MYTHREAD);
+  return 0;
+}
+```
+
+To compile the example with the "ibv" UPC network, use:
+
+```console
+$ upcc -network=ibv -o hello.upc.x hello.upc
+```
+
+To run the example with 5 threads, issue:
+
+```console
+$ upcrun -n 5 ./hello.upc.x
+```
+
+To run the example on two compute nodes using all 48 cores with 48 threads (on Anselm, 32 cores and 32 threads), issue:
+
+```console
+$ qsub -I -q qprod -A PROJECT_ID -l select=2:ncpus=24
+$ module add bupc
+$ upcrun -n 48 ./hello.upc.x
+```
+
+For more information, see the man pages.
+
+## Java
+
+For information on how to use Java (runtime and/or compiler), please read the [Java page](java/).
+
+## NVIDIA CUDA
+
+For information on how to work with NVIDIA CUDA, please read the [NVIDIA CUDA page](../anselm/software/nvidia-cuda/).
diff --git a/docs.it4i/software/debuggers/scalasca.md b/docs.it4i/software/debuggers/scalasca.md
index 653c0f78de238809d9ac2f038d5de361065157fa..f8f1db9d0249bf7f25b448f3e017aadddff08181 100644
--- a/docs.it4i/software/debuggers/scalasca.md
+++ b/docs.it4i/software/debuggers/scalasca.md
@@ -43,7 +43,7 @@ Some notable Scalasca options are:
 * **-e &lt;directory> Specify a directory to save the collected data to. By default, Scalasca saves the data to a directory with the prefix scorep\_, followed by the name of the executable and launch configuration.**
 
 !!! note
-    Scalasca can generate a huge amount of data, especially if tracing is enabled. Please consider saving the data to a [scratch directory](../../storage/storage/).
+    Scalasca can generate a huge amount of data, especially if tracing is enabled. Please consider saving the data to a [scratch directory](../../salomon/storage/).
 
 ### Analysis of Reports
 
diff --git a/docs.it4i/software/intel-suite/intel-debugger.md b/docs.it4i/software/intel-suite/intel-debugger.md
index 9bd08cdcf1be6d33c9633105315f79e0c87f5e07..ac7cec6ad56acbc3705fcdc478531e2cade64c47 100644
--- a/docs.it4i/software/intel-suite/intel-debugger.md
+++ b/docs.it4i/software/intel-suite/intel-debugger.md
@@ -4,10 +4,10 @@ IDB is no longer available since Intel Parallel Studio 2015
 
 ## Debugging Serial Applications
 
-The intel debugger version 13.0 is available, via module intel. The debugger works for applications compiled with C and C++ compiler and the ifort fortran 77/90/95 compiler. The debugger provides java GUI environment. Use [X display](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/) for running the GUI.
+The Intel debugger version 13.5.192 is available via the module intel/13.5.192. The debugger works for applications compiled with the C and C++ compilers and the ifort Fortran 77/90/95 compiler. The debugger provides a Java GUI environment. Use [X display](../../general/accessing-the-clusters/graphical-user-interface/x-window-system/) for running the GUI.
 
 ```console
-$ ml intel
+$ ml intel/13.5.192
 $ ml Java
 $ idb
 ```
@@ -18,7 +18,7 @@ The debugger may run in text mode. To debug in text mode, use
 $ idbc
 ```
 
-To debug on the compute nodes, module intel must be loaded. The GUI on compute nodes may be accessed using the same way as in [the GUI section](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/)
+To debug on the compute nodes, the module intel must be loaded. The GUI on the compute nodes may be accessed in the same way as described in [the GUI section](../../general/accessing-the-clusters/graphical-user-interface/x-window-system/).
 
 Example:
 
@@ -40,7 +40,7 @@ In this example, we allocate 1 full compute node, compile program myprog.c with
 
 ### Small Number of MPI Ranks
 
-For debugging small number of MPI ranks, you may execute and debug each rank in separate xterm terminal (do not forget the [X display](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/)). Using Intel MPI, this may be done in following way:
+For debugging a small number of MPI ranks, you may execute and debug each rank in a separate xterm terminal (do not forget the [X display](../../general/accessing-the-clusters/graphical-user-interface/x-window-system/)). Using Intel MPI, this may be done in the following way:
 
 ```console
 $ qsub -q qexp -l select=2:ncpus=24 -X -I
diff --git a/docs.it4i/software/intel-suite/intel-trace-analyzer-and-collector.md b/docs.it4i/software/intel-suite/intel-trace-analyzer-and-collector.md
index 9cae361ca43dccb382bd5b09f5c5a9d270e0414c..b7bf6c92d3a03112392a86078037aeff28e8623f 100644
--- a/docs.it4i/software/intel-suite/intel-trace-analyzer-and-collector.md
+++ b/docs.it4i/software/intel-suite/intel-trace-analyzer-and-collector.md
@@ -21,7 +21,7 @@ The trace will be saved in file myapp.stf in the current directory.
 
 ## Viewing Traces
 
-To view and analyze the trace, open ITAC GUI in a [graphical environment](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/):
+To view and analyze the trace, open ITAC GUI in a [graphical environment](../../general/accessing-the-clusters/graphical-user-interface/x-window-system/):
 
 ```console
 $ ml itac/9.1.2.024
@@ -30,7 +30,7 @@ $ traceanalyzer
 
 The GUI will launch and you can open the produced `*`.stf file.
 
-![](../../../img/Snmekobrazovky20151204v15.35.12.png)
+![](../../img/Snmekobrazovky20151204v15.35.12.png)
 
 Please refer to the Intel documentation about usage of the GUI tool.
 
diff --git a/docs.it4i/software/machine-learning/introduction.md b/docs.it4i/software/machine-learning/introduction.md
index d9ac0000720aabaf3f55a3d43c6c37d2b08c1b6b..9b1db80372f81bc32683e03df033c9c4aee75846 100644
--- a/docs.it4i/software/machine-learning/introduction.md
+++ b/docs.it4i/software/machine-learning/introduction.md
@@ -16,4 +16,4 @@ Test module:
 $ ml Tensorflow
 ```
 
-Read more about available versions at the [TensorFlow page](tensorflow).
+Read more about available versions at the [TensorFlow page](tensorflow/).
diff --git a/docs.it4i/software/mpi/mpi.md b/docs.it4i/software/mpi/mpi.md
index 8f3b580fb1169197bceaa4d8a3e614b79b764a3e..b307a96223a47fd3b8ff86681e2e8b0f7a483d60 100644
--- a/docs.it4i/software/mpi/mpi.md
+++ b/docs.it4i/software/mpi/mpi.md
@@ -39,7 +39,7 @@ Examples:
 $ ml gompi/2015b
 ```
 
-In this example, we activate the latest OpenMPI with latest GNU compilers (OpenMPI 1.8.6 and GCC 5.1). Please see more information about toolchains in section [Environment and Modules](../../environment-and-modules/) .
+In this example, we activate the latest OpenMPI with the latest GNU compilers (OpenMPI 1.8.6 and GCC 5.1). For more information about toolchains, see the [Environment and Modules](../../modules-matrix/) section.
 
 To use OpenMPI with the intel compiler suite, use
 
diff --git a/docs.it4i/software/paraview.md b/docs.it4i/software/paraview.md
new file mode 100644
index 0000000000000000000000000000000000000000..7e2bae9a95bc33c6f83756188a5c1c54e4037892
--- /dev/null
+++ b/docs.it4i/software/paraview.md
@@ -0,0 +1,88 @@
+# ParaView
+
+Open-Source, Multi-Platform Data Analysis and Visualization Application
+
+## Introduction
+
+**ParaView** is an open-source, multi-platform data analysis and visualization application. ParaView users can quickly build visualizations to analyze their data using qualitative and quantitative techniques. The data exploration can be done interactively in 3D or programmatically using ParaView's batch processing capabilities.
+
+ParaView was developed to analyze extremely large datasets using distributed memory computing resources. It can be run on supercomputers to analyze datasets of exascale size as well as on laptops for smaller data.
+
+Homepage: <http://www.paraview.org/>
+
+## Installed Version
+
+Currently, version 5.1.2, compiled with intel/2017a against the Intel MPI library and OSMesa 12.0.2, is installed on the clusters.
+
+## Usage
+
+On the clusters, ParaView is to be used in client-server mode. A parallel ParaView server is launched on compute nodes by the user, and a client is launched on your desktop PC to control and view the visualization. Download the ParaView client application for your OS here: <http://paraview.org/paraview/resources/software.php>.
+
+!!! warning
+    Your version must match the version number installed on the cluster.
+
+### Launching Server
+
+To launch the server, you must first allocate compute nodes, for example
+
+```console
+$ qsub -I -q qprod -A OPEN-0-0 -l select=2
+```
+
+to launch an interactive session on 2 nodes. Refer to [Resource Allocation and Job Execution](../salomon/job-submission-and-execution/) for details.
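+
+On Anselm, an equivalent allocation might look like this sketch (Anselm nodes have 16 cores per node; OPEN-0-0 stands for your project ID):
+
+```console
+$ qsub -I -q qprod -A OPEN-0-0 -l select=2:ncpus=16
+```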
+
+After the interactive session is opened, load the ParaView module (the following examples are for Salomon; Anselm instructions are in the comments):
+
+```console
+$ ml ParaView/5.1.2-intel-2017a-mpi
+```
+
+Now launch the parallel server, with the number of processes equal to the number of nodes times 24 (times 16 on Anselm):
+
+```console
+$ mpirun -np 48 pvserver --use-offscreen-rendering
+    Waiting for client...
+    Connection URL: cs://r37u29n1006:11111
+    Accepting connection(s): r37u29n1006:11111
+
+Anselm:
+$ mpirun -np 32 pvserver --use-offscreen-rendering
+    Waiting for client...
+    Connection URL: cs://cn77:11111
+    Accepting connection(s): cn77:11111
+```
+
+Note that the server is listening on compute node r37u29n1006 in this case; we shall use this information later.
+
+### Client Connection
+
+Because a direct connection to the compute nodes is not allowed, you must establish an SSH tunnel to connect to the server. Choose a port number on your PC to be forwarded to the ParaView server, for example 12345. If your PC is running Linux, use this command to establish the SSH tunnel:
+
+```console
+Salomon: $ ssh -TN -L 12345:r37u29n1006:11111 username@salomon.it4i.cz
+Anselm: $ ssh -TN -L 12345:cn77:11111 username@anselm.it4i.cz
+```
+
+Replace username with your login and r37u29n1006 (cn77) with the name of the compute node your ParaView server is running on (see the previous step).
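+
+On Windows, the same tunnel can alternatively be created from the command line with PuTTY's plink tool; this is a sketch, assuming plink.exe is in your PATH:
+
+```console
+plink.exe -N -L 12345:r37u29n1006:11111 username@salomon.it4i.cz
+```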
+
+If you use PuTTY on Windows, load the Salomon connection configuration, then go to *Connection* -> *SSH* -> *Tunnels* to set up the port forwarding.
+
+Fill in the Source port and Destination fields. **Do not forget to click the Add button.**
+
+![](../../img/paraview_ssh_tunnel_salomon.png "SSH Tunnel in PuTTY")
+
+Now launch the ParaView client installed on your desktop PC. Select *File* -> *Connect...* and fill in the following:
+
+![](../../img/paraview_connect_salomon.png "ParaView - Connect to server")
+
+The configuration is now saved for later use. Now click Connect to connect to the ParaView server. In the terminal where you have the interactive session with the ParaView server launched, you should see:
+
+```console
+Client connected.
+```
+
+You can now use Parallel ParaView.
+
+### Close Server
+
+Remember to close the interactive session after you finish working with the ParaView server, as it will remain running even after your client disconnects and will continue to consume resources.
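+
+A sketch of the cleanup, assuming the server runs in an interactive PBS session: stop pvserver with Ctrl+C if it is still running, then leave the session to release the allocated nodes:
+
+```console
+$ exit
+```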
diff --git a/mkdocs.yml b/mkdocs.yml
index 354fdc9972cad8b853bc72f8050a749231fa1efa..ae243b0ea26398ff3c29b74d929b42fbd8d5ccf3 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -79,6 +79,7 @@ pages:
       - Orca: software/chemistry/orca.md
       - NWChem: software/chemistry/nwchem.md
       - Phono3py: software/chemistry/phono3py.md
+    - Compilers: software/compilers.md
     - 'Debuggers':
       - Introduction: software/debuggers/Introduction.md
       - Aislinn: software/debuggers/aislinn.md
@@ -130,6 +131,7 @@ pages:
       - Trilinos: software/numerical-libraries/trilinos.md
     - OpenFOAM: software/openfoam.md
     - Operating System: software/operating-system.md
+    - ParaView: software/paraview.md
     - 'Tools':
       - EasyBuild: software/easybuild.md
       - Singularity Container: software/singularity.md
@@ -137,12 +139,9 @@ pages:
     - Salomon Software:
       - Available Modules: modules-salomon.md
       - Available Modules on UV: modules-salomon-uv.md
-      - Compilers: salomon/software/compilers.md
       - Intel Xeon Phi: salomon/software/intel-xeon-phi.md
-      - ParaView: salomon/software/paraview.md
     - Anselm Software:
       - Available Modules: modules-anselm.md
-      - Compilers: anselm/software/compilers.md
       - GPI-2: anselm/software/gpi2.md
       - Intel Xeon Phi: anselm/software/intel-xeon-phi.md
       - NVIDIA CUDA: anselm/software/nvidia-cuda.md
@@ -150,7 +149,6 @@ pages:
         - Diagnostic Component (TEAM): anselm/software/omics-master/diagnostic-component-team.md
         - Priorization Component (BiERApp): anselm/software/omics-master/priorization-component-bierapp.md
         - Overview: anselm/software/omics-master/overview.md
-      - ParaView: anselm/software/paraview.md
       - Virtualization: anselm/software/virtualization.md
   - PBS Pro Documentation: pbspro.md
 
diff --git a/todelete b/todelete
index 67463a2a3515b52cdb17c3346e75eab7f6018f99..b42e29bda03bfe0d52ee37f5493b08f4938c2208 100644
--- a/todelete
+++ b/todelete
@@ -77,3 +77,7 @@ docs.it4i/salomon/software/numerical-languages/r.md
 ./docs.it4i/salomon/software/intel-suite/intel-parallel-studio-introduction.md
 ./docs.it4i/salomon/software/intel-suite/intel-tbb.md
 ./docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md
+./docs.it4i/anselm/software/paraview.md
+./docs.it4i/anselm/software/compilers.md
+./docs.it4i/salomon/software/compilers.md
+./docs.it4i/salomon/software/paraview.md