diff --git a/docs.it4i/software/chemistry/molpro.md b/docs.it4i/software/chemistry/molpro.md
index 08dedda92e5b31246e20597f69cd3dc35ceecdd8..2fb61643afd70154ca9870375bae76ff27188805 100644
--- a/docs.it4i/software/chemistry/molpro.md
+++ b/docs.it4i/software/chemistry/molpro.md
@@ -35,7 +35,7 @@ Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molp
 !!! note
     The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend to use MPI parallelization only. This can be achieved by passing option mpiprocs=16:ompthreads=1 to PBS.
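+
+For example, an MPI-only run can be requested at submission time along these lines (the queue name, node count, and jobscript name are illustrative placeholders):
+
+```console
+$ qsub -q qprod -l select=2:ncpus=16:mpiprocs=16:ompthreads=1 ./myjob.sh
+```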
 
-You are advised to use the -d option to point to a directory in [SCRATCH file system](../../storage/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch file system.
+You are advised to use the -d option to point to a directory in the [SCRATCH file system - Salomon](../../salomon/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that this data is placed in the fast scratch file system.
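+
+For illustration, a minimal sketch of such an invocation (the input file name and scratch path are placeholders):
+
+```console
+$ molpro -d /scratch/work/user/$USER/molpro_tmp input.com
+```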
 
 ### Example jobscript
 
diff --git a/docs.it4i/software/chemistry/nwchem.md b/docs.it4i/software/chemistry/nwchem.md
index 7e7b62b2380841eb308bb36e67bccd3c68e5d936..41c2006e414243c979e987bfcbcfb85e932df72c 100644
--- a/docs.it4i/software/chemistry/nwchem.md
+++ b/docs.it4i/software/chemistry/nwchem.md
@@ -33,4 +33,4 @@ mpirun nwchem h2o.nw
 Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and in the input file set the following directives :
 
 * MEMORY : controls the amount of memory NWChem will use
-* SCRATCH_DIR : set this to a directory in [SCRATCH filesystem](../../storage/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, eg. "scf direct"
+* SCRATCH_DIR : set this to a directory in the [SCRATCH filesystem - Salomon](../../salomon/storage/) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct" (see the sketch below)
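+
+A minimal sketch of how these directives might look in an input file (the memory size and scratch path are illustrative placeholders, not recommended values):
+
+```
+memory total 2000 mb
+scratch_dir /scratch/work/user/myuser/nwchem-tmp
+scf
+  direct
+end
+```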
diff --git a/docs.it4i/software/chemistry/phono3py.md b/docs.it4i/software/chemistry/phono3py.md
index 5f366baa1e6acb0cb948cd473a9acb65243691c8..884f25cbf89a4b43afa3afc9a02fdbec171c9383 100644
--- a/docs.it4i/software/chemistry/phono3py.md
+++ b/docs.it4i/software/chemistry/phono3py.md
@@ -18,7 +18,7 @@ $ ml phono3py
 
 ### Calculating Force Constants
 
-One needs to calculate second order and third order force constants using the diamond structure of silicon stored in [POSCAR](poscar-si)  (the same form as in VASP) using single displacement calculations within supercell.
+One needs to calculate the second-order and third-order force constants using the diamond structure of silicon stored in [POSCAR](POSCAR) (the same format as in VASP), using single-displacement calculations within a supercell.
 
 ```console
 $ cat POSCAR
diff --git a/docs.it4i/software/comsol/comsol-multiphysics.md b/docs.it4i/software/comsol/comsol-multiphysics.md
index bf9e92704f01d6bcb6f337acbbfa1b3bb64b26fe..c5170bfcffbafc2e8e744cf97ff1b7501f2c6b0b 100644
--- a/docs.it4i/software/comsol/comsol-multiphysics.md
+++ b/docs.it4i/software/comsol/comsol-multiphysics.md
@@ -76,7 +76,7 @@ Working directory has to be created before sending the (comsol.pbs) job script i
 
 COMSOL is the software package for the numerical solution of the partial differential equations. LiveLink for MATLAB allows connection to the COMSOL API (Application Programming Interface) with the benefits of the programming language and computing environment of the MATLAB.
 
-LiveLink for MATLAB is available in both **EDU** and **COM** **variant** of the COMSOL release. On the clusters 1 commercial (**COM**) license and the 5 educational (**EDU**) licenses of LiveLink for MATLAB (please see the [ISV Licenses](../../../anselm/software/isv_licenses/)) are available. Following example shows how to start COMSOL model from MATLAB via LiveLink in the interactive mode (on Anselm use 16 threads).
+LiveLink for MATLAB is available in both the **EDU** and **COM** **variants** of the COMSOL release. On the clusters, 1 commercial (**COM**) license and 5 educational (**EDU**) licenses of LiveLink for MATLAB are available (please see the [ISV Licenses](../isv_licenses/)). The following example shows how to start a COMSOL model from MATLAB via LiveLink in interactive mode (on Anselm, use 16 threads).
 
 ```console
 $ xhost +
diff --git a/docs.it4i/software/numerical-languages/matlab.md b/docs.it4i/software/numerical-languages/matlab.md
index c1d52e46fc6c21e669fc5e5488d4004d743af1dc..e3bccc1a9ae9f976509dea53ad6cf4b1ac11302a 100644
--- a/docs.it4i/software/numerical-languages/matlab.md
+++ b/docs.it4i/software/numerical-languages/matlab.md
@@ -21,9 +21,9 @@ $ ml av MATLAB
 
 If you need to use the Matlab GUI to prepare your Matlab programs, you can use Matlab directly on the login nodes. But for all computations use Matlab on the compute nodes via PBS Pro scheduler.
 
-If you require the Matlab GUI, please follow the general information about [running graphical applications](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/).
+If you require the Matlab GUI, please follow the general information about [running graphical applications](../../general/accessing-the-clusters/graphical-user-interface/x-window-system/).
 
-Matlab GUI is quite slow using the X forwarding built in the PBS (qsub -X), so using X11 display redirection either via SSH or directly by xauth (please see the "GUI Applications on Compute Nodes over VNC" part [here](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/)) is recommended.
+The Matlab GUI is quite slow when using the X forwarding built into PBS (qsub -X), so X11 display redirection, either via SSH or directly by xauth (please see the "GUI Applications on Compute Nodes over VNC" part [here](../../general/accessing-the-clusters/graphical-user-interface/x-window-system/)), is recommended.
 
 To run Matlab with GUI, use
 
@@ -68,7 +68,7 @@ With the new mode, MATLAB itself launches the workers via PBS, so you can either
 
 ### Parallel Matlab Interactive Session
 
-Following example shows how to start interactive session with support for Matlab GUI. For more information about GUI based applications on Anselm see [this page](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/).
+The following example shows how to start an interactive session with support for the Matlab GUI. For more information about GUI-based applications on Anselm, see [this page](../../general/accessing-the-clusters/graphical-user-interface/x-window-system/).
 
 ```console
 $ xhost +
@@ -249,11 +249,11 @@ delete(pool)
 
 ### Non-Interactive Session and Licenses
 
-If you want to run batch jobs with Matlab, be sure to request appropriate license features with the PBS Pro scheduler, at least the `-l __feature__matlab__MATLAB=1` for EDU variant of Matlab. More information about how to check the license features states and how to request them with PBS Pro, please [look here](../../../anselm/software/isv_licenses/).
+If you want to run batch jobs with Matlab, be sure to request the appropriate license features with the PBS Pro scheduler, at least `-l __feature__matlab__MATLAB=1` for the EDU variant of Matlab. For more information about how to check the license feature states and how to request them with PBS Pro, please [look here](../isv_licenses/).
 
 The licensing feature of PBS is currently disabled.
 
-In case of non-interactive session please read the [following information](../../../anselm/software/isv_licenses/) on how to modify the qsub command to test for available licenses prior getting the resource allocation.
+In case of a non-interactive session, please read the [following information](../isv_licenses/) on how to modify the qsub command to test for available licenses prior to getting the resource allocation.
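+
+For illustration, a hypothetical batch submission requesting the feature (the project, queue, and resource values are placeholders; the feature request only has an effect while the PBS licensing feature is enabled):
+
+```console
+$ qsub -A PROJECT_ID -q qprod -l select=1:ncpus=16 -l __feature__matlab__MATLAB=1 ./jobscript
+```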
 
 ### Matlab Distributed Computing Engines Start Up Time
 
diff --git a/docs.it4i/software/numerical-languages/matlab_1314.md b/docs.it4i/software/numerical-languages/matlab_1314.md
index 5c0c1bc7e2004a0ab0b86d342fc92a7a6e9af5cf..1c2d29d3a3053d9a7bec0c5fc777fb024f0be369 100644
--- a/docs.it4i/software/numerical-languages/matlab_1314.md
+++ b/docs.it4i/software/numerical-languages/matlab_1314.md
@@ -46,7 +46,7 @@ Plots, images, etc... will be still available.
 
 Recommended parallel mode for running parallel Matlab on Anselm is MPIEXEC mode. In this mode user allocates resources through PBS prior to starting Matlab. Once resources are granted the main Matlab instance is started on the first compute node assigned to job by PBS and workers are started on all remaining nodes. User can use both interactive and non-interactive PBS sessions. This mode guarantees that the data processing is not performed on login nodes, but all processing is on compute nodes.
 
-![Parallel Matlab](../../../img/Matlab.png "Parallel Matlab")
+![Parallel Matlab](../../img/Matlab.png "Parallel Matlab")
 
 For the performance reasons Matlab should use system MPI. On Anselm the supported MPI implementation for Matlab is Intel MPI. To switch to system MPI user has to override default Matlab setting by creating new configuration file in its home directory. The path and file name has to be exactly the same as in the following listing:
 
diff --git a/docs.it4i/software/numerical-languages/octave.md b/docs.it4i/software/numerical-languages/octave.md
index e41a465ae87d98cfca8a3ff59d21f3ee2dbf5a83..4d96754cc4eccec854c1d5c88d8b43161e65e0d4 100644
--- a/docs.it4i/software/numerical-languages/octave.md
+++ b/docs.it4i/software/numerical-languages/octave.md
@@ -48,7 +48,7 @@ cp output.out $PBS_O_WORKDIR/.
 exit
 ```
 
-This script may be submitted directly to the PBS workload manager via the qsub command.  The inputs are in octcode.m file, outputs in output.out file. See the single node jobscript example in the [Job execution section](../../job-submission-and-execution/).
+This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the octcode.m file, the outputs in the output.out file. See the single-node jobscript example in the [Job execution section](../../salomon/job-submission-and-execution/).
 
 The octave c compiler mkoctfile calls the GNU gcc 4.8.1 for compiling native c code. This is very useful for running native c subroutines in octave environment.
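+
+As a quick illustration (the source file name is hypothetical), a native oct-file can be compiled and then called from Octave like any other function:
+
+```console
+$ mkoctfile myfunction.cc
+```
+
+This produces myfunction.oct, which Octave loads automatically the first time myfunction() is called.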
 
@@ -60,7 +60,7 @@ Octave may use MPI for interprocess communication This functionality is currentl
 
 ## Xeon Phi Support
 
-Octave may take advantage of the Xeon Phi accelerators. This will only work on the  [Intel Xeon Phi](../intel-xeon-phi/) [accelerated nodes](../../compute-nodes/).
+Octave may take advantage of the Xeon Phi accelerators. This will only work on the [Intel Xeon Phi](../../salomon/software/intel-xeon-phi/) [accelerated nodes](../../salomon/compute-nodes/).
 
 ### Automatic Offload Support
 
diff --git a/docs.it4i/software/numerical-languages/opencoarrays.md b/docs.it4i/software/numerical-languages/opencoarrays.md
index ada43753a67e0a949cd8d2b6995cf3f56e2b736f..5949756f08be6ef5bb3d32cd9843282b875f5078 100644
--- a/docs.it4i/software/numerical-languages/opencoarrays.md
+++ b/docs.it4i/software/numerical-languages/opencoarrays.md
@@ -124,4 +124,4 @@ $ mpiexec -np 4 ./synchronization_test.x
 
 **-np 4** is number of images to run. The parameters of **cafrun** and **mpiexec** are the same.
 
-For more information about running CAF program please follow [Running OpenMPI](../mpi/Running_OpenMPI.md)
+For more information about running a CAF program, please follow [Running OpenMPI - Salomon](../../salomon/software/mpi/Running_OpenMPI.md).
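+
+Since the parameters are the same, the run above could equally be launched through **cafrun** (reusing the image count and test binary from the mpiexec example):
+
+```console
+$ cafrun -np 4 ./synchronization_test.x
+```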
diff --git a/docs.it4i/software/numerical-languages/r.md b/docs.it4i/software/numerical-languages/r.md
index 3c8dd20c5eb8ecaf376d6b5a686d326e9c4a440d..75ec9aaa8e8e951f5ebe7481d9251bcf11eae95b 100644
--- a/docs.it4i/software/numerical-languages/r.md
+++ b/docs.it4i/software/numerical-languages/r.md
@@ -66,7 +66,7 @@ cp routput.out $PBS_O_WORKDIR/.
 exit
 ```
 
-This script may be submitted directly to the PBS workload manager via the qsub command.  The inputs are in rscript.R file, outputs in routput.out file. See the single node jobscript example in the [Job execution section](../../job-submission-and-execution/).
+This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in the rscript.R file, the outputs in the routput.out file. See the single-node jobscript example in the [Job execution section - Anselm](../../anselm/job-submission-and-execution/).
 
 ## Parallel R
 
@@ -392,7 +392,7 @@ cp routput.out $PBS_O_WORKDIR/.
 exit
 ```
 
-For more information about jobscripts and MPI execution refer to the [Job submission](../../anselm/job-submission-and-execution/) and general [MPI](../mpi/mpi/) sections.
+For more information about jobscripts and MPI execution, refer to the [Job submission](../../anselm/job-submission-and-execution/) and general [MPI](../../anselm/software/mpi/mpi/) sections.
 
 ## Xeon Phi Offload
 
@@ -402,4 +402,4 @@ By leveraging MKL, R can accelerate certain computations, most notably linear al
 $ export MKL_MIC_ENABLE=1
 ```
 
-[Read more about automatic offload](../intel-xeon-phi/)
+[Read more about automatic offload](../../anselm/software/intel-xeon-phi/)
diff --git a/pathcheck.sh b/pathcheck.sh
index d571c3c9d7ae7bcaff9972cd27f6b2bea75331e5..c760ecb7151f079bc0dd155060d03d5659a330bf 100644
--- a/pathcheck.sh
+++ b/pathcheck.sh
@@ -4,7 +4,7 @@
 
 
 for file in $@; do
-check=$(cat $file | grep -Po "\[.*?\]\(.*?\)" | grep -v "#" | grep -vE "http|www|ftp|none" | sed 's/\[.*\]//g' | sed 's/[()]//g' | sed 's/\/$/.md/g')
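+# extract relative markdown link targets (no spaces in the target), skip anchors and external URLs, and map a trailing "/" to ".md"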
+check=$(cat $file | grep -Po "\[.*?\]\([^ ]*?\)" | grep -v "#" | grep -vE "http|www|ftp|none" | sed 's/\[.*\]//g' | sed 's/[()]//g' | sed 's/\/$/.md/g')
 if [ ! -z "$check" ]; then
 #	echo "\n+++++ $file +++++\n"