diff --git a/docs.it4i/software/numerical-languages/matlab.md b/docs.it4i/software/numerical-languages/matlab.md
index 87cb17e016a428a7a0b459052f6948cac52261c6..2f16bc4c26fbbb4ff6dd976bee523cc2bada1a34 100644
--- a/docs.it4i/software/numerical-languages/matlab.md
+++ b/docs.it4i/software/numerical-languages/matlab.md
@@ -2,10 +2,6 @@
 
 ## Introduction
 
-!!! notes
-    Since 2016, the MATLAB module is not updated anymore and purchase of new licenses is not planned.
-    However, due to [e-infra integration][b], IT4Innovations may have access to recent MATLAB versions from cooperating organizations in the future.
-
 MATLAB is available in versions R2015a and R2015b. There are always two variants of the release:
 
 * Non-commercial or so-called EDU variant, which can be used for common research and educational purposes.
@@ -21,8 +17,13 @@ The EDU variant is marked as default. If you need other version or variant, load
 
 ```console
 $ ml av MATLAB
+------------------------------------------------ /apps/modules/math -------------------------------------------------
+   MATLAB/R2015b    MATLAB/2021a (D)
 ```
 
+!!! info
+    Version 2021a is covered by the e-INFRA licence and comes without cluster licenses; only basic functionality is available.
+
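+For example, to load the e-INFRA licensed version listed above:
+
+```console
+$ ml MATLAB/2021a
+```
+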
 If you need to use the MATLAB GUI to prepare your MATLAB programs, you can use MATLAB directly on the login nodes. However, for all computations, use MATLAB on the compute nodes via PBS Pro scheduler.
 
 If you require the MATLAB GUI, follow the general information about [running graphical applications][1].
@@ -53,17 +54,21 @@ Delete previously used file mpiLibConf.m, we have observed crashes when using In
 
 To use Distributed Computing, you first need to setup a parallel profile. We have provided the profile for you, you can either import it in the MATLAB command line:
 
+* Salomon cluster - SalomonPBSPro.settings
+* Karolina cluster - KarolinaPBSPro.settings
+* Barbora cluster - BarboraPBSPro.settings
+
 ```console
-> parallel.importProfile('/apps/all/MATLAB/2015b-EDU/SalomonPBSPro.settings')
+> parallel.importProfile('/apps/all/MATLAB/R2015b/KarolinaPBSPro.settings')
 
 ans =
 
-SalomonPBSPro
+KarolinaPBSPro
 ```
 
 or in the GUI, go to tab *HOME -> Parallel -> Manage Cluster Profiles...*, click *Import* and navigate to:
 
-/apps/all/MATLAB/2015b-EDU/SalomonPBSPro.settings
+/apps/all/MATLAB/R2015b/KarolinaPBSPro.settings
 
 With the new mode, MATLAB itself launches the workers via PBS, so you can use either an interactive mode or a batch mode on one node, but the actual parallel processing will be done in a separate job started by MATLAB itself. Alternatively, you can use a "local" mode to run parallel code on just a single node.
 
@@ -72,8 +77,7 @@ With the new mode, MATLAB itself launches the workers via PBS, so you can use ei
 The following example shows how to start the interactive session with support for MATLAB GUI. For more information about GUI based applications, see [this page][1].
 
 ```console
-$ xhost +
-$ qsub -I -v DISPLAY=$(uname -n):$(echo $DISPLAY | cut -d ':' -f 2) -A NONE-0-0 -q qexp -l select=1 -l walltime=00:30:00 -l license__matlab-edu__MATLAB=1
+$ qsub -I -X -q qexp -l select=1 -l walltime=00:30:00 -l license__matlab-edu__MATLAB=1
 ```
 
 This `qsub` command example shows how to run MATLAB on a single node.
@@ -83,7 +87,7 @@ The second part of the command shows how to request all necessary licenses. In t
 Once the access to compute nodes is granted by PBS, the user can load the following modules and start MATLAB:
 
 ```console
-$ ml MATLAB/2015a-EDU
+$ ml MATLAB/R2015b
 $ matlab &
 ```
 
@@ -95,17 +99,18 @@ To run MATLAB in a batch mode, write a MATLAB script, then write a bash jobscrip
 #!/bin/bash
 #PBS -A PROJECT ID
 #PBS -q qprod
-#PBS -l select=1:ncpus=24:mpiprocs=24:ompthreads=1
+#PBS -l select=1:ncpus=128:mpiprocs=128:ompthreads=1
 
 # change to shared scratch directory
-SCR=/scratch/.../$USER/$PBS_JOBID # change path in according to the cluster
-mkdir -p $SCR ; cd $SCR || exit
+DIR=/scratch/project/PROJECT_ID/$PBS_JOBID
+mkdir -p "$DIR"
+cd "$DIR" || exit
 
 # copy input file to scratch
 cp $PBS_O_WORKDIR/matlabcode.m .
 
 # load modules
-ml MATLAB/2015a-EDU
+ml MATLAB/R2015b
 
 # execute the calculation
 matlab -nodisplay -r matlabcode > output.out
@@ -141,10 +146,10 @@ This script creates the scheduler object *cluster* of the type *local* that star
 !!! hint
     Every MATLAB script that needs to initialize/use `matlabpool` has to contain these three lines prior to calling the `parpool(sched, ...)` function.
 
-The last step is to start `matlabpool` with the *cluster* object and a correct number of workers. We have 24 cores per node, so we start 24 workers.
+The last step is to start `matlabpool` with the *cluster* object and a correct number of workers. We have 128 cores per node, so we start 128 workers.
 
 ```console
-parpool(cluster,24);
+parpool(cluster,128);
 
 
 ... parallel code ...
@@ -159,7 +164,7 @@ The complete example showing how to use Distributed Computing Toolbox in local m
 cluster = parcluster('local');
 cluster
 
-parpool(cluster,24);
+parpool(cluster,128);
 
 n=2000;
 
@@ -184,17 +189,17 @@ You can copy and paste the example in a .m file and execute. Note that the `parp
 
 ### Parallel MATLAB Batch Job Using PBS Mode (Workers Spawned in a Separate Job)
 
-This mode uses the PBS scheduler to launch the parallel pool. It uses the SalomonPBSPro profile that needs to be imported to Cluster Manager, as mentioned before. This method uses MATLAB's PBS Scheduler interface - it spawns the workers in a separate job submitted by MATLAB using qsub.
+This mode uses the PBS scheduler to launch the parallel pool. It uses the KarolinaPBSPro profile that needs to be imported to Cluster Manager, as mentioned before. This method uses MATLAB's PBS Scheduler interface - it spawns the workers in a separate job submitted by MATLAB using qsub.
 
 This is an example of an m-script using the PBS mode:
 
 ```matlab
-cluster = parcluster('SalomonPBSPro');
+cluster = parcluster('KarolinaPBSPro');
 set(cluster, 'SubmitArguments', '-A OPEN-0-0');
-set(cluster, 'ResourceTemplate', '-q qprod -l select=10:ncpus=24');
-set(cluster, 'NumWorkers', 240);
+set(cluster, 'ResourceTemplate', '-q qprod -l select=10:ncpus=128');
+set(cluster, 'NumWorkers', 1280);
 
-pool = parpool(cluster,240);
+pool = parpool(cluster,1280);
 
 n=2000;
 
@@ -219,73 +224,12 @@ Note that we first construct a cluster object using the imported profile, then s
 
 You can start this script using the batch mode the same way as in the Local mode example.
 
-### Parallel MATLAB Batch With Direct Launch (Workers Spawned Within the Existing Job)
-
-This method is a "hack" invented by us to emulate the `mpiexec` functionality found in previous MATLAB versions. We leverage the MATLAB Generic Scheduler interface, but instead of submitting the workers to PBS, we launch the workers directly within the running job, thus we avoid the issues with master script and workers running in separate jobs (issues with license not available, waiting for the worker's job to spawn, etc.)
-
-!!! warning
-    This method is experimental.
-
-For this method, you need to use the SalomonDirect profile, import it using [the same way as SalomonPBSPro][2].
-
-This is an example of an m-script using direct mode:
-
-```matlab
-parallel.importProfile('/apps/all/MATLAB/2015b-EDU/SalomonDirect.settings')
-cluster = parcluster('SalomonDirect');
-set(cluster, 'NumWorkers', 48);
-
-pool = parpool(cluster, 48);
-
-n=2000;
-
-W = rand(n,n);
-W = distributed(W);
-x = (1:n)';
-x = distributed(x);
-spmd
-[~, name] = system('hostname')
-
-    T = W*x; % Calculation performed on labs, in parallel.
-             % T and W are both codistributed arrays here.
-end
-whos         % T and W are both distributed arrays here.
-
-% shut down parallel pool
-delete(gcp('nocreate'))
-```
-
 ### Non-Interactive Session and Licenses
 
 If you want to run batch jobs with MATLAB, be sure to request appropriate license features with the PBS Pro scheduler, at least the `-l license__matlab-edu__MATLAB=1` for the EDU variant of MATLAB. For more information about how to check the license features states and how to request them with PBS Pro, [look here][3].
 
 In the case of a non-interactive session, read the [following information][3] on how to modify the `qsub` command to test for available licenses prior to getting the resource allocation.
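+
+For example, the license feature can be added on the `qsub` command line when submitting the jobscript shown above:
+
+```console
+$ qsub -l license__matlab-edu__MATLAB=1 ./jobscript
+```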
 
-### MATLAB Distributed Computing Engines Start Up Time
-
-Starting MATLAB workers is an expensive process that requires certain amount of time. For more information, see the following table:
-
-| compute nodes | number of workers | start-up time[s] |
-| ------------- | ----------------- | ---------------- |
-| 16            | 384               | 831              |
-| 8             | 192               | 807              |
-| 4             | 96                | 483              |
-| 2             | 48                | 16               |
-
-## MATLAB on UV2000
-
-The UV2000 machine available in the qfat queue can be used for MATLAB computations. This is an SMP NUMA machine with a large amount of RAM, which can be beneficial for certain types of MATLAB jobs. CPU cores are allocated in chunks of 8 for this machine.
-
-You can use MATLAB on UV2000 in two parallel modes:
-
-### Threaded Mode
-
-Since this is an SMP machine, you can completely avoid using Parallel Toolbox and use only MATLAB's threading. MATLAB will automatically detect the number of cores you have allocated and will set `maxNumCompThreads` accordingly and certain operations, such as fft, eig, svd, etc. will be automatically run in threads. The advantage of this mode is that you do not need to modify your existing sequential codes.
-
-### Local Cluster Mode
-
-You can also use Parallel Toolbox on UV2000. Use [local cluster mode][4], the "SalomonPBSPro" profile will not work.
-
 [1]: ../../general/accessing-the-clusters/graphical-user-interface/vnc.md#gui-applications-on-compute-nodes-over-vnc
 [2]: #running-parallel-matlab-using-distributed-computing-toolbox---engine
 [3]: ../isv_licenses.md
diff --git a/docs.it4i/software/numerical-languages/matlab_1314.md b/docs.it4i/software/numerical-languages/matlab_1314.md
deleted file mode 100644
index c0408ae57c5cb5aeee05bb72a3b3d464a1600e0d..0000000000000000000000000000000000000000
--- a/docs.it4i/software/numerical-languages/matlab_1314.md
+++ /dev/null
@@ -1,211 +0,0 @@
-# MATLAB 2013-2014
-
-## Introduction
-
-!!! note
-    This document relates to the old versions R2013 and R2014. For MATLAB 2015 use [this documentation instead][1].
-
-MATLAB is available in the latest stable version. There are always two variants of the release:
-
-* Non commercial or so called EDU variant, which can be used for common research and educational purposes.
-* Commercial or so called COM variant, which can used also for commercial activities. The licenses for commercial variant are much more expensive, so usually the commercial variant has only subset of features compared to the EDU available.
-
-To load the latest version of MATLAB load the module, use:
-
-```console
-$ ml matlab
-```
-
-The EDU variant is marked as default. If you need other version or variant, load the particular version. To obtain the list of available versions, use:
-
-```console
-$ ml matlab
-```
-
-If you need to use the MATLAB GUI to prepare your MATLAB programs, you can use MATLAB directly on the login nodes. However, for all computations, use MATLAB on the compute nodes via the PBS Pro scheduler.
-
-If you require the MATLAB GUI, follow the general information about [running graphical applications][2].
-
-MATLAB GUI is quite slow using the X forwarding built in the PBS (`qsub -X`), so using X11 display redirection either via SSH or directly by `xauth` (see the [GUI Applications on Compute Nodes over VNC][3] section) is recommended.
-
-To run MATLAB with GUI, use:
-
-```console
-$ matlab
-```
-
-To run MATLAB in text mode, without the MATLAB Desktop GUI environment, use:
-
-```console
-$ matlab -nodesktop -nosplash
-```
-
-plots, images, etc. will be still available.
-
-## Running Parallel MATLAB Using Distributed Computing Toolbox / Engine
-
-A recommended parallel mode for running parallel MATLAB is the MPIEXEC mode. In this mode, the user allocates resources through PBS prior to starting MATLAB. Once resources are granted, the main MATLAB instance is started on the first compute node assigned to job by PBS and workers are started on all remaining nodes. The user can use both interactive and non-interactive PBS sessions. This mode guarantees that the data processing is not performed on login nodes, but all processing is on compute nodes.
-
-![Parallel Matlab](../../img/Matlab.png)
-
-For performance reasons, MATLAB should use system MPI. On our clusters, the supported MPI implementation for MATLAB is Intel MPI. To switch to system MPI, the user has to override default MATLAB setting by creating a new configuration file in its home directory. The path and file name has to be the same as in the following listing:
-
-```matlab
-$ vim ~/matlab/mpiLibConf.m
-
-function [lib, extras] = mpiLibConf
-%MATLAB MPI Library overloading for Infiniband Networks
-
-mpich = '/opt/intel/impi/4.1.1.036/lib64/';
-
-disp('Using Intel MPI 4.1.1.036 over Infiniband')
-
-lib = strcat(mpich, 'libmpich.so');
-mpl = strcat(mpich, 'libmpl.so');
-opa = strcat(mpich, 'libopa.so');
-
-extras = {};
-```
-
-The system MPI library allows MATLAB to communicate through 40 Gbit/s InfiniBand QDR interconnect instead of a slower 1 Gbit Ethernet network.
-
-!!! note
-    The path to MPI library in "mpiLibConf.m" has to match with version of the loaded Intel MPI module. In this example, version 4.1.1.036 of Intel MPI is used by MATLAB and therefore the `impi/4.1.1.036` module has to be loaded prior to starting MATLAB.
-
-### Parallel MATLAB Interactive Session
-
-Once this file is in place, the user can request resources from PBS. The following example shows how to start an interactive session with support for MATLAB GUI. For more information about GUI-based applications, see:
-
-```console
-$ xhost +
-$ qsub -I -v DISPLAY=$(uname -n):$(echo $DISPLAY | cut -d ':' -f 2) -A NONE-0-0 -q qexp -l select=2:ncpus=16:mpiprocs=16 -l walltime=00:30:00 -l license__matlab-edu__MATLAB=1
-```
-
-This qsub command example shows how to run MATLAB with 32 workers in the following configuration: 2 nodes (use 16 cores per node) and 16 workers = `mpiprocs` per node (`-l select=2:ncpus=16:mpiprocs=16`). If the user requires to run smaller number of workers per node then the `mpiprocs` parameter has to be changed.
-
-The second part of the command shows how to request all necessary licenses. In this case, 1 MATLAB-EDU license and 32 Distributed Computing Engines licenses.
-
-Once the access to compute nodes is granted by PBS, the user can load the following modules and start MATLAB:
-
-```console
-$ ml matlab/R2013a-EDU
-$ ml impi/4.1.1.036
-$ matlab &
-```
-
-### Parallel MATLAB Batch Job
-
-To run MATLAB in a batch mode, write a MATLAB script, then write a bash jobscript and execute it via the `qsub` command. By default, MATLAB will execute one MATLAB worker instance per allocated core.
-
-```bash
-#!/bin/bash
-#PBS -A PROJECT ID
-#PBS -q qprod
-#PBS -l select=2:ncpus=16:mpiprocs=16:ompthreads=1
-
-# change to shared scratch directory
-SCR=/scratch/$USER/$PBS_JOBID
-mkdir -p $SCR ; cd $SCR || exit
-
-# copy input file to scratch
-cp $PBS_O_WORKDIR/matlabcode.m .
-
-# load modules
-ml matlab/R2013a-EDU
-ml impi/4.1.1.036
-
-# execute the calculation
-matlab -nodisplay -r matlabcode > output.out
-
-# copy output file to home
-cp output.out $PBS_O_WORKDIR/.
-```
-
-This script may be submitted directly to the PBS workload manager via the `qsub` command.  The inputs and MATLAB script are in the matlabcode.m file, outputs in the output.out file. Note the missing .m extension in the `matlab -r matlabcodefile` call, **the .m must not be included**.  Note that the **shared /scratch must be used**. Further, it is **important to include the `quit`** statement at the end of the matlabcode.m script.
-
-Submit the jobscript using qsub:
-
-```console
-$ qsub ./jobscript
-```
-
-### Parallel MATLAB Program Example
-
-The last part of the configuration is done directly in the user's MATLAB script before Distributed Computing Toolbox is started.
-
-```matlab
-sched = findResource('scheduler', 'type', 'mpiexec');
-set(sched, 'MpiexecFileName', '/apps/intel/impi/4.1.1/bin/mpirun');
-set(sched, 'EnvironmentSetMethod', 'setenv');
-```
-
-This script creates a `sched` scheduler object of the type `mpiexec` that starts workers using the `mpirun` tool. To use a correct version of `mpirun`, the second line specifies the path to correct version of the system Intel MPI library.
-
-!!! note
-    Every MATLAB script that needs to initialize/use `matlabpool` has to contain these three lines prior to calling the `matlabpool(sched, ...)` function.
-
-The last step is to start `matlabpool` with the `sched` object and a correct number of workers. In this case, `qsub` asked for the total number of 32 cores, therefore the number of workers is also set to `32`.
-
-```console
-matlabpool(sched,32);
-
-
-... parallel code ...
-
-
-matlabpool close
-```
-
-The complete example showing how to use Distributed Computing Toolbox is show here:
-
-```matlab
-sched = findResource('scheduler', 'type', 'mpiexec');
-set(sched, 'MpiexecFileName', '/apps/intel/impi/4.1.1/bin/mpirun')
-set(sched, 'EnvironmentSetMethod', 'setenv')
-set(sched, 'SubmitArguments', '')
-sched
-
-matlabpool(sched,32);
-
-n=2000;
-
-W = rand(n,n);
-W = distributed(W);
-x = (1:n)';
-x = distributed(x);
-spmd
-[~, name] = system('hostname')
-
-    T = W*x; % Calculation performed on labs, in parallel.
-             % T and W are both codistributed arrays here.
-end
-T;
-whos         % T and W are both distributed arrays here.
-
-matlabpool close
-quit
-```
-
-You can copy and paste the example in a .m file and execute. Note that the `matlabpool` size should correspond to the **total number of cores** available on allocated nodes.
-
-### Non-Interactive Session and Licenses
-
-If you want to run batch jobs with MATLAB, be sure to request appropriate license features with the PBS Pro scheduler, at least the `-l license__matlab-edu__MATLAB=1` for EDU variant of MATLAB. For more information about how to check the license features states and how to request them with PBS Pro, [look here][4].
-
-In the case of non-interactive session, read the [following information][4] on how to modify the qsub command to test for available licenses prior getting the resource allocation.
-
-### MATLAB Distributed Computing Engines Start Up Time
-
-Starting MATLAB workers is an expensive process that requires certain amount of time. For more information, see the following table:
-
-| compute nodes | number of workers | start-up time[s] |
-| ------------- | ----------------- | ---------------- |
-| 16            | 256               | 1008             |
-| 8             | 128               | 534              |
-| 4             | 64                | 333              |
-| 2             | 32                | 210              |
-
-[1]: matlab.md
-[2]: ../../general/accessing-the-clusters/graphical-user-interface/x-window-system.md
-[3]: ../../general/accessing-the-clusters/graphical-user-interface/vnc.md#gui-applications-on-compute-nodes-over-vnc
-[4]: ../isv_licenses.md
diff --git a/docs.it4i/software/numerical-languages/octave.md b/docs.it4i/software/numerical-languages/octave.md
index 02a340bc9775154c5a510b7c1742ab1272d8ff76..32b1c4d4fdea7ffaf28facee94907ac0ae2f94d4 100644
--- a/docs.it4i/software/numerical-languages/octave.md
+++ b/docs.it4i/software/numerical-languages/octave.md
@@ -12,6 +12,8 @@ $ ml av octave
 
 ## Modules and Execution
 
+To load the latest version of Octave, load the module:
+
 ```console
 $ ml Octave
 ```
@@ -30,13 +32,15 @@ To run Octave in batch mode, write an Octave script, then write a bash jobscript
 #!/bin/bash
 
 # change to local scratch directory
-cd /lscratch/$PBS_JOBID || exit
+DIR=/scratch/project/PROJECT_ID/$PBS_JOBID
+mkdir -p "$DIR"
+cd "$DIR" || exit
 
 # copy input file to scratch
 cp $PBS_O_WORKDIR/octcode.m .
 
 # load octave module
-ml octave
+ml Octave/6.3.0-intel-2020b-without-X11
 
 # execute the calculation
 octave -q --eval octcode > output.out
@@ -50,67 +54,16 @@ exit
 
 This script may be submitted directly to the PBS workload manager via the `qsub` command. The inputs are in the octcode.m file, outputs in the output.out file. See the single node jobscript example in the [Job execution section][1].
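+
+A minimal submission sketch (the project ID is a placeholder; resources follow the single node jobscript example referenced above):
+
+```console
+$ qsub -A PROJECT_ID -q qprod -l select=1:ncpus=128 ./jobscript
+```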
 
-The Octave c compiler `mkoctfile` calls the GNU GCC 4.8.1 for compiling native C code. This is very useful for running native C subroutines in Octave environment.
+The Octave compiler wrapper `mkoctfile` calls the underlying C compiler for compiling native C code. This is very useful for running native C subroutines in the Octave environment.
 
 ```console
 $ mkoctfile -v
+mkoctfile, version 6.3.0
 ```
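+
+As a sketch, a native subroutine in a hypothetical C++ source file `myfun.cc` (written against the Octave oct-file API) can be built and called like this:
+
+```console
+$ mkoctfile myfun.cc                # myfun.cc is a hypothetical oct-file source
+$ octave -q --eval 'myfun(1:10)'
+```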
 
 Octave may use MPI for interprocess communication. This functionality is currently not supported on the clusters. If you require the Octave interface to MPI, contact [support][b].
 
-## Xeon Phi Support
-
-<!--- not tested -->
-Octave may take advantage of the Xeon Phi accelerators. This will only work on the [Intel Xeon Phi][2] [accelerated nodes][3].
-
-### Automatic Offload Support
-
-Octave can accelerate BLAS type operations (in particular the Matrix Matrix multiplications] on the Xeon Phi accelerator, via [Automatic Offload using the MKL library][2].
-
-Example
-
-```octave
-$ export OFFLOAD_REPORT=2
-$ export MKL_MIC_ENABLE=1
-$ ml octave
-$ octave -q
-    octave:1> A=rand(10000); B=rand(10000);
-    octave:2> tic; C=A*B; toc
-    [MKL] [MIC --] [AO Function]    DGEMM
-    [MKL] [MIC --] [AO DGEMM Workdivision]    0.32 0.68
-    [MKL] [MIC 00] [AO DGEMM CPU Time]    2.896003 seconds
-    [MKL] [MIC 00] [AO DGEMM MIC Time]    1.967384 seconds
-    [MKL] [MIC 00] [AO DGEMM CPU->MIC Data]    1347200000 bytes
-    [MKL] [MIC 00] [AO DGEMM MIC->CPU Data]    2188800000 bytes
-    Elapsed time is 2.93701 seconds.
-```
-
-In this example, the calculation was automatically divided among the CPU cores and the Xeon Phi MIC accelerator, reducing the total runtime from 6.3 secs down to 2.9 secs.
-
-### Native Support
-
-<!--- not tested -->
-A version of [native][2] Octave is compiled for Xeon Phi accelerators. Some limitations apply for this version:
-
-* Only command line support. GUI, graph plotting, etc. is not supported.
-* Command history in interactive mode is not supported.
-
-Octave is linked with parallel Intel MKL, so it is best suited for batch processing of tasks that utilize BLAS, LAPACK, and FFT operations. By default, the number of threads is set to 120, you can control this with the `OMP_NUM_THREADS` environment variable.
-
-!!! note
-    Calculations that do not employ parallelism (either by using parallel MKL e.g. via matrix operations, `fork()` function, [parallel package][c], or other mechanism) will actually run slower than on host CPU.
-
-To use Octave on a node with Xeon Phi:
-
-```console
-$ ssh mic0                                               # login to the MIC card
-$ source /apps/tools/octave/3.8.2-mic/bin/octave-env.sh  # set up environment variables
-$ octave -q /apps/tools/octave/3.8.2-mic/example/test0.m # run an example
-```
-
 [1]: ../../general/job-submission-and-execution.md
-[2]: ../intel/intel-xeon-phi-salomon.md
-[3]: ../../salomon/compute-nodes.md
 
 [a]: https://www.gnu.org/software/octave/
 [b]: https://support.it4i.cz/rt/
diff --git a/docs.it4i/software/numerical-languages/opencoarrays.md b/docs.it4i/software/numerical-languages/opencoarrays.md
index 5ba87725800465c6e4a26bdd92302278a865b1f3..ab3a4107d704cdbf1b1a45b8b2237944615ecbaf 100644
--- a/docs.it4i/software/numerical-languages/opencoarrays.md
+++ b/docs.it4i/software/numerical-languages/opencoarrays.md
@@ -79,10 +79,10 @@ end program synchronization_test
 
 ## Compile and Run
 
-Currently, version 1.8.10 compiled with the OpenMPI 1.10.7 library is installed on the cluster. To load the `OpenCoarrays` module, type:
+Currently, version 2.9.2 compiled with the OpenMPI 4.0.5 library is installed on the cluster. To load the `OpenCoarrays` module, type:
 
 ```console
-$ ml OpenCoarrays/1.8.10-GCC-6.3.0-2.27
+$ ml OpenCoarrays/2.9.2-gompi-2020b
 ```
 
 ### Compile CAF Program
@@ -106,7 +106,7 @@ $ mpif90 hello_world.f90 -o hello_world.x -fcoarray=lib -lcaf_mpi
 
 ### Run CAF Program
 
-A CAF program can be run by invoking the `cafrun` wrapper or directly by the `mpiexec`:
+A CAF program can be run by invoking the `cafrun` wrapper or directly via `mpirun`:
 
 ```console
 $ cafrun -np 4 ./hello_world.x
@@ -115,17 +115,13 @@ $ cafrun -np 4 ./hello_world.x
     Hello world from image            3 of           4
     Hello world from image            4 of           4
 
-$ mpiexec -np 4 ./synchronization_test.x
+$ mpirun -np 4 ./synchronization_test.x
     The random number is         242
     The random number is         242
     The random number is         242
     The random number is         242
 ```
 
-`-np 4` is the number of images to run. The parameters of `cafrun` and `mpiexec` are the same.
-
-For more information about running CAF program, follow [Running OpenMPI - Salomon][1].
-
-[1]: ../mpi/running_openmpi.md
+`-np 4` specifies the number of images to run. The parameters of `cafrun` and `mpirun` are the same.
 
 [a]: http://www.opencoarrays.org/
diff --git a/docs.it4i/software/numerical-languages/r.md b/docs.it4i/software/numerical-languages/r.md
index 10f79da413d0d1800c784fd05b98d53c318339f1..9e862a47ad138a123ab61132d3b929c8d0de05fc 100644
--- a/docs.it4i/software/numerical-languages/r.md
+++ b/docs.it4i/software/numerical-languages/r.md
@@ -19,7 +19,6 @@ R version 3.1.1 is available on the cluster, along with GUI interface RStudio
 | Application | Version           | module              |
 | ----------- | ----------------- | ------------------- |
 | **R**       | R 3.1.1           | R/3.1.1-intel-2015b |
-| **RStudio** | RStudio 0.98.1103 | RStudio             |
 
 ```console
 $ ml R
@@ -48,7 +47,9 @@ Example jobscript:
 #!/bin/bash
 
 # change to local scratch directory
-cd /lscratch/$PBS_JOBID || exit
+DIR=/scratch/project/PROJECT_ID/$PBS_JOBID
+mkdir -p "$DIR"
+cd "$DIR" || exit
 
 # copy input file to scratch
 cp $PBS_O_WORKDIR/rscript.R .
@@ -370,18 +371,18 @@ An example jobscript for [static Rmpi][4] parallel R execution, running 1 proces
 #PBS -l select=100:ncpus=24:mpiprocs=24:ompthreads=1
 
 # change to scratch directory
-SCRDIR=/scratch/work/user/$USER/myjob
-cd $SCRDIR || exit
+DIR=/scratch/project/PROJECT_ID/$PBS_JOBID
+mkdir -p "$DIR"
+cd "$DIR" || exit
 
 # copy input file to scratch
 cp $PBS_O_WORKDIR/rscript.R .
 
 # load R and openmpi module
-ml R
-ml OpenMPI
+ml R OpenMPI
 
 # execute the calculation
-mpiexec -bycore -bind-to-core R --slave --no-save --no-restore -f rscript.R
+mpirun -bycore -bind-to-core R --slave --no-save --no-restore -f rscript.R
 
 # copy output file to home
 cp routput.out $PBS_O_WORKDIR/.
@@ -392,22 +393,10 @@ exit
 
 For more information about jobscripts and MPI execution, refer to the [Job submission][1] and general [MPI][5] sections.
 
-## Xeon Phi Offload
-
-By leveraging MKL, R can accelerate certain computations, most notably linear algebra operations on the Xeon Phi accelerator by using Automated Offload. To use MKL Automated Offload, you need to first set this environment variable before R execution:
-
-```console
-$ export MKL_MIC_ENABLE=1
-```
-
-Read more about automatic offload [here][6].
-
 [1]: ../../general/job-submission-and-execution.md
 [2]: #interactive-execution
-[3]: ../mpi/running_openmpi.md
 [4]: #static-rmpi
 [5]: ../mpi/mpi.md
-[6]: ../intel/intel-xeon-phi-salomon.md
 
 [a]: http://www.r-project.org/
 [b]: http://cran.r-project.org/doc/manuals/r-release/R-lang.html
diff --git a/docs.it4i/software/numerical-libraries/fftw.md b/docs.it4i/software/numerical-libraries/fftw.md
index 23d33bdb0eb76ca9cd8a660f9d549961ea02238c..4fb7fef9a33c626c6b1e9db2117da567f318b23d 100644
--- a/docs.it4i/software/numerical-libraries/fftw.md
+++ b/docs.it4i/software/numerical-libraries/fftw.md
@@ -4,21 +4,18 @@ The discrete Fourier transform in one or more dimensions, MPI parallel
 
 FFTW is a C subroutine library for computing the discrete Fourier transform in one or more dimensions, of arbitrary input size, and of both real and complex data (as well as of even/odd data, e.g. the discrete cosine/sine transforms or DCT/DST). The FFTW library allows for MPI parallel, in-place discrete Fourier transform, with data distributed over number of nodes.
 
-Two versions, **3.3.x** and **2.1.5** of FFTW are available, each compiled for **Intel MPI** and **OpenMPI** using **Intel** and **gnu** compilers. These are available via modules:
-
-| Version        | Parallelization | module              | linker options                      |
-| -------------- | --------------- | ------------------- | ----------------------------------- |
-| FFTW3 gcc3.3.3 | pthread, OpenMP | fftw3/3.3.3-gcc     | -lfftw3, -lfftw3_threads-lfftw3_omp |
-| FFTW3 icc3.3.3 | pthread, OpenMP | fftw3               | -lfftw3, -lfftw3_threads-lfftw3_omp |
-| FFTW2 gcc2.1.5 | pthread         | fftw2/2.1.5-gcc     | -lfftw, -lfftw_threads              |
-| FFTW2 icc2.1.5 | pthread         | fftw2               | -lfftw, -lfftw_threads              |
-| FFTW3 gcc3.3.3 | OpenMPI         | fftw-mpi3/3.3.3-gcc | -lfftw3_mpi                         |
-| FFTW3 icc3.3.3 | Intel MPI       | fftw3-mpi           | -lfftw3_mpi                         |
-| FFTW2 gcc2.1.5 | OpenMPI         | fftw2-mpi/2.1.5-gcc | -lfftw_mpi                          |
-| FFTW2 gcc2.1.5 | IntelMPI        | fftw2-mpi/2.1.5-gcc | -lfftw_mpi                          |
+```console
+$ ml av FFTW
+
+---------------------------------------------------- /apps/modules/numlib -----------------------------------------------------
+   FFTW/3.3.7-gompi-2018a        FFTW/3.3.8-gompi-2020a    FFTW/3.3.8-gompic-2020b           FFTW/3.3.8
+   FFTW/3.3.8-gompi-2020a-amd    FFTW/3.3.8-gompi-2020b    FFTW/3.3.8-iccifort-2020.4.304    FFTW/3.3.9-gompi-2021a (D)
+```
+
+To load the latest version of FFTW, load the module:
 
 ```console
-$ ml fftw3 **or** ml FFTW
+$ ml FFTW
 ```
 
 The module sets up environment variables, required for linking and running FFTW enabled applications. Make sure that the choice of FFTW module is consistent with your choice of MPI library. Mixing MPI of different implementations may have unpredictable results.
@@ -63,14 +60,10 @@ The module sets up environment variables, required for linking and running FFTW
 Load modules and compile:
 
 ```console
-$ ml intel
-$ ml fftw3-mpi
+$ ml intel/2020b FFTW/3.3.8-iccifort-2020.4.304
 $ mpicc testfftw3mpi.c -o testfftw3mpi.x -Wl,-rpath=$LIBRARY_PATH -lfftw3_mpi
 ```
 
-Run the example as [Intel MPI program][1].
-
 Read more on FFTW usage on the [FFTW website][a].
 
-[1]: ../mpi/running-mpich2.md
-[a]: http://www.fftw.org/fftw3_doc/
+[a]: http://www.fftw.org/fftw3_doc/
\ No newline at end of file
diff --git a/docs.it4i/software/numerical-libraries/gsl.md b/docs.it4i/software/numerical-libraries/gsl.md
index 2154200ad33e5298fa84dd0b12b8018c8609f303..07853b4364414f39f78a5bc74a3ee489422f1d0f 100644
--- a/docs.it4i/software/numerical-libraries/gsl.md
+++ b/docs.it4i/software/numerical-libraries/gsl.md
@@ -48,6 +48,9 @@ For the list of available gsl modules, use the command:
 
 ```console
 $ ml av gsl
+---------------- /apps/modules/numlib -------------------
+   GSL/2.5-intel-2017c    GSL/2.6-iccifort-2020.1.217    GSL/2.7-GCC-10.3.0 (D)
+   GSL/2.6-GCC-10.2.0     GSL/2.6-iccifort-2020.4.304
 ```
 
 ## Linking
@@ -57,17 +60,14 @@ Load an appropriate `gsl` module. Use the `-lgsl` switch to link your code again
 ### Compiling and Linking With Intel Compilers
 
 ```console
-$ ml intel
-$ ml gsl
+$ ml intel/2020b GSL/2.6-iccifort-2020.4.304
 $ icc myprog.c -o myprog.x -Wl,-rpath=$LIBRARY_PATH -mkl -lgsl
 ```
 
 ### Compiling and Linking With GNU Compilers
 
 ```console
-$ ml gcc
-$ ml imkl **or** ml mkl
-$ ml gsl/1.16-gcc
+$ ml GCC/10.2.0 imkl/2020.4.304-iimpi-2020b GSL/2.6-GCC-10.2.0
 $ gcc myprog.c -o myprog.x -Wl,-rpath=$LIBRARY_PATH -lmkl_intel_lp64 -lmkl_gnu_thread -lmkl_core -lgomp -lgsl
 ```
 
@@ -130,8 +130,7 @@ Following is an example of a discrete wavelet transform implemented by GSL:
 Load modules and compile:
 
 ```console
-$ ml intel
-$ ml gsl
+$ ml intel/2020b GSL/2.6-iccifort-2020.4.304
 $ icc dwt.c -o dwt.x -Wl,-rpath=$LIBRARY_PATH -mkl -lgsl
 ```
 
diff --git a/docs.it4i/software/numerical-libraries/hdf5.md b/docs.it4i/software/numerical-libraries/hdf5.md
index c11c15f42f2efd73caf63f901c97a3d4d59d3118..4f199d1d3a228a6e9e2b629aef86d61086396cfa 100644
--- a/docs.it4i/software/numerical-libraries/hdf5.md
+++ b/docs.it4i/software/numerical-libraries/hdf5.md
@@ -9,7 +9,12 @@ Hierarchical Data Format library. Serial and MPI parallel version.
 For the current list of installed versions, use:
 
 ```console
-$ ml av HDF
+$ ml av HDF5
+----------------------------------------------------- /apps/modules/data ------------------------------------------------------
+   HDF5/1.10.6-foss-2020b-parallel     HDF5/1.10.6-intel-2020a             HDF5/1.10.7-gompi-2021a
+   HDF5/1.10.6-iimpi-2020a             HDF5/1.10.6-intel-2020b-parallel    HDF5/1.10.7-gompic-2020b
+   HDF5/1.10.6-intel-2020a-parallel    HDF5/1.10.7-gompi-2020b             HDF5/1.10.7-iimpi-2020b  (D)
+
 ```
 
 To load the module, use the `ml` command.
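+
+For example, to load the default version from the listing above:
+
+```console
+$ ml HDF5/1.10.7-iimpi-2020b
+```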
@@ -67,15 +72,10 @@ The module sets up environment variables required for linking and running HDF5 e
 Load modules and compile:
 
 ```console
-$ ml intel
-$ ml hdf5-parallel
+$ ml intel/2020b HDF5/1.10.6-intel-2020b-parallel
 $ mpicc hdf5test.c -o hdf5test.x -Wl,-rpath=$LIBRARY_PATH $HDF5_INC $HDF5_SHLIB
 ```
 
-Run the example as [Intel MPI program][1].
-
 For further information, see the [website][a].
 
-[1]: ../mpi/running-mpich2.md
-
 [a]: http://www.hdfgroup.org/HDF5/
diff --git a/docs.it4i/software/numerical-libraries/intel-numerical-libraries.md b/docs.it4i/software/numerical-libraries/intel-numerical-libraries.md
index d4730f8c360a38c17991da466c37b21b6782e3ba..37f5bf743bff1b42fa60088c2578c7990bcfe303 100644
--- a/docs.it4i/software/numerical-libraries/intel-numerical-libraries.md
+++ b/docs.it4i/software/numerical-libraries/intel-numerical-libraries.md
@@ -7,9 +7,16 @@ Intel libraries for high performance in numerical computing.
 Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, extensively threaded and optimized for maximum performance. Intel MKL unites and provides these basic components: BLAS, LAPACK, ScaLapack, PARDISO, FFT, VML, VSL, Data fitting, Feast Eigensolver, and many more.
 
 ```console
-$ ml mkl **or** ml imkl
+$ ml av mkl
+------------------- /apps/modules/numlib -------------------
+   imkl/2017.4.239-iimpi-2017c    imkl/2020.1.217-iimpi-2020a        imkl/2021.2.0-iimpi-2021a (D)
+   imkl/2018.4.274-iimpi-2018a    imkl/2020.4.304-iimpi-2020b (L)    mkl/2020.4.304
+   imkl/2019.1.144-iimpi-2019a    imkl/2020.4.304-iompi-2020b
 ```
 
+!!! info
+    `imkl` is built with the Intel toolchain; `mkl` is built with the system toolchain.
+
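+For example, to load the default version from the listing above:
+
+```console
+$ ml imkl/2021.2.0-iimpi-2021a
+```
+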
 For more information, see the [Intel MKL][1] section.
 
 ## Intel Integrated Performance Primitives
@@ -17,7 +24,9 @@ For more information, see the [Intel MKL][1] section.
 Intel Integrated Performance Primitives version 7.1.1, compiled for AVX is available via the `ipp` module. IPP is a library of highly optimized algorithmic building blocks for media and data applications. This includes signal, image, and frame processing algorithms, such as FFT, FIR, Convolution, Optical Flow, Hough transform, Sum, MinMax, and many more.
 
 ```console
-$ ml ipp
+$ ml av ipp
+------------------- /apps/modules/perf -------------------
+   ipp/2020.3.304
 ```
 
 For more information, see the [Intel IPP][2] section.
@@ -27,11 +36,28 @@ For more information, see the [Intel IPP][2] section.
 Intel Threading Building Blocks (Intel TBB) is a library that supports scalable parallel programming using standard ISO C++ code. It does not require special languages or compilers. It is designed to promote scalable data parallel programming. Additionally, it fully supports nested parallelism, so you can build larger parallel components from smaller parallel components. To use the library, you specify tasks, not threads, and let the library map tasks onto threads in an efficient manner.
 
 ```console
-$ ml tbb
+$ ml av tbb
+------------------- /apps/modules/lib -------------------
+   tbb/2020.3-GCCcore-10.2.0
+
 ```
 
 Read more at the [Intel TBB][3].
 
+## Python Hooks for Intel Math Kernel Library
+
+Python hooks for Intel(R) Math Kernel Library runtime control settings.
+
+```console
+$ ml av mkl-service
+------------------- /apps/modules/data -------------------
+   mkl-service/2.3.0-intel-2020b
+```
+
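+A minimal sketch of using the hooks from Python, assuming the module above is loaded:
+
+```console
+$ python -c "import mkl; print(mkl.get_max_threads())"
+```
+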
+Read more at the [hooks][a].
+
 [1]: ../intel/intel-suite/intel-mkl.md
 [2]: ../intel/intel-suite/intel-integrated-performance-primitives.md
 [3]: ../intel/intel-suite/intel-tbb.md
+
+[a]: https://github.com/IntelPython/mkl-service
diff --git a/docs.it4i/software/numerical-libraries/magma-for-intel-xeon-phi.md b/docs.it4i/software/numerical-libraries/magma-for-intel-xeon-phi.md
deleted file mode 100644
index 4e4a550c4ec870e2f8185601c337d813f5181f7a..0000000000000000000000000000000000000000
--- a/docs.it4i/software/numerical-libraries/magma-for-intel-xeon-phi.md
+++ /dev/null
@@ -1,83 +0,0 @@
-# MAGMA for Intel Xeon Phi
-
-A next generation dense algebra library for heterogeneous systems with accelerators.
-
-## Compiling and Linking With MAGMA
-
-To compile and link code with the MAGMA library, load the following module:
-
-```console
-$ ml magma/1.3.0-mic
-```
-
-To make compilation more user-friendly, the module also sets these two environment variables:
-
-* `MAGMA_INC` - contains paths to the MAGMA header files (to be used for compilation step).
-* `MAGMA_LIBS` - contains paths to the MAGMA libraries (to be used for linking step).
-
-Compilation example:
-
-```console
-$ icc -mkl -O3 -DHAVE_MIC -DADD_ -Wall $MAGMA_INC -c testing_dgetrf_mic.cpp -o testing_dgetrf_mic.o
-$ icc -mkl -O3 -DHAVE_MIC -DADD_ -Wall -fPIC -Xlinker -zmuldefs -Wall -DNOCHANGE -DHOST testing_dgetrf_mic.o  -o testing_dgetrf_mic $MAGMA_LIBS
-```
-
-### Running MAGMA Code
-
-MAGMA implementation for Intel MIC requires a MAGMA server running on accelerator prior to executing the user application. To start or stop the server, use the following scripts:
-
-To start MAGMA server use:
-
-```console
-$MAGMAROOT/start_magma_server
-```
-
-To stop the server use:
-
-```console
-$MAGMAROOT/stop_magma_server
-```
-
-For more information about how the MAGMA server is started, see the following script:
-
-```console
-$MAGMAROOT/launch_anselm_from_mic.sh
-```
-
-To test if the MAGMA server runs properly, we can run one of the examples that are part of the MAGMA installation:
-
-```console
-[user@cn204 ~]$ $MAGMAROOT/testing/testing_dgetrf_mic
-[user@cn204 ~]$ export OMP_NUM_THREADS=16
-[lriha@cn204 ~]$ $MAGMAROOT/testing/testing_dgetrf_mic
-    Usage: /apps/libs/magma-mic/magmamic-1.3.0/testing/testing_dgetrf_mic [options] [-h|--help]
-
-      M     N     CPU GFlop/s (sec)   MAGMA GFlop/s (sec)   ||PA-LU||/(||A||*N)
-    =========================================================================
-     1088 1088     ---   (  ---  )     13.93 (   0.06)     ---
-     2112 2112     ---   (  ---  )     77.85 (   0.08)     ---
-     3136 3136     ---   (  ---  )    183.21 (   0.11)     ---
-     4160 4160     ---   (  ---  )    227.52 (   0.21)     ---
-     5184 5184     ---   (  ---  )    258.61 (   0.36)     ---
-     6208 6208     ---   (  ---  )    333.12 (   0.48)     ---
-     7232 7232     ---   (  ---  )    416.52 (   0.61)     ---
-     8256 8256     ---   (  ---  )    446.97 (   0.84)     ---
-     9280 9280     ---   (  ---  )    461.15 (   1.16)     ---
-    10304 10304     ---   (  ---  )    500.70 (   1.46)     ---
-```
-
-!!! hint
-    MAGMA contains several benchmarks and examples in `$MAGMAROOT/testing/`.
-
-!!! note
-    MAGMA relies on the performance of all CPU cores as well as on the performance of the accelerator. Therefore on Anselm, the number of CPU OpenMP threads has to be set to 16 with `export OMP_NUM_THREADS=16`.
-
-See more details at [MAGMA home page][a].
-
-## References
-
-[1] [MAGMA MIC: Linear Algebra Library for Intel Xeon Phi Coprocessors][1], Jack Dongarra et. al
-
-[1]: http://icl.utk.edu/projectsfiles/magma/pubs/24-MAGMA_MIC_03.pdf
-
-[a]: http://icl.cs.utk.edu/magma/
diff --git a/docs.it4i/software/numerical-libraries/petsc.md b/docs.it4i/software/numerical-libraries/petsc.md
index 9352e7b620674c6a5cede62bb32418f8cf42a672..53af08d0624fa99ffab3922cee18e2800cf7bf92 100644
--- a/docs.it4i/software/numerical-libraries/petsc.md
+++ b/docs.it4i/software/numerical-libraries/petsc.md
@@ -16,13 +16,12 @@ PETSc (Portable, Extensible Toolkit for Scientific Computation) is a suite of bu
 
 ## Modules
 
-You can start using PETSc by loading the `petsc` module. Module names follox the pattern `petsc/version-compiler-mpi-blas-variant` where `variant` is replaced by one of `{dbg, opt, threads-dbg, threads-opt}`, for example:
+For the current list of installed versions, use:
 
 ```console
-$ ml petsc/3.4.4-icc-impi-mkl-opt
-```
+$ ml av petsc
 
- The `opt` variant is compiled without debugging information (no `-g` option) and with aggressive compiler optimizations (`-O3 -xAVX`). This variant is suitable for performance measurements and production runs. In all other cases, use the debug (`dbg`) variant, because it contains debugging information, performs validations and self-checks, and provides a clear stack trace and message in case of an error. The other two variants `threads-dbg` and `threads-opt` are `dbg` and `opt`, respectively, built with [OpenMP and pthreads threading support][j].
+```
 
 ## External Libraries