# Interpreted Languages for Numerical Computations and Analysis

## Introduction

This section contains a collection of high-level interpreted languages, primarily intended for numerical computations.

## Matlab

MATLAB® is a high-level language and interactive environment for numerical computation, visualization, and programming.

```console
$ ml MATLAB
$ matlab
```

Read more at the [Matlab page](matlab/).

## Octave

GNU Octave is a high-level interpreted language, primarily intended for numerical computations. The Octave language is quite similar to Matlab so that most programs are easily portable.

```console
$ ml Octave
$ octave
```

Read more at the [Octave page](octave/).

## R

R is an interpreted language and environment for statistical computing and graphics.
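Following the pattern of the other languages on this page, R would be loaded and started as below (a sketch; the exact module name on your cluster may differ):

```console
$ ml R
$ R
```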

Matlab is available in versions R2015a and R2015b. There are always two variants of the release:

* Non-commercial, or so-called EDU variant, which can be used for common research and educational purposes.

* Commercial, or so-called COM variant, which can also be used for commercial activities. The licenses for the commercial variant are much more expensive, so usually only a subset of features is available in the COM variant compared to the EDU variant.

To load the latest version of Matlab load the module

```console
$ ml MATLAB
```

The EDU variant is loaded by default. If you need another version or variant, load it explicitly. To obtain the list of available versions, use

```console
$ ml av MATLAB
```

If you need to use the Matlab GUI to prepare your Matlab programs, you can use Matlab directly on the login nodes. But for all computations, use Matlab on the compute nodes via the PBS Pro scheduler.

If you require the Matlab GUI, please follow the general information about [running graphical applications](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/).

Matlab GUI is quite slow using the X forwarding built in the PBS (qsub -X), so using X11 display redirection either via SSH or directly by xauth (please see the "GUI Applications on Compute Nodes over VNC" part [here](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/)) is recommended.

To run Matlab with GUI, use

```console
$ matlab
```

To run Matlab in text mode, without the Matlab Desktop GUI environment, use

```console
$ matlab -nodesktop -nosplash
```

Plots, images, etc. will still be available.

## Running Parallel Matlab Using Distributed Computing Toolbox / Engine

The Distributed Computing Toolbox is available only for the EDU variant.

The MPIEXEC mode available in previous versions is no longer available in MATLAB 2015. Also, the programming interface has changed. Refer to [Release Notes](http://www.mathworks.com/help/distcomp/release-notes.html#buanp9e-1).

Delete the previously used file mpiLibConf.m; we have observed crashes when using Intel MPI.

To use Distributed Computing, you first need to set up a parallel profile. We have provided the profile for you; you can either import it on the MATLAB command line:
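The command-line import can be done with MATLAB's `parallel.importProfile` function, using the same settings file referenced in the GUI instructions below:

```matlab
parallel.importProfile('/apps/all/MATLAB/2015b-EDU/SalomonPBSPro.settings')
```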

Or in the GUI, go to tab HOME -> Parallel -> Manage Cluster Profiles..., click Import, and navigate to:

`/apps/all/MATLAB/2015b-EDU/SalomonPBSPro.settings`

With the new mode, MATLAB itself launches the workers via PBS, so you can either use interactive mode or a batch mode on one node, but the actual parallel processing will be done in a separate job started by MATLAB itself. Alternatively, you can use "local" mode to run parallel code on just a single node.

!!! note
    The profile is confusingly named Salomon, but you can also use it on Anselm.

### Parallel Matlab Interactive Session

The following example shows how to start an interactive session with support for the Matlab GUI. For more information about GUI-based applications on Anselm, see [this page](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/).

This qsub command example shows how to run Matlab on a single node.

The second part of the command shows how to request all necessary licenses: in this case, 1 Matlab-EDU license and 48 Distributed Computing Engines licenses.
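A sketch of such an interactive request (the select line and the license feature names are illustrative assumptions; adjust them to your cluster's conventions):

```console
$ qsub -I -q qexp -l select=1:ncpus=24:mpiprocs=24 \
    -l feature__matlab__MATLAB=1 -l feature__matlab__Distrib_Computing_Engines=48
```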

Once access to the compute nodes is granted by PBS, the user can load the following modules and start Matlab:

```console
$ ml MATLAB/2015a-EDU
$ matlab &
```

### Parallel Matlab Batch Job in Local Mode

To run Matlab in batch mode, write a Matlab script, then write a bash jobscript and execute it via the qsub command. By default, Matlab will execute one Matlab worker instance per allocated core.

```bash
#!/bin/bash

# change to shared scratch directory; adjust the path according to the cluster
SCR=/scratch/.../$USER/$PBS_JOBID
mkdir -p $SCR ; cd $SCR || exit

# copy input file to scratch
cp $PBS_O_WORKDIR/matlabcode.m .

# load modules
module load MATLAB/2015a-EDU

# execute the calculation
matlab -nodisplay -r matlabcode > output.out

# copy output file to home
cp output.out $PBS_O_WORKDIR/.
```

This script may be submitted directly to the PBS workload manager via the qsub command. The inputs and the Matlab script are in the matlabcode.m file, outputs in the output.out file. Note the missing .m extension in the `matlab -r matlabcode` call; **the .m must not be included**. Note that the **shared /scratch must be used**. Further, it is **important to include a quit** statement at the end of the matlabcode.m script.

Submit the jobscript using qsub:

```console
$ qsub ./jobscript
```

### Parallel Matlab Local Mode Program Example

The last part of the configuration is done directly in the user Matlab script before Distributed Computing Toolbox is started.

```matlab
cluster = parcluster('local')
```

This script creates a scheduler object "cluster" of type "local" that starts workers locally.

!!! hint
    Every Matlab script that needs to initialize/use the matlabpool has to contain these lines prior to calling the parpool(sched, ...) function.

The last step is to start matlabpool with "cluster" object and correct number of workers. We have 24 cores per node, so we start 24 workers.

```matlab
parpool(cluster, 24);

... parallel code ...

parpool close
```

The complete example showing how to use Distributed Computing Toolbox in local mode is shown here.

```matlab
cluster = parcluster('local');
cluster

parpool(cluster, 24);

n = 2000;

W = rand(n, n);
W = distributed(W);
x = (1:n)';
x = distributed(x);
spmd
    [~, name] = system('hostname')

    T = W*x; % Calculation performed on labs, in parallel.
             % T and W are both codistributed arrays here.
end
T;
whos % T and W are both distributed arrays here.

parpool close

quit
```

You can copy and paste the example into a .m file and execute it. Note that the parpool size should correspond to the **total number of cores** available on the allocated nodes.

### Parallel Matlab Batch Job Using PBS Mode (Workers Spawned in a Separate Job)

This mode uses the PBS scheduler to launch the parallel pool. It uses the SalomonPBSPro profile that needs to be imported to the Cluster Manager, as mentioned before. This method uses MATLAB's PBS Scheduler interface: it spawns the workers in a separate job submitted by MATLAB using qsub.
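The head of such an example constructs the cluster object from the imported profile and starts the pool (a sketch; the accounting ID, queue, and node count are placeholders you must adjust):

```matlab
% create cluster object from the imported profile
cluster = parcluster('SalomonPBSPro');
set(cluster, 'SubmitArguments', '-A PROJECT-ID');                   % accounting ID (placeholder)
set(cluster, 'ResourceTemplate', '-q qprod -l select=10:ncpus=24'); % nodes for the worker job (placeholder)
set(cluster, 'NumWorkers', 240);

pool = parpool(cluster, 240);
```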

```matlab
% (fragment of the example; the pool and data setup precede this part)
T = W*x; % Calculation performed on labs, in parallel.
         % T and W are both codistributed arrays here.
end

whos % T and W are both distributed arrays here.

% shut down parallel pool
delete(pool)
```

Note that we first construct a cluster object using the imported profile, then set some important options, namely SubmitArguments, where you need to specify the accounting id, and ResourceTemplate, where you need to specify the number of nodes to run the job.

You can start this script using batch mode the same way as in Local mode example.

### Parallel Matlab Batch With Direct Launch (Workers Spawned Within the Existing Job)

This method is a "hack" we invented to emulate the mpiexec functionality found in previous MATLAB versions. We leverage the MATLAB Generic Scheduler interface, but instead of submitting the workers to PBS, we launch them directly within the running job. This avoids the issues of the master script and workers running in separate jobs (license not available, waiting for the worker job to spawn, etc.).

!!! warning
    This method is experimental.

For this method, you need to use the SalomonDirect profile; import it [the same way as SalomonPBSPro](matlab.md#running-parallel-matlab-using-distributed-computing-toolbox---engine).
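The corresponding pool setup would look along these lines (a sketch; the worker count is a placeholder that should match the cores allocated to the job):

```matlab
% create cluster object from the imported direct-launch profile
cluster = parcluster('SalomonDirect');
set(cluster, 'NumWorkers', 48);   % placeholder worker count

pool = parpool(cluster, 48);
```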

```matlab
% (fragment of the example; the pool and data setup precede this part)
T = W*x; % Calculation performed on labs, in parallel.
         % T and W are both codistributed arrays here.
end

whos % T and W are both distributed arrays here.

% shut down parallel pool
delete(pool)
```

### Non-Interactive Session and Licenses

If you want to run batch jobs with Matlab, be sure to request the appropriate license features with the PBS Pro scheduler, at least `-l __feature__matlab__MATLAB=1` for the EDU variant of Matlab. For more information about how to check the license feature states and how to request them with PBS Pro, [look here](../../../anselm/software/isv_licenses/).

The licensing feature of PBS is currently disabled.

In case of a non-interactive session, please read the [following information](../../../anselm/software/isv_licenses/) on how to modify the qsub command to test for available licenses prior to getting the resource allocation.

### Matlab Distributed Computing Engines Start Up Time

Starting Matlab workers is an expensive process that requires a certain amount of time. For reference, see the following table:

| compute nodes | number of workers | start-up time[s] |

## MATLAB on UV2000

The UV2000 machine, available in the queue "qfat", can be used for MATLAB computations. It is an SMP NUMA machine with a large amount of RAM, which can be beneficial for certain types of MATLAB jobs. CPU cores are allocated in chunks of 8 on this machine.

You can use MATLAB on UV2000 in two parallel modes:

### Threaded Mode

Since this is an SMP machine, you can completely avoid using the Parallel Toolbox and use only MATLAB's threading. MATLAB will automatically detect the number of cores you have allocated and will set maxNumCompThreads accordingly, and certain operations, such as fft, eig, svd, etc., will automatically run in threads. The advantage of this mode is that you don't need to modify your existing sequential codes.
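For instance, you can check or override the thread count from within MATLAB using the standard maxNumCompThreads function:

```matlab
n = maxNumCompThreads    % report the current maximum number of computational threads
maxNumCompThreads(16);   % explicitly limit threaded operations to 16 threads
```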

### Local Cluster Mode

You can also use the Parallel Toolbox on UV2000. Use [local cluster mode](matlab/#parallel-matlab-batch-job-in-local-mode); the "SalomonPBSPro" profile will not work.

!!! note
    This section relates to the old versions R2013 and R2014. For MATLAB 2015, please use [this documentation instead](matlab/).

Matlab is available in the latest stable version. There are always two variants of the release:

* Non-commercial, or so-called EDU variant, which can be used for common research and educational purposes.

* Commercial, or so-called COM variant, which can also be used for commercial activities. The licenses for the commercial variant are much more expensive, so usually only a subset of features is available in the COM variant compared to the EDU variant.

To load the latest version of Matlab load the module

```console
$ ml matlab
```

The EDU variant is loaded by default. If you need another version or variant, load it explicitly. To obtain the list of available versions, use

```console
$ ml av matlab
```

If you need to use the Matlab GUI to prepare your Matlab programs, you can use Matlab directly on the login nodes. But for all computations, use Matlab on the compute nodes via the PBS Pro scheduler.

If you require the Matlab GUI, please follow the general information about running graphical applications.

Matlab GUI is quite slow using the X forwarding built in the PBS (qsub -X), so using X11 display redirection either via SSH or directly by xauth (please see the "GUI Applications on Compute Nodes over VNC" part) is recommended.

To run Matlab with GUI, use

```console
$ matlab
```

To run Matlab in text mode, without the Matlab Desktop GUI environment, use

```console
$ matlab -nodesktop -nosplash
```

Plots, images, etc. will still be available.

## Running Parallel Matlab Using Distributed Computing Toolbox / Engine

The recommended parallel mode for running parallel Matlab on Anselm is the MPIEXEC mode. In this mode, the user allocates resources through PBS prior to starting Matlab. Once resources are granted, the main Matlab instance is started on the first compute node assigned to the job by PBS, and workers are started on all remaining nodes. The user can use both interactive and non-interactive PBS sessions. This mode guarantees that data processing is not performed on the login nodes, but entirely on the compute nodes.

For performance reasons, Matlab should use the system MPI. On Anselm, the supported MPI implementation for Matlab is Intel MPI. To switch to the system MPI, the user has to override the default Matlab setting by creating a new configuration file in their home directory. The path and file name have to be exactly as in the following listing:

```console
$ vim ~/matlab/mpiLibConf.m
```

```matlab
function [lib, extras] = mpiLibConf
%MATLAB MPI Library overloading for Infiniband Networks

mpich = '/opt/intel/impi/4.1.1.036/lib64/';

disp('Using Intel MPI 4.1.1.036 over Infiniband')

lib = strcat(mpich, 'libmpich.so');
mpl = strcat(mpich, 'libmpl.so');
opa = strcat(mpich, 'libopa.so');

extras = {};
```

The system MPI library allows Matlab to communicate through the 40 Gbit/s InfiniBand QDR interconnect instead of the slower 1 Gbit Ethernet network.

!!! note
    The path to the MPI library in "mpiLibConf.m" has to match the version of the loaded Intel MPI module. In this example, version 4.1.1.036 of Intel MPI is used by Matlab, and therefore the module impi/4.1.1.036 has to be loaded prior to starting Matlab.

### Parallel Matlab Interactive Session

Once this file is in place, the user can request resources from PBS. The following example shows how to start an interactive session with support for the Matlab GUI. For more information about GUI-based applications on Anselm, see [this page](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/).

This qsub command example shows how to run Matlab with 32 workers in the following configuration: 2 nodes (using all 16 cores per node) and 16 workers = mpiprocs per node (-l select=2:ncpus=16:mpiprocs=16). If the user requires a smaller number of workers per node, the "mpiprocs" parameter has to be changed.

The second part of the command shows how to request all necessary licenses: in this case, 1 Matlab-EDU license and 32 Distributed Computing Engines licenses.
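A sketch of such an interactive request (the license feature names are illustrative assumptions; adjust them to your cluster's conventions):

```console
$ qsub -I -q qexp -l select=2:ncpus=16:mpiprocs=16 \
    -l feature__matlab__MATLAB=1 -l feature__matlab__Distrib_Computing_Engines=32
```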

Once access to the compute nodes is granted by PBS, the user can load the following modules and start Matlab:

```console
$ ml matlab/R2013a-EDU
$ ml impi/4.1.1.036
$ matlab &
```

### Parallel Matlab Batch Job

To run Matlab in batch mode, write a Matlab script, then write a bash jobscript and execute it via the qsub command. By default, Matlab will execute one Matlab worker instance per allocated core.
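A jobscript along these lines is assumed (a sketch modeled on the jobscript shown earlier in this document; the scratch path is a placeholder):

```bash
#!/bin/bash

# change to shared scratch directory (placeholder path)
SCR=/scratch/$USER/$PBS_JOBID
mkdir -p $SCR ; cd $SCR || exit

# copy input file to scratch
cp $PBS_O_WORKDIR/matlabcode.m .

# load modules (Matlab and the matching Intel MPI)
module load matlab/R2013a-EDU
module load impi/4.1.1.036

# execute the calculation
matlab -nodisplay -r matlabcode > output.out

# copy output file to home
cp output.out $PBS_O_WORKDIR/.
```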

This script may be submitted directly to the PBS workload manager via the qsub command. The inputs and the Matlab script are in the matlabcode.m file, outputs in the output.out file. Note the missing .m extension in the `matlab -r matlabcode` call; **the .m must not be included**. Note that the **shared /scratch must be used**. Further, it is **important to include a quit** statement at the end of the matlabcode.m script.

Submit the jobscript using qsub:

```console
$ qsub ./jobscript
```

### Parallel Matlab Program Example

The last part of the configuration is done directly in the user Matlab script before Distributed Computing Toolbox is started.
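The configuration lines in question would look along these lines (a sketch based on the surrounding description; findResource is the pre-R2014 Parallel Computing Toolbox API, and the mpirun path is an assumption that must match the loaded impi module):

```matlab
% create an mpiexec scheduler object and point it to the system Intel MPI mpirun
sched = findResource('scheduler', 'type', 'mpiexec');
set(sched, 'MpiexecFileName', '/opt/intel/impi/4.1.1.036/bin64/mpirun'); % assumed path
set(sched, 'EnvironmentSetMethod', 'setenv');
```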

This script creates a scheduler object "sched" of type "mpiexec" that starts workers using the mpirun tool. To use the correct version of mpirun, the second line specifies the path to the correct version of the system Intel MPI library.

!!! note
    Every Matlab script that needs to initialize/use the matlabpool has to contain these three lines prior to calling the matlabpool(sched, ...) function.

The last step is to start the matlabpool with the "sched" object and the correct number of workers. In this case, qsub asked for a total of 32 cores, therefore the number of workers is also set to 32.

```matlab
matlabpool(sched, 32);

... parallel code ...

matlabpool close
```

The complete example showing how to use the Distributed Computing Toolbox is shown here.

```matlab
% (fragment of the example; the matlabpool and data setup precede this part)
T = W*x; % Calculation performed on labs, in parallel.
         % T and W are both codistributed arrays here.
end
T;
whos % T and W are both distributed arrays here.

matlabpool close

quit
```

You can copy and paste the example into a .m file and execute it. Note that the matlabpool size should correspond to the **total number of cores** available on the allocated nodes.

### Non-Interactive Session and Licenses

If you want to run batch jobs with Matlab, be sure to request the appropriate license features with the PBS Pro scheduler, at least `-l __feature__matlab__MATLAB=1` for the EDU variant of Matlab. For more information about how to check the license feature states and how to request them with PBS Pro, [look here](../isv_licenses/).
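For illustration, the license feature can be appended to an ordinary qsub invocation (a sketch; the queue and resource selection are placeholders):

```console
$ qsub -q qprod -l select=2:ncpus=16 -l __feature__matlab__MATLAB=1 ./jobscript
```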

In case of a non-interactive session, please read the [following information](../isv_licenses/) on how to modify the qsub command to test for available licenses prior to getting the resource allocation.

### Matlab Distributed Computing Engines Start Up Time

Starting Matlab workers is an expensive process that requires a certain amount of time. For reference, see the following table:

| compute nodes | number of workers | start-up time[s] |

GNU Octave is a high-level interpreted language, primarily intended for numerical computations. It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. It also provides extensive graphics capabilities for data visualization and manipulation. Octave is normally used through its interactive command line interface, but it can also be used to write non-interactive programs. The Octave language is quite similar to Matlab so that most programs are easily portable. Read more on <http://www.gnu.org/software/octave/>

To list the available modules, type:

```console
$ ml av octave
```

## Modules and Execution

```console
$ ml Octave
```

Octave on the clusters is linked to the highly optimized MKL mathematical library. This provides threaded parallelization to many Octave kernels, notably the linear algebra subroutines. Octave runs these heavy calculation kernels without any penalty. By default, Octave parallelizes to 16 (Anselm) or 24 (Salomon) threads. You may control the threads by setting the OMP_NUM_THREADS environment variable.
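For example, to limit the threading to 4 threads before starting Octave (the variable applies to any program subsequently launched from that shell):

```shell
# restrict MKL/OpenMP threading for subsequently launched programs
export OMP_NUM_THREADS=4
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```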

To run Octave interactively, log in with the ssh -X parameter for X11 forwarding. Run Octave:

```console
$ octave
```

To run Octave in batch mode, write an Octave script, then write a bash jobscript and execute it via the qsub command. By default, Octave will use 16 (Anselm) or 24 (Salomon) threads when running MKL kernels.

```bash
#!/bin/bash

# change to local scratch directory
cd /lscratch/$PBS_JOBID || exit

# copy input file to scratch
cp $PBS_O_WORKDIR/octcode.m .

# load octave module
module load octave

# execute the calculation
octave -q --eval octcode > output.out

# copy output file to home
cp output.out $PBS_O_WORKDIR/.

#exit
exit
```

This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in octcode.m file, outputs in output.out file. See the single node jobscript example in the [Job execution section](../../job-submission-and-execution/).

The Octave C compiler, mkoctfile, calls GNU gcc 4.8.1 to compile native C code. This is very useful for running native C subroutines in the Octave environment.

```console
$ mkoctfile -v
```

Octave may use MPI for interprocess communication. This functionality is currently not supported on the Anselm cluster. If you require the Octave interface to MPI, please contact [Anselm support](https://support.it4i.cz/rt/).

## Xeon Phi Support

Octave may take advantage of the Xeon Phi accelerators. This will only work on the [Intel Xeon Phi](../intel-xeon-phi/) [accelerated nodes](../../compute-nodes/).

### Automatic Offload Support

Octave can accelerate BLAS-type operations (in particular, matrix-matrix multiplications) on the Xeon Phi accelerator, via [Automatic Offload using the MKL library](../intel-xeon-phi/#section-3).

Example:

```console
$ export OFFLOAD_REPORT=2
$ export MKL_MIC_ENABLE=1
$ ml octave
$ octave -q
octave:1> A=rand(10000); B=rand(10000);
octave:2> tic; C=A*B; toc

[MKL] [MIC --] [AO Function]    DGEMM
[MKL] [MIC --] [AO DGEMM Workdivision]    0.32 0.68
[MKL] [MIC 00] [AO DGEMM CPU Time]    2.896003 seconds
[MKL] [MIC 00] [AO DGEMM MIC Time]    1.967384 seconds
```

In this example, the calculation was automatically divided among the CPU cores and the Xeon Phi MIC accelerator, reducing the total runtime from 6.3 seconds down to 2.9 seconds.