diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md
index 9ca3aca6ae23e907518d7192529b5923a43dcb25..016aadd2e1613be5250b588b5943d825ebe3f790 100644
--- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md
+++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md
@@ -31,44 +31,48 @@ Matlab GUI is quite slow using the X forwarding built in the PBS (qsub -X), so u
 
 To run Matlab with GUI, use
 
 ```bash
-    $ matlab
+$ matlab
 ```
 
 To run Matlab in text mode, without the Matlab Desktop GUI environment, use
 
-`bash`bash
-    $ matlab -nodesktop -nosplash
+```bash
+$ matlab -nodesktop -nosplash
+```
 
-    plots, images, etc... will be still available.
+Plots, images, etc. will still be available.
 
-    Running parallel Matlab using Distributed Computing Toolbox / Engine
-    --------------------------------------------------------------------
-    Recommended parallel mode for running parallel Matlab on Anselm is MPIEXEC mode. In this mode user allocates resources through PBS prior to starting Matlab. Once resources are granted the main Matlab instance is started on the first compute node assigned to job by PBS and workers are started on all remaining nodes. User can use both interactive and non-interactive PBS sessions. This mode guarantees that the data processing is not performed on login nodes, but all processing is on compute nodes.
- 
+## Running parallel Matlab using Distributed Computing Toolbox / Engine
 
-    For the performance reasons Matlab should use system MPI. On Anselm the supported MPI implementation for Matlab is Intel MPI. To switch to system MPI user has to override default Matlab setting by creating new configuration file in its home directory. The path and file name has to be exactly the same as in the following listing:
+The recommended parallel mode for running parallel Matlab on Anselm is the MPIEXEC mode. In this mode, the user allocates resources through PBS prior to starting Matlab. Once the resources are granted, the main Matlab instance is started on the first compute node assigned to the job by PBS, and workers are started on all remaining nodes. Both interactive and non-interactive PBS sessions can be used. This mode guarantees that no data processing is performed on the login nodes; all processing takes place on the compute nodes.
 
-    ```bash
-    $ vim ~/matlab/mpiLibConf.m
+ 
 
-    function [lib, extras] = mpiLibConf
-    %MATLAB MPI Library overloading for Infiniband Networks
+For performance reasons, Matlab should use the system MPI. On Anselm, the supported MPI implementation for Matlab is Intel MPI. To switch to the system MPI, the user has to override the default Matlab setting by creating a new configuration file in their home directory. The path and file name have to be exactly the same as in the following listing:
 
-    mpich = '/opt/intel/impi/4.1.1.036/lib64/';
+```bash
+$ vim ~/matlab/mpiLibConf.m
+```
+```bash
+function [lib, extras] = mpiLibConf
+%MATLAB MPI Library overloading for Infiniband Networks
+
+mpich = '/opt/intel/impi/4.1.1.036/lib64/';
 
-    disp('Using Intel MPI 4.1.1.036 over Infiniband')
+disp('Using Intel MPI 4.1.1.036 over Infiniband')
 
-    lib = strcat(mpich, 'libmpich.so');
-    mpl = strcat(mpich, 'libmpl.so');
-    opa = strcat(mpich, 'libopa.so');
+lib = strcat(mpich, 'libmpich.so');
+mpl = strcat(mpich, 'libmpl.so');
+opa = strcat(mpich, 'libopa.so');
 
-    extras = {};
+extras = {};
+```
 
 System MPI library allows Matlab to communicate through 40 Gbit/s InfiniBand QDR interconnect instead of slower 1 Gbit Ethernet network.
 
 !!! Note "Note"
-    Please note: The path to MPI library in "mpiLibConf.m" has to match with version of loaded Intel MPI module. In this example the version 4.1.1.036 of Intel MPI is used by Matlab and therefore module impi/4.1.1.036 has to be loaded prior to starting Matlab.
+    The path to the MPI library in "mpiLibConf.m" has to match the version of the loaded Intel MPI module. In this example, version 4.1.1.036 of Intel MPI is used by Matlab, and therefore the module impi/4.1.1.036 has to be loaded prior to starting Matlab.
 
 ### Parallel Matlab interactive session
 
@@ -97,27 +101,27 @@ Once the access to compute nodes is granted by PBS, user can load following modu
 To run matlab in batch mode, write a matlab script, then write a bash jobscript and execute via the qsub command. By default, matlab will execute one matlab worker instance per allocated core.
 
 ```bash
-    #!/bin/bash
-    #PBS -A PROJECT ID
-    #PBS -q qprod
-    #PBS -l select=2:ncpus=16:mpiprocs=16:ompthreads=1
+#!/bin/bash
+#PBS -A PROJECT_ID
+#PBS -q qprod
+#PBS -l select=2:ncpus=16:mpiprocs=16:ompthreads=1
 
-    # change to shared scratch directory
-    SCR=/scratch/$USER/$PBS_JOBID
-    mkdir -p $SCR ; cd $SCR || exit
+# change to shared scratch directory
+SCR=/scratch/$USER/$PBS_JOBID
+mkdir -p $SCR ; cd $SCR || exit
 
-    # copy input file to scratch
-    cp $PBS_O_WORKDIR/matlabcode.m .
+# copy input file to scratch
+cp $PBS_O_WORKDIR/matlabcode.m .
 
-    # load modules
-    module load matlab/R2013a-EDU
-    module load impi/4.1.1.036
+# load modules
+module load matlab/R2013a-EDU
+module load impi/4.1.1.036
 
-    # execute the calculation
-    matlab -nodisplay -r matlabcode > output.out
+# execute the calculation
+matlab -nodisplay -r matlabcode > output.out
 
-    # copy output file to home
-    cp output.out $PBS_O_WORKDIR/.
+# copy output file to home
+cp output.out $PBS_O_WORKDIR/.
 ```
 
 This script may be submitted directly to the PBS workload manager via the qsub command. The inputs and matlab script are in matlabcode.m file, outputs in output.out file. Note the missing .m extension in the matlab -r matlabcode call, **the .m must not be included**. Note that the **shared /scratch must be used**. Further, it is **important to include quit** statement at the end of the matlabcode.m script.
@@ -125,7 +129,7 @@ This script may be submitted directly to the PBS workload manager via the qsub c
 Submit the jobscript using qsub
 
 ```bash
-    $ qsub ./jobscript
+$ qsub ./jobscript
 ```
 
 ### Parallel Matlab program example
 
@@ -133,63 +137,63 @@ Submit the jobscript using qsub
 The last part of the configuration is done directly in the user Matlab script before Distributed Computing Toolbox is started.
 
 ```bash
-    sched = findResource('scheduler', 'type', 'mpiexec');
-    set(sched, 'MpiexecFileName', '/apps/intel/impi/4.1.1/bin/mpirun');
-    set(sched, 'EnvironmentSetMethod', 'setenv');
+sched = findResource('scheduler', 'type', 'mpiexec');
+set(sched, 'MpiexecFileName', '/apps/intel/impi/4.1.1/bin/mpirun');
+set(sched, 'EnvironmentSetMethod', 'setenv');
 ```
 
 This script creates scheduler object "sched" of type "mpiexec" that starts workers using mpirun tool. To use correct version of mpirun, the second line specifies the path to correct version of system Intel MPI library.
 
 !!! Note "Note"
-    Please note: Every Matlab script that needs to initialize/use matlabpool has to contain these three lines prior to calling matlabpool(sched, ...) function.
+    Every Matlab script that needs to initialize/use matlabpool has to contain these three lines prior to calling the matlabpool(sched, ...) function.
 
 The last step is to start matlabpool with "sched" object and correct number of workers. In this case qsub asked for total number of 32 cores, therefore the number of workers is also set to 32.
 
 ```bash
-    matlabpool(sched,32);
+matlabpool(sched,32);
 
-    ... parallel code ...
+... parallel code ...
 
-    matlabpool close
+matlabpool close
 ```
 
 The complete example showing how to use Distributed Computing Toolbox is shown here.
 
 ```bash
-    sched = findResource('scheduler', 'type', 'mpiexec');
-    set(sched, 'MpiexecFileName', '/apps/intel/impi/4.1.1/bin/mpirun')
-    set(sched, 'EnvironmentSetMethod', 'setenv')
-    set(sched, 'SubmitArguments', '')
-    sched
-
-    matlabpool(sched,32);
-
-    n=2000;
-
-    W = rand(n,n);
-    W = distributed(W);
-    x = (1:n)';
-    x = distributed(x);
-    spmd
-    [~, name] = system('hostname')
-
-    T = W*x; % Calculation performed on labs, in parallel.
-    % T and W are both codistributed arrays here.
-    end
-    T;
-    whos % T and W are both distributed arrays here.
-
-    matlabpool close
-    quit
+sched = findResource('scheduler', 'type', 'mpiexec');
+set(sched, 'MpiexecFileName', '/apps/intel/impi/4.1.1/bin/mpirun')
+set(sched, 'EnvironmentSetMethod', 'setenv')
+set(sched, 'SubmitArguments', '')
+sched
+
+matlabpool(sched,32);
+
+n=2000;
+
+W = rand(n,n);
+W = distributed(W);
+x = (1:n)';
+x = distributed(x);
+spmd
+    [~, name] = system('hostname')
+
+    T = W*x; % Calculation performed on labs, in parallel.
+             % T and W are both codistributed arrays here.
+end
+T;
+whos % T and W are both distributed arrays here.
+
+matlabpool close
+quit
 ```
 
 You can copy and paste the example in a .m file and execute. Note that the matlabpool size should correspond to **total number of cores** available on allocated nodes.
 
 ### Non-interactive Session and Licenses
 
-If you want to run batch jobs with Matlab, be sure to request appropriate license features with the PBS Pro scheduler, at least the " -l \_feature_matlab_MATLAB=1" for EDU variant of Matlab. More information about how to check the license features states and how to request them with PBS Pro, please [look here](../isv_licenses/).
+If you want to run batch jobs with Matlab, be sure to request the appropriate license features with the PBS Pro scheduler, at least "`-l __feature__matlab__MATLAB=1`" for the EDU variant of Matlab. For more information on how to check the license feature states and how to request them with PBS Pro, [look here](../isv_licenses/).
 
 In case of non-interactive session please read the [following information](../isv_licenses/) on how to modify the qsub command to test for available licenses prior to getting the resource allocation.