Commit 4f249622 authored by Lukáš Krupčík's avatar Lukáš Krupčík

Merge branch 'gajdusek_clean' into 'master'

Gajdusek cleaning

See merge request !161
parents c8119615 44d6e063
......@@ -39,7 +39,7 @@ ext_links:
image: davidhrbac/docker-mdcheck:latest
allow_failure: true
after_script:
# remove JSON results
- rm *.json
script:
#- find docs.it4i/ -name '*.md' -exec grep --color -l http {} + | xargs awesome_bot -t 10
......@@ -63,7 +63,7 @@ mkdocs:
#- apt-get -y install git
# add version to footer
- bash scripts/add_version.sh
# get modules list from clusters
- bash scripts/get_modules.sh
# regenerate modules matrix
- python scripts/modules-matrix.py > docs.it4i/modules-matrix.md
......@@ -75,7 +75,7 @@ mkdocs:
# replace broken links in 404.html
- sed -i 's,href="" title=",href="/" title=",g' site/404.html
# compress sitemap
- gzip < site/sitemap.xml > site/sitemap.xml.gz
artifacts:
paths:
- site
......@@ -90,11 +90,11 @@ shellcheck:
- find . -name *.sh -not -path "./docs.it4i/*" -not -path "./site/*" -exec shellcheck {} +
deploy to stage:
environment: stage
stage: deploy
image: davidhrbac/docker-mkdocscheck:latest
before_script:
# install ssh-agent
- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
- 'which rsync || ( apt-get update -y && apt-get install rsync -y )'
# run ssh-agent
......@@ -117,7 +117,7 @@ deploy to production:
stage: deploy
image: davidhrbac/docker-mkdocscheck:latest
before_script:
# install ssh-agent
- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
- 'which rsync || ( apt-get update -y && apt-get install rsync -y )'
# run ssh-agent
......@@ -127,7 +127,7 @@ deploy to production:
# disable host key checking (NOTE: makes you susceptible to man-in-the-middle attacks)
# WARNING: use only in docker container, if you use it with shell you will overwrite your user's ssh config
- mkdir -p ~/.ssh
- echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
- useradd -lM nginx
script:
- chown nginx:nginx site -R
......
......@@ -29,8 +29,8 @@ Mellanox
### Formulas are made with:
* [https://facelessuser.github.io/pymdown-extensions/extensions/arithmatex/](https://facelessuser.github.io/pymdown-extensions/extensions/arithmatex/)
* [https://www.mathjax.org/](https://www.mathjax.org/)
You can add a formula to a page like this:
......
# Resource Allocation and Job Execution
To run a [job](job-submission-and-execution/), [computational resources](resources-allocation-policy/) for this particular job must be allocated. This is done via the PBS Pro job workload manager software, which efficiently distributes workloads across the supercomputer. Extensive information about PBS Pro can be found in the [official documentation here](../pbspro/), especially in the PBS Pro User's Guide.
## Resources Allocation Policy
......
......@@ -108,6 +108,4 @@ Options:
---8<--- "resource_accounting.md"
---8<--- "mathjax.md"
# Molpro
Molpro is a complete system of ab initio programs for molecular electronic structure calculations.
## About Molpro
Molpro is a software package used for accurate ab-initio quantum chemistry calculations. More information can be found at the [official webpage](http://www.molpro.net/).
## License
The Molpro software package is available only to users who have a valid license. Please contact support to enable access to Molpro if you have a valid license appropriate for running on our cluster (e.g. an academic research group license, parallel execution).
To run Molpro, you need to have a valid license token present in "$HOME/.molpro/token". You can download the token from the [Molpro website](https://www.molpro.net/licensee/?portal=licensee).
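A minimal sketch of installing a downloaded token (assuming it was saved as a file named token in your home directory; the file name is illustrative):

```console
$ mkdir -p $HOME/.molpro
$ mv $HOME/token $HOME/.molpro/token
$ chmod 600 $HOME/.molpro/token    # keep the license token private
```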
## Installed Version
Version 2010.1, patch level 45, is currently installed on Anselm; it is the parallel version compiled with Intel compilers and Intel MPI.
Compilation parameters are set to the defaults:
| Parameter | Value |
| ---------------------------------- | ------------ |
| max number of atoms | 200 |
| max number of valence orbitals | 300 |
| max number of basis functions | 4095 |
| max number of states per symmetry  | 20           |
| max number of state symmetries | 16 |
| max number of records | 200 |
| max number of primitives | maxbfn x [2] |
## Running
Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior with the -n, -t, and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details.
!!! note
    The OpenMP parallelization in Molpro is limited and has been observed to yield poor scaling. We therefore recommend using MPI parallelization only. This can be achieved by passing the option mpiprocs=16:ompthreads=1 to PBS.
You are advised to use the -d option to point to a directory in the [SCRATCH file system](../../storage/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these data are placed in the fast scratch file system.
### Example jobscript
```bash
#!/bin/bash
#PBS -A IT4I-0-0
#PBS -q qprod
#PBS -l select=1:ncpus=16:mpiprocs=16:ompthreads=1
cd $PBS_O_WORKDIR
# load Molpro module
module add molpro
# create a directory in the SCRATCH filesystem
mkdir -p /scratch/$USER/$PBS_JOBID
# copy an example input
cp /apps/chem/molpro/2010.1/molprop_2010_1_Linux_x86_64_i8/examples/caffeine_opt_diis.com .
# run Molpro with default options
molpro -d /scratch/$USER/$PBS_JOBID caffeine_opt_diis.com
# delete scratch directory
rm -rf /scratch/$USER/$PBS_JOBID
```
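The jobscript is then submitted the usual way (molpro_job.pbs is an illustrative file name):

```console
$ qsub molpro_job.pbs
```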
# NWChem
## Introduction
NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters.
[Homepage](http://www.nwchem-sw.org/index.php/Main_Page)
## Installed Versions
The following versions are currently installed:
* 6.1.1, not recommended; problems have been observed with this version
* 6.3-rev2-patch1, current release with the QMD patch applied. Compiled with Intel compilers, MKL, and Intel MPI
* 6.3-rev2-patch1-openmpi, same as above, but compiled with OpenMPI and the NWChem-provided BLAS instead of MKL. This version is expected to be slower
* 6.3-rev2-patch1-venus, this version contains only libraries for VENUS interface linking. It does not provide a standalone NWChem executable
For a current list of installed versions, execute:
```console
$ ml av nwchem
```
## Running
NWChem is compiled for parallel MPI execution. Normal procedure for MPI jobs applies. Sample jobscript:
```bash
#!/bin/bash
#PBS -A IT4I-0-0
#PBS -q qprod
#PBS -l select=1:ncpus=16
# change to the directory the job was submitted from (where h2o.nw resides)
cd $PBS_O_WORKDIR
module add nwchem/6.3-rev2-patch1
mpirun -np 16 nwchem h2o.nw
```
## Options
Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and set the following directives in the input file:
* MEMORY : controls the amount of memory NWChem will use
* SCRATCH_DIR : set this to a directory in the [SCRATCH file system](../../storage/storage/#scratch) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct" (see the input sketch below)
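A sketch of a water SCF input using these directives (the memory size and scratch path are illustrative placeholders to adapt):

```console
$ cat h2o.nw
title "water SCF sketch"
memory 1000 mb
scratch_dir /scratch/username/nwchem
geometry units angstroms
  O  0.000  0.000  0.000
  H  0.757  0.586  0.000
  H -0.757  0.586  0.000
end
basis
  * library 6-31g
end
scf
  direct
end
task scf energy
```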
# Compilers
## Available Compilers, Including GNU, INTEL, and UPC Compilers
Currently there are several compilers for different programming languages available on the Anselm cluster:
* C/C++
* Fortran 77/90/95
* Unified Parallel C
* Java
* NVIDIA CUDA
The C/C++ and Fortran compilers are divided into two main groups, GNU and Intel.
## Intel Compilers
For information about the usage of Intel Compilers and other Intel products, please read the [Intel Parallel studio](intel-suite/) page.
## GNU C/C++ and Fortran Compilers
For compatibility reasons, the original (old 4.4.6-4) versions of the GNU compilers are still available as part of the OS. These are accessible in the search path by default.
It is strongly recommended to use the up-to-date version (4.8.1), which comes with the module gcc:
```console
$ ml gcc
$ gcc -v
$ g++ -v
$ gfortran -v
```
With the module loaded, two environment variables are predefined. One is for maximum optimizations on the Anselm cluster architecture, the other for debugging purposes:
```console
$ echo $OPTFLAGS
-O3 -march=corei7-avx
$ echo $DEBUGFLAGS
-O0 -g
```
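For example, the variables can be passed straight to the compiler (myprog.c is an illustrative file name):

```console
$ gcc $OPTFLAGS -o myprog myprog.c      # optimized build for the Anselm architecture
$ gcc $DEBUGFLAGS -o myprog myprog.c    # unoptimized build with debug symbols
```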
For more information about the possibilities of the compilers, please see the man pages.
## Unified Parallel C
UPC is supported by two compiler/runtime implementations:
* GNU - SMP/multi-threading support only
* Berkeley - multi-node support as well as SMP/multi-threading support
### GNU UPC Compiler
To use the GNU UPC compiler and run the compiled binaries, use the module gupc:
```console
$ module add gupc
$ gupc -v
$ g++ -v
```
A simple program to test the compiler:
```console
$ cat count.upc
/* count.upc - a simple UPC example */
#include <upc.h>
#include <stdio.h>
int main() {
  if (MYTHREAD == 0) {
    printf("Welcome to GNU UPC!!!\n");
  }
  upc_barrier;
  printf(" - Hello from thread %i\n", MYTHREAD);
  return 0;
}
```
To compile the example, use:
```console
$ gupc -o count.upc.x count.upc
```
To run the example with 5 threads, issue:
```console
$ ./count.upc.x -fupc-threads-5
```
For more information see the man pages.
### Berkeley UPC Compiler
To use the Berkeley UPC compiler and runtime environment to run the binaries, use the module bupc:
```console
$ module add bupc
$ upcc -version
```
The "smp" UPC network is used by default. This is a very quick and easy way to test and debug, but it is limited to one node only.
For production runs, it is recommended to use the native InfiniBand implementation of the UPC network, "ibv". For testing and debugging on multiple nodes, the "mpi" UPC network is recommended.
!!! warning
    Selection of the network is done at compile time, not at runtime (as one might expect)!
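For instance, to build for the "mpi" network instead of the default "smp", only the compile-time flag changes (a sketch using the hello.upc example shown below; the "ibv" build follows further down):

```console
$ upcc -network=mpi -o hello.upc.x hello.upc
```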
Example UPC code:
```console
$ cat hello.upc
/* hello.upc - a simple UPC example */
#include <upc.h>
#include <stdio.h>
int main() {
  if (MYTHREAD == 0) {
    printf("Welcome to Berkeley UPC!!!\n");
  }
  upc_barrier;
  printf(" - Hello from thread %i\n", MYTHREAD);
  return 0;
}
```
To compile the example with the "ibv" UPC network, use:
```console
$ upcc -network=ibv -o hello.upc.x hello.upc
```
To run the example with 5 threads, issue:
```console
$ upcrun -n 5 ./hello.upc.x
```
To run the example on two compute nodes using all 32 cores, with 32 threads, issue:
```console
$ qsub -I -q qprod -A PROJECT_ID -l select=2:ncpus=16
$ module add bupc
$ upcrun -n 32 ./hello.upc.x
```
For more information see the man pages.
## Java
For information how to use Java (runtime and/or compiler), please read the [Java page](java/).
## NVIDIA CUDA
For information on how to work with NVIDIA CUDA, please read the [NVIDIA CUDA page](nvidia-cuda/).
# COMSOL Multiphysics
## Introduction
[COMSOL](http://www.comsol.com) is a powerful environment for modelling and solving various engineering and scientific problems based on partial differential equations. COMSOL is designed to solve coupled or multiphysics phenomena. For many standard engineering problems, COMSOL provides add-on products such as electrical, mechanical, fluid flow, and chemical applications.
* [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
* [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
* [CFD Module](http://www.comsol.com/cfd-module),
* [Acoustics Module](http://www.comsol.com/acoustics-module),
* and [many others](http://www.comsol.com/products)
COMSOL also provides an interface for equation-based modelling of partial differential equations.
## Execution
On the Anselm cluster, COMSOL is available in the latest stable version. There are two variants of the release:
* **Non-commercial** or so-called **EDU variant**, which can be used for research and educational purposes.
* **Commercial** or so-called **COM variant**, which can also be used for commercial activities. The **COM variant** has only a subset of the features available in the **EDU variant**. More about licensing will be posted here soon.
To load the default version of COMSOL, load the module:
```console
$ ml comsol
```
By default, the **EDU variant** will be loaded. If you need another version or variant, load that particular version. To obtain the list of available versions, use:
```console
$ ml av comsol
```
If you need to prepare COMSOL jobs in interactive mode, it is recommended to use COMSOL on the compute nodes via the PBS Pro scheduler. To run the COMSOL Desktop GUI on Windows, it is recommended to use Virtual Network Computing (VNC).
```console
$ xhost +
$ qsub -I -X -A PROJECT_ID -q qprod -l select=1:ncpus=16
$ ml comsol
$ comsol
```
To run COMSOL in batch mode, without the COMSOL Desktop GUI environment, you can utilize the default (comsol.pbs) job script and execute it via the qsub command.
```bash
#!/bin/bash
#PBS -l select=3:ncpus=16
#PBS -q qprod
#PBS -N JOB_NAME
#PBS -A PROJECT_ID
cd /scratch/$USER/ || exit
echo Time is `date`
echo Directory is `pwd`
echo '**PBS_NODEFILE***START*******'
cat $PBS_NODEFILE
echo '**PBS_NODEFILE***END*********'
module load comsol
# module load comsol/43b-COM
ntask=$(wc -l < $PBS_NODEFILE)
comsol -nn ${ntask} batch -configuration /tmp -mpiarg -rmk -mpiarg pbs -tmpdir /scratch/$USER/ -inputfile name_input_f.mph -outputfile name_output_f.mph -batchlog name_log_f.log
```
The working directory has to be created before submitting the (comsol.pbs) job script into the queue. The input file (name_input_f.mph) has to be in the working directory, or the full path to the input file has to be specified. The appropriate path to the job's temp directory has to be set by the command option (-tmpdir).
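For example, the preparation and submission might look as follows (a sketch; comsol.pbs is the script above saved under that name):

```console
$ mkdir -p /scratch/$USER
$ cp name_input_f.mph /scratch/$USER/
$ qsub comsol.pbs
```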
## LiveLink for MATLAB
COMSOL is a software package for the numerical solution of partial differential equations. LiveLink for MATLAB allows connection to the COMSOL API (Application Programming Interface) with the benefits of MATLAB's programming language and computing environment.
LiveLink for MATLAB is available in both the **EDU** and **COM** **variants** of the COMSOL release. On Anselm, 1 commercial (**COM**) license and 5 educational (**EDU**) licenses of LiveLink for MATLAB are available (please see the [ISV Licenses](isv_licenses/)).
The following example shows how to start a COMSOL model from MATLAB via LiveLink in interactive mode.
```console
$ xhost +
$ qsub -I -X -A PROJECT_ID -q qexp -l select=1:ncpus=16
$ ml matlab
$ ml comsol
$ comsol server matlab
```
The first time you launch LiveLink for MATLAB (MATLAB client/COMSOL server connection), a login and password are requested; this information is not requested again.
To run LiveLink for MATLAB in batch mode with the (comsol_matlab.pbs) job script, you can utilize/modify the following script and execute it via the qsub command.
```bash
#!/bin/bash
#PBS -l select=3:ncpus=16
#PBS -q qprod
#PBS -N JOB_NAME
#PBS -A PROJECT_ID
cd /scratch/$USER || exit
echo Time is `date`
echo Directory is `pwd`
echo '**PBS_NODEFILE***START*******'
cat $PBS_NODEFILE
echo '**PBS_NODEFILE***END*********'
module load matlab
module load comsol/43b-EDU
ntask=$(wc -l < $PBS_NODEFILE)
comsol -nn ${ntask} server -configuration /tmp -mpiarg -rmk -mpiarg pbs -tmpdir /scratch/$USER &
cd /apps/engineering/comsol/comsol43b/mli
matlab -nodesktop -nosplash -r "mphstart; addpath /scratch/$USER; test_job"
```
This example shows how to run LiveLink for MATLAB with the following configuration: 3 nodes and 16 cores per node. The working directory has to be created before submitting the (comsol_matlab.pbs) job script into the queue. The input file (test_job.m) has to be in the working directory, or the full path to the input file has to be specified. The MATLAB command option (-r "mphstart") creates a connection with a COMSOL server using the default port number.
# Allinea Forge (DDT,MAP)
Allinea Forge consists of two tools: the debugger DDT and the profiler MAP.
Allinea DDT is a commercial debugger primarily for debugging parallel MPI or OpenMP programs. It also has support for GPU (CUDA) and Intel Xeon Phi accelerators. DDT provides all the standard debugging features (stack trace, breakpoints, watches, variable views, threads, etc.) for every thread running as part of your program, or for every process, even if these processes are distributed across a cluster using an MPI implementation.
Allinea MAP is a profiler for C/C++/Fortran HPC codes. It is designed for profiling parallel code that uses pthreads, OpenMP, or MPI.
## License and Limitations for Anselm Users
On Anselm, users can debug OpenMP or MPI codes that run up to 64 parallel processes. When debugging GPU- or Xeon Phi-accelerated codes, the limit is 8 accelerators. These limitations mean that:
* 1 user can debug up to 64 processes, or
* 32 users can debug 2 processes each, etc.
In case of debugging on accelerators:
* 1 user can debug on up to 8 accelerators, or
* 8 users can each debug on a single accelerator.
## Compiling Code to Run With DDT
### Modules
Load all necessary modules to compile the code. For example:
```console
$ ml intel
$ ml impi ... or ... module load openmpi/X.X.X-icc
```
Load the Allinea DDT module:
```console
$ ml Forge
```
Compile the code:
```console
$ mpicc -g -O0 -o test_debug test.c
$ mpif90 -g -O0 -o test_debug test.f
```
### Compiler Flags
Before debugging, you need to compile your code with these flags:
!!! note
    \* **-g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for GNU and Intel C/C++ and Fortran compilers.
    \* **-O0** : Suppresses all optimizations.
## Starting a Job With DDT
Be sure to log in with X window forwarding enabled. This could mean using the -X option with ssh:
```console
$ ssh -X username@anselm.it4i.cz
```
Another option is to access the login node using VNC. Please see the detailed information on how to [use the graphical user interface on Anselm](/general/accessing-the-clusters/graphical-user-interface/x-window-system/).
From the login node, an interactive session **with X windows forwarding** (-X option) can be started with the following command:
```console
$ qsub -I -X -A NONE-0-0 -q qexp -l select=1:ncpus=16:mpiprocs=16,walltime=01:00:00
```
Then launch the debugger with the ddt command followed by the name of the executable to debug:
```console
$ ddt test_debug
```
The submission window that appears has a prefilled path to the executable to debug. You can select the number of MPI processes and/or OpenMP threads on which to run and press Run. Command line arguments for the program can be entered in the "Arguments" box.
![](../../../img/ddt1.png)
To start debugging directly, without the submission window, you can specify the debugging and execution parameters from the command line. For example, the number of MPI processes is set by the "-np 4" option, and the dialog is skipped with the "-start" option. To see the list of "ddt" command line parameters, run "ddt --help".
```console
ddt -start -np 4 ./hello_debug_impi
```
## Documentation
Users can find the original User Guide after loading the DDT module:
```console
$DDTPATH/doc/userguide.pdf
```
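For example, the guide can be viewed on a login node with X forwarding (a sketch; assuming a PDF viewer such as evince is available):

```console
$ ml Forge
$ evince $DDTPATH/doc/userguide.pdf
```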
[1] Discipline, Magic, Inspiration and Science: Best Practice Debugging with Allinea DDT, Workshop conducted at LLNL by Allinea on May 10, 2013, [link](https://computing.llnl.gov/tutorials/allineaDDT/index.html)
# Allinea Performance Reports
## Introduction
Allinea Performance Reports characterize the performance of HPC application runs. After executing your application through the tool, a synthetic HTML report is generated automatically, containing information about several metrics along with clear behavior statements and hints to help you improve the efficiency of your runs.
Allinea Performance Reports is most useful for profiling MPI programs.
Our license is limited to 64 MPI processes.
## Modules
Allinea Performance Reports version 6.0 is available:
```console
$ ml PerformanceReports/6.0
```
The module sets up the environment variables required for using Allinea Performance Reports. This particular command loads version 6.0 explicitly.
## Usage
!!! note
    Use the perf-report wrapper on your (MPI) program.
Instead of [running your MPI program the usual way](../mpi/), use the perf-report wrapper:
```console
$ perf-report mpirun ./mympiprog.x
```
The MPI program will run as usual. The perf-report wrapper creates two additional files, in \*.txt and \*.html formats, containing the performance report. Note that [demanding MPI codes should be run within the queue system](../../job-submission-and-execution/).
## Example
In this example, we will profile the mympiprog.x MPI program using Allinea Performance Reports. Assume that the code is compiled with Intel compilers and linked against the Intel MPI library:
First, we allocate some nodes via the express queue:
```console
$ qsub -q qexp -l select=2:ncpus=16:mpiprocs=16:ompthreads=1 -I
qsub: waiting for job 262197.dm2 to start
qsub: job 262197.dm2 ready
```
Then we load the modules and run the program the usual way:
```console
$ ml intel impi allinea-perf-report/4.2
$ mpirun ./mympiprog.x
```
Now let's profile the code:
```console
$ perf-report mpirun ./mympiprog.x
```
Performance report files [mympiprog_32p\*.txt](../../../src/mympiprog_32p_2014-10-15_16-56.txt) and [mympiprog_32p\*.html](../../../src/mympiprog_32p_2014-10-15_16-56.html) were created. We can see that the code is very efficient on MPI and is CPU bound.
# Debuggers and Profilers Summary
## Introduction
We provide state-of-the-art programs and tools to develop, profile, and debug HPC codes at IT4Innovations. On these pages, we provide an overview of the profiling and debugging tools available on Anselm at IT4I.
## Intel Debugger
The Intel debugger version 13.0 is available via the intel module. The debugger works for applications compiled with the C and C++ compilers and the ifort Fortran 77/90/95 compiler. The debugger provides a Java GUI environment. Use X display for running the GUI.
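A minimal sketch of starting a debugging session (assuming an executable myprog compiled with the -g flag; idb launches the GUI):

```console
$ ssh -X username@anselm.it4i.cz
$ ml intel
$ idb ./myprog
```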