ANSYS CFX
=========
[ANSYS CFX](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+CFX)
software is a high-performance, general purpose fluid dynamics program that has been applied to solve wide-ranging fluid flow problems for over 20 years. At the heart of ANSYS CFX is its advanced solver technology, the key to achieving reliable and accurate solutions quickly and robustly. The modern, highly parallelized solver is the foundation for an abundant choice of physical models to capture virtually any type of phenomena related to fluid flow. The solver and its many physical models are wrapped in a modern, intuitive, and flexible GUI and user environment, with extensive capabilities for customization and automation using session files, scripting and a powerful expression language.
To run ANSYS CFX in batch mode you can utilize/modify the default cfx.pbs script and execute it via the qsub command.
```bash
#!/bin/bash
#PBS -l nodes=2:ppn=16
#PBS -q qprod
#PBS -N $USER-CFX-Project
#PBS -A XX-YY-ZZ
#! Mail to user when job terminates or aborts
#PBS -m ae
#! Change the working directory (default is home directory)
#cd <working directory> (working directory must exist)
WORK_DIR="/scratch/$USER/work"
cd $WORK_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
echo This job runs on the following processors:
echo `cat $PBS_NODEFILE`
module load ansys
#### Set number of processors per host listing
#### (set to 1 as $PBS_NODEFILE lists each node twice if :ppn=2)
procs_per_host=1
#### Create host list
hl=""
for host in `cat $PBS_NODEFILE`
do
  if [ "$hl" = "" ]
  then hl="$host:$procs_per_host"
  else hl="${hl}:$host:$procs_per_host"
  fi
done
echo Machines: $hl
#-def input.def is the input of the CFX analysis in DEF format
#-P is the name of the preferred license feature (aa_r=ANSYS Academic Research, ane3fl=Multiphysics(commercial))
/ansys_inc/v145/CFX/bin/cfx5solve -def input.def -size 4 -size-ni 4x -part-large -start-method "Platform MPI Distributed Parallel" -par-dist $hl -P aa_r
```
The header of the PBS file (above) is common; its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). SVS FEM recommends requesting resources with the nodes and ppn keywords, which directly set the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the script assumes that resources were allocated in this way.
The working directory has to be created before the PBS job is submitted to the queue. The input file should be located in the working directory, or the full path to the input file has to be specified. The input file has to be a standard CFX .def file, which is passed to the CFX solver via the -def parameter.
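The job is then submitted with qsub; a minimal sketch, assuming the script above is saved as cfx.pbs:
```bash
$ qsub cfx.pbs
```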
The **license** should be selected by the -P parameter (capital **P**). The licensed products are: aa_r (ANSYS **Academic** Research) and ane3fl (ANSYS Multiphysics, **commercial**).
[More about licensing here](licensing.md)
ANSYS Fluent
============
[ANSYS Fluent](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+Fluent)
software contains the broad physical modeling capabilities needed to model flow, turbulence, heat transfer, and reactions for industrial applications ranging from air flow over an aircraft wing to combustion in a furnace, from bubble columns to oil platforms, from blood flow to semiconductor manufacturing, and from clean room design to wastewater treatment plants. Special models that give the software the ability to model in-cylinder combustion, aeroacoustics, turbomachinery, and multiphase systems have served to broaden its reach.
1. Common way to run Fluent via a PBS file
------------------------------------------------------
To run ANSYS Fluent in batch mode you can utilize/modify the default fluent.pbs script and execute it via the qsub command.
```bash
#!/bin/bash
#PBS -S /bin/bash
#PBS -l nodes=2:ppn=16
#PBS -q qprod
#PBS -N $USER-Fluent-Project
#PBS -A XX-YY-ZZ
#! Mail to user when job terminates or aborts
#PBS -m ae
#! Change the working directory (default is home directory)
#cd <working directory> (working directory must exist)
WORK_DIR="/scratch/$USER/work"
cd $WORK_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
echo This job runs on the following processors:
echo `cat $PBS_NODEFILE`
#### Load the ansys module so that we find the fluent command
module load ansys
# Count the number of allocated cores from the PBS node file
NCORES=`wc -l $PBS_NODEFILE |awk '{print $1}'`
/ansys_inc/v145/fluent/bin/fluent 3d -t$NCORES -cnf=$PBS_NODEFILE -g -i fluent.jou
```
The header of the PBS file (above) is common; its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). [SVS FEM](http://www.svsfem.cz) recommends requesting resources with the nodes and ppn keywords, which directly set the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the script assumes that resources were allocated in this way.
The working directory has to be created before the PBS job is submitted to the queue. The input file should be located in the working directory, or the full path to the input file has to be specified. The input file has to be a standard Fluent journal file, which is passed to the Fluent solver via the -i fluent.jou parameter.
A journal file defines the input geometry, the boundary conditions, and the solution process; it may have, for example, the following structure:
```bash
/file/read-case aircraft_2m.cas.gz
/solve/init
init
/solve/iterate
10
/file/write-case-dat aircraft_2m-solution
/exit yes
```
The appropriate dimension of the problem has to be set by the 2d/3d parameter.
2. Fast way to run Fluent from command line
--------------------------------------------------------
```bash
fluent solver_version [FLUENT_options] -i journal_file -pbs
```
This syntax will start the ANSYS FLUENT job under PBS Professional using the qsub command in a batch manner. When resources are available, PBS Professional will start the job and return a job ID, usually in the form of *job_ID.hostname*. This job ID can then be used to query, control, or stop the job using standard PBS Professional commands, such as qstat or qdel. The job will be run out of the current working directory, and all output will be written to the file fluent.o *job_ID*.
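A minimal sketch of such an invocation, assuming a 3D case driven by a journal file named flow.jou (the file name is hypothetical):
```bash
$ fluent 3d -t16 -g -i flow.jou -pbs
```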
3. Running Fluent via user's config file
----------------------------------------
The sample script uses a configuration file called pbs_fluent.conf if no command line arguments are present. This configuration file should be present in the directory from which the jobs are submitted (which is also the directory in which the jobs are executed). The following is an example of what the content of pbs_fluent.conf can be:
```bash
input="example_small.flin"
case="Small-1.65m.cas"
fluent_args="3d -pmyrinet"
outfile="fluent_test.out"
mpp="true"
```
The following is an explanation of the parameters:
- input is the name of the input file.
- case is the name of the .cas file that the input file will utilize.
- fluent_args are extra ANSYS FLUENT arguments. As shown in the previous example, you can specify the interconnect by using the -p interconnect command. The available interconnects include ethernet (the default), myrinet, infiniband, vendor, altix, and crayx. The MPI is selected automatically, based on the specified interconnect.
- outfile is the name of the file to which the standard output will be sent.
- mpp="true" will tell the job script to execute the job across multiple processors.
To run ANSYS Fluent in batch mode with the user's config file, you can utilize/modify the following script and execute it via the qsub command.
```bash
#!/bin/sh
#PBS -l nodes=2:ppn=4
#PBS -q qprod
#PBS -N $USER-Fluent-Project
#PBS -A XX-YY-ZZ
cd $PBS_O_WORKDIR
#We assume that if they didn't specify arguments then they should use the
#config file
if [ "xx${input}${case}${mpp}${fluent_args}zz" = "xxzz" ]; then
  if [ -f pbs_fluent.conf ]; then
    . pbs_fluent.conf
  else
    printf "No command line arguments specified, "
    printf "and no configuration file found. Exiting \n"
    exit 1
  fi
fi
#Augment the ANSYS FLUENT command line arguments
case "$mpp" in
  true)
    #MPI job execution scenario
    num_nodes=`cat $PBS_NODEFILE | sort -u | wc -l`
    cpus=`expr $num_nodes \* $NCPUS`
    #Default arguments for mpp jobs, these should be changed to suit your
    #needs.
    fluent_args="-t${cpus} $fluent_args -cnf=$PBS_NODEFILE"
    ;;
  *)
    #SMP case
    #Default arguments for smp jobs, should be adjusted to suit your
    #needs.
    fluent_args="-t$NCPUS $fluent_args"
    ;;
esac
#Default arguments for all jobs
fluent_args="-ssh -g -i $input $fluent_args"
echo "---------- Going to start a fluent job with the following settings:
Input: $input
Case: $case
Output: $outfile
Fluent arguments: $fluent_args"
#run the solver
/ansys_inc/v145/fluent/bin/fluent $fluent_args > $outfile
```
It runs the jobs out of the directory from which they are submitted (PBS_O_WORKDIR).
4. Running Fluent in parallel
-----------------------------
Fluent can be run in parallel only under the Academic Research license. To do so, the ANSYS Academic Research license must be placed before the ANSYS CFD license in the user preferences. To make this change, run the anslic_admin utility:
```bash
/ansys_inc/shared_les/licensing/lic_admin/anslic_admin
```
The ANSLIC_ADMIN utility will start:
![](Fluent_Licence_1.jpg)
![](Fluent_Licence_2.jpg)
![](Fluent_Licence_3.jpg)
The ANSYS Academic Research license should be moved up to the top of the list.
![](Fluent_Licence_4.jpg)
ANSYS LS-DYNA
=============
**[ANSYS LS-DYNA](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/Explicit+Dynamics/ANSYS+LS-DYNA)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have long been able to take advantage of complex explicit solutions utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization, and comprehensive report generation are all available within a single fully interactive modern graphical user environment.
To run ANSYS LS-DYNA in batch mode you can utilize/modify the default ansysdyna.pbs script and execute it via the qsub command.
```bash
#!/bin/bash
#PBS -l nodes=2:ppn=16
#PBS -q qprod
#PBS -N $USER-DYNA-Project
#PBS -A XX-YY-ZZ
#! Mail to user when job terminates or aborts
#PBS -m ae
#! Change the working directory (default is home directory)
#cd <working directory>
WORK_DIR="/scratch/$USER/work"
cd $WORK_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
echo This job runs on the following processors:
echo `cat $PBS_NODEFILE`
#! Count the number of allocated processors
NPROCS=`wc -l < $PBS_NODEFILE`
echo This job has allocated $NPROCS processors
module load ansys
#### Set number of processors per host listing
#### (set to 1 as $PBS_NODEFILE lists each node twice if :ppn=2)
procs_per_host=1
#### Create host list
hl=""
for host in `cat $PBS_NODEFILE`
do
  if [ "$hl" = "" ]
  then hl="$host:$procs_per_host"
  else hl="${hl}:$host:$procs_per_host"
  fi
done
echo Machines: $hl
/ansys_inc/v145/ansys/bin/ansys145 -dis -lsdynampp i=input.k -machines $hl
```
The header of the PBS file (above) is common; its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). [SVS FEM](http://www.svsfem.cz) recommends requesting resources with the nodes and ppn keywords, which directly set the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the script assumes that resources were allocated in this way.
The working directory has to be created before the PBS job is submitted to the queue. The input file should be located in the working directory, or the full path to the input file has to be specified. The input file has to be a standard LS-DYNA **.k** file, which is passed to the ANSYS solver via the i= parameter.
ANSYS MAPDL
===========
**[ANSYS Multiphysics](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/ANSYS+Multiphysics)**
software offers a comprehensive product solution for both multiphysics and single-physics analysis. The product includes structural, thermal, fluid and both high- and low-frequency electromagnetic analysis. The product also contains solutions for both direct and sequentially coupled physics problems including direct coupled-field elements and the ANSYS multi-field solver.
To run ANSYS MAPDL in batch mode you can utilize/modify the default mapdl.pbs script and execute it via the qsub command.
```bash
#!/bin/bash
#PBS -l nodes=2:ppn=16
#PBS -q qprod
#PBS -N $USER-ANSYS-Project
#PBS -A XX-YY-ZZ
#! Mail to user when job terminates or aborts
#PBS -m ae
#! Change the working directory (default is home directory)
#cd <working directory> (working directory must exist)
WORK_DIR="/scratch/$USER/work"
cd $WORK_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
echo This job runs on the following processors:
echo `cat $PBS_NODEFILE`
module load ansys
#### Set number of processors per host listing
#### (set to 1 as $PBS_NODEFILE lists each node twice if :ppn=2)
procs_per_host=1
#### Create host list
hl=""
for host in `cat $PBS_NODEFILE`
do
  if [ "$hl" = "" ]
  then hl="$host:$procs_per_host"
  else hl="${hl}:$host:$procs_per_host"
  fi
done
echo Machines: $hl
#-i input.dat includes the input of analysis in APDL format
#-o file.out is output file from ansys where all text outputs will be redirected
#-p the name of license feature (aa_r=ANSYS Academic Research, ane3fl=Multiphysics(commercial), aa_r_dy=Academic AUTODYN)
/ansys_inc/v145/ansys/bin/ansys145 -b -dis -p aa_r -i input.dat -o file.out -machines $hl -dir $WORK_DIR
```
The header of the PBS file (above) is common; its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.md). [SVS FEM](http://www.svsfem.cz) recommends requesting resources with the nodes and ppn keywords, which directly set the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the script assumes that resources were allocated in this way.
The working directory has to be created before the PBS job is submitted to the queue. The input file should be located in the working directory, or the full path to the input file has to be specified. The input file has to be a standard APDL file, which is passed to the ANSYS solver via the -i parameter.
The **license** should be selected by the -p parameter. The licensed products are: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics, **commercial**), and aa_r_dy (ANSYS **Academic** AUTODYN).
[More about licensing here](licensing.md)
Overview of ANSYS Products
==========================
**[SVS FEM](http://www.svsfem.cz/)**, the **[ANSYS Channel partner](http://www.ansys.com/)** for the Czech Republic, provided all ANSYS licenses for the Anselm cluster and supports all ANSYS products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA, ...) for IT staff and ANSYS users. If you encounter a problem with ANSYS functionality, please contact [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM).
Anselm provides both commercial and academic license variants. Academic variants are distinguished by the word "**Academic...**" in the license name, or by the two-letter prefix "**aa_**" in the license feature name. The license is selected on the command line or directly in the user's PBS file (see the individual products). [More about licensing here](ansys/licensing.html)
To load the latest version of any ANSYS product (Mechanical, Fluent, CFX, MAPDL,...) load the module:
```bash
$ module load ansys
```
ANSYS supports an interactive regime, but since it is assumed that extremely demanding tasks will be solved, interactive use is not recommended.
If you need to work interactively, we recommend configuring the RSM service on the client machine, which allows forwarding the solution to Anselm directly from the client's Workbench project (see ANSYS RSM service).
LS-DYNA
=======
[LS-DYNA](http://www.lstc.com/) is a multi-purpose, explicit and implicit finite element program used to analyze the nonlinear dynamic response of structures. Its fully automated contact analysis capability, a wide range of constitutive models to simulate a whole range of engineering materials (steels, composites, foams, concrete, etc.), error-checking features and the high scalability have enabled users worldwide to solve successfully many complex problems. Additionally LS-DYNA is extensively used to simulate impacts on structures from drop tests, underwater shock, explosions or high-velocity impacts. Explosive forming, process engineering, accident reconstruction, vehicle dynamics, thermal brake disc analysis or nuclear safety are further areas in the broad range of possible applications. In leading-edge research LS-DYNA is used to investigate the behaviour of materials like composites, ceramics, concrete, or wood. Moreover, it is used in biomechanics, human modelling, molecular structures, casting, forging, or virtual testing.
Anselm currently provides **1 commercial license of LS-DYNA without HPC support**.
To run LS-DYNA in batch mode you can utilize/modify the default lsdyna.pbs script and execute it via the qsub command.
```bash
#!/bin/bash
#PBS -l nodes=1:ppn=16
#PBS -q qprod
#PBS -N $USER-LSDYNA-Project
#PBS -A XX-YY-ZZ
#! Mail to user when job terminates or aborts
#PBS -m ae
#! Change the working directory (default is home directory)
#cd <working directory> (working directory must exist)
WORK_DIR="/scratch/$USER/work"
cd $WORK_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
module load lsdyna
/apps/engineering/lsdyna/lsdyna700s i=input.k
```
The header of the PBS file (above) is common; its description can be found on [this site](../../resource-allocation-and-job-execution/job-submission-and-execution.html). [SVS FEM](http://www.svsfem.cz) recommends requesting resources with the nodes and ppn keywords, which directly set the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the script assumes that resources were allocated in this way.
The working directory has to be created before the PBS job is submitted to the queue. The input file should be located in the working directory, or the full path to the input file has to be specified. The input file has to be a standard LS-DYNA **.k** file, which is passed to the LS-DYNA solver via the i= parameter.
Molpro
======
Molpro is a complete system of ab initio programs for molecular electronic structure calculations.
About Molpro
------------
Molpro is a software package used for accurate ab-initio quantum chemistry calculations. More information can be found at the [official webpage](http://www.molpro.net/).
License
-------
The Molpro software package is available only to users with a valid license. Please contact support to enable access to Molpro if you have a valid license appropriate for running on our cluster (e.g. an academic research group license with parallel execution).
To run Molpro, you need to have a valid license token present in $HOME/.molpro/token. You can download the token from the [Molpro website](https://www.molpro.net/licensee/?portal=licensee).
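For example, the downloaded token file could be put in place as follows (a sketch; the name of the downloaded file may differ):
```bash
$ mkdir -p $HOME/.molpro
$ cp token $HOME/.molpro/token    # "token" is the file downloaded from the Molpro website
$ chmod 600 $HOME/.molpro/token   # keep the license token private
```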
Installed version
-----------------
Currently installed on Anselm is version 2010.1, patch level 45, the parallel version compiled with Intel compilers and Intel MPI.
Compilation parameters are default:
|Parameter|Value|
|---|---|
|max number of atoms|200|
|max number of valence orbitals|300|
|max number of basis functions|4095|
|max number of states per symmetry|20|
|max number of state symmetries|16|
|max number of records|200|
|max number of primitives|maxbfn x [2]|
Running
------
Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details.
>The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend to use MPI parallelization only. This can be achieved by passing option mpiprocs=16:ompthreads=1 to PBS.
You are advised to use the -d option to point to a directory in [SCRATCH filesystem](../../storage.md). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch filesystem.
### Example jobscript
```bash
#PBS -A IT4I-0-0
#PBS -q qprod
#PBS -l select=1:ncpus=16:mpiprocs=16:ompthreads=1
cd $PBS_O_WORKDIR
# load Molpro module
module add molpro
# create a directory in the SCRATCH filesystem
mkdir -p /scratch/$USER/$PBS_JOBID
# copy an example input
cp /apps/chem/molpro/2010.1/molprop_2010_1_Linux_x86_64_i8/examples/caffeine_opt_diis.com .
# run Molpro with default options
molpro -d /scratch/$USER/$PBS_JOBID caffeine_opt_diis.com
# delete scratch directory
rm -rf /scratch/$USER/$PBS_JOBID
```
\ No newline at end of file
NWChem
======
**High-Performance Computational Chemistry**
Introduction
-------------------------
NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters.
[Homepage](http://www.nwchem-sw.org/index.php/Main_Page)
Installed versions
------------------
The following versions are currently installed:
- 6.1.1, not recommended, problems have been observed with this version
- 6.3-rev2-patch1, current release with QMD patch applied. Compiled with Intel compilers, MKL and Intel MPI
- 6.3-rev2-patch1-openmpi, same as above, but compiled with OpenMPI and NWChem provided BLAS instead of MKL. This version is expected to be slower
- 6.3-rev2-patch1-venus, this version contains only libraries for VENUS interface linking. Does not provide standalone NWChem executable
For a current list of installed versions, execute:
```bash
module avail nwchem
```
Running
-------
NWChem is compiled for parallel MPI execution. Normal procedure for MPI jobs applies. Sample jobscript:
```bash
#PBS -A IT4I-0-0
#PBS -q qprod
#PBS -l select=1:ncpus=16
module add nwchem/6.3-rev2-patch1
mpirun -np 16 nwchem h2o.nw
```
Options
--------------------
Please refer to [the documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and set the following directives in the input file:
- MEMORY : controls the amount of memory NWChem will use
- SCRATCH_DIR : set this to a directory in the [SCRATCH filesystem](../../storage.md#scratch) (or run the calculation completely in a scratch directory). For certain calculations, it might be advisable to reduce I/O by forcing "direct" mode, e.g. "scf direct"; see the sketch below.
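A sketch of how these directives might appear at the top of an input file (the path and memory size are illustrative only):
```bash
memory 1000 mb                         # limit the amount of memory NWChem will use
scratch_dir /scratch/username/nwchem   # place temporary files in the SCRATCH filesystem
scf
  direct                               # force "direct" mode to reduce I/O
end
```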
Compilers
=========
## Available compilers, including GNU, INTEL and UPC compilers
Currently there are several compilers for different programming languages available on the Anselm cluster:
- C/C++
- Fortran 77/90/95
- Unified Parallel C
- Java
- nVidia CUDA
The C/C++ and Fortran compilers are divided into two main groups: GNU and Intel.
Intel Compilers
---------------
For information about the usage of Intel Compilers and other Intel products, please read the [Intel Parallel studio](intel-suite.html) page.
GNU C/C++ and Fortran Compilers
-------------------------------
For compatibility reasons, the original (old 4.4.6-4) versions of the GNU compilers are still available as part of the OS. These are accessible in the search path by default.
It is strongly recommended to use the up-to-date version (4.8.1), which comes with the module gcc:
```bash
$ module load gcc
$ gcc -v
$ g++ -v
$ gfortran -v
```
With the module loaded, two environment variables are predefined. One is for maximum optimizations on the Anselm cluster architecture, and the other is for debugging purposes:
```bash
$ echo $OPTFLAGS
-O3 -march=corei7-avx
$ echo $DEBUGFLAGS
-O0 -g
```
For more information about the capabilities of the compilers, please see the man pages.
Unified Parallel C
------------------
UPC is supported by two compiler/runtime implementations:
- GNU - SMP/multi-threading support only
- Berkeley - multi-node support as well as SMP/multi-threading support
### GNU UPC Compiler
To use the GNU UPC compiler and run the compiled binaries, use the module gupc:
```bash
$ module add gupc
$ gupc -v
$ g++ -v
```
A simple program to test the compiler:
```bash
$ cat count.upc
/* count.upc - a simple UPC example */
#include <upc.h>
#include <stdio.h>
int main() {
  if (MYTHREAD == 0) {
    printf("Welcome to GNU UPC!!!\n");
  }
  upc_barrier;
  printf(" - Hello from thread %i\n", MYTHREAD);
  return 0;
}
```
To compile the example, use:
```bash
$ gupc -o count.upc.x count.upc
```
To run the example with 5 threads, issue:
```bash
$ ./count.upc.x -fupc-threads-5
```
For more information see the man pages.
### Berkeley UPC Compiler
To use the Berkeley UPC compiler and runtime environment to run the binaries, use the module bupc:
```bash
$ module add bupc
$ upcc -version
```
By default, the "smp" UPC network is used. This is a very quick and easy way to test/debug, but it is limited to one node only.
For production runs, it is recommended to use the native InfiniBand implementation of the UPC network, "ibv". For testing/debugging on multiple nodes, the "mpi" UPC network is recommended. Please note that **the selection of the network is done at compile time** and not at runtime (as one might expect)!
Example UPC code:
```bash
$ cat hello.upc
/* hello.upc - a simple UPC example */
#include <upc.h>
#include <stdio.h>
int main() {
  if (MYTHREAD == 0) {
    printf("Welcome to Berkeley UPC!!!\n");
  }
  upc_barrier;
  printf(" - Hello from thread %i\n", MYTHREAD);
  return 0;
}
```
To compile the example with the "ibv" UPC network, use:
```bash
$ upcc -network=ibv -o hello.upc.x hello.upc
```
To run the example with 5 threads, issue:
```bash
$ upcrun -n 5 ./hello.upc.x
```
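Since the network is fixed at compile time, a separate binary is needed per network. For instance, to build the same example for the "mpi" network for multi-node testing/debugging (the output name is arbitrary):
```bash
$ upcc -network=mpi -o hello-mpi.upc.x hello.upc
```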
To run the example on two compute nodes using all 32 cores, with 32 threads, issue
```bash
$ qsub -I -q qprod -A PROJECT_ID -l select=2:ncpus=16
$ module add bupc
$ upcrun -n 32 ./hello.upc.x
```
For more information see the man pages.
Java
----
For information how to use Java (runtime and/or compiler), please read the [Java page](java.html).
nVidia CUDA
-----------
For information how to work with nVidia CUDA, please read the [nVidia CUDA page](nvidia-cuda.html).
COMSOL Multiphysics®
====================
Introduction
-------------------------
[COMSOL](http://www.comsol.com) is a powerful environment for modelling and solving various engineering and scientific problems based on partial differential equations. COMSOL is designed to solve coupled or multiphysics phenomena. For many
standard engineering problems COMSOL provides add-on products such as electrical, mechanical, fluid flow, and chemical
applications.
- [Structural Mechanics Module](http://www.comsol.com/structural-mechanics-module),
- [Heat Transfer Module](http://www.comsol.com/heat-transfer-module),
- [CFD Module](http://www.comsol.com/cfd-module),
- [Acoustics Module](http://www.comsol.com/acoustics-module),
- and [many others](http://www.comsol.com/products)
COMSOL also provides interface support for equation-based modelling of partial differential equations.
Execution
----------------------
On the Anselm cluster, COMSOL is available in the latest stable version. There are two variants of the release:
- **Non-commercial** or so-called **EDU variant**, which can be used for research and educational purposes.
- **Commercial** or so-called **COM variant**, which can also be used for commercial activities. The **COM variant** has only a subset of the features available in the **EDU variant**. More about licensing will be posted here soon.
To load the default version of COMSOL, load the module:
```bash
$ module load comsol
```
By default, the **EDU variant** will be loaded. If you need another version or variant, load that particular version. To obtain the list of available versions, use:
```bash
$ module avail comsol
```
To prepare COMSOL jobs in interactive mode, it is recommended to use COMSOL on the compute nodes via the PBS Pro scheduler. To run the COMSOL Desktop GUI on Windows, it is recommended to use [Virtual Network Computing (VNC)](https://docs.it4i.cz/anselm-cluster-documentation/software/comsol/resolveuid/11e53ad0d2fd4c5187537f4baeedff33).
```bash
$ xhost +
$ qsub -I -X -A PROJECT_ID -q qprod -l select=1:ncpus=16
$ module load comsol
$ comsol
```
To run COMSOL in batch mode, without the COMSOL Desktop GUI environment, you can utilize/modify the default (comsol.pbs) job script and execute it via the qsub command.
```bash
#!/bin/bash
#PBS -l select=3:ncpus=16
#PBS -q qprod
#PBS -N JOB_NAME
#PBS -A PROJECT_ID
cd /scratch/$USER/ || exit
echo Time is `date`
echo Directory is `pwd`
echo '**PBS_NODEFILE***START*******'
cat $PBS_NODEFILE
echo '**PBS_NODEFILE***END*********'
module load comsol
# module load comsol/43b-COM
ntask=$(wc -l < $PBS_NODEFILE)
comsol -nn ${ntask} batch -configuration /tmp -mpiarg -rmk -mpiarg pbs -tmpdir /scratch/$USER/ -inputfile name_input_f.mph -outputfile name_output_f.mph -batchlog name_log_f.log
```
The working directory has to be created before submitting the (comsol.pbs) job script to the queue. The input file (name_input_f.mph) has to be in the working directory, or the full path to the input file has to be specified. The appropriate path to the job's temp directory has to be set by the -tmpdir command option.
LiveLink™ for MATLAB®
-------------------------
COMSOL is a software package for the numerical solution of partial differential equations. LiveLink for MATLAB allows connecting to the COMSOL® API (Application Programming Interface), combining it with the benefits of the MATLAB programming language and computing environment.
LiveLink for MATLAB is available in both the **EDU** and **COM variant** of the COMSOL release. On Anselm, 1 commercial (**COM**) license and 5 educational (**EDU**) licenses of LiveLink for MATLAB are available (please see the [ISV Licenses](../isv_licenses.html)).
The following example shows how to start a COMSOL model from MATLAB via LiveLink in interactive mode.
```bash
$ xhost +
$ qsub -I -X -A PROJECT_ID -q qexp -l select=1:ncpus=16
$ module load matlab
$ module load comsol
$ comsol server matlab
```
The first time you launch LiveLink for MATLAB (MATLAB client - COMSOL server connection), a login and password are requested; afterwards, this information is not requested again.
To run LiveLink for MATLAB in batch mode with the (comsol_matlab.pbs) job script, you can utilize/modify the following script and execute it via the qsub command.
```bash
#!/bin/bash
#PBS -l select=3:ncpus=16
#PBS -q qprod
#PBS -N JOB_NAME
#PBS -A PROJECT_ID
cd /scratch/$USER || exit
echo Time is `date`
echo Directory is `pwd`
echo '**PBS_NODEFILE***START*******'
cat $PBS_NODEFILE
echo '**PBS_NODEFILE***END*********'
module load matlab
module load comsol/43b-EDU
ntask=$(wc -l < $PBS_NODEFILE)
comsol -nn ${ntask} server -configuration /tmp -mpiarg -rmk -mpiarg pbs -tmpdir /scratch/$USER &
cd /apps/engineering/comsol/comsol43b/mli
matlab -nodesktop -nosplash -r "mphstart; addpath /scratch/$USER; test_job"
```
This example runs LiveLink for MATLAB with the following configuration: 3 nodes, 16 cores per node. The working directory has to be created before submitting the (comsol_matlab.pbs) job script to the queue. The input file (test_job.m) has to be in the working directory, or the full path to the input file has to be specified. The MATLAB command option (-r "mphstart") creates a connection with a COMSOL server using the default port number.
Allinea Forge (DDT,MAP)
=======================
Allinea Forge consists of two tools: the debugger DDT and the profiler MAP.
Allinea DDT is a commercial debugger primarily for debugging parallel MPI or OpenMP programs. It also supports GPU (CUDA) and Intel Xeon Phi accelerators. DDT provides all the standard debugging features (stack trace, breakpoints, watches, view variables, threads, etc.) for every thread running as part of your program, or for every process, even if these processes are distributed across a cluster using an MPI implementation.
Allinea MAP is a profiler for C/C++/Fortran HPC codes. It is designed for profiling parallel code which uses pthreads, OpenMP, or MPI.
License and Limitations for Anselm Users
----------------------------------------
On Anselm, users can debug OpenMP or MPI code running with up to 64 parallel processes. When debugging GPU- or Xeon Phi-accelerated codes, the limit is 8 accelerators. These limitations mean that:
- 1 user can debug up to 64 processes, or
- 32 users can debug 2 processes each, etc.
In case of debugging on accelerators:
- 1 user can debug on up to 8 accelerators, or
- 8 users can debug on a single accelerator each.
Compiling Code to run with DDT
------------------------------
### Modules
Load all necessary modules to compile the code. For example:
```bash
$ module load intel
$ module load impi ... or ... module load openmpi/X.X.X-icc
```
Load the Allinea DDT module:
```bash
$ module load Forge
```
Compile the code:
```bash
$ mpicc -g -O0 -o test_debug test.c
$ mpif90 -g -O0 -o test_debug test.f
```
### Compiler flags
Before debugging, you need to compile your code with these flags:
>- **-g** : Generates extra debugging information usable by GDB. -g3 includes even more debugging information. This option is available for GNU and Intel C/C++ and Fortran compilers.
>- **-O0** : Suppresses all optimizations.
Starting a Job with DDT
-----------------------
Be sure to log in with X window forwarding enabled. This could mean using the -X option with ssh:
```bash
$ ssh -X username@anselm.it4i.cz
```
Another option is to access the login node using VNC. Please see the detailed information on how to [use the graphical user interface on Anselm](https://docs.it4i.cz/anselm-cluster-documentation/software/debuggers/resolveuid/11e53ad0d2fd4c5187537f4baeedff33).
From the login node, an interactive session **with X windows forwarding** (the -X option) can be started with the following command:
```bash
$ qsub -I -X -A NONE-0-0 -q qexp -lselect=1:ncpus=16:mpiprocs=16,walltime=01:00:00
```
Then launch the debugger with the ddt command followed by the name of the executable to debug:
```bash
$ ddt test_debug
```
The submission window that appears has a prefilled path to the executable to debug. You can select the number of MPI processes and/or OpenMP threads on which to run, and press Run. Command line arguments for the program can be entered in the "Arguments" box.
![](ddt1.png)
To start debugging directly, without the submission window, you can specify the debugging and execution parameters on the command line. For example, the number of MPI processes is set by the "-np 4" option, and the dialog is skipped with the "-start" option. To see the list of "ddt" command line parameters, run "ddt --help".
```bash
ddt -start -np 4 ./hello_debug_impi
```
Documentation
-------------
Users can find the original User Guide after loading the Forge module:
```bash
$DDTPATH/doc/userguide.pdf
```
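For example, it can be viewed with any PDF reader available on the login node (evince is assumed here):
```bash
$ evince $DDTPATH/doc/userguide.pdf
```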
Allinea Performance Reports
===========================
## Quick application profiling
Introduction
------------
Allinea Performance Reports characterize the performance of HPC application runs. After executing your application through the tool, a synthetic HTML report is generated automatically, containing information about several metrics along with clear behavior statements and hints to help you improve the efficiency of your runs.
Allinea Performance Reports is most useful for profiling MPI programs.
Our license is limited to 64 MPI processes.
Modules
-------
Allinea Performance Reports version 6.0 is available:
```bash
$ module load PerformanceReports/6.0
```
The module sets up the environment variables required for using Allinea Performance Reports. This particular command loads version 6.0 of the tool.
Usage
-----
>Use the perf-report wrapper on your (MPI) program.
Instead of [running your MPI program the usual way](../mpi-1.md), use the perf-report wrapper:
```bash
$ perf-report mpirun ./mympiprog.x
```
The MPI program will run as usual. The perf-report wrapper creates two additional files, in *.txt and *.html format, containing the performance report. Note that [demanding MPI codes should be run within the queue system](../../resource-allocation-and-job-execution/job-submission-and-execution.md).
Example
-------
In this example, we will profile the mympiprog.x MPI program using Allinea Performance Reports. Assume that the code is compiled with Intel compilers and linked against the Intel MPI library:
First, we allocate some nodes via the express queue:
```bash
$ qsub -q qexp -l select=2:ncpus=16:mpiprocs=16:ompthreads=1 -I
qsub: waiting for job 262197.dm2 to start
qsub: job 262197.dm2 ready
```
Then we load the modules and run the program the usual way:
```bash
$ module load intel impi allinea-perf-report/4.2
$ mpirun ./mympiprog.x
```
Now let's profile the code:
```bash
$ perf-report mpirun ./mympiprog.x
```
Performance report files [mympiprog_32p*.txt](mympiprog_32p_2014-10-15_16-56.txt) and [mympiprog_32p*.html](mympiprog_32p_2014-10-15_16-56.html) were created. We can see that the code is very efficient on MPI and is CPU bound.
CUBE
====
Introduction
------------
CUBE is a graphical performance report explorer for displaying data from Score-P and Scalasca (and other compatible tools). The name comes from the fact that it displays performance data in three dimensions:
- **performance metric**, where a number of metrics are available, such as communication time or cache misses,
- **call path**, which contains the call tree of your program,
- **system resource**, which contains the system's nodes, processes, and threads, depending on the parallel programming model.
Each dimension is organized in a tree; for example, the time performance metric is divided into Execution time and Overhead time, the call path dimension is organized by files and routines in your source code, etc.
![](Snmekobrazovky20141204v12.56.36.png)
*Figure 1. Screenshot of CUBE displaying data from Scalasca.*
Each node in the tree is colored by severity (the color scheme is displayed at the bottom of the window, ranging from the least severe, blue, to the most severe, red). For example, in Figure 1 we can see that most of the point-to-point MPI communication happens in the routine exch_qbc, colored red.
Installed versions
------------------
Currently, two versions of CUBE 4.2.3 are available as [modules](../../environment-and-modules.html):
- cube/4.2.3-gcc, compiled with GCC
- cube/4.2.3-icc, compiled with Intel compiler
Usage
-----
CUBE is a graphical application. Refer to [Graphical User Interface documentation](https://docs.it4i.cz/anselm-cluster-documentation/software/debuggers/resolveuid/11e53ad0d2fd4c5187537f4baeedff33) for a list of methods to launch graphical applications on Anselm.
>Analyzing large data sets can consume a large amount of CPU and RAM. Do not perform large analyses on the login nodes.
After loading the appropriate module, simply launch the cube command, or alternatively use the scalasca -examine command to launch the GUI. Note that for Scalasca datasets, if you do not analyze the data with scalasca -examine before opening them with CUBE, not all performance data will be available.
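A minimal sketch of opening a Score-P/Scalasca result (the experiment directory name is hypothetical):
```bash
$ module load cube/4.2.3-gcc
$ cube scorep_myrun/profile.cubex      # open the profile in the CUBE GUI
$ scalasca -examine scorep_myrun       # alternatively, post-process and open via Scalasca
```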