ANSYS CFX
=========
[ANSYS
CFX](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+CFX)
software is a high-performance, general-purpose fluid dynamics program
that has been applied to solve wide-ranging fluid flow problems for over
20 years. At the heart of ANSYS CFX is its advanced solver technology,
the key to achieving reliable and accurate solutions quickly and
robustly. The modern, highly parallelized solver is the foundation for
an abundant choice of physical models to capture virtually any type of
phenomena related to fluid flow. The solver and its many physical models
are wrapped in a modern, intuitive, and flexible GUI and user
environment, with extensive capabilities for customization and
automation using session files, scripting and a powerful expression
language.
To run ANSYS CFX in batch mode, you can utilize/modify the default
cfx.pbs script and execute it via the qsub command.
```
#!/bin/bash
#PBS -l nodes=2:ppn=16
#PBS -q qprod
#PBS -N $USER-CFX-Project
#PBS -A XX-YY-ZZ
#! Mail to user when the job terminates or aborts
#PBS -m ae
#!change the working directory (default is home directory)
#cd <working directory> (the working directory must exist)
WORK_DIR="/scratch/$USER/work"
cd $WORK_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
echo This job runs on the following processors:
echo `cat $PBS_NODEFILE`
module load ansys
#### Set number of processors per host listing
#### (set to 1 as $PBS_NODEFILE lists each node twice if :ppn=2)
procs_per_host=1
#### Create host list
hl=""
for host in `cat $PBS_NODEFILE`
do
if [ "$hl" = "" ]
then hl="$host:$procs_per_host"
else hl="${hl}:$host:$procs_per_host"
fi
done
echo Machines: $hl
#-def input.def includes the input of the CFX analysis in DEF format
#-P the name of the preferred license feature (aa_r=ANSYS Academic Research, ane3fl=Multiphysics(commercial))
/ansys_inc/v145/CFX/bin/cfx5solve -def input.def -size 4 -size-ni 4x -part-large -start-method "Platform MPI Distributed Parallel" -par-dist $hl -P aa_r
```
The header of the PBS file (above) is common; its description can be found
on [this
site](../../resource-allocation-and-job-execution/job-submission-and-execution.html).
SVS FEM recommends allocating resources via the keywords nodes and ppn.
These keywords directly specify the number of nodes (computers) and
cores per node (ppn) to be used by the job. The rest of the script also
assumes this structure of allocated resources.
The working directory has to be created before submitting the PBS job to
the queue. The input file should be in the working directory, or the full
path to the input file has to be specified. The input file has to be a
common CFX .def file, which is passed to the CFX solver via the parameter
-def. A minimal submission workflow is sketched below.
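For example, assuming the definition file is named input.def and the
jobscript above is saved as cfx.pbs (both names are illustrative), the
workflow might look like this:

```
# create the scratch working directory expected by the jobscript
mkdir -p /scratch/$USER/work
# copy the CFX definition file into it
cp input.def /scratch/$USER/work/
# submit the job to the queue
qsub cfx.pbs
```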
**License** should be selected by the parameter -P (capital **P**).
Licensed products are the following: aa_r (ANSYS **Academic** Research)
and ane3fl (ANSYS Multiphysics, **commercial**).
[More about licensing here](licensing.html)
ANSYS Fluent
============
[ANSYS
Fluent](http://www.ansys.com/Products/Simulation+Technology/Fluid+Dynamics/Fluid+Dynamics+Products/ANSYS+Fluent)
software contains the broad physical modeling capabilities needed to
model flow, turbulence, heat transfer, and reactions for industrial
applications ranging from air flow over an aircraft wing to combustion
in a furnace, from bubble columns to oil platforms, from blood flow to
semiconductor manufacturing, and from clean room design to wastewater
treatment plants. Special models that give the software the ability to
model in-cylinder combustion, aeroacoustics, turbomachinery, and
multiphase systems have served to broaden its reach.
1. Common way to run Fluent over a PBS file
------------------------------------------------------
To run ANSYS Fluent in batch mode, you can utilize/modify the
default fluent.pbs script and execute it via the qsub command.
```
#!/bin/bash
#PBS -S /bin/bash
#PBS -l nodes=2:ppn=16
#PBS -q qprod
#PBS -N $USER-Fluent-Project
#PBS -A XX-YY-ZZ
#! Mail to user when the job terminates or aborts
#PBS -m ae
#!change the working directory (default is home directory)
#cd <working directory> (the working directory must exist)
WORK_DIR="/scratch/$USER/work"
cd $WORK_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
echo This job runs on the following processors:
echo `cat $PBS_NODEFILE`
#### Load the ansys module so that we find the fluent command
module load ansys
# Count the allocated cores and pass the node file to Fluent
NCORES=`wc -l $PBS_NODEFILE |awk '{print $1}'`
/ansys_inc/v145/fluent/bin/fluent 3d -t$NCORES -cnf=$PBS_NODEFILE -g -i fluent.jou
```
The header of the PBS file (above) is common; its description can be found
on [this
site](../../resource-allocation-and-job-execution/job-submission-and-execution.html).
[SVS FEM](http://www.svsfem.cz) recommends allocating resources via the
keywords nodes and ppn. These keywords directly specify the number of
nodes (computers) and cores per node (ppn) to be used by the job. The
rest of the script also assumes this structure of allocated resources.
The working directory has to be created before submitting the PBS job to
the queue. The input file should be in the working directory, or the full
path to the input file has to be specified. The input file has to be a
common Fluent journal file, which is passed to the Fluent solver via the
parameter -i fluent.jou.
A journal file defining the input geometry, the boundary conditions, and
the solution process has, for example, the following structure:
/file/read-case aircraft_2m.cas.gz
/solve/init
init
/solve/iterate
10
/file/write-case-dat aircraft_2m-solution
/exit yes
The appropriate dimension of the problem has to be set by the
parameter (2d/3d), as illustrated below.
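For instance, a 2D analysis would swap the dimension argument in the
solver line of the jobscript above (the journal file name here is
hypothetical):

```
/ansys_inc/v145/fluent/bin/fluent 2d -t$NCORES -cnf=$PBS_NODEFILE -g -i fluent2d.jou
```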
2. Fast way to run Fluent from the command line
--------------------------------------------------------
```
fluent solver_version [FLUENT_options] -i journal_file -pbs
```
This syntax will start the ANSYS FLUENT job under PBS Professional using
the qsub command in a batch manner. When resources are available, PBS
Professional will start the job and return a job ID, usually in the form
of *job_ID.hostname*. This job ID can then be used to query, control, or
stop the job using standard PBS Professional commands, such as qstat or
qdel. The job will be run out of the current working directory, and all
output will be written to the file fluent.o*job_ID*.
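As a sketch, assuming the ansys module is loaded and a journal file
fluent.jou is present in the current directory, submitting and monitoring
such a job could look like this (the job ID shown is a placeholder):

```
module load ansys
fluent 3d -g -i fluent.jou -pbs
# PBS returns a job ID such as 12345.hostname; use it to monitor or cancel the job
qstat 12345.hostname
qdel 12345.hostname
```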
3. Running Fluent via user's config file
----------------------------------------
The sample script uses a configuration file called
pbs_fluent.conf if no command line arguments
are present. This configuration file should be present in the directory
from which the jobs are submitted (which is also the directory in which
the jobs are executed). The following is an example of what the content
of pbs_fluent.conf can be:
```
input="example_small.flin"
case="Small-1.65m.cas"
fluent_args="3d -pmyrinet"
outfile="fluent_test.out"
mpp="true"
```
The following is an explanation of the parameters:
- input is the name of the input file.
- case is the name of the .cas file that the input file will utilize.
- fluent_args are extra ANSYS FLUENT arguments. As shown in the previous
  example, you can specify the interconnect by using the -p interconnect
  command. The available interconnects include ethernet (the default),
  myrinet, infiniband, vendor, altix, and crayx. The MPI is selected
  automatically, based on the specified interconnect.
- outfile is the name of the file to which the standard output will be
  sent.
- mpp="true" will tell the job script to execute the job across multiple
  processors.
To run ANSYS Fluent in batch mode with a user's config file, you can
utilize/modify the following script and execute it via the qsub
command.
```
#!/bin/sh
#PBS -l nodes=2:ppn=4
#PBS -q qprod
#PBS -N $USER-Fluent-Project
#PBS -A XX-YY-ZZ

cd $PBS_O_WORKDIR

#We assume that if they didn't specify arguments then they should use the
#config file
if [ "xx${input}${case}${mpp}${fluent_args}zz" = "xxzz" ]; then
  if [ -f pbs_fluent.conf ]; then
    . pbs_fluent.conf
  else
    printf "No command line arguments specified, "
    printf "and no configuration file found. Exiting.\n"
    exit 1
  fi
fi

#Augment the ANSYS FLUENT command line arguments
case "$mpp" in
  true)
    #MPI job execution scenario
    num_nodes=`cat $PBS_NODEFILE | sort -u | wc -l`
    cpus=`expr $num_nodes \* $NCPUS`
    #Default arguments for mpp jobs, these should be changed to suit your
    #needs.
    fluent_args="-t${cpus} $fluent_args -cnf=$PBS_NODEFILE"
    ;;
  *)
    #SMP case
    #Default arguments for smp jobs, should be adjusted to suit your
    #needs.
    fluent_args="-t$NCPUS $fluent_args"
    ;;
esac

#Default arguments for all jobs
fluent_args="-ssh -g -i $input $fluent_args"

echo "---------- Going to start a fluent job with the following settings:
Input: $input
Case: $case
Output: $outfile
Fluent arguments: $fluent_args"

#run the solver
/ansys_inc/v145/fluent/bin/fluent $fluent_args > $outfile
```
The script runs the job out of the directory from which it was
submitted (PBS_O_WORKDIR); see the submission sketch below.
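A minimal submission sketch (the jobscript filename fluent-config.pbs is
hypothetical); since the script reads pbs_fluent.conf from
PBS_O_WORKDIR, submit it from the directory that holds the configuration
and input files:

```
cd /scratch/$USER/work    # directory containing pbs_fluent.conf and the input files
qsub fluent-config.pbs
```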
4. Running Fluent in parallel
-----------------------------
Fluent can be run in parallel only under the Academic Research license.
To do so, the ANSYS Academic Research license must be placed before the
ANSYS CFD license in user preferences. To make this change, run the
anslic_admin utility:
```
/ansys_inc/shared_les/licensing/lic_admin/anslic_admin
```
The ANSLIC_ADMIN utility will start:
![](Fluent_Licence_1.jpg)
![](Fluent_Licence_2.jpg)
![](Fluent_Licence_3.jpg)
The ANSYS Academic Research license should be moved up to the top of the
list.
![](Fluent_Licence_4.jpg)
ANSYS LS-DYNA
=============
[ANSYS
LS-DYNA](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/Explicit+Dynamics/ANSYS+LS-DYNA)
software provides convenient and easy-to-use access to the
technology-rich, time-tested explicit solver without the need to contend
with the complex input requirements of this sophisticated program.
Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in
numerous industries to resolve highly intricate design issues. ANSYS
Mechanical users have long been able to take advantage of complex
explicit solutions utilizing the traditional ANSYS Parametric Design
Language (APDL) environment. These explicit capabilities are available
to ANSYS Workbench users as well.
The Workbench platform is a powerful, comprehensive, easy-to-use
environment for engineering simulation. CAD import from all sources,
geometry cleanup, automatic meshing, solution, parametric optimization,
result visualization and comprehensive report generation are all
available within a single fully interactive modern graphical user
environment.
To run ANSYS LS-DYNA in batch mode, you can utilize/modify the
default ansysdyna.pbs script and execute it via the qsub command.
```
#!/bin/bash
#PBS -l nodes=2:ppn=16
#PBS -q qprod
#PBS -N $USER-DYNA-Project
#PBS -A XX-YY-ZZ
#! Mail to user when the job terminates or aborts
#PBS -m ae
#!change the working directory (default is home directory)
#cd <working directory>
WORK_DIR="/scratch/$USER/work"
cd $WORK_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
echo This job runs on the following processors:
echo `cat $PBS_NODEFILE`
#! Counts the number of processors
NPROCS=`wc -l < $PBS_NODEFILE`
echo This job has allocated $NPROCS processors
module load ansys
#### Set number of processors per host listing
#### (set to 1 as $PBS_NODEFILE lists each node twice if :ppn=2)
procs_per_host=1
#### Create host list
hl=""
for host in `cat $PBS_NODEFILE`
do
if [ "$hl" = "" ]
then hl="$host:$procs_per_host"
else hl="${hl}:$host:$procs_per_host"
fi
done
echo Machines: $hl
/ansys_inc/v145/ansys/bin/ansys145 -dis -lsdynampp i=input.k -machines $hl
```
The header of the PBS file (above) is common; its description can be found
on [this
site](../../resource-allocation-and-job-execution/job-submission-and-execution.html).
[SVS FEM](http://www.svsfem.cz) recommends allocating resources via the
keywords nodes and ppn. These keywords directly specify the number of
nodes (computers) and cores per node (ppn) to be used by the job. The
rest of the script also assumes this structure of allocated resources.
The working directory has to be created before submitting the PBS job to
the queue. The input file should be in the working directory, or the full
path to the input file has to be specified. The input file has to be a
common LS-DYNA **.k** file, which is passed to the ANSYS solver via the
parameter i=.
ANSYS MAPDL
===========
**[ANSYS
Multiphysics](http://www.ansys.com/Products/Simulation+Technology/Structural+Mechanics/ANSYS+Multiphysics)**
software offers a comprehensive product solution for both multiphysics
and single-physics analysis. The product includes structural, thermal,
fluid and both high- and low-frequency electromagnetic analysis. The
product also contains solutions for both direct and sequentially coupled
physics problems including direct coupled-field elements and the ANSYS
multi-field solver.
To run ANSYS MAPDL in batch mode, you can utilize/modify the
default mapdl.pbs script and execute it via the qsub command.
```
#!/bin/bash
#PBS -l nodes=2:ppn=16
#PBS -q qprod
#PBS -N $USER-ANSYS-Project
#PBS -A XX-YY-ZZ
#! Mail to user when the job terminates or aborts
#PBS -m ae
#!change the working directory (default is home directory)
#cd <working directory> (the working directory must exist)
WORK_DIR="/scratch/$USER/work"
cd $WORK_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
echo This job runs on the following processors:
echo `cat $PBS_NODEFILE`
module load ansys
#### Set number of processors per host listing
#### (set to 1 as $PBS_NODEFILE lists each node twice if :ppn=2)
procs_per_host=1
#### Create host list
hl=""
for host in `cat $PBS_NODEFILE`
do
if [ "$hl" = "" ]
then hl="$host:$procs_per_host"
else hl="${hl}:$host:$procs_per_host"
fi
done
echo Machines: $hl
#-i input.dat includes the input of analysis in APDL format
#-o file.out is output file from ansys where all text outputs will be redirected
#-p the name of license feature (aa_r=ANSYS Academic Research, ane3fl=Multiphysics(commercial), aa_r_dy=Academic AUTODYN)
/ansys_inc/v145/ansys/bin/ansys145 -b -dis -p aa_r -i input.dat -o file.out -machines $hl -dir $WORK_DIR
```
The header of the PBS file (above) is common; its description can be found
on [this
site](../../resource-allocation-and-job-execution/job-submission-and-execution.html).
[SVS FEM](http://www.svsfem.cz) recommends allocating resources via the
keywords nodes and ppn. These keywords directly specify the number of
nodes (computers) and cores per node (ppn) to be used by the job. The
rest of the script also assumes this structure of allocated resources.
The working directory has to be created before submitting the PBS job to
the queue. The input file should be in the working directory, or the full
path to the input file has to be specified. The input file has to be a
common APDL file, which is passed to the ANSYS solver via the parameter
-i.
**License** should be selected by the parameter -p. Licensed products are
the following: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS
Multiphysics, **commercial**), and aa_r_dy (ANSYS **Academic** AUTODYN).
[ id="result_box" More
about licensing
here](licensing.html)
LS-DYNA
=======
[LS-DYNA](http://www.lstc.com/) is a multi-purpose,
explicit and implicit finite element program used to analyze the
nonlinear dynamic response of structures. Its fully automated contact
analysis capability, a wide range of constitutive models to simulate a
whole range of engineering materials (steels, composites, foams,
concrete, etc.), error-checking features and the high scalability have
enabled users worldwide to successfully solve many complex
problems. Additionally, LS-DYNA is extensively used to simulate
impacts on structures from drop tests, underwater shock, explosions or
high-velocity impacts. Explosive forming, process engineering, accident
reconstruction, vehicle dynamics, thermal brake disc analysis or nuclear
safety are further areas in the broad range of possible applications. In
leading-edge research LS-DYNA is used to investigate the behaviour of
materials like composites, ceramics, concrete, or wood. Moreover, it is
used in biomechanics, human modelling, molecular structures, casting,
forging, or virtual testing.
Anselm currently provides **1 commercial license of LS-DYNA without HPC
support**.
To run LS-DYNA in batch mode, you can utilize/modify the
default lsdyna.pbs script and execute it via the qsub
command.
```
#!/bin/bash
#PBS -l nodes=1:ppn=16
#PBS -q qprod
#PBS -N $USER-LSDYNA-Project
#PBS -A XX-YY-ZZ
#! Mail to user when the job terminates or aborts
#PBS -m ae
#!change the working directory (default is home directory)
#cd <working directory> (the working directory must exist)
WORK_DIR="/scratch/$USER/work"
cd $WORK_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
module load lsdyna
/apps/engineering/lsdyna/lsdyna700s i=input.k
```
The header of the PBS file (above) is common; its description can be found
on [this
site](../../resource-allocation-and-job-execution/job-submission-and-execution.html).
[SVS FEM](http://www.svsfem.cz) recommends allocating resources via the
keywords nodes and ppn. These keywords directly specify the number of
nodes (computers) and cores per node (ppn) to be used by the job. The
rest of the script also assumes this structure of allocated resources.
The working directory has to be created before submitting the PBS job to
the queue. The input file should be in the working directory, or the full
path to the input file has to be specified. The input file has to be a
common LS-DYNA **.k** file, which is passed to the LS-DYNA solver via the
parameter i=.
Molpro
======
Molpro is a complete system of ab initio programs for molecular
electronic structure calculations.
About Molpro
------------
Molpro is a software package used for accurate ab-initio quantum
chemistry calculations. More information can be found at the [official
webpage](http://www.molpro.net/).
License
-------
Molpro software package is available only to users that have a valid
license. Please contact support to enable access to Molpro if you have a
valid license appropriate for running on our cluster (eg. >academic
research group licence, parallel execution).
To run Molpro, you need to have a valid license token present in
$HOME/.molpro/token. You can download the token from the [Molpro
website](https://www.molpro.net/licensee/?portal=licensee).
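For example (a sketch, assuming the downloaded token file is named
token):

$ mkdir -p $HOME/.molpro
$ cp token $HOME/.molpro/token
$ chmod 600 $HOME/.molpro/token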
Installed version
-----------------
Currently installed on Anselm is version 2010.1, patch level 45, the
parallel version compiled with Intel compilers and Intel MPI.
Compilation parameters are the defaults:

Parameter                            Value
-----------------------------------  -------------
max number of atoms                  200
max number of valence orbitals       300
max number of basis functions        4095
max number of states per symmetry    20
max number of state symmetries       16
max number of records                200
max number of primitives             maxbfn x [2]
Running
-------
Molpro is compiled for parallel execution using MPI and OpenMP. By
default, Molpro reads the number of allocated nodes from PBS and
launches a data server on one node. On the remaining allocated nodes,
compute processes are launched, one process per node, each with 16
threads. You can modify this behavior by using the -n, -t and helper-server
options. Please refer to the [Molpro
documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html)
for more details.
The OpenMP parallelization in Molpro is limited and has been observed to
produce limited scaling. We therefore recommend using MPI
parallelization only. This can be achieved by passing the option
mpiprocs=16:ompthreads=1 to PBS, as shown below.
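For example, an interactive allocation with these options might be
requested as follows (PROJECT_ID is a placeholder):

$ qsub -I -q qprod -A PROJECT_ID -l select=1:ncpus=16:mpiprocs=16:ompthreads=1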
You are advised to use the -d option to point to a directory in the
[SCRATCH filesystem](../../storage.html). Molpro can produce a large
amount of temporary data during its run, and it is important that these
are placed in the fast scratch filesystem.
### Example jobscript
#PBS -A IT4I-0-0
#PBS -q qprod
#PBS -l select=1:ncpus=16:mpiprocs=16:ompthreads=1
cd $PBS_O_WORKDIR
# load Molpro module
module add molpro
# create a directory in the SCRATCH filesystem
mkdir -p /scratch/$USER/$PBS_JOBID
# copy an example input
cp /apps/chem/molpro/2010.1/molprop_2010_1_Linux_x86_64_i8/examples/caffeine_opt_diis.com .
# run Molpro with default options
molpro -d /scratch/$USER/$PBS_JOBID caffeine_opt_diis.com
# delete scratch directory
rm -rf /scratch/$USER/$PBS_JOBID
NWChem
======
High-Performance Computational Chemistry
Introduction
-------------------------
NWChem aims to provide its users with computational chemistry
tools that are scalable both in their ability to treat large scientific
computational chemistry problems efficiently, and in their use of
available parallel computing resources from high-performance parallel
supercomputers to conventional workstation clusters.
[Homepage](http://www.nwchem-sw.org/index.php/Main_Page)
Installed versions
------------------
The following versions are currently installed:
- 6.1.1, not recommended, problems have been observed with this
version
- 6.3-rev2-patch1, current release with QMD patch applied. Compiled
with Intel compilers, MKL and Intel MPI
- 6.3-rev2-patch1-openmpi, same as above, but compiled with OpenMPI
and NWChem provided BLAS instead of MKL. This version is expected to
be slower
- 6.3-rev2-patch1-venus, this version contains only libraries for
VENUS interface linking. Does not provide standalone NWChem
executable
For a current list of installed versions, execute:
module avail nwchem
Running
-------
NWChem is compiled for parallel MPI execution. The normal procedure for
MPI jobs applies. Sample jobscript:
#PBS -A IT4I-0-0
#PBS -q qprod
#PBS -l select=1:ncpus=16
module add nwchem/6.3-rev2-patch1
mpirun -np 16 nwchem h2o.nw
Options
--------------------
Please refer to [the
documentation](http://www.nwchem-sw.org/index.php/Release62:Top-level) and
in the input file set the following directives:
- MEMORY: controls the amount of memory NWChem will use
- SCRATCH_DIR: set this to a directory in the [SCRATCH
  filesystem](../../storage.html#scratch) (or run the
  calculation completely in a scratch directory). For certain
  calculations, it might be advisable to reduce I/O by forcing
  "direct" mode, e.g. "scf direct" (see the sketch below)
Compilers
=========
Available compilers, including GNU, INTEL and UPC compilers
Currently there are several compilers for different programming
languages available on the Anselm cluster:
- C/C++
- Fortran 77/90/95
- Unified Parallel C
- Java
- nVidia CUDA
The C/C++ and Fortran compilers are divided into two main groups: GNU
and Intel.
Intel Compilers
---------------
For information about the usage of Intel Compilers and other Intel
products, please read the [Intel Parallel
Studio](intel-suite.html) page.
GNU C/C++ and Fortran Compilers
-------------------------------
For compatibility reasons, the original (old 4.4.6-4) versions of the
GNU compilers are still available as part of the OS. These are
accessible in the search path by default.
It is strongly recommended to use the up-to-date version (4.8.1), which
comes with the module gcc:
$ module load gcc
$ gcc -v
$ g++ -v
$ gfortran -v
With the module loaded, two environment variables are predefined: one
for maximum optimization on the Anselm cluster architecture, and the
other for debugging purposes:
$ echo $OPTFLAGS
-O3 -march=corei7-avx
$ echo $DEBUGFLAGS
-O0 -g
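For instance, the predefined flags can be passed directly to the
compilers (the source file names are hypothetical):

$ gcc $OPTFLAGS -o myprog myprog.c
$ gfortran $DEBUGFLAGS -o myprog_dbg myprog.f90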
For more information about the capabilities of the compilers, please
see the man pages.
Unified Parallel C
------------------
UPC is supported by two compiler/runtime implementations:
- GNU - SMP/multi-threading support only
- Berkeley - multi-node support as well as SMP/multi-threading support
### GNU UPC Compiler
To use the GNU UPC compiler and run the compiled binaries, use the module
gupc
$ module add gupc
$ gupc -v
$ g++ -v
Simple program to test the compiler
$ cat count.upc
/* count.upc - a simple UPC example */
#include <upc.h>
#include <stdio.h>
int main() {
 if (MYTHREAD == 0) {
   printf("Welcome to GNU UPC!!!\n");
 }
 upc_barrier;
 printf(" - Hello from thread %i\n", MYTHREAD);
 return 0;
}
To compile the example, use
$ gupc -o count.upc.x count.upc
To run the example with 5 threads, issue
$ ./count.upc.x -fupc-threads-5
For more information, see the man pages.
### Berkeley UPC Compiler
To use the Berkeley UPC compiler and runtime environment to run the
binaries, use the module bupc
$ module add bupc
$ upcc -version
By default, the "smp" UPC network is used. This is a very quick and easy
way for testing/debugging, but it is limited to one node only.
For production runs, it is recommended to use the native InfiniBand
implementation of the UPC network, "ibv". For testing/debugging using
multiple nodes, the "mpi" UPC network is recommended. Please note that
the selection of the network is done at **compile time** and not at
runtime (as one might expect)!
Example UPC code:
$ cat hello.upc
/* hello.upc - a simple UPC example */
#include <upc.h>
#include <stdio.h>
int main() {
if (MYTHREAD == 0) {
printf("Welcome to Berkeley UPC!!!n");
}
upc_barrier;
printf(" - Hello from thread %in", MYTHREAD);
return 0;
}
To compile the example with the "ibv" UPC network, use
$ upcc -network=ibv -o hello.upc.x hello.upc
To run the example with 5 threads, issue
$ upcrun -n 5 ./hello.upc.x
To run the example on two compute nodes using all 32 cores, with 32
threads, issue
$ qsub -I -q qprod -A PROJECT_ID -l select=2:ncpus=16
$ module add bupc
$ upcrun -n 32 ./hello.upc.x
For more information, see the man pages.
Java
----
For information on how to use Java (runtime and/or compiler), please
read the [Java page](java.html).
nVidia CUDA
-----------
For information on how to work with nVidia CUDA, please read the [nVidia
CUDA page](nvidia-cuda.html).
Debuggers and profilers summary
===============================
Introduction
------------
We provide state-of-the-art programs and tools to develop, profile, and
debug HPC codes at IT4Innovations.
On these pages, we provide an overview of the profiling and debugging
tools available on Anselm at IT4I.
Intel debugger
--------------
The Intel debugger version 13.0 is available via the module intel. The
debugger works for applications compiled with the C and C++ compilers
and the ifort Fortran 77/90/95 compiler. The debugger provides a Java
GUI environment. Use [X
display](https://docs.it4i.cz/anselm-cluster-documentation/software/debuggers/resolveuid/11e53ad0d2fd4c5187537f4baeedff33)
for running the GUI.
$ module load intel
$ idb
Read more at the [Intel
Debugger](intel-suite/intel-debugger.html) page.
Allinea Forge (DDT/MAP)
-----------------------
Allinea DDT is a commercial debugger primarily for debugging parallel
MPI or OpenMP programs. It also has support for GPU (CUDA) and Intel
Xeon Phi accelerators. DDT provides all the standard debugging features
(stack trace, breakpoints, watches, view variables, threads etc.) for
every thread running as part of your program, or for every process -
even if these processes are distributed across a cluster using an MPI
implementation.
$ module load Forge
$ forge
Read more at the [Allinea
DDT](debuggers/allinea-ddt.html) page.
Allinea Performance Reports
---------------------------
Allinea Performance Reports characterize the performance of HPC
application runs. After executing your application through the tool, a
synthetic HTML report is generated automatically, containing information
about several metrics along with clear behavior statements and hints to
help you improve the efficiency of your runs. Our license is limited to
64 MPI processes.
$ module load PerformanceReports/6.0
$ perf-report mpirun -n 64 ./my_application argument01 argument02
Read more at the [Allinea Performance
Reports](debuggers/allinea-performance-reports.html)
page.
RogueWave TotalView
-------------------
TotalView is a source- and machine-level debugger for multi-process,
multi-threaded programs. Its wide range of tools provides ways to
analyze, organize, and test programs, making it easy to isolate and
identify problems in individual threads and processes in programs of
great complexity.
$ module load totalview
$ totalview
Read more at the [Totalview](debuggers/total-view.html)
page.
Vampir trace analyzer
---------------------
Vampir is a GUI trace analyzer for traces in OTF format.
$ module load Vampir/8.5.0
$ vampir
Read more at
the [Vampir](../../salomon/software/debuggers/vampir.html) page.