# PETSc
PETSc is a suite of building blocks for the scalable solution of scientific and engineering applications modeled by partial differential equations. It supports MPI, shared memory, and GPU through CUDA or OpenCL, as well as hybrid MPI-shared memory or MPI-GPU parallelism.
## Introduction
PETSc (Portable, Extensible Toolkit for Scientific Computation) is a suite of building blocks (data structures and routines) for the scalable solution of scientific and engineering applications modelled by partial differential equations. It allows thinking in terms of high-level objects (matrices) instead of low-level objects (raw arrays). It is written in C, but can also be called from Fortran, C++, Python, and Java code. It supports MPI, shared memory, and GPUs through CUDA or OpenCL, as well as hybrid MPI-shared memory or MPI-GPU parallelism.
## Resources
* [project webpage](http://www.mcs.anl.gov/petsc/)
* [documentation](http://www.mcs.anl.gov/petsc/documentation/)
* [PETSc Users Manual (PDF)](http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf)
* [index of all manual pages](http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html)
* PRACE Video Tutorial [part1](http://www.youtube.com/watch?v=asVaFg1NDqY), [part2](http://www.youtube.com/watch?v=ubp_cSibb9I), [part3](http://www.youtube.com/watch?v=vJAAAQv-aaw), [part4](http://www.youtube.com/watch?v=BKVlqWNh8jY), [part5](http://www.youtube.com/watch?v=iXkbLEBFjlM)
## Modules
You can start using PETSc on Anselm by loading the PETSc module. Module names obey this pattern:
```console
# ml petsc/version-compiler-mpi-blas-variant, e.g.:
$ ml petsc/3.4.4-icc-impi-mkl-opt
```
where `variant` is replaced by one of `{dbg, opt, threads-dbg, threads-opt}`. The `opt` variant is compiled without debugging information (no `-g` option) and with aggressive compiler optimizations (`-O3 -xAVX`). This variant is suitable for performance measurements and production runs. In all other cases use the debug (`dbg`) variant, because it contains debugging information, performs validations and self-checks, and provides a clear stack trace and message in case of an error. The other two variants `threads-dbg` and `threads-opt` are `dbg` and `opt`, respectively, built with [OpenMP and pthreads threading support](https://www.mcs.anl.gov/petsc/miscellaneous/petscthreads.html).
## External Libraries
PETSc needs at least MPI, BLAS and LAPACK. These dependencies are currently satisfied with Intel MPI and Intel MKL in Anselm `petsc` modules.
PETSc can be linked with a plethora of [external numerical libraries](http://www.mcs.anl.gov/petsc/miscellaneous/external.html), extending PETSc functionality, e.g. direct linear system solvers, preconditioners, or partitioners. See below for a list of libraries currently included in the Anselm `petsc` modules.
All these libraries can also be used on their own, without PETSc. Their static or shared libraries are available in
`$PETSC_DIR/$PETSC_ARCH/lib` and their header files in `$PETSC_DIR/$PETSC_ARCH/include`. `PETSC_DIR` and `PETSC_ARCH` are environment variables pointing to a specific PETSc instance, based on the PETSc module loaded.
* dense linear algebra
* [Elemental](http://libelemental.org/)
* sparse linear system solvers
* [Intel MKL Pardiso](https://software.intel.com/en-us/node/470282)
* [MUMPS](http://mumps.enseeiht.fr/)
* [PaStiX](http://pastix.gforge.inria.fr/)
* [SuiteSparse](http://faculty.cse.tamu.edu/davis/suitesparse.html)
* [SuperLU](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu)
* [SuperLU_Dist](http://crd.lbl.gov/~xiaoye/SuperLU/#superlu_dist)
* input/output
* [ExodusII](http://sourceforge.net/projects/exodusii/)
* [HDF5](http://www.hdfgroup.org/HDF5/)
* [NetCDF](http://www.unidata.ucar.edu/software/netcdf/)
* partitioning
* [Chaco](http://www.cs.sandia.gov/CRF/chac.html)
* [METIS](http://glaros.dtc.umn.edu/gkhome/metis/metis/overview)
* [ParMETIS](http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview)
* [PT-Scotch](http://www.labri.fr/perso/pelegrin/scotch/)
* preconditioners & multigrid
* [Hypre](http://www.nersc.gov/users/software/programming-libraries/math-libraries/petsc/)
* [SPAI - Sparse Approximate Inverse](https://bitbucket.org/petsc/pkg-spai)
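To compile and link your own code against the loaded PETSc instance, the `PETSC_DIR` and `PETSC_ARCH` variables can be used directly on the command line. A minimal sketch (the MPI wrapper `mpicc` and the source file `ex1.c` are placeholders; adjust them to your compiler and code):
```console
$ ml petsc/3.4.4-icc-impi-mkl-opt
$ mpicc ex1.c -o ex1 \
    -I$PETSC_DIR/include -I$PETSC_DIR/$PETSC_ARCH/include \
    -L$PETSC_DIR/$PETSC_ARCH/lib -lpetsc
```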
# Trilinos
Packages for large scale scientific and engineering problems. Provides MPI and hybrid parallelization.
## Introduction
Trilinos is a collection of software packages for the numerical solution of large scale scientific and engineering problems. It is based on C++ and features modern object-oriented design. Both serial as well as parallel computations based on MPI and hybrid parallelization are supported within Trilinos packages.
## Installed Packages
The current Trilinos installation on ANSELM contains (among others) the following main packages:
* **Epetra** - core linear algebra package containing classes for manipulating serial and distributed vectors, matrices, and graphs. Dense linear solvers are supported via an interface to BLAS and LAPACK (Intel MKL on ANSELM). Its extension **EpetraExt** contains e.g. methods for matrix-matrix multiplication.
* **Tpetra** - next-generation linear algebra package. Supports 64-bit indexing and arbitrary data type using C++ templates.
* **Belos** - library of various iterative solvers (CG, block CG, GMRES, block GMRES etc.).
* **Amesos** - interface to direct sparse solvers.
* **Anasazi** - framework for large-scale eigenvalue algorithms.
* **IFPACK** - distributed algebraic preconditioner (includes e.g. incomplete LU factorization)
* **Teuchos** - common tools packages. This package contains classes for memory management, output, performance monitoring, BLAS and LAPACK wrappers etc.
For the full list of Trilinos packages, descriptions of their capabilities, and user manuals, see [http://trilinos.sandia.gov](http://trilinos.sandia.gov).
## Installed Version
Currently, Trilinos version 11.2.3 compiled with the Intel compiler is installed on ANSELM.
## Compiling Against Trilinos
First, load the appropriate module:
```console
$ ml trilinos
```
For CMake-aware projects, Trilinos provides the `FIND_PACKAGE(Trilinos)` capability, which makes it easy to build against Trilinos, including linking against the correct list of libraries.
For compiling using simple Makefiles, Trilinos provides the Makefile.export system, which allows users to include important Trilinos variables directly in their Makefiles. This is done simply by inserting the following line into the Makefile:
```makefile
include Makefile.export.Trilinos
```
or
```makefile
include Makefile.export.<package>
```
if you are interested only in a specific Trilinos package. This gives you access to variables such as `Trilinos_CXX_COMPILER`, `Trilinos_INCLUDE_DIRS`, `Trilinos_LIBRARY_DIRS`, etc.
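For illustration, a minimal Makefile built on the export system might look as follows. This is only a sketch: the include path (`TRILINOS_DIR`) and the target/source names are placeholders, and the exact set of exported variables should be verified against the `Makefile.export.Trilinos` file of the installed version.
```makefile
# Hypothetical minimal Makefile using the Trilinos Makefile.export system.
# TRILINOS_DIR is a placeholder -- point it to the installation of the loaded module.
include $(TRILINOS_DIR)/include/Makefile.export.Trilinos

# Compile and link my_app.cpp with the compiler, flags, and libraries
# exported by Trilinos (the recipe line must start with a tab).
my_app: my_app.cpp
	$(Trilinos_CXX_COMPILER) $(Trilinos_CXX_COMPILER_FLAGS) $< -o $@ \
	    $(Trilinos_INCLUDE_DIRS) $(Trilinos_LIBRARY_DIRS) $(Trilinos_LIBRARIES)
```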
# ANSYS CFX
[ANSYS CFX](http://www.ansys.com/products/fluids/ansys-cfx) software is a high-performance, general purpose fluid dynamics program that has been applied to solve wide-ranging fluid flow problems for over 20 years. At the heart of ANSYS CFX is its advanced solver technology, the key to achieving reliable and accurate solutions quickly and robustly. The modern, highly parallelized solver is the foundation for an abundant choice of physical models to capture virtually any type of phenomena related to fluid flow. The solver and its many physical models are wrapped in a modern, intuitive, and flexible GUI and user environment, with extensive capabilities for customization and automation using session files, scripting and a powerful expression language.
To run ANSYS CFX in batch mode you can utilize/modify the default cfx.pbs script and execute it via the qsub command.
```bash
#!/bin/bash
#PBS -l nodes=2:ppn=16
#PBS -q qprod
#PBS -N $USER-CFX-Project
#PBS -A XX-YY-ZZ
#! Mail to user when job terminate or abort
#PBS -m ae
#!change the working directory (default is home directory)
#cd <working directory> (working directory must exists)
WORK_DIR="/scratch/$USER/work"
cd $WORK_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
echo This job runs on the following processors:
echo `cat $PBS_NODEFILE`
ml ansys
#### Set number of processors per host listing
#### (set to 1 as $PBS_NODEFILE lists each node twice if :ppn=2)
procs_per_host=1
#### Create host list
hl=""
for host in `cat $PBS_NODEFILE`
do
if ["$hl" = "" ]
then hl="$host:$procs_per_host"
else hl="${hl}:$host:$procs_per_host"
fi
done
echo Machines: $hl
#-def input.def includes the input of the CFX analysis in DEF format
#-P the name of the preferred license feature (aa_r=ANSYS Academic Research, ane3fl=Multiphysics(commercial))
/ansys_inc/v145/CFX/bin/cfx5solve -def input.def -size 4 -size-ni 4x -part-large -start-method "Platform MPI Distributed Parallel" -par-dist $hl -P aa_r
```
The header of the PBS file (above) is common; its description can be found on [this site](/anselm/job-submission-and-execution/). SVS FEM recommends requesting resources via the nodes and ppn keywords, which directly specify the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the script also assumes this structure of allocated resources.
The working directory has to be created before the PBS job is submitted to the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file is a standard CFX definition (.def) file, which is passed to the CFX solver via the -def parameter.
The **license** is selected via the -P parameter (capital **P**). The licensed products are: aa_r (ANSYS **Academic** Research) and ane3fl (ANSYS Multiphysics, **commercial**).
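For example, once the script above is saved as cfx.pbs in the working directory, it can be submitted and monitored with the standard PBS commands:
```console
$ qsub cfx.pbs
$ qstat -u $USER
```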
# ANSYS Fluent
[ANSYS Fluent](http://www.ansys.com/products/fluids/ansys-fluent)
software contains the broad physical modeling capabilities needed to model flow, turbulence, heat transfer, and reactions for industrial applications ranging from air flow over an aircraft wing to combustion in a furnace, from bubble columns to oil platforms, from blood flow to semiconductor manufacturing, and from clean room design to wastewater treatment plants. Special models that give the software the ability to model in-cylinder combustion, aeroacoustics, turbomachinery, and multiphase systems have served to broaden its reach.
## Common Way to Run Fluent Over PBS File
To run ANSYS Fluent in batch mode you can utilize/modify the default fluent.pbs script and execute it via the qsub command.
```bash
#!/bin/bash
#PBS -S /bin/bash
#PBS -l nodes=2:ppn=16
#PBS -q qprod
#PBS -N $USER-Fluent-Project
#PBS -A XX-YY-ZZ
#! Mail to user when job terminate or abort
#PBS -m ae
#!change the working directory (default is home directory)
#cd <working directory> (working directory must exists)
WORK_DIR="/scratch/$USER/work"
cd $WORK_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
echo This job runs on the following processors:
echo `cat $PBS_NODEFILE`
#### Load the ansys module so that we find the fluent command
ml ansys
#### Count the number of allocated cores
NCORES=`wc -l $PBS_NODEFILE | awk '{print $1}'`
/ansys_inc/v145/fluent/bin/fluent 3d -t$NCORES -cnf=$PBS_NODEFILE -g -i fluent.jou
```
The header of the PBS file (above) is common; its description can be found on [this site](/salomon/resources-allocation-policy/). [SVS FEM](http://www.svsfem.cz) recommends requesting resources via the nodes and ppn keywords, which directly specify the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the script also assumes this structure of allocated resources.
The working directory has to be created before the PBS job is submitted to the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file is a standard Fluent journal file, which is passed to the Fluent solver via the parameter -i fluent.jou.
A journal file defining the input geometry, the boundary conditions, and the solution process may have, for example, the following structure:
```console
/file/read-case aircraft_2m.cas.gz
/solve/init
init
/solve/iterate
10
/file/write-case-dat aircraft_2m-solution
/exit yes
```
The appropriate dimension of the problem has to be set by the solver version parameter (2d/3d).
## Fast Way to Run Fluent From Command Line
```console
fluent solver_version [FLUENT_options] -i journal_file -pbs
```
This syntax will start the ANSYS FLUENT job under PBS Professional using the qsub command in a batch manner. When resources are available, PBS Professional will start the job and return a job ID, usually in the form of _job_ID.hostname_. This job ID can then be used to query, control, or stop the job using standard PBS Professional commands, such as qstat or qdel. The job will be run out of the current working directory, and all output will be written to the file fluent.o _job_ID_.
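For example, a 3D case driven by the journal file fluent.jou and running on 16 cores (the core count here is only illustrative) could be submitted as:
```console
$ fluent 3d -t16 -i fluent.jou -pbs
```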
## Running Fluent via User's Config File
The sample script uses a configuration file called pbs_fluent.conf if no command line arguments are present. This configuration file should be present in the directory from which the jobs are submitted (which is also the directory in which the jobs are executed). The following is an example of what the content of pbs_fluent.conf can be:
```console
input="example_small.flin"
case="Small-1.65m.cas"
fluent_args="3d -pmyrinet"
outfile="fluent_test.out"
mpp="true"
```
The following is an explanation of the parameters:
* `input` is the name of the input file.
* `case` is the name of the .cas file that the input file will utilize.
* `fluent_args` are extra ANSYS FLUENT arguments. As shown in the previous example, you can specify the interconnect by using the -p interconnect command. The available interconnects include ethernet (the default), myrinet, infiniband, vendor, altix, and crayx. The MPI is selected automatically, based on the specified interconnect.
* `outfile` is the name of the file to which the standard output will be sent.
* `mpp="true"` will tell the job script to execute the job across multiple processors.
To run ANSYS Fluent in batch mode with the user's config file, you can utilize/modify the following script and execute it via the qsub command.
```bash
#!/bin/sh
#PBS -l nodes=2:ppn=4
#PBS -q qprod
#PBS -N $USER-Fluent-Project
#PBS -A XX-YY-ZZ
cd $PBS_O_WORKDIR
#We assume that if they didn't specify arguments then they should use the
#config file
if [ "xx${input}${case}${mpp}${fluent_args}zz" = "xxzz" ]; then
if [ -f pbs_fluent.conf ]; then
. pbs_fluent.conf
else
printf "No command line arguments specified, "
printf "and no configuration file found. Exiting n"
fi
fi
#Augment the ANSYS FLUENT command line arguments
case "$mpp" in
true)
#MPI job execution scenario
num_nodes=`cat $PBS_NODEFILE | sort -u | wc -l`
cpus=`expr $num_nodes \* $NCPUS`
#Default arguments for mpp jobs, these should be changed to suit your
#needs.
fluent_args="-t${cpus} $fluent_args -cnf=$PBS_NODEFILE"
;;
*)
#SMP case
#Default arguments for smp jobs, should be adjusted to suit your
#needs.
fluent_args="-t$NCPUS $fluent_args"
;;
esac
#Default arguments for all jobs
fluent_args="-ssh -g -i $input $fluent_args"
echo "---------- Going to start a fluent job with the following settings:
Input: $input
Case: $case
Output: $outfile
Fluent arguments: $fluent_args"
#run the solver
/ansys_inc/v145/fluent/bin/fluent $fluent_args > $outfile
```
It runs the jobs out of the directory from which they are submitted (PBS_O_WORKDIR).
## Running Fluent in Parallel
Fluent can be run in parallel only under the Academic Research license. To do so, the ANSYS Academic Research license must be placed before the ANSYS CFD license in the user preferences. To make this change, run the anslic_admin utility:
```console
/ansys_inc/shared_les/licensing/lic_admin/anslic_admin
```
The ANSLIC_ADMIN utility will start:
![](../../../img/Fluent_Licence_1.jpg)
![](../../../img/Fluent_Licence_2.jpg)
![](../../../img/Fluent_Licence_3.jpg)
The ANSYS Academic Research license should be moved to the top of the list.
![](../../../img/Fluent_Licence_4.jpg)
# ANSYS LS-DYNA
**[ANSYS LS-DYNA](http://www.ansys.com/products/structures/ansys-ls-dyna)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have long been able to take advantage of complex explicit solutions using the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization, and comprehensive report generation are all available within a single, fully interactive, modern graphical user environment.
To run ANSYS LS-DYNA in batch mode you can utilize/modify the default ansysdyna.pbs script and execute it via the qsub command.
```bash
#!/bin/bash
#PBS -l nodes=2:ppn=16
#PBS -q qprod
#PBS -N $USER-DYNA-Project
#PBS -A XX-YY-ZZ
#! Mail to user when job terminate or abort
#PBS -m ae
#!change the working directory (default is home directory)
#cd <working directory>
WORK_DIR="/scratch/$USER/work"
cd $WORK_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
echo This job runs on the following processors:
echo `cat $PBS_NODEFILE`
#! Counts the number of processors
NPROCS=`wc -l < $PBS_NODEFILE`
echo This job has allocated $NPROCS processors
ml ansys
#### Set number of processors per host listing
#### (set to 1 as $PBS_NODEFILE lists each node twice if :ppn=2)
procs_per_host=1
#### Create host list
hl=""
for host in `cat $PBS_NODEFILE`
do
if ["$hl" = "" ]
then hl="$host:$procs_per_host"
else hl="${hl}:$host:$procs_per_host"
fi
done
echo Machines: $hl
/ansys_inc/v145/ansys/bin/ansys145 -dis -lsdynampp i=input.k -machines $hl
```
The header of the PBS file (above) is common; its description can be found on [this site](/anselm/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends requesting resources via the nodes and ppn keywords, which directly specify the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the script also assumes this structure of allocated resources.
The working directory has to be created before the PBS job is submitted to the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file is a standard LS-DYNA **.k** file, which is passed to the ANSYS solver via the i= parameter.
# ANSYS MAPDL
**[ANSYS Multiphysics](http://www.ansys.com/products/multiphysics)**
software offers a comprehensive product solution for both multiphysics and single-physics analysis. The product includes structural, thermal, fluid and both high- and low-frequency electromagnetic analysis. The product also contains solutions for both direct and sequentially coupled physics problems including direct coupled-field elements and the ANSYS multi-field solver.
To run ANSYS MAPDL in batch mode you can utilize/modify the default mapdl.pbs script and execute it via the qsub command.
```bash
#!/bin/bash
#PBS -l nodes=2:ppn=16
#PBS -q qprod
#PBS -N $USER-ANSYS-Project
#PBS -A XX-YY-ZZ
#! Mail to user when job terminate or abort
#PBS -m ae
#!change the working directory (default is home directory)
#cd <working directory> (working directory must exists)
WORK_DIR="/scratch/$USER/work"
cd $WORK_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
echo This job runs on the following processors:
echo `cat $PBS_NODEFILE`
ml ansys
#### Set number of processors per host listing
#### (set to 1 as $PBS_NODEFILE lists each node twice if :ppn=2)
procs_per_host=1
#### Create host list
hl=""
for host in `cat $PBS_NODEFILE`
do
if ["$hl" = "" ]
then hl="$host:$procs_per_host"
else hl="${hl}:$host:$procs_per_host"
fi
done
echo Machines: $hl
#-i input.dat includes the input of analysis in APDL format
#-o file.out is output file from ansys where all text outputs will be redirected
#-p the name of license feature (aa_r=ANSYS Academic Research, ane3fl=Multiphysics(commercial), aa_r_dy=Academic AUTODYN)
/ansys_inc/v145/ansys/bin/ansys145 -b -dis -p aa_r -i input.dat -o file.out -machines $hl -dir $WORK_DIR
```
The header of the PBS file (above) is common; its description can be found on [this site](/anselm/resources-allocation-policy/). [SVS FEM](http://www.svsfem.cz) recommends requesting resources via the nodes and ppn keywords, which directly specify the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the script also assumes this structure of allocated resources.
The working directory has to be created before the PBS job is submitted to the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file is a standard APDL file, which is passed to the ANSYS solver via the -i parameter.
The **license** is selected via the -p parameter. The licensed products are: aa_r (ANSYS **Academic** Research), ane3fl (ANSYS Multiphysics, **commercial**), and aa_r_dy (ANSYS **Academic** AUTODYN).
# Overview of ANSYS Products
**[SVS FEM](http://www.svsfem.cz/)**, the **[ANSYS Channel partner](http://www.ansys.com/)** for the Czech Republic, provided all ANSYS licenses for the ANSELM cluster and provides support for all ANSYS products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA, ...) to IT staff and ANSYS users. If you encounter a problem with ANSYS functionality, contact [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM).
Anselm provides both commercial and academic license variants. Academic variants are distinguished by the word "**Academic...**" in the license name or by the "**aa\_**" prefix in the license feature name. The license is selected on the command line or directly in the user's PBS file (see the individual products).
To load the latest version of any ANSYS product (Mechanical, Fluent, CFX, MAPDL,...) load the module:
```console
$ ml ansys
```
ANSYS supports interactive use, but because the cluster is intended for extremely demanding tasks, interactive use is not recommended.
If you need to work interactively, we recommend configuring the RSM service on the client machine, which allows forwarding the solution to Anselm directly from the client's Workbench project (see the ANSYS RSM service).
# Licensing and Available Versions
## ANSYS License Can Be Used By:
* all persons carrying out the CE IT4Innovations Project (in addition to the primary licensee, which is VSB - Technical University of Ostrava, users are CE IT4Innovations third parties - CE IT4Innovations project partners, particularly the University of Ostrava, the Brno University of Technology - Faculty of Informatics, the Silesian University in Opava, and the Institute of Geonics AS CR)
* all persons who have a valid license
* students of the Technical University
## ANSYS Academic Research
The licence is intended to be used for science and research, publications, and students' projects (academic licence).
## ANSYS COM
The licence is intended to be used for science and research, publications, students' projects, and commercial research, with no commercial use restrictions.
## Available Versions
* 16.1
* 18.0
* 19.1
# LS-DYNA
[LS-DYNA](http://www.lstc.com/) is a multi-purpose, explicit and implicit finite element program used to analyze the nonlinear dynamic response of structures. Its fully automated contact analysis capability, a wide range of constitutive models to simulate a whole range of engineering materials (steels, composites, foams, concrete, etc.), error-checking features and the high scalability have enabled users worldwide to solve successfully many complex problems. Additionally LS-DYNA is extensively used to simulate impacts on structures from drop tests, underwater shock, explosions or high-velocity impacts. Explosive forming, process engineering, accident reconstruction, vehicle dynamics, thermal brake disc analysis or nuclear safety are further areas in the broad range of possible applications. In leading-edge research LS-DYNA is used to investigate the behavior of materials like composites, ceramics, concrete, or wood. Moreover, it is used in biomechanics, human modeling, molecular structures, casting, forging, or virtual testing.
Anselm currently provides **1 commercial license of LS-DYNA without HPC support**.
To run LS-DYNA in batch mode you can utilize/modify the default lsdyna.pbs script and execute it via the qsub command.
```bash
#!/bin/bash
#PBS -l nodes=1:ppn=16
#PBS -q qprod
#PBS -N $USER-LSDYNA-Project
#PBS -A XX-YY-ZZ
#! Mail to user when job terminate or abort
#PBS -m ae
#!change the working directory (default is home directory)
#cd <working directory> (working directory must exists)
WORK_DIR="/scratch/$USER/work"
cd $WORK_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
ml lsdyna
/apps/engineering/lsdyna/lsdyna700s i=input.k
```
The header of the PBS file (above) is common; its description can be found on [this site](/anselm/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends requesting resources via the nodes and ppn keywords, which directly specify the number of nodes (computers) and cores per node (ppn) to be used by the job. The rest of the script also assumes this structure of allocated resources.
The working directory has to be created before the PBS job is submitted to the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file is a standard LS-DYNA **.k** file, which is passed to the LS-DYNA solver via the i= parameter.
# Setting License Preferences
Some ANSYS tools allow you to explicitly specify usage of academic or commercial licenses on the command line (e.g. ansys161 -p aa_r to select the Academic Research license). However, we have observed that not all tools obey this option and choose the commercial license instead.
Thus, you need to configure the preferred license order with ANSLIC_ADMIN. Please follow these steps and move the Academic Research license to the top or bottom of the list accordingly.
Launch the ANSLIC_ADMIN utility in a graphical environment:
```console
$ANSYSLIC_DIR/lic_admin/anslic_admin
```
The ANSLIC_ADMIN utility will start:
![](../../../img/Fluent_Licence_1.jpg)
![](../../../img/Fluent_Licence_2.jpg)
![](../../../img/Fluent_Licence_3.jpg)
The ANSYS Academic Research license should be moved to the top or to the bottom of the list, as appropriate.
![](../../../img/Fluent_Licence_4.jpg)
# Workbench
## Workbench Batch Mode
It is possible to run Workbench scripts in batch mode. You need to configure solvers of individual components to run in parallel mode. Open your project in Workbench. Then, for example, in Mechanical, go to Tools - Solve Process Settings ...
![](../../../img/AMsetPar1.png)
Enable the Distribute Solution checkbox and enter the number of cores (e.g. 48 to run on two Salomon nodes). If you want the job to run on more than one node, you must also provide a so-called MPI appfile. In the Additional Command Line Arguments input field, enter:
```console
-mpifile /path/to/my/job/mpifile.txt
```
Here, /path/to/my/job is the directory where your project is saved. We will create the file mpifile.txt programmatically later in the batch script. For more information, refer to the *ANSYS Mechanical APDL Parallel Processing Guide*.
Now, save the project and close Workbench. We will use this script to launch the job:
```bash
#!/bin/bash
#PBS -l select=2:ncpus=24
#PBS -q qprod
#PBS -N test9_mpi_2
#PBS -A OPEN-0-0
# Mail to user when job terminate or abort
#PBS -m a
# change the working directory
WORK_DIR="$PBS_O_WORKDIR"
cd $WORK_DIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
echo This job runs on the following nodes:
echo `cat $PBS_NODEFILE`
ml ANSYS
#### Set number of processors per host listing
procs_per_host=24
#### Create MPI appfile (one line per allocated node)
echo -n "" > mpifile.txt
for host in `cat $PBS_NODEFILE`
do
echo "-h $host -np $procs_per_host $ANSYS160_DIR/bin/ansysdis161 -dis" >> mpifile.txt
done
#-i input.dat includes the input of analysis in APDL format
#-o file.out is output file from ansys where all text outputs will be redirected
#-p the name of license feature (aa_r=ANSYS Academic Research, ane3fl=Multiphysics(commercial), aa_r_dy=Academic AUTODYN)
# prevent using scsif0 interface on accelerated nodes
export MPI_IC_ORDER="UDAPL"
# spawn remote process using SSH (default is RSH)
export MPI_REMSH="/usr/bin/ssh"
runwb2 -R jou6.wbjn -B -F test9.wbpj
```
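For illustration, with two allocated nodes and `procs_per_host=24`, the generated mpifile.txt contains one line per node of the following form (the hostnames are placeholders and `$ANSYS160_DIR` stands for the expanded ANSYS installation path):
```console
-h node1 -np 24 $ANSYS160_DIR/bin/ansysdis161 -dis
-h node2 -np 24 $ANSYS160_DIR/bin/ansysdis161 -dis
```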
The solver settings are saved in the file solvehandlers.xml, which is not located in the project directory. Verify your solver settings when uploading a project from your local computer.
# Generating Container Recipes & Images
EasyBuild has support for generating container recipes that will use EasyBuild to build and install a specified software stack. In addition, EasyBuild can (optionally) leverage the build tool provided by the container software of choice to create container images.
!!! info
The features documented here have been available since **EasyBuild v3.6.0** but are still experimental, which implies they are subject to change in upcoming versions of EasyBuild.
You will need to enable the `--experimental` configuration option in order to use them.
## Generating Container Recipes
To generate container recipes, use `eb --containerize`, or `eb -C` for short.
The resulting container recipe will, in turn, leverage EasyBuild to build and install the software that corresponds to the easyconfig files that are specified as arguments to the eb command (and all required dependencies, if needed).
!!! note
EasyBuild will refuse to overwrite existing container recipes.
To re-generate an already existing recipe file, use the `--force` command line option.
## Base Container Image
In order to let EasyBuild generate a container recipe, it is required to specify which container image should be used as a base, via the `--container-base` configuration option.
Currently, three types of container base images can be specified:
* `localimage:<path>`: the location of an existing container image file
* `docker:<name>`: the name of a Docker container image (to be downloaded from [Docker Hub](https://hub.docker.com/))
* `shub:<name>`: the name of a Singularity container image (to be downloaded from [Singularity Hub](https://singularity-hub.org/))
## Building Container Images
To instruct EasyBuild to also build a container image from the generated container recipe, use `--container-build-image` (in combination with `-C` or `--containerize`).
EasyBuild will leverage functionality provided by the container software of choice (see Type of container recipe/image to generate (`--container-type`)) to build the container image.
For example, in the case of Singularity, EasyBuild will run `sudo /path/to/singularity build` on the generated container recipe.
The container image will be placed in the location specified by the `--containerpath` configuration option (see Location for generated container recipes & images (`--containerpath`)), next to the generated container recipe that was used to build the image.
## Example Usage
In this example, we will use a pre-built base container image located at `/tmp/example.simg` (see also Base container image (`--container-base`)).
To let EasyBuild generate a container recipe for GCC 6.4.0 + binutils 2.28:
```console
eb GCC-6.4.0-2.28.eb --containerize --container-base localimage:/tmp/example.simg --experimental
```
With other configuration options left to default (see output of `eb --show-config`), this will result in a Singularity container recipe using example.simg as base image, which will be stored in `$HOME/.local/easybuild/containers`:
```console
$ eb GCC-6.4.0-2.28.eb --containerize --container-base localimage:/tmp/example.simg --experimental
== temporary log file in case of crash /tmp/eb-dLZTNF/easybuild-LPLeG0.log
== Singularity definition file created at /home/example/.local/easybuild/containers/Singularity.GCC-6.4.0-2.28
== Temporary log file(s) /tmp/eb-dLZTNF/easybuild-LPLeG0.log* have been removed.
== Temporary directory /tmp/eb-dLZTNF has been removed.
```
## Example of a Generated Container Recipe
Below is an example of a container recipe that was generated by EasyBuild, using the following command:
```console
eb Python-3.6.4-foss-2018a.eb OpenMPI-2.1.2-GCC-6.4.0-2.28.eb -C --container-base shub:shahzebsiddiqui/eb-singularity:centos-7.4.1708 --experimental
```
It uses the *shahzebsiddiqui/eb-singularity:centos-7.4.1708* base container image that is available from Singularity Hub (see [here](https://singularity-hub.org/collections/143)).
```
Bootstrap: shub
From: shahzebsiddiqui/eb-singularity:centos-7.4.1708
%post
yum --skip-broken -y install openssl-devel libssl-dev libopenssl-devel
yum --skip-broken -y install libibverbs-dev libibverbs-devel rdma-core-devel
# upgrade easybuild package automatically to latest version
pip install -U easybuild
# change to 'easybuild' user
su - easybuild
eb Python-3.6.4-foss-2018a.eb OpenMPI-2.1.2-GCC-6.4.0-2.28.eb --robot --installpath=/app/ --prefix=/scratch --tmpdir=/scratch/tmp
# exit from 'easybuild' user
exit
# cleanup
rm -rf /scratch/tmp/* /scratch/build /scratch/sources /scratch/ebfiles_repo
%runscript
eval "$@"
%environment
source /etc/profile
module use /app/modules/all
ml Python/3.6.4-foss-2018a OpenMPI/2.1.2-GCC-6.4.0-2.28
%labels
```
!!! note
We also specify the easyconfig file for the OpenMPI component of foss/2018a here, because it requires specific OS dependencies to be installed (see the 2nd yum ... install line in the generated container recipe).
We intend to let EasyBuild take into account the OS dependencies of the entire software stack automatically in a future update.
The generated container recipe includes `pip install -U easybuild` to ensure that the latest version of EasyBuild is used to build the software in the container image, regardless of whether EasyBuild was already present in the container and which version it was.
In addition, the generated module files will follow the default module naming scheme (EasyBuildMNS). The modules that correspond to the easyconfig files that were specified on the command line will be loaded automatically, see the statements in the %environment section of the generated container recipe.
## Example of Building Container Image
You can instruct EasyBuild to also build the container image by also using `--container-build-image`.
Note that you will need to enter your sudo password (unless you recently executed a sudo command in the same shell session):
```console
$ eb GCC-6.4.0-2.28.eb --containerize --container-base localimage:/tmp/example.simg --container-build-image --experimental
== temporary log file in case of crash /tmp/eb-aYXYC8/easybuild-8uXhvu.log
== Singularity tool found at /usr/bin/singularity
== Singularity version '2.4.6' is 2.4 or higher ... OK
== Singularity definition file created at /home/example/.local/easybuild/containers/Singularity.GCC-6.4.0-2.28
== Running 'sudo /usr/bin/singularity build /home/example/.local/easybuild/containers/GCC-6.4.0-2.28.simg /home/example/.local/easybuild/containers/Singularity.GCC-6.4.0-2.28', you may need to enter your 'sudo' password...
== (streaming) output for command 'sudo /usr/bin/singularity build /home/example/.local/easybuild/containers/GCC-6.4.0-2.28.simg /home/example/.local/easybuild/containers/Singularity.GCC-6.4.0-2.28':
Using container recipe deffile: /home/example/.local/easybuild/containers/Singularity.GCC-6.4.0-2.28
Sanitizing environment
Adding base Singularity environment to container
...
== temporary log file in case of crash /scratch/tmp/eb-WnmCI_/easybuild-GcKyY9.log
== resolving dependencies ...
...
== building and installing GCCcore/6.4.0...
...
== building and installing binutils/2.28-GCCcore-6.4.0...
...
== building and installing GCC/6.4.0-2.28...
...
== COMPLETED: Installation ended successfully
== Results of the build can be found in the log file(s) /app/software/GCC/6.4.0-2.28/easybuild/easybuild-GCC-6.4.0-20180424.084946.log
== Build succeeded for 15 out of 15
...
Building Singularity image...
Singularity container built: /home/example/.local/easybuild/containers/GCC-6.4.0-2.28.simg
Cleaning up...
== Singularity image created at /home/example/.local/easybuild/containers/GCC-6.4.0-2.28.simg
== Temporary log file(s) /tmp/eb-aYXYC8/easybuild-8uXhvu.log* have been removed.
== Temporary directory /tmp/eb-aYXYC8 has been removed.
```
To inspect the container image, you can use `singularity shell` to start a shell session in the container:
```console
$ singularity shell --shell "/bin/bash --norc" $HOME/.local/easybuild/containers/GCC-6.4.0-2.28.simg
Singularity GCC-6.4.0-2.28.simg:~> source /etc/profile
Singularity GCC-6.4.0-2.28.simg:~> module list
Currently Loaded Modules:
1) GCCcore/6.4.0 2) binutils/2.28-GCCcore-6.4.0 3) GCC/6.4.0-2.28
Singularity GCC-6.4.0-2.28.simg:~> which gcc
/app/software/GCCcore/6.4.0/bin/gcc
Singularity GCC-6.4.0-2.28.simg:~> gcc --version
gcc (GCC) 6.4.0
...
```
Or, you can use `singularity exec` to execute a command in the container.
Compare the output of running `which gcc` and `gcc --version` locally:
```console
$ which gcc
/usr/bin/gcc
$ gcc --version
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16)
...
```
and the output when running the same commands in the container:
```console
$ singularity exec GCC-6.4.0-2.28.simg which gcc
/app/software/GCCcore/6.4.0/bin/gcc
$ singularity exec GCC-6.4.0-2.28.simg gcc --version
gcc (GCC) 6.4.0
...
```
## Configuration
### Location for Generated Container Recipes & Images
To control the location where EasyBuild will put generated container recipes & images, use the `--containerpath` configuration setting. Next to providing this as an option to the eb command, you can also define the `$EASYBUILD_CONTAINERPATH` environment variable or specify containerpath in an EasyBuild configuration file.
The default value for this location is `$HOME/.local/easybuild/containers`, unless the `--prefix` configuration setting was provided, in which case it becomes <prefix>/containers (see Overall prefix path (`--prefix`)).
Use `eb --show-full-config | grep containerpath` to determine the currently active setting.
### Container Image Format
The format of the container images that EasyBuild produces via the functionality provided by the container software can be controlled via the `--container-image-format` configuration setting.
For Singularity containers (see Type of container recipe/image to generate (`--container-type`)), three image formats are supported:
* squashfs (default): compressed images using squashfs read-only file system
* ext3: writable image file using ext3 file system
* sandbox: container image in a regular directory
See also the [supported container formats](https://singularity.lbl.gov/user-guide#supported-container-formats) and [building containers](http://singularity.lbl.gov/docs-build-container) sections of the Singularity documentation.
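For example, to build a writable ext3 image rather than the default squashfs image (reusing the base image from the example above):
```console
$ eb GCC-6.4.0-2.28.eb -C --container-base localimage:/tmp/example.simg \
    --container-build-image --container-image-format=ext3 --experimental
```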
### Name for Container Recipe & Image
By default, EasyBuild will use the name of the first easyconfig file (without the .eb suffix) as the name for both the container recipe and the image.
You can specify an alternate name using the `--container-image-name` configuration setting.
The filename of the generated container recipe will be `Singularity.<name>`.
The filename of the container image will be `<name><extension>`, where the value for `<extension>` depends on the image format (see Container image format (`--container-image-format`)):
* `.simg` for squashfs container images
* `.img` for ext3 container images
* empty for sandbox container images (in which case the container image is actually a directory rather than a file)
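For example, to name both the recipe and the image `my-gcc` (a placeholder name), which produces `Singularity.my-gcc` and `my-gcc.simg`:
```console
$ eb GCC-6.4.0-2.28.eb -C --container-base localimage:/tmp/example.simg \
    --container-build-image --container-image-name=my-gcc --experimental
```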
### Temporary Directory for Creating Container Images
The container software that EasyBuild leverages to build container images may use a temporary directory in a location that doesn't have sufficient free space.
You can instruct EasyBuild to pass an alternate location via the `--container-tmpdir` configuration setting.
For Singularity, the default is to use /tmp, [see](http://singularity.lbl.gov/build-environment#temporary-folders). If `--container-tmpdir` is specified, the `$SINGULARITY_TMPDIR` environment variable will be defined accordingly to let Singularity use that location instead.
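For example, to let the image build use a scratch location (the path is illustrative) as its temporary directory:
```console
$ eb GCC-6.4.0-2.28.eb -C --container-base localimage:/tmp/example.simg \
    --container-build-image --container-tmpdir=/scratch/$USER/container-tmp --experimental
```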
### Type of Container Recipe/Image to Generate (`--container-type`)
With the `--container-type` configuration option, you can specify what type of container recipe/image EasyBuild should generate. Possible values are:
* singularity (default): [Singularity](https://singularity.lbl.gov) container recipes & images
* docker: [Docker](https://docs.docker.com/) container recipe & images
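For example, to generate a Docker container recipe instead of a Singularity one (the Docker Hub base image is only illustrative):
```console
$ eb GCC-6.4.0-2.28.eb -C --container-type=docker --container-base docker:ubuntu:16.04 --experimental
```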
For detailed documentation, see [here](http://easybuild.readthedocs.io/en/latest/Containers.html).
# EasyBuild
The objective of this tutorial is to show how EasyBuild can be used to ease, automate, and script the building of software on the IT4Innovations clusters. Two use cases are considered. First, we are going to build software that is supported by EasyBuild. Second, we will see, through a simple example, how to add support for new software to EasyBuild.
The benefit of using EasyBuild for your builds is that it allows automated and reproducible builds of software. Once a build has been made, the build script (via the EasyConfig file) or the installed software (via the module file) can be shared with other users.
## Short Introduction
EasyBuild is a tool for automated and reproducible compilation and installation of software.
All builds and installations are performed at user level, so you don't need admin rights. The software is installed in your home directory (by default in `$HOME/.local/easybuild/software/`) and a module file is generated (by default in `$HOME/.local/easybuild/modules/`) to use the software.
EasyBuild relies on two main concepts:
* Toolchains
* EasyConfig files (our easyconfigs are [here](https://code.it4i.cz/sccs/easyconfigs-it4i))
Detailed documentation is available [here](http://easybuild.readthedocs.io).
## Toolchains
A toolchain corresponds to a compiler and a set of libraries which are commonly used to build software. The two main toolchains frequently used on the IT4Innovations clusters are **foss** and **intel**.
* **foss** is based on the GCC compiler and on open-source libraries (OpenMPI, OpenBLAS, etc.).
* **intel** is based on the Intel compiler and on Intel libraries (Intel MPI, Intel Math Kernel Library, etc.).
Additional details are available [here](https://github.com/hpcugent/easybuild/wiki/Compiler-toolchains).
## EasyConfig File
An EasyConfig file is a simple text file that describes the build process of a piece of software. For most software that uses a standard build procedure (like configure, make, and make install), this file is very simple. Many EasyConfig files are already provided with EasyBuild.
By default, EasyConfig files and generated modules are named using the following convention:
`software-name-software-version-toolchain-name-toolchain-version(-suffix).eb`
For example, `git-2.8.0-foss-2016a.eb` describes Git 2.8.0 built with the foss toolchain, version 2016a.
Additional details are available [here](https://github.com/hpcugent/easybuild-easyconfigs).
## EasyBuild on IT4Innovations Clusters
To use EasyBuild on a compute node, load the EasyBuild module:
```console
$ ml av EasyBuild
-------------------------- /apps/modules/modulefiles/tools ---------------------
EasyBuild/2.8.1 EasyBuild/3.0.0 EasyBuild/3.0.2 EasyBuild/3.1.0 (S,D)
Where:
S: Module is Sticky, requires --force to unload or purge
D: Default Module
$ ml EasyBuild
```
The EasyBuild command is eb. Check the version you have loaded:
```console
$ eb --version
This is EasyBuild 3.1.0 (framework: 3.1.0, easyblocks: 3.1.0) on host login2
```
To get help on the EasyBuild options, use the -h or -H option flags:
```console
$ eb -h
Usage: eb [options] easyconfig [...]
Builds software based on easyconfig (or parse a directory). Provide one or
more easyconfigs or directories, use -H or --help more information.
Options:
-h show short help message and exit
-H OUTPUT_FORMAT show full help message and exit
Debug and logging options (configfile section MAIN):
-d Enable debug log mode (def False)
Basic options:
Basic runtime options for EasyBuild. (configfile section basic)
...
```
## Build Software Using Provided EasyConfig File
### Search for Available Easyconfig
Searching for available easyconfig files can be done using the **--search** (long output) and **-S** (short output) command line options. All easyconfig files available in the robot search path are considered and searching is done case-insensitive.
```console
$ eb -S git
CFGS1=/apps/easybuild/easyconfigs/easybuild/easyconfigs
* $CFGS1/g/git-lfs/git-lfs-1.1.1.eb
* $CFGS1/g/git/git-1.7.12-goalf-1.1.0-no-OFED.eb
* $CFGS1/g/git/git-1.7.12-goolf-1.4.10.eb
* $CFGS1/g/git/git-1.7.12-ictce-4.0.6.eb
* $CFGS1/g/git/git-1.7.12-ictce-5.3.0.eb
* $CFGS1/g/git/git-1.8.2-cgmpolf-1.1.6.eb
* $CFGS1/g/git/git-1.8.2-cgmvolf-1.1.12rc1.eb
* $CFGS1/g/git/git-1.8.2-cgmvolf-1.2.7.eb
* $CFGS1/g/git/git-1.8.2-cgoolf-1.1.7.eb
* $CFGS1/g/git/git-1.8.2-gmvolf-1.7.12.eb
* $CFGS1/g/git/git-1.8.2-gmvolf-1.7.12rc1.eb
* $CFGS1/g/git/git-1.8.2-goolf-1.4.10.eb
* $CFGS1/g/git/git-1.8.3.1-goolf-1.4.10.eb
* $CFGS1/g/git/git-1.8.5.6-GCC-4.9.2.eb
* $CFGS1/g/git/git-2.10.2.eb
* $CFGS1/g/git/git-2.11.0-GNU-4.9.3-2.25.eb
* $CFGS1/g/git/git-2.11.0.eb
* $CFGS1/g/git/git-2.2.2-GCC-4.9.2.eb
* $CFGS1/g/git/git-2.4.1-GCC-4.9.2.eb
* $CFGS1/g/git/git-2.7.3-GNU-4.9.3-2.25.eb
* $CFGS1/g/git/git-2.7.3-foss-2015g.eb
* $CFGS1/g/git/git-2.8.0-GNU-4.9.3-2.25.eb
* $CFGS1/g/git/git-2.8.0-foss-2016a.eb
* $CFGS1/g/git/git-2.8.0-intel-2017.00.eb
* $CFGS1/g/git/git-2.8.0.eb
```
### Get an Overview of Planned Installations
You can do a “dry-run” overview by supplying **-D**/**--dry-run** (typically combined with **--robot**, in the form of **-Dr**):
```console
$ eb git-2.8.0.eb -Dr
eb git-2.8.0.eb -Dr
== temporary log file in case of crash /tmp/eb-JcU1eA/easybuild-emly2F.log
Dry run: printing build status of easyconfigs and dependencies
CFGS=/apps/easybuild/easyconfigs/easybuild/easyconfigs
* [x] $CFGS/c/cURL/cURL-7.37.1.eb (module: cURL/7.37.1)
* [x] $CFGS/e/expat/expat-2.1.0.eb (module: expat/2.1.0)
* [x] $CFGS/g/gettext/gettext-0.19.2.eb (module: gettext/0.19.2)
* [x] $CFGS/p/Perl/Perl-5.20.2-bare.eb (module: Perl/5.20.2-bare)
* [x] $CFGS/m/M4/M4-1.4.17.eb (module: M4/1.4.17)
* [x] $CFGS/a/Autoconf/Autoconf-2.69.eb (module: Autoconf/2.69)
* [ ] $CFGS/g/git/git-2.8.0.eb (module: git/2.8.0)
== Temporary log file(s) /tmp/eb-JcU1eA/easybuild-emly2F.log* have been removed.
== Temporary directory /tmp/eb-JcU1eA has been removed.
```
### Compile and Install Module
If we try to build *git-2.8.0.eb*, nothing will be done as it is already installed on the cluster. To enable dependency resolution, use the **--robot** command line option (or **-r** for short):
```console
$ eb git-2.8.0.eb -r
== temporary log file in case of crash /tmp/eb-PXe3Zo/easybuild-hEckF4.log
== git/2.8.0 is already installed (module found), skipping
== No easyconfigs left to be built.
== Build succeeded for 0 out of 0
== Temporary log file(s) /tmp/eb-PXe3Zo/easybuild-hEckF4.log* have been removed.
== Temporary directory /tmp/eb-PXe3Zo has been removed.
```
Rebuild *git-2.8.0.eb*. Use eb **--rebuild** to rebuild a given easyconfig/module or use eb **--force**/**-f** to force the reinstallation of a given easyconfig/module. The behavior of **--force** is the same as **--rebuild** and **--ignore-osdeps**.
```console
$ eb git-2.8.0.eb -r -f
== temporary log file in case of crash /tmp/eb-JS_Fb5/easybuild-OwJZKn.log
== resolving dependencies ...
== processing EasyBuild easyconfig /apps/easybuild/easyconfigs/easybuild/easyconfigs/g/git/git-2.8.0.eb
== building and installing git/2.8.0...
== fetching files...
== creating build dir, resetting environment...
== unpacking...
== patching...
== preparing...
== configuring...
== building...
== testing...
== installing...
== taking care of extensions...
== postprocessing...
== sanity checking...
== cleaning up...
== creating module...
== permissions...
== packaging...
== COMPLETED: Installation ended successfully
== Results of the build can be found in the log file(s) /apps/all/git/2.8.0/easybuild/easybuild-git-2.8.0-20170221.110059.log
== Build succeeded for 1 out of 1
== Temporary log file(s) /tmp/eb-JS_Fb5/easybuild-OwJZKn.log\* have been removed.
== Temporary directory /tmp/eb-JS_Fb5 has been removed.
```
If we try to build *git-2.11.0.eb*:
```console
$ eb git-2.11.0.eb -r
== temporary log file in case of crash /tmp/eb-JS_Fb5/easybuild-OwXCKn.log
== resolving dependencies ...
== processing EasyBuild easyconfig /apps/easybuild/easyconfigs/easybuild/easyconfigs/g/git/git-2.11.0.eb
== building and installing git/2.11.0...
== fetching files...
== creating build dir, resetting environment...
== unpacking...
== patching...
== preparing...
== configuring...
== building...
== testing...
== installing...
== taking care of extensions...
== postprocessing...
== sanity checking...
== cleaning up...
== creating module...
== permissions...
== packaging...
== COMPLETED: Installation ended successfully
== Results of the build can be found in the log file(s) /apps/all/git/2.11.0/easybuild/easybuild-git-2.11.0-20170221.110059.log
== Build succeeded for 1 out of 1
== Temporary log file(s) /tmp/eb-JS_Fb5/easybuild-OwXCKn.log\* have been removed.
== Temporary directory /tmp/eb-JS_Fb5 has been removed.
```
To build *git-2.11.1* while starting from the easyconfig *git-2.11.0.eb*, change the version using **--try-software-version=2.11.1**:
```console
$ eb git-2.11.0.eb -r --try-software-version=2.11.1
== temporary log file in case of crash /tmp/eb-oisi0q/easybuild-2rNh7I.log
== resolving dependencies ...
== processing EasyBuild easyconfig /tmp/eb-oisi0q/tweaked_easyconfigs/git-2.11.1.eb
== building and installing git/2.11.1...
== fetching files...
== creating build dir, resetting environment...
== unpacking...
== patching...
== preparing...
== configuring...
== building...
== testing...
== installing...
== taking care of extensions...
== postprocessing...
== sanity checking...
== cleaning up...
== creating module...
== permissions...
== packaging...
== COMPLETED: Installation ended successfully
== Results of the build can be found in the log file(s) /apps/all/git/2.11.1/easybuild/easybuild-git-2.11.1-20170221.111005.log
== Build succeeded for 1 out of 1
== Temporary log file(s) /tmp/eb-oisi0q/easybuild-2rNh7I.log\* have been removed.
== Temporary directory /tmp/eb-oisi0q has been removed.
```
Similarly, to build *git-2.11.1-intel-2017a* from the easyconfig *git-2.11.0.eb*, change the toolchain using **--try-toolchain-name=intel --try-toolchain-version=2017a** or **--try-toolchain=intel,2017a**:
```console
$ eb git-2.11.0.eb -r --try-toolchain=intel,2017a
== temporary log file in case of crash /tmp/eb-oisi0q/easybuild-2Trh7I.log
== resolving dependencies ...
== processing EasyBuild easyconfig /tmp/eb-oisi0q/tweaked_easyconfigs/git-2.11.1-intel-2017a.eb
== building and installing git/2.11.1-intel-2017a...
== fetching files...
== creating build dir, resetting environment...
== unpacking...
== patching...
== preparing...
== configuring...
== building...
== testing...
== installing...
== taking care of extensions...
== postprocessing...
== sanity checking...
== cleaning up...
== creating module...
== permissions...
== packaging...
== COMPLETED: Installation ended successfully
== Results of the build can be found in the log file(s) /apps/all/git/2.11.1-intel-2017a/easybuild/easybuild-git-2.11.1-20170221.111005.log
== Build succeeded for 1 out of 1
== Temporary log file(s) /tmp/eb-oisi0q/easybuild-2Trh7I.log\* have been removed.
== Temporary directory /tmp/eb-oisi0q has been removed.
```
### MODULEPATH
To see the newly installed modules, you need to add the path where they were installed to the MODULEPATH. On the cluster you have to use the `module use` command:
```console
$ module use $HOME/.local/easybuild/modules/all/
```
or modify your `.bash_profile`:
```console
$ cat ~/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
module use $HOME/.local/easybuild/modules/all/
PATH=$PATH:$HOME/bin
export PATH
```
## Build Software Using Your Own EasyConfig File
For this example, we create an EasyConfig file to build Git 2.11.1 with the *foss* toolchain. Open your favorite editor and create a file named *git-2.11.1-foss-2017a.eb* with the following content:
```console
$ vim git-2.11.1-foss-2017a.eb
```
```python
easyblock = 'ConfigureMake'
name = 'git'
version = '2.11.1'
homepage = 'http://git-scm.com/'
description = """Git is a free and open source distributed version control system designed
to handle everything from small to very large projects with speed and efficiency."""
toolchain = {'name': 'foss', 'version': '2017a'}
sources = ['v%(version)s.tar.gz']
source_urls = ['https://github.com/git/git/archive']
builddependencies = [('Autoconf', '2.69')]
dependencies = [
('cURL', '7.37.1'),
('expat', '2.1.0'),
('gettext', '0.19.2'),
('Perl', '5.20.2'),
]
preconfigopts = 'make configure && '
# Work around git build system bug. If LIBS contains -lpthread, then configure
# will not append -lpthread to LDFLAGS, but Makefile ignores LIBS.
configopts = "--with-perl=${EBROOTPERL}/bin/perl --enable-pthreads='-lpthread'"
sanity_check_paths = {
'files': ['bin/git'],
'dirs': [],
}
moduleclass = 'tools'
```
This is a simple EasyConfig. Most of the fields are self-descriptive. No build method is explicitly defined, so by default it uses the standard configure/make/make install approach.
Let's build Git with this EasyConfig file:
```console
$ eb ./git-2.11.1-foss-2017a.eb -r
== temporary log file in case of crash /tmp/eb-oisi0q/easybuild-2Tii7I.log
== resolving dependencies ...
== processing EasyBuild easyconfig /home/username/git-2.11.1-foss-2017a.eb
== building and installing git/2.11.1-foss-2017a...
== fetching files...
== creating build dir, resetting environment...
== unpacking...
== patching...
== preparing...
== configuring...
== building...
== testing...
== installing...
== taking care of extensions...
== postprocessing...
== sanity checking...
== cleaning up...
== creating module...
== permissions...
== packaging...
== COMPLETED: Installation ended successfully
== Results of the build can be found in the log file(s) /home/username/.local/easybuild/modules/all/git/2.11.1-foss-2017a/easybuild/easybuild-git-2.11.1-20170221.111005.log
== Build succeeded for 1 out of 1
== Temporary log file(s) /tmp/eb-oisi0q/easybuild-2Tii7I.log\* have been removed.
== Temporary directory /tmp/eb-oisi0q has been removed.
```
We can now check that our version of Git is available via the modules:
```console
$ ml av git
-------------------------------- /apps/modules/modulefiles/tools -------------------------
git/2.8.0-GNU-4.9.3-2.25 git/2.11.0-GNU-4.9.3-2.25 git/2.11.1-GNU-4.9.3-2.25 (D)
-------------------------------- /home/username/.local/easybuild/modules/all -------------
git/2.11.1-foss-2017a
Where:
D: Default Module
```
If you need software that is not listed, request it at support@it4i.cz.
## Submitting Build Jobs (Experimental)
Using the **--job** command line option, you can instruct EasyBuild to submit jobs for the installations that should be performed, rather than performing the installations locally on the system you are on.
```console
$ eb git-2.11.0-GNU-4.9.3-2.25.eb -r --job
== temporary log file in case of crash /tmp/eb-zeLzBb/easybuild-H_Z0fB.log
== resolving dependencies ...
== GC3Pie job overview: 1 submitted (total: 1)
== GC3Pie job overview: 1 running (total: 1)
== GC3Pie job overview: 1 running (total: 1)
== GC3Pie job overview: 1 running (total: 1)
== GC3Pie job overview: 1 running (total: 1)
== GC3Pie job overview: 1 running (total: 1)
== GC3Pie job overview: 1 running (total: 1)
== GC3Pie job overview: 1 running (total: 1)
== GC3Pie job overview: 1 running (total: 1)
== GC3Pie job overview: 1 terminated, 1 ok (total: 1)
== GC3Pie job overview: 1 terminated, 1 ok (total: 1)
== Done processing jobs
== GC3Pie job overview: 1 terminated, 1 ok (total: 1)
== Submitted parallel build jobs, exiting now
== Temporary log file(s) /tmp/eb-zeLzBb/easybuild-H_Z0fB.log* have been removed.
== Temporary directory /tmp/eb-zeLzBb has been removed.
```
!!! note ""
Salomon jobs have IDs of the form XXXXX.isrv5; Anselm jobs have IDs of the form XXXXX.dm2.
```console
$ qstat -u username -w
Req'd Req'd Elap
Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time
------------------------------ --------------- --------------- --------------- -------- ---- ----- ------ ----- - -----
1319314.dm2 username qprod git-2.11.0-GNU- 85605 1 16 -- 24:00 R 00:00:17
```
# Singularity on IT4Innovations
Singularity images of the main Linux distributions are prepared on our clusters. The list of available Singularity images (as of 05.04.2018):
```console
Salomon Anselm
├── CentOS ├── CentOS
│ ├── 6.9 │ ├── 6.9
│ ├── 6.9-MIC │ ├── 6.9-GPU
│ ├── 7.4 │ ├── 7.4
│ └── 7.4-MIC │ └── 7.4-GPU
├── Debian ├── Debian
│ └── 8.0 │ ├── 8.0
└── Ubuntu │ └── 8.0-GPU
└── 16.04 └── Ubuntu
├── 16.04
└── 16.04-GPU
```
Current information about the available Singularity images can be obtained with the `ml av` command. The images are listed in the `OS` section.
The bootstrap scripts, wrappers, features, etc. are located [here](https://code.it4i.cz/sccs/it4i-singularity).
!!! note
The images with graphic card support are marked as **-GPU** and images with Intel Xeon Phi support are marked as **-MIC**
## IT4Innovations Singularity Wrappers
For better user experience with Singularity containers we prepared several wrappers:
* image-exec
* image-mpi
* image-run
* image-shell
* image-update
These wrappers help you use the prepared Singularity images loaded as modules. You can load a Singularity image like any other module on the cluster with the `ml OS/version` command. After the module is loaded for the first time, the prepared image is copied into your home folder and is ready for use. When you load the module again, the version of the image is checked and an image update (if one exists) is offered. You can then update your copy of the image with the `image-update` command.
!!! warning
With image update, all user changes to the image will be overridden.
The runscript inside the Singularity image can be run by the `image-run` command. This command automatically mounts the `/scratch` and `/apps` storage and invokes the image as writable, so user changes can be made.
Very similar to `image-run` is the `image-exec` command. The only difference is that `image-exec` runs a user-defined command instead of the runscript. In this case, the command to be run is specified as a parameter.
For development, it is very useful to use an interactive shell inside the Singularity container. In this interactive shell you can make any changes to the image you want, but be aware that you cannot use `sudo`-privileged commands directly on the cluster. To invoke the interactive shell, simply use the `image-shell` command.
Another useful feature of Singularity is its direct support of OpenMPI. For proper MPI function, you have to install the same version of OpenMPI inside the image as is used on the cluster; OpenMPI/2.1.1 is installed in the prepared images. The MPI must be started outside the container; the easiest way to start it is to use the `image-mpi` command.
This command takes the same parameters as `mpirun`, so there is no difference between running a normal MPI application and an MPI application in a Singularity container.
## Examples
In the examples, we will use the prepared Singularity images.
### Load Image
```console
$ ml CentOS/6.9
Your image of CentOS/6.9 is at location: /home/login/.singularity/images/CentOS-6.9_20180220133305.img
```
!!! tip
After the module is loaded for the first time, the prepared image is copied into your home folder to the *.singularity/images* subfolder.
### Wrappers
**image-exec**
Executes the given command inside the Singularity image. The container is started, the command is executed, and then the container is stopped.
```console
$ ml CentOS/7.3
Your image of CentOS/7.3 is at location: /home/login/.singularity/images/CentOS-7.3_20180220104046.img
$ image-exec cat /etc/centos-release
CentOS Linux release 7.3.1708 (Core)
```
**image-mpi**
MPI wrapper - see more in the chapter [Examples MPI](#mpi).
**image-run**
This command runs the runscript inside the Singularity image. Note that the prepared images do not contain a runscript.
**image-shell**
Invokes an interactive shell inside the Singularity image.
```console
$ ml CentOS/7.3
$ image-shell
Singularity: Invoking an interactive shell within container...
Singularity CentOS-7.3_20180220104046.img:~>
```
### Update Image
This command is for updating your local copy of the Singularity image. The local copy is overridden in this case.
```console
$ ml CentOS/6.9
New version of CentOS image was found. (New: CentOS-6.9_20180220092823.img Old: CentOS-6.9_20170220092823.img)
For updating image use: image-update
Your image of CentOS/6.9 is at location: /home/login/.singularity/images/CentOS-6.9_20170220092823.img
$ image-update
New version of CentOS image was found. (New: CentOS-6.9_20180220092823.img Old: CentOS-6.9_20170220092823.img)
Do you want to update local copy? (WARNING all user modification will be deleted) [y/N]: y
Updating image CentOS-6.9_20180220092823.img
2.71G 100% 199.49MB/s 0:00:12 (xfer#1, to-check=0/1)
sent 2.71G bytes received 31 bytes 163.98M bytes/sec
total size is 2.71G speedup is 1.00
New version is ready. (/home/login/.singularity/images/CentOS-6.9_20180220092823.img)
```
### Intel Xeon Phi Cards - MIC
In the following example, we are using a job submitted by the command: `qsub -A PROJECT -q qprod -l select=1:mpiprocs=24:accelerator=true -I`
!!! info
The MIC image was prepared only for the Salomon cluster.
**Code for the Offload Test**
```cpp
#include <stdio.h>
#include <thread>
#include <stdlib.h>
#include <unistd.h>
int main() {
char hostname[1024];
gethostname(hostname, 1024);
unsigned int nthreads = std::thread::hardware_concurrency();
printf("Hello world, #of cores: %d\n",nthreads);
#pragma offload target(mic)
{
nthreads = std::thread::hardware_concurrency();
printf("Hello world from MIC, #of cores: %d\n",nthreads);
}
}
```
**Compile and Run**
```console
[login@r38u03n975 ~]$ ml CentOS/6.9-MIC
Your image of CentOS/6.9-MIC is at location: /home/login/.singularity/images/CentOS-6.9-MIC_20180220112004.img
[login@r38u03n975 ~]$ image-shell
Singularity: Invoking an interactive shell within container...
Singularity CentOS-6.9-MIC_20180220112004.img:~> ml intel/2017b
Singularity CentOS-6.9-MIC_20180220112004.img:~> ml
Currently Loaded Modules:
1) GCCcore/6.3.0 3) icc/2017.1.132-GCC-6.3.0-2.27 5) iccifort/2017.1.132-GCC-6.3.0-2.27 7) iimpi/2017a 9) intel/2017a
2) binutils/2.27-GCCcore-6.3.0 4) ifort/2017.1.132-GCC-6.3.0-2.27 6) impi/2017.1.132-iccifort-2017.1.132-GCC-6.3.0-2.27 8) imkl/2017.1.132-iimpi-2017a
Singularity CentOS-6.9-MIC_20180220112004.img:~> icpc -std=gnu++11 -qoffload=optional hello.c -o hello-host
Singularity CentOS-6.9-MIC_20180220112004.img:~> ./hello-host
Hello world, #of cores: 24
Hello world from MIC, #of cores: 244
```
### GPU Image
In the following example, we are using a job submitted by the command: `qsub -A PROJECT -q qnvidia -l select=1:ncpus=16:mpiprocs=16 -l walltime=01:00:00 -I`
!!! note
The GPU image was prepared only for the Anselm cluster.
**Checking NVIDIA Driver Inside Image**
```console
[login@cn199.anselm ~]$ image-shell
Singularity: Invoking an interactive shell within container...
Singularity CentOS-6.9-GPU_20180309130604.img:~> ml
No modules loaded
Singularity CentOS-6.9-GPU_20180309130604.img:~> nvidia-smi
Mon Mar 12 07:07:53 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.30 Driver Version: 390.30 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla K20m Off | 00000000:02:00.0 Off | 0 |
| N/A 28C P0 51W / 225W | 0MiB / 4743MiB | 89% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
```
### MPI
In the following example, we are using a job submitted by the command: `qsub -A PROJECT -q qprod -l select=2:mpiprocs=24 -l walltime=00:30:00 -I`
!!! note
We have seen no major performance impact for a job running in a Singularity container.
With Singularity, the MPI usage model is to call `mpirun` from outside the container, and reference the container from your `mpirun` command. Usage would look like this:
```console
$ mpirun -np 24 singularity exec container.img /path/to/contained_mpi_prog
```
By calling `mpirun` outside of the container, we solve several very complicated work-flow aspects. For example, if `mpirun` is called from within the container, it must have a method for spawning processes on remote nodes. Historically, SSH has been used for this, which means that there must be an `sshd` running within the container on the remote nodes, and this `sshd` process must not conflict with the `sshd` running on the host. It is also possible for the resource manager to launch the job and (in OpenMPI's case) the Orted (Open RTE User-Level Daemon) processes on the remote system, but that then requires resource manager modification and container awareness.
In the end, we do not gain anything by calling `mpirun` from within the container except for increased complexity and possibly losing out on some performance benefits (e.g. if the container was not built with the same OFED stack as the host).
#### MPI Inside Singularity Image
```console
$ ml CentOS/6.9
$ image-shell
Singularity: Invoking an interactive shell within container...
Singularity CentOS-6.9_20180220092823.img:~> mpirun hostname | wc -l
24
```
As you can see in this example, we allocated two nodes, but MPI can use only one node (24 processes) when used inside the Singularity image.
#### MPI Outside Singularity Image
```console
$ ml CentOS/6.9
Your image of CentOS/6.9 is at location: /home/login/.singularity/images/CentOS-6.9_20180220092823.img
$ image-mpi hostname | wc -l
48
```
In this case, the MPI wrapper behaves like the `mpirun` command: `mpirun` is called outside the container, and the communication between nodes is propagated into the container automatically.
## How to Use Your Own Image on the Cluster?
* Prepare the image on your computer
* Transfer the images to your `/home` directory on the cluster (for example `.singularity/image`)
```console
local:$ scp container.img login@login4.salomon.it4i.cz:~/.singularity/image/container.img
```
* Load module Singularity (`ml Singularity`)
* Use your image
!!! note
If you want to use the Singularity wrappers with your own images, then load module `Singularity-wrappers/master` and set the environment variable `IMAGE_PATH_LOCAL=/path/to/container.img`.
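For illustration, a minimal session with your own image might look like this (a sketch only; the image path is the one used in the example above):
```console
$ export IMAGE_PATH_LOCAL=~/.singularity/image/container.img
$ ml Singularity-wrappers/master
$ image-shell
```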
## How to Edit an IT4Innovations Image?
* Transfer the image to your computer
```console
local:$ scp login@login4.salomon.it4i.cz:/home/login/.singularity/image/container.img container.img
```
* Modify the image
* Transfer the image from your computer to your `/home` directory on the cluster
```console
local:$ scp container.img login@login4.salomon.it4i.cz:/home/login/.singularity/image/container.img
```
* Load module Singularity (`ml Singularity`)
* Use your image
# Singularity Container
[Singularity](http://singularity.lbl.gov/) enables users to have full control of their environment. A non-privileged user can "swap out" the operating system on the host for one they control. So if the host system is running RHEL6 but your application runs in Ubuntu/RHEL7, you can create an Ubuntu/RHEL7 image, install your applications into that image, copy the image to another host, and run your application on that host in its native Ubuntu/RHEL7 environment.
Singularity also allows you to leverage the resources of whatever host you are on. This includes HPC interconnects, resource managers, file systems, GPUs and/or accelerators, etc. Singularity does this by enabling several key facets:
* Encapsulation of the environment
* Containers are image based
* No user contextual changes or root escalation allowed
* No root owned daemon processes
This documentation is for Singularity version 2.4 and newer.
## Using Docker Images
Singularity can import, bootstrap, and even run Docker images directly from [Docker Hub](https://hub.docker.com/). You can easily run RHEL7 container like this:
```console
hra0031@login4:~$ cat /etc/redhat-release
CentOS release 6.9 (Final)
hra0031@login4:~$ ml Singularity
hra0031@login4:~$ singularity shell docker://centos:latest
Docker image path: index.docker.io/library/centos:latest
Cache folder set to /home/hra0031/.singularity/docker
[1/1] |===================================| 100.0%
Creating container runtime...
Singularity: Invoking an interactive shell within container...
Singularity centos:latest:~> cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
```
In this case, the image is downloaded from Docker Hub, extracted to a temporary directory, and a Singularity interactive shell is invoked. This procedure can take a lot of time, especially with large images.
## Importing Docker Image
Singularity containers can be in three different formats:
* read-only **squashfs** (default) - best for production
* writable **ext3** (--writable option)
* writable **(ch)root directory** (--sandbox option) - best for development
Squashfs and (ch)root directory images can be built from a Docker source directly on the cluster; no root privileges are needed. It is strongly recommended to create a native Singularity image to speed up the launch of the container.
```console
hra0031@login4:~$ ml Singularity
hra0031@login4:~$ singularity build ubuntu.img docker://ubuntu:latest
Docker image path: index.docker.io/library/ubuntu:latest
Cache folder set to /home/hra0031/.singularity/docker
Importing: base Singularity environment
Importing: /home/hra0031/.singularity/docker/sha256:50aff78429b146489e8a6cb9334d93a6d81d5de2edc4fbf5e2d4d9253625753e.tar.gz
Importing: /home/hra0031/.singularity/docker/sha256:f6d82e297bce031a3de1fa8c1587535e34579abce09a61e37f5a225a8667422f.tar.gz
Importing: /home/hra0031/.singularity/docker/sha256:275abb2c8a6f1ce8e67a388a11f3cc014e98b36ff993a6ed1cc7cd6ecb4dd61b.tar.gz
Importing: /home/hra0031/.singularity/docker/sha256:9f15a39356d6fc1df0a77012bf1aa2150b683e46be39d1c51bc7a320f913e322.tar.gz
Importing: /home/hra0031/.singularity/docker/sha256:fc0342a94c89e477c821328ccb542e6fb86ce4ef4ebbf1098e85669e051ef0dd.tar.gz
Importing: /home/hra0031/.singularity/metadata/sha256:c6a9ef4b9995d615851d7786fbc2fe72f72321bee1a87d66919b881a0336525a.tar.gz
WARNING: Building container as an unprivileged user. If you run this container as root
WARNING: it may be missing some functionality.
Building Singularity image...
Singularity container built: ubuntu.img
Cleaning up...
```
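If you prefer the writable sandbox format for development, a similar build can target a directory instead (a sketch using the `--sandbox` option listed above; the directory name is illustrative):
```console
$ singularity build --sandbox ubuntu.sandbox/ docker://ubuntu:latest
$ singularity shell --writable ubuntu.sandbox/
```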
## Launching the Container
The interactive shell can be invoked by the `singularity shell` command. This is useful for development purposes. Use the `-w | --writable` option to make changes inside the container permanent.
```console
hra0031@login4:~$ singularity shell -w ubuntu.img
Singularity: Invoking an interactive shell within container...
Singularity ubuntu.img:~> cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"
```
A command can be run inside the container (without interactive shell) by invoking `singularity exec` command.
```
hra0031@login4:~$ singularity exec ubuntu.img cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"
```
A Singularity image can contain a runscript. This script is executed inside the container when the `singularity run` command is used. The runscript is mostly used to run the application for which the container was built. In the following example it is the `fortune | cowsay` command.
```
hra0031@login4:~$ singularity run ubuntu.img
___________________
< Are you a turtle? >
-------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
```
## Accessing /HOME and /SCRATCH Within Container
The user home directory is mounted inside the container automatically. If you need access to the **/SCRATCH** storage for your computation, it must be mounted with the `-B | --bind` option.
!!! warning
The mounted folder has to exist inside the container or the container image has to be writable!
```console
hra0031@login4:~$ singularity shell -B /scratch -w ubuntu.img
Singularity: Invoking an interactive shell within container...
Singularity ubuntu.img:~> ls /scratch
ddn sys temp work
```
Comprehensive documentation can be found at the [Singularity](http://singularity.lbl.gov/quickstart) website.
# Spack
Spack is a package manager for supercomputers, Linux, and macOS. It makes installing scientific software easy. With Spack, you can build a package with multiple versions, configurations, platforms, and compilers, and all of these builds can coexist on the same machine.
Homepage is at [https://spack.io/](https://spack.io/)
Documentation is at [https://spack.readthedocs.io/en/latest/](https://spack.readthedocs.io/en/latest/)
## Spack on IT4Innovations Clusters
```console
$ ml av Spack
---------------------- /apps/modules/devel ------------------------------
Spack/default
```
!!! note
Spack/default is a rule for setting up a local installation.
## First Usage Module Spack/Default
Spack will be installed into the `~/Spack` folder. You can adjust the configuration by modifying `~/.spack/configure.yml`.
```console
$ ml Spack
== Settings for first use
Couldn't import dot_parser, loading of dot files will not be possible.
== temporary log file in case of crash /tmp/eb-wLh1RT/easybuild-54vEn3.log
== processing EasyBuild easyconfig /apps/easybuild/easyconfigs-it4i/s/Spack/Spack-0.10.0.eb
== building and installing Spack/0.10.0...
== fetching files...
== creating build dir, resetting environment...
== unpacking...
== patching...
== preparing...
== configuring...
== building...
== testing...
== installing...
== taking care of extensions...
== postprocessing...
== sanity checking...
== cleaning up...
== creating module...
== permissions...
== packaging...
== COMPLETED: Installation ended successfully
== Results of the build can be found in the log file(s) ~/.local/easybuild/software/Spack/0.10.0/easybuild/easybuild-Spack-0.10.0-20170707.122650.log
== Build succeeded for 1 out of 1
== Temporary log file(s) /tmp/eb-wLh1RT/easybuild-54vEn3.log* have been removed.
== Temporary directory /tmp/eb-wLh1RT has been removed.
== Create folder ~/Spack
The following have been reloaded with a version change:
1) Spack/default => Spack/0.10.0
$ spack --version
0.10.0
```
## Usage Module Spack/Default
```console
$ ml Spack
The following have been reloaded with a version change:
1) Spack/default => Spack/0.10.0
$ spack --version
0.10.0
```
## Build Software Package
Packages in Spack are written in pure Python, so you can do anything in Spack that you can do in Python. Python was chosen as the implementation language for two reasons. First, Python is becoming ubiquitous in the scientific software community. Second, it’s a modern language and has many powerful features to help make package writing easy.
### Search for Available Software
To install software with Spack, you need to know what software is available. Use the `spack list` command.
```console
$ spack list
==> 1114 packages.
abinit font-bh-100dpi libffi npm py-ply r-maptools tetgen
ack font-bh-75dpi libfontenc numdiff py-pmw r-markdown tethex
activeharmony font-bh-lucidatypewriter-100dpi libfs nwchem py-prettytable r-mass texinfo
adept-utils font-bh-lucidatypewriter-75dpi libgcrypt ocaml py-proj r-matrix texlive
adios font-bh-ttf libgd oce py-prompt-toolkit r-matrixmodels the-platinum-searcher
adol-c font-bh-type1 libgpg-error oclock py-protobuf r-memoise the-silver-searcher
allinea-forge font-bitstream-100dpi libgtextutils octave py-psutil r-mgcv thrift
allinea-reports font-bitstream-75dpi libhio octave-splines py-ptyprocess r-mime tinyxml
ant font-bitstream-speedo libice octopus py-pudb r-minqa tinyxml2
antlr font-bitstream-type1 libiconv ompss py-py r-multcomp tk
ape font-cronyx-cyrillic libint ompt-openmp py-py2cairo r-munsell tmux
apex font-cursor-misc libjpeg-turbo opari2 py-py2neo r-mvtnorm tmuxinator
applewmproto font-daewoo-misc libjson-c openblas py-pychecker r-ncdf4 transset
appres font-dec-misc liblbxutil opencoarrays py-pycodestyle r-networkd3 trapproto
apr font-ibm-type1 libmesh opencv py-pycparser r-nlme tree
...
```
#### Specify Software Version (For Package)
To see more available versions of a package, run `spack versions`.
```console
$ spack versions git
==> Safe versions (already checksummed):
2.11.0 2.9.3 2.9.2 2.9.1 2.9.0 2.8.4 2.8.3 2.8.2 2.8.1 2.8.0 2.7.3 2.7.1
==> Remote versions (not yet checksummed):
Found no versions for git
```
## Graph for Software Package
Spack provides the `spack graph` command to display the dependency graph. By default, the command generates an ASCII rendering of a spec's dependency graph.
```console
$ spack graph git
o git
|\
| |\
| | |\
| | | |\
| | | | |\
| | | | | |\
| | | | | | |\
| | | | | | | |\
| | | | | | | o | curl
| |_|_|_|_|_|/| |
|/| | | |_|_|/ /
| | | |/| | | |
| | | o | | | | openssl
| |_|/ / / / /
|/| | | | | |
| | | | o | | gettext
| | | | |\ \ \
| | | | | |\ \ \
| | | | | | |\ \ \
| | | | | | | |\ \ \
| | | | | | | o | | | libxml2
| |_|_|_|_|_|/| | | |
|/| | | | |_|/| | | |
| | | | |/| | | | | |
o | | | | | | | | | | zlib
/ / / / / / / / / /
| | | o | | | | | | xz
| | | / / / / / /
| | | o | | | | | tar
| | | / / / / /
| | | | o | | | pkg-config
| | | | / / /
o | | | | | | perl
/ / / / / /
o | | | | | pcre
/ / / / /
| o | | | ncurses
| / / /
| | | o autoconf
| | | o m4
| | | o libsigsegv
| | |
o | | libiconv
/ /
| o expat
|
o bzip2
```
### Information for Software Package
To get more information on a particular package from `spack list`, use `spack info`.
```console
$ spack info git
Package: git
Homepage: http://git-scm.com
Safe versions:
2.11.0 https://github.com/git/git/tarball/v2.11.0
2.9.3 https://github.com/git/git/tarball/v2.9.3
2.9.2 https://github.com/git/git/tarball/v2.9.2
2.9.1 https://github.com/git/git/tarball/v2.9.1
2.9.0 https://github.com/git/git/tarball/v2.9.0
2.8.4 https://github.com/git/git/tarball/v2.8.4
2.8.3 https://github.com/git/git/tarball/v2.8.3
2.8.2 https://github.com/git/git/tarball/v2.8.2
2.8.1 https://github.com/git/git/tarball/v2.8.1
2.8.0 https://github.com/git/git/tarball/v2.8.0
2.7.3 https://github.com/git/git/tarball/v2.7.3
2.7.1 https://github.com/git/git/tarball/v2.7.1
Variants:
None
Installation Phases:
install
Build Dependencies:
autoconf curl expat gettext libiconv openssl pcre perl zlib
Link Dependencies:
curl expat gettext libiconv openssl pcre perl zlib
Run Dependencies:
None
Virtual Packages:
None
Description:
Git is a free and open source distributed version control system
designed to handle everything from small to very large projects with
speed and efficiency.
```
### Install Software Package
`spack install` will install any package shown by `spack list`. For example, to install the latest version of the `git` package, you might type `spack install git` for the default version, or `spack install git@version` to choose a particular one.
```console
$ spack install git@2.11.0
==> Installing git
==> Installing pcre
==> Fetching http://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.39.tar.bz2
...
```
!!! warning
`FTP` is not allowed on the cluster; you must edit the source link.
### Edit Rule
```console
$ spack edit git
```
!!! note
To change the source link (`ftp://` to `http://`), use `spack create URL -f` to regenerate the rule.
#### **Example**
```console
$ spack install git
==> Installing git
==> Installing pcre
==> Fetching ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.39.tar.bz2
curl: (7) couldn't connect to host
==> Fetching from ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.39.tar.bz2 failed.
==> Error: FetchError: All fetchers failed for pcre-8.39-bm3lumpbghly2l7bkjsi4n2l3jyam6ax
...
$ spack create http://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.39.tar.bz2 -f
==> This looks like a URL for pcre
==> Found 2 versions of pcre:
8.41 http://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.41.tar.bz2
8.40 http://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.40.tar.bz2
How many would you like to checksum? (default is 1, q to abort) 1
==> Downloading...
==> Fetching http://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.41.tar.bz2
######################################################################## 100,0%
==> Checksummed 1 version of pcre
==> This package looks like it uses the cmake build system
==> Created template for pcre package
==> Created package file: ~/.local/easybuild/software/Spack/0.10.0/var/spack/repos/builtin/packages/pcre/package.py
$
$ spack install git
==> Installing git
==> Installing pcre
==> Installing cmake
==> Installing ncurses
==> Fetching http://ftp.gnu.org/pub/gnu/ncurses/ncurses-6.0.tar.gz
######################################################################## 100,0%
...
```
## Available Spack Module
We know that `spack list` shows the names of available packages, but how do you figure out which ones are already installed? Use the `spack find` command.
```console
$ spack find
==> 19 installed packages.
-- linux-centos6-x86_64 / gcc@4.4.7 -----------------------------
autoconf@2.69 cmake@3.7.1 expat@2.2.0 git@2.11.0 libsigsegv@2.10 m4@1.4.17 openssl@1.0.2j perl@5.24.0 tar@1.29 zlib@1.2.10
bzip2@1.0.6 curl@7.50.3 gettext@0.19.8.1 libiconv@1.14 libxml2@2.9.4 ncurses@6.0 pcre@8.41 pkg-config@0.29.1 xz@5.2.2
```
Spack colorizes its output; to keep the colors when paging the output, pipe it through `less -R`:
```console
$ spack find | less -R
```
`spack find` shows the specs of installed packages. A spec is like a name, but it has a version, compiler, architecture, and build options associated with it. In Spack, you can have many installations of the same package with different specs.
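For illustration, a spec can pin the version and the compiler directly on the command line (a sketch; the compiler shown is the system GCC from the output above):
```console
$ spack install git@2.11.0 %gcc@4.4.7
$ spack find git@2.11.0
```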
## Load and Unload Module
Referring to installed packages by their full module names or installation paths is neither pretty, easy to remember, nor easy to type. Luckily, Spack has its own interface for using modules and dotkits.
```console
$ spack load git
==> This command requires spack's shell integration.
To initialize spack's shell commands, you must run one of
the commands below. Choose the right command for your shell.
For bash and zsh:
. ~/.local/easybuild/software/Spack/0.10.0/share/spack/setup-env.sh
For csh and tcsh:
setenv SPACK_ROOT ~/.local/easybuild/software/Spack/0.10.0
source ~/.local/easybuild/software/Spack/0.10.0/share/spack/setup-env.csh
```
### First Usage
```console
$ . ~/.local/easybuild/software/Spack/0.10.0/share/spack/setup-env.sh
```
```console
$ git version 1.7.1
$ spack load git
$ git --version
git version 2.11.0
$ spack unload git
$ git --version
git version 1.7.1
```
## Uninstall Software Package
If multiple installed packages match the supplied spec, Spack will ask you either to provide a version number to remove the ambiguity, or to use the `--all` option to uninstall all of the matching packages.
You may force-uninstall a package (even if other installed packages depend on it) with the `--force` option.
```console
$ spack uninstall git
==> The following packages will be uninstalled :
-- linux-centos6-x86_64 / gcc@4.4.7 -----------------------------
xmh3hmb git@2.11.0%gcc
==> Do you want to proceed ? [y/n]
y
==> Successfully uninstalled git@2.11.0%gcc@4.4.7 arch=linux-centos6-x86_64 -xmh3hmb
```
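A sketch of the options mentioned above (the specs are illustrative):
```console
$ spack uninstall --all git
$ spack uninstall --force git@2.11.0
```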
# Virtualization
Running virtual machines on compute nodes
## Introduction
There are situations when the Anselm environment is not suitable for user needs:
* The application requires a different operating system (e.g. Windows) or is not available for Linux
* The application requires different versions of base system libraries and tools
* The application requires a specific setup (installation, configuration) of a complex software stack
* The application requires privileged access to the operating system
* ... and combinations of the above cases
We offer a solution for these cases: **virtualization**. The Anselm environment makes it possible to run virtual machines on compute nodes. Users can create their own operating system images with a specific software stack and run instances of these images as virtual machines on compute nodes. Running virtual machines uses the standard mechanism of [Resource Allocation and Job Execution](/salomon/job-submission-and-execution/).
The solution is based on the QEMU-KVM software stack and provides hardware-assisted x86 virtualization.
## Limitations
Anselm's infrastructure was not designed for virtualization. The compute nodes, storage, and all other infrastructure are intended and optimized for running HPC jobs; this implies a suboptimal configuration of virtualization and several limitations.
Virtualization does not provide the performance and all the features of the native environment. There is a significant degradation of I/O performance (storage, network), so virtualization is not suitable for I/O (disk, network) intensive workloads.
Virtualization also has some drawbacks; it is not easy to set up an efficient solution.
The solution described in the [HOWTO](#howto) chapter is suitable for single-node tasks; it does not introduce virtual machine clustering.
!!! note
Consider virtualization a last-resort solution for your needs.
!!! warning
Consult the use of virtualization with IT4Innovations support.
For running Windows applications (when the source code and a native Linux application are not available), consider using Wine, a Windows compatibility layer. Many Windows applications can be run using Wine with less effort and better performance than with virtualization.
## Licensing
IT4Innovations does not provide any licenses for operating systems and software of virtual machines. Users are (in accordance with the [Acceptable use policy document](http://www.it4i.cz/acceptable-use-policy.pdf)) fully responsible for licensing all software running in virtual machines on Anselm. Be aware of the complex conditions of licensing software in virtual environments.
!!! note
Users are responsible for licensing the OS (e.g. MS Windows) and all software running in their virtual machines.
## Howto
### Virtual Machine Job Workflow
We propose this job workflow:
![Workflow](../../img/virtualization-job-workflow.png)
Our recommended solution is that the job script creates a distinct shared job directory, which serves as a central point for data exchange between Anselm's environment on the compute node (host) (e.g. HOME, SCRATCH, local scratch and other local or cluster file systems) and the virtual machine (guest). The job script links or copies the input data and the instructions on what to do (the run script) into the job directory; the virtual machine processes the input data according to the instructions in the job directory and stores the output back into it. We recommend running the virtual machine in so-called [snapshot mode](virtualization/#snapshot-mode): the image is immutable (it does not change), so one image can be used for many concurrent jobs.
### Procedure
1. Prepare image of your virtual machine
1. Optimize image of your virtual machine for Anselm's virtualization
1. Modify your image for running jobs
1. Create job script for executing virtual machine
1. Run jobs
### Prepare Image of Your Virtual Machine
You can either use your existing image or create a new image from scratch.
QEMU currently supports these image types or formats:
* raw
* cloop
* cow
* qcow
* qcow2
* vmdk - VMware 3 & 4, or 6 image format, for exchanging images with that product
* vdi - VirtualBox 1.1 compatible image format, for exchanging images with VirtualBox.
You can convert your existing image using the `qemu-img convert` command. Supported formats of this command are: `blkdebug blkverify bochs cloop cow dmg file ftp ftps host_cdrom host_device host_floppy http https nbd parallels qcow qcow2 qed raw sheepdog tftp vdi vhdx vmdk vpc vvfat`.
We recommend using the advanced QEMU-native image format qcow2.
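For example, a VMware image can be converted to qcow2 like this (a sketch; the file names are illustrative):
```console
$ qemu-img convert -f vmdk -O qcow2 win.vmdk win.img
```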
[More about QEMU Images](http://en.wikibooks.org/wiki/QEMU/Images)
### Optimize Image of Your Virtual Machine
Use virtio devices (for the disk/drive and the network adapter) and install the virtio drivers (paravirtualized drivers) into the virtual machine. There is a significant performance gain when using the virtio drivers. For more information see [Virtio Linux](http://www.linux-kvm.org/page/Virtio) and [Virtio Windows](http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers).
Disable all unnecessary services and tasks. Restrict all unnecessary operating system operations.
Remove all unnecessary software and files.
Remove all paging space, swap files, partitions, etc.
Shrink your image. (It is recommended to zero all free space and reconvert the image using `qemu-img`.)
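A minimal sketch of the shrink step for a Linux guest (zero the free space inside the guest, then reconvert the image on the host; file names are illustrative, and dd is expected to stop with a "no space left" error once the free space is filled):
```console
guest$ dd if=/dev/zero of=/zero.fill bs=1M; rm -f /zero.fill
host$  qemu-img convert -O qcow2 win.img win-shrunk.img
```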
### Modify Your Image for Running Jobs
Your image should run some kind of operating system startup script. The startup script should run the application and, when the application exits, shut down or quit the virtual machine.
We recommend that the startup script:
* maps the job directory from the host (the compute node)
* runs the script (we call it the "run script") from the job directory and waits for the application to exit
* waits for some time period (a few minutes) if the run script does not exist, for management purposes
* shuts down/quits the OS
For Windows operating systems, we suggest using a Local Group Policy startup script; for Linux operating systems, rc.local, a runlevel init script, or a similar service.
Example startup script for Windows virtual machine:
```bat
@echo off
set LOG=c:\startup.log
set MAPDRIVE=z:
set SCRIPT=%MAPDRIVE%\run.bat
set TIMEOUT=300
echo %DATE% %TIME% Running startup script>%LOG%
rem Mount share
echo %DATE% %TIME% Mounting shared drive>>%LOG%
net use z: \\10.0.2.4\qemu >>%LOG% 2>&1
dir z:\ >>%LOG% 2>&1
echo. >>%LOG%
if exist %MAPDRIVE%\ (
echo %DATE% %TIME% The drive "%MAPDRIVE%" exists>>%LOG%
if exist %SCRIPT% (
echo %DATE% %TIME% The script file "%SCRIPT%"exists>>%LOG%
echo %DATE% %TIME% Running script %SCRIPT%>>%LOG%
set TIMEOUT=0
call %SCRIPT%
) else (
echo %DATE% %TIME% The script file "%SCRIPT%"does not exist>>%LOG%
)
) else (
echo %DATE% %TIME% The drive "%MAPDRIVE%" does not exist>>%LOG%
)
echo. >>%LOG%
timeout /T %TIMEOUT%
echo %DATE% %TIME% Shut down>>%LOG%
shutdown /s /t 0
```
The example startup script maps the shared job directory as drive z: and looks for a run script called run.bat. If the run script is found, it is executed; otherwise, the script waits for 5 minutes and then shuts down the virtual machine.
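A minimal Linux counterpart (a sketch only, e.g. invoked from rc.local; it assumes the job directory is exported via the default SMB share at 10.0.2.4, as in the Windows example, and that cifs-utils is installed in the guest; the run script name run.sh is illustrative):
```bash
#!/bin/sh
# Mount the shared job directory exported by QEMU's user network back-end
mkdir -p /mnt/job
mount -t cifs //10.0.2.4/qemu /mnt/job -o guest
# Run the job's run script if present, otherwise wait a few minutes
if [ -x /mnt/job/run.sh ]; then
    /mnt/job/run.sh
else
    sleep 300
fi
# Shut down the virtual machine when done
poweroff
```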
### Create Job Script for Executing Virtual Machine
Create a job script according to the recommended [Virtual Machine Job Workflow](#virtual-machine-job-workflow).
Example job for Windows virtual machine:
```bash
#!/bin/sh
JOB_DIR=/scratch/$USER/win/${PBS_JOBID}
#Virtual machine settings
VM_IMAGE=~/work/img/win.img
VM_MEMORY=49152
VM_SMP=16
# Prepare job dir
mkdir -p ${JOB_DIR} && cd ${JOB_DIR} || exit 1
ln -s ~/work/win .
ln -s /scratch/$USER/data .
ln -s ~/work/win/script/run/run-appl.bat run.bat
# Run virtual machine
export TMPDIR=/lscratch/${PBS_JOBID}
module add qemu
qemu-system-x86_64 \
  -enable-kvm \
  -cpu host \
  -smp ${VM_SMP} \
  -m ${VM_MEMORY} \
  -vga std \
  -localtime \
  -usb -usbdevice tablet \
  -device virtio-net-pci,netdev=net0 \
  -netdev user,id=net0,smb=${JOB_DIR},hostfwd=tcp::3389-:3389 \
  -drive file=${VM_IMAGE},media=disk,if=virtio \
  -snapshot \
  -nographic
```
The job script links the application data (win), the input data (data) and the run script (run.bat) into the job directory and runs the virtual machine.
Example run script (run.bat) for Windows virtual machine:
```doscon
z:
cd winappl
call application.bat z:data z:output
```
The run script runs the application from the shared job directory (mapped as drive z:), processes the input data (z:data) from the job directory, and stores the output back to the job directory (z:output).
### Run Jobs
Run jobs as usual, see [Resource Allocation and Job Execution](/salomon/job-submission-and-execution/). Use only full node allocation for virtualization jobs.
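For example, a full-node virtualization job on Anselm might be submitted like this (a sketch; the project ID, walltime, and job script name are placeholders):
```console
$ qsub -A PROJECT -q qprod -l select=1:ncpus=16 -l walltime=04:00:00 ./job_script.sh
```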
### Running Virtual Machines
Virtualization is enabled only on compute nodes; it does not work on login nodes.
Load QEMU environment module:
```console
$ module add qemu
```
Get help
```console
$ man qemu
```
Run virtual machine (simple)
```console
$ qemu-system-x86_64 -hda linux.img -enable-kvm -cpu host -smp 16 -m 32768 -vga std -vnc :0
$ qemu-system-x86_64 -hda win.img -enable-kvm -cpu host -smp 16 -m 32768 -vga std -localtime -usb -usbdevice tablet -vnc :0
```
You can access the virtual machine with a VNC viewer (option `-vnc`) by connecting to the IP address of the compute node. For VNC you must use the VPN network.
Install a virtual machine from an ISO file:
```console
$ qemu-system-x86_64 -hda linux.img -enable-kvm -cpu host -smp 16 -m 32768 -vga std -cdrom linux-install.iso -boot d -vnc :0
$ qemu-system-x86_64 -hda win.img -enable-kvm -cpu host -smp 16 -m 32768 -vga std -localtime -usb -usbdevice tablet -cdrom win-install.iso -boot d -vnc :0
```
Run a virtual machine using optimized devices and the user network back-end with sharing and port forwarding, in snapshot mode:
```console
$ qemu-system-x86_64 -drive file=linux.img,media=disk,if=virtio -enable-kvm -cpu host -smp 16 -m 32768 -vga std -device virtio-net-pci,netdev=net0 -netdev user,id=net0,smb=/scratch/$USER/tmp,hostfwd=tcp::2222-:22 -vnc :0 -snapshot
$ qemu-system-x86_64 -drive file=win.img,media=disk,if=virtio -enable-kvm -cpu host -smp 16 -m 32768 -vga std -localtime -usb -usbdevice tablet -device virtio-net-pci,netdev=net0 -netdev user,id=net0,smb=/scratch/$USER/tmp,hostfwd=tcp::3389-:3389 -vnc :0 -snapshot
```
Thanks to port forwarding, you can access the virtual machine via SSH (Linux) or RDP (Windows) by connecting to the IP address of the compute node (and port 2222 for SSH). You must use the VPN network.
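For example, from a machine on the VPN (a sketch; the guest user name and compute node address are placeholders):
```console
$ ssh -p 2222 guest_user@<compute-node-address>
```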
!!! note
Keep in mind, that if you use virtio devices, you must have virtio drivers installed on your virtual machine.
### Networking and Data Sharing
For virtual machine networking, we suggest using the (default) user network back-end (sometimes called SLIRP). This network back-end NATs virtual machines and provides useful services such as DHCP, DNS, SMB sharing, and port forwarding.
In the default configuration, the IP network 10.0.2.0/24 is used; the host has IP address 10.0.2.2, the DNS server is 10.0.2.3, the SMB server is 10.0.2.4, and virtual machines obtain addresses from the range 10.0.2.15-10.0.2.31. Virtual machines have access to Anselm's network via NAT on the compute node (host).
Simple network setup
```console
$ qemu-system-x86_64 ... -net nic -net user
```
(This is the default when no -net options are given.)
Simple network setup with sharing and port forwarding (obsolete but simpler syntax, lower performance)
```console
$ qemu-system-x86_64 ... -net nic -net user,smb=/scratch/$USER/tmp,hostfwd=tcp::3389-:3389
```
Optimized network setup with sharing and port forwarding
```console
$ qemu-system-x86_64 ... -device virtio-net-pci,netdev=net0 -netdev user,id=net0,smb=/scratch/$USER/tmp,hostfwd=tcp::2222-:22
```
### Advanced Networking
#### Internet Access
Sometimes your virtual machine needs access to the internet (to install software, updates, activate software, etc.). We suggest a solution using Virtual Distributed Ethernet (VDE) enabled QEMU with SLIRP running on the login node, tunneled to the compute node. Be aware that this setup has very low performance, the worst of all the described solutions.
Load VDE enabled QEMU environment module (unload standard QEMU module first if necessary).
```console
$ module add qemu/2.1.2-vde2
```
Create virtual network switch.
```console
$ vde_switch -sock /tmp/sw0 -mgmt /tmp/sw0.mgmt -daemon
```
Run SLIRP daemon over SSH tunnel on login node and connect it to virtual network switch.
```console
$ dpipe vde_plug /tmp/sw0 = ssh login1 $VDE2_DIR/bin/slirpvde -s - --dhcp &
```
Run QEMU using VDE network back-end, connect to created virtual switch.
Basic setup (obsolete syntax)
```console
$ qemu-system-x86_64 ... -net nic -net vde,sock=/tmp/sw0
```
Setup using virtio device (obsolete syntax)
```console
$ qemu-system-x86_64 ... -net nic,model=virtio -net vde,sock=/tmp/sw0
```
Optimized setup
```console
$ qemu-system-x86_64 ... -device virtio-net-pci,netdev=net0 -netdev vde,id=net0,sock=/tmp/sw0
```
#### TAP Interconnect
Both the user and VDE network back-ends have low performance. For a fast interconnect (10 Gbit/s and more) between the compute node (host) and the virtual machine (guest), we suggest using the Linux kernel TAP device.
The Anselm cluster provides the TAP device tap0 for your job. The TAP interconnect does not provide any services (like NAT, DHCP, DNS, SMB, etc.), just raw networking, so you have to provide these services yourself if you need them.
To enable the TAP interconnect feature, you need to specify the virt_network=True PBS resource at job submission.
```console
$ qsub ... -l virt_network=True
```
Run QEMU with TAP network back-end:
```console
$ qemu-system-x86_64 ... -device virtio-net-pci,netdev=net1 -netdev tap,id=net1,ifname=tap0,script=no,downscript=no
```
The tap0 interface has IP address 192.168.1.1 and network mask 255.255.255.0 (/24). In the virtual machine, use an IP address from the range 192.168.1.2-192.168.1.254. For your convenience, some ports on the tap0 interface are redirected to higher-numbered ports, so you, as a non-privileged user, can provide services on these ports.
Redirected ports:
* DNS: UDP/53 -> UDP/3053, TCP/53 -> TCP/3053
* DHCP: UDP/67 -> UDP/3067
* SMB: TCP/139 -> TCP/3139, TCP/445 -> TCP/3445
You can configure the IP address of the virtual machine statically or dynamically. For dynamic addressing, provide your DHCP server on port 3067 of the tap0 interface; you can also provide your DNS server on port 3053 of the tap0 interface, for example:
```console
$ dnsmasq --interface tap0 --bind-interfaces -p 3053 --dhcp-alternate-port=3067,68 --dhcp-range=192.168.1.15,192.168.1.32 --dhcp-leasefile=/tmp/dhcp.leasefile
```
You can also provide your SMB services (on ports 3139, 3445) to obtain high performance data sharing.
Example smb.conf (not optimized)
```console
$ cat smb.conf
[global]
socket address=192.168.1.1
smb ports = 3445 3139
private dir=/tmp/qemu-smb
pid directory=/tmp/qemu-smb
lock directory=/tmp/qemu-smb
state directory=/tmp/qemu-smb
ncalrpc dir=/tmp/qemu-smb/ncalrpc
log file=/tmp/qemu-smb/log.smbd
smb passwd file=/tmp/qemu-smb/smbpasswd
security = user
map to guest = Bad User
unix extensions = no
load printers = no
printing = bsd
printcap name = /dev/null
disable spoolss = yes
log level = 1
guest account = USER
[qemu]
path=/scratch/USER/tmp
read only=no
guest ok=yes
writable=yes
follow symlinks=yes
wide links=yes
force user=USER
```
(Replace USER with your login name.)
Run SMB services
```console
$ smbd -s /tmp/qemu-smb/smb.conf
```
A virtual machine can, of course, have more than one network interface controller and can use more than one network back-end. For example, you can combine the user network back-end and the TAP interconnect.
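A sketch combining the user back-end (for DHCP and SMB convenience) with the TAP device (for the fast data path), built from the commands shown above:
```console
$ qemu-system-x86_64 ... \
    -device virtio-net-pci,netdev=net0 -netdev user,id=net0,smb=/scratch/$USER/tmp \
    -device virtio-net-pci,netdev=net1 -netdev tap,id=net1,ifname=tap0,script=no,downscript=no
```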
### Snapshot Mode
In snapshot mode, the image is not modified; changes are written to a temporary file (and discarded after the virtual machine exits). **This is the strongly recommended mode for running your jobs.** Set the TMPDIR environment variable to the local scratch directory for the placement of temporary files.
```console
$ export TMPDIR=/lscratch/${PBS_JOBID}
$ qemu-system-x86_64 ... -snapshot
```
### Windows Guests
For Windows guests we recommend these options; they will make life easier:
```console
$ qemu-system-x86_64 ... -localtime -usb -usbdevice tablet
```
# GPI-2
## Introduction
Programming Next Generation Supercomputers: GPI-2 is an API library for asynchronous interprocess, cross-node communication. It provides a flexible, scalable and fault tolerant interface for parallel applications.
The GPI-2 library implements the GASPI specification (Global Address Space Programming Interface, [www.gaspi.de](http://www.gaspi.de/en/project.html)). GASPI is a Partitioned Global Address Space (PGAS) API. It aims at scalable, flexible and failure tolerant computing in massively parallel environments.
## Modules
GPI-2, version 1.0.2, is available on Anselm via the gpi2 module:
```console
$ ml gpi2
$ ml av GPI-2 # Salomon
```
The module sets up the environment variables required for linking and running GPI-2 enabled applications. This particular command loads the default module, which is gpi2/1.0.2.
## Linking
!!! note
Link with -lGPI2 -libverbs
Load the gpi2 module and link your code against GPI-2 using the **-lGPI2** and **-libverbs** switches. GPI-2 requires the OFED InfiniBand communication library ibverbs.
### Compiling and Linking With Intel Compilers
```console
$ ml intel
$ ml gpi2
$ icc myprog.c -o myprog.x -Wl,-rpath=$LIBRARY_PATH -lGPI2 -libverbs
```
### Compiling and Linking With GNU Compilers
```console
$ ml gcc
$ ml gpi2
$ gcc myprog.c -o myprog.x -Wl,-rpath=$LIBRARY_PATH -lGPI2 -libverbs
```
## Running the GPI-2 Codes
!!! note
gaspi_run starts the GPI-2 application
The gaspi_run utility is used to start and run GPI-2 applications:
```console
$ gaspi_run -m machinefile ./myprog.x
```
A machine file (**machinefile**) with the hostnames of the nodes where the application will run must be provided. The machinefile lists all nodes on which to run, one entry per node per process. This file may be hand-created or obtained from the standard $PBS_NODEFILE:
```console
$ cut -f1 -d"." $PBS_NODEFILE > machinefile
```
machinefile:
```console
cn79
cn80
```
This machinefile will run 2 GPI-2 processes, one on node cn79, the other on node cn80.
machinefile:
```console
cn79
cn79
cn80
cn80
```
This machinefile will run 4 GPI-2 processes, 2 on node cn79 and 2 on node cn80.
!!! note
Use **mpiprocs** to control how many GPI-2 processes will run per node.
Example:
```console
$ qsub -A OPEN-0-0 -q qexp -l select=2:ncpus=16:mpiprocs=16 -I
```
This example will produce $PBS_NODEFILE with 16 entries per node.
### Gaspi_logger
!!! note
gaspi_logger views the output from GPI-2 application ranks
The gaspi_logger utility is used to view the output from all nodes except the master node (rank 0). The gaspi_logger is started, on another session, on the master node - the node where the gaspi_run is executed. The output of the application, when called with gaspi_printf(), will be redirected to the gaspi_logger. Other I/O routines (e.g. printf) will not.
## Example
Following is an example GPI-2 enabled code:
```cpp
#include <GASPI.h>
#include <stdlib.h>
void success_or_exit ( const char* file, const int line, const int ec)
{
if (ec != GASPI_SUCCESS)
{
gaspi_printf ("Assertion failed in %s[%i]:%dn", file, line, ec);
exit (1);
}
}
#define ASSERT(ec) success_or_exit (__FILE__, __LINE__, ec);
int main(int argc, char *argv[])
{
gaspi_rank_t rank, num;
gaspi_return_t ret;
/* Initialize GPI-2 */
ASSERT( gaspi_proc_init(GASPI_BLOCK) );
/* Get ranks information */
ASSERT( gaspi_proc_rank(&rank) );
ASSERT( gaspi_proc_num(&num) );
gaspi_printf("Hello from rank %d of %dn",
rank, num);
/* Terminate */
ASSERT( gaspi_proc_term(GASPI_BLOCK) );
return 0;
}
```
Load modules and compile:
```console
$ ml gcc gpi2
$ gcc helloworld_gpi.c -o helloworld_gpi.x -Wl,-rpath=$LIBRARY_PATH -lGPI2 -libverbs
```
Submit the job and run the GPI-2 application
```console
$ qsub -q qexp -l select=2:ncpus=1:mpiprocs=1,place=scatter,walltime=00:05:00 -I
qsub: waiting for job 171247.dm2 to start
qsub: job 171247.dm2 ready
cn79 $ ml gpi2
cn79 $ cut -f1 -d"." $PBS_NODEFILE > machinefile
cn79 $ gaspi_run -m machinefile ./helloworld_gpi.x
Hello from rank 0 of 2
```
At the same time, in another session, you may start the GASPI logger:
```console
$ ssh cn79
cn79 $ gaspi_logger
GASPI Logger (v1.1)
[cn80:0] Hello from rank 1 of 2
```
In this example, we compile the helloworld_gpi.c code using the **GNU compiler** (gcc) and link it against the GPI-2 and ibverbs libraries. The library search path is compiled in. For execution, we use the qexp queue, 2 nodes, 1 core each. The GPI module must be loaded on the master compute node (in this example cn79); gaspi_logger is used from a different session to view the output of the second process.
# OpenFOAM
a Free, Open Source CFD Software Package
## Introduction
OpenFOAM is a free, open source CFD software package developed by [**OpenCFD Ltd**](http://www.openfoam.com/about) at [**ESI Group**](http://www.esi-group.com/) and distributed by the [**OpenFOAM Foundation**](http://www.openfoam.org/). It has a large user base across most areas of engineering and science, from both commercial and academic organisations.
Homepage: [http://www.openfoam.com/](http://www.openfoam.com/)
### Installed Version
Currently, several versions compiled by GCC/ICC compilers in single/double precision with several versions of OpenMPI are available on Anselm.
For example, the syntax of an available OpenFOAM module is:
\<openfoam\/2.2.1-icc-openmpi1.6.5-DP\>
This means OpenFOAM version 2.2.1 compiled by the ICC compiler with OpenMPI 1.6.5 in double precision.
The naming convention of the installed versions is as follows:
openfoam\<VERSION\>-\<COMPILER\>\<openmpiVERSION\>-\<PRECISION\>
* \<VERSION\> - version of openfoam
* \<COMPILER\> - version of used compiler
* \<openmpiVERSION\> - version of used openmpi/impi
* \<PRECISION\> - DP/SP – double/single precision
### Available OpenFOAM Modules
To check available modules use
```console
$ ml av
```
In /opt/modules/modulefiles/engineering you can see the installed engineering software:
```console
------------------------------------ /opt/modules/modulefiles/engineering -------------------------------------------------------------
ansys/14.5.x matlab/R2013a-COM openfoam/2.2.1-icc-impi4.1.1.036-DP
comsol/43b-COM matlab/R2013a-EDU openfoam/2.2.1-icc-openmpi1.6.5-DP
comsol/43b-EDU openfoam/2.2.1-gcc481-openmpi1.6.5-DP paraview/4.0.1-gcc481-bullxmpi1.2.4.1-osmesa10.0
lsdyna/7.x.x openfoam/2.2.1-gcc481-openmpi1.6.5-SP
```
For information on how to use modules, [look here](environment-and-modules/).
## Getting Started
To create the OpenFOAM environment on Anselm, use the following commands:
```console
$ ml openfoam/2.2.1-icc-openmpi1.6.5-DP
$ source $FOAM_BASHRC
```
!!! note
Load the correct module according to your requirements (compiler: GCC/ICC, precision: DP/SP).
Create a project directory within the $HOME/OpenFOAM directory named \<USER\>-\<OFversion\> and create a directory named run within it, e.g. by typing:
```console
$ mkdir -p $FOAM_RUN
```
Project directory is now available by typing:
```console
$ cd /home/<USER>/OpenFOAM/<USER>-<OFversion>/run
```
\<OFversion\> - for example \<2.2.1\>
or
```console
$ cd $FOAM_RUN
```
Copy the tutorial examples directory in the OpenFOAM distribution to the run directory:
```console
$ cp -r $FOAM_TUTORIALS $FOAM_RUN
```
Now you can run the first case, for example the incompressible laminar flow in a cavity.
## Running Serial Applications
Create a Bash script test.sh
```bash
#!/bin/bash
ml openfoam/2.2.1-icc-openmpi1.6.5-DP
source $FOAM_BASHRC
# source to run functions
. $WM_PROJECT_DIR/bin/tools/RunFunctions
cd $FOAM_RUN/tutorials/incompressible/icoFoam/cavity
runApplication blockMesh
runApplication icoFoam
```
Job submission (example for Anselm):
```console
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16,walltime=03:00:00 test.sh
```
For information about job submission [look here](/anselm/job-submission-and-execution/).
## Running Applications in Parallel
Run the second case, for example the external incompressible turbulent flow case, motorBike.
First, we must run the serial applications blockMesh and decomposePar to prepare for the parallel computation.
!!! note
Create a Bash script test.sh:
```bash
#!/bin/bash
ml openfoam/2.2.1-icc-openmpi1.6.5-DP
source $FOAM_BASHRC
# source to run functions
. $WM_PROJECT_DIR/bin/tools/RunFunctions
cd $FOAM_RUN/tutorials/incompressible/simpleFoam/motorBike
runApplication blockMesh
runApplication decomposePar
```
Job submission
```console
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16,walltime=03:00:00 test.sh
```
This job creates a simple block mesh and the domain decomposition. Check your decomposition, and submit the parallel computation:
!!! note
Create a PBS script testParallel.pbs:
```bash
#!/bin/bash
#PBS -N motorBike
#PBS -l select=2:ncpus=16
#PBS -l walltime=01:00:00
#PBS -q qprod
#PBS -A OPEN-0-0
ml openfoam/2.2.1-icc-openmpi1.6.5-DP
source $FOAM_BASHRC
cd $FOAM_RUN/tutorials/incompressible/simpleFoam/motorBike
nproc=32
mpirun -hostfile ${PBS_NODEFILE} -np $nproc snappyHexMesh -overwrite -parallel | tee snappyHexMesh.log
mpirun -hostfile ${PBS_NODEFILE} -np $nproc potentialFoam -noFunctionObject-writep -parallel | tee potentialFoam.log
mpirun -hostfile ${PBS_NODEFILE} -np $nproc simpleFoam -parallel | tee simpleFoam.log
```
nproc – number of subdomains
Job submission
```console
$ qsub testParallel.pbs
```
## Compile Your Own Solver
Initialize the OpenFOAM environment before compiling your solver:
```console
$ ml openfoam/2.2.1-icc-openmpi1.6.5-DP
$ source $FOAM_BASHRC
$ cd $FOAM_RUN/
```
Create the directory applications/solvers in your user directory:
```console
$ mkdir -p applications/solvers
$ cd applications/solvers
```
Copy icoFoam solver’s source files
```console
$ cp -r $FOAM_SOLVERS/incompressible/icoFoam/ My_icoFoam
$ cd My_icoFoam
```
Rename icoFoam.C to My_icoFoam.C
```console
$ mv icoFoam.C My_icoFoam.C
```
Edit _files_ file in _Make_ directory:
```bash
icoFoam.C
EXE = $(FOAM_APPBIN)/icoFoam
```
and change to:
```bash
My_icoFoam.C
EXE = $(FOAM_USER_APPBIN)/My_icoFoam
```
In the My_icoFoam directory, run the compilation command:
```console
$ wmake
```
# ParaView
Open-Source, Multi-Platform Data Analysis and Visualization Application
## Introduction
**ParaView** is an open-source, multi-platform data analysis and visualization application. ParaView users can quickly build visualizations to analyze their data using qualitative and quantitative techniques. The data exploration can be done interactively in 3D or programmatically using ParaView's batch processing capabilities.
ParaView was developed to analyze extremely large datasets using distributed memory computing resources. It can be run on supercomputers to analyze datasets of exascale size as well as on laptops for smaller data.
Homepage : [http://www.paraview.org/](http://www.paraview.org/)
## Installed Version
Currently, version 5.1.2, compiled with intel/2017a against the Intel MPI library and OSMesa 12.0.2, is installed on the clusters.
## Usage
On the clusters, ParaView is to be used in client-server mode. A parallel ParaView server is launched on compute nodes by the user, and a client is launched on your desktop PC to control and view the visualization. Download the ParaView client application for your OS [here](http://paraview.org/paraview/resources/software.php).
!!! warning
Your version must match the version number installed on the cluster.
### Launching Server
To launch the server, you must first allocate compute nodes, for example
```console
$ qsub -I -q qprod -A OPEN-0-0 -l select=2
```
to launch an interactive session on 2 nodes. Refer to [Resource Allocation and Job Execution](/salomon/job-submission-and-execution/) for details.
After the interactive session is opened, load the ParaView module (the following examples are for Salomon; the Anselm equivalents are shown below them):
```console
$ ml ParaView/5.1.2-intel-2017a-mpi
```
Now launch the parallel server, with the number of processes equal to the number of nodes times 24 (16 on Anselm):
```console
$ mpirun -np 48 pvserver --use-offscreen-rendering
Waiting for client...
Connection URL: cs://r37u29n1006:11111
Accepting connection(s): r37u29n1006:11111
Anselm:
$ mpirun -np 32 pvserver --use-offscreen-rendering
Waiting for client...
Connection URL: cs://cn77:11111
Accepting connection(s): cn77:11111
```
Note that the server is listening on compute node r37u29n1006 in this case; we shall use this information later.
### Client Connection
Because a direct connection to compute nodes is not allowed on Salomon, you must establish an SSH tunnel to connect to the server. Choose a port number on your PC to be forwarded to the ParaView server, for example 12345. If your PC is running Linux, use this command to establish the SSH tunnel:
```console
Salomon: $ ssh -TN -L 12345:r37u29n1006:11111 username@salomon.it4i.cz
Anselm: $ ssh -TN -L 12345:cn77:11111 username@anselm.it4i.cz
```
Replace username with your login and r37u29n1006 (cn77) with the name of the compute node your ParaView server is running on (see the previous step).
If you use PuTTY on Windows, load Salomon connection configuration, then go to *Connection* -> *SSH* -> *Tunnels* to set up the port forwarding.
Fill the Source port and Destination fields. **Do not forget to click the Add button.**
![](../../img/paraview_ssh_tunnel_salomon.png "SSH Tunnel in PuTTY")
Now launch the ParaView client installed on your desktop PC. Select *File* -> *Connect...* and fill in the following:
![](../../img/paraview_connect_salomon.png "ParaView - Connect to server")
The configuration is now saved for later use. Click Connect to connect to the ParaView server. In the terminal where your interactive session with the ParaView server is running, you should see:
```console
Client connected.
```
You can now use Parallel ParaView.
### Close Server
Remember to close the interactive session after you finish working with ParaView server, as it will remain launched even after your client is disconnected and will continue to consume resources.