# Apptainer Container
[Apptainer][a] is a container platform. It allows you to create and run containers that package up pieces of software in a way that is portable and reproducible. You can build a container using Apptainer on your laptop, and then run it on many of the largest HPC clusters in the world, local university or company clusters, a single server, in the cloud, or on a workstation down the hall. Your container is a single file, and you don’t have to worry about how to install all the software you need on each different operating system.
## Using Docker Images
Apptainer can import, bootstrap, and even run Docker images directly from [Docker Hub][b]. You can easily run a CentOS container like this:
```console
$ cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
$ ml apptainer
$ apptainer shell docker://centos:latest
INFO: Converting OCI blobs to SIF format
INFO: Starting build...
Getting image source signatures
Copying blob a1d0c7532777 done
Copying config 8c1402b22a done
Writing manifest to image destination
Storing signatures
2023/01/17 12:55:08 info unpack layer: sha256:a1d0c75327776413fa0db9ed3adcdbadedc95a662eb1d360dad82bb913f8a1d1
2023/01/17 12:55:09 warn rootless{usr/bin/newgidmap} ignoring (usually) harmless EPERM on setxattr "security.capability"
2023/01/17 12:55:09 warn rootless{usr/bin/newuidmap} ignoring (usually) harmless EPERM on setxattr "security.capability"
2023/01/17 12:55:09 warn rootless{usr/bin/ping} ignoring (usually) harmless EPERM on setxattr "security.capability"
2023/01/17 12:55:10 warn rootless{usr/sbin/arping} ignoring (usually) harmless EPERM on setxattr "security.capability"
2023/01/17 12:55:10 warn rootless{usr/sbin/clockdiff} ignoring (usually) harmless EPERM on setxattr "security.capability"
INFO: Creating SIF file...
Apptainer> cat /etc/redhat-release
CentOS Linux release 8.4.2105
```
In this case, the image is downloaded from Docker Hub, extracted to a temporary directory, and an Apptainer interactive shell is invoked. This procedure can take a lot of time, especially with large images.
## Importing Docker Image
Apptainer containers can be in three different formats:
* read-only **squashfs** (default) - best for production
* writable **ext3** (`--writable` option)
* writable **(ch)root directory** (`--sandbox` option) - best for development
Squashfs and (ch)root directory images can be built from a Docker source directly on the cluster; no root privileges are needed. It is strongly recommended to create a native Apptainer image to speed up launching the container.
```console
$ ml apptainer
$ apptainer build ubuntu.sif docker://ubuntu:latest
INFO: Starting build...
Getting image source signatures
Copying blob 6e3729cf69e0 done
Copying config 415250ec06 done
Writing manifest to image destination
Storing signatures
2023/01/17 12:58:04 info unpack layer: sha256:6e3729cf69e0ce2de9e779575a1fec8b7fb5efdfa822829290ab6d5d1bc3e797
INFO: Creating SIF file...
INFO: Build complete: ubuntu.sif
```
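For development, you can also build a writable sandbox directory instead of a SIF file; a minimal sketch (the directory name `ubuntu_sandbox/` is only an example):

```console
$ apptainer build --sandbox ubuntu_sandbox/ docker://ubuntu:latest
```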
Alternatively, you can create your own Docker image and import it into Apptainer.
For example, we show how to create and run an Ubuntu Docker image with gvim installed:
```console
your_local_machine $ docker pull ubuntu
your_local_machine $ docker run --rm -it ubuntu bash
# apt update
# apt install vim-gtk
your_local_machine $ docker ps -a
your_local_machine $ docker commit 837a575cf8dc
your_local_machine $ docker image ls
your_local_machine $ docker tag 4dd97cefde62 ubuntu_gvim
your_local_machine $ docker save -o ubuntu_gvim.tar ubuntu_gvim
```
Copy the `ubuntu_gvim.tar` archive to IT4I supercomputers, convert it to an Apptainer image, and run it:
```console
$ ml Apptainer
$ apptainer build ubuntu_gvim.sif docker-archive://ubuntu_gvim.tar
$ apptainer shell -B /usr/user/$ID ubuntu_gvim.sif
```
Note the bind to `/usr/user/$ID` directory.
## Launching the Container
The interactive shell can be invoked by the `apptainer shell` command. This is useful for development purposes. Use the `-w | --writable` option to make changes inside the container permanent.
```console
$ apptainer shell ubuntu.sif
Apptainer> cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.1 LTS"
```
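The `--writable` option only persists changes when the underlying image is writable, for example a sandbox directory such as the one built earlier (the directory name is an assumption carried over from that example):

```console
$ apptainer shell --writable ubuntu_sandbox/
Apptainer> touch /opt/testfile    # this change survives after the shell exits
```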
A command can be run inside the container (without an interactive shell) by invoking the `apptainer exec` command.
```console
$ apptainer exec ubuntu.sif cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.1 LTS"
```
An Apptainer image can contain a runscript. This script is executed inside the container after the `apptainer run` command is used. The runscript is mostly used to run an application for which the container is built. In the following example, it is the `fortune | cowsay` command:
```console
$ apptainer build lolcow.sif docker://ghcr.io/apptainer/lolcow
INFO: Starting build...
Getting image source signatures
Copying blob 5ca731fc36c2 skipped: already exists
Copying blob 16ec32c2132b skipped: already exists
Copying config fd0daa4d89 done
Writing manifest to image destination
Storing signatures
2023/01/17 13:06:01 info unpack layer: sha256:16ec32c2132b43494832a05f2b02f7a822479f8250c173d0ab27b3de78b2f058
2023/01/17 13:06:01 info unpack layer: sha256:5ca731fc36c28789c5ddc3216563e8bfca2ab3ea10347e07554ebba1c953242e
INFO: Creating SIF file...
INFO: Build complete: lolcow.sif
$ apptainer exec lolcow.sif cowsay moo
 _____
< moo >
 -----
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
```
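To execute the container's runscript (here the `fortune | cowsay` command mentioned above), use `apptainer run`; the exact output varies because `fortune` picks a random quote:

```console
$ apptainer run lolcow.sif
```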
## Accessing /HOME and /SCRATCH Within Container
A user home directory is mounted inside the container automatically. If you need access to the **/SCRATCH** storage for your computation, this must be mounted by the `-B | --bind` option.
!!! warning
The mounted folder has to exist inside the container or the container image has to be writable!
```console
$ apptainer shell -B /scratch ubuntu.sif
Apptainer> ls /scratch
ddn sys temp work
```
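You can also bind a path to a different mount point inside the container using the `src:dest` syntax; the source path below is only an illustration, the target `/mnt` is used because it already exists in the image:

```console
$ apptainer exec -B /scratch/work/user/$USER:/mnt ubuntu.sif ls /mnt
```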
Comprehensive documentation can be found on the [Apptainer Quick Start][c] website.
[a]: https://apptainer.org/docs/user/latest/introduction.html
[b]: https://hub.docker.com/
[c]: https://apptainer.org/docs/user/latest/quick_start.html
# Spack
Spack is a package manager for supercomputers, Linux, and macOS. It makes installing scientific software easy. With Spack, you can build a package with multiple versions, configurations, platforms, and compilers, and all of these builds can coexist on the same machine.
For more information, see Spack's [documentation][a].
## Spack on IT4Innovations Clusters
```console
$ ml av Spack
---------------------- /apps/modules/devel ------------------------------
Spack/0.16.2 (D)
```
!!! note
Loading the `Spack/default` module sets up a local Spack installation in your home directory.
## First Use of the Spack/Default Module
Spack will be installed into the `~/Spack` folder. You can adjust the configuration by modifying `~/.spack/config.yaml`.
```console
$ ml Spack
== Settings for first use
Couldn't import dot_parser, loading of dot files will not be possible.
== temporary log file in case of crash /tmp/eb-wLh1RT/easybuild-54vEn3.log
== processing EasyBuild easyconfig /apps/easybuild/easyconfigs-master/easybuild/easyconfigs/s/Spack/Spack-0.16.2.eb
== building and installing Spack/0.16.2...
== fetching files...
== creating build dir, resetting environment...
== unpacking...
== patching...
== preparing...
== configuring...
== building...
== testing...
== installing...
== taking care of extensions...
== postprocessing...
== sanity checking...
== cleaning up...
== creating module...
== permissions...
== packaging...
== COMPLETED: Installation ended successfully
== Results of the build can be found in the log file(s) /home/user/.local/easybuild/software/Spack/0.16.2/easybuild/easybuild-Spack-0.16.2-20210922.123022.log
== Build succeeded for 1 out of 1
== Temporary log file(s) /tmp/eb-wLh1RT/easybuild-54vEn3.log* have been removed.
== Temporary directory /tmp/eb-wLh1RT has been removed.
== Create folder ~/Spack
The following have been reloaded with a version change:
1) Spack/default => Spack/0.16.2
$ spack --version
0.16.2
```
## Using the Spack/Default Module
```console
$ ml Spack
$ ml
Currently Loaded Modules:
1) Spack/0.16.2
```
## Build Software Package
Packages in Spack are written in pure Python, so you can do anything in Spack that you can do in Python. Python was chosen as the implementation language for two reasons. First, Python is becoming ubiquitous in the scientific software community. Second, it is a modern language and has many powerful features to help make package writing easy.
### Search for Available Software
To install software with Spack, you need to know what software is available. Use the `spack list` command.
```console
$ spack list
==> 1114 packages.
abinit font-bh-100dpi libffi npm py-ply r-maptools tetgen
ack font-bh-75dpi libfontenc numdiff py-pmw r-markdown tethex
activeharmony font-bh-lucidatypewriter-100dpi libfs nwchem py-prettytable r-mass texinfo
adept-utils font-bh-lucidatypewriter-75dpi libgcrypt ocaml py-proj r-matrix texlive
adios font-bh-ttf libgd oce py-prompt-toolkit r-matrixmodels the-platinum-searcher
adol-c font-bh-type1 libgpg-error oclock py-protobuf r-memoise the-silver-searcher
allinea-forge font-bitstream-100dpi libgtextutils octave py-psutil r-mgcv thrift
allinea-reports font-bitstream-75dpi libhio octave-splines py-ptyprocess r-mime tinyxml
ant font-bitstream-speedo libice octopus py-pudb r-minqa tinyxml2
antlr font-bitstream-type1 libiconv ompss py-py r-multcomp tk
ape font-cronyx-cyrillic libint ompt-openmp py-py2cairo r-munsell tmux
apex font-cursor-misc libjpeg-turbo opari2 py-py2neo r-mvtnorm tmuxinator
applewmproto font-daewoo-misc libjson-c openblas py-pychecker r-ncdf4 transset
appres font-dec-misc liblbxutil opencoarrays py-pycodestyle r-networkd3 trapproto
apr font-ibm-type1 libmesh opencv py-pycparser r-nlme tree
...
```
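`spack list` also accepts a filter string, so you can narrow the output to matching package names, for example:

```console
$ spack list git
```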
#### Specify Software Version (For Package)
To see more available versions of a package, run `spack versions`.
```console
$ spack versions git
==> Safe versions (already checksummed):
2.29.0 2.27.0 2.25.0 2.20.1 2.19.1 2.17.1 2.15.1 2.13.0 2.12.1 2.11.1 2.9.3 2.9.1 2.8.4 2.8.2 2.8.0 2.7.1
2.28.0 2.26.0 2.21.0 2.19.2 2.18.0 2.17.0 2.14.1 2.12.2 2.12.0 2.11.0 2.9.2 2.9.0 2.8.3 2.8.1 2.7.3
==> Remote versions (not yet checksummed):
2.33.0 2.26.2 2.23.3 2.21.1 2.18.3 2.16.1 2.13.6 2.10.4 2.7.0 2.5.2 2.4.2 2.3.0 2.0.2 1.8.5.2 1.8.3.1
2.32.0 2.26.1 2.23.2 2.20.5 2.18.2 2.16.0 2.13.5 2.10.3 2.6.7 2.5.1 2.4.1 2.2.3 2.0.1 1.8.5.1 1.8.3
2.31.1 2.25.5 2.23.1 2.20.4 2.18.1 2.15.4 2.13.4 2.10.2 2.6.6 2.5.0 2.4.0 2.2.2 2.0.0 1.8.5 1.8.2.3
2.31.0 2.25.4 2.23.0 2.20.3 2.17.6 2.15.3 2.13.3 2.10.1 2.6.5 2.4.12 2.3.10 2.2.1 1.9.5 1.8.4.5 0.7
2.30.2 2.25.3 2.22.5 2.20.2 2.17.5 2.15.2 2.13.2 2.10.0 2.6.4 2.4.11 2.3.9 2.2.0 1.9.4 1.8.4.4 0.6
2.30.1 2.25.2 2.22.4 2.20.0 2.17.4 2.15.0 2.13.1 2.9.5 2.6.3 2.4.10 2.3.8 2.1.4 1.9.3 1.8.4.3 0.5
2.30.0 2.25.1 2.22.3 2.19.6 2.17.3 2.14.6 2.12.5 2.9.4 2.6.2 2.4.9 2.3.7 2.1.3 1.9.2 1.8.4.2 0.04
2.29.3 2.24.4 2.22.2 2.19.5 2.17.2 2.14.5 2.12.4 2.8.6 2.6.1 2.4.8 2.3.6 2.1.2 1.9.1 1.8.4.1 0.03
2.29.2 2.24.3 2.22.1 2.19.4 2.16.6 2.14.4 2.12.3 2.8.5 2.6.0 2.4.7 2.3.5 2.1.1 1.9.0 1.8.4.rc0 0.02
2.29.1 2.24.2 2.22.0 2.19.3 2.16.5 2.14.3 2.11.4 2.7.6 2.5.6 2.4.6 2.3.4 2.1.0 1.8.5.6 1.8.4 0.01
2.28.1 2.24.1 2.21.4 2.19.0 2.16.4 2.14.2 2.11.3 2.7.5 2.5.5 2.4.5 2.3.3 2.0.5 1.8.5.5 1.8.3.4
2.27.1 2.24.0 2.21.3 2.18.5 2.16.3 2.14.0 2.11.2 2.7.4 2.5.4 2.4.4 2.3.2 2.0.4 1.8.5.4 1.8.3.3
2.26.3 2.23.4 2.21.2 2.18.4 2.16.2 2.13.7 2.10.5 2.7.2 2.5.3 2.4.3 2.3.1 2.0.3 1.8.5.3 1.8.3.2
```
## Graph for Software Package
Spack provides the `spack graph` command to display the dependency graph. By default, the command generates an ASCII rendering of a spec’s dependency graph.
```console
$ spack graph git
==> Warning: gcc@4.8.5 cannot build optimized binaries for "zen2". Using best target possible: "x86_64"
o git
|\
| |\
| | |\
| | | |\
| | | | |\
| | | | | |\
| | | | | | |\
| | | | | | | |\
| | | | | | | | |\
| | | | | | | | | |\
| | | | | | | | | | |\
| | | | | | | | | | | |\
| | | | | | | | | | | | |\
| | | | o | | | | | | | | | openssh
| |_|_|/| | | | | | | | | |
|/| | |/| | | | | | | | | |
| | | | |\ \ \ \ \ \ \ \ \ \
| | | | | | | | | | | | o | | curl
| |_|_|_|_|_|_|_|_|_|_|/| | |
|/| | | |_|_|_|_|_|_|_|/| | |
| | | |/| | | | | |_|_|/ / /
| | | | | | | | |/| | | | |
| | | o | | | | | | | | | | openssl
| |_|/| | | | | | | | | | |
|/| |/ / / / / / / / / / /
| |/| | | | | | | | | | |
| | | | | | | | | o | | | gettext
| | | | |_|_|_|_|/| | | |
| | | |/| | | | |/| | | |
| | | | | | | | | |\ \ \ \
| | | | | | | | | | |\ \ \ \
| | | | | | | | | | | |\ \ \ \
| | | | | | | | | | | o | | | | libxml2
| |_|_|_|_|_|_|_|_|_|/| | | | |
|/| | | | | | | | |_|/| | | | |
| | | | | | | | |/| |/| | | | |
| | | | | | | | | |/| | | | | |
o | | | | | | | | | | | | | | | zlib
/ / / / / / / / / / / / / / /
| | | | | | | | o | | | | | | xz
| | | | | | | | / / / / / /
| | | | | | | | o | | | | | tar
| | | | | | | |/ / / / / /
| | | | | | | | | | | o | automake
| |_|_|_|_|_|_|_|_|_|/| |
|/| | | | | | | | | | | |
| | | | | | | | | | | |/
| | | | | | | | | | | o autoconf
| |_|_|_|_|_|_|_|_|_|/|
|/| | | | |_|_|_|_|_|/
| | | | |/| | | | | |
o | | | | | | | | | | perl
|\ \ \ \ \ \ \ \ \ \ \
o | | | | | | | | | | | gdbm
o | | | | | | | | | | | readline
| |_|/ / / / / / / / /
|/| | | | | | | | | |
| | | o | | | | | | | libedit
| |_|/ / / / / / / /
|/| | | | | | | | |
o | | | | | | | | | ncurses
| |_|_|_|_|_|/ / /
|/| | | | | | | |
o | | | | | | | | pkgconf
/ / / / / / / /
| o | | | | | | pcre2
| / / / / / /
| | o | | | | libtool
| |/ / / / /
| o | | | | m4
| | o | | | libidn2
| | o | | | libunistring
| | |/ / /
| o | | | libsigsegv
| / / /
| | o | bzip2
| | o | diffutils
| |/ /
| o | libiconv
| /
| o expat
| o libbsd
|
o berkeley-db
```
### Information for Software Package
To get more information on a particular package from `spack list`, use `spack info`.
```console
$ spack info git
AutotoolsPackage: git
Description:
Git is a free and open source distributed version control system
designed to handle everything from small to very large projects with
speed and efficiency.
Homepage: http://git-scm.com
Tags:
None
Preferred version:
2.29.0 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.29.0.tar.gz
Safe versions:
2.29.0 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.29.0.tar.gz
2.28.0 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.28.0.tar.gz
2.27.0 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.27.0.tar.gz
2.26.0 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.26.0.tar.gz
2.25.0 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.25.0.tar.gz
2.21.0 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.21.0.tar.gz
2.20.1 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.20.1.tar.gz
2.19.2 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.19.2.tar.gz
2.19.1 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.19.1.tar.gz
2.18.0 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.18.0.tar.gz
2.17.1 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.17.1.tar.gz
2.17.0 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.17.0.tar.gz
2.15.1 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.15.1.tar.gz
2.14.1 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.14.1.tar.gz
2.13.0 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.13.0.tar.gz
2.12.2 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.12.2.tar.gz
2.12.1 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.12.1.tar.gz
2.12.0 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.12.0.tar.gz
2.11.1 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.11.1.tar.gz
2.11.0 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.11.0.tar.gz
2.9.3 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.9.3.tar.gz
2.9.2 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.9.2.tar.gz
2.9.1 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.9.1.tar.gz
2.9.0 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.9.0.tar.gz
2.8.4 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.8.4.tar.gz
2.8.3 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.8.3.tar.gz
2.8.2 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.8.2.tar.gz
2.8.1 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.8.1.tar.gz
2.8.0 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.8.0.tar.gz
2.7.3 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.7.3.tar.gz
2.7.1 https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.7.1.tar.gz
Variants:
Name [Default] Allowed values Description
============== ============== ===========================================
tcltk [off] on, off Gitk: provide Tcl/Tk in the run environment
Installation Phases:
autoreconf configure build install
Build Dependencies:
autoconf automake curl expat gettext iconv libidn2 libtool m4 openssl pcre pcre2 perl tk zlib
Link Dependencies:
curl expat gettext iconv libidn2 openssl pcre pcre2 perl tk zlib
Run Dependencies:
openssh
Virtual Packages:
None
```
### Install Software Package
`spack install` will install any package shown by `spack list`. For example, to install the `git` package, type `spack install git` for the default version or `spack install git@version` to choose a particular one.
```console
$ spack install git@2.29.0
==> Warning: specifying a "dotkit" module root has no effect [support for "dotkit" has been dropped in v0.13.0]
==> Warning: gcc@4.8.5 cannot build optimized binaries for "zen2". Using best target possible: "x86_64"
==> Installing libsigsegv-2.12-lctnabj6w4bmnyxo7q6ct4wewke2bqin
==> No binary for libsigsegv-2.12-lctnabj6w4bmnyxo7q6ct4wewke2bqin found: installing from source
==> Fetching https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/_source-cache/archive/3a/3ae1af359eebaa4ffc5896a1aee3568c052c99879316a1ab57f8fe1789c390b6.tar.gz
######################################################################## 100.0%
==> libsigsegv: Executing phase: 'autoreconf'
==> libsigsegv: Executing phase: 'configure'
==> libsigsegv: Executing phase: 'build'
==> libsigsegv: Executing phase: 'install'
[+] /home/kru0052/Spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/libsigsegv-2.12-lctnabj6w4bmnyxo7q6ct4wewke2bqin
==> Installing berkeley-db-18.1.40-bwuaqjex546zw3bimt23bgokfctnt46y
==> No binary for berkeley-db-18.1.40-bwuaqjex546zw3bimt23bgokfctnt46y found: installing from source
==> Fetching https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/_source-cache/archive/0c/0cecb2ef0c67b166de93732769abdeba0555086d51de1090df325e18ee8da9c8.tar.gz
######################################################################## 100.0%
...
...
==> Fetching https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/_source-cache/archive/8f/8f3bf70ddb515674ce2e19572920a39b1be96af12032b77f1dd57898981fb151.tar.gz
################################################################################################################################################################################################################################ 100.0%
==> Moving resource stage
source : /tmp/kru0052/resource-git-manpages-cabbbb7qozeijgspy2wl3hf6on6f4b4c/spack-src/
destination : /tmp/kru0052/spack-stage-git-2.29.0-cabbbb7qozeijgspy2wl3hf6on6f4b4c/spack-src/git-manpages
==> git: Executing phase: 'autoreconf'
==> git: Executing phase: 'configure'
==> git: Executing phase: 'build'
==> git: Executing phase: 'install'
[+] /home/kru0052/Spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/git-2.29.0-cabbbb7qozeijgspy2wl3hf6on6f4b4c
```
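The same spec syntax also lets you request a particular compiler or variant (the `tcltk` variant is listed by `spack info git` above); whether the requested compiler version is available for your installation is an assumption here:

```console
$ spack install git@2.29.0 %gcc@4.8.5 +tcltk
```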
!!! warning
FTP on the cluster is not allowed; you must edit the source link.
### Edit Rule
```console
$ spack edit git
```
!!! note
To change the source link (`ftp://` to `http://`), use `spack create URL -f` to regenerate rules.
## Available Spack Module
We know that `spack list` shows you the names of available packages, but how do you figure out which are already installed?
```console
$ spack find
==> 19 installed packages.
-- linux-centos6-x86_64 / gcc@4.4.7 -----------------------------
autoconf@2.69 cmake@3.7.1 expat@2.2.0 git@2.11.0 libsigsegv@2.10 m4@1.4.17 openssl@1.0.2j perl@5.24.0 tar@1.29 zlib@1.2.10
bzip2@1.0.6 curl@7.50.3 gettext@0.19.8.1 libiconv@1.14 libxml2@2.9.4 ncurses@6.0 pcre@8.41 pkg-config@0.29.1 xz@5.2.2
```
Spack colorizes output.
```console
$ spack find | less -R
-- linux-centos7-x86_64 / gcc@4.8.5 -----------------------------
autoconf@2.69
automake@1.16.2
berkeley-db@18.1.40
bzip2@1.0.8
curl@7.72.0
diffutils@3.7
expat@2.2.10
gdbm@1.18.1
gettext@0.21
git@2.29.0
libbsd@0.10.0
libedit@3.1-20191231
libiconv@1.16
libidn2@2.3.0
libsigsegv@2.12
libtool@2.4.6
libunistring@0.9.10
libxml2@2.9.10
m4@1.4.18
ncurses@6.2
openssh@8.4p1
openssl@1.1.1h
pcre2@10.35
perl@5.32.0
pkgconf@1.7.3
readline@8.0
tar@1.32
xz@5.2.5
zlib@1.2.11
```
`spack find` shows the specs of installed packages. A spec is like a name, but it has a version, compiler, architecture, and build options associated with it. In Spack, you can have many installations of the same package with different specs.
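To see those spec details for installed packages, you can ask `spack find` for long output with variants and dependencies:

```console
$ spack find -l -v -d git
```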
## Load and Unload Module
Using the generated environment modules or dotkits directly is not particularly pretty, easy to remember, or easy to type. Luckily, Spack has its own interface for using them.
```console
$ spack load git
==> This command requires spack's shell integration.
To initialize spack's shell commands, you must run one of
the commands below. Choose the right command for your shell.
For bash and zsh:
. ~/.local/easybuild/software/Spack/0.16.2/share/spack/setup-env.sh
For csh and tcsh:
setenv SPACK_ROOT ~/.local/easybuild/software/Spack/0.16.2
source ~/.local/easybuild/software/Spack/0.16.2/share/spack/setup-env.csh
```
### First Usage
```console
$ . ~/.local/easybuild/software/Spack/0.16.2/share/spack/setup-env.sh
```
```console
$ git --version
git version 1.7.1
$ spack load git
$ git --version
git version 2.29.0
$ spack unload git
$ git --version
git version 1.7.1
```
## Uninstall Software Package
If more than one installed package matches, Spack will ask you either to provide a version number to remove the ambiguity or to use the `--all` option to uninstall all of the matching packages.
You may force uninstall a package with the `--force` option.
```console
$ spack uninstall git
==> The following packages will be uninstalled :
-- linux-centos7-x86_64 / gcc@4.4.7 -----------------------------
xmh3hmb git@2.29.0%gcc
==> Do you want to proceed ? [y/n]
y
==> Successfully uninstalled git@2.29.0%gcc@4.8.5 arch=linux-centos6-x86_64 -xmh3hmb
```
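To remove the ambiguity explicitly, or to remove every matching installation at once, the corresponding commands are:

```console
$ spack uninstall git@2.29.0
$ spack uninstall --all git
```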
[a]: https://spack.readthedocs.io/en/latest/
# Virtualization
<!--
we need to verify this
-->
Running virtual machines on compute nodes.
## Introduction
There are situations when our clusters' environment is not suitable for users' needs:
* Application requires a different operating system (e.g. Windows) or it is not available for Linux;
* Application requires different versions of base system libraries and tools;
* Application requires specific setup (installation, configuration) of a complex software stack;
* Application requires privileged access to the operating system;
* ... and combinations of the above cases.
The solution for these cases is **virtualization**. The clusters' environment allows running virtual machines on compute nodes. Users can create their own operating system images with a specific software stack and run instances of these images as virtual machines on compute nodes. Virtual machines are run through the standard mechanism of [Resource Allocation and Job Execution][1].
The solution is based on the QEMU-KVM software stack and provides hardware-assisted x86 virtualization.
## Limitations
The clusters' infrastructure and environment were not designed for virtualization. Compute nodes, storage, and infrastructure are intended and optimized for running HPC jobs. This implies a suboptimal configuration of virtualization and some limitations.
Virtualization on the clusters does not provide the performance and all the features of a native environment. There is a significant performance hit (degradation) in I/O performance (storage, network). Virtualization is therefore not suitable for I/O (disk, network) intensive workloads.
Virtualization has other drawbacks as well; it is not easy to set up an efficient solution.
The solution described in the [HOWTO][2] section is suitable for single node tasks; it does not introduce virtual machine clustering.
!!! note
Consider virtualization as a last resort solution for your needs.
!!! warning
Consult use of virtualization with IT4Innovations' support.
For running a Windows application (when the source code and a Linux native application are not available), consider using Wine, the Windows compatibility layer. Many Windows applications can be run using Wine with less effort and better performance than with virtualization.
## Licensing
IT4Innovations does not provide any licenses for operating systems and software of virtual machines. Users are (in accordance with the [Acceptable Use Policy document][a]) fully responsible for licensing all software running on virtual machines on the clusters. Be aware of the complex licensing conditions for software in virtual environments.
!!! note
Users are responsible for licensing OS (e.g. MS Windows) and all software running on their virtual machines.
## Howto
### Virtual Machine Job Workflow
We propose this job workflow:
![Workflow](../../img/virtualization-job-workflow.png)
Our recommended solution is that the job script creates a distinct shared job directory, which serves as a central point for data exchange between the cluster's environment, the compute node (host) (e.g. HOME, SCRATCH, local scratch, and other local or cluster file systems), and the virtual machine (guest). The job script links or copies the input data and instructions on what to do (the run script) for the virtual machine into the job directory; the virtual machine processes the input data according to those instructions and stores the output back to the job directory. We recommend running the virtual machine in the so-called [snapshot mode][3]: the image is immutable and does not change, so one image can be used for many concurrent jobs.
### Procedure
1. Prepare the image of your virtual machine
1. Optimize the image of your virtual machine for cluster's virtualization
1. Modify your image for running jobs
1. Create a job script for executing the virtual machine
1. Run jobs
### Prepare Image of Your Virtual Machine
You can either use your existing image or create a new image from scratch.
QEMU currently supports these image types or formats:
* raw
* cloop
* cow
* qcow
* qcow2
* vmdk - VMware 3 & 4, or 6 image format, for exchanging images with that product
* vdi - VirtualBox 1.1 compatible image format, for exchanging images with VirtualBox.
You can convert your existing image using the `qemu-img convert` command. Supported formats of this command are: `blkdebug blkverify bochs cloop cow dmg file ftp ftps host_cdrom host_device host_floppy http https nbd parallels qcow qcow2 qed raw sheepdog tftp vdi vhdx vmdk vpc vvfat`.
We recommend using the advanced QEMU-native image format qcow2.
More about QEMU images [here][b].
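For example, converting a VMware image to qcow2 could look like this (the file names are illustrative):

```console
$ qemu-img convert -f vmdk -O qcow2 win.vmdk win.qcow2
```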
### Optimize Image of Your Virtual Machine
Use virtio devices (for the disk/drive and the network adapter) and install virtio drivers (paravirtualized drivers) into the virtual machine. There is a significant performance gain when using virtio drivers. For more information, see [Virtio Linux][c] and [Virtio Windows][d].
Disable all unnecessary services and tasks. Restrict all unnecessary operating system operations.
Remove all unnecessary software and files.
Remove all paging space, swap files, partitions, etc.
Shrink your image. (It is recommended to zero all free space and reconvert the image using `qemu-img`.)
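A possible sketch of the shrinking step: zero the free space inside the guest, then reconvert the image on the host (file names are illustrative):

```console
guest$ dd if=/dev/zero of=/zerofile bs=1M; rm -f /zerofile   # dd stops when the disk is full
$ qemu-img convert -O qcow2 image.qcow2 image-shrunk.qcow2
```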
### Modify Your Image for Running Jobs
Your image should contain an operating system startup script. The startup script should run the application and, when the application exits, shut down or quit the virtual machine.
We recommend that the startup script:
* maps the job directory from the host (compute node);
* runs the script (we call it the "run script") from the job directory and waits for the application to exit;
* for management purposes, waits for some time period (a few minutes) if the run script does not exist;
* shuts down/quits the OS.
For Windows operating systems, we suggest using a Local Group Policy Startup script; for Linux operating systems, use the rc.local, runlevel init script, or similar service.
Example startup script for the Windows virtual machine:
```bat
@echo off
set LOG=c:\startup.log
set MAPDRIVE=z:
set SCRIPT=%MAPDRIVE%\run.bat
set TIMEOUT=300
echo %DATE% %TIME% Running startup script>%LOG%
rem Mount share
echo %DATE% %TIME% Mounting shared drive>>%LOG%
net use z: \\10.0.2.4\qemu >>%LOG% 2>&1
dir z:\ >>%LOG% 2>&1
echo. >>%LOG%
if exist %MAPDRIVE%\ (
  echo %DATE% %TIME% The drive "%MAPDRIVE%" exists>>%LOG%
  if exist %SCRIPT% (
    echo %DATE% %TIME% The script file "%SCRIPT%" exists>>%LOG%
    echo %DATE% %TIME% Running script %SCRIPT%>>%LOG%
    set TIMEOUT=0
    call %SCRIPT%
  ) else (
    echo %DATE% %TIME% The script file "%SCRIPT%" does not exist>>%LOG%
  )
) else (
  echo %DATE% %TIME% The drive "%MAPDRIVE%" does not exist>>%LOG%
)
echo. >>%LOG%
timeout /T %TIMEOUT%
echo %DATE% %TIME% Shut down>>%LOG%
shutdown /s /t 0
```
The example startup script maps the shared job directory as drive z: and looks for a run script called run.bat. If the run script is found, it is run; otherwise, the script waits for 5 minutes and then shuts down the virtual machine.
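For a Linux guest, a minimal counterpart could be placed in rc.local; this is only a sketch, assuming the shared job directory is exported as the `qemu` SMB share (as in the Windows example) and the run script is named run.sh:

```console
$ cat /etc/rc.local
#!/bin/sh
# mount the shared job directory exported by the QEMU user network SMB server
mount -t cifs //10.0.2.4/qemu /mnt -o guest
# run the job's run script if present, then shut the guest down
[ -x /mnt/run.sh ] && /mnt/run.sh
poweroff
```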
### Create Job Script for Executing Virtual Machine
Create the job script according to the recommended [Virtual Machine Job Workflow][4].
Example job for the Windows virtual machine:
```bash
#!/bin/sh

JOB_DIR=/scratch/$USER/win/${SLURM_JOBID}

# Virtual machine settings
VM_IMAGE=~/work/img/win.img
VM_MEMORY=49152
VM_SMP=16

# Prepare the job directory
mkdir -p ${JOB_DIR} && cd ${JOB_DIR} || exit 1
ln -s ~/work/win .
ln -s /scratch/$USER/data .
ln -s ~/work/win/script/run/run-appl.bat run.bat

# Run the virtual machine
export TMPDIR=/lscratch/${SLURM_JOBID}
module add qemu
qemu-system-x86_64 \
  -enable-kvm \
  -cpu host \
  -smp ${VM_SMP} \
  -m ${VM_MEMORY} \
  -vga std \
  -localtime \
  -usb -usbdevice tablet \
  -device virtio-net-pci,netdev=net0 \
  -netdev user,id=net0,smb=${JOB_DIR},hostfwd=tcp::3389-:3389 \
  -drive file=${VM_IMAGE},media=disk,if=virtio \
  -snapshot \
  -nographic
```
The job script links application data (win), input data (data), and run script (run.bat) into the job directory and runs the virtual machine.
Example run script (run.bat) for the Windows virtual machine:
```doscon
z:
cd winappl
call application.bat z:data z:output
```
The run script runs the application from the shared job directory (mapped as drive z:), processes the input data (z:data) from the job directory, and stores output to the job directory (z:output).
### Run Jobs
Run jobs as usual, see [Resource Allocation and Job Execution][1]. Use only full node allocation for virtualization jobs.
### Running Virtual Machines
Virtualization is enabled only on compute nodes; it does not work on login nodes.
Load the QEMU environment module:
```console
$ module add qemu
```
Get help:
```console
$ man qemu
```
Run the virtual machine (simple):
```console
$ qemu-system-x86_64 -hda linux.img -enable-kvm -cpu host -smp 16 -m 32768 -vga std -vnc :0
$ qemu-system-x86_64 -hda win.img -enable-kvm -cpu host -smp 16 -m 32768 -vga std -localtime -usb -usbdevice tablet -vnc :0
```
You can access the virtual machine via a VNC viewer (the `-vnc` option) by connecting to the IP address of the compute node. For VNC, you must use a VPN network.
Install the virtual machine from the ISO file:
```console
$ qemu-system-x86_64 -hda linux.img -enable-kvm -cpu host -smp 16 -m 32768 -vga std -cdrom linux-install.iso -boot d -vnc :0
$ qemu-system-x86_64 -hda win.img -enable-kvm -cpu host -smp 16 -m 32768 -vga std -localtime -usb -usbdevice tablet -cdrom win-install.iso -boot d -vnc :0
```
Run the virtual machine using optimized devices, user network back-end with sharing and port forwarding, in snapshot mode:
```console
$ qemu-system-x86_64 -drive file=linux.img,media=disk,if=virtio -enable-kvm -cpu host -smp 16 -m 32768 -vga std -device virtio-net-pci,netdev=net0 -netdev user,id=net0,smb=/scratch/$USER/tmp,hostfwd=tcp::2222-:22 -vnc :0 -snapshot
$ qemu-system-x86_64 -drive file=win.img,media=disk,if=virtio -enable-kvm -cpu host -smp 16 -m 32768 -vga std -localtime -usb -usbdevice tablet -device virtio-net-pci,netdev=net0 -netdev user,id=net0,smb=/scratch/$USER/tmp,hostfwd=tcp::3389-:3389 -vnc :0 -snapshot
```
Port forwarding allows you to access the virtual machine via SSH (Linux) or RDP (Windows) by connecting to the IP address of the compute node (port 2222 for SSH, port 3389 for RDP in the examples above). You must use a VPN network.
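For example, from a machine on the VPN you could reach a Linux guest's SSH daemon through the forwarded port like this (the guest account name is an assumption):

```console
$ ssh -p 2222 guestuser@<compute-node-address>
```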
!!! note
Keep in mind that if you use virtio devices, you must have the virtio drivers installed in your virtual machine.
### Networking and Data Sharing
For virtual machine networking, we suggest using the (default) user network back-end (sometimes called slirp). This network back-end NATs the virtual machines and provides useful services such as DHCP, DNS, SMB sharing, and port forwarding.
In the default configuration, the IP network 10.0.2.0/24 is used: the host has the IP address 10.0.2.2, the DNS server 10.0.2.3, the SMB server 10.0.2.4, and virtual machines obtain addresses from the range 10.0.2.15-10.0.2.31. Virtual machines have access to the cluster's network via NAT on the compute node (host).
Simple network setup:
```console
$ qemu-system-x86_64 ... -net nic -net user
```
(This is the default when no `-net` options are given.)
Simple network setup with sharing and port forwarding (obsolete but simpler syntax, lower performance):
```console
$ qemu-system-x86_64 ... -net nic -net user,smb=/scratch/$USER/tmp,hostfwd=tcp::3389-:3389
```
Optimized network setup with sharing and port forwarding:
```console
$ qemu-system-x86_64 ... -device virtio-net-pci,netdev=net0 -netdev user,id=net0,smb=/scratch/$USER/tmp,hostfwd=tcp::2222-:22
```
### Advanced Networking
#### Internet Access
Sometimes, your virtual machine needs access to the internet (to install software, updates, software activation, etc.). We suggest using a Virtual Distributed Ethernet (VDE) enabled QEMU with SLIRP running on the login node, tunneled to the compute node. Note that this setup has very low performance, the worst of all the described solutions.
Load the VDE enabled QEMU environment module (unload the standard QEMU module first if necessary):
```console
$ module add qemu/2.1.2-vde2
```
Create a virtual network switch:
```console
$ vde_switch -sock /tmp/sw0 -mgmt /tmp/sw0.mgmt -daemon
```
Run a SLIRP daemon over the SSH tunnel on the login node and connect it to the virtual network switch:
```console
$ dpipe vde_plug /tmp/sw0 = ssh login1 $VDE2_DIR/bin/slirpvde -s - --dhcp &
```
Run QEMU using the VDE network back-end and connect it to the created virtual switch.
Basic setup (obsolete syntax):
```console
$ qemu-system-x86_64 ... -net nic -net vde,sock=/tmp/sw0
```
Setup using a Virtio device (obsolete syntax):
```console
$ qemu-system-x86_64 ... -net nic,model=virtio -net vde,sock=/tmp/sw0
```
Optimized setup:
```console
$ qemu-system-x86_64 ... -device virtio-net-pci,netdev=net0 -netdev vde,id=net0,sock=/tmp/sw0
```
#### TAP Interconnect
Both the user and the VDE network back-ends have low performance. For a fast interconnect (10 Gbit/s and more) between the compute node (host) and the virtual machine (guest), we suggest using the Linux kernel TAP device.
Clusters provide the TAP device tap0 for your job. The TAP interconnect does not provide any services (like NAT, DHCP, DNS, SMB, etc.), just raw networking, so you must provide these services yourself if you need them.
To enable the TAP interconnect feature, you need to specify the `virt_network:1` Slurm resource at job submission.
```console
$ salloc ... --gres=virt_network:1
```
Run QEMU with TAP network back-end:
```console
$ qemu-system-x86_64 ... -device virtio-net-pci,netdev=net1 -netdev tap,id=net1,ifname=tap0,script=no,downscript=no
```
The tap0 interface has the IP address 192.168.1.1 and network mask 255.255.255.0 (/24). In the virtual machine, use an IP address from the range 192.168.1.2-192.168.1.254. For your convenience, some ports on the tap0 interface are redirected to higher-numbered ports, so that you, as a non-privileged user, can provide services on these ports.
Redirected ports:
* DNS: UDP/53->UDP/3053, TCP/53->TCP/3053
* DHCP: UDP/67->UDP/3067
* SMB: TCP/139->TCP/3139, TCP/445->TCP/3445
You can configure the virtual machine's IP address statically or dynamically. For dynamic addressing, provide your own DHCP server on port 3067 of the tap0 interface; you can also provide your own DNS server on port 3053 of the tap0 interface, for example:
```console
$ dnsmasq --interface tap0 --bind-interfaces -p 3053 --dhcp-alternate-port=3067,68 --dhcp-range=192.168.1.15,192.168.1.32 --dhcp-leasefile=/tmp/dhcp.leasefile
```
You can also provide your own SMB services (on ports 3139 and 3445) for high-performance data sharing.
Example smb.conf (not optimized):
```console
$ cat smb.conf
[global]
socket address=192.168.1.1
smb ports = 3445 3139
private dir=/tmp/qemu-smb
pid directory=/tmp/qemu-smb
lock directory=/tmp/qemu-smb
state directory=/tmp/qemu-smb
ncalrpc dir=/tmp/qemu-smb/ncalrpc
log file=/tmp/qemu-smb/log.smbd
smb passwd file=/tmp/qemu-smb/smbpasswd
security = user
map to guest = Bad User
unix extensions = no
load printers = no
printing = bsd
printcap name = /dev/null
disable spoolss = yes
log level = 1
guest account = USER
[qemu]
path=/scratch/USER/tmp
read only=no
guest ok=yes
writable=yes
follow symlinks=yes
wide links=yes
force user=USER
```
(Replace USER with your login name.)
Run the SMB services:
```console
$ smbd -s /tmp/qemu-smb/smb.conf
```
A virtual machine can have more than one network interface controller and can use more than one network back-end, so you can combine, for example, a user network back-end and a TAP interconnect.
### Snapshot Mode
In snapshot mode, the image is not written to; changes are written to a temporary file (and discarded after the virtual machine exits). **It is the strongly recommended mode for running your jobs.** Set the `TMPDIR` environment variable to a local scratch directory for temporary file placement:
```console
$ export TMPDIR=/lscratch/${SLURM_JOBID}
$ qemu-system-x86_64 ... -snapshot
```
### Windows Guests
For Windows guests, we recommend these options:
```console
$ qemu-system-x86_64 ... -localtime -usb -usbdevice tablet
```
[1]: ../../general/job-submission-and-execution.md
[2]: #howto
[3]: #snapshot-mode
[4]: #virtual-machine-job-workflow
[a]: http://www.it4i.cz/acceptable-use-policy.pdf
[b]: http://en.wikibooks.org/wiki/QEMU/Images
[c]: http://www.linux-kvm.org/page/Virtio
[d]: http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers
# NICE DCV
**Install NICE DCV** (on your local machine)
* [Overview][a]
* [Download][b]
**Install VPN client** [VPN Access][1]
## How to Use
Read the documentation on [VNC server][3].
**Run VNC**
* Login to Barbora
```console
[username@yourPC]$ ssh -i path/to/your_id_rsa user@login1.Barbora.it4i.cz
```
* Run a Job on Barbora on the Vizserv1 (or Vizserv2)
```console
[user@login1]$ salloc -p qviz -A OPEN-XX-XX --nodes=1 --ntasks=4 --time=04:00:00 --nodelist=vizserv1 --job-name=Vizserver1
```
You are connected to Vizserv1 now.
```console
[yourusername@vizserv1 ~]$
```
**VNC Server**
* Create a VNC Server Password
!!! note
A VNC server password should be set before the first login to the VNC server. Use a strong password.
```console
[yourusername@vizserv1 ~]$ vncpasswd
password:
verify:
```
* Check the display number of the connected users
```console
[yourusername@vizserv1 ~]$ ps aux | grep Xvnc | sed -rn 's/(\s) .*Xvnc.* (\:[0-9]+) .*/\1 \2/p'
username :15
username :61
.....
```
!!! note
The VNC server runs on port 59xx, where xx is the display number. To get your port number, simply compute 5900 + display number; in our example 5900 + 11 = 5911. **Calculate your own port number and use it instead of 5911 in the examples below.**
* Start your remote VNC server
!!! note
Choose a display number that differs from the other users' display numbers. Also remember that the display number should be lower than or equal to 99.
```console
[yourusername@vizserv1 ~]$ vncserver :11 -geometry 1600x900 -depth 24
Running applications in /etc/vnc/xstartup
VNC Server signature: f0-3d-df-ee-6f-a4-b1-62
Log file is /home/yourusername/.vnc/vizserv1:11.log
New desktop is vizserv1:11 (195.113.250.204:11)
```
* Check the display number
```console
[yourusername@vizserv1 ~]$ ps aux | grep Xvnc | sed -rn 's/(\s) .*Xvnc.* (\:[0-9]+) .*/\1 \2/p'
username :15
username :61
yourusername :11
.....
```
!!! note
You started a new VNC server. The server is listening on port 59xx; in our example on port 5911 (5900 + 11 = 5911).
**VNC Client (your local machine)**
You are going to connect your local machine to the remote VNC server.
* Create the SSH tunnel (on your local machine) for port 59xx and for the range of ports 7300-7305.
```console
[username@yourPC]$ ssh -i path/to/your_id_rsa -TN -f user@vizserv1.barbora.it4i.cz -L 5911:localhost:5911 -L 7300:localhost:7300 -L 7301:localhost:7301 -L 7302:localhost:7302 -L 7303:localhost:7303 -L 7304:localhost:7304 -L 7305:localhost:7305
```
To create an SSH tunnel on Windows, download the [PuTTY installer][d] and follow the instructions in the [VNC server][3] section.
* Run NICE DCV or one of the recommended clients on your local machine. Recommended clients are [TightVNC][f] or [TigerVNC][g] (free, open source, available for almost any platform).
* Fill in the localhost:59xx address, click Connect, and enter the password.
![](../../img/dcv_5911_1.png)
![](../../img/dcv_5911_2.png)
**Test the OpenGL functionality on Vizserv**
* Run the command in the VNC terminal
```console
[kub0393@vizserv1 ~]$ /apps/easybuild/glxgears
```
![](../../img/vizsrv_5911.png)
**Test the NICE DCV functionality**
* Run the command in VNC terminal
```console
[kub0393@vizserv1 ~]$ dcvtest
```
![](../../img/dcvtest_5911.png)
## Stop the Service Correctly After Finishing Work
**On VNC server site**
* Logout from your local VNC window
![](../../img/vizsrv_logout.png)
**On your local machine**
* Find the Process ID (PID) of your SSH tunnel
```console
[username@yourPC]$ netstat -natp | grep 5911
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.0.1:5911 0.0.0.0:* LISTEN 5675/ssh
tcp6 0 0 ::1:5911 :::* LISTEN 5675/ssh
```
or
```console
[username@yourPC]$ lsof -i :5911
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ssh 5675 user 5u IPv6 571419 0t0 TCP ip6-localhost:5911 (LISTEN)
ssh 5675 user 6u IPv4 571420 0t0 TCP localhost:5911 (LISTEN)
```
!!! note
PID in our example is 5675. You also need to use the correct port number for both commands above. In this example, the port number is 5911. Your PID and port number may differ.
* Kill the process
```console
[username@yourPC]$ kill 5675
```
* If on Windows, close PuTTY.
[1]: ../../general/accessing-the-clusters/vpn-access.md
[2]: ../../general/accessing-the-clusters/shell-access-and-data-transfer/putty.md
[3]: ../../general/accessing-the-clusters/graphical-user-interface/vnc.md
[a]: https://aws.amazon.com/hpc/dcv/
[b]: https://www.nice-dcv.com/2016-0.html
[c]: https://winscp.net/eng/download.php
[d]: https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html
[e]: https://vpn.it4i.cz/user
[f]: http://www.tightvnc.com
[g]: http://sourceforge.net/apps/mediawiki/tigervnc/index.php?title=Main_Page
!!! warning
This page has not been updated yet. The page does not reflect the transition from PBS to Slurm.
# GPI-2
## Introduction
Programming Next Generation Supercomputers: GPI-2 is an API library for asynchronous interprocess, cross-node communication. It provides a flexible, scalable, and fault tolerant interface for parallel applications.
The GPI-2 library implements the GASPI specification ([Global Address Space Programming Interface][a]). GASPI is a Partitioned Global Address Space (PGAS) API. It aims at scalable, flexible, and failure-tolerant computing in massively parallel environments.
## Modules
For the current list of installed versions, use:
```console
$ ml av GPI-2
```
The module sets up the environment variables required for linking and running GPI-2 enabled applications. Loading the module without specifying a version (`ml gpi2`) loads the default `gpi2/1.0.2` module.
## Linking
!!! note
Link with -lGPI2 -libverbs
Load the `gpi2` module and use the `-lGPI2` and `-libverbs` switches to link your code against GPI-2. GPI-2 requires the OFED InfiniBand communication library ibverbs.
### Compiling and Linking With Intel Compilers
```console
$ ml intel
$ ml gpi2
$ icc myprog.c -o myprog.x -Wl,-rpath=$LIBRARY_PATH -lGPI2 -libverbs
```
### Compiling and Linking With GNU Compilers
```console
$ ml gcc
$ ml gpi2
$ gcc myprog.c -o myprog.x -Wl,-rpath=$LIBRARY_PATH -lGPI2 -libverbs
```
## Running the GPI-2 Codes
!!! note
`gaspi_run` starts the GPI-2 application
The `gaspi_run` utility is used to start and run GPI-2 applications:
```console
$ gaspi_run -m machinefile ./myprog.x
```
A machine file (**machinefile**) with the hostnames of the nodes where the application will run must be provided. The machinefile lists all nodes on which to run, one entry per process (a node is listed once for each process that should run on it). This file may be created by hand or obtained from the standard $PBS_NODEFILE:
```console
$ cut -f1 -d"." $PBS_NODEFILE > machinefile
```
machinefile:
```console
cn79
cn80
```
This machinefile will run 2 GPI-2 processes, one on node cn79 and one on node cn80.
machinefile:
```console
cn79
cn79
cn80
cn80
```
This machinefile will run 4 GPI-2 processes, two on node cn79 and two on node cn80.
!!! note
Use `mpiprocs` to control how many GPI-2 processes will run per node.
Example:
```console
$ qsub -A OPEN-0-0 -q qexp -l select=2:ncpus=16:mpiprocs=16 -I
```
This example will produce $PBS_NODEFILE with 16 entries per node.
### Gaspi_logger
!!! note
`gaspi_logger` views the output from GPI-2 application ranks.
The `gaspi_logger` utility is used to view the output from all nodes except the master node (rank 0). `gaspi_logger` is started in another session on the master node - the node where `gaspi_run` is executed. The output of the application, when called with `gaspi_printf()`, will be redirected to the `gaspi_logger`. Other I/O routines (e.g. `printf`) will not.
## Example
Following is an example of GPI-2 enabled code:
```cpp
#include <GASPI.h>
#include <stdlib.h>

void success_or_exit ( const char* file, const int line, const int ec)
{
  if (ec != GASPI_SUCCESS)
    {
      gaspi_printf ("Assertion failed in %s[%i]:%d\n", file, line, ec);
      exit (1);
    }
}

#define ASSERT(ec) success_or_exit (__FILE__, __LINE__, ec);

int main(int argc, char *argv[])
{
  gaspi_rank_t rank, num;
  gaspi_return_t ret;

  /* Initialize GPI-2 */
  ASSERT( gaspi_proc_init(GASPI_BLOCK) );

  /* Get ranks information */
  ASSERT( gaspi_proc_rank(&rank) );
  ASSERT( gaspi_proc_num(&num) );

  gaspi_printf("Hello from rank %d of %d\n", rank, num);

  /* Terminate */
  ASSERT( gaspi_proc_term(GASPI_BLOCK) );

  return 0;
}
```
Load the modules and compile:
```console
$ ml gcc gpi2
$ gcc helloworld_gpi.c -o helloworld_gpi.x -Wl,-rpath=$LIBRARY_PATH -lGPI2 -libverbs
```
Submit the job and run the GPI-2 application:
```console
$ qsub -q qexp -l select=2:ncpus=1:mpiprocs=1,place=scatter,walltime=00:05:00 -I
qsub: waiting for job 171247.dm2 to start
qsub: job 171247.dm2 ready
cn79 $ ml gpi2
cn79 $ cut -f1 -d"." $PBS_NODEFILE > machinefile
cn79 $ gaspi_run -m machinefile ./helloworld_gpi.x
Hello from rank 0 of 2
```
At the same time, in another session, you may start the GASPI logger:
```console
$ ssh cn79
cn79 $ gaspi_logger
GASPI Logger (v1.1)
[cn80:0] Hello from rank 1 of 2
```
In this example, we compile the helloworld_gpi.c code using the **GNU compiler** (gcc) and link it against the GPI-2 and ibverbs libraries. The library search path is compiled in. For execution, we use the qexp queue, 2 nodes, 1 core each. The GPI module must be loaded on the master compute node (in this example cn79); `gaspi_logger` is used from a different session to view the output of the second process.
[a]: http://www.gaspi.de/en/project.html
# In Situ Visualization
## Introduction
In situ visualization allows you to visualize your data while your computation is progressing on multiple nodes of a cluster. It is a visualization pipeline based on the [ParaView Catalyst][a] library.
To leverage the possibilities of in situ visualization with the Catalyst library, you have to write an adaptor code that takes the actual data from your simulation and processes it so that it can be passed to ParaView for visualization. We provide a simple example of such simulator/adaptor code, whose parts bind together to provide the in situ visualization.
A detailed description of the Catalyst API can be found [here][b]. We restrict ourselves to an overall description of the code, together with build instructions and an explanation of how to run the code on the cluster.
## Installed Version
For the current list of installed versions, use:
```console
$ ml av ParaView
```
## Usage
All code concerning the simulator/adaptor is available to download from [here][code]. It is a package with the following files: [CMakeLists.txt][cmakelist_txt], [FEAdaptor.h][feadaptor_h], [FEAdaptor.cxx][feadaptor_cxx], [FEDataStructures.h][fedatastructures_h], [FEDataStructures.cxx][fedatastructures_cxx], [FEDriver.cxx][fedriver_cxx], and [feslicescript.py][feslicescript].
After the download, unpack the code:
```console
$ tar xvf package_name
```
Then use CMake to manage the build process, but before that, load the appropriate modules (`CMake`, `ParaView`):
```console
$ ml CMake ParaView/5.6.0-intel-2017a-mpi
```
This module set also loads the necessary Intel compiler as a dependency. Then create a build directory and configure the build:
```console
$ mkdir build
$ cd build
$ cmake ../
```
Now you can build the simulator/adaptor code by using `make`:
```console
$ make
```
It will generate the CxxFullExample executable file. This will later be run together with ParaView to provide the in situ visualization example.
## Code Explanation
The provided example is a simple MPI program. The main executing part is written in FEDriver.cxx. It is the simulator code that creates the computational grid and performs the simulator/adaptor interaction (see below).
The dimensions of the computational grid in terms of the number of points in the x, y, z directions are supplied as input parameters to the `main` function (see lines 22-24 in the code below). Based on the number of MPI ranks, each MPI process creates a different part of the overall grid. This is done by the grid initialization (see line 30); the respective code is in FEDataStructures.cxx. The fourth parameter of `main` is the name of the Python script (we use feslicescript.py), which sets up the ParaView-Catalyst pipeline (see line 36). The simulation starts by linearly progressing the `timeStep` value in the `for` loop. Each iteration of the loop updates the grid attributes (Velocity and Pressure) by calling the `UpdateFields` method of the `Attributes` class. Next in the loop, the adaptor's `CoProcess` function is called with the actual parameters (`grid`, `time`). To provide some information about the state of the simulation, the actual time step is printed together with the name of the processor that handles the computation inside the allocated MPI rank. Before the loop ends, an appropriate sleep time is used to give the visualization some time to update. After the simulation loop ends, cleanup is done by calling the `Finalize` function on the adaptor and `MPI_Finalize` on the MPI processes.
![](insitu/img/FEDriver.png "FEDriver.cxx")
The adaptor's initialization performs several necessary steps, see the code below. It creates a vtkCPProcessor using the Catalyst library and adds a pipeline to it. The pipeline is initialized by the referred Python script:
![](insitu/img/Initialize.png "Initialization of the adaptor, within FEAdaptor.cxx")
To initialize the Catalyst pipeline, we use the feslicescript.py Python script. Here you enable the live visualization and set the proper connection port. You can also use other commands and functions to configure it for saving data during the visualization or for other tasks that are available from the ParaView environment. For more details, see the [Catalyst guide][b].
![](insitu/img/feslicescript.png "Catalyst pipeline setup by Python script")
The `UpdateFields` method of the `Attributes` class updates the `velocity` value with respect to the value of `time` and the value of `setting`, which depends on the actual MPI rank, see the code below. This way, different processes can be visually distinguished during the simulation.
![](insitu/img/UpdateFields.png "UpdateFields method of the Attributes class from FEDataStructures.cxx")
As mentioned before, further in the simulation loop, the adaptor's `CoProcess` function is called by using actual parameters of the `grid`, `time`, and `timeStep`. In the function, proper representation and description of the data is created. Such data is then passed to the Processor that has been created during the adaptor's initialization. The code of the `CoProcess` function is shown below.
![](insitu/img/CoProcess.png "CoProcess function of the adaptor, within FEAdaptor.cxx")
### Launching the Simulator/Adaptor With ParaView
There are two standard options to launch ParaView. You can run ParaView from your local machine in client-server mode by following the information [here][2], or you can connect to the cluster using VNC and the graphical environment by following the information on [VNC][3]. In both cases, we will use ParaView version 5.6.0 and its respective module.
Whether you use the client-server mode or VNC for ParaView, you have to allocate some computing resources. This is done by the console commands below (supply a valid Project ID).
For the client-server mode of ParaView, we allocate the resources by:
```console
$ salloc -p qcpu -A PROJECT_ID --nodes=2
```
In the case of VNC connection, we use X11 forwarding by the `--x11` option to allow the graphical environment on the interactive session:
```console
$ salloc -p qcpu -A PROJECT_ID --nodes=2 --x11
```
The issued console commands launch the interactive session on 2 nodes. This is the minimal setup to test that the simulator/adaptor code runs on multiple nodes.
After the interactive session is opened, load the `ParaView` module:
```console
$ ml ParaView/5.6.0-intel-2017a-mpi
```
When the module is loaded and you run the client-server mode, launch the mpirun command for pvserver as described for the [ParaView client-server][2] mode, but add the `&` sign at the end of the command so that you can use the console later for running the simulator/adaptor code. If you run the VNC session, after loading the ParaView module, set the environment variable for correct keyboard input and then run ParaView in the background using the `&` sign:
```console
$ export QT_XKB_CONFIG_ROOT=/usr/share/X11/xkb
$ paraview &
```
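For the client-server mode, the equivalent step is to start the ParaView server in the background; a minimal sketch (two MPI ranks, matching the two allocated nodes):

```console
$ mpirun -np 2 pvserver &
```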
Whether you have followed the description of the ParaView client-server mode and run ParaView locally, or you have launched ParaView remotely using VNC, the further steps are the same for both options. In ParaView, go to *Catalyst* -> *Connect* and hit *OK* in the pop-up window asking for the connection port. After that, ParaView starts listening for incoming data for the in situ visualization.
![](insitu/img/Catalyst_connect.png "Starting Catalyst connection in ParaView")
Go to your build directory and run the built simulator/adaptor code from the console. The first three arguments set the number of grid points in each direction and the last one is the path to the Catalyst Python script:
```console
mpirun -n 2 ./CxxFullExample 30 30 30 ../feslicescript.py
```
The program starts to compute on the allocated nodes and prints out the response:
![](insitu/img/Simulator_response.png "Simulator/adaptor response")
In ParaView, you should see a new pipeline called *input* in the *Pipeline Browser*:
![](insitu/img/Input_pipeline.png "Input pipeline")
By clicking on it, another item called *Extract: input* is added:
![](insitu/img/Extract_input.png "Extract: input")
If you click on the eye icon to the left of the *Extract: input* item, the data will appear:
![](insitu/img/Data_shown.png "Sent data")
To visualize the velocity property on the geometry, go to the *Properties* tab and choose *velocity* instead of *Solid Color* in the *Coloring* menu:
![](insitu/img/Show_velocity.png "Show velocity data")
The result will look like the image below, where the different domains (their number depends on the allocated resources) can be distinguished and will progress through time:
![](insitu/img/Result.png "Velocity results")
### Cleanup
After you finish the in situ visualization, you should perform a proper cleanup.
Terminate the simulation if it is still running.
In the VNC session, close ParaView and the interactive job. Close the VNC client. Kill the VNC Server as described [here][3].
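For illustration only (the actual display number depends on your VNC session, see the [VNC][3] guide), killing the VNC server could look like:

```console
$ vncserver -kill :1
```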
In the client-server mode of ParaView, disconnect from the server in ParaView and close the interactive job.
[1]: ../../../salomon/introduction/
[2]: ../paraview/
[3]: ../../../general/accessing-the-clusters/graphical-user-interface/vnc/
[a]: https://www.paraview.org/in-situ/
[b]: https://www.paraview.org/files/catalyst/docs/ParaViewCatalystUsersGuide_v2.pdf
[c]: http://www.paraview.org/
[code]: insitu/insitu.tar.gz
[cmakelist_txt]: insitu/CMakeLists.txt
[feadaptor_h]: insitu/FEAdaptor.h
[feadaptor_cxx]: insitu/FEAdaptor.cxx
[fedatastructures_h]: insitu/FEDataStructures.h
[fedatastructures_cxx]: insitu/FEDataStructures.cxx
[fedriver_cxx]: insitu/FEDriver.cxx
[feslicescript]: insitu/feslicescript.py
cmake_minimum_required(VERSION 3.3)
project(CatalystCxxFullExample)
set(USE_CATALYST ON CACHE BOOL "Link the simulator with Catalyst")
if(USE_CATALYST)
find_package(ParaView 5.6 REQUIRED COMPONENTS vtkPVPythonCatalyst)
include("${PARAVIEW_USE_FILE}")
set(Adaptor_SRCS
FEAdaptor.cxx
)
add_library(CxxFullExampleAdaptor ${Adaptor_SRCS})
target_link_libraries(CxxFullExampleAdaptor vtkPVPythonCatalyst vtkParallelMPI)
add_definitions("-DUSE_CATALYST")
if(NOT PARAVIEW_USE_MPI)
message(SEND_ERROR "ParaView must be built with MPI enabled")
endif()
else()
find_package(MPI REQUIRED)
include_directories(${MPI_C_INCLUDE_PATH})
endif()
add_executable(CxxFullExample FEDriver.cxx FEDataStructures.cxx)
if(USE_CATALYST)
target_link_libraries(CxxFullExample LINK_PRIVATE CxxFullExampleAdaptor)
include(vtkModuleMacros)
include(vtkMPI)
vtk_mpi_link(CxxFullExample)
else()
target_link_libraries(CxxFullExample LINK_PRIVATE ${MPI_LIBRARIES})
endif()
option(BUILD_TESTING "Build Testing" OFF)
# Setup testing.
if (BUILD_TESTING)
include(CTest)
add_test(NAME CxxFullExampleTest COMMAND CxxFullExample ${CMAKE_CURRENT_SOURCE_DIR}/feslicescript.py)
set_tests_properties(CxxFullExampleTest PROPERTIES LABELS "PARAVIEW;CATALYST")
endif()
#include "FEAdaptor.h"
#include "FEDataStructures.h"
#include <iostream>
#include <vtkCPDataDescription.h>
#include <vtkCPInputDataDescription.h>
#include <vtkCPProcessor.h>
#include <vtkCPPythonScriptPipeline.h>
#include <vtkCellData.h>
#include <vtkCellType.h>
#include <vtkDoubleArray.h>
#include <vtkFloatArray.h>
#include <vtkNew.h>
#include <vtkPointData.h>
#include <vtkPoints.h>
#include <vtkUnstructuredGrid.h>
namespace
{
vtkCPProcessor* Processor = NULL;
vtkUnstructuredGrid* VTKGrid;
void BuildVTKGrid(Grid& grid)
{
// create the points information
vtkNew<vtkDoubleArray> pointArray;
pointArray->SetNumberOfComponents(3);
pointArray->SetArray(
grid.GetPointsArray(), static_cast<vtkIdType>(grid.GetNumberOfPoints() * 3), 1);
vtkNew<vtkPoints> points;
points->SetData(pointArray.GetPointer());
VTKGrid->SetPoints(points.GetPointer());
// create the cells
size_t numCells = grid.GetNumberOfCells();
VTKGrid->Allocate(static_cast<vtkIdType>(numCells * 9));
for (size_t cell = 0; cell < numCells; cell++)
{
unsigned int* cellPoints = grid.GetCellPoints(cell);
vtkIdType tmp[8] = { cellPoints[0], cellPoints[1], cellPoints[2], cellPoints[3], cellPoints[4],
cellPoints[5], cellPoints[6], cellPoints[7] };
VTKGrid->InsertNextCell(VTK_HEXAHEDRON, 8, tmp);
}
}
void UpdateVTKAttributes(Grid& grid, Attributes& attributes, vtkCPInputDataDescription* idd)
{
if (idd->IsFieldNeeded("velocity", vtkDataObject::POINT) == true)
{
if (VTKGrid->GetPointData()->GetNumberOfArrays() == 0)
{
// velocity array
vtkNew<vtkDoubleArray> velocity;
velocity->SetName("velocity");
velocity->SetNumberOfComponents(3);
velocity->SetNumberOfTuples(static_cast<vtkIdType>(grid.GetNumberOfPoints()));
VTKGrid->GetPointData()->AddArray(velocity.GetPointer());
}
vtkDoubleArray* velocity =
vtkDoubleArray::SafeDownCast(VTKGrid->GetPointData()->GetArray("velocity"));
// The velocity array is ordered as vx0,vx1,vx2,..,vy0,vy1,vy2,..,vz0,vz1,vz2,..
// so we need to create a full copy of it with VTK's ordering of
// vx0,vy0,vz0,vx1,vy1,vz1,..
double* velocityData = attributes.GetVelocityArray();
vtkIdType numTuples = velocity->GetNumberOfTuples();
for (vtkIdType i = 0; i < numTuples; i++)
{
double values[3] = { velocityData[i], velocityData[i + numTuples],
velocityData[i + 2 * numTuples] };
velocity->SetTypedTuple(i, values);
}
}
if (idd->IsFieldNeeded("pressure", vtkDataObject::CELL) == true)
{
if (VTKGrid->GetCellData()->GetNumberOfArrays() == 0)
{
// pressure array
vtkNew<vtkFloatArray> pressure;
pressure->SetName("pressure");
pressure->SetNumberOfComponents(1);
VTKGrid->GetCellData()->AddArray(pressure.GetPointer());
}
vtkFloatArray* pressure =
vtkFloatArray::SafeDownCast(VTKGrid->GetCellData()->GetArray("pressure"));
// The pressure array is a scalar array so we can reuse
// memory as long as we ordered the points properly.
float* pressureData = attributes.GetPressureArray();
pressure->SetArray(pressureData, static_cast<vtkIdType>(grid.GetNumberOfCells()), 1);
}
}
void BuildVTKDataStructures(Grid& grid, Attributes& attributes, vtkCPInputDataDescription* idd)
{
if (VTKGrid == NULL)
{
// The grid structure isn't changing so we only build it
// the first time it's needed. If we needed the memory
// we could delete it and rebuild as necessary.
VTKGrid = vtkUnstructuredGrid::New();
BuildVTKGrid(grid);
}
UpdateVTKAttributes(grid, attributes, idd);
}
}
namespace FEAdaptor
{
void Initialize(char* script)
{
if (Processor == NULL)
{
Processor = vtkCPProcessor::New();
Processor->Initialize();
}
else
{
Processor->RemoveAllPipelines();
}
vtkNew<vtkCPPythonScriptPipeline> pipeline;
pipeline->Initialize(script);
Processor->AddPipeline(pipeline.GetPointer());
}
void Finalize()
{
if (Processor)
{
Processor->Delete();
Processor = NULL;
}
if (VTKGrid)
{
VTKGrid->Delete();
VTKGrid = NULL;
}
}
void CoProcess(
Grid& grid, Attributes& attributes, double time, unsigned int timeStep, bool lastTimeStep)
{
vtkNew<vtkCPDataDescription> dataDescription;
dataDescription->AddInput("input");
dataDescription->SetTimeData(time, timeStep);
if (lastTimeStep == true)
{
// assume that we want to execute all the pipelines if it
// is the last time step.
dataDescription->ForceOutputOn();
}
if (Processor->RequestDataDescription(dataDescription.GetPointer()) != 0)
{
vtkCPInputDataDescription* idd = dataDescription->GetInputDescriptionByName("input");
BuildVTKDataStructures(grid, attributes, idd);
idd->SetGrid(VTKGrid);
Processor->CoProcess(dataDescription.GetPointer());
}
}
} // end of Catalyst namespace
#ifndef FEADAPTOR_HEADER
#define FEADAPTOR_HEADER
class Attributes;
class Grid;
namespace FEAdaptor
{
void Initialize(char* script);
void Finalize();
void CoProcess(
Grid& grid, Attributes& attributes, double time, unsigned int timeStep, bool lastTimeStep);
}
#endif
#include "FEDataStructures.h"
#include <algorithm>
#include <iostream>
#include <iterator>
#include <mpi.h>
Grid::Grid()
{
}
void Grid::Initialize(const unsigned int numPoints[3], const double spacing[3])
{
if (numPoints[0] == 0 || numPoints[1] == 0 || numPoints[2] == 0)
{
std::cerr << "Must have a non-zero amount of points in each direction.\n";
}
// in parallel, we do a simple partitioning in the x-direction.
int mpiSize = 1;
int mpiRank = 0;
MPI_Comm_rank(MPI_COMM_WORLD, &mpiRank);
MPI_Comm_size(MPI_COMM_WORLD, &mpiSize);
unsigned int startXPoint = mpiRank * numPoints[0] / mpiSize;
unsigned int endXPoint = (mpiRank + 1) * numPoints[0] / mpiSize;
if (mpiSize != mpiRank + 1)
{
endXPoint++;
}
// create the points -- slowest in the x and fastest in the z directions
double coord[3] = { 0, 0, 0 };
for (unsigned int i = startXPoint; i < endXPoint; i++)
{
coord[0] = i * spacing[0];
for (unsigned int j = 0; j < numPoints[1]; j++)
{
coord[1] = j * spacing[1];
for (unsigned int k = 0; k < numPoints[2]; k++)
{
coord[2] = k * spacing[2];
// add the coordinate to the end of the vector
std::copy(coord, coord + 3, std::back_inserter(this->Points));
}
}
}
// create the hex cells
unsigned int cellPoints[8];
unsigned int numXPoints = endXPoint - startXPoint;
for (unsigned int i = 0; i < numXPoints - 1; i++)
{
for (unsigned int j = 0; j < numPoints[1] - 1; j++)
{
for (unsigned int k = 0; k < numPoints[2] - 1; k++)
{
cellPoints[0] = i * numPoints[1] * numPoints[2] + j * numPoints[2] + k;
cellPoints[1] = (i + 1) * numPoints[1] * numPoints[2] + j * numPoints[2] + k;
cellPoints[2] = (i + 1) * numPoints[1] * numPoints[2] + (j + 1) * numPoints[2] + k;
cellPoints[3] = i * numPoints[1] * numPoints[2] + (j + 1) * numPoints[2] + k;
cellPoints[4] = i * numPoints[1] * numPoints[2] + j * numPoints[2] + k + 1;
cellPoints[5] = (i + 1) * numPoints[1] * numPoints[2] + j * numPoints[2] + k + 1;
cellPoints[6] = (i + 1) * numPoints[1] * numPoints[2] + (j + 1) * numPoints[2] + k + 1;
cellPoints[7] = i * numPoints[1] * numPoints[2] + (j + 1) * numPoints[2] + k + 1;
std::copy(cellPoints, cellPoints + 8, std::back_inserter(this->Cells));
}
}
}
}
size_t Grid::GetNumberOfPoints()
{
return this->Points.size() / 3;
}
size_t Grid::GetNumberOfCells()
{
return this->Cells.size() / 8;
}
double* Grid::GetPointsArray()
{
if (this->Points.empty())
{
return NULL;
}
return &(this->Points[0]);
}
double* Grid::GetPoint(size_t pointId)
{
if (pointId >= this->Points.size())
{
return NULL;
}
return &(this->Points[pointId * 3]);
}
unsigned int* Grid::GetCellPoints(size_t cellId)
{
if (cellId >= this->Cells.size())
{
return NULL;
}
return &(this->Cells[cellId * 8]);
}
Attributes::Attributes()
{
this->GridPtr = NULL;
}
void Attributes::Initialize(Grid* grid)
{
this->GridPtr = grid;
}
void Attributes::UpdateFields(double time)
{
size_t numPoints = this->GridPtr->GetNumberOfPoints();
this->Velocity.resize(numPoints * 3);
// provide different update setting for different parallel process
int mpiSize = 1;
int mpiRank = 0;
MPI_Comm_rank(MPI_COMM_WORLD, &mpiRank);
MPI_Comm_size(MPI_COMM_WORLD, &mpiSize);
double setting = 1.0 + (double) mpiRank / (double) mpiSize;
for (size_t pt = 0; pt < numPoints; pt++)
{
double* coord = this->GridPtr->GetPoint(pt);
this->Velocity[pt] = coord[1] * time * setting;
}
std::fill(this->Velocity.begin() + numPoints, this->Velocity.end(), 0.0);
size_t numCells = this->GridPtr->GetNumberOfCells();
this->Pressure.resize(numCells);
std::fill(this->Pressure.begin(), this->Pressure.end(), setting);
}
double* Attributes::GetVelocityArray()
{
if (this->Velocity.empty())
{
return NULL;
}
return &this->Velocity[0];
}
float* Attributes::GetPressureArray()
{
if (this->Pressure.empty())
{
return NULL;
}
return &this->Pressure[0];
}
#ifndef FEDATASTRUCTURES_HEADER
#define FEDATASTRUCTURES_HEADER
#include <cstddef>
#include <vector>
class Grid
{
public:
Grid();
void Initialize(const unsigned int numPoints[3], const double spacing[3]);
size_t GetNumberOfPoints();
size_t GetNumberOfCells();
double* GetPointsArray();
double* GetPoint(size_t pointId);
unsigned int* GetCellPoints(size_t cellId);
private:
std::vector<double> Points;
std::vector<unsigned int> Cells;
};
class Attributes
{
// A class for generating and storing point and cell fields.
// Velocity is stored at the points and pressure is stored
// for the cells.
public:
Attributes();
void Initialize(Grid* grid);
void UpdateFields(double time);
double* GetVelocityArray();
float* GetPressureArray();
private:
std::vector<double> Velocity;
std::vector<float> Pressure;
Grid* GridPtr;
};
#endif
#include "FEDataStructures.h"
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>
#include <iostream>
#include <stdlib.h>
#include <string>
#ifdef USE_CATALYST
#include "FEAdaptor.h"
#endif
int main(int argc, char** argv)
{
// Check the input arguments
if (argc < 5) {
printf("Not all arguments supplied (grid definition, Python script name)\n");
return 0;
}
unsigned int pointsX = abs(std::stoi(argv[1]));
unsigned int pointsY = abs(std::stoi(argv[2]));
unsigned int pointsZ = abs(std::stoi(argv[3]));
// MPI_Init(&argc, &argv);
MPI_Init(NULL, NULL);
Grid grid;
unsigned int numPoints[3] = { pointsX, pointsY, pointsZ };
double spacing[3] = { 1, 1.1, 1.3 };
grid.Initialize(numPoints, spacing);
Attributes attributes;
attributes.Initialize(&grid);
#ifdef USE_CATALYST
// The argument nr. 4 is the Python script name
FEAdaptor::Initialize(argv[4]);
#endif
unsigned int numberOfTimeSteps = 1000;
for (unsigned int timeStep = 0; timeStep < numberOfTimeSteps; timeStep++)
{
// Use a time step of length 0.1
double time = timeStep * 0.1;
attributes.UpdateFields(time);
#ifdef USE_CATALYST
FEAdaptor::CoProcess(grid, attributes, time, timeStep, timeStep == numberOfTimeSteps - 1);
#endif
// Get the name of the processor
char processor_name[MPI_MAX_PROCESSOR_NAME];
int name_len;
MPI_Get_processor_name(processor_name, &name_len);
// Print actual time step and processor name that handles the calculation
printf("This is processor %s, time step: %0.3f\n", processor_name, time);
usleep(500000);
}
#ifdef USE_CATALYST
FEAdaptor::Finalize();
#endif
MPI_Finalize();
return 0;
}
from paraview.simple import *
from paraview import coprocessing
#--------------------------------------------------------------
# Code generated from cpstate.py to create the CoProcessor.
# ----------------------- CoProcessor definition -----------------------
def CreateCoProcessor():
def _CreatePipeline(coprocessor, datadescription):
class Pipeline:
coprocessor.CreateProducer( datadescription, "input" )
return Pipeline()
class CoProcessor(coprocessing.CoProcessor):
def CreatePipeline(self, datadescription):
self.Pipeline = _CreatePipeline(self, datadescription)
coprocessor = CoProcessor()
freqs = {'input': []}
coprocessor.SetUpdateFrequencies(freqs)
return coprocessor
#--------------------------------------------------------------
# Global variables that will hold the pipeline for each timestep
# Creating the CoProcessor object, doesn't actually create the ParaView pipeline.
# It will be automatically setup when coprocessor.UpdateProducers() is called the
# first time.
coprocessor = CreateCoProcessor()
#--------------------------------------------------------------
# Enable Live-Visualizaton with ParaView
coprocessor.EnableLiveVisualization(True)
# ---------------------- Data Selection method ----------------------
def RequestDataDescription(datadescription):
"Callback to populate the request for current timestep"
global coprocessor
if datadescription.GetForceOutput() == True:
# We are just going to request all fields and meshes from the simulation
# code/adaptor.
for i in range(datadescription.GetNumberOfInputDescriptions()):
datadescription.GetInputDescription(i).AllFieldsOn()
datadescription.GetInputDescription(i).GenerateMeshOn()
return
# setup requests for all inputs based on the requirements of the
# pipeline.
coprocessor.LoadRequestedData(datadescription)
# ------------------------ Processing method ------------------------
def DoCoProcessing(datadescription):
"Callback to do co-processing for current timestep"
global coprocessor
# Update the coprocessor by providing it the newly generated simulation data.
# If the pipeline hasn't been setup yet, this will setup the pipeline.
coprocessor.UpdateProducers(datadescription)
# Write output data, if appropriate.
coprocessor.WriteData(datadescription);
# Write image capture (Last arg: rescale lookup table), if appropriate.
coprocessor.WriteImages(datadescription, rescale_lookuptable=False)
# Live Visualization, if enabled.
coprocessor.DoLiveVisualization(datadescription, "localhost", 22222)