site/
scripts/*.csv
stages:
  - test
  - build
  - deploy

docs:
  stage: test
  image: davidhrbac/docker-mdcheck:latest
  script:
    - mdl -r ~MD024,~MD013,~MD033,~MD014,~MD026,~MD037,~MD036,~MD010,~MD029 *.md docs.it4i # BUGS

capitalize:
  stage: test
  image: davidhrbac/docker-mkdocscheck:latest
  # allow_failure: true
  script:
    - find mkdocs.yml docs.it4i/ \( -name '*.md' -o -name '*.yml' \) -print0 | xargs -0 -n1 scripts/titlemd_test.py

#spell check:
#  stage: test
#  image: davidhrbac/docker-npmcheck:latest
#  allow_failure: true
#  script:
#    - npm i markdown-spellcheck -g
#    - mdspell '**/*.md' '!docs.it4i/module*.md' -rns --en-us

ext_links:
  stage: test
  image: davidhrbac/docker-mdcheck:latest
  allow_failure: true
  after_script:
    # remove JSON results
    - rm *.json
  script:
    - find docs.it4i/ -name '*.md' -exec grep --color -l http {} + | xargs awesome_bot -t 10 --allow-dupe --allow-redirect
  only:
    - master

mkdocs:
  stage: build
  image: davidhrbac/docker-mkdocscheck:latest
  script:
    - mkdocs -V
    # add version to footer
    - bash scripts/add_version.sh
    # get modules list from clusters
    - bash scripts/get_modules.sh
    # regenerate modules matrix
    - python scripts/modules-matrix.py > docs.it4i/modules-matrix.md
    - python scripts/modules-json.py > docs.it4i/modules-matrix.json
    - curl -f0 https://scs-test.it4i.cz/devel/apidocs/master/scs_api.server_public.md -o docs.it4i/apiv1.md
    # build pages
    - mkdocs build
    # compress search_index.json
    #- bash scripts/clean_json.sh site/mkdocs/search_index.json
    # replace broken links in 404.html
    - sed -i 's,href="" title=",href="/" title=",g' site/404.html
    - cp site/404.html site/403.html
    - sed -i 's/404 - Not found/403 - Forbidden/g' site/403.html
    # compress sitemap
    - gzip < site/sitemap.xml > site/sitemap.xml.gz
  artifacts:
    paths:
      - site
    expire_in: 1 week

deploy to stage:
  environment: stage
  stage: deploy
  image: davidhrbac/docker-mkdocscheck:latest
  before_script:
    # install ssh-agent
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - 'which rsync || ( apt-get update -y && apt-get install rsync -y )'
    # run ssh-agent
    - eval $(ssh-agent -s)
    # add ssh key stored in SSH_PRIVATE_KEY variable to the agent store
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    # disable host key checking (NOTE: makes you susceptible to man-in-the-middle attacks)
    # WARNING: use only in docker container, if you use it with shell you will overwrite your user's ssh config
    - mkdir -p ~/.ssh
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - useradd -lM nginx
  script:
    - chown nginx:nginx site -R
    - rsync -a --delete site/ root@"$SSH_HOST_STAGE":/srv/docs.it4i.cz/devel/$CI_BUILD_REF_NAME/
  only:
    - branches@sccs/docs.it4i.cz

deploy to production:
  environment: production
  stage: deploy
  image: davidhrbac/docker-mkdocscheck:latest
  before_script:
    # install ssh-agent
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - 'which rsync || ( apt-get update -y && apt-get install rsync -y )'
    # run ssh-agent
    - eval $(ssh-agent -s)
    # add ssh key stored in SSH_PRIVATE_KEY variable to the agent store
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    # disable host key checking (NOTE: makes you susceptible to man-in-the-middle attacks)
    # WARNING: use only in docker container, if you use it with shell you will overwrite your user's ssh config
    - mkdir -p ~/.ssh
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - useradd -lM nginx
  script:
    - chown nginx:nginx site -R
    - rsync -a --delete site/ root@"$SSH_HOST_STAGE":/srv/docs.it4i.cz/site/
  only:
    - master@sccs/docs.it4i.cz
  when: manual
CAE
CUBE
GPU
GSL
LMGC90
LS-DYNA
MAPDL
GPI-2
COM
.ssh
Anselm
IT4I
IT4Innovations
PBS
Salomon
TurboVNC
VNC
DDR3
DIMM
InfiniBand
CUDA
ORCA
COMSOL
API
GNU
CUDA
NVIDIA
LiveLink
MATLAB
Allinea
LLNL
Vampir
Doxygen
VTune
TotalView
Valgrind
ParaView
OpenFOAM
MAX_FAIRSHARE
MPI4Py
MPICH2
PETSc
Trilinos
FFTW
HDF5
BiERapp
AVX
AVX2
JRE
JDK
QEMU
VMware
VirtualBox
NUMA
SMP
BLAS
LAPACK
FFTW3
Dongarra
OpenCL
cuBLAS
CESNET
Jihlava
NVIDIA
Xeon
ANSYS
CentOS
RHEL
DDR4
DIMMs
GDDR5
EasyBuild
e.g.
MPICH
MVAPICH2
OpenBLAS
ScaLAPACK
PAPI
SGI
UV2000
400GB
Mellanox
RedHat
ssh.du1.cesnet.cz
ssh.du2.cesnet.cz
ssh.du3.cesnet.cz
DECI
supercomputing
AnyConnect
X11
backfilling
backfilled
SCP
Lustre
QDR
TFLOP
ncpus
myjob
pernode
mpiprocs
ompthreads
qprace
runtime
SVS
ppn
Multiphysics
aeroacoustics
turbomachinery
CFD
LS-DYNA
APDL
MAPDL
multiphysics
AUTODYN
RSM
Molpro
initio
parallelization
NWChem
SCF
ISV
profiler
Pthreads
profilers
OTF
PAPI
PCM
uncore
pre-processing
prepend
CXX
prepended
POMP2
Memcheck
unaddressable
OTF2
GPI-2
GASPI
GPI
MKL
IPP
TBB
GSL
Omics
VNC
Scalasca
IFORT
interprocedural
IDB
cloop
qcow
qcow2
vmdk
vdi
virtio
paravirtualized
Gbit
tap0
UDP
TCP
preload
qfat
Rmpi
DCT
datasets
dataset
preconditioners
partitioners
PARDISO
PaStiX
SuiteSparse
SuperLU
ExodusII
NetCDF
ParMETIS
multigrid
HYPRE
SPAI
Epetra
EpetraExt
Tpetra
64-bit
Belos
GMRES
Amesos
IFPACK
preconditioner
Teuchos
Makefiles
SAXPY
NVCC
VCF
HGMD
HUMSAVAR
ClinVar
indels
CIBERER
exomes
tmp
SSHFS
RSYNC
unmount
Cygwin
CygwinX
RFB
TightVNC
TigerVNC
GUIs
XLaunch
UTF-8
numpad
PuTTYgen
OpenSSH
IE11
x86
r21u01n577
7120P
interprocessor
IPN
toolchains
toolchain
APIs
easyblocks
GM200
GeForce
GTX
IRUs
ASIC
backplane
ICEX
IRU
PFLOP
T950B
ifconfig
inet
addr
checkbox
appfile
programmatically
http
https
filesystem
phono3py
HDF
splitted
automize
llvm
PGI
GUPC
BUPC
IBV
Aislinn
nondeterminism
stdout
stderr
i.e.
pthreads
uninitialised
broadcasted
ITAC
hotspots
Bioinformatics
semiempirical
DFT
polyfill
ES6
HTML5Rocks
minifiers
CommonJS
PhantomJS
bundlers
Browserify
versioning
isflowing
ispaused
NPM
sublicense
Streams2
Streams3
blogpost
GPG
mississippi
Uint8Arrays
Uint8Array
endianness
styleguide
noop
MkDocs
- docs.it4i/anselm-cluster-documentation/environment-and-modules.md
MODULEPATH
bashrc
PrgEnv-gnu
bullx
MPI
PrgEnv-intel
EasyBuild
- docs.it4i/anselm-cluster-documentation/capacity-computing.md
capacity.zip
README
- docs.it4i/anselm-cluster-documentation/compute-nodes.md
DIMMs
- docs.it4i/anselm-cluster-documentation/hardware-overview.md
cn
K20
Xeon
x86-64
Virtualization
virtualization
NVIDIA
5110P
SSD
lscratch
login1
login2
dm1
Rpeak
LINPACK
Rmax
E5-2665
E5-2470
P5110
isw
- docs.it4i/anselm-cluster-documentation/introduction.md
RedHat
- docs.it4i/anselm-cluster-documentation/job-priority.md
walltime
qexp
_List.fairshare
_time
_FAIRSHARE
1E6
- docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md
15209.srv11
qsub
15210.srv11
pwd
cn17.bullx
cn108.bullx
cn109.bullx
cn110.bullx
pdsh
hostname
SCRDIR
mkdir
mpiexec
qprod
Jobscript
jobscript
cn108
cn109
cn110
Name0
cn17
_NODEFILE
_O
_WORKDIR
mympiprog.x
_JOBID
myprog.x
openmpi
- docs.it4i/anselm-cluster-documentation/network.md
ib0
- docs.it4i/anselm-cluster-documentation/prace.md
PRACE
qfree
it4ifree
it4i.portal.clients
prace
1h
- docs.it4i/anselm-cluster-documentation/shell-and-data-access.md
VPN
- docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md
ANSYS
CFX
cfx.pbs
_r
ane3fl
- docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
mapdl.pbs
_dy
- docs.it4i/anselm-cluster-documentation/software/ansys/ls-dyna.md
HPC
lsdyna.pbs
- docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md
OpenMP
- docs.it4i/anselm-cluster-documentation/software/compilers.md
Fortran
- docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md
E5-2600
- docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md
Makefile
- docs.it4i/anselm-cluster-documentation/software/gpi2.md
gcc
cn79
helloworld
_gpi.c
ibverbs
gaspi
_logger
- docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md
Haswell
CPUs
ipo
O3
vec
xAVX
omp
simd
ivdep
pragmas
openmp
xCORE-AVX2
axCORE-AVX2
- docs.it4i/anselm-cluster-documentation/software/kvirtualization.md
rc.local
runlevel
RDP
DHCP
DNS
SMB
VDE
smb.conf
TMPDIR
run.bat.
slirp
NATs
- docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md
NumPy
- docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md
mpiLibConf.m
matlabcode.m
output.out
matlabcodefile
sched
_feature
- docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
UV2000
maxNumCompThreads
SalomonPBSPro
- docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
_THREADS
_NUM
- docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md
CMake-aware
Makefile.export
_PACKAGE
_CXX
_COMPILER
_INCLUDE
_DIRS
_LIBRARY
- docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md
ansysdyna.pbs
- docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md
svsfem.cz
_
- docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md
libmpiwrap-amd64-linux
O0
valgrind
malloc
_PRELOAD
- docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md
cn204
_LIBS
MAGMAROOT
_magma
_server
_anselm
_from
_mic.sh
_dgetrf
_mic
_03.pdf
- docs.it4i/anselm-cluster-documentation/software/paraview.md
cn77
localhost
v4.0.1
- docs.it4i/anselm-cluster-documentation/storage.md
ssh.du1.cesnet.cz
Plzen
ssh.du2.cesnet.cz
ssh.du3.cesnet.cz
tier1
_home
_cache
_tape
- docs.it4i/salomon/environment-and-modules.md
icc
ictce
ifort
imkl
intel
gompi
goolf
BLACS
iompi
iccifort
- docs.it4i/salomon/hardware-overview.md
HW
E5-4627v2
- docs.it4i/salomon/job-submission-and-execution.md
15209.isrv5
r21u01n577
r21u02n578
r21u03n579
r21u04n580
qsub
15210.isrv5
pwd
r2i5n6.ib0.smc.salomon.it4i.cz
r4i6n13.ib0.smc.salomon.it4i.cz
r4i7n2.ib0.smc.salomon.it4i.cz
pdsh
r2i5n6
r4i6n13
r4i7n
r4i7n2
r4i7n0
SCRDIR
myjob
mkdir
mympiprog.x
mpiexec
myprog.x
r4i7n0.ib0.smc.salomon.it4i.cz
- docs.it4i/salomon/7d-enhanced-hypercube.md
cns1
cns576
r1i0n0
r4i7n17
cns577
cns1008
r37u31n1008
7D
- docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
qsub
it4ifree
it4i.portal.clients
x86
x64
- docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md
anslic
_admin
- docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
_DIR
- docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
EDU
comsol
_matlab.pbs
_job.m
mphstart
- docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md
perf-report
perf
txt
html
mympiprog
_32p
- docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
Hotspots
- docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md
scorep
- docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
edu
ansys
_features
_state.txt
f1
matlab
acfd
_ansys
_acfd
_aa
_comsol
HEATTRANSFER
_HEATTRANSFER
COMSOLBATCH
_COMSOLBATCH
STRUCTURALMECHANICS
_STRUCTURALMECHANICS
_matlab
_Toolbox
_Image
_Distrib
_Comp
_Engine
_Acquisition
pmode
matlabpool
- docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md
mpirun
BLAS1
FFT
KMP
_AFFINITY
GOMP
_CPU
bullxmpi-1
mpich2
- docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md
bysocket
bycore
- docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md
gcc3.3.3
pthread
fftw3
lfftw3
_threads-lfftw3
_omp
icc3.3.3
FFTW2
gcc2.1.5
fftw2
lfftw
_threads
icc2.1.5
fftw-mpi3
_mpi
fftw3-mpi
fftw2-mpi
IntelMPI
- docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md
dwt.c
mkl
lgsl
- docs.it4i/anselm-cluster-documentation/software/numerical-libraries/hdf5.md
icc
hdf5
_INC
_SHLIB
_CPP
_LIB
_F90
gcc49
- docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md
_Dist
- docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md
lcublas
- docs.it4i/anselm-cluster-documentation/software/operating-system.md
6.x
- docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/cygwin-and-x11-forwarding.md
startxwin
cygwin64binXWin.exe
tcp
- docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
Xming
XWin.exe.
- docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/pageant.md
_rsa.ppk
- docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/puttygen.md
_keys
organization.example.com
_rsa
- docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md
vpnui.exe
- docs.it4i/salomon/ib-single-plane-topology.md
36-port
Mcell.pdf
r21-r38
nodes.pdf
- docs.it4i/salomon/introduction.md
E5-2680v3
- docs.it4i/salomon/network.md
r4i1n0
r4i1n1
r4i1n2
r4i1n3
ip
- docs.it4i/salomon/software/ansys/setting-license-preferences.md
ansys161
- docs.it4i/salomon/software/ansys/workbench.md
mpifile.txt
solvehandlers.xml
- docs.it4i/salomon/software/chemistry/phono3py.md
vasprun.xml
disp-XXXXX
disp
_fc3.yaml
ir
_grid
_points.yaml
gofree-cond1
- docs.it4i/salomon/software/compilers.md
HPF
- docs.it4i/salomon/software/comsol/licensing-and-available-versions.md
ver
- docs.it4i/salomon/software/debuggers/aislinn.md
test.cpp
- docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md
vtune
_update1
- docs.it4i/salomon/software/debuggers/valgrind.md
EBROOTVALGRIND
- docs.it4i/salomon/software/intel-suite/intel-advisor.md
O2
- docs.it4i/salomon/software/intel-suite/intel-compilers.md
UV1
- docs.it4i/salomon/software/numerical-languages/octave.md
octcode.m
mkoctfile
- docs.it4i/software/orca.md
pdf
- node_modules/es6-promise/README.md
rsvp.js
es6-promise
es6-promise-min
Node.js
testem
- node_modules/spawn-sync/lib/json-buffer/README.md
node.js
- node_modules/spawn-sync/node_modules/concat-stream/node_modules/readable-stream/doc/wg-meetings/2015-01-30.md
WG
domenic
mikeal
io.js
sam
calvin
whatwg
compat
mathias
isaac
chris
- node_modules/spawn-sync/node_modules/concat-stream/node_modules/readable-stream/node_modules/core-util-is/README.md
core-util-is
v0.12.
- node_modules/spawn-sync/node_modules/concat-stream/node_modules/readable-stream/node_modules/isarray/README.md
isarray
Gruber
julian
juliangruber.com
NONINFRINGEMENT
- node_modules/spawn-sync/node_modules/concat-stream/node_modules/readable-stream/node_modules/process-nextick-args/license.md
Metcalf
- node_modules/spawn-sync/node_modules/concat-stream/node_modules/readable-stream/node_modules/process-nextick-args/readme.md
process-nextick-args
process.nextTick
- node_modules/spawn-sync/node_modules/concat-stream/node_modules/readable-stream/node_modules/string_decoder/README.md
_decoder.js
Joyent
joyent
repo
- node_modules/spawn-sync/node_modules/concat-stream/node_modules/readable-stream/node_modules/util-deprecate/History.md
kumavis
jsdocs
- node_modules/spawn-sync/node_modules/concat-stream/node_modules/readable-stream/node_modules/util-deprecate/README.md
util-deprecate
Rajlich
- node_modules/spawn-sync/node_modules/concat-stream/node_modules/readable-stream/README.md
v7.0.0
userland
chrisdickinson
christopher.s.dickinson
gmail.com
9554F04D7259F04124DE6B476D5A82AC7E37093B
calvinmetcalf
calvin.metcalf
F3EF5F62A87FC27A22E643F714CE4FF5015AA242
Vagg
rvagg
vagg.org
DD8F2338BAE7501E3DD5AC78C273792F7D83545D
sonewman
newmansam
outlook.com
Buus
mafintosh
mathiasbuus
Denicola
domenic.me
Matteo
Collina
mcollina
matteo.collina
3ABC01543F22DD2239285CDD818674489FBC127E
- node_modules/spawn-sync/node_modules/concat-stream/readme.md
concat-stream
concat
cb
- node_modules/spawn-sync/node_modules/os-shim/README.md
0.10.x
os.tmpdir
os.endianness
os.EOL
os.platform
os.arch
0.4.x
Aparicio
Adesis
Netlife
S.L
- node_modules/spawn-sync/node_modules/try-thread-sleep/node_modules/thread-sleep/README.md
node-pre-gyp
npm
- node_modules/spawn-sync/README.md
iojs
>>>>>>> readme
# User documentation
This project contains the IT4Innovations user documentation source.
## Environments
* [https://docs.it4i.cz](https://docs.it4i.cz) - master branch
* [https://docs.it4i.cz/devel/$BRANCH_NAME](https://docs.it4i.cz/devel/$BRANCH_NAME) - maps the branches, available only with VPN access
## URLs
* [http://facelessuser.github.io/pymdown-extensions/](http://facelessuser.github.io/pymdown-extensions/)
* [http://squidfunk.github.io/mkdocs-material/](http://squidfunk.github.io/mkdocs-material/)
```
fair-share
InfiniBand
RedHat
CentOS
Mellanox
```
## Mathematical Formulae
### Formulas are made with:
* [https://facelessuser.github.io/pymdown-extensions/extensions/arithmatex/](https://facelessuser.github.io/pymdown-extensions/extensions/arithmatex/)
* [https://www.mathjax.org/](https://www.mathjax.org/)
You can add a formula to a page like this:
```
$$
MAX\_FAIRSHARE * ( 1 - \frac{usage_{Project}}{usage_{Total}} )
$$
```
To enable MathJax on a page, add the line ```---8<--- "mathjax.md"``` at the end of the file.
# Introduction
Please use **only** the training account (DD-18-36-\*).
## Setting Up for Building
Set up an environment to use your own modules. ([EasyBuild#modulepath](https://docs.it4i.cz/software/tools/easybuild/#modulepath))
### Variant A
For a temporary setup.
```console
module use $HOME/.local/easybuild/modules/all/
```
### Variant B
For a permanent setup, modify your `.bash_profile` and reload the environment (logout, login):
```console
cat ~/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
module use $HOME/.local/easybuild/modules/all/
PATH=$PATH:$HOME/bin
export PATH
```
# Creating a New Easyconfig From Scratch
Create an easyconfig for the `moon-buggy` software. The manual procedure is listed below.
Software: `moon-buggy`
Homepage: [https://github.com/seehuhn/moon-buggy](https://github.com/seehuhn/moon-buggy)
**Description**
Moon-buggy is a simple character graphics game, where you drive some kind of car across the moon's surface. Unfortunately there are dangerous craters there. Fortunately your car can jump over them!
## Manual Installation
```console
$ wget https://code.it4i.cz/kru0052/moon-buggy/-/archive/1.0/moon-buggy-1.0.tar.gz
$ tar xvf moon-buggy-1.0.tar.gz
$ cd moon-buggy-1.0
$ ./autogen.sh
$ ./configure --prefix=/home/kru0052/game/moon-buggy-build
$ make
$ ls
acinclude.m4 buggy.h config.h.in copying.h error.c highscore.o keyboard.c main.o meteor.o moon-buggy.info pager.o README terminal.c title.eps xmalloc.o
aclocal.m4 buggy.o config.log cursor.c error.o hpath.c keyboard.o Makefile missing moon-buggy.lsm persona.c realname.c terminal.o title.o xstrdup.c
ANNOUNCE car.img config.status cursor.o game.c hpath.o laser.c Makefile.am mode.c moon-buggy.png persona.o realname.o test-score-modes TODO xstrdup.o
AUTHORS ChangeLog config.sub darray.h game.o img.sed laser.o Makefile.in mode.o moon-buggy.texi queue.c signal.c texinfo.tex vclock.c
autogen.sh checklist configure date.c ground.c INSTALL level.c manpage.in moon-buggy moon-buggy.xpm queue.o signal.o text2c.sed vclock.o
autom4te.cache config.guess configure.ac date.o ground.o install-sh level.o mdate-sh moon-buggy.6 NEWS random.c stamp-h1 THANKS version.texi
buggy.c config.h COPYING depcomp highscore.c instcmds main.c meteor.c moon-buggy.h pager.c random.o stamp-vti title.c xmalloc.c
$ ./moon-buggy
```
## Create Easyconfig From Template
* **Task**: *create easyconfig for `moon-buggy` (use template)* ([Download template.eb](template.eb))
```console
$ cp template.eb moon-buggy-1.0.eb
```
* **Task**: *EASYBLOCK* ... choose an easyblock (analyse the manual installation - it consists only of the configure and make steps -> choose the `ConfigureMake` easyblock)
```python
easyblock = 'ConfigureMake'
```
* **Task**: *NAME* ... define the name of the software
```python
name = 'moon-buggy'
```
* **Task**: *VERSION* ... define the version of the software
```python
version = "1.0"
```
* **Task**: *VERSIONSUFFIX* ... add your login
```python
versionsuffix = "-kru0052"
```
* **Task**: *HOMEPAGE* ... homepage URL
```python
homepage = 'https://github.com/seehuhn/moon-buggy'
```
* **Task**: *DESCRIPTION* ... basic software information
```python
description = """Moon-buggy is a simple character graphics game, where you drive some
kind of car across the moon's surface. Unfortunately there are
dangerous craters there. Fortunately your car can jump over them!"""
```
* **Task**: *TOOLCHAIN* ... choose toolchain
```python
toolchain = {'name': 'dummy', 'version': ''}
```
* **Task**: *SOURCE_URLS* ... source URLs
```python
source_urls = ['https://code.it4i.cz/kru0052/moon-buggy/-/archive/%(version)s/']
```
* **Task**: *SOURCES* ... package name definition
```python
sources = ['%(name)s-%(version)s.tar.gz']
```
* **Task**: *PRECONFIGOPTS* ... run autogen.sh before configure
```python
preconfigopts = "./autogen.sh && "
```
* **Task**: *BUILDDEPENDENCY* ... add a build dependency
```python
('Autoconf', '2.69')
```
* **Task**: *DEPENDENCY* ... add a runtime dependency
```python
('ncurses', '6.1'),
```
* **Task**: *SANITY_CHECK_PATH* ... check that the installed binary files and directories exist
```python
sanity_check_paths = {
    'files': ['bin/moon-buggy', 'com/moon-buggy/mbscore'],
    'dirs': ['bin', 'com', 'share'],
}
```
* **Task**: *MODULECLASS* ... choose class
```python
moduleclass = 'tools'
```
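* Putting the tasks above together, the complete `moon-buggy-1.0.eb` might look like the following sketch (assembled only from the fragments above; the Autoconf and ncurses versions are the ones used in the tasks and may differ on your cluster):

```python
# illustrative sketch assembled from the tasks above
easyblock = 'ConfigureMake'

name = 'moon-buggy'
version = "1.0"
versionsuffix = "-kru0052"

homepage = 'https://github.com/seehuhn/moon-buggy'
description = """Moon-buggy is a simple character graphics game, where you drive some
kind of car across the moon's surface. Unfortunately there are
dangerous craters there. Fortunately your car can jump over them!"""

toolchain = {'name': 'dummy', 'version': ''}

source_urls = ['https://code.it4i.cz/kru0052/moon-buggy/-/archive/%(version)s/']
sources = ['%(name)s-%(version)s.tar.gz']

# run autogen.sh before ./configure
preconfigopts = "./autogen.sh && "

builddependencies = [
    ('Autoconf', '2.69'),
]

dependencies = [
    ('ncurses', '6.1'),
]

# check that the game binary and score file were installed
sanity_check_paths = {
    'files': ['bin/moon-buggy', 'com/moon-buggy/mbscore'],
    'dirs': ['bin', 'com', 'share'],
}

moduleclass = 'tools'
```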
* **Task**: *install `moon-buggy` from easyconfig*
```console
$ eb moon-buggy-1.0.eb -r
== temporary log file in case of crash /tmp/eb-ctAvZY/easybuild-GQkRPM.log
== resolving dependencies ...
== processing EasyBuild easyconfig /home/kru0052/game/moon-buggy-1.0.eb
== building and installing moon-buggy/1.0-kru0052...
== fetching files...
...
...
== COMPLETED: Installation ended successfully
== Results of the build can be found in the log file(s) /home/kru0052/.local/easybuild/software/moon-buggy/1.0-kru0052/easybuild/easybuild-moon-buggy-1.0-20181016.094918.log
== Build succeeded for 1 out of 1
== Temporary log file(s) /tmp/eb-ctAvZY/easybuild-GQkRPM.log* have been removed.
== Temporary directory /tmp/eb-ctAvZY has been removed.
```
* **Task**: *load module and run `moon-buggy`*
```console
$ ml moon-buggy/1.0-kru0052
$ moon-buggy
```
# EasyBuild and Singularity
[EasyBuild](https://docs.it4i.cz/software/easybuild/)
[EasyBuild-Singularity](https://docs.it4i.cz/software/tools/easybuild-images/)
* **Task**: *create a Singularity bootstrap (definition) file for building an image with git-2.19.1.eb*
```console
$ eb git-2.19.1.eb -C --container-base shub:shahzebsiddiqui/eb-singularity:centos-7.4.1708 --experimental
== temporary log file in case of crash /tmp/eb-E5i5Xx/easybuild-pNROsc.log
== Singularity definition file created at /home/kru0052/.local/easybuild/containers/Singularity.git-2.19.1-Py-3.6-kru0052
== Temporary log file(s) /tmp/eb-E5i5Xx/easybuild-pNROsc.log* have been removed.
== Temporary directory /tmp/eb-E5i5Xx has been removed.
[kru0052@login4.salomon game]$ cat /home/kru0052/.local/easybuild/containers/Singularity.git-2.19.1-Py-3.6-kru0052
Bootstrap: shub
From: shahzebsiddiqui/eb-singularity:centos-7.4.1708
%post
# upgrade easybuild package automatically to latest version
pip install -U easybuild
# change to 'easybuild' user
su - easybuild
eb git-2.19.1.eb --robot --installpath=/app/ --prefix=/scratch --tmpdir=/scratch/tmp
# exit from 'easybuild' user
exit
# cleanup
rm -rf /scratch/tmp/* /scratch/build /scratch/sources /scratch/ebfiles_repo
%runscript
eval "$@"
%environment
source /etc/profile
module use /app/modules/all
module load git/2.19.1-Py-3.6-kru0052
%labels
```
## Use Bootstrap for Installing Singularity Image
Remember, you must have **sudo privileges** (root password).
```console
local $ sudo singularity build image.simg simg
[sudo] password for kru0052:
Using container recipe deffile: sing
Sanitizing environment
Adding base Singularity environment to container
Progress |===================================| 100.0%
Exporting contents of shub://shahzebsiddiqui/eb-singularity:centos-7.4.1708 to /tmp/.singularity-build.rRr52f
User defined %runscript found! Taking priority.
Adding environment to container
Running post scriptlet
+ pip install -U easybuild
Collecting easybuild
Downloading https://files.pythonhosted.org/packages/db/dd/85e9eec7b3c92a7e3ba214f354c03c519ec90dcb6ac7be288dfdd426ddfd/easybuild-3.7.1.tar.gz
Collecting easybuild-easyconfigs==3.7.1 (from easybuild)
Downloading https://files.pythonhosted.org/packages/73/63/b22ff96b8c3e09e04466951c0c3aa7b2230a522792dd3ae37c5fce4c68ea/easybuild-easyconfigs-3.7.1.tar.gz (3.3MB)
100% |################################| 3.3MB 341kB/s
Collecting easybuild-easyblocks==3.7.1 (from easybuild)
Downloading https://files.pythonhosted.org/packages/50/ea/3381a6e85f9a9beee311bed81a03c4900dd11c2a25c1e952b76e9a73486b/easybuild-easyblocks-3.7.1.tar.gz (338kB)
100% |################################| 348kB 1.8MB/s
Collecting easybuild-framework==3.7.1 (from easybuild)
Downloading https://files.pythonhosted.org/packages/d0/f1/a3c897ab19ad36a9a259adc0b31e383a8d322942eda1e59eb4fedee27d09/easybuild-framework-3.7.1.tar.gz (1.7MB)
100% |################################| 1.7MB 669kB/s
Collecting setuptools>=0.6 (from easybuild-easyblocks==3.7.1->easybuild)
Downloading https://files.pythonhosted.org/packages/96/06/c8ee69628191285ddddffb277bd5abdf769166e7a14b867c2a172f0175b1/setuptools-40.4.3-py2.py3-none-any.whl (569kB)
100% |################################| 573kB 1.4MB/s
Collecting vsc-install>=0.9.19 (from easybuild-framework==3.7.1->easybuild)
Downloading https://files.pythonhosted.org/packages/b6/03/becd813f5c4e8890254c79db8d2558b658f5a3ab52157bc0c077c6c9beea/vsc-install-0.11.2.tar.gz (61kB)
100% |################################| 71kB 3.1MB/s
Collecting vsc-base>=2.5.8 (from easybuild-framework==3.7.1->easybuild)
Downloading https://files.pythonhosted.org/packages/62/e5/589612e47255627e4752d99018ae7cff8f49ab0fa6b4ba7b2226a76a05d3/vsc-base-2.8.3.tar.gz (104kB)
100% |################################| 112kB 1.8MB/s
Installing collected packages: setuptools, vsc-install, vsc-base, easybuild-framework, easybuild-easyblocks, easybuild-easyconfigs, easybuild
Found existing installation: setuptools 0.9.8
Uninstalling setuptools-0.9.8:
Successfully uninstalled setuptools-0.9.8
Found existing installation: vsc-install 0.10.27
Uninstalling vsc-install-0.10.27:
Successfully uninstalled vsc-install-0.10.27
Running setup.py install for vsc-install ... done
Found existing installation: vsc-base 2.5.8
Uninstalling vsc-base-2.5.8:
Successfully uninstalled vsc-base-2.5.8
Running setup.py install for vsc-base ... done
Found existing installation: easybuild-framework 3.5.1
Uninstalling easybuild-framework-3.5.1:
Successfully uninstalled easybuild-framework-3.5.1
Running setup.py install for easybuild-framework ... done
Found existing installation: easybuild-easyblocks 3.5.1
Uninstalling easybuild-easyblocks-3.5.1:
Successfully uninstalled easybuild-easyblocks-3.5.1
Running setup.py install for easybuild-easyblocks ... done
Found existing installation: easybuild-easyconfigs 3.5.1
Uninstalling easybuild-easyconfigs-3.5.1:
Successfully uninstalled easybuild-easyconfigs-3.5.1
Running setup.py install for easybuild-easyconfigs ... done
Found existing installation: easybuild 3.5.1
Uninstalling easybuild-3.5.1:
Successfully uninstalled easybuild-3.5.1
Running setup.py install for easybuild ... done
Successfully installed easybuild-3.7.1 easybuild-easyblocks-3.7.1 easybuild-easyconfigs-3.7.1 easybuild-framework-3.7.1 setuptools-40.4.3 vsc-base-2.8.3 vsc-install-0.11.2
You are using pip version 9.0.1, however version 18.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
+ su - easybuild
== temporary log file in case of crash /scratch/tmp/eb-NWxNRI/easybuild-EMp20L.log
ERROR: Can't find path /home/easybuild/git-2.19.1.eb
ABORT: Aborting with RETVAL=255
Cleaning up...
```
## Problem Solving
Download [git-2.19.1.eb](git-2.19.1.eb) to your local computer.
In the bootstrap definition file, change the path `git-2.19.1.eb` to `/tmp/git-2.19.1.eb`:
```console
eb /tmp/git-2.19.1.eb --robot --installpath=/app/ --prefix=/scratch --tmpdir=/scratch/tmp
```
```console
local $ sudo singularity build image.simg simg
Using container recipe deffile: sing
Sanitizing environment
Adding base Singularity environment to container
Progress |===================================| 100.0%
Exporting contents of shub://shahzebsiddiqui/eb-singularity:centos-7.4.1708 to /tmp/.singularity-build.EBQ7qI
User defined %runscript found! Taking priority.
Adding environment to container
Running post scriptlet
+ pip install -U easybuild
Collecting easybuild
Downloading https://files.pythonhosted.org/packages/db/dd/85e9eec7b3c92a7e3ba214f354c03c519ec90dcb6ac7be288dfdd426ddfd/easybuild-3.7.1.tar.gz
Collecting easybuild-easyconfigs==3.7.1 (from easybuild)
Downloading https://files.pythonhosted.org/packages/73/63/b22ff96b8c3e09e04466951c0c3aa7b2230a522792dd3ae37c5fce4c68ea/easybuild-easyconfigs-3.7.1.tar.gz (3.3MB)
100% |################################| 3.3MB 352kB/s
Collecting easybuild-easyblocks==3.7.1 (from easybuild)
Downloading https://files.pythonhosted.org/packages/50/ea/3381a6e85f9a9beee311bed81a03c4900dd11c2a25c1e952b76e9a73486b/easybuild-easyblocks-3.7.1.tar.gz (338kB)
100% |################################| 348kB 1.9MB/s
Collecting easybuild-framework==3.7.1 (from easybuild)
Downloading https://files.pythonhosted.org/packages/d0/f1/a3c897ab19ad36a9a259adc0b31e383a8d322942eda1e59eb4fedee27d09/easybuild-framework-3.7.1.tar.gz (1.7MB)
100% |################################| 1.7MB 590kB/s
Collecting setuptools>=0.6 (from easybuild-easyblocks==3.7.1->easybuild)
Downloading https://files.pythonhosted.org/packages/96/06/c8ee69628191285ddddffb277bd5abdf769166e7a14b867c2a172f0175b1/setuptools-40.4.3-py2.py3-none-any.whl (569kB)
100% |################################| 573kB 1.5MB/s
Collecting vsc-install>=0.9.19 (from easybuild-framework==3.7.1->easybuild)
Downloading https://files.pythonhosted.org/packages/b6/03/becd813f5c4e8890254c79db8d2558b658f5a3ab52157bc0c077c6c9beea/vsc-install-0.11.2.tar.gz (61kB)
100% |################################| 71kB 2.8MB/s
Collecting vsc-base>=2.5.8 (from easybuild-framework==3.7.1->easybuild)
Downloading https://files.pythonhosted.org/packages/62/e5/589612e47255627e4752d99018ae7cff8f49ab0fa6b4ba7b2226a76a05d3/vsc-base-2.8.3.tar.gz (104kB)
100% |################################| 112kB 3.3MB/s
Installing collected packages: setuptools, vsc-install, vsc-base, easybuild-framework, easybuild-easyblocks, easybuild-easyconfigs, easybuild
Found existing installation: setuptools 0.9.8
Uninstalling setuptools-0.9.8:
Successfully uninstalled setuptools-0.9.8
Found existing installation: vsc-install 0.10.27
Uninstalling vsc-install-0.10.27:
Successfully uninstalled vsc-install-0.10.27
Running setup.py install for vsc-install ... done
Found existing installation: vsc-base 2.5.8
Uninstalling vsc-base-2.5.8:
Successfully uninstalled vsc-base-2.5.8
Running setup.py install for vsc-base ... done
Found existing installation: easybuild-framework 3.5.1
Uninstalling easybuild-framework-3.5.1:
Successfully uninstalled easybuild-framework-3.5.1
Running setup.py install for easybuild-framework ... done
Found existing installation: easybuild-easyblocks 3.5.1
Uninstalling easybuild-easyblocks-3.5.1:
Successfully uninstalled easybuild-easyblocks-3.5.1
Running setup.py install for easybuild-easyblocks ... done
Found existing installation: easybuild-easyconfigs 3.5.1
Uninstalling easybuild-easyconfigs-3.5.1:
Successfully uninstalled easybuild-easyconfigs-3.5.1
Running setup.py install for easybuild-easyconfigs ... done
Found existing installation: easybuild 3.5.1
Uninstalling easybuild-3.5.1:
Successfully uninstalled easybuild-3.5.1
Running setup.py install for easybuild ... done
Successfully installed easybuild-3.7.1 easybuild-easyblocks-3.7.1 easybuild-easyconfigs-3.7.1 easybuild-framework-3.7.1 setuptools-40.4.3 vsc-base-2.8.3 vsc-install-0.11.2
You are using pip version 9.0.1, however version 18.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
+ su - easybuild
== temporary log file in case of crash /scratch/tmp/eb-kEW7Dr/easybuild-tuPVTd.log
== resolving dependencies ...
== processing EasyBuild easyconfig /usr/easybuild/easyconfigs/z/zlib/zlib-1.2.11.eb
== building and installing zlib/1.2.11...
== fetching files...
== creating build dir, resetting environment...
== unpacking...
== patching...
== preparing...
== configuring...
== building...
== testing...
== installing...
== taking care of extensions...
== postprocessing...
== sanity checking...
== cleaning up...
== creating module...
== permissions...
== packaging...
== COMPLETED: Installation ended successfully
== Results of the build can be found in the log file(s) /app/software/zlib/1.2.11/easybuild/easybuild-zlib-1.2.11-20181023.104624.log
== processing EasyBuild easyconfig /usr/easybuild/easyconfigs/m/M4/M4-1.4.17.eb
== building and installing M4/1.4.17...
== fetching files...
== creating build dir, resetting environment...
== unpacking...
== patching...
== preparing...
== configuring...
== building...
== testing...
== installing...
== taking care of extensions...
== postprocessing...
== sanity checking...
== cleaning up...
== creating module...
== permissions...
== packaging...
== COMPLETED: Installation ended successfully
== Results of the build can be found in the log file(s) /app/software/M4/1.4.17/easybuild/easybuild-M4-1.4.17-20181023.104650.log
== processing EasyBuild easyconfig /usr/easybuild/easyconfigs/a/Autoconf/Autoconf-2.69.eb
== building and installing Autoconf/2.69...
== fetching files...
== creating build dir, resetting environment...
== unpacking...
== patching...
== preparing...
== configuring...
== building...
== testing...
== installing...
== taking care of extensions...
== postprocessing...
== sanity checking...
== cleaning up...
== creating module...
== permissions...
== packaging...
== COMPLETED: Installation ended successfully
== Results of the build can be found in the log file(s) /app/software/Autoconf/2.69/easybuild/easybuild-Autoconf-2.69-20181023.104655.log
== processing EasyBuild easyconfig /tmp/git-2.19.1.eb
== building and installing git/2.19.1-Py-3.6-kru0052...
== fetching files...
== creating build dir, resetting environment...
== unpacking...
== patching...
== preparing...
== configuring...
== building...
== testing...
== installing...
== taking care of extensions...
== postprocessing...
== sanity checking...
== cleaning up...
== creating module...
== permissions...
== packaging...
== COMPLETED: Installation ended successfully
== Results of the build can be found in the log file(s) /app/software/git/2.19.1-Py-3.6-kru0052/easybuild/easybuild-git-2.19.1-20181023.104742.log
== Build succeeded for 4 out of 4
== Temporary log file(s) /scratch/tmp/eb-kEW7Dr/easybuild-tuPVTd.log* have been removed.
== Temporary directory /scratch/tmp/eb-kEW7Dr has been removed.
+ rm -rf '/scratch/tmp/*' /scratch/build /scratch/sources /scratch/ebfiles_repo
Adding deffile section labels to container
Adding runscript
Found an existing definition file
Adding a bootstrap_history directory
Finalizing Singularity container
Calculating final size for metadata...
Environment variables were added, removed, and/or changed during bootstrap.
Variables unique to original image (shahzebsiddiqui/eb-singularity:centos-7.4.1708)
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Variables unique to new image (/tmp/.singularity-build.EBQ7qI)
BASH_ENV=/usr/share/lmod/lmod/init/bash
LMOD_CMD=/usr/share/lmod/lmod/libexec/lmod
LMOD_COLORIZE=yes
LMOD_DIR=/usr/share/lmod/lmod/libexec/
LMOD_FULL_SETTARG_SUPPORT=no
LMOD_PKG=/usr/share/lmod/lmod
LMOD_PREPEND_BLOCK=normal
LMOD_SETTARG_CMD=:
LMOD_arch=x86_64
LMOD_sys=Linux
MANPATH=
MODULEPATH=/etc/lmod/modules:/usr/share/lmod/lmod/modulefiles/
MODULEPATH_ROOT=/usr/modulefiles
MODULESHOME=/usr/share/lmod/lmod
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
USER=
XDG_DATA_DIRS=/usr/local/share:/usr/share:/var/lib/snapd/desktop
Skipping checks
Building Singularity image...
Singularity container built: image.simg
Cleaning up...
```
## Testing the Assembled Image
```console
local $ git --version
git version 2.17.1
local $ singularity exec image.simg git --version
git version 2.19.1
```
# Building Your First Software Using Easyconfigs
Install git version 2.19.0 from a ready-made easyconfig.
* **Task**: *install git/2.19.0* ([Download git-2.19.0.eb](git-2.19.0.eb))
## Load Module EasyBuild
To effectively apply the changes to the environment that are specified by a module, use `ml` and specify the name of the module; for example, to set up your environment for EasyBuild, use `ml EasyBuild`.
To get an overview of the currently loaded modules, use `module list` or `ml` (without specifying extra arguments).
```console
$ ml EasyBuild
$ ml
Currently Loaded Modules:
1) EasyBuild/3.7.1 (S)
Where:
S: Module is Sticky, requires --force to unload or purge
```
## Show Installed Dependencies
You can do a `dry-run` overview by supplying `-D/--dry-run` (typically combined with `--robot`, in the form of `-Dr`).
```console
$ eb git-2.19.0.eb -Dr
== temporary log file in case of crash /tmp/eb-xk3mg4/easybuild-f90lIR.log
Dry run: printing build status of easyconfigs and dependencies
CFGS=/apps/easybuild/easyconfigs-it4i
* [x] $CFGS/m/M4/M4-1.4.18.eb (module: M4/1.4.18)
* [ ] $CFGS/a/Autoconf/Autoconf-2.68.eb (module: Autoconf/2.68)
* [ ] $CFGS/g/git/git-2.19.0.eb (module: git/2.19.0)
== Temporary log file(s) /tmp/eb-xk3mg4/easybuild-f90lIR.log* have been removed.
== Temporary directory /tmp/eb-xk3mg4 has been removed.
```
## Install Module and All Dependencies
Now we build git-2.19.0.eb with its dependencies. To enable dependency resolution, use the `--robot` command line option (or `-r` for short).
```console
$ eb git-2.19.0.eb -r
== temporary log file in case of crash /tmp/eb-WEtJ8t/easybuild-dvHmbd.log
== resolving dependencies ...
== processing EasyBuild easyconfig /apps/easybuild/easyconfigs-it4i/a/Autoconf/Autoconf-2.68.eb
== building and installing Autoconf/2.68...
...
...
== COMPLETED: Installation ended successfully
== Results of the build can be found in the log file(s) /apps/all/Autoconf/2.68/easybuild/easybuild-Autoconf-2.68-20181016.085630.log
== processing EasyBuild easyconfig /apps/easybuild/easyconfigs-it4i/g/git/git-2.19.0.eb
== building and installing git/2.19.0...
== fetching files...
...
...
== COMPLETED: Installation ended successfully
== Results of the build can be found in the log file(s) /apps/all/git/2.19.0/easybuild/easybuild-git-2.19.0-20181016.085647.log
== Build succeeded for 2 out of 2
== Temporary log file(s) /tmp/eb-WEtJ8t/easybuild-dvHmbd.log* have been removed.
== Temporary directory /tmp/eb-WEtJ8t has been removed.
```
## Test Installed Module
```console
$ ml git/2.19.0
$ git --version
git version 2.19.0
```
# IT4Innovations 2018
easyblock = 'ConfigureMake'
name = 'git'
version = "2.19.0"
homepage = 'http://git-scm.com/'
description = """Git is a free and open source distributed version control system designed
to handle everything from small to very large projects with speed and efficiency."""
toolchain = {'name': 'dummy', 'version': ''}
source_urls = ['https://www.kernel.org/pub/software/scm/git/']
sources = ['%(name)s-%(version)s.tar.gz']
builddependencies = [
    ('Autoconf', '2.68', '', True)
]
preconfigopts = 'make configure && '
moduleclass = 'tools'
# IT4Innovations 2018
easyblock = 'ConfigureMake'
name = 'git'
version = "2.19.1"
versionsuffix = "-Py-3.6-foxik"
homepage = 'http://git-scm.com/'
description = """Git is a free and open source distributed version control system designed
to handle everything from small to very large projects with speed and efficiency."""
toolchain = {'name': 'dummy', 'version': ''}
source_urls = ['https://www.kernel.org/pub/software/scm/git/']
sources = ['%(name)s-%(version)s.tar.gz']
builddependencies = [
    ('Autoconf', '2.69', '', True)
]
dependencies = [
    ('zlib', '1.2.11', '', True)
]
preconfigopts = 'make configure && '
sanity_check_paths = {
    'files': ['bin/git'],
    'dirs': ['bin', 'libexec', 'share'],
}
sanity_check_commands = [
    ('git', '--version')
]
moduleclass = 'tools'
# Modification of an Existing Easyconfig
The goal is to modify an existing easyconfig so that the resulting git/2.19.1 module meets the requirements below.
## Change the Easyconfig Name
* **Task**: *change the file name* (name-version-Py-version-login)
```console
$ cp git-2.19.0.eb git-2.19.1-Py-3.6-kru0052.eb
```
## Change Software Version
* **Task**: *change git version from 2.19.0 to 2.19.1*
```python
version = "2.19.1"
```
## Add Module Suffix
* **Task**: *add a module suffix (so the full module name is git/2.19.1-Py-version-login)*
```python
versionsuffix = "-Py-3.6-kru0052"
```
## Add Dependencies
* **Task**: *add the cURL/7.56.1, expat/2.2.0, and Py/3.6 dependencies (dummy toolchain)*
```python
dependencies = [
    ('cURL', '7.56.1', '', True),
    ('expat', '2.2.0', '', True),
    ('Py', '3.6', '', True)
]
```
## Add Sanity Check Paths
* **Task**: *sanity_check_paths*
```python
sanity_check_paths = {
    'files': ['bin/git'],
    'dirs': ['bin', 'libexec', 'share'],
}
```
## Add Sanity Check Commands
* **Task**: *sanity_check_commands*
```python
sanity_check_commands = [
    ('git', '--version')
]
```
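* For reference, after applying all of the tasks above to a copy of `git-2.19.0.eb`, the resulting `git-2.19.1-Py-3.6-kru0052.eb` might look roughly like this (a sketch assembled from the fragments above, not an authoritative easyconfig):

```python
# IT4Innovations 2018 - illustrative sketch assembled from the tasks above
easyblock = 'ConfigureMake'

name = 'git'
version = "2.19.1"
versionsuffix = "-Py-3.6-kru0052"

homepage = 'http://git-scm.com/'
description = """Git is a free and open source distributed version control system designed
to handle everything from small to very large projects with speed and efficiency."""

toolchain = {'name': 'dummy', 'version': ''}

source_urls = ['https://www.kernel.org/pub/software/scm/git/']
sources = ['%(name)s-%(version)s.tar.gz']

builddependencies = [
    ('Autoconf', '2.68', '', True)
]

# dependencies added in the task above (dummy toolchain)
dependencies = [
    ('cURL', '7.56.1', '', True),
    ('expat', '2.2.0', '', True),
    ('Py', '3.6', '', True)
]

preconfigopts = 'make configure && '

# check that the git binary really exists and runs
sanity_check_paths = {
    'files': ['bin/git'],
    'dirs': ['bin', 'libexec', 'share'],
}

sanity_check_commands = [
    ('git', '--version')
]

moduleclass = 'tools'
```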
## Install Module
```console
$ eb git-2.19.1.eb -r
Couldn't import dot_parser, loading of dot files will not be possible.
== temporary log file in case of crash /tmp/eb-oYDvfO/easybuild-zCM5rM.log
== resolving dependencies ...
== processing EasyBuild easyconfig /home/kru0052/game/git-2.19.1.eb
== building and installing git/2.19.1-Py-3.6-kru0052...
...
...
== COMPLETED: Installation ended successfully
== Results of the build can be found in the log file(s) /home/kru0052/.local/easybuild/software/git/2.19.1-Py-3.6-foxik/easybuild/easybuild-git-2.19.1-20181016.092517.log
== Build succeeded for 1 out of 1
== Temporary log file(s) /tmp/eb-oYDvfO/easybuild-zCM5rM.log* have been removed.
== Temporary directory /tmp/eb-oYDvfO has been removed.
```
## Test New Module
```console
$ ml av git/
-------------------- /home/kru0052/.local/easybuild/modules/all --------------------
git/2.19.0 git/2.19.1-Py-3.6-kru0052 (D)
-------------------- /apps/modules/tools --------------------
git/2.18.0
Where:
D: Default Module
If you need software that is not listed, request it at support@it4i.cz.
$ ml git/2.19.1-Py-3.6-foxik
$ git --version
git version 2.19.1
```
# IT4Innovations 2018
easyblock =
name =
version =
versionsuffix =
homepage =
description = """ """
toolchain = {'name': '', 'version': ''}
source_urls = ['']
sources = ['']
preconfigopts =
builddependencies = []
dependencies = []
sanity_check_paths = {
    'files': [],
    'dirs': [],
}
moduleclass = ''
# Troubleshooting - Typical Problems
## INSTALL - Already Installed
Install a module from the easyconfig `git-2.14.1-GCC-7.1.0-2.28-Python-3.6.1.eb`:
```console
[kru0052@login2.anselm test-build]$ eb git-2.14.1-GCC-7.1.0-2.28-Python-3.6.1.eb
== temporary log file in case of crash /tmp/eb-KMNHse/easybuild-Jwn_R7.log
== git/2.14.1-GCC-7.1.0-2.28-Python-3.6.1-by-FoXiK is already installed (module found), skipping
== No easyconfigs left to be built.
== Build succeeded for 0 out of 0
== Temporary log file(s) /tmp/eb-KMNHse/easybuild-Jwn_R7.log* have been removed.
== Temporary directory /tmp/eb-KMNHse has been removed.
```
## REINSTALL - Module Already Loaded
Load the module `git/2.14.1-GCC-7.1.0-2.28-Python-3.6.1-by-FoXiK` and try to reinstall it:
```console
[kru0052@login2.anselm test-build]$ eb git-2.14.1-GCC-7.1.0-2.28-Python-3.6.1.eb -f
== temporary log file in case of crash /tmp/eb-NDp_Rx/easybuild-XgYOq9.log
WARNING: Found one or more non-allowed loaded (EasyBuild-generated) modules in current environment:
* bzip2/1.0.6
* libreadline/6.3
* SQLite/3.13.0
* Tcl/8.6.5
* Tk/8.6.5
* GMP/6.1.1
* XZ/5.2.2
* zlib/1.2.11
* Python/3.6.1
* cURL/7.53.1
* expat/2.2.0
* ncurses/6.0
* gettext/0.19.8.1
* GCCcore/7.1.0
* GCCcore/7.1.0
* binutils/2.28-GCCcore-7.1.0
* Perl/5.26.0-GCC-7.1.0-2.28-bare
* git/2.14.1-GCC-7.1.0-2.28-Python-3.6.1-by-FoXiK
This is not recommended since it may affect the installation procedure(s) performed by EasyBuild.
To make EasyBuild allow particular loaded modules, use the --allow-loaded-modules configuration option.
To specify action to take when loaded modules are detected, use --detect-loaded-modules={error,ignore,purge,unload,warn}.
See http://easybuild.readthedocs.io/en/latest/Detecting_loaded_modules.html for more information.
== processing EasyBuild easyconfig /home_lustre/kru0052/hands-on/git-2.14.1-GCC-7.1.0-2.28-Python-3.6.1.eb
== building and installing git/2.14.1-GCC-7.1.0-2.28-Python-3.6.1-by-FoXiK...
== fetching files...
== creating build dir, resetting environment...
== FAILED: Installation ended unsuccessfully (build directory: /home/kru0052/.local/easybuild/build/git/2.14.1/GCC-7.1.0-2.28-Python-3.6.1-by-FoXiK): build failed (first 300 chars): Module is already loaded (EBROOTGIT is set), installation cannot continue.
== Results of the build can be found in the log file(s) /tmp/eb-NDp_Rx/easybuild-git-2.14.1-20170911.085415.EaWTJ.log
ERROR: Build of /home_lustre/kru0052/test-build/git-2.14.1-GCC-7.1.0-2.28-Python-3.6.1.eb failed (err: 'build failed (first 300 chars): Module is already loaded (EBROOTGIT is set), installation cannot continue.')
```
## INSTALL/REINSTALL - Irresolvable Dependencies
Change `('Python', '3.6.1', '', True)` to `('Python', '3.6.1')` and reinstall the module:
```console
[kru0052@login2.anselm ~]$ eb git-2.14.1-GCC-7.1.0-2.28-Python-3.6.1.eb -r
== temporary log file in case of crash /tmp/eb-7toWyv/easybuild-koZYcG.log
== processing EasyBuild easyconfig /home_lustre/kru0052/hands-on/git-2.14.1-GCC-7.1.0-2.28-Python-3.6.1.eb
== building and installing git/2.14.1-GCC-7.1.0-2.28-Python-3.6.1-by-FoXiK...
== fetching files...
== creating build dir, resetting environment...
== FAILED: Installation ended unsuccessfully (build directory: /home/kru0052/.local/easybuild/build/git/2.14.1/GCC-7.1.0-2.28-Python-3.6.1-by-FoXiK): build failed (first 300 chars): Missing modules for one or more dependencies: Python/3.6.1-GCC-7.1.0-2.28
== Results of the build can be found in the log file(s) /tmp/eb-7toWyv/easybuild-git-2.14.1-20170911.084653.qAHAA.log
ERROR: Build of /home_lustre/kru0052/test-build/git-2.14.1-GCC-7.1.0-2.28-Python-3.6.1.eb failed (err: 'build failed (first 300 chars): Missing modules for one or more dependencies: Python/3.6.1-GCC-7.1.0-2.28')
```
## INSTALL/REINSTALL - Can't Find Path
Use a wrong easyconfig name, e.g. `eb git-2.14.1-GCC-7.1.0-2.28.eb`:
```console
[kru0052@login2.anselm test-build]$ eb git-2.14.1-GCC-7.1.0-2.28.eb -f
== temporary log file in case of crash /tmp/eb-keyVcY/easybuild-AgLBmT.log
ERROR: Can't find path /home_lustre/kru0052/test-build/git-2.14.1-GCC-7.1.0-2.28.eb
```
## INSTALL/REINSTALL - Sanity Check Failed
Change `'files': ['bin/git']` to `'files': ['bin/git2']`
```console
[kru0052@login2.anselm test-build]$ eb git-2.14.1-GCC-7.1.0-2.28-Python-3.6.1.eb -f
== temporary log file in case of crash /tmp/eb-Lv9b1_/easybuild-h2F2uL.log
== processing EasyBuild easyconfig /home_lustre/kru0052/test-build/git-2.14.1-GCC-7.1.0-2.28-Python-3.6.1.eb
== building and installing git/2.14.1-GCC-7.1.0-2.28-Python-3.6.1-by-FoXiK...
== fetching files...
== creating build dir, resetting environment...
== unpacking...
== patching...
== preparing...
== configuring...
== building...
== testing...
== installing...
== taking care of extensions...
== postprocessing...
== sanity checking...
== FAILED: Installation ended unsuccessfully (build directory: /home/kru0052/.local/easybuild/build/git/2.14.1/GCC-7.1.0-2.28-Python-3.6.1-by-FoXiK): build failed (first 300 chars): Sanity check failed: no file of ('bin/git2',) in /home/kru0052/.local/easybuild/software/git/2.14.1-GCC-7.1.0-2.28-Python-3.6.1-by-FoXiK
== Results of the build can be found in the log file(s) /tmp/eb-Lv9b1_/easybuild-git-2.14.1-20170911.085750.VJyvg.log
ERROR: Build of /home_lustre/kru0052/test-build/git-2.14.1-GCC-7.1.0-2.28-Python-3.6.1.eb failed (err: "build failed (first 300 chars): Sanity check failed: no file of ('bin/git2',) in /home/kru0052/.local/easybuild/software/git/2.14.1-GCC-7.1.0-2.28-Python-3.6.1-by-FoXiK")
```
## INSTALL/REINSTALL - Couldn't Find File
Change `sources = ['%(name)s-%(version)s.tar.gz']` to `sources = ['%(name)s-%(version)s-test.tar.gz']`
```console
[kru0052@login2.anselm test-build]$ eb git-2.14.1-GCC-7.1.0-2.28-Python-3.6.1.eb -f
== temporary log file in case of crash /tmp/eb-N64arw/easybuild-lXjC9B.log
== processing EasyBuild easyconfig /home_lustre/kru0052/hands-one/git-2.14.1-GCC-7.1.0-2.28-Python-3.6.1.eb
== building and installing git/2.14.1-GCC-7.1.0-2.28-Python-3.6.1-by-FoXiK...
== fetching files...
== FAILED: Installation ended unsuccessfully (build directory: /home/kru0052/.local/easybuild/build/git/2.14.1/GCC-7.1.0-2.28-Python-3.6.1-by-FoXiK): build failed (first 300 chars): Couldn't find file git-2.14.1-test.tar.gz anywhere, and downloading it didn't work either... Paths attempted (in order): /home_lustre/kru0052/hands-on/g/git/git-2.14.1-test.tar.gz, /home_lustre/kru0052/hands-on/git/git-2.14.1-test.tar.gz, /home_lustre/kru0052/hands-on/git-2.14.1-test.tar.gz, /
== Results of the build can be found in the log file(s) /tmp/eb-N64arw/easybuild-git-2.14.1-20170911.085547.BnfZG.log
ERROR: Build of /home_lustre/kru0052/hands-on/git-2.14.1-GCC-7.1.0-2.28-Python-3.6.1.eb failed (err: "build failed (first 300 chars): Couldn't find file git-2.14.1-test.tar.gz anywhere, and downloading it didn't work either... Paths attempted (in order): /home_lustre/kru0052/hands-on/g/git/git-2.14.1-test.tar.gz, /home_lustre/kru0052/test-build/git/git-2.14.1-test.tar.gz, /home_lustre/kru0052/hands-on/git-2.14.1-test.tar.gz, /")
```
# Capacity Computing
## Introduction
In many cases, it is useful to submit a huge number (>100) of computational jobs into the PBS queue system. A huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving the best runtime, throughput, and computer utilization.
However, executing a huge number of jobs via the PBS queue may strain the system. This strain may result in slow response to commands, inefficient scheduling, and overall degradation of performance and user experience for all users. For this reason, the number of jobs is **limited to 100 per user and 1000 per job array**.
!!! note
    Please follow one of the procedures below, in case you wish to schedule more than 100 jobs at a time.
* Use [Job arrays](capacity-computing/#job-arrays) when running a huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
* Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
* Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running a huge number of single core jobs
## Policy
1. A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
1. The array size is at most 1000 subjobs.
## Job Arrays
!!! note
    A huge number of jobs may easily be submitted and managed as a job array.
A job array is a compact representation of many jobs, called subjobs. The subjobs share the same job script, and have the same values for all attributes and resources, with the following exceptions:
* each subjob has a unique index, $PBS_ARRAY_INDEX
* job identifiers of subjobs differ only by their indices
* the state of subjobs can differ (R, Q, etc.)
All subjobs within a job array have the same scheduling priority and are scheduled as independent jobs. An entire job array is submitted through a single qsub command and may be managed by the qdel, qalter, qhold, qrls, and qsig commands as a single job.
### Shared Jobscript
All subjobs in a job array use the very same, single jobscript. Each subjob runs its own instance of the jobscript. The instances execute different work controlled by the $PBS_ARRAY_INDEX variable.
Example:
Assume we have 900 input files with the name of each beginning with "file" (e. g. file001, ..., file900). Assume we would like to use each of these input files with program executable myprog.x, each as a separate job.
First, we create a tasklist file (or subjobs list), listing all tasks (subjobs) - all input files in our example:
```console
$ find . -name 'file*' > tasklist
```
Then we create the jobscript:
```bash
#!/bin/bash
#PBS -A PROJECT_ID
#PBS -q qprod
#PBS -l select=1:ncpus=16,walltime=02:00:00
# change to local scratch directory
SCR=/lscratch/$PBS_JOBID
mkdir -p $SCR ; cd $SCR || exit
# get individual tasks from tasklist with index from PBS JOB ARRAY
TASK=$(sed -n "${PBS_ARRAY_INDEX}p" $PBS_O_WORKDIR/tasklist)
# copy input file and executable to scratch
cp $PBS_O_WORKDIR/$TASK input ; cp $PBS_O_WORKDIR/myprog.x .
# execute the calculation
./myprog.x < input > output
# copy output file to submit directory
cp output $PBS_O_WORKDIR/$TASK.out
```
In this example, the submit directory holds the 900 input files, the executable myprog.x, and the jobscript file. As an input for each run, we take the filename of the input file from the created tasklist file. We copy the input file to the local scratch directory /lscratch/$PBS_JOBID, execute myprog.x, and copy the output file back to the submit directory, under the $TASK.out name. The myprog.x runs on one node only and must use threads to run in parallel. Be aware that if myprog.x **is not multithreaded**, then all the **jobs are run as single-thread programs in a sequential** manner. Due to the allocation of the whole node, the accounted time is equal to the usage of the whole node, while using only 1/16 of the node!
If you need to run a huge number of parallel multicore jobs (multinode and multithreaded, e.g. MPI-enabled), the job array approach should still be used. The main difference compared to the previous single-node example is that the local scratch directory must not be used (it is not shared between nodes) and MPI or another technique for parallel multinode processing has to be used properly; see the sketch below.
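For illustration only, a multinode variant of the jobscript might look like the following sketch. The `select` line, the `OpenMPI` module name, the shared `/scratch` path, and the assumption that `mympiprog.x` takes its input and output file names as arguments are placeholders, not prescriptions from this documentation:

```bash
#!/bin/bash
#PBS -A PROJECT_ID
#PBS -q qprod
#PBS -l select=2:ncpus=16:mpiprocs=16,walltime=02:00:00

# work in a job-specific directory on the shared scratch filesystem,
# which (unlike the node-local /lscratch) is visible from all nodes of the job
SCR=/scratch/$USER/$PBS_JOBID
mkdir -p $SCR ; cd $SCR || exit

# get the individual task from the tasklist with index from the PBS job array
TASK=$(sed -n "${PBS_ARRAY_INDEX}p" $PBS_O_WORKDIR/tasklist)

# copy the input file and the MPI executable to the shared scratch
cp $PBS_O_WORKDIR/$TASK input ; cp $PBS_O_WORKDIR/mympiprog.x .

# load an MPI module (illustrative name) and run the program across all allocated nodes
module load OpenMPI
mpiexec ./mympiprog.x input output

# copy the output file back to the submit directory
cp output $PBS_O_WORKDIR/$TASK.out
```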
### Submit the Job Array
To submit the job array, use the qsub -J command. The 900 jobs of the [example above](capacity-computing/#array_example) may be submitted like this:
```console
$ qsub -N JOBNAME -J 1-900 jobscript
12345[].dm2
```
In this example, we submit a job array of 900 subjobs. Each subjob will run on one full node and is assumed to take less than 2 hours (note the #PBS directives in the beginning of the jobscript file, don't forget to set your valid PROJECT_ID and desired queue).
Sometimes for testing purposes, you may need to submit a one-element only array. This is not allowed by PBSPro, but there's a workaround:
```console
$ qsub -N JOBNAME -J 9-10:2 jobscript
```
This will only choose the lower index (9 in this example) for submitting/running your job.
### Manage the Job Array
Check the status of the job array using the qstat command.
```console
$ qstat -a 12345[].dm2
dm2:
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
12345[].dm2     user2    qprod    xx          13516   1  16    --  00:50 B 00:02
```
When the status is B it means that some subjobs are already running.
Check the status of the first 100 subjobs using the qstat command.
```console
$ qstat -a 12345[1-100].dm2
dm2:
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
12345[1].dm2    user2    qprod    xx          13516   1  16    --  00:50 R 00:02
12345[2].dm2    user2    qprod    xx          13516   1  16    --  00:50 R 00:02
12345[3].dm2    user2    qprod    xx          13516   1  16    --  00:50 R 00:01
12345[4].dm2    user2    qprod    xx          13516   1  16    --  00:50 Q   --
     .             .        .      .           .      .   .     .    .   .   .
     .             .        .      .           .      .   .     .    .   .   .
12345[100].dm2  user2    qprod    xx          13516   1  16    --  00:50 Q   --
```
Delete the entire job array. Running subjobs will be killed, queued subjobs will be deleted.
```console
$ qdel 12345[].dm2
```
Deleting large job arrays may take a while.
Display status information for all user's jobs, job arrays, and subjobs.
```console
$ qstat -u $USER -t
```
Display status information for all user's subjobs.
```console
$ qstat -u $USER -tJ
```
Read more on job arrays in the [PBSPro Users guide](../pbspro/).
## GNU Parallel
!!! note
    Use GNU parallel to run many single core tasks on one node.
GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful when running single core jobs via the queue system on Anselm.
For more information and examples see the parallel man page:
```console
$ module add parallel
$ man parallel
```
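As a minimal illustration of what GNU parallel does (assuming the parallel module is loaded as above), it runs the given command once per input argument, several instances at a time; the -k option keeps the output in input order:

```console
$ parallel -k echo "processing {}" ::: file001 file002 file003
processing file001
processing file002
processing file003
```

In the jobscript below, the same mechanism is driven by -a tasklist, feeding input filenames to multiple instances of the jobscript itself.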
### GNU Parallel Jobscript
The GNU parallel shell executes multiple instances of the jobscript using all cores on the node. The instances execute different work, controlled by the $PARALLEL_SEQ variable.
Example:
Assume we have 101 input files with names beginning with "file" (e.g. file001, ..., file101). Assume we would like to use each of these input files with the program executable myprog.x, each as a separate single core job. We call these single core jobs tasks.
First, we create a tasklist file, listing all tasks - all input files in our example:
```console
$ find . -name 'file*' > tasklist
```
Then we create a jobscript:
```bash
#!/bin/bash
#PBS -A PROJECT_ID
#PBS -q qprod
#PBS -l select=1:ncpus=16,walltime=02:00:00
[ -z "$PARALLEL_SEQ" ] &&
{ module add parallel ; exec parallel -a $PBS_O_WORKDIR/tasklist $0 ; }
# change to local scratch directory
SCR=/lscratch/$PBS_JOBID/$PARALLEL_SEQ
mkdir -p $SCR ; cd $SCR || exit
# get individual task from tasklist
TASK=$1
# copy input file and executable to scratch
cp $PBS_O_WORKDIR/$TASK input
cp $PBS_O_WORKDIR/myprog.x .
# execute the calculation
./myprog.x < input > output
# copy output file to submit directory
cp output $PBS_O_WORKDIR/$TASK.out
```
In this example, tasks from the tasklist are executed via GNU parallel. The jobscript executes multiple instances of itself in parallel, on all cores of the node. Once an instance of the jobscript is finished, a new instance starts, until all entries in the tasklist are processed. The currently processed entry of the tasklist is passed as the $1 argument, so the variable $TASK expands to one of the input filenames from the tasklist. We copy the input file to local scratch memory, execute myprog.x, and copy the output file back to the submit directory under the $TASK.out name.
### Submit the Job
To submit the job, use the qsub command. The 101 task job of the [example above](capacity-computing/#gp_example) may be submitted as follows:
```console
$ qsub -N JOBNAME jobscript
12345.dm2
```
In this example, we submit a job of 101 tasks. 16 input files will be processed in parallel. The 101 tasks on 16 cores are assumed to complete in less than 2 hours.
!!! hint
Use #PBS directives at the beginning of the jobscript file, don't forget to set your valid PROJECT_ID and desired queue.
## Job Arrays and GNU Parallel
!!! note
Combine the Job arrays and GNU parallel for the best throughput of single core jobs
While job arrays are able to utilize all available computational nodes, the GNU parallel can be used to efficiently run multiple single-core jobs on a single node. The two approaches may be combined to utilize all available (current and future) resources to execute single core jobs.
!!! note
Every subjob in an array runs GNU parallel to utilize all cores on the node
### GNU Parallel, Shared jobscript
A combined approach, very similar to job arrays, can be taken. A job array is submitted to the queuing system and the subjobs run GNU parallel. The GNU parallel shell executes multiple instances of the jobscript using all of the cores on the node. The instances execute different work, controlled by the $PBS_ARRAY_INDEX and $PARALLEL_SEQ variables.
Example:
Assume we have 992 input files with names beginning with "file" (e.g. file001, ..., file992). Assume we would like to use each of these input files with the program executable myprog.x, each as a separate single core job. We call these single core jobs tasks.
First, we create a tasklist file, listing all tasks - all input files in our example:
```console
$ find . -name 'file*' > tasklist
```
Next, we create a file controlling how many tasks will be executed in one subjob:
```console
$ seq 32 > numtasks
```
Then we create a jobscript:
```bash
#!/bin/bash
#PBS -A PROJECT_ID
#PBS -q qprod
#PBS -l select=1:ncpus=16,walltime=02:00:00
[ -z "$PARALLEL_SEQ" ] &&
{ module add parallel ; exec parallel -a $PBS_O_WORKDIR/numtasks $0 ; }
# change to local scratch directory
SCR=/lscratch/$PBS_JOBID/$PARALLEL_SEQ
mkdir -p $SCR ; cd $SCR || exit
# get individual task from tasklist with index from the PBS job array and index from parallel
IDX=$(($PBS_ARRAY_INDEX + $PARALLEL_SEQ - 1))
TASK=$(sed -n "${IDX}p" $PBS_O_WORKDIR/tasklist)
[ -z "$TASK" ] && exit
# copy input file and executable to scratch
cp $PBS_O_WORKDIR/$TASK input
cp $PBS_O_WORKDIR/myprog.x .
# execute the calculation
./myprog.x < input > output
# copy output file to submit directory
cp output $PBS_O_WORKDIR/$TASK.out
```
In this example, the jobscript executes in multiple instances in parallel, on all cores of a computing node. The variable $TASK expands to one of the input filenames from the tasklist. We copy the input file to local scratch memory, execute myprog.x, and copy the output file back to the submit directory under the $TASK.out name. The numtasks file controls how many tasks will be run per subjob. Once a task is finished, a new task starts, until the number of tasks in the numtasks file is reached.
!!! note
Select subjob walltime and number of tasks per subjob carefully
When deciding on these values, keep in mind the following guiding rules (a worked example follows the list):
1. Let n = N/16, where N is the number of tasks per subjob, T is the expected walltime of a single task, and W is the subjob walltime. The inequality (n + 1) \* T < W should hold. A short subjob walltime improves scheduling and job throughput.
1. The number of tasks per subjob should be a multiple of 16.
1. These rules are valid only when all tasks have similar task walltimes T.
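A quick sanity check of these rules, with purely illustrative numbers (N and T here are assumptions, not recommendations):

```bash
# N = 32 tasks per subjob on a 16-core node, T = 20 minutes expected per task
N=32; CORES=16; T=20
n=$((N / CORES))                 # n = 2 tasks processed per core
MIN_MINUTES=$(( (n + 1) * T ))   # (n + 1) * T = 60 minutes
echo "request a subjob walltime greater than ${MIN_MINUTES} minutes"
```

With these numbers, the walltime=02:00:00 used in the jobscript above satisfies the inequality comfortably.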
### Submit the Job Array (-J)
To submit the job array, use the qsub -J command. The 992 task job of the [example above](capacity-computing/#combined_example) may be submitted like this:
```console
$ qsub -N JOBNAME -J 1-992:32 jobscript
12345[].dm2
```
In this example, we submit a job array of 31 subjobs. Note the -J 1-992:**32**; the step (32) must match the number of lines in the numtasks file (created by `seq 32` above). Each subjob will run on one full node and process 16 input files in parallel, 32 in total per subjob. Every subjob is assumed to complete in less than 2 hours.
!!! hint
Use #PBS directives at the beginning of the jobscript file, don't forget to set your valid PROJECT_ID and desired queue.
## Examples
Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run a huge number of jobs. We recommend trying out the examples before using this approach for production jobs.
Unzip the archive in an empty directory on Anselm and follow the instructions in the README file:
```console
$ unzip capacity.zip
$ cat README
```
# Compute Nodes
## Node Configuration
Anselm is a cluster of x86-64 Intel-based nodes built with Bull Extreme Computing bullx technology. The cluster contains four types of compute nodes.
### Compute Nodes Without Accelerators
* 180 nodes
* 2880 cores in total
* two Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
* 64 GB of physical memory per node
* one 500 GB SATA 2.5” 7.2 krpm HDD per node
* bullx B510 blade servers
* cn[1-180]
### Compute Nodes With a GPU Accelerator
* 23 nodes
* 368 cores in total
* two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
* 96 GB of physical memory per node
* one 500 GB SATA 2.5” 7.2 krpm HDD per node
* GPU accelerator 1x NVIDIA Tesla Kepler K20m per node
* bullx B515 blade servers
* cn[181-203]
### Compute Nodes With a MIC Accelerator
* 4 nodes
* 64 cores in total
* two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
* 96 GB of physical memory per node
* one 500 GB SATA 2.5” 7.2 krpm HDD per node
* MIC accelerator 1x Intel Xeon Phi 5110P per node
* bullx B515 blade servers
* cn[204-207]
### Fat Compute Nodes
* 2 nodes
* 32 cores in total
* two Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
* 512 GB of physical memory per node
* two 300 GB SAS 3.5” 15 krpm HDDs (RAID1) per node
* two 100GB SLC SSD per node
* bullx R423-E3 servers
* cn[208-209]
![](../img/bullxB510.png)
**Anselm bullx B510 servers**
### Compute Node Summary
| Node type | Count | Range | Memory | Cores | [Access](resources-allocation-policy/) |
| ---------------------------- | ----- | ----------- | ------ | ----------- | -------------------------------------- |
| Nodes without an accelerator | 180 | cn[1-180] | 64GB | 16 @ 2.4GHz | qexp, qprod, qlong, qfree, qprace, qatlas |
| Nodes with a GPU accelerator | 23 | cn[181-203] | 96GB | 16 @ 2.3GHz | qnvidia, qexp |
| Nodes with a MIC accelerator | 4 | cn[204-207] | 96GB | 16 @ 2.3GHz | qmic, qexp |
| Fat compute nodes | 2 | cn[208-209] | 512GB | 16 @ 2.4GHz | qfat, qexp |
## Processor Architecture
Anselm is equipped with Intel Sandy Bridge processors: the Intel Xeon E5-2665 (nodes without accelerators and fat nodes) and the Intel Xeon E5-2470 (nodes with accelerators). The processors support the 256-bit Advanced Vector Extensions (AVX) instruction set.
### Intel Sandy Bridge E5-2665 Processor
* eight-core
* speed: 2.4 GHz, up to 3.1 GHz using Turbo Boost Technology
* peak performance: 19.2 GFLOP/s per core
* caches:
* L2: 256 KB per core
* L3: 20 MB per processor
* memory bandwidth at the level of the processor: 51.2 GB/s
### Intel Sandy Bridge E5-2470 Processor
* eight-core
* speed: 2.3 GHz, up to 3.1 GHz using Turbo Boost Technology
* peak performance: 18.4 GFLOP/s per core
* caches:
* L2: 256 KB per core
* L3: 20 MB per processor
* memory bandwidth at the level of the processor: 38.4 GB/s
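These per-core peak figures follow directly from the 256-bit AVX capability of the Sandy Bridge core, which can execute up to 8 double-precision floating-point operations per cycle: 2.4 GHz x 8 FLOP/cycle = 19.2 GFLOP/s for the E5-2665 and 2.3 GHz x 8 FLOP/cycle = 18.4 GFLOP/s for the E5-2470.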
Nodes equipped with the Intel Xeon E5-2665 CPU have the PBS resource attribute cpu_freq set to 24; nodes equipped with the Intel Xeon E5-2470 CPU have cpu_freq set to 23.
```console
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16:cpu_freq=24 -I
```
In this example, we allocate 4 nodes, with 16 cores at 2.4 GHz per node.
Intel Turbo Boost Technology is used by default. You can disable it for all nodes of a job by using the resource attribute cpu_turbo_boost:
```console
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16 -l cpu_turbo_boost=0 -I
```
## Memory Architecture
In terms of memory configuration, the cluster contains three types of compute nodes.
### Compute Nodes Without Accelerators
* 2 sockets
* Memory Controllers are integrated into processors.
* 8 DDR3 DIMMs per node
* 4 DDR3 DIMMs per CPU
* 1 DDR3 DIMM per channel
* Data rate support: up to 1600MT/s
* Populated memory: 8 x 8 GB DDR3 DIMM 1600 MHz
### Compute Nodes With a GPU or MIC Accelerator
* 2 sockets
* Memory Controllers are integrated into processors.
* 6 DDR3 DIMMs per node
* 3 DDR3 DIMMs per CPU
* 1 DDR3 DIMM per channel
* Data rate support: up to 1600MT/s
* Populated memory: 6 x 16 GB DDR3 DIMM 1600 MHz
### Fat Compute Nodes
* 2 sockets
* Memory Controllers are integrated into processors.
* 16 DDR3 DIMMs per node
* 8 DDR3 DIMMs per CPU
* 2 DDR3 DIMMs per channel
* Data rate support: up to 1600MT/s
* Populated memory: 16 x 32 GB DDR3 DIMM 1600 MHz
# Hardware Overview
The Anselm cluster consists of 209 computational nodes named cn[1-209] of which 180 are regular compute nodes, 23 are GPU Kepler K20 accelerated nodes, 4 are MIC Xeon Phi 5110P accelerated nodes, and 2 are fat nodes. Each node is a powerful x86-64 computer, equipped with 16 cores (two eight-core Intel Sandy Bridge processors), at least 64 GB of RAM, and a local hard drive. User access to the Anselm cluster is provided by two login nodes login[1,2]. The nodes are interlinked through high speed InfiniBand and Ethernet networks. All nodes share a 320 TB /home disk for storage of user files. The 146 TB shared /scratch storage is available for scratch data.
The Fat nodes are equipped with a large amount (512 GB) of memory. Virtualization infrastructure provides resources to run long term servers and services in virtual mode. Fat nodes and virtual servers may access 45 TB of dedicated block storage. Accelerated nodes, fat nodes, and virtualization infrastructure are available [upon request](https://support.it4i.cz/rt) from a PI.
Schematic representation of the Anselm cluster. Each box represents a node (computer) or storage capacity:
![](../img/Anselm-Schematic-Representation.png)
The cluster compute nodes cn[1-207] are organized within 13 chassis.
There are four types of compute nodes:
* 180 compute nodes without an accelerator
* 23 compute nodes with a GPU accelerator - an NVIDIA Tesla Kepler K20m
* 4 compute nodes with a MIC accelerator - an Intel Xeon Phi 5110P
* 2 fat nodes - equipped with 512 GB of RAM and two 100 GB SSD drives
[More about Compute nodes](compute-nodes/).
GPU and accelerated nodes are available upon request, see the [Resources Allocation Policy](resources-allocation-policy/).
All of these nodes are interconnected through fast InfiniBand and Ethernet networks. [More about the Network](network/).
Every chassis provides an InfiniBand switch, marked **isw**, connecting all nodes in the chassis, as well as connecting the chassis to the upper level switches.
All of the nodes share a 320 TB /home disk for storage of user files. The 146 TB shared /scratch storage is available for scratch data. These file systems are provided by the Lustre parallel file system. There is also local disk storage available on all compute nodes in /lscratch. [More about Storage](storage/).
User access to the Anselm cluster is provided by two login nodes login1, login2, and data mover node dm1. [More about accessing the cluster.](shell-and-data-access/)
The parameters are summarized in the following tables:
| **In general** | |
| ------------------------------------------- | -------------------------------------------- |
| Primary purpose | High Performance Computing |
| Architecture of compute nodes | x86-64 |
| Operating system | Linux (CentOS) |
| [**Compute nodes**](compute-nodes/) | |
| Total | 209 |
| Processor cores | 16 (2 x 8 cores) |
| RAM | min. 64 GB, min. 4 GB per core |
| Local disk drive | yes - usually 500 GB |
| Compute network | InfiniBand QDR, fully non-blocking, fat-tree |
| w/o accelerator | 180, cn[1-180] |
| GPU accelerated | 23, cn[181-203] |
| MIC accelerated | 4, cn[204-207] |
| Fat compute nodes | 2, cn[208-209] |
| **In total** | |
| Total theoretical peak performance (Rpeak) | 94 TFLOP/s |
| Total max. LINPACK performance (Rmax) | 73 TFLOP/s |
| Total amount of RAM | 15.136 TB |
| Node | Processor | Memory | Accelerator |
| ---------------- | --------------------------------------- | ------ | -------------------- |
| w/o accelerator | 2 x Intel Sandy Bridge E5-2665, 2.4 GHz | 64 GB | - |
| GPU accelerated | 2 x Intel Sandy Bridge E5-2470, 2.3 GHz | 96 GB | NVIDIA Kepler K20m |
| MIC accelerated | 2 x Intel Sandy Bridge E5-2470, 2.3 GHz | 96 GB | Intel Xeon Phi 5110P |
| Fat compute node | 2 x Intel Sandy Bridge E5-2665, 2.4 GHz | 512 GB | - |
For more details refer to [Compute nodes](compute-nodes/), [Storage](storage/), and [Network](network/).
# Introduction
Welcome to Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totalling 3344 compute cores with 15 TB RAM, giving over 94 TFLOP/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64 GB of RAM, and a 500 GB hard disk drive. Nodes are interconnected through a fully non-blocking fat-tree InfiniBand network, and are equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).
The cluster runs with an [operating system](software/operating-system/) which is compatible with the RedHat [Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/).
The user data shared file-system (HOME, 320 TB) and job data shared file-system (SCRATCH, 146 TB) are available to users.
The PBS Professional workload manager provides [computing resources allocations and job execution](resources-allocation-policy/).
Read more on how to [apply for resources](../general/applying-for-resources/), [obtain login credentials](../general/obtaining-login-credentials/obtaining-login-credentials/) and [access the cluster](shell-and-data-access/).
# Job Scheduling
## Job Execution Priority
The scheduler gives each job an execution priority and then uses this job execution priority to select which job(s) to run.
Job execution priority on Anselm is determined by these job properties (in order of importance):
1. queue priority
1. fair-share priority
1. eligible time
### Queue Priority
Queue priority is the priority of the queue in which the job is waiting prior to execution.
Queue priority has the biggest impact on job execution priority. The execution priority of jobs in higher priority queues is always greater than the execution priority of jobs in lower priority queues. Other properties of jobs used for determining the job execution priority (fair-share priority, eligible time) cannot compete with queue priority.
Queue priorities can be seen at <https://extranet.it4i.cz/anselm/queues>
### Fair-Share Priority
Fair-share priority is a priority calculated on the basis of recent usage of resources. Fair-share priority is calculated per project, with all members of a project sharing the same fair-share priority. Projects with higher recent usage have a lower fair-share priority than projects with lower or no recent usage.
Fair-share priority is used for ranking jobs with equal queue priority.
Fair-share priority is calculated as
---8<--- "fairshare_formula.md"
where MAX_FAIRSHARE has value 1E6,
usage<sub>Project</sub> is accumulated usage by all members of a selected project,
usage<sub>Total</sub> is total usage by all users, across all projects.
Usage counts allocated core-hours (`ncpus x walltime`). Usage decays, halving at intervals of 168 hours (one week).
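For illustration, because usage is halved every 168 hours, 1024 core-hours consumed now contribute 1024 core-hours to the project's usage today, 512 core-hours after one week, and 256 core-hours after two weeks; after k full weeks, the remaining contribution is `usage / 2^k`.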
Jobs queued in the queue qexp are not used to calculate the project's usage.
!!! note
Calculated usage and fair-share priority can be seen at <https://extranet.it4i.cz/anselm/projects>.
    The calculated fair-share priority can also be seen in the Resource_List.fairshare attribute of a job.
### Eligible Time
Eligible time is the amount of time (in seconds) a job accrues while it is waiting in the queue and eligible to run. Jobs with higher eligible time gain higher priority.
Eligible time has the least impact on execution priority. Eligible time is used for sorting jobs with equal queue priority and fair-share priority. It is very, very difficult for eligible time to compete with fair-share priority.
Eligible time can be seen in the eligible_time attribute of a job.
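Both attributes can be inspected in the full job status listing; for example (the job ID is illustrative):

```console
$ qstat -f 12345.srv11 | grep -E "eligible_time|fairshare"
```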
### Formula
Job execution priority (job sort formula) is calculated as:
---8<--- "job_sort_formula.md"
### Job Backfilling
The Anselm cluster uses job backfilling.
Backfilling means fitting smaller jobs around the higher-priority jobs that the scheduler is going to run next, in such a way that the higher-priority jobs are not delayed. Backfilling allows us to keep resources from becoming idle when the top job (the job with the highest execution priority) cannot run.
The scheduler makes a list of jobs to run in order of execution priority. The scheduler looks for smaller jobs that can fit into the usage gaps around the highest-priority jobs in the list. The scheduler looks in the prioritized list of jobs and chooses the highest-priority smaller jobs that fit. Filler jobs are run only if they will not delay the start time of top jobs.
This means that jobs with lower execution priority can be run before jobs with higher execution priority.
!!! note
It is **very beneficial to specify the walltime** when submitting jobs.
Specifying more accurate walltime enables better scheduling, better execution times, and better resource usage. Jobs with suitable (small) walltime can be backfilled - and overtake job(s) with a higher priority.
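For example (the values are purely illustrative), a job expected to finish within 30 minutes should request roughly that much walltime instead of relying on the queue's maximum:

```console
$ qsub -A PROJECT_ID -q qprod -l select=4:ncpus=16,walltime=00:30:00 ./myjob
```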
---8<--- "mathjax.md"
# Job Submission and Execution
## Job Submission
When allocating computational resources for the job, specify:
1. a suitable queue for your job (the default is qprod)
1. the number of computational nodes required
1. the number of cores per node required
1. the maximum wall time allocated to your calculation; note that jobs exceeding the maximum wall time will be killed
1. your Project ID
1. a Jobscript or interactive switch
!!! note
Use the **qsub** command to submit your job to a queue for allocation of computational resources.
Submit the job using the qsub command:
```console
$ qsub -A Project_ID -q queue -l select=x:ncpus=y,walltime=[[hh:]mm:]ss[.ms] jobscript
```
The qsub command submits the job to the queue, i.e. the qsub command creates a request to the PBS Job manager for allocation of specified resources. The resources will be allocated when available, subject to the above described policies and constraints. **After the resources are allocated, the jobscript or interactive shell is executed on the first of the allocated nodes.**
!!! note
    The PBS `nodes` statement (qsub -l nodes=nodespec) is not supported on the Anselm cluster; use the `select` statement instead.
### Job Submission Examples
```console
$ qsub -A OPEN-0-0 -q qprod -l select=64:ncpus=16,walltime=03:00:00 ./myjob
```
In this example, we allocate 64 nodes, 16 cores per node, for 3 hours. We allocate these resources via the qprod queue, consumed resources will be accounted to the Project identified by Project ID OPEN-0-0. The jobscript 'myjob' will be executed on the first node in the allocation.
```console
$ qsub -q qexp -l select=4:ncpus=16 -I
```
In this example, we allocate 4 nodes, 16 cores per node, for 1 hour. We allocate these resources via the qexp queue. The resources will be available interactively.
```console
$ qsub -A OPEN-0-0 -q qnvidia -l select=10:ncpus=16 ./myjob
```
In this example, we allocate 10 NVIDIA-accelerated nodes, 16 cores per node, for 24 hours. We allocate these resources via the qnvidia queue. The jobscript 'myjob' will be executed on the first node in the allocation.
```console
$ qsub -A OPEN-0-0 -q qfree -l select=10:ncpus=16 ./myjob
```
In this example, we allocate 10 nodes, 16 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. The jobscript myjob will be executed on the first node in the allocation.
All qsub options may be [saved directly into the jobscript](#example-jobscript-for-mpi-calculation-with-preloaded-inputs). In such cases, it is not necessary to specify any options for qsub.
```console
$ qsub ./myjob
```
By default, the PBS batch system sends an e-mail only when the job is aborted. Disabling mail events completely can be done as follows:
```console
$ qsub -m n
```
## Advanced Job Placement
### Placement by Name
Specific nodes may be allocated via PBS:
```console
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16:host=cn171+1:ncpus=16:host=cn172 -I
```
In this example, we allocate nodes cn171 and cn172, all 16 cores per node, for 24 hours. Consumed resources will be accounted to the Project identified by Project ID OPEN-0-0. The resources will be available interactively.
### Placement by CPU Type
Nodes equipped with an Intel Xeon E5-2665 CPU have a base clock frequency of 2.4 GHz, while nodes equipped with an Intel Xeon E5-2470 CPU have a base frequency of 2.3 GHz (see the Compute Nodes section for details). Nodes may be selected via the PBS resource attribute cpu_freq.
| CPU Type | base freq. | Nodes | cpu_freq attribute |
| ------------------ | ---------- | ---------------------- | ------------------ |
| Intel Xeon E5-2665 | 2.4GHz | cn[1-180], cn[208-209] | 24 |
| Intel Xeon E5-2470 | 2.3GHz | cn[181-207] | 23 |
```console
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16:cpu_freq=24 -I
```
In this example, we allocate 4 nodes, 16 cores per node, selecting only the nodes with Intel Xeon E5-2665 CPU.
### Placement by IB Switch
Groups of computational nodes are connected to chassis-integrated InfiniBand switches. These switches form the leaf switch layer of the [InfiniBand network](network/) fat tree topology. Nodes sharing the leaf switch can communicate most efficiently. Sharing the same switch prevents hops in the network and facilitates unbiased, highly efficient network communication.
Nodes sharing the same switch may be selected via the PBS resource attribute ibswitch. Values of this attribute are iswXX, where XX is the switch number. The node-switch mapping can be seen in the [Hardware Overview](hardware-overview/) section.
We recommend allocating compute nodes to a single switch when best possible computational network performance is required to run the job efficiently:
```console
$ qsub -A OPEN-0-0 -q qprod -l select=18:ncpus=16:ibswitch=isw11 ./myjob
```
In this example, we request all of the 18 nodes sharing the isw11 switch for 24 hours. A full chassis will be allocated.
## Advanced Job Handling
### Selecting Turbo Boost Off
Intel Turbo Boost Technology is on by default. We strongly recommend keeping the default.
If necessary (such as in the case of benchmarking) you can disable the Turbo for all nodes of the job by using the PBS resource attribute cpu_turbo_boost:
```console
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16 -l cpu_turbo_boost=0 -I
```
More information about Intel Turbo Boost can be found in the TurboBoost section.
### Advanced Examples
In the following example, we select an allocation for benchmarking a very special and demanding MPI program. We request Turbo off, and 2 full chassis of compute nodes (nodes sharing the same IB switches) for 30 minutes:
```console
$ qsub -A OPEN-0-0 -q qprod
-l select=18:ncpus=16:ibswitch=isw10:mpiprocs=1:ompthreads=16+18:ncpus=16:ibswitch=isw20:mpiprocs=16:ompthreads=1
-l cpu_turbo_boost=0,walltime=00:30:00
-N Benchmark ./mybenchmark
```
The MPI processes will be distributed differently on the nodes connected to the two switches. On the isw10 nodes, we will run 1 MPI process per node with 16 threads per process, on isw20 nodes we will run 16 plain MPI processes.
Although this example is somewhat artificial, it demonstrates the flexibility of the qsub command options.
## Job Management
!!! note
Check status of your jobs using the **qstat** and **check-pbs-jobs** commands
```console
$ qstat -a
$ qstat -a -u username
$ qstat -an -u username
$ qstat -f 12345.srv11
```
Example:
```console
$ qstat -a
srv11:
Req'd Req'd Elap
Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time
--------------- -------- -- |---|---| ------ --- --- ------ ----- - -----
16287.srv11 user1 qlong job1 6183 4 64 -- 144:0 R 38:25
16468.srv11 user1 qlong job2 8060 4 64 -- 144:0 R 17:44
16547.srv11 user2 qprod job3x 13516 2 32 -- 48:00 R 00:58
```
In this example, user1 and user2 are running jobs named job1, job2, and job3x. The jobs job1 and job2 are each using 4 nodes, 16 cores per node. job1 has already run for 38 hours and 25 minutes, and job2 for 17 hours 44 minutes. job1 has already consumed `64 x 38.41 = 2458.6` core-hours and job3x has already consumed `32 x 0.9667 = 30.93` core-hours. These consumed core-hours will be accounted for on the respective project accounts, regardless of whether the allocated cores were actually used for computations.
The check-pbs-jobs command allows you to check the status of your jobs: check for the presence of the user's PBS job processes on the execution hosts, display the load and processes, display the job's standard and error output, and continuously display (tail -f) the job's standard or error output:
```console
$ check-pbs-jobs --check-all
$ check-pbs-jobs --print-load --print-processes
$ check-pbs-jobs --print-job-out --print-job-err
$ check-pbs-jobs --jobid JOBID --check-all --print-all
$ check-pbs-jobs --jobid JOBID --tailf-job-out
```
Examples:
```console
$ check-pbs-jobs --check-all
JOB 35141.dm2, session_id 71995, user user2, nodes cn164,cn165
Check session id: OK
Check processes
cn164: OK
cn165: No process
```
In this example we see that job 35141.dm2 is not currently running any processes on the allocated node cn165, which may indicate an execution error.
```console
$ check-pbs-jobs --print-load --print-processes
JOB 35141.dm2, session_id 71995, user user2, nodes cn164,cn165
Print load
cn164: LOAD: 16.01, 16.01, 16.00
cn165: LOAD: 0.01, 0.00, 0.01
Print processes
%CPU CMD
cn164: 0.0 -bash
cn164: 0.0 /bin/bash /var/spool/PBS/mom_priv/jobs/35141.dm2.SC
cn164: 99.7 run-task
...
```
In this example we see that job 35141.dm2 is currently running a process run-task on node cn164, using one thread only, while node cn165 is empty, which may indicate an execution error.
```console
$ check-pbs-jobs --jobid 35141.dm2 --print-job-out
JOB 35141.dm2, session_id 71995, user user2, nodes cn164,cn165
Print job standard output:
======================== Job start ==========================
Started at : Fri Aug 30 02:47:53 CEST 2013
Script name : script
Run loop 1
Run loop 2
Run loop 3
```
In this example, we see the actual output (some iteration loops) of the job 35141.dm2.
!!! note
Manage your queued or running jobs, using the **qhold**, **qrls**, **qdel**, **qsig** or **qalter** commands
You may release your allocation at any time using the qdel command:
```console
$ qdel 12345.srv11
```
You may forcibly kill a running job using the qsig command:
```console
$ qsig -s 9 12345.srv11
```
Learn more by reading the PBS man page:
```console
$ man pbs_professional
```
## Job Execution
### Jobscript
!!! note
Prepare the jobscript to run batch jobs in the PBS queue system
The jobscript is a user-made script controlling a sequence of commands for executing the calculation. It is often written in bash, though other scripting languages may be used as well. The jobscript is supplied to the PBS **qsub** command as an argument and is executed by the PBS Professional workload manager.
!!! note
    The jobscript or interactive shell is executed on the first of the allocated nodes.
```console
$ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob
$ qstat -n -u username
srv11:
Req'd Req'd Elap
Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time
--------------- -------- -- |---|---| ------ --- --- ------ ----- - -----
15209.srv11 username qexp Name0 5530 4 64 -- 01:00 R 00:00
cn17/0*16+cn108/0*16+cn109/0*16+cn110/0*16
```
In this example, the nodes cn17, cn108, cn109, and cn110 were allocated for 1 hour via the qexp queue. The jobscript myjob will be executed on the node cn17, while the nodes cn108, cn109, and cn110 are available for use as well.
The jobscript or interactive shell is by default executed in the home directory:
```console
$ qsub -q qexp -l select=4:ncpus=16 -I
qsub: waiting for job 15210.srv11 to start
qsub: job 15210.srv11 ready
$ pwd
/home/username
```
In this example, 4 nodes were allocated interactively for 1 hour via the qexp queue. The interactive shell is executed in the home directory.
!!! note
All nodes within the allocation may be accessed via ssh. Unallocated nodes are not accessible to the user.
The allocated nodes are accessible via ssh from login nodes. The nodes may access each other via ssh as well.
Calculations on the allocated nodes may be executed remotely via MPI, ssh, pdsh, or clush. You may find out which nodes belong to the allocation by reading the $PBS_NODEFILE file:
```console
$ qsub -q qexp -l select=4:ncpus=16 -I
qsub: waiting for job 15210.srv11 to start
qsub: job 15210.srv11 ready
$ pwd
/home/username
$ sort -u $PBS_NODEFILE
cn17.bullx
cn108.bullx
cn109.bullx
cn110.bullx
$ pdsh -w cn17,cn[108-110] hostname
cn17: cn17
cn108: cn108
cn109: cn109
cn110: cn110
```
In this example, the hostname program is executed via pdsh from the interactive shell. The execution runs on all four allocated nodes. The same result would be achieved if the pdsh is called from any of the allocated nodes or from the login nodes.
### Example Jobscript for MPI Calculation
!!! note
Production jobs must use the /scratch directory for I/O
The recommended way to run production jobs is to change to the /scratch directory early in the jobscript, copy all inputs to /scratch, execute the calculations and copy outputs to the home directory.
```bash
#!/bin/bash
# change to scratch directory, exit on failure
SCRDIR=/scratch/$USER/myjob
mkdir -p $SCRDIR
cd $SCRDIR || exit
# copy input file to scratch
cp $PBS_O_WORKDIR/input .
cp $PBS_O_WORKDIR/mympiprog.x .
# load the MPI module
ml OpenMPI
# execute the calculation
mpirun -pernode ./mympiprog.x
# copy output file to home
cp output $PBS_O_WORKDIR/.
#exit
exit
```
In this example, a directory in /home holds the input file input and the executable mympiprog.x. We create the myjob directory on the /scratch filesystem, copy the input and executable files from the /home directory where qsub was invoked ($PBS_O_WORKDIR) to /scratch, execute the MPI program mympiprog.x, and copy the output file back to the /home directory. mympiprog.x is executed as one process per node, on all allocated nodes.
!!! note
Consider preloading inputs and executables onto [shared scratch](storage/) memory before the calculation starts.
In some cases, it may be impractical to copy the inputs to the scratch memory and the outputs to the home directory. This is especially true when very large input and output files are expected, or when the files should be reused by a subsequent calculation. In such cases, it is the users' responsibility to preload the input files on shared /scratch memory before the job submission, and retrieve the outputs manually after all calculations are finished.
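A minimal sketch of such manual preloading, run on a login node before submission (the directory, file, and jobscript names are illustrative and match the example below):

```console
$ mkdir -p /scratch/$USER/myjob
$ cp input mympiprog.x /scratch/$USER/myjob/
$ qsub ./myjob
```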
!!! note
Store the qsub options within the jobscript. Use **mpiprocs** and **ompthreads** qsub options to control the MPI job execution.
### Example Jobscript for MPI Calculation With Preloaded Inputs
Example jobscript for an MPI job with preloaded inputs and executables, options for qsub are stored within the script:
```bash
#!/bin/bash
#PBS -q qprod
#PBS -N MYJOB
#PBS -l select=100:ncpus=16:mpiprocs=1:ompthreads=16
#PBS -A OPEN-0-0
# change to scratch directory, exit on failure
SCRDIR=/scratch/$USER/myjob
cd $SCRDIR || exit
# load the MPI module
ml OpenMPI
# execute the calculation
mpirun ./mympiprog.x
#exit
exit
```
In this example, input and executable files are assumed to be preloaded manually in the /scratch/$USER/myjob directory. Note the **mpiprocs** and **ompthreads** qsub options controlling the behavior of the MPI execution. mympiprog.x is executed as one process per node, on all 100 allocated nodes. If mympiprog.x implements OpenMP threads, it will run 16 threads per node.
More information can be found in the [Running OpenMPI](software/mpi/Running_OpenMPI/) and [Running MPICH2](software/mpi/running-mpich2/)
sections.
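For comparison, a pure MPI layout with one rank per core and no OpenMP threading would use select options such as the following (illustrative; adjust the node count to your job):

```bash
#PBS -l select=100:ncpus=16:mpiprocs=16:ompthreads=1
```

With this specification, mpirun starts 16 MPI processes per node, as in the two-switch benchmark example above.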
### Example Jobscript for Single Node Calculation
!!! note
The local scratch directory is often useful for single node jobs. Local scratch memory will be deleted immediately after the job ends.
Example jobscript for single node calculation, using [local scratch](storage/) memory on the node:
```bash
#!/bin/bash
# change to local scratch directory
cd /lscratch/$PBS_JOBID || exit
# copy input file to scratch
cp $PBS_O_WORKDIR/input .
cp $PBS_O_WORKDIR/myprog.x .
# execute the calculation
./myprog.x
# copy output file to home
cp output $PBS_O_WORKDIR/.
#exit
exit
```
In this example, a directory in /home holds the input file input and the executable myprog.x. We copy the input and executable files from the home directory where qsub was invoked ($PBS_O_WORKDIR) to local scratch memory /lscratch/$PBS_JOBID, execute myprog.x, and copy the output file back to the /home directory. myprog.x runs on one node only and may use threads.
### Other Jobscript Examples
Further jobscript examples may be found in the software section and the [Capacity computing](capacity-computing/) section.