Commit 26f2d41e authored by Lubomir Prda

Some files with good spelling merged with master

parent 18ce4b6d
Showing with 433 additions and 31 deletions
@@ -33,6 +33,7 @@ TotalView
Valgrind
ParaView
OpenFOAM
MAX_FAIRSHARE
MPI4Py
MPICH2
PETSc
@@ -86,6 +87,173 @@ AnyConnect
X11
backfilling
backfilled
SCP
Lustre
QDR
TFLOP
ncpus
myjob
pernode
mpiprocs
ompthreads
qprace
runtime
SVS
ppn
Multiphysics
aeroacoustics
turbomachinery
CFD
LS-DYNA
APDL
MAPDL
multiphysics
AUTODYN
RSM
Molpro
initio
parallelization
NWChem
SCF
ISV
profiler
Pthreads
profilers
OTF
PAPI
PCM
uncore
pre-processing
prepend
CXX
prepended
POMP2
Memcheck
unaddressable
OTF2
GPI-2
GASPI
GPI
MKL
IPP
TBB
GSL
Omics
VNC
Scalasca
IFORT
interprocedural
IDB
cloop
qcow
qcow2
vmdk
vdi
virtio
paravirtualized
Gbit
tap0
UDP
TCP
preload
qfat
Rmpi
DCT
datasets
dataset
preconditioners
partitioners
PARDISO
PaStiX
SuiteSparse
SuperLU
ExodusII
NetCDF
ParMETIS
multigrid
HYPRE
SPAI
Epetra
EpetraExt
Tpetra
64-bit
Belos
GMRES
Amesos
IFPACK
preconditioner
Teuchos
Makefiles
SAXPY
NVCC
VCF
HGMD
HUMSAVAR
ClinVar
indels
CIBERER
exomes
tmp
SSHFS
RSYNC
unmount
Cygwin
CygwinX
RFB
TightVNC
TigerVNC
GUIs
XLaunch
UTF-8
numpad
PuTTYgen
OpenSSH
IE11
x86
r21u01n577
7120P
interprocessor
IPN
toolchains
toolchain
APIs
easyblocks
GM200
GeForce
GTX
IRUs
ASIC
backplane
ICEX
IRU
PFLOP
T950B
ifconfig
inet
addr
checkbox
appfile
programmatically
http
https
filesystem
phono3py
HDF
splitted
automize
llvm
PGI
GUPC
BUPC
IBV
Aislinn
nondeterminism
stdout
stderr
i.e.
pthreads
uninitialised
broadcasted
- docs.it4i/anselm-cluster-documentation/environment-and-modules.md
MODULEPATH
bashrc
@@ -119,6 +287,7 @@ Rmax
E5-2665
E5-2470
P5110
isw
- docs.it4i/anselm-cluster-documentation/introduction.md
RedHat
- docs.it4i/anselm-cluster-documentation/job-priority.md
@@ -126,6 +295,8 @@ walltime
qexp
_List.fairshare
_time
_FAIRSHARE
1E6
- docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md
15209.srv11
qsub
@@ -146,6 +317,15 @@ jobscript
cn108
cn109
cn110
Name0
cn17
_NODEFILE
_O
_WORKDIR
mympiprog.x
_JOBID
myprog.x
openmpi
- docs.it4i/anselm-cluster-documentation/network.md
ib0
- docs.it4i/anselm-cluster-documentation/prace.md
@@ -153,14 +333,19 @@ PRACE
qfree
it4ifree
it4i.portal.clients
prace
1h
- docs.it4i/anselm-cluster-documentation/shell-and-data-access.md
VPN
- docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md
ANSYS
CFX
cfx.pbs
_r
ane3fl
- docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
mapdl.pbs
_dy
- docs.it4i/anselm-cluster-documentation/software/ansys/ls-dyna.md
HPC
lsdyna.pbs
@@ -175,9 +360,25 @@ Makefile
- docs.it4i/anselm-cluster-documentation/software/gpi2.md
gcc
cn79
helloworld
_gpi.c
ibverbs
gaspi
_logger
- docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md
Haswell
CPUs
ipo
O3
vec
xAVX
omp
simd
ivdep
pragmas
openmp
xCORE-AVX2
axCORE-AVX2
- docs.it4i/anselm-cluster-documentation/software/kvirtualization.md
rc.local
runlevel
@@ -189,6 +390,8 @@ VDE
smb.conf
TMPDIR
run.bat.
slirp
NATs
- docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md
NumPy
- docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md
@@ -197,33 +400,73 @@ matlabcode.m
output.out
matlabcodefile
sched
_feature
- docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
UV2000
maxNumCompThreads
SalomonPBSPro
- docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
_THREADS
_NUM
- docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md
CMake-aware
Makefile.export
_PACKAGE
_CXX
_COMPILER
_INCLUDE
_DIRS
_LIBRARY
- docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md
ansysdyna.pbs
- docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md
svsfem.cz
_
- docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md
libmpiwrap-amd64-linux
O0
valgrind
malloc
_PRELOAD
- docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md
cn204
_LIBS
MAGMAROOT
_magma
_server
_anselm
_from
_mic.sh
_dgetrf
_mic
_03.pdf
- docs.it4i/anselm-cluster-documentation/software/paraview.md
cn77
localhost
v4.0.1
- docs.it4i/anselm-cluster-documentation/storage.md
ssh.du1.cesnet.cz
Plzen
ssh.du2.cesnet.cz
ssh.du3.cesnet.cz
tier1
_home
_cache
_tape
- docs.it4i/salomon/environment-and-modules.md
icc
ictce
ifort
imkl
intel
gompi
goolf
BLACS
iompi
iccifort
- docs.it4i/salomon/hardware-overview.md
HW
E5-4627v2
- docs.it4i/salomon/job-submission-and-execution.md
15209.isrv5
r21u01n577
@@ -248,6 +491,7 @@ mkdir
mympiprog.x
mpiexec
myprog.x
r4i7n0.ib0.smc.salomon.it4i.cz
- docs.it4i/salomon/7d-enhanced-hypercube.md
cns1
cns576
@@ -256,7 +500,165 @@ r4i7n17
cns577
cns1008
r37u31n1008
7D
- docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
qsub
it4ifree
it4i.portal.clients
- docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md
anslic
_admin
- docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
_DIR
- docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
EDU
comsol
_matlab.pbs
_job.m
mphstart
- docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md
perf-report
perf
txt
html
mympiprog
_32p
- docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
Hotspots
- docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md
scorep
- docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
edu
ansys
_features
_state.txt
f1
matlab
acfd
_ansys
_acfd
_aa
_comsol
HEATTRANSFER
_HEATTRANSFER
COMSOLBATCH
_COMSOLBATCH
STRUCTURALMECHANICS
_STRUCTURALMECHANICS
_matlab
_Toolbox
_Image
_Distrib
_Comp
_Engine
_Acquisition
pmode
matlabpool
- docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md
mpirun
BLAS1
FFT
KMP
_AFFINITY
GOMP
_CPU
bullxmpi-1
mpich2
- docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md
bysocket
bycore
- docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md
gcc3.3.3
pthread
fftw3
lfftw3
_threads-lfftw3
_omp
icc3.3.3
FFTW2
gcc2.1.5
fftw2
lfftw
_threads
icc2.1.5
fftw-mpi3
_mpi
fftw3-mpi
fftw2-mpi
IntelMPI
- docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md
dwt.c
mkl
lgsl
- docs.it4i/anselm-cluster-documentation/software/numerical-libraries/hdf5.md
icc
hdf5
_INC
_SHLIB
_CPP
_LIB
_F90
gcc49
- docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md
_Dist
- docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md
lcublas
- docs.it4i/anselm-cluster-documentation/software/operating-system.md
6.x
- docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/cygwin-and-x11-forwarding.md
startxwin
cygwin64binXWin.exe
tcp
- docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
Xming
XWin.exe.
- docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/pageant.md
_rsa.ppk
- docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/puttygen.md
_keys
organization.example.com
_rsa
- docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md
vpnui.exe
- docs.it4i/salomon/ib-single-plane-topology.md
36-port
Mcell.pdf
r21-r38
nodes.pdf
- docs.it4i/salomon/introduction.md
E5-2680v3
- docs.it4i/salomon/network.md
r4i1n0
r4i1n1
r4i1n2
r4i1n3
ip
- docs.it4i/salomon/software/ansys/setting-license-preferences.md
ansys161
- docs.it4i/salomon/software/ansys/workbench.md
mpifile.txt
solvehandlers.xml
- docs.it4i/salomon/software/chemistry/phono3py.md
vasprun.xml
disp-XXXXX
disp
_fc3.yaml
ir
_grid
_points.yaml
gofree-cond1
- docs.it4i/salomon/software/compilers.md
HPF
- docs.it4i/salomon/software/comsol/licensing-and-available-versions.md
ver
- docs.it4i/salomon/software/debuggers/aislinn.md
test.cpp
- docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md
vtune
_update1
- docs.it4i/salomon/software/debuggers/valgrind.md
EBROOTVALGRIND
- docs.it4i/salomon/software/intel-suite/intel-advisor.md
O2
- docs.it4i/salomon/software/intel-suite/intel-compilers.md
UV1
@@ -24,7 +24,7 @@ fi
```
!!! note
Do not run commands that write to standard output (echo, module list, etc.) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (SCP, PBS) of your account! Consider using the SSH session interactivity test shown in the previous example for such commands.
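A minimal sketch of such an interactivity guard in .bashrc (assuming a test on $PS1, as in the documentation's example above; the exact test may differ):

```bash
# run output-producing commands (echo, module list, ...) only in interactive shells,
# so that non-interactive SSH sessions (SCP, PBS) keep a clean standard output
if [ -n "$PS1" ]; then
    module list
fi
```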
## Application Modules
...
@@ -322,7 +322,7 @@ cd $SCRDIR || exit
cp $PBS_O_WORKDIR/input .
cp $PBS_O_WORKDIR/mympiprog.x .
# load the MPI module
module load openmpi
# execute the calculation
@@ -360,7 +360,7 @@ Example jobscript for an MPI job with preloaded inputs and executables, options
SCRDIR=/scratch/$USER/myjob
cd $SCRDIR || exit
# load the MPI module
module load openmpi
# execute the calculation
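For orientation, a complete jobscript of this kind might look as follows (a sketch only; the queue, node/core counts and program names are illustrative assumptions, not the documentation's exact example):

```bash
#!/bin/bash
#PBS -q qprod
#PBS -l select=2:ncpus=16:mpiprocs=16

# create and enter a scratch working directory
SCRDIR=/scratch/$USER/myjob
mkdir -p $SCRDIR
cd $SCRDIR || exit

# copy input and executable from the submission directory
cp $PBS_O_WORKDIR/input .
cp $PBS_O_WORKDIR/mympiprog.x .

# load the MPI module and run the calculation
module load openmpi
mpiexec ./mympiprog.x

# copy results back and clean up
cp output $PBS_O_WORKDIR/.
exit
```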
...
@@ -49,7 +49,7 @@ echo Machines: $hl
The header of the PBS file (above) is common, and its description can be found on [this site](../../job-submission-and-execution/). SVS FEM recommends requesting resources with the keywords nodes and ppn. These keywords directly address the number of nodes (computers) and cores (ppn) that will be utilized in the job; the rest of the code also assumes this structure of allocated resources.
The working directory has to be created before the PBS job is sent to the queue. The input file should be in the working directory, or the full path to the input file has to be specified. The input file has to be a common CFX def file, which is attached to the CFX solver via the -def parameter.
The **license** should be selected with the -P parameter (capital **P**). Licensed products are the following: aa_r (ANSYS **Academic** Research) and ane3fl (ANSYS Multiphysics, **Commercial**).
[More about licensing here](licensing/)
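A sketch of such a resource request and solver invocation (the solver command and its options are assumptions for illustration; use the cfx.pbs script referenced in the documentation for the supported form):

```bash
#!/bin/bash
#PBS -l nodes=2:ppn=16
#PBS -q qprod

# def file attached via -def, license selected via -P (aa_r = ANSYS Academic Research)
cfx5solve -def input.def -P aa_r
```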
@@ -4,7 +4,7 @@ Allinea Forge consists of two tools - debugger DDT and profiler MAP.
Allinea DDT is a commercial debugger primarily for debugging parallel MPI or OpenMP programs. It also has support for GPU (CUDA) and Intel Xeon Phi accelerators. DDT provides all the standard debugging features (stack trace, breakpoints, watches, view variables, threads etc.) for every thread running as part of your program, or for every process - even if these processes are distributed across a cluster using an MPI implementation.
Allinea MAP is a profiler for C/C++/Fortran HPC codes. It is designed for profiling parallel code that uses Pthreads, OpenMP or MPI.
## License and Limitations for Anselm Users
...
@@ -29,7 +29,7 @@ Instead of [running your MPI program the usual way](../mpi/), use the perf r
$ perf-report mpirun ./mympiprog.x
```
The MPI program will run as usual. The perf-report creates two additional files, in \*.txt and \*.html format, containing the performance report. Note that [demanding MPI codes should be run within the queue system](../../job-submission-and-execution/).
## Example
...
@@ -30,7 +30,7 @@ CUBE is a graphical application. Refer to Graphical User Interface documentation
!!! note
Analyzing large data sets can consume a large amount of CPU and RAM. Do not perform large analyses on login nodes.
After loading the appropriate module, simply launch the `cube` command, or alternatively use the `scalasca -examine` command to launch the GUI. Note that for Scalasca data sets, if you do not analyze the data with `scalasca -examine` before opening them with CUBE, not all performance data will be available.
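A short sketch of that workflow (the experiment directory and file names are illustrative assumptions):

```bash
# post-process the measurement first, then browse the result in the CUBE GUI
$ scalasca -examine <experiment_directory>
$ cube <experiment_directory>/profile.cubex
```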
References
1\. <http://www.scalasca.org/software/cube-4.x/download.html>
@@ -10,13 +10,13 @@ PAPI can be used with parallel as well as serial programs.
## Usage
To use PAPI, load the papi [module](../../environment-and-modules/):
```bash
$ module load papi
```
This will load the default version. Execute `module avail papi` for a list of installed versions.
## Utilities
...
@@ -23,7 +23,7 @@ Profiling a parallel application with Scalasca consists of three steps:
### Instrumentation
Instrumentation via `scalasca -instrument` is discouraged. Use [Score-P instrumentation](score-p/).
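With Score-P, instrumentation typically amounts to prefixing the usual compile command with the scorep wrapper (a sketch; compiler and file names are assumptions):

```bash
# the scorep wrapper instruments the code during the normal MPI compilation
$ scorep mpicc -o mympiprog.x mympiprog.c
```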
### Runtime Measurement
@@ -61,7 +61,7 @@ If you do not wish to launch the GUI tool, use the "-s" option:
scalasca -examine -s <experiment_directory>
```
Alternatively you can open CUBE and load the data directly from there. Keep in mind that in that case the pre-processing is not done and not all metrics will be shown in the viewer.
Refer to [CUBE documentation](cube/) on usage of the GUI viewer.
...
@@ -259,4 +259,4 @@ Prints this output: (note that there is output printed for every launched MPI p
==31319== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 4 from 4)
```
We can see that Valgrind has reported use of uninitialized memory on the master process (which reads the array to be broadcast) and use of unaddressable memory on both processes.
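For reference, an MPI-aware Valgrind run of this kind is typically launched along the following lines (a sketch; the preload path and process count are assumptions, check the Valgrind module for the exact location of the MPI wrapper library):

```bash
# run every rank under Valgrind; preloading the Valgrind MPI wrapper library
# (libmpiwrap) lets buffers passed to MPI calls such as MPI_Bcast be checked too.
# How LD_PRELOAD is forwarded to the ranks depends on the MPI launcher in use.
$ export LD_PRELOAD=$EBROOTVALGRIND/lib/valgrind/libmpiwrap-amd64-linux.so
$ mpirun -np 2 valgrind ./mympiprog.x
```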
@@ -155,7 +155,7 @@ Submit the job and run the GPI-2 application
Hello from rank 0 of 2
```
At the same time, in another session, you may start the GASPI logger:
```bash
$ ssh cn79
...
# Intel Compilers
The Intel compilers version 13.1.1 are available via the intel module. The compilers include the ICC C and C++ compiler and the IFORT Fortran 77/90/95 compiler.
```bash
$ module load intel
@@ -8,7 +8,7 @@ The Intel compilers version 13.1.1 are available, via module intel. The compiler
$ ifort -v
```
The Intel compilers provide for vectorization of the code via the AVX instructions and support threading parallelization via OpenMP.
For maximum performance on the Anselm cluster, compile your programs using the AVX instructions, with reporting of where the vectorization was used. We recommend the following compilation options for high performance:
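A sketch of such a compilation (the concrete recommended flag set is in the truncated part of this page, so treat these switches as illustrative assumptions):

```bash
# interprocedural optimization, aggressive optimization and AVX code generation,
# with OpenMP threading enabled
$ icc -ipo -O3 -xAVX -openmp -o myprog.x myprog.c
$ ifort -ipo -O3 -xAVX -openmp -o myprog.x myprog.f90
```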
...
@@ -70,4 +70,4 @@ Run the idb debugger in GUI mode. The menu Parallel contains number of tools for
## Further Information
An exhaustive manual on IDB features and usage is published at the [Intel website](http://software.intel.com/sites/products/documentation/doclib/stdxe/2013/composerxe/debugger/user_guide/index.htm)
@@ -61,11 +61,13 @@ The general format of the name is `feature__APP__FEATURE`.
Names of applications (APP):
```bash
ansys
comsol
comsol-edu
matlab
matlab-edu
```
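As an aside, these application names plug into the `feature__APP__FEATURE` resource format when requesting licenses from PBS, for example (an illustrative sketch; the concrete FEATURE names come from the license state files described below):

```bash
# request one ANSYS CFD license feature alongside the job resources
$ qsub -l select=2:ncpus=16 -l feature__ansys__acfd=1 ./myjob
```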
To get the FEATUREs of a license take a look into the corresponding state file ([see above](isv_licenses/#Licence)), or use:
...
@@ -111,7 +111,7 @@ Compile the above example with
The MPI program executable must be compatible with the loaded MPI module.
Always compile and execute using the very same MPI module.
It is strongly discouraged to mix MPI implementations. Linking an application with one MPI implementation and running mpirun/mpiexec from another implementation may result in unexpected errors.
The MPI program executable must be available within the same path on all nodes. This is automatically fulfilled on the /home and /scratch file systems. You need to preload the executable if running on the local scratch /lscratch file system.
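A minimal sketch of keeping compilation and execution on the same MPI module (module and file names are illustrative):

```bash
# compile and run with the very same MPI module loaded
$ module load openmpi
$ mpicc -o mympiprog.x mympiprog.c
$ mpiexec -np 4 ./mympiprog.x
```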
...
@@ -274,8 +274,8 @@ You can use MATLAB on UV2000 in two parallel modes:
### Threaded Mode
Since this is an SMP machine, you can completely avoid using the Parallel Toolbox and use only MATLAB's threading. MATLAB will automatically detect the number of cores you have allocated and set maxNumCompThreads accordingly, and certain operations, such as `fft`, `eig`, `svd` etc., will automatically run in threads. The advantage of this mode is that you don't need to modify your existing sequential codes.
### Local Cluster Mode
You can also use the Parallel Toolbox on UV2000. Use [local cluster mode](matlab/#parallel-matlab-batch-job-in-local-mode); the "SalomonPBSPro" profile will not work.
@@ -84,7 +84,7 @@ Once this file is in place, user can request resources from PBS. Following examp
-l feature__matlab__MATLAB=1
```
This qsub command example shows how to run Matlab with 32 workers in the following configuration: 2 nodes (use all 16 cores per node) and 16 workers = mpiprocs per node (-l select=2:ncpus=16:mpiprocs=16). If the user requires a smaller number of workers per node, the "mpiprocs" parameter has to be changed.
The second part of the command shows how to request all necessary licenses. In this case, 1 Matlab-EDU license and 32 Distributed Computing Engine licenses.
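Putting those pieces together, such a submission might look as follows (a sketch; the project ID, queue and the exact license feature names are assumptions to be replaced from the license state files):

```bash
# 2 nodes x 16 cores, 16 MPI workers per node, plus the MATLAB license feature;
# add a second -l feature__... option for the Distributed Computing Engine licenses
$ qsub -A PROJECT-ID -q qprod \
    -l select=2:ncpus=16:mpiprocs=16 \
    -l feature__matlab-edu__MATLAB=1 \
    ./jobscript
```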
...
@@ -4,7 +4,7 @@ The discrete Fourier transform in one or more dimensions, MPI parallel
FFTW is a C subroutine library for computing the discrete Fourier transform in one or more dimensions, of arbitrary input size, and of both real and complex data (as well as of even/odd data, e.g. the discrete cosine/sine transforms or DCT/DST). The FFTW library allows for MPI parallel, in-place discrete Fourier transform, with data distributed over a number of nodes.
Two versions, **3.3.3** and **2.1.5**, of FFTW are available on Anselm, each compiled for **Intel MPI** and **OpenMPI** using **Intel** and **gnu** compilers. These are available via modules:
| Version | Parallelization | module | linker options |
| -------------- | --------------- | ------------------- | ----------------------------------- |
...
@@ -4,7 +4,7 @@ Hierarchical Data Format library. Serial and MPI parallel version.
[HDF5 (Hierarchical Data Format)](http://www.hdfgroup.org/HDF5/) is a general purpose library and file format for storing scientific data. HDF5 can store two primary objects: datasets and groups. A dataset is essentially a multidimensional array of data elements, and a group is a structure for organizing objects in an HDF5 file. Using these two basic objects, one can create and store almost any kind of scientific data structure, such as images, arrays of vectors, and structured and unstructured grids. You can also mix and match them in HDF5 files according to your needs.
Versions **1.8.11** and **1.8.13** of the HDF5 library are available on Anselm, compiled for **Intel MPI** and **OpenMPI** using **Intel** and **gnu** compilers. These are available via modules:
| Version | Parallelization | module | C linker options | C++ linker options | Fortran linker options |
| --------------------- | --------------------------------- | -------------------------- | --------------------- | ----------------------- | ----------------------- |
...
@@ -32,9 +32,7 @@ PETSc needs at least MPI, BLAS and LAPACK. These dependencies are currently sati
PETSc can be linked with a plethora of [external numerical libraries](http://www.mcs.anl.gov/petsc/miscellaneous/external.html), extending PETSc functionality, e.g. direct linear system solvers, preconditioners or partitioners. See below for a list of libraries currently included in Anselm `petsc` modules.
All these libraries can also be used alone, without PETSc. Their static or shared program libraries are available in
`$PETSC_DIR/$PETSC_ARCH/lib` and header files in `$PETSC_DIR/$PETSC_ARCH/include`. `PETSC_DIR` and `PETSC_ARCH` are environment variables pointing to a specific PETSc instance based on the PETSc module loaded.
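A sketch of compiling directly against such an instance (module and file names are illustrative):

```bash
# load a PETSc module, then compile and link against the instance it points to
$ module load petsc
$ mpicc -o myapp.x myapp.c \
    -I$PETSC_DIR/$PETSC_ARCH/include \
    -L$PETSC_DIR/$PETSC_ARCH/lib -lpetsc
```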
### Libraries Linked to PETSc on Anselm (As of 11 April 2015)
* dense linear algebra
* [Elemental](http://libelemental.org/)
...