site/
scripts/*.csv
venv/
stages:
  - test
  - build
  - deploy
  - after_test

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

docs:
  stage: test
  image: it4innovations/docker-mdcheck:latest
  allow_failure: true
  script:
    - find content/docs -name "*.mdx" | xargs mdl -r ~MD002,~MD007,~MD013,~MD010,~MD014,~MD024,~MD026,~MD029,~MD033,~MD036,~MD037,~MD046

pylint:
  stage: test
  image: it4innovations/docker-pycheck:latest
  before_script:
    - source /opt/.venv3/bin/activate
  script:
    - pylint $(find . -name "*.py" -not -name "feslicescript.py")

capitalize:
  stage: test
  image: it4innovations/docker-mkdocscheck:latest
  allow_failure: true
  before_script:
    - source /opt/.venv3/bin/activate
    - python -V # debug
    - pip list | grep titlecase
  script:
    - find content/docs/ \( -name '*.mdx' -o -name '*.yml' \) ! -path '*einfracz*' -print0 | xargs -0 -n1 scripts/titlemd.py --test

ext_links:
  stage: after_test
  image: it4innovations/docker-mdcheck:latest
  allow_failure: true
  after_script:
    # remove JSON results
    - rm *.json
  script:
    - find content/docs -name '*.mdx' -exec grep --color -l http {} + | xargs awesome_bot -t 10 --allow-dupe --allow-redirect
  only:
    - master

404s:
  stage: after_test
  image: it4innovations/docker-mkdocscheck:latest
  before_script:
    - echo "192.168.101.10 docs.it4i.cz" >> /etc/hosts
    - wget -V
    - echo https://docs.it4i.cz/devel/$CI_COMMIT_REF_NAME/
    - wget --spider -e robots=off -o wget.log -r -p https://docs.it4i.cz/devel/$CI_COMMIT_REF_NAME/ || true
  script:
    - cat wget.log | awk '/^Found [0-9]+ broken link[s]?.$/,/FINISHED/ { rc=-1; print $0 }; END { exit rc }'

mkdocs:
  stage: build
  image: it4innovations/docker-mkdocscheck:latest
  before_script:
    - source /opt/.venv3/bin/activate
    - python -V # debug
    - pip install -r requirements.txt
    - pip freeze # debug
    - mkdocs -V # debug
  script:
    # add version to footer
    - bash scripts/add_version.sh
    # get modules list from clusters
    - bash scripts/get_modules.sh
    # generate site_url
    - (if [ "${CI_COMMIT_REF_NAME}" != 'master' ]; then sed -i "s/\(site_url.*$\)/\1devel\/$CI_COMMIT_REF_NAME\//" mkdocs.yml;fi);
    # generate URL for code link
    # - sed -i "s/master/$CI_BUILD_REF_NAME/g" material/partials/toc.html
    # regenerate modules matrix
    - python scripts/modules_matrix.py > docs.it4i/modules-matrix.md
    - python scripts/modules_matrix.py --json > docs.it4i/modules-matrix.json
    - curl -f0 https://code.it4i.cz/sccs/scs-api-public/raw/master/scs_api.server_public.md -o docs.it4i/apiv1.md
    # build pages
    - mkdocs build
    # replace broken links in 404.html
    - sed -i 's,href="" title=",href="/" title=",g' site/404.html
    - cp site/404.html site/403.html
    - sed -i 's/404 - Not found/403 - Forbidden/g' site/403.html
    # compress sitemap
    - gzip < site/sitemap.xml > site/sitemap.xml.gz
  artifacts:
    paths:
      - site
    expire_in: 1 week

deploy to stage:
  environment: stage
  stage: deploy
  image: it4innovations/docker-mkdocscheck:latest
  before_script:
    # install ssh-agent
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - 'which rsync || ( apt-get update -y && apt-get install rsync -y )'
    # run ssh-agent
    - eval $(ssh-agent -s)
    # add ssh key stored in SSH_PRIVATE_KEY variable to the agent store
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    # disable host key checking (NOTE: makes you susceptible to man-in-the-middle attacks)
    # WARNING: use only in docker container, if you use it with shell you will overwrite your user's ssh config
    - mkdir -p ~/.ssh
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  script:
    - chown nginx:nginx site -R
    - rsync -a --delete site/ root@"$SSH_HOST_STAGE":/srv/docs.it4i.cz/devel/$CI_COMMIT_REF_NAME/
  only:
    - branches@sccs/docs.it4i.cz

deploy to production:
  environment: production
  stage: deploy
  image: it4innovations/docker-mkdocscheck:latest
  before_script:
    # install ssh-agent
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - 'which rsync || ( apt-get update -y && apt-get install rsync -y )'
    # run ssh-agent
    - eval $(ssh-agent -s)
    # add ssh key stored in SSH_PRIVATE_KEY variable to the agent store
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    # disable host key checking (NOTE: makes you susceptible to man-in-the-middle attacks)
    # WARNING: use only in docker container, if you use it with shell you will overwrite your user's ssh config
    - mkdir -p ~/.ssh
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  script:
    - chown nginx:nginx site -R
    - rsync -a --delete site/ root@"$SSH_HOST_STAGE":/srv/docs.it4i.cz/site/
  only:
    - master@sccs/docs.it4i.cz
  when: manual
JAN
LUMI
AI
CI/CD
AWS
CLI
FAQ
s3cmd
GUI
EESSI
hipBlas
hipSolver
LUMI
apptainer
ROCm
HIP
NVIDIA DGX-2
nvidia
smi
nvidia-smi
NICE
DGX-2
DGX
DCV
In
CAE
CUBE
GPU
GSL
LMGC90
LS-DYNA
MAPDL
GPI-2
COM
.ssh
Anselm
IT4I
IT4Innovations
PBS
vnode
vnodes
Salomon
TurboVNC
VNC
DDR3
DIMM
InfiniBand
CUDA
ORCA
COMSOL
API
GNU
CUDA
NVIDIA
LiveLink
MATLAB
Allinea
LLNL
Vampir
Doxygen
VTune
TotalView
Valgrind
ParaView
OpenFOAM
MAX_FAIRSHARE
MPI4Py
MPICH2
PETSc
Trilinos
FFTW
HDF5
BiERapp
AVX
AVX2
JRE
JDK
QEMU
VMware
VirtualBox
NUMA
SMP
BLAS
LAPACK
FFTW3
Dongarra
OpenCL
cuBLAS
CESNET
Jihlava
NVIDIA
Xeon
ANSYS
CentOS
RHEL
DDR4
DIMMs
GDDR5
EasyBuild
e.g.
MPICH
MVAPICH2
OpenBLAS
ScaLAPACK
PAPI
SGI
UV2000
VM
400GB
Mellanox
RedHat
ssh.du1.cesnet.cz
ssh.du2.cesnet.cz
ssh.du3.cesnet.cz
DECI
supercomputing
AnyConnect
X11
backfilling
backfilled
SCP
Lustre
QDR
TFLOP
ncpus
myjob
pernode
mpiprocs
ompthreads
qprace
runtime
SVS
ppn
Multiphysics
aeroacoustics
turbomachinery
CFD
LS-DYNA
APDL
MAPDL
multiphysics
AUTODYN
RSM
Molpro
initio
parallelization
NWChem
SCF
ISV
profiler
Pthreads
profilers
OTF
PAPI
PCM
uncore
pre-processing
prepend
CXX
prepended
POMP2
Memcheck
unaddressable
OTF2
GPI-2
GASPI
GPI
MKL
IPP
TBB
GSL
Omics
VNC
Scalasca
IFORT
interprocedural
IDB
cloop
qcow
qcow2
vmdk
vdi
virtio
paravirtualized
Gbit
tap0
UDP
TCP
preload
qfat
Rmpi
DCT
datasets
dataset
preconditioners
partitioners
PARDISO
PaStiX
SuiteSparse
SuperLU
ExodusII
NetCDF
ParMETIS
multigrid
HYPRE
SPAI
Epetra
EpetraExt
Tpetra
64-bit
Belos
GMRES
Amesos
IFPACK
preconditioner
Teuchos
Makefiles
SAXPY
NVCC
VCF
HGMD
HUMSAVAR
ClinVar
indels
CIBERER
exomes
tmp
SSHFS
RSYNC
unmount
Cygwin
CygwinX
RFB
TightVNC
TigerVNC
GUIs
XLaunch
UTF-8
numpad
PuTTYgen
OpenSSH
IE11
x86
r21u01n577
7120P
interprocessor
IPN
toolchains
toolchain
APIs
easyblocks
GM200
GeForce
GTX
IRUs
ASIC
backplane
ICEX
IRU
PFLOP
T950B
ifconfig
inet
addr
checkbox
appfile
programmatically
http
https
filesystem
phono3py
HDF
splitted
automize
llvm
PGI
GUPC
BUPC
IBV
Aislinn
nondeterminism
stdout
stderr
i.e.
pthreads
uninitialised
broadcasted
ITAC
hotspots
Bioinformatics
semiempirical
DFT
polyfill
ES6
HTML5Rocks
minifiers
CommonJS
PhantomJS
bundlers
Browserify
versioning
isflowing
ispaused
NPM
sublicense
Streams2
Streams3
blogpost
GPG
mississippi
Uint8Arrays
Uint8Array
endianness
styleguide
noop
MkDocs
- docs.it4i/anselm-cluster-documentation/environment-and-modules.md
MODULEPATH
bashrc
PrgEnv-gnu
bullx
MPI
PrgEnv-intel
EasyBuild
- docs.it4i/anselm-cluster-documentation/capacity-computing.md
capacity.zip
README
- docs.it4i/anselm-cluster-documentation/compute-nodes.md
DIMMs
- docs.it4i/anselm-cluster-documentation/hardware-overview.md
cn
K20
Xeon
x86-64
Virtualization
virtualization
NVIDIA
5110P
SSD
lscratch
login1
login2
dm1
Rpeak
LINPACK
Rmax
E5-2665
E5-2470
P5110
isw
- docs.it4i/anselm-cluster-documentation/introduction.md
RedHat
- docs.it4i/anselm-cluster-documentation/job-priority.md
walltime
qexp
_List.fairshare
_time
_FAIRSHARE
1E6
- docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md
15209.srv11
qsub
15210.srv11
pwd
cn17.bullx
cn108.bullx
cn109.bullx
cn110.bullx
pdsh
hostname
SCRDIR
mkdir
mpiexec
qprod
Jobscript
jobscript
cn108
cn109
cn110
Name0
cn17
_NODEFILE
_O
_WORKDIR
mympiprog.x
_JOBID
myprog.x
openmpi
- docs.it4i/anselm-cluster-documentation/network.md
ib0
- docs.it4i/anselm-cluster-documentation/prace.md
PRACE
qfree
it4ifree
it4i.portal.clients
prace
1h
- docs.it4i/anselm-cluster-documentation/shell-and-data-access.md
VPN
- docs.it4i/anselm-cluster-documentation/software/ansys/ansys-cfx.md
ANSYS
CFX
cfx.pbs
_r
ane3fl
- docs.it4i/anselm-cluster-documentation/software/ansys/ansys-mechanical-apdl.md
mapdl.pbs
_dy
- docs.it4i/anselm-cluster-documentation/software/ansys/ls-dyna.md
HPC
lsdyna.pbs
- docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md
OpenMP
- docs.it4i/anselm-cluster-documentation/software/compilers.md
Fortran
- docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md
E5-2600
- docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md
Makefile
- docs.it4i/anselm-cluster-documentation/software/gpi2.md
gcc
cn79
helloworld
_gpi.c
ibverbs
gaspi
_logger
- docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-compilers.md
Haswell
CPUs
ipo
O3
vec
xAVX
omp
simd
ivdep
pragmas
openmp
xCORE-AVX2
axCORE-AVX2
- docs.it4i/anselm-cluster-documentation/software/kvirtualization.md
rc.local
runlevel
RDP
DHCP
DNS
SMB
VDE
smb.conf
TMPDIR
run.bat.
slirp
NATs
- docs.it4i/anselm-cluster-documentation/software/mpi/mpi4py-mpi-for-python.md
NumPy
- docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md
mpiLibConf.m
matlabcode.m
output.out
matlabcodefile
sched
_feature
- docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md
UV2000
maxNumCompThreads
SalomonPBSPro
- docs.it4i/anselm-cluster-documentation/software/numerical-languages/octave.md
_THREADS
_NUM
- docs.it4i/anselm-cluster-documentation/software/numerical-libraries/trilinos.md
CMake-aware
Makefile.export
_PACKAGE
_CXX
_COMPILER
_INCLUDE
_DIRS
_LIBRARY
- docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md
ansysdyna.pbs
- docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md
svsfem.cz
_
- docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md
libmpiwrap-amd64-linux
O0
valgrind
malloc
_PRELOAD
- docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md
cn204
_LIBS
MAGMAROOT
_magma
_server
_anselm
_from
_mic.sh
_dgetrf
_mic
_03.pdf
- docs.it4i/anselm-cluster-documentation/software/paraview.md
cn77
localhost
v4.0.1
- docs.it4i/anselm-cluster-documentation/storage.md
ssh.du1.cesnet.cz
Plzen
ssh.du2.cesnet.cz
ssh.du3.cesnet.cz
tier1
_home
_cache
_tape
- docs.it4i/salomon/environment-and-modules.md
icc
ictce
ifort
imkl
intel
gompi
goolf
BLACS
iompi
iccifort
- docs.it4i/salomon/hardware-overview.md
HW
E5-4627v2
- docs.it4i/salomon/job-submission-and-execution.md
15209.isrv5
r21u01n577
r21u02n578
r21u03n579
r21u04n580
qsub
15210.isrv5
pwd
r2i5n6.ib0.smc.salomon.it4i.cz
r4i6n13.ib0.smc.salomon.it4i.cz
r4i7n2.ib0.smc.salomon.it4i.cz
pdsh
r2i5n6
r4i6n13
r4i7n
r4i7n2
r4i7n0
SCRDIR
myjob
mkdir
mympiprog.x
mpiexec
myprog.x
r4i7n0.ib0.smc.salomon.it4i.cz
- docs.it4i/salomon/7d-enhanced-hypercube.md
cns1
cns576
r1i0n0
r4i7n17
cns577
cns1008
r37u31n1008
7D
- docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
qsub
it4ifree
it4i.portal.clients
x86
x64
- docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md
anslic
_admin
- docs.it4i/anselm-cluster-documentation/software/chemistry/nwchem.md
_DIR
- docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md
EDU
comsol
_matlab.pbs
_job.m
mphstart
- docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-performance-reports.md
perf-report
perf
txt
html
mympiprog
_32p
- docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md
Hotspots
- docs.it4i/anselm-cluster-documentation/software/debuggers/scalasca.md
scorep
- docs.it4i/anselm-cluster-documentation/software/isv_licenses.md
edu
ansys
_features
_state.txt
f1
matlab
acfd
_ansys
_acfd
_aa
_comsol
HEATTRANSFER
_HEATTRANSFER
COMSOLBATCH
_COMSOLBATCH
STRUCTURALMECHANICS
_STRUCTURALMECHANICS
_matlab
_Toolbox
_Image
_Distrib
_Comp
_Engine
_Acquisition
pmode
matlabpool
- docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md
mpirun
BLAS1
FFT
KMP
_AFFINITY
GOMP
_CPU
bullxmpi-1
mpich2
- docs.it4i/anselm-cluster-documentation/software/mpi/Running_OpenMPI.md
bysocket
bycore
- docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md
gcc3.3.3
pthread
fftw3
lfftw3
_threads-lfftw3
_omp
icc3.3.3
FFTW2
gcc2.1.5
fftw2
lfftw
_threads
icc2.1.5
fftw-mpi3
_mpi
fftw3-mpi
fftw2-mpi
IntelMPI
- docs.it4i/anselm-cluster-documentation/software/numerical-libraries/gsl.md
dwt.c
mkl
lgsl
- docs.it4i/anselm-cluster-documentation/software/numerical-libraries/hdf5.md
icc
hdf5
_INC
_SHLIB
_CPP
_LIB
_F90
gcc49
- docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md
_Dist
- docs.it4i/anselm-cluster-documentation/software/nvidia-cuda.md
lcublas
- docs.it4i/anselm-cluster-documentation/software/operating-system.md
6.x
- docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/cygwin-and-x11-forwarding.md
startxwin
cygwin64binXWin.exe
tcp
- docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md
Xming
XWin.exe.
- docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/pageant.md
_rsa.ppk
- docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/puttygen.md
_keys
organization.example.com
_rsa
- docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/vpn-connection-fail-in-win-8.1.md
vpnui.exe
- docs.it4i/salomon/ib-single-plane-topology.md
36-port
Mcell.pdf
r21-r38
nodes.pdf
- docs.it4i/salomon/introduction.md
E5-2680v3
- docs.it4i/salomon/network.md
r4i1n0
r4i1n1
r4i1n2
r4i1n3
ip
- docs.it4i/salomon/software/ansys/setting-license-preferences.md
ansys161
- docs.it4i/salomon/software/ansys/workbench.md
mpifile.txt
solvehandlers.xml
- docs.it4i/salomon/software/chemistry/phono3py.md
vasprun.xml
disp-XXXXX
disp
_fc3.yaml
ir
_grid
_points.yaml
gofree-cond1
- docs.it4i/salomon/software/compilers.md
HPF
- docs.it4i/salomon/software/comsol/licensing-and-available-versions.md
ver
- docs.it4i/salomon/software/debuggers/aislinn.md
test.cpp
- docs.it4i/salomon/software/debuggers/intel-vtune-amplifier.md
vtune
_update1
- docs.it4i/salomon/software/debuggers/valgrind.md
EBROOTVALGRIND
- docs.it4i/salomon/software/intel-suite/intel-advisor.md
O2
- docs.it4i/salomon/software/intel-suite/intel-compilers.md
UV1
- docs.it4i/salomon/software/numerical-languages/octave.md
octcode.m
mkoctfile
- docs.it4i/software/orca.md
pdf
- node_modules/es6-promise/README.md
rsvp.js
es6-promise
es6-promise-min
Node.js
testem
- node_modules/spawn-sync/lib/json-buffer/README.md
node.js
- node_modules/spawn-sync/node_modules/concat-stream/node_modules/readable-stream/doc/wg-meetings/2015-01-30.md
WG
domenic
mikeal
io.js
sam
calvin
whatwg
compat
mathias
isaac
chris
- node_modules/spawn-sync/node_modules/concat-stream/node_modules/readable-stream/node_modules/core-util-is/README.md
core-util-is
v0.12.
- node_modules/spawn-sync/node_modules/concat-stream/node_modules/readable-stream/node_modules/isarray/README.md
isarray
Gruber
julian
juliangruber.com
NONINFRINGEMENT
- node_modules/spawn-sync/node_modules/concat-stream/node_modules/readable-stream/node_modules/process-nextick-args/license.md
Metcalf
- node_modules/spawn-sync/node_modules/concat-stream/node_modules/readable-stream/node_modules/process-nextick-args/readme.md
process-nextick-args
process.nextTick
- node_modules/spawn-sync/node_modules/concat-stream/node_modules/readable-stream/node_modules/string_decoder/README.md
_decoder.js
Joyent
joyent
repo
- node_modules/spawn-sync/node_modules/concat-stream/node_modules/readable-stream/node_modules/util-deprecate/History.md
kumavis
jsdocs
- node_modules/spawn-sync/node_modules/concat-stream/node_modules/readable-stream/node_modules/util-deprecate/README.md
util-deprecate
Rajlich
- node_modules/spawn-sync/node_modules/concat-stream/node_modules/readable-stream/README.md
v7.0.0
userland
chrisdickinson
christopher.s.dickinson
gmail.com
9554F04D7259F04124DE6B476D5A82AC7E37093B
calvinmetcalf
calvin.metcalf
F3EF5F62A87FC27A22E643F714CE4FF5015AA242
Vagg
rvagg
vagg.org
DD8F2338BAE7501E3DD5AC78C273792F7D83545D
sonewman
newmansam
outlook.com
Buus
mafintosh
mathiasbuus
Denicola
domenic.me
Matteo
Collina
mcollina
matteo.collina
3ABC01543F22DD2239285CDD818674489FBC127E
- node_modules/spawn-sync/node_modules/concat-stream/readme.md
concat-stream
concat
cb
- node_modules/spawn-sync/node_modules/os-shim/README.md
0.10.x
os.tmpdir
os.endianness
os.EOL
os.platform
os.arch
0.4.x
Aparicio
Adesis
Netlife
S.L
- node_modules/spawn-sync/node_modules/try-thread-sleep/node_modules/thread-sleep/README.md
node-pre-gyp
npm
- node_modules/spawn-sync/README.md
iojs
UCX
Dask-ssh
SCRATCH
HOME
PROJECT
e-INFRA
e-INFRA CZ
DICE
qgpu
qcpu
it4i-portal-clients
it4icheckaccess
it4idedicatedtime
it4ifree
it4ifsusage
it4iuserfsusage
it4iprojectfsusage
it4imotd
e-INFRA
it4i-portal-clients
s3cmd
s5cmd
title:
e-INFRA CZ Cloud Ostrava
e-INFRA CZ Account
# IT4Innovations Documentation
This project contains the IT4Innovations user documentation source.
## Migration
* [fumadocs](https://fumadocs.vercel.app/)
# Compute Nodes
## Node Configuration
Anselm is a cluster of x86-64 Intel-based nodes built with the Bull Extreme Computing bullx technology. The cluster contains four types of compute nodes.
### Compute Nodes Without Accelerators
* 180 nodes
* 2880 cores in total
* two Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
* 64 GB of physical memory per node
* one 500 GB SATA 2.5″ 7.2 krpm HDD per node
* bullx B510 blade servers
* cn[1-180]
### Compute Nodes With a GPU Accelerator
* 23 nodes
* 368 cores in total
* two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
* 96 GB of physical memory per node
* one 500 GB SATA 2.5″ 7.2 krpm HDD per node
* GPU accelerator 1x NVIDIA Tesla Kepler K20m per node
* bullx B515 blade servers
* cn[181-203]
### Compute Nodes With a MIC Accelerator
* 4 nodes
* 64 cores in total
* two Intel Sandy Bridge E5-2470, 8-core, 2.3GHz processors per node
* 96 GB of physical memory per node
* one 500 GB SATA 2.5″ 7.2 krpm HDD per node
* MIC accelerator 1x Intel Xeon Phi 5110P per node
* bullx B515 blade servers
* cn[204-207]
### Fat Compute Nodes
* 2 nodes
* 32 cores in total
* 2 Intel Sandy Bridge E5-2665, 8-core, 2.4GHz processors per node
* 512 GB of physical memory per node
* two 300 GB SAS 3.5″ 15 krpm HDDs (RAID1) per node
* two 100 GB SLC SSDs per node
* bullx R423-E3 servers
* cn[208-209]
![](../img/bullxB510.png)
**Anselm bullx B510 servers**
### Compute Node Summary
| Node type | Count | Range | Memory | Cores | Queues |
| ---------------------------- | ----- | ----------- | ------ | ----------- | -------------------------------------- |
| Nodes without an accelerator | 180 | cn[1-180] | 64GB | 16 @ 2.4GHz | qexp, qprod, qlong, qfree, qprace, qatlas |
| Nodes with a GPU accelerator | 23 | cn[181-203] | 96GB | 16 @ 2.3GHz | qnvidia, qexp |
| Nodes with a MIC accelerator | 4 | cn[204-207] | 96GB | 16 @ 2.3GHz | qmic, qexp |
| Fat compute nodes | 2 | cn[208-209] | 512GB | 16 @ 2.4GHz | qfat, qexp |
## Processor Architecture
Anselm is equipped with Intel Sandy Bridge processors: Intel Xeon E5-2665 (nodes without accelerators and fat nodes) and Intel Xeon E5-2470 (nodes with accelerators). The processors support the 256-bit Advanced Vector Extensions (AVX) instruction set.
### Intel Sandy Bridge E5-2665 Processor
* eight-core
* speed: 2.4 GHz, up to 3.1 GHz using Turbo Boost Technology
* peak performance: 19.2 GFLOP/s per core
* caches:
* L2: 256 KB per core
* L3: 20 MB per processor
* memory bandwidth at the level of the processor: 51.2 GB/s
### Intel Sandy Bridge E5-2470 Processor
* eight-core
* speed: 2.3 GHz, up to 3.1 GHz using Turbo Boost Technology
* peak performance: 18.4 GFLOP/s per core
* caches:
* L2: 256 KB per core
* L3: 20 MB per processor
* memory bandwidth at the level of the processor: 38.4 GB/s
Nodes equipped with the Intel Xeon E5-2665 CPU have the PBS resource attribute cpu_freq = 24 set; nodes equipped with the Intel Xeon E5-2470 CPU have cpu_freq = 23 set.
```console
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16:cpu_freq=24 -I
```
In this example, we allocate 4 nodes with 16 cores at 2.4 GHz per node.
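Analogously, the 2.3 GHz nodes (those equipped with accelerators) can be selected by requesting cpu_freq=23; a minimal sketch using the qexp queue, which includes these node types:
```console
$ qsub -q qexp -l select=1:ncpus=16:cpu_freq=23 -I
```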
Intel Turbo Boost Technology is used by default; you can disable it for all nodes of a job by using the cpu_turbo_boost resource attribute.
```console
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16 -l cpu_turbo_boost=0 -I
```
## Memory Architecture
In terms of memory configuration, the cluster contains three types of compute nodes.
### Compute Nodes Without Accelerators
* 2 sockets
* Memory Controllers are integrated into processors.
* 8 DDR3 DIMMs per node
* 4 DDR3 DIMMs per CPU
* 1 DDR3 DIMM per channel
* Data rate support: up to 1600MT/s
* Populated memory: 8 x 8 GB DDR3 DIMM 1600 MHz
### Compute Nodes With a GPU or MIC Accelerator
* 2 sockets
* Memory Controllers are integrated into processors.
* 6 DDR3 DIMMs per node
* 3 DDR3 DIMMs per CPU
* 1 DDR3 DIMM per channel
* Data rate support: up to 1600MT/s
* Populated memory: 6 x 16 GB DDR3 DIMM 1600 MHz
### Fat Compute Nodes
* 2 sockets
* Memory Controllers are integrated into processors.
* 16 DDR3 DIMMs per node
* 8 DDR3 DIMMs per CPU
* 2 DDR3 DIMMs per channel
* Data rate support: up to 1600MT/s
* Populated memory: 16 x 32 GB DDR3 DIMM 1600 MHz
# Hardware Overview
The Anselm cluster consists of 209 computational nodes named cn[1-209] of which 180 are regular compute nodes, 23 are GPU Kepler K20 accelerated nodes, 4 are MIC Xeon Phi 5110P accelerated nodes, and 2 are fat nodes. Each node is a powerful x86-64 computer, equipped with 16 cores (two eight-core Intel Sandy Bridge processors), at least 64 GB of RAM, and a local hard drive. User access to the Anselm cluster is provided by two login nodes login[1,2]. The nodes are interlinked through high speed InfiniBand and Ethernet networks. All nodes share a 320 TB /home disk for storage of user files. The 146 TB shared /scratch storage is available for scratch data.
The Fat nodes are equipped with a large amount (512 GB) of memory. Virtualization infrastructure provides resources to run long-term servers and services in virtual mode. Fat nodes and virtual servers may access 45 TB of dedicated block storage. Accelerated nodes, fat nodes, and virtualization infrastructure are available [upon request][a] from a PI.
Schematic representation of the Anselm cluster. Each box represents a node (computer) or storage capacity:
![](../img/Anselm-Schematic-Representation.png)
The cluster compute nodes cn[1-207] are organized within 13 chassis.
There are four types of compute nodes:
* 180 compute nodes without an accelerator
* 23 compute nodes with a GPU accelerator - an NVIDIA Tesla Kepler K20m
* 4 compute nodes with a MIC accelerator - an Intel Xeon Phi 5110P
* 2 fat nodes - equipped with 512 GB of RAM and two 100 GB SSD drives
[More about Compute nodes][1].
GPU and accelerated nodes are available upon request, see the [Resources Allocation Policy][2].
All of these nodes are interconnected through fast InfiniBand and Ethernet networks. [More about the Network][3].
Every chassis provides an InfiniBand switch, marked **isw**, connecting all nodes in the chassis, as well as connecting the chassis to the upper level switches.
All of the nodes share a 320 TB /home disk for storage of user files. The 146 TB shared /scratch storage is available for scratch data. These file systems are provided by the Lustre parallel file system. There is also local disk storage available on all compute nodes in /lscratch. [More about Storage][4].
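For illustration, a job typically creates its own working directory on the shared /scratch file system before computing; a sketch only, where the SCRDIR variable and the myjob directory name follow the pattern used in the job submission guide and are placeholders:
```console
$ SCRDIR=/scratch/$USER/myjob   # placeholder per-job directory on the shared /scratch
$ mkdir -p $SCRDIR
$ cd $SCRDIR
```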
User access to the Anselm cluster is provided by two login nodes login1, login2, and data mover node dm1. [More about accessing the cluster][5].
The parameters are summarized in the following tables:
| **In general** | |
| ------------------------------------------- | -------------------------------------------- |
| Primary purpose | High Performance Computing |
| Architecture of compute nodes | x86-64 |
| Operating system | Linux (CentOS) |
| [**Compute nodes**][1] | |
| Total | 209 |
| Processor cores | 16 (2 x 8 cores) |
| RAM | min. 64 GB, min. 4 GB per core |
| Local disk drive | yes - usually 500 GB |
| Compute network | InfiniBand QDR, fully non-blocking, fat-tree |
| w/o accelerator | 180, cn[1-180] |
| GPU accelerated | 23, cn[181-203] |
| MIC accelerated | 4, cn[204-207] |
| Fat compute nodes | 2, cn[208-209] |
| **In total** | |
| Total theoretical peak performance (Rpeak) | 94 TFLOP/s |
| Total max. LINPACK performance (Rmax) | 73 TFLOP/s |
| Total amount of RAM | 15.136 TB |
| Node | Processor | Memory | Accelerator |
| ---------------- | --------------------------------------- | ------ | -------------------- |
| w/o accelerator | 2 x Intel Sandy Bridge E5-2665, 2.4 GHz | 64 GB | - |
| GPU accelerated | 2 x Intel Sandy Bridge E5-2470, 2.3 GHz | 96 GB | NVIDIA Kepler K20m |
| MIC accelerated | 2 x Intel Sandy Bridge E5-2470, 2.3 GHz | 96 GB | Intel Xeon Phi 5110P |
| Fat compute node | 2 x Intel Sandy Bridge E5-2665, 2.4 GHz | 512 GB | - |
For more details, refer to [Compute nodes][1], [Storage][4], and [Network][3].
[1]: compute-nodes.md
[2]: ../general/resources-allocation-policy.md
[3]: network.md
[4]: storage.md
[5]: ../general/shell-and-data-access.md
[a]: https://support.it4i.cz/rt
# Introduction
Welcome to the Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totaling 3344 compute cores with 15 TB RAM, giving over 94 TFLOP/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64 GB of RAM, and a 500 GB hard disk drive. Nodes are interconnected through a fully non-blocking fat-tree InfiniBand network and are equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview][1].
Anselm runs with an operating system compatible with the Red Hat [Linux family][a]. We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment][2].
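For example, installed software is typically listed and activated with the standard module commands (a generic sketch only; actual module names come from the [modules environment][2] listing):
```console
$ module avail            # list the software modules installed on the cluster
$ module load <modname>   # <modname> is a placeholder; use a name from the listing
```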
The user data shared file-system (HOME, 320 TB) and job data shared file-system (SCRATCH, 146 TB) are available to users.
The PBS Professional workload manager provides [computing resources allocations and job execution][3].
Read more on how to [apply for resources][4], [obtain login credentials][5] and [access the cluster][6].
[1]: hardware-overview.md
[2]: ../environment-and-modules.md
[3]: ../general/resources-allocation-policy.md
[4]: ../general/applying-for-resources.md
[5]: ../general/obtaining-login-credentials/obtaining-login-credentials.md
[6]: ../general/shell-and-data-access.md
[a]: http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg
# Network
All of the compute and login nodes of Anselm are interconnected through an [InfiniBand][a] QDR network and a Gigabit [Ethernet][b] network. Both networks may be used to transfer user data.
## InfiniBand Network
All of the compute and login nodes of Anselm are interconnected through a high-bandwidth, low-latency [InfiniBand][a] QDR network (IB 4 x QDR, 40 Gbps). The network topology is a fully non-blocking fat-tree.
The compute nodes may be accessed via the InfiniBand network using the ib0 network interface, in the address range 10.2.1.1-209. MPI may be used to establish native InfiniBand connections among the nodes.
!!! note
The network provides **2170 MB/s** transfer rates via the TCP connection (single stream) and up to **3600 MB/s** via the native InfiniBand protocol.
The Fat tree topology ensures that peak transfer rates are achieved between any two nodes, independent of network traffic exchanged among other nodes concurrently.
## Ethernet Network
The compute nodes may be accessed via the regular Gigabit Ethernet network interface eth0, in the address range 10.1.1.1-209, or by using aliases cn1-cn209. The network provides **114 MB/s** transfer rates via the TCP connection.
## Example
In this example, we access the node cn110 through the InfiniBand network via the ib0 interface, then from cn110 to cn108 through the Ethernet network.
```console
$ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob
$ qstat -n -u username
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
15209.srv11     username qexp     Name0       5530   4  64    --  01:00 R 00:00
   cn17/0*16+cn108/0*16+cn109/0*16+cn110/0*16
$ ssh 10.2.1.110
$ ssh 10.1.1.108
```
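The Ethernet hop can equivalently use the node alias instead of the numeric address, since the aliases cn1-cn209 map to the 10.1.1.1-209 range:
```console
$ ssh cn108    # equivalent to ssh 10.1.1.108 over the Gigabit Ethernet network
```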
[a]: http://en.wikipedia.org/wiki/InfiniBand
[b]: http://en.wikipedia.org/wiki/Ethernet
# API Placeholder
This page is created automatically from the API source code.
# Introduction
This section contains documentation of IT4Innovations' decommissioned supercomputers and services.
## Salomon
The second supercomputer, built by SGI (now Hewlett Packard Enterprise), was launched in 2015. With a performance of 2 PFlop/s, it was immediately included in the TOP500 list, which ranks the world's most powerful supercomputers. It stayed on the list until November 2020, falling from 40th place to 460th.
Salomon was decommissioned at the end of 2021, after six years of operation.
### Interesting Facts
| Salomon's facts | |
| ---------------------------- | ------------------ |
| In operation | Q2 2015 - Q4 2021 |
| Theoretical peak performance | 2 PFLOP/s |
| Number of nodes | 1,008 |
| HOME storage capacity | 500 TB |
| SCRATCH storage capacity | 1,638 TB |
| Projects computed | 1,085 |
| Computing jobs run | ca. 8,700,000 |
| Corehours used | ca. 1,014,000,000 |
## Anselm
The first supercomputer, built by Atos, was launched in 2013. For the first 3 years, it was placed in makeshift containers on the campus of VSB – Technical University of Ostrava, and was subsequently moved to the data room of the newly constructed IT4Innovations building. Anselm's computational resources were available to Czech and foreign students and scientists in fields such as material sciences, computational chemistry, biosciences, and engineering.
At the end of January 2021, after more than seven years, its operation permanently ceased. In the future, it will be a part of the [World of Civilization exhibition][a] in Lower Vitkovice.
### Interesting Facts
| Anselm's facts | |
| ---------------------------- | ------------------ |
| Cost | 90,000,000 CZK |
| In operation | Q2 2013 - Q1 2021 |
| Theoretical peak performance | 94 TFLOP/s |
| Number of nodes | 209 |
| HOME storage capacity | 320 TB |
| SCRATCH storage capacity | 146 TB |
| Projects computed | 725 |
| Computing jobs run | 2,630,567 |
| Corehours used | 134,130,309 |
| Power consumption | 77 kW |
## PRACE
The Partnership for Advanced Computing in Europe (PRACE) aims to facilitate access to a research infrastructure that enables high-impact scientific discovery and engineering research and development across all disciplines, to enhance European competitiveness for the benefit of society. For more information, see the [official website][b].
[a]: https://www.dolnivitkovice.cz/en/science-and-technology-centre/exhibitions/
[b]: https://prace-ri.eu/
# Compute Nodes
Barbora is a cluster of x86-64 Intel-based nodes built with the BullSequana Computing technology.
The cluster contains three types of compute nodes.
## Compute Nodes Without Accelerators
* 192 nodes
* 6912 cores in total
* 2x Intel Cascade Lake 6240, 18-core, 2.6 GHz processors per node
* 192 GB DDR4 2933 MT/s of physical memory per node (12x16 GB)
* BullSequana X1120 blade servers
* 2995.2 GFLOP/s per compute node
* 1x 1 Gb Ethernet
* 1x HDR100 IB port
* 3 compute nodes per X1120 blade server
* cn[1-192]
![](img/BullSequanaX1120.png)
## Compute Nodes With a GPU Accelerator
* 8 nodes
* 192 cores in total
* two Intel Skylake Gold 6126, 12-core, 2.6 GHz processors per node
* 192 GB DDR4 2933MT/s with ECC of physical memory per node (12x16 GB)
* 4x GPU accelerator NVIDIA Tesla V100-SXM2 per node
* Bullsequana X410-E5 NVLink-V blade servers
* 1996.8 GFLOP/s per compute node
* GPU-to-GPU All-to-All NVLINK 2.0, GPU-Direct
* 1 Gb Ethernet
* 2x HDR100 IB ports
* cn[193-200]
![](img/BullSequanaX410E5GPUNVLink.jpg)
## Fat Compute Node
* 1x BullSequana X808 server
* 128 cores in total
* 8 Intel Skylake 8153, 16-core, 2.0 GHz, 125 W
* 6144 GiB DDR4 2667 MT/s of physical memory per node (96x64 GB)
* 2x HDR100 IB port
* 8192 GFLOP/s
* cn[201]
![](img/BullSequanaX808.jpg)
## Compute Node Summary
| Node type | Count | Range | Memory | Cores |
| ---------------------------- | ----- | ----------- | -------- | ------------- |
| Nodes without an accelerator | 192 | cn[1-192] | 192 GB | 36 @ 2.6 GHz |
| Nodes with a GPU accelerator | 8 | cn[193-200] | 192 GB | 24 @ 2.6 GHz |
| Fat compute nodes | 1 | cn[201] | 6144 GiB | 128 @ 2.0 GHz |
## Processor Architecture
Barbora is equipped with Intel Cascade Lake processors Intel Xeon 6240 (nodes without accelerators),
Intel Skylake Gold 6126 (nodes with accelerators) and Intel Skylake Platinum 8153.
### Intel [Cascade Lake 6240][d]
The Cascade Lake core is largely identical to that of [Skylake][a].
For in-depth detail of the Skylake core/pipeline see [Skylake (client) § Pipeline][b].
Xeon Gold 6240 is a 64-bit 18-core x86 multi-socket high performance server microprocessor introduced by Intel in late 2018. This chip supports up to 4-way multiprocessing. The Gold 6240, which is based on the Cascade Lake microarchitecture and is manufactured on a 14 nm process, sports 2 AVX-512 FMA units as well as three Ultra Path Interconnect links. This microprocessor, which operates at 2.6 GHz with a TDP of 150 W and a turbo boost frequency of up to 3.9 GHz, supports up to 1 TB of hexa-channel DDR4-2933 ECC memory.
* **Family**: Xeon Gold
* **Cores**: 18
* **Threads**: 36
* **L1I Cache**: 576 KiB, 18x32 KiB, 8-way set associative
* **L1D Cache**: 576 KiB, 18x32 KiB, 8-way set associative, write-back
* **L2 Cache**: 18 MiB, 18x1 MiB, 16-way set associative, write-back
* **L3 Cache**: 24.75 MiB, 18x1.375 MiB, 11-way set associative, write-back
* **Instructions**: x86-64, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA3, F16C, BMI, BMI2, VT-x, VT-d, TXT, TSX, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVE, SGX, MPX, AVX-512 (New instructions for [Vector Neural Network Instructions][c])
* **Frequency**: 2.6 GHz
* **Max turbo**: 3.9 GHz
* **Process**: 14 nm
* **TDP**: 150 W
### Intel [Skylake Gold 6126][e]
Xeon Gold 6126 is a 64-bit dodeca-core x86 multi-socket high performance server microprocessor introduced by Intel in mid-2017. This chip supports up to 4-way multiprocessing. The Gold 6126, which is based on the server configuration of the Skylake microarchitecture and is manufactured on a 14 nm+ process, sports 2 AVX-512 FMA units as well as three Ultra Path Interconnect links. This microprocessor, which operates at 2.6 GHz with a TDP of 125 W and a turbo boost frequency of up to 3.7 GHz, supports up to 768 GiB of hexa-channel DDR4-2666 ECC memory.
* **Family**: Xeon Gold
* **Cores**: 12
* **Threads**: 24
* **L1I Cache**: 384 KiB, 12x32 KiB, 8-way set associative
* **L1D Cache**: 384 KiB, 12x32 KiB, 8-way set associative, write-back
* **L2 Cache**: 12 MiB, 12x1 MiB, 16-way set associative, write-back
* **L3 Cache**: 19.25 MiB, 14x1.375 MiB, 11-way set associative, write-back
* **Instructions**: x86-64, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA3, F16C, BMI, BMI2, VT-x, VT-d, TXT, TSX, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVE, SGX, MPX, AVX-512
* **Frequency**: 2.6 GHz
* **Max turbo**: 3.7 GHz
* **Process**: 14 nm
* **TDP**: 125 W
### Intel [Skylake Platinum 8153][f]
Xeon Platinum 8153 is a 64-bit 16-core x86 multi-socket highest performance server microprocessor introduced by Intel in mid-2017. This chip supports up to 8-way multiprocessing. The Platinum 8153, which is based on the server configuration of the Skylake microarchitecture and is manufactured on a 14 nm+ process, sports 2 AVX-512 FMA units as well as three Ultra Path Interconnect links. This microprocessor, which operates at 2 GHz with a TDP of 125 W and a turbo boost frequency of up to 2.8 GHz, supports up to 768 GiB of hexa-channel DDR4-2666 ECC memory.
* **Family**: Xeon Platinum
* **Cores**: 16
* **Threads**: 32
* **L1I Cache**: 512 KiB, 16x32 KiB, 8-way set associative
* **L1D Cache**: 512 KiB, 16x32 KiB, 8-way set associative, write-back
* **L2 Cache**: 16 MiB, 16x1 MiB, 16-way set associative, write-back
* **L3 Cache**: 22 MiB, 16x1.375 MiB, 11-way set associative, write-back
* **Instructions**: x86-64, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA3, F16C, BMI, BMI2, VT-x, VT-d, TXT, TSX, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVE, SGX, MPX, AVX-512
* **Frequency**: 2.0 GHz
* **Max turbo**: 2.8 GHz
* **Process**: 14 nm
* **TDP**: 125 W
## GPU Accelerator
Barbora's GPU nodes are equipped with [NVIDIA Tesla V100-SXM2][g] accelerators.
![](img/gpu-v100.png)
| NVIDIA Tesla V100-SXM2 | |
| ---------------------------- | -------------------------------------- |
| GPU Architecture | NVIDIA Volta |
| NVIDIA Tensor Cores | 640 |
| NVIDIA CUDA® Cores | 5120 |
| Double-Precision Performance | 7.8 TFLOP/s |
| Single-Precision Performance | 15.7 TFLOP/s |
| Tensor Performance | 125 TFLOP/s |
| GPU Memory | 16 GB HBM2 |
| Memory Bandwidth | 900 GB/sec |
| ECC | Yes |
| Interconnect Bandwidth | 300 GB/sec |
| System Interface | NVIDIA NVLink |
| Form Factor | SXM2 |
| Max Power Consumption | 300 W |
| Thermal Solution | Passive |
| Compute APIs | CUDA, DirectCompute, OpenCLTM, OpenACC |
[a]: https://en.wikichip.org/wiki/intel/microarchitectures/skylake_(server)#Core
[b]: https://en.wikichip.org/wiki/intel/microarchitectures/skylake_(client)#Pipeline
[c]: https://en.wikichip.org/wiki/x86/avx512vnni
[d]: https://en.wikichip.org/wiki/intel/xeon_gold/6240
[e]: https://en.wikichip.org/wiki/intel/xeon_gold/6126
[f]: https://en.wikichip.org/wiki/intel/xeon_platinum/8153
[g]: https://images.nvidia.com/content/technologies/volta/pdf/tesla-volta-v100-datasheet-letter-fnl-web.pdf
# Hardware Overview
The Barbora cluster consists of 201 computational nodes named **cn[001-201]**
of which 192 are regular compute nodes, 8 are GPU Tesla V100 accelerated nodes and 1 is a fat node.
Each node is a powerful x86-64 computer, equipped with 36/24/128 cores
(2x 18-core Intel Cascade Lake 6240 / 2x 12-core Intel Skylake Gold 6126 / 8x 16-core Intel Skylake Platinum 8153), and at least 192 GB of RAM.
User access to the Barbora cluster is provided by two login nodes **login[1,2]**.
The nodes are interlinked through high speed InfiniBand and Ethernet networks.
The fat node is equipped with 6144 GB of memory.
Virtualization infrastructure provides resources for running long-term servers and services in virtual mode.
The Accelerated nodes, fat node, and virtualization infrastructure are available [upon request][a] from a PI.
**There are three types of compute nodes:**
* 192 compute nodes without an accelerator
* 8 compute nodes with a GPU accelerator - 4x NVIDIA Tesla V100-SXM2
* 1 fat node - equipped with 6144 GB of RAM
[More about compute nodes][1].
GPU and accelerated nodes are available upon request, see the [Resources Allocation Policy][2].
All of these nodes are interconnected through fast InfiniBand and Ethernet networks.
[More about the computing network][3].
Every chassis provides an InfiniBand switch, marked **isw**, connecting all nodes in the chassis,
as well as connecting the chassis to the upper level switches.
User access to Barbora is provided by two login nodes: login1 and login2.
[More about accessing the cluster][5].
The parameters are summarized in the following tables:
| **In general** | |
| ------------------------------------------- | -------------------------------------------- |
| Primary purpose | High Performance Computing |
| Architecture of compute nodes | x86-64 |
| Operating system | Linux |
| [**Compute nodes**][1] | |
| Total | 201 |
| Processor cores | 36/24/128 (2x18 cores/2x12 cores/8x16 cores) |
| RAM | min. 192 GB |
| Local disk drive | no |
| Compute network | InfiniBand HDR |
| w/o accelerator | 192, cn[001-192] |
| GPU accelerated | 8, cn[193-200] |
| Fat compute nodes | 1, cn[201] |
| **In total** | |
| Total theoretical peak performance (Rpeak) | 848.8448 TFLOP/s |
| Total amount of RAM | 44.544 TB |
| Node | Processor | Memory | Accelerator |
| ---------------- | --------------------------------------- | ------ | ---------------------- |
| Regular node | 2x Intel Cascade Lake 6240, 2.6 GHz | 192GB | - |
| GPU accelerated | 2x Intel Skylake Gold 6126, 2.6 GHz | 192GB | NVIDIA Tesla V100-SXM2 |
| Fat compute node | 2x Intel Skylake Platinum 8153, 2.0 GHz | 6144GB | - |
For more details refer to the sections [Compute Nodes][1], [Storage][4], [Visualization Servers][6], and [Network][3].
[1]: compute-nodes.md
[2]: ../general/resources-allocation-policy.md
[3]: network.md
[4]: storage.md
[5]: ../general/shell-and-data-access.md
[6]: visualization.md
[a]: https://support.it4i.cz/rt
Binary images added under docs.it4i/barbora/img/: BullSequanaX.png (572 KiB), BullSequanaX1120.png (184 KiB), BullSequanaX410E5GPUNVLink.jpg (9.32 KiB), BullSequanaX808.jpg (4.31 KiB), QM8700.jpg (53.5 KiB), XH2000.png (74.7 KiB), bullsequanaX450-E5.png (33.4 KiB).