Commit 14862d36 authored by Lukáš Krupčík

new version

parent b0bb60b7
Introduction
============

Welcome to the Anselm supercomputer cluster. The Anselm cluster consists of
209 compute nodes, totaling 3344 compute cores with 15 TB RAM and giving
over 94 TFLOP/s theoretical peak performance. Each node is a powerful
x86-64 computer, equipped with 16 cores, at least 64 GB RAM, and a 500 GB
hard drive. Nodes are interconnected by a fully non-blocking fat-tree
InfiniBand network and are equipped with Intel Sandy Bridge processors. A few
nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC
accelerators. Read more in the [Hardware
Overview](https://docs.it4i.cz/anselm-cluster-documentation/hardware-overview).

The cluster runs [bullx
Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html), an
[operating
system](https://docs.it4i.cz/anselm-cluster-documentation/software/operating-system)
compatible with the RedHat [Linux
family](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg).
We have installed a wide range of
[software](https://docs.it4i.cz/anselm-cluster-documentation/software)
packages targeted at different scientific domains. These packages are
accessible via the [modules
environment](https://docs.it4i.cz/anselm-cluster-documentation/environment-and-modules).
A shared file system for user data (HOME, 320 TB) and a shared file system
for job data (SCRATCH, 146 TB) are available to users.
The PBS Professional workload manager provides [computing resources
allocation and job
execution](https://docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution).
Read more on how to [apply for
resources](https://docs.it4i.cz/get-started-with-it4innovations/applying-for-resources),
[obtain login
credentials](https://docs.it4i.cz/get-started-with-it4innovations/obtaining-login-credentials),
and [access the
cluster](https://docs.it4i.cz/anselm-cluster-documentation/accessing-the-cluster).
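Under PBS Professional, work is typically submitted as a job script via `qsub`. As a minimal sketch, the snippet below writes out such a script; the queue name (`qprod`), resource request, module name (`intel`), and program name are illustrative assumptions, not Anselm-specific guarantees — consult the resource-allocation documentation linked above for the actual queues and limits.

```shell
# Write a minimal PBS Professional job script (all names below are
# illustrative assumptions; check the cluster documentation for real values).
cat > example-job.pbs <<'EOF'
#!/bin/bash
#PBS -N example-job
#PBS -q qprod
#PBS -l select=1:ncpus=16
#PBS -l walltime=01:00:00

# Load software from the modules environment (package name is illustrative)
module load intel

# Run from the directory the job was submitted from
cd "$PBS_O_WORKDIR"
./my_program
EOF

# Submit the script to the workload manager:
#   qsub example-job.pbs
```

The `#PBS` directives request one node with 16 cores for one hour; `qsub` returns a job ID that can be used to track the job with `qstat`.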