Introduction
============
Welcome to the Anselm supercomputer cluster. The Anselm cluster consists of
209 compute nodes, totaling 3344 compute cores with 15 TB RAM and giving
over 94 Tflop/s theoretical peak performance. Each node is a powerful
x86-64 computer, equipped with 16 cores, at least 64 GB RAM, and a 500 GB
hard drive. Nodes are equipped with Intel Sandy Bridge processors and
interconnected by a fully non-blocking fat-tree InfiniBand network. A few
nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC
accelerators. Read more in the [Hardware
Overview](hardware-overview.html).
The cluster runs the [bullx
Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html)
[operating system](software/operating-system.html), which is compatible
with the [Red Hat Linux
family](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg).
We have installed a wide range of
[software](software.1.html) packages targeted at
different scientific domains. These packages are accessible via the
[modules environment](environment-and-modules.html).
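For example, a typical session with the modules environment looks like
this (the module name and version below are illustrative; run `module
avail` to see what is actually installed):

```bash
# List all software packages available through the modules environment
module avail

# Load a package into the current shell session
# (the module name/version here is illustrative)
module load gcc/4.9.0

# Show which modules are currently loaded
module list
```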
A shared file system for user data (HOME, 320 TB) and a shared file
system for job data (SCRATCH, 146 TB) are available to users.
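A common workflow is to stage job input from HOME onto the SCRATCH file
system before running; a minimal sketch, assuming SCRATCH is mounted at
`/scratch` (the path and directory layout are assumptions):

```bash
# Stage input data from HOME to the SCRATCH file system before a job run
# (the /scratch mount point and directory layout are assumptions)
mkdir -p /scratch/$USER/myjob
cp -r $HOME/myjob/input /scratch/$USER/myjob/
```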
The PBS Professional workload manager provides [computing resource
allocation and job
execution](resource-allocation-and-job-execution.html).
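As a sketch, submitting a job with PBS Professional typically looks like
the following (the project ID, queue name, and job script are
assumptions; see the linked page for the cluster's actual queues and
accounting rules):

```bash
# Submit a job script, requesting one 16-core node
# (project ID, queue name, and script name are illustrative)
qsub -A MYPROJECT -q qprod -l select=1:ncpus=16 ./myjob.sh

# Check the status of your own jobs
qstat -u $USER
```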
Read more on how to [apply for
resources](../get-started-with-it4innovations/applying-for-resources.html),
[obtain login
credentials](../get-started-with-it4innovations/obtaining-login-credentials.html),
and [access the cluster](accessing-the-cluster.html).
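Once you have credentials, connecting is a standard SSH login; a minimal
sketch (the hostname below is a placeholder, not the real address; use
the login addresses from the access documentation):

```bash
# Connect to a cluster login node over SSH
# (hostname is an assumption; substitute the documented login address)
ssh your_username@login.example-cluster.example
```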