# Hardware Overview

The Anselm cluster consists of 209 computational nodes named cn[1-209], of which 180 are regular compute nodes, 23 are GPU Kepler K20m accelerated nodes, 4 are MIC Xeon Phi 5110P accelerated nodes, and 2 are fat nodes. Each node is a powerful x86-64 computer equipped with 16 cores (two eight-core Intel Sandy Bridge processors), at least 64 GB of RAM, and a local hard drive. User access to the Anselm cluster is provided by two login nodes, login[1,2]. The nodes are interlinked through high-speed InfiniBand and Ethernet networks. All nodes share a 320 TB /home disk for storage of user files. The 146 TB shared /scratch storage is available for scratch data.

The fat nodes are equipped with a large amount (512 GB) of memory. The virtualization infrastructure provides resources for running long-term servers and services in virtual mode. Fat nodes and virtual servers may access 45 TB of dedicated block storage. Accelerated nodes, fat nodes, and the virtualization infrastructure are available [upon request][a] from a PI.

Schematic representation of the Anselm cluster. Each box represents a node (computer) or storage capacity:

![](../img/Anselm-Schematic-Representation.png)

The cluster compute nodes cn[1-207] are organized within 13 chassis.

There are four types of compute nodes:

* 180 compute nodes without an accelerator
* 23 compute nodes with a GPU accelerator - an NVIDIA Tesla Kepler K20m
* 4 compute nodes with a MIC accelerator - an Intel Xeon Phi 5110P
* 2 fat nodes - equipped with 512 GB of RAM and two 100 GB SSD drives

[More about Compute nodes][1].

GPU-accelerated and MIC-accelerated nodes are available upon request; see the [Resources Allocation Policy][2].

All of these nodes are interconnected through fast InfiniBand and Ethernet networks. [More about the Network][3].
Every chassis provides an InfiniBand switch, marked **isw**, connecting all nodes in the chassis, as well as connecting the chassis to the upper-level switches.

All of the nodes share a 320 TB /home disk for storage of user files. The 146 TB shared /scratch storage is available for scratch data. These file systems are provided by the Lustre parallel file system. There is also local disk storage available on all compute nodes in /lscratch. [More about Storage][4].

User access to the Anselm cluster is provided by two login nodes, login1 and login2, and the data mover node dm1. [More about accessing the cluster][5].

The parameters are summarized in the following tables:

| **In general**                              |                                              |
| ------------------------------------------- | -------------------------------------------- |
| Primary purpose                             | High Performance Computing                   |
| Architecture of compute nodes               | x86-64                                       |
| Operating system                            | Linux (CentOS)                               |
| [**Compute nodes**][1]                      |                                              |
| Total                                       | 209                                          |
| Processor cores                             | 16 (2 x 8 cores)                             |
| RAM                                         | min. 64 GB, min. 4 GB per core               |
| Local disk drive                            | yes - usually 500 GB                         |
| Compute network                             | InfiniBand QDR, fully non-blocking, fat-tree |
| w/o accelerator                             | 180, cn[1-180]                               |
| GPU accelerated                             | 23, cn[181-203]                              |
| MIC accelerated                             | 4, cn[204-207]                               |
| Fat compute nodes                           | 2, cn[208-209]                               |
| **In total**                                |                                              |
| Total theoretical peak performance  (Rpeak) | 94 TFLOP/s                                   |
| Total max. LINPACK performance  (Rmax)      | 73 TFLOP/s                                   |
| Total amount of RAM                         | 15.136 TB                                    |
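
The cn[...] ranges above fully determine a node's type. The following is a minimal, hypothetical Python helper illustrating the mapping (it is not part of any cluster tooling):

```python
def node_type(n: int) -> str:
    """Classify Anselm node cn<n> using the ranges from the table above."""
    if 1 <= n <= 180:
        return "w/o accelerator"
    if 181 <= n <= 203:
        return "GPU accelerated (NVIDIA Tesla K20m)"
    if 204 <= n <= 207:
        return "MIC accelerated (Intel Xeon Phi 5110P)"
    if 208 <= n <= 209:
        return "fat node"
    raise ValueError(f"cn{n} is not an Anselm compute node")

print(node_type(204))  # MIC accelerated (Intel Xeon Phi 5110P)
```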

| Node             | Processor                               | Memory | Accelerator          |
| ---------------- | --------------------------------------- | ------ | -------------------- |
| w/o accelerator  | 2 x Intel Sandy Bridge E5-2665, 2.4 GHz | 64 GB  | -                    |
| GPU accelerated  | 2 x Intel Sandy Bridge E5-2470, 2.3 GHz | 96 GB  | NVIDIA Kepler K20m   |
| MIC accelerated  | 2 x Intel Sandy Bridge E5-2470, 2.3 GHz | 96 GB  | Intel Xeon Phi 5110P |
| Fat compute node | 2 x Intel Sandy Bridge E5-2665, 2.4 GHz | 512 GB | -                    |
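
The headline "In total" figures can be reproduced from the per-node data in the two tables. The sketch below is a back-of-the-envelope check; it assumes 8 double-precision FLOPs per cycle per Sandy Bridge core (AVX) and vendor peak figures of roughly 1.17 TFLOP/s per Tesla K20m and 1.01 TFLOP/s per Xeon Phi 5110P, none of which are stated on this page:

```python
# Back-of-the-envelope check of the "In total" figures above.
CORES_PER_NODE = 16

def cpu_tflops(nodes, ghz, flops_per_cycle=8):
    """Peak CPU TFLOP/s, assuming AVX: 8 DP FLOPs per cycle per core."""
    return nodes * CORES_PER_NODE * ghz * flops_per_cycle / 1000.0

rpeak = (
    cpu_tflops(180 + 2, 2.4)   # E5-2665 nodes: w/o accelerator + fat
    + cpu_tflops(23 + 4, 2.3)  # E5-2470 nodes: GPU + MIC accelerated
    + 23 * 1.17                # assumed DP peak per NVIDIA Tesla K20m
    + 4 * 1.01                 # assumed DP peak per Intel Xeon Phi 5110P
)
ram_tb = (180 * 64 + (23 + 4) * 96 + 2 * 512) / 1000.0

print(f"Rpeak ~ {rpeak:.1f} TFLOP/s")  # ~94.8, close to the quoted 94 TFLOP/s
print(f"RAM   = {ram_tb:.3f} TB")      # 15.136 TB, matching the table
```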

For more details refer to [Compute nodes][1], [Storage][4], and [Network][3].

[1]: compute-nodes.md
[2]: ../general/resources-allocation-policy.md
[3]: network.md
[4]: storage.md
[5]: ../general/shell-and-data-access.md

[a]: https://support.it4i.cz/rt