Hardware Overview
The Anselm cluster consists of 209 compute nodes named cn[1-209], of which 180 are regular compute nodes, 23 are GPU Kepler K20 accelerated nodes, 4 are MIC Xeon Phi 5110 accelerated nodes, and 2 are fat nodes. Each node is a powerful x86-64 computer equipped with 16 cores (two eight-core Intel Sandy Bridge processors), at least 64 GB of RAM, and a local hard drive. User access to the Anselm cluster is provided by two login nodes, login[1,2]. The nodes are interlinked by high-speed InfiniBand and Ethernet networks. All nodes share 320 TB of /home disk storage for user files. A shared 146 TB /scratch storage is available for scratch data.
The fat nodes are equipped with a large amount (512 GB) of memory. The virtualization infrastructure provides resources to run long-term servers and services in virtual mode. Fat nodes and virtual servers may access 45 TB of dedicated block storage. Accelerated nodes, fat nodes, and the virtualization infrastructure are available upon request made by a PI.
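Once logged in on a node, the per-node parameters above (16 cores, at least 64 GB of RAM, the shared /home and /scratch filesystems) can be checked with standard Linux tools. The following is a minimal sketch using generic commands, not an Anselm-specific utility:

```console
$ nproc                     # number of CPU cores on the node (16 on regular nodes)
$ free -h                   # installed memory (at least 64 GB; 512 GB on fat nodes)
$ df -h /home /scratch      # capacity of the shared /home (320 TB) and /scratch (146 TB) storage
```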
Schematic representation of the Anselm cluster (in the original diagram, each box represents a node or a storage unit). The diagram groups the hardware into user-oriented infrastructure, storage, and management infrastructure, and places the compute nodes in five racks served by InfiniBand switches: Rack 01 (isw0, isw4, isw5), Rack 02 (isw6, isw9, isw10), Rack 03 (isw11, isw14, isw15), Rack 04 (isw16, isw19, isw20), and Rack 05 (isw21).
The cluster compute nodes cn[1-207] are organized within 13 chassis.
There are four types of compute nodes:
- 180 compute nodes without an accelerator
- 23 compute nodes with a GPU accelerator - equipped with NVIDIA Tesla Kepler K20
- 4 compute nodes with a MIC accelerator - equipped with Intel Xeon Phi 5110P
- 2 fat nodes - equipped with 512 GB of RAM and two 100 GB SSD drives
Accelerated nodes and fat nodes are available upon request; see the Resources Allocation Policy.
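As a hedged illustration only, access to the accelerated and fat nodes is typically granted through dedicated PBS queues. The queue names below (qnvidia, qfat) are assumptions for this sketch; consult the Resources Allocation Policy for the authoritative queue names and limits.

```console
# Request one GPU-accelerated node (queue name assumed; verify against the Resources Allocation Policy)
$ qsub -q qnvidia -l select=1:ncpus=16 ./job_script.sh

# Request one fat node with 512 GB of RAM (queue name assumed)
$ qsub -q qfat -l select=1:ncpus=16 ./job_script.sh
```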