Commit 3c976183 authored by John Cawley

Update hardware-overview.md PROOFREAD

line 3; did you want to include the size of the local hard drive?
line 5; 'fat nodes and virtual servers may access 45 TB etc.'  Try: '...can access up to 45...' or '... have access to 45...'
parent ad9f5b89
# Hardware Overview
The Anselm cluster consists of 209 computational nodes named cn[1-209], of which 180 are regular compute nodes, 23 are GPU Kepler K20 accelerated nodes, 4 are MIC Xeon Phi 5110P accelerated nodes, and 2 are fat nodes. Each node is a powerful x86-64 computer, equipped with 16 cores (two eight-core Intel Sandy Bridge processors), at least 64 GB of RAM, and a local hard drive. User access to the Anselm cluster is provided by two login nodes, login[1,2]. The nodes are interlinked by high-speed InfiniBand and Ethernet networks. All nodes share a 320 TB /home disk for storage of user files. The 146 TB shared /scratch storage is available for scratch data.
The fat nodes are equipped with a large amount (512 GB) of memory. The virtualization infrastructure provides resources to run long-term servers and services in virtual mode. Fat nodes and virtual servers have access to 45 TB of dedicated block storage. Accelerated nodes, fat nodes, and the virtualization infrastructure are available [upon request](https://support.it4i.cz/rt) made by a PI.
Schematic representation of the Anselm cluster. Each box represents a node (computer) or storage capacity:
The cluster compute nodes cn[1-207] are organized within 13 chassis.
There are four types of compute nodes:
* 180 compute nodes without an accelerator
* 23 compute nodes with a GPU accelerator - an NVIDIA Tesla Kepler K20
* 4 compute nodes with a MIC accelerator - an Intel Xeon Phi 5110P
* 2 fat nodes - equipped with 512 GB of RAM and two 100 GB SSD drives
[More about Compute nodes](compute-nodes/).
GPU and MIC accelerated nodes are available upon request; see the [Resources Allocation Policy](resources-allocation-policy/).
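As an illustration of such a request, the sketch below submits a PBS job to a queue reserved for accelerated or fat nodes. The queue names `qnvidia` and `qfat` and the `./myjob` script are assumptions for this example; the Resources Allocation Policy page lists the authoritative queues and limits.

```console
$ qsub -q qnvidia -l select=1:ncpus=16 ./myjob   # one GPU accelerated node (queue name assumed)
$ qsub -q qfat -l select=1:ncpus=16 ./myjob      # one fat node with 512 GB of RAM (queue name assumed)
```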
All of these nodes are interconnected through fast InfiniBand and Ethernet networks. [More about the Network](network/).
Every chassis provides an InfiniBand switch, marked **isw**, connecting all nodes in the chassis, as well as connecting the chassis to the upper level switches.
All of the nodes share a 360 TB /home disk for storage of user files. The 146 TB shared /scratch storage is available for scratch data. These file systems are provided by the Lustre parallel file system. There is also local disk storage available on all compute nodes in /lscratch. [More about Storage](storage/).
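As a quick orientation, the shared Lustre file systems and the local scratch disk can be inspected from any node with standard tools; this is a minimal sketch assuming the mount points named above (/home, /scratch, /lscratch).

```console
$ lfs df -h /home       # capacity and usage of the shared /home Lustre file system
$ lfs df -h /scratch    # capacity and usage of the shared /scratch Lustre file system
$ df -h /lscratch       # local disk space on the current compute node
```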
User access to the Anselm cluster is provided by two login nodes, login1 and login2, and the data mover node dm1. [More about accessing the cluster.](shell-and-data-access/)
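For example, an interactive session starts on one of the login nodes, while bulk transfers go through the data mover node. The host names below (anselm.it4i.cz, dm1.anselm.it4i.cz) and the username are assumptions for illustration; see the shell and data access page for the actual addresses.

```console
$ ssh username@anselm.it4i.cz                                      # lands on login1 or login2 (host name assumed)
$ scp dataset.tar username@dm1.anselm.it4i.cz:/scratch/username/   # bulk transfer via dm1 (host name assumed)
```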
The parameters are summarized in the following tables:
| Architecture of compute nodes | x86-64 |
| Operating system | Linux |
| [**Compute nodes**](compute-nodes/) | |
| Total | 209 |
| Processor cores | 16 (2 x 8 cores) |
| RAM | min. 64 GB, min. 4 GB per core |
| Local disk drive | yes - usually 500 GB |
| MIC accelerated | 2 x Intel Sandy Bridge E5-2470, 2.3 GHz | 96 GB | Intel Xeon Phi 5110P |
| Fat compute node | 2 x Intel Sandy Bridge E5-2665, 2.4 GHz | 512 GB | - |
For more details, please refer to [Compute nodes](compute-nodes/), [Storage](storage/), and [Network](network/).