Commit 450e20b5 authored by Jan Siwiec

Update hardware-overview.md
# Hardware Overview

The Barbora cluster consists of 201 computational nodes named **cn[001-201]**,
of which 192 are regular compute nodes, 8 are GPU Tesla V100 accelerated nodes, and 1 is a fat node.
Each node is a powerful x86-64 computer equipped with 36/24/128 cores
(18-core Intel Cascade Lake 6240 / 12-core Intel Skylake Gold 6126 / 16-core Intel Skylake 8153) and at least 192 GB of RAM.
User access to the Barbora cluster is provided by two login nodes, **login[1,2]**.
The nodes are interlinked through high-speed InfiniBand and Ethernet networks.
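The bracketed node names above (**cn[001-201]**, **login[1,2]**) use the common cluster node-set shorthand. A minimal sketch of expanding such a specification into individual host names (a hypothetical helper for illustration, not an official tool of this cluster):

```python
import re

def expand_nodeset(spec: str) -> list[str]:
    """Expand shorthand like 'cn[001-201]' or 'login[1,2]' into
    individual node names, preserving zero padding."""
    m = re.fullmatch(r"([a-z]+)\[([^\]]+)\]", spec)
    if not m:
        return [spec]  # plain name, nothing to expand
    prefix, body = m.groups()
    names = []
    for part in body.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            width = len(lo)  # keep the zero padding of the lower bound
            names += [f"{prefix}{i:0{width}d}"
                      for i in range(int(lo), int(hi) + 1)]
        else:
            names.append(prefix + part)
    return names

print(len(expand_nodeset("cn[001-201]")))  # 201
print(expand_nodeset("login[1,2]"))        # ['login1', 'login2']
```

Tools such as ClusterShell's `nodeset` provide a full implementation of this notation; the sketch covers only the simple range/list forms used on this page.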
The fat node is equipped with a large amount (6144 GB) of memory.
Virtualization infrastructure provides resources for running long-term servers and services in virtual mode.
The accelerated nodes, fat node, and virtualization infrastructure are available [upon request][a] from a PI.
**There are three types of compute nodes:**

* 192 compute nodes without an accelerator
* 8 compute nodes with a GPU accelerator - 4x NVIDIA Tesla V100-SXM2
* 1 fat node - equipped with 6144 GB of RAM

[More about compute nodes][1].

GPU and accelerated nodes are available upon request, see the [Resources Allocation Policy][2].
All of these nodes are interconnected through fast InfiniBand and Ethernet networks.
[More about the computing network][3].

Every chassis provides an InfiniBand switch, marked **isw**, connecting all nodes in the chassis,
as well as connecting the chassis to the upper-level switches.

User access to Barbora is provided by two login nodes: login1 and login2.
[More about accessing the cluster][5].
The parameters are summarized in the following tables:
| [**Compute nodes**][1]                     |                                              |
| ------------------------------------------ | -------------------------------------------- |
| Total                                      | 201                                          |
| Processor cores                            | 36/24/128 (2x18 cores/2x12 cores/8x16 cores) |
| RAM                                        | min. 192 GB                                  |
| Local disk drive                           | no                                           |
| Compute network                            | InfiniBand HDR                               |
| w/o accelerator                            | 192, cn[001-192]                             |
| GPU accelerated                            | 8, cn[193-200]                               |
| Fat compute nodes                          | 1, cn[201]                                   |
| **In total**                               |                                              |
| Total theoretical peak performance (Rpeak) | 848.8448 TFLOP/s                             |
| Total amount of RAM                        | 44.544 TB                                    |
| Node             | Processor                               | Memory  | Accelerator            |
| ---------------- | --------------------------------------- | ------- | ---------------------- |
| Regular node     | 2x Intel Cascade Lake 6240, 2.6 GHz     | 192 GB  | -                      |
| GPU accelerated  | 2x Intel Skylake Gold 6126, 2.6 GHz     | 192 GB  | NVIDIA Tesla V100-SXM2 |
| Fat compute node | 2x Intel Skylake Platinum 8153, 2.0 GHz | 6144 GB | -                      |
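The aggregate figures quoted above (Rpeak and total RAM) can be cross-checked against the per-node data. A small sketch of that arithmetic, assuming 32 double-precision FLOP/cycle/core for the AVX-512 CPUs and 7.8 TFLOP/s FP64 peak per Tesla V100-SXM2 (both figures are assumptions, not stated on this page):

```python
# Sanity check of the aggregate figures in the tables above.
FLOPS_PER_CYCLE = 32    # assumed: 2x AVX-512 FMA units, double precision
V100_FP64_TFLOPS = 7.8  # assumed: FP64 peak of one Tesla V100-SXM2

node_types = [
    # (count, cores/node, clock GHz, RAM GB, GPUs/node)
    (192,  36, 2.6,  192, 0),  # regular, 2x Cascade Lake 6240
    (8,    24, 2.6,  192, 4),  # GPU accelerated, 2x Skylake Gold 6126
    (1,   128, 2.0, 6144, 0),  # fat node
]

rpeak_tflops = sum(
    n * (cores * ghz * FLOPS_PER_CYCLE / 1000 + gpus * V100_FP64_TFLOPS)
    for n, cores, ghz, ram, gpus in node_types
)
total_ram_tb = sum(n * ram for n, cores, ghz, ram, gpus in node_types) / 1000

print(f"Rpeak: {rpeak_tflops:.4f} TFLOP/s")  # 848.8448 TFLOP/s
print(f"RAM:   {total_ram_tb:.3f} TB")       # 44.544 TB
```

Both results match the "In total" rows of the first table, which suggests the quoted Rpeak counts the CPUs of the GPU nodes as well as the accelerators themselves.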
For more details refer to the sections [Compute Nodes][1], [Storage][4], [Visualization Servers][6], and [Network][3].