The Barbora cluster consists of 201 computational nodes named **cn[001-201]**, of which 192 are regular compute nodes, 8 are GPU-accelerated nodes with Tesla V100 accelerators, and 1 is a fat node. Each node is a powerful x86-64 computer, equipped with 36/24/128 cores (18-core Intel Cascade Lake 6240 / 12-core Intel Skylake Gold 6126 / 16-core Intel Skylake 8153) and at least 192GB of RAM. User access to the Barbora cluster is provided by two login nodes, **login[1,2]**. The nodes are interlinked through high-speed InfiniBand and Ethernet networks.
The fat node is equipped with a large amount (6144GB) of memory. Virtualization infrastructure provides resources to run long-term servers and services in virtual mode. The accelerated nodes, the fat node, and the virtualization infrastructure are available [upon request][a] from a PI.
**There are three types of compute nodes:**
* 192 compute nodes without an accelerator
* 8 compute nodes with a GPU accelerator - 4x NVIDIA Tesla V100-SXM2
* 1 fat node - equipped with 6144GB of RAM
[More about compute nodes][1].
GPU-accelerated and fat nodes are available upon request, see the [Resources Allocation Policy][2].
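As an illustration only (not part of the official documentation), the following minimal CUDA sketch enumerates the GPUs visible to a job running on one of the accelerated nodes; with all four Tesla V100-SXM2 cards allocated, it should report four devices. It assumes the code is compiled with `nvcc` and run on a GPU node with the CUDA runtime available.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Sketch: list the CUDA devices visible inside the current job allocation.
// On a fully allocated Barbora GPU node, four Tesla V100-SXM2 devices are expected.
int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Visible CUDA devices: %d\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  [%d] %s, %.1f GiB global memory\n",
               i, prop.name, prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```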
All of these nodes are interconnected through fast InfiniBand and Ethernet networks. [More about the computing network][3].
Every chassis provides an InfiniBand switch, marked **isw**, connecting all nodes in the chassis, as well as connecting the chassis to the upper level switches.
User access to Barbora is provided by two login nodes: login1 and login2. [More about accessing the cluster][5].
...
The parameters are summarized in the following tables: