# Hardware Overview
    
    
The Barbora cluster consists of 201 computational nodes named **cn[001-201]**,
of which 192 are regular compute nodes, 8 are GPU Tesla V100 accelerated nodes, and 1 is a fat node.
Each node is a powerful x86-64 computer equipped with 36, 24, or 128 cores
(2x 18-core Intel Cascade Lake 6240, 2x 12-core Intel Skylake Gold 6126, or 8x 16-core Intel Skylake Platinum 8153, respectively) and at least 192 GB of RAM.
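
The bracketed names use zero-padded hostlist notation, so the three groups are cn[001-192], cn[193-200], and cn[201]. As a minimal illustrative sketch (the `expand` helper is hypothetical, not an IT4I tool), the ranges can be expanded and checked in Python:

```python
# Minimal sketch: expand Barbora's zero-padded node-name ranges.
# The expand() helper is hypothetical, written only for illustration.
def expand(prefix: str, start: int, stop: int, width: int = 3) -> list[str]:
    """Expand ("cn", 1, 201) into ["cn001", "cn002", ..., "cn201"]."""
    return [f"{prefix}{i:0{width}d}" for i in range(start, stop + 1)]

regular = expand("cn", 1, 192)    # cn[001-192], no accelerator
gpu     = expand("cn", 193, 200)  # cn[193-200], 4x Tesla V100 each
fat     = expand("cn", 201, 201)  # cn[201], the 6144 GB fat node

# The three groups together cover all 201 computational nodes.
assert len(regular) + len(gpu) + len(fat) == 201
print(regular[0], regular[-1], gpu[0], fat[0])  # cn001 cn192 cn193 cn201
```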
    
User access to the Barbora cluster is provided by two login nodes **login[1,2]**.
    
The nodes are interlinked through high-speed InfiniBand and Ethernet networks.
    
The fat node is equipped with 6144 GB of memory.
Virtualization infrastructure provides resources for running long-term servers and services in virtual mode.
The accelerated nodes, the fat node, and the virtualization infrastructure are available [upon request][a] from a PI.
    
**There are three types of compute nodes:**
    
* 192 compute nodes without an accelerator
* 8 compute nodes with a GPU accelerator - 4x NVIDIA Tesla V100-SXM2
* 1 fat node - equipped with 6144 GB of RAM
    
[More about compute nodes][1].
    
GPU-accelerated nodes are available upon request; see the [Resources Allocation Policy][2].
    
    
All of these nodes are interconnected through fast InfiniBand and Ethernet networks.
[More about the computing network][3].
Every chassis provides an InfiniBand switch, marked **isw**, connecting all nodes in the chassis,
as well as connecting the chassis to the upper-level switches.
    
User access to Barbora is provided by two login nodes: login1 and login2.
[More about accessing the cluster][5].
    
The parameters are summarized in the following tables:
    
| **In general**                              |                                              |
| ------------------------------------------- | -------------------------------------------- |
| Primary purpose                             | High Performance Computing                   |
| Architecture of compute nodes               | x86-64                                       |
| Operating system                            | Linux                                        |
| [**Compute nodes**][1]                      |                                              |
| Total                                       | 201                                          |
| Processor cores                             | 36/24/128 (2x18 cores/2x12 cores/8x16 cores) |
| RAM                                         | min. 192 GB                                  |
| Local disk drive                            | no                                           |
| Compute network                             | InfiniBand HDR                               |
| w/o accelerator                             | 192, cn[001-192]                             |
| GPU accelerated                             | 8, cn[193-200]                               |
| Fat compute nodes                           | 1, cn[201]                                   |
| **In total**                                |                                              |
| Total theoretical peak performance (Rpeak)  | 848.8448 TFLOP/s                             |
| Total amount of RAM                         | 44.544 TB                                    |
    
| Node             | Processor                               | Memory  | Accelerator               |
| ---------------- | --------------------------------------- | ------- | ------------------------- |
| Regular node     | 2x Intel Cascade Lake 6240, 2.6 GHz     | 192 GB  | -                         |
| GPU accelerated  | 2x Intel Skylake Gold 6126, 2.6 GHz     | 192 GB  | 4x NVIDIA Tesla V100-SXM2 |
| Fat compute node | 8x Intel Skylake Platinum 8153, 2.0 GHz | 6144 GB | -                         |
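
The aggregate figures above follow from the per-node specifications. The short sketch below reproduces them, assuming 32 double-precision FLOPs per cycle per core for the AVX-512 CPUs and a 7.8 TFLOP/s double-precision peak per Tesla V100-SXM2 (per-unit peaks that are assumptions, not values stated on this page):

```python
# Sketch: reproduce the "In total" figures from the per-node specs above.
# Assumed (not stated on this page): 32 DP FLOPs/cycle/core on the
# AVX-512 CPUs, and 7.8 TFLOP/s DP peak per NVIDIA Tesla V100-SXM2.
CPU_FLOPS_PER_CYCLE = 32   # two AVX-512 FMA units, double precision
V100_TFLOPS = 7.8          # DP peak per V100-SXM2 (assumed)

# (node count, cores per node, clock in GHz, RAM in GB, GPUs per node)
nodes = [
    (192, 2 * 18, 2.6,  192, 0),  # regular: 2x Cascade Lake 6240
    (  8, 2 * 12, 2.6,  192, 4),  # GPU:     2x Skylake Gold 6126
    (  1, 8 * 16, 2.0, 6144, 0),  # fat:     8x Skylake Platinum 8153
]

rpeak_tflops = sum(
    count * (cores * ghz * CPU_FLOPS_PER_CYCLE / 1000 + gpus * V100_TFLOPS)
    for count, cores, ghz, _ram, gpus in nodes
)
ram_tb = sum(count * ram for count, _cores, _ghz, ram, _gpus in nodes) / 1000

print(f"Rpeak = {rpeak_tflops:.4f} TFLOP/s")  # 848.8448
print(f"RAM   = {ram_tb:.3f} TB")             # 44.544
```

Under these assumptions the arithmetic reproduces the published totals exactly, which is also consistent with the fat node having 8 x 16 = 128 cores.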
    
For more details refer to the sections [Compute Nodes][1], [Storage][4], [Visualization Servers][6], and [Network][3].
    
[1]: compute-nodes.md
[2]: ../general/resources-allocation-policy.md
[3]: network.md
[4]: storage.md
[5]: ../general/shell-and-data-access.md
[6]: visualization.md

[a]: https://support.it4i.cz/rt