diff --git a/docs.it4i/barbora/network.md b/docs.it4i/barbora/network.md
index 58b28ec3cb7fb3f79e1a081bc6eedf8d35619322..0f9ab3766767b32c1526f2b46f8ee8ffe6b5efd3 100644
--- a/docs.it4i/barbora/network.md
+++ b/docs.it4i/barbora/network.md
@@ -2,9 +2,11 @@
 
 All of the compute and login nodes of Barbora are interconnected through a [InfiniBand][a] HDR 200 Gbps network and a Gigabit Ethernet network.
 
-Compute nodes and the service infrastructure is connected by the HDR100 technology that allows one 200Gbps HDR port (aggregation 4x 50Gbps) to be divided into two HDR100 ports with 100Gbps (2x 50Gbps) bandwidth.
+Compute nodes and the service infrastructure are connected by HDR100 technology,
+which allows one 200 Gbps HDR port (aggregation of 4x 50 Gbps) to be divided into two HDR100 ports with 100 Gbps (2x 50 Gbps) bandwidth each.
 
-The cabling between the L1 and L2 layer is realized by HDR cabling, connecting the end devices is realized by so called Y or splitter cable (1x HRD200 - 2x HDR100).
+The cabling between the L1 and L2 layers is realized by HDR cabling;
+the end devices are connected by so-called Y (splitter) cables (1x HDR200 - 2x HDR100).
 
@@ -21,9 +23,9 @@ The cabling between the L1 and L2 layer is realized by HDR cabling, connecting t
 
 **Performance**
 
-* 40x HDR 200Gb/s ports in a 1U switch
-* 80x HDR100 100Gb/s ports in a 1U switch
-* 16Tb/s aggregate switch throughput
+* 40x HDR 200 Gb/s ports in a 1U switch
+* 80x HDR100 100 Gb/s ports in a 1U switch
+* 16 Tb/s aggregate switch throughput
 * Up to 15.8 billion messages-per-second
 * 90ns switch latency