From 1b9117496e957ae91b16f66c1320ca3fee4ac3d6 Mon Sep 17 00:00:00 2001
From: Jan Siwiec <jan.siwiec@vsb.cz>
Date: Mon, 13 Mar 2023 10:49:33 +0100
Subject: [PATCH] OJ proofreading

---
 docs.it4i/barbora/hardware-overview.md | 30 +++++++++++++-------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/docs.it4i/barbora/hardware-overview.md b/docs.it4i/barbora/hardware-overview.md
index cc3f350fa..00fbd3906 100644
--- a/docs.it4i/barbora/hardware-overview.md
+++ b/docs.it4i/barbora/hardware-overview.md
@@ -1,20 +1,20 @@
 # Hardware Overview
 
-The Barbora cluster consists of 201 computational nodes named **cn[1-201]** of which 192 are regular compute nodes, 8 are GPU Tesla V100 accelerated nodes and 1 is a fat node. Each node is a powerful x86-64 computer, equipped with 36/24/128 cores (18-core Intel Cascade Lake 6240 / 12-core Intel Skylake Gold 6126 / 16-core Intel Skylake 8153), at least 192 GB of RAM. User access to the Barbora cluster is provided by two login nodes **login[1,2]**. The nodes are interlinked through high speed InfiniBand and Ethernet networks.
+The Barbora cluster consists of 201 computational nodes named **cn[001-201]**, of which 192 are regular compute nodes, 8 are GPU Tesla V100 accelerated nodes, and 1 is a fat node. Each node is a powerful x86-64 computer, equipped with 36/24/128 cores (18-core Intel Cascade Lake 6240 / 12-core Intel Skylake Gold 6126 / 16-core Intel Skylake 8153) and at least 192GB of RAM. User access to the Barbora cluster is provided by two login nodes **login[1,2]**. The nodes are interlinked through high-speed InfiniBand and Ethernet networks.
 
-The Fat node is equipped with a large amount (6144 GB) of memory. Virtualization infrastructure provides resources to run long-term servers and services in virtual mode. The Accelerated nodes, Fat node, and Virtualization infrastructure are available [upon request][a] from a PI.
+The fat node is equipped with a large amount (6144GB) of memory. Virtualization infrastructure provides resources to run long-term servers and services in virtual mode. The accelerated nodes, the fat node, and the virtualization infrastructure are available [upon request][a] from a PI.
 
 **There are three types of compute nodes:**
 
 * 192 compute nodes without an accelerator
 * 8 compute nodes with a GPU accelerator - 4x NVIDIA Tesla V100-SXM2
-* 1 fat node - equipped with 6144 GB of RAM
+* 1 fat node - equipped with 6144GB of RAM
 
-[More about Compute nodes][1].
+[More about compute nodes][1].
 
 GPU and accelerated nodes are available upon request, see the [Resources Allocation Policy][2].
 
-All of these nodes are interconnected through fast InfiniBand and Ethernet networks.  [More about the Network][3].
+All of these nodes are interconnected through fast InfiniBand and Ethernet networks. [More about the computing network][3].
 Every chassis provides an InfiniBand switch, marked **isw**, connecting all nodes in the chassis, as well as connecting the chassis to the upper level switches.
 
 User access to Barbora is provided by two login nodes: login1 and login2. [More about accessing the cluster][5].
@@ -29,23 +29,23 @@ The parameters are summarized in the following tables:
 | [**Compute nodes**][1]                      |                                              |
 | Total                                       | 201                                          |
 | Processor cores                             | 36/24/128 (2x18 cores/2x12 cores/8x16 cores) |
-| RAM                                         | min. 192 GB                                  |
+| RAM                                         | min. 192GB                                   |
 | Local disk drive                            | no                                           |
 | Compute network                             | InfiniBand HDR                               |
-| w/o accelerator                             | 192, cn[1-192]                               |
+| w/o accelerator                             | 192, cn[001-192]                             |
 | GPU accelerated                             | 8, cn[193-200]                               |
 | Fat compute nodes                           | 1, cn[201]                                   |
 | **In total**                                |                                              |
-| Total theoretical peak performance  (Rpeak) | 848.8448 TFLOP/s                             |
-| Total amount of RAM                         | 44.544 TB                                    |
+| Total theoretical peak performance (Rpeak)  | 848.8448TFLOP/s                              |
+| Total amount of RAM                         | 44.544TB                                     |
 
-| Node             | Processor                                | Memory  | Accelerator            |
-| ---------------- | ---------------------------------------  | ------  | ---------------------- |
-| w/o accelerator  | 2 x Intel Cascade Lake 6240, 2.6 GHz     | 192 GB  | -                      |
-| GPU accelerated  | 2 x Intel Skylake Gold 6126, 2.6 GHz     | 192 GB  | NVIDIA Tesla V100-SXM2 |
-| Fat compute node | 2 x Intel Skylake Platinum 8153, 2.0 GHz | 6144 GB | -                      |
+| Node             | Processor                              | Memory | Accelerator            |
+| ---------------- | -------------------------------------- | ------ | ---------------------- |
+| Regular node     | 2x Intel Cascade Lake 6240, 2.6GHz     | 192GB  | -                      |
+| GPU accelerated  | 2x Intel Skylake Gold 6126, 2.6GHz     | 192GB  | NVIDIA Tesla V100-SXM2 |
+| Fat compute node | 2x Intel Skylake Platinum 8153, 2.0GHz | 6144GB | -                      |
 
-For more details refer to [Compute nodes][1], [Storage][4], [Visualization servers][6], and [Network][3].
+For more details, refer to the sections [Compute Nodes][1], [Storage][4], [Visualization Servers][6], and [Network][3].
 
 [1]: compute-nodes.md
 [2]: ../general/resources-allocation-policy.md
-- 
GitLab