From 2c035b8e06228b051a339b0447b0f1520dfdfdb5 Mon Sep 17 00:00:00 2001
From: Lubomir Prda <lubomir.prda@vsb.cz>
Date: Mon, 23 Jan 2017 10:20:16 +0100
Subject: [PATCH] Capitalization of FLOP letters

---
 docs.it4i/anselm-cluster-documentation/compute-nodes.md     | 4 ++--
 docs.it4i/anselm-cluster-documentation/hardware-overview.md | 4 ++--
 docs.it4i/anselm-cluster-documentation/introduction.md      | 2 +-
 docs.it4i/salomon/compute-nodes.md                          | 4 ++--
 docs.it4i/salomon/hardware-overview.md                      | 2 +-
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/docs.it4i/anselm-cluster-documentation/compute-nodes.md b/docs.it4i/anselm-cluster-documentation/compute-nodes.md
index c9eecd5f5..86a89350b 100644
--- a/docs.it4i/anselm-cluster-documentation/compute-nodes.md
+++ b/docs.it4i/anselm-cluster-documentation/compute-nodes.md
@@ -69,7 +69,7 @@ Anselm is equipped with Intel Sandy Bridge processors Intel Xeon E5-2665 (nodes
 
 - eight-core
 - speed: 2.4 GHz, up to 3.1 GHz using Turbo Boost Technology
-- peak performance: 19.2 Gflop/s per
+- peak performance: 19.2 GFLOP/s per
   core
 - caches:
   - L2: 256 KB per core
@@ -80,7 +80,7 @@ Anselm is equipped with Intel Sandy Bridge processors Intel Xeon E5-2665 (nodes
 
 - eight-core
 - speed: 2.3 GHz, up to 3.1 GHz using Turbo Boost Technology
-- peak performance: 18.4 Gflop/s per
+- peak performance: 18.4 GFLOP/s per
   core
 - caches:
   - L2: 256 KB per core
diff --git a/docs.it4i/anselm-cluster-documentation/hardware-overview.md b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
index 42b3cba4c..9d129933b 100644
--- a/docs.it4i/anselm-cluster-documentation/hardware-overview.md
+++ b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
@@ -47,8 +47,8 @@ The parameters are summarized in the following tables:
 |MIC accelerated|4, cn[204-207]|
 |Fat compute nodes|2, cn[208-209]|
 |**In total**||
-|Total theoretical peak performance (Rpeak)|94 Tflop/s|
-|Total max. LINPACK performance (Rmax)|73 Tflop/s|
+|Total theoretical peak performance (Rpeak)|94 TFLOP/s|
+|Total max. LINPACK performance (Rmax)|73 TFLOP/s|
 |Total amount of RAM|15.136 TB|
 
 |Node|Processor|Memory|Accelerator|
diff --git a/docs.it4i/anselm-cluster-documentation/introduction.md b/docs.it4i/anselm-cluster-documentation/introduction.md
index 520b4d616..574bf41d3 100644
--- a/docs.it4i/anselm-cluster-documentation/introduction.md
+++ b/docs.it4i/anselm-cluster-documentation/introduction.md
@@ -1,7 +1,7 @@
 Introduction
 ============
 
-Welcome to Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totaling 3344 compute cores with 15 TB RAM and giving over 94 Tflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64 GB RAM, and 500 GB hard disk drive. Nodes are interconnected by fully non-blocking fat-tree InfiniBand network and equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).
+Welcome to Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totaling 3344 compute cores with 15 TB RAM and giving over 94 TFLOP/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64 GB RAM, and 500 GB hard disk drive. Nodes are interconnected by fully non-blocking fat-tree InfiniBand network and equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).
 
 The cluster runs bullx Linux ([bull](http://www.bull.com/bullx-logiciels/systeme-exploitation.html)) [operating system](software/operating-system/), which is compatible with the RedHat [ Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/).
 
diff --git a/docs.it4i/salomon/compute-nodes.md b/docs.it4i/salomon/compute-nodes.md
index 0504b444b..df903c1d6 100644
--- a/docs.it4i/salomon/compute-nodes.md
+++ b/docs.it4i/salomon/compute-nodes.md
@@ -61,7 +61,7 @@ Salomon is equipped with Intel Xeon processors Intel Xeon E5-2680v3. Processors
 
 - 12-core
 - speed: 2.5 GHz, up to 3.3 GHz using Turbo Boost Technology
-- peak performance: 19.2 Gflop/s per core
+- peak performance: 19.2 GFLOP/s per core
 - caches:
   - Intel® Smart Cache: 30 MB
 - memory bandwidth at the level of the processor: 68 GB/s
@@ -71,7 +71,7 @@ Salomon is equipped with Intel Xeon processors Intel Xeon E5-2680v3. Processors
 
 - 61-core
 - speed: 1.238 GHz, up to 1.333 GHz using Turbo Boost Technology
-- peak performance: 18.4 Gflop/s per core
+- peak performance: 18.4 GFLOP/s per core
 - caches:
   - L2: 30.5 MB
 - memory bandwidth at the level of the processor: 352 GB/s
diff --git a/docs.it4i/salomon/hardware-overview.md b/docs.it4i/salomon/hardware-overview.md
index c00647be2..990312f7c 100644
--- a/docs.it4i/salomon/hardware-overview.md
+++ b/docs.it4i/salomon/hardware-overview.md
@@ -28,7 +28,7 @@ General information
 |w/o accelerator|576|
 |MIC accelerated|432|
 |**In total**||
-|Total theoretical peak performance (Rpeak)|2011 Tflop/s|
+|Total theoretical peak performance (Rpeak)|2011 TFLOP/s|
 |Total amount of RAM|129.024 TB|
 
 Compute nodes
--
GitLab
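
Note on the figures touched by this patch: the per-core peak values quoted for the Anselm Sandy Bridge nodes follow from base clock times double-precision FLOP per cycle. Below is a minimal sketch of that arithmetic, assuming 8 DP FLOP/cycle for the Sandy Bridge AVX units; the function name `peak_gflops_per_core` is illustrative only and not taken from the documentation.

```python
# Theoretical per-core peak = base clock (GHz) x double-precision FLOP per cycle.
# Assumption: 8 DP FLOP/cycle for Sandy Bridge AVX (4-wide add + 4-wide multiply).
DP_FLOP_PER_CYCLE = 8


def peak_gflops_per_core(base_clock_ghz: float,
                         flop_per_cycle: int = DP_FLOP_PER_CYCLE) -> float:
    """Peak GFLOP/s of one core at its base (non-Turbo) clock."""
    return base_clock_ghz * flop_per_cycle


print(peak_gflops_per_core(2.4))  # 19.2 -- matches the 2.4 GHz Anselm nodes
print(peak_gflops_per_core(2.3))  # 18.4 -- matches the 2.3 GHz Anselm nodes
```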