Commit 93a90d28 authored by Lukáš Krupčík

Fix internal links (test)

parent 5c2270cd
@@ -54,7 +54,7 @@ Anselm is cluster of x86-64 Intel based nodes built on Bull Extreme Computing bu
### Compute Nodes Summary
-|Node type|Count|Range|Memory|Cores|[Access](resource-allocation-and-job-execution/resources-allocation-policy.html)|
+|Node type|Count|Range|Memory|Cores|[Access](resource-allocation-and-job-execution/resources-allocation-policy/)|
|---|---|---|---|---|---|
|Nodes without accelerator|180|cn[1-180]|64GB|16 @ 2.4GHz|qexp, qprod, qlong, qfree|
|Nodes with GPU accelerator|23|cn[181-203]|96GB|16 @ 2.3GHz|qgpu, qprod|
@@ -480,14 +480,14 @@ There are four types of compute nodes:
[More about Compute nodes](compute-nodes.html).
-GPU and accelerated nodes are available upon request, see the [Resources Allocation Policy](resource-allocation-and-job-execution/resources-allocation-policy.html).
+GPU and accelerated nodes are available upon request, see the [Resources Allocation Policy](resource-allocation-and-job-execution/resources-allocation-policy/).
-All these nodes are interconnected by fast InfiniBand network and Ethernet network. [More about the Network](network.html).
+All these nodes are interconnected by fast InfiniBand network and Ethernet network. [More about the Network](network/).
Every chassis provides an InfiniBand switch, marked **isw**, connecting all nodes in the chassis, as well as connecting the chassis to the upper level switches.
-All nodes share 360TB /home disk storage to store user files. The 146TB shared /scratch storage is available for the scratch data. These file systems are provided by Lustre parallel file system. There is also local disk storage available on all compute nodes /lscratch. [More about Storage](storage.html).
+All nodes share 360TB /home disk storage to store user files. The 146TB shared /scratch storage is available for the scratch data. These file systems are provided by Lustre parallel file system. There is also local disk storage available on all compute nodes /lscratch. [More about Storage](storage/).
-The user access to the Anselm cluster is provided by two login nodes login1, login2, and data mover node dm1. [More about accessing cluster.](accessing-the-cluster.html)
+The user access to the Anselm cluster is provided by two login nodes login1, login2, and data mover node dm1. [More about accessing cluster.](accessing-the-cluster/)
The parameters are summarized in the following tables:
@@ -496,7 +496,7 @@ The parameters are summarized in the following tables:
|Primary purpose|High Performance Computing|
|Architecture of compute nodes|x86-64|
|Operating system|Linux|
-|[**Compute nodes**](compute-nodes.html)||
+|[**Compute nodes**](compute-nodes/)||
|Total|209|
|Processor cores|16 (2x8 cores)|
|RAM|min. 64 GB, min. 4 GB per core|
@@ -518,5 +518,5 @@ The parameters are summarized in the following tables:
|MIC accelerated|2x Intel Sandy Bridge E5-2470, 2.3GHz|96GB|Intel Xeon Phi P5110|
|Fat compute node|2x Intel Sandy Bridge E5-2665, 2.4GHz|512GB|-|
-For more details please refer to the [Compute nodes](compute-nodes.html), [Storage](storage.html), and [Network](network.html).
+For more details please refer to the [Compute nodes](compute-nodes/), [Storage](storage/), and [Network](network/).
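
The documentation touched by this diff lists the scheduler queues granting access to each node type (qexp, qprod, qlong, qfree, qgpu) and notes that user access goes through the login1/login2 nodes. As a minimal sketch only, assuming a PBS Pro scheduler and treating the hostname, username, project ID, and script name as placeholders, access and job submission could look like:

```bash
# Log in to the cluster (hostname and username are placeholders).
ssh username@anselm.it4i.cz

# Submit a job to the qprod queue from the node table above, requesting
# one full 16-core node (project ID and job script are placeholders).
qsub -A PROJECT-ID -q qprod -l select=1:ncpus=16 -l walltime=01:00:00 ./myjob.sh
```

The exact hostnames, accounting flags, and queue policies are defined in the linked Resources Allocation Policy and accessing-the-cluster pages, not in this diff.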