Commit 5c6f9f43 authored by Jan Siwiec's avatar Jan Siwiec

Update hardware-overview.md

parent a8e66b50
## Introduction
The Salomon cluster consists of 1008 computational nodes, of which 576 are regular compute nodes and 432 are accelerated nodes. Each node is a powerful x86-64 computer equipped with 24 cores (two twelve-core Intel Xeon processors) and 128 GB RAM. The nodes are interlinked by high-speed InfiniBand and Ethernet networks. All nodes share a 0.5 PB /home NFS disk storage for user files. Users may also use a DDN Lustre shared storage with a capacity of 1.69 PB, available for scratch project data. User access to the Salomon cluster is provided by four login nodes.
[More about][1] the schematic representation of the Salomon cluster compute nodes' InfiniBand topology.
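After logging in, the advertised node configuration can be verified from a shell. A minimal sketch, assuming only standard Linux utilities (`nproc`, `lscpu`, `free`) are present on the node:

```shell
# Verify the node's CPU and memory configuration
nproc                        # logical cores visible (24 on a regular compute node)
lscpu | grep 'Model name'    # processor model, e.g. Intel Xeon E5-2680 v3
free -h | grep '^Mem'        # installed RAM, roughly 128 GB on a compute node
```

The exact output depends on the node type; accelerated nodes additionally expose their Xeon Phi coprocessors, which do not appear in `nproc` counts on the host.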
The parameters are summarized in the following tables:

| Node            | Count | Processor                         | Cores | Memory | Accelerator                                   |
| --------------- | ----- | --------------------------------- | ----- | ------ | --------------------------------------------- |
| w/o accelerator | 576 | 2 x Intel Xeon E5-2680v3, 2.5 GHz | 24 | 128 GB | - |
| MIC accelerated | 432 | 2 x Intel Xeon E5-2680v3, 2.5 GHz | 24 | 128 GB | 2 x Intel Xeon Phi 7120P, 61 cores, 16 GB RAM |
For more details, refer to the [Compute nodes][2] section.
## Remote Visualization Nodes
For remote visualization, two nodes with NICE DCV software are available, each configured as follows:
| Node | Count | Processor | Cores | Memory | GPU Accelerator |
| ------------- | ----- | --------------------------------- | ----- | ------ | ----------------------------- |
## SGI UV 2000
For large memory computations, a special SMP/NUMA SGI UV 2000 server is available:
| Node | Count | Processor | Cores | Memory | Extra HW |
| ------ | ----- | ------------------------------------------- | ----- | --------------------- | ------------------------------------------------------------------------ |
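Because the UV 2000 is a single-system-image SMP/NUMA machine, memory locality matters for performance. A quick way to inspect the NUMA topology from a shell (a sketch assuming standard Linux tools; `numactl` may not be installed everywhere, so it is treated as optional here):

```shell
# Inspect NUMA layout before placing a large-memory job
lscpu | grep -i '^NUMA'            # NUMA node count and per-node CPU lists
numactl --hardware 2>/dev/null ||  \
  echo "numactl not available; see lscpu output above"
```

Binding a process to a NUMA node (e.g. `numactl --cpunodebind=0 --membind=0 ./app`) is a common way to avoid cross-node memory traffic on such systems.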