Commit 9abfa9b2
authored 3 years ago by Filip Staněk

Update compute-nodes.md

Parent: a0049c86
Merge request: !330 (Update compute-nodes.md)
Pipeline: #20673 passed 3 years ago (stages: test, build, deploy, after_test)
Showing 1 changed file: docs.it4i/karolina/compute-nodes.md (8 additions, 11 deletions)
@@ -11,7 +11,6 @@ Standard compute nodes without accelerators (such as GPUs or FPGAs) are based on
 * 2x AMD EPYC™ 7H12, 64-core, 2.6 GHz processors per node
 * 256 GB DDR4 3200MT/s of physical memory per node
 * 5,324.8 GFLOP/s per compute node
-* 1x 100 Gb/s Ethernet
 * 1x 100 Gb/s IB port
 * Cn[001-720]
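Sanity check on the unchanged 5,324.8 GFLOP/s context line: the figure matches the node's theoretical peak if it counts 16 double-precision FLOP per core per cycle (two 256-bit FMA pipes per Zen 2 core), which is an assumption here, not something the file states:

$$128\ \text{cores} \times 2.6\ \text{GHz} \times 16\ \text{FLOP/cycle} = 5{,}324.8\ \text{GFLOP/s}$$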
@@ -25,9 +24,8 @@ Accelerated compute nodes deliver most of the compute power usable for HPC as we
 * 9,216 cores in total
 * 2x AMD EPYC™ 7763, 64-core, 2.45 GHz processors per node
 * 1024 GB DDR4 3200MT/s of physical memory per node
-* 8x GPU accelerator NVIDIA A100 per node
+* 8x GPU accelerator NVIDIA A100 per node, 320GB HBM2 memory per node
 * 5,017.6 GFLOP/s per compute node
-* 4x 200 Gb/s Ethernet
 * 4x 200 Gb/s IB port
 * Acn[01-72]
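The CPU-side peak of the accelerated nodes checks out the same way (the eight A100s per node come on top of this figure), again assuming 16 DP FLOP per core per cycle:

$$128\ \text{cores} \times 2.45\ \text{GHz} \times 16\ \text{FLOP/cycle} = 5{,}017.6\ \text{GFLOP/s}$$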
@@ -41,10 +39,9 @@ Data analytics compute node is oriented on supporting huge memory jobs by implem
 * 768 cores in total
 * 32x Intel® Xeon® Platinum, 24-core, 2.9 GHz, 205W
 * 24 TB DDR4 2993MT/s of physical memory per node
-* 2x 200 Gb/s Ethernet
 * 2x 200 Gb/s IB port
 * 71.2704 TFLOP/s
-* DAcn1
+* Sdf1
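The data analytics node's 71.2704 TFLOP/s is likewise consistent, assuming 32 DP FLOP per core per cycle (two 512-bit AVX-512 FMA units per Xeon Platinum core):

$$768\ \text{cores} \times 2.9\ \text{GHz} \times 32\ \text{FLOP/cycle} = 71{,}270.4\ \text{GFLOP/s} = 71.2704\ \text{TFLOP/s}$$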


@@ -58,17 +55,17 @@ Cloud compute nodes support both the research and operation of the Infrastructur
 * 256 GB DDR4 3200MT/s of physical memory per node
 * HPE ProLiant XL225n Gen10 Plus servers
 * 5,324.8 GFLOP/s per compute node
-* 1x 100 Gb/s Ethernet
+* 2x 10 Gb/s Ethernet
 * 1x 100 Gb/s IB port
 * CLn[01-36]
 ## Compute Node Summary
 
 | Node type                    | Count | Range       | Memory  | Cores          | Queues (?)                |
 | ---------------------------- | ----- | ----------- | ------- | -------------- | ------------------------- |
 | Nodes without an accelerator | 720   | Cn[001-720] | 256 GB  | 128 @ 2.6 GHz  | qexp, qprod, qlong, qfree |
 | Nodes with a GPU accelerator | 72    | Acn[01-72]  | 1024 GB | 128 @ 2.45 GHz | qnvidia                   |
-| Data analytics nodes         | 1     | DAcn1       | 24 TB   | 768 @ 2.9 GHz  | qfat                      |
+| Data analytics nodes         | 1     | Sdf1        | 24 TB   | 768 @ 2.9 GHz  | qfat                      |
 | Cloud partition              | 36    | CLn[01-36]  | 256 GB  | 128 @ 2.6 GHz  |                           |
 
 ## Processor Architecture
@@ -86,7 +83,7 @@ EPYC™ 7H12 is a 64-bit 64-core x86 server microprocessor designed and introduc
 * **L1D Cache**: 2 MiB, 64x32 KiB, 8-way set associative
 * **L2 Cache**: 32 MiB, 64x512 KiB, 8-way set associative, write-back
 * **L3 Cache**: 256 MiB, 16x16 MiB
-* **Instructions**(?): x86-64, MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA3, F16C, BMI, BMI2, VT-x, VT-d, TXT, TSX, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVE, SGX, MPX, AVX-512 (New instructions for [Vector Neural Network Instructions][c])
+* **Instructions**: x86-16, x86-32, x86-64, MMX, EMMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, SSE4a, AVX, AVX2, AES, CLMUL, RdRanD, FMA3, F16C, ABM, BMI1, BMI2, AMD-Vi, AMD-V, SHA, ADX, Real, Protected, SMM, FPU, NX, SMT, SME, TSME, SEV, SenseMI, Boost2
 * **Frequency**: 2.6 GHz
 * **Max turbo**: 3.3 GHz
 * **Process**: 7 nm, 14 nm
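This hunk swaps an Intel-flavored feature list (MOVBE, VT-x, TSX, SGX, AVX-512) for the AMD one. The listed extensions can be spot-checked on a node; a minimal sketch using GCC's `__builtin_cpu_supports` (the file name `isa_check.c` is just an illustration, not part of the docs):

```c
/* isa_check.c - spot-check a few ISA extensions from the list above.
 * Build on a compute node: gcc -o isa_check isa_check.c
 */
#include <stdio.h>

int main(void) {
    __builtin_cpu_init();  /* populate GCC's CPU feature table */

    /* __builtin_cpu_supports() requires a literal feature-name string. */
    printf("avx2    %s\n", __builtin_cpu_supports("avx2")    ? "yes" : "no");
    printf("fma     %s\n", __builtin_cpu_supports("fma")     ? "yes" : "no");
    printf("bmi2    %s\n", __builtin_cpu_supports("bmi2")    ? "yes" : "no");
    printf("sse4a   %s\n", __builtin_cpu_supports("sse4a")   ? "yes" : "no");
    printf("aes     %s\n", __builtin_cpu_supports("aes")     ? "yes" : "no");
    printf("avx512f %s\n", __builtin_cpu_supports("avx512f") ? "yes" : "no");
    return 0;
}
```

On an EPYC 7H12 node, `avx512f` should report "no", which is exactly the error in the old **Instructions** line that this commit removes.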