Commit d18f4706 authored by Branislav Jansik

Update introduction.md

parent df9a0683
With NVLink2, it enables 16x Nvidia V100-SXM3 GPUs in a single system, for a total …
Featuring a pair of Xeon 8168 CPUs, 1.5 TB of memory, and 30 TB of NVMe storage,
we get a system that consumes 10 kW, weighs 163.29 kg, but offers double precision performance in excess of 130 TF.
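As a rough editorial sanity check of that last figure (assuming roughly 8.2 TFLOPS of peak FP64 per V100-SXM3 at boost clock, which is our assumption rather than a number taken from the datasheet):

```latex
16 \times 8.2\,\text{TFLOPS} \approx 131\,\text{TFLOPS} > 130\,\text{TF}
```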
<div align="center">
<iframe src="https://www.youtube.com/embed/OTOGw0BRqK0" width="50%" height="195" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
Further, the [DGX-2][b] offers a total of ~2 PFLOPS of half-precision performance in a single system when using the tensor cores.
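A similar back-of-the-envelope check for the half-precision figure: each V100 is rated at about 125 TFLOPS of FP16 tensor-core throughput, so sixteen of them add up to

```latex
16 \times 125\,\text{TFLOPS} = 2000\,\text{TFLOPS} = 2\,\text{PFLOPS}
```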
![](../img/dgx1.png)
The DGX-2 is able to complete the training process
for FAIRSEQ – a neural network model for language translation – 10x faster than a DGX-1 system,
bringing it down to less than two days total rather than 15 days.
![](../img/dgx3.png)
The DGX-2 is designed to be a powerful server in its own right.
On the storage side, the DGX-2 comes with 30 TB of NVMe-based solid-state storage.
For clustering or further inter-system communication, it also offers up to eight InfiniBand or 100 GigE connections.
![](../img/dgx2-nvlink.png)
The new NVSwitches mean that the PCIe lanes of the CPUs can be redirected elsewhere, most notably towards storage and networking connectivity.
The topology of the DGX-2 means that all 16 GPUs are able to pool their memory into a unified memory space,
though with the usual tradeoffs involved when going off-chip.
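The snippet below is a minimal sketch (an editorial illustration, not code from NVIDIA or from this documentation) of what that unified view looks like from CUDA: it enables peer access between every pair of GPUs, after which a pointer allocated on one GPU can be dereferenced by kernels running on any other, with the traffic crossing NVLink/NVSwitch rather than staying in local HBM2.

```cpp
// Minimal sketch: enable all-to-all GPU peer access on a multi-GPU node.
// Device indices and error handling are kept deliberately simple.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);  // 16 on a DGX-2

    for (int i = 0; i < n; ++i) {
        cudaSetDevice(i);
        for (int j = 0; j < n; ++j) {
            if (i == j) continue;
            int can = 0;
            cudaDeviceCanAccessPeer(&can, i, j);
            if (can) cudaDeviceEnablePeerAccess(j, 0);  // map GPU j's memory into GPU i's address space
        }
    }

    // From here on, a kernel launched on GPU 0 may dereference a pointer
    // returned by cudaMalloc on GPU 15; the access travels over
    // NVLink/NVSwitch instead of local HBM2 -- the off-chip tradeoff
    // mentioned above.
    std::printf("peer access enabled across %d GPUs\n", n);
    return 0;
}
```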
[a]: https://www.nvidia.com/content/dam/en-zz/es_em/Solutions/Data-Center/dgx-2/nvidia-dgx-2-datasheet.pdf
[b]: https://www.youtube.com/embed/OTOGw0BRqK0