diff --git a/docs.it4i/dgx2/introduction.md b/docs.it4i/dgx2/introduction.md
index 13c40b605de9696ac94b8f286ead575113c84625..482b98eff794ba4ef3ab36fb23a5ca6ccfdca472 100644
--- a/docs.it4i/dgx2/introduction.md
+++ b/docs.it4i/dgx2/introduction.md
@@ -2,7 +2,7 @@
 The [DGX-2][a] introduces NVIDIA’s new NVSwitch, enabling 300 GB/s chip-to-chip communication at 12 times the speed of PCIe.
-With NVLink2, it enables sixteen Nvidia V100-SXM3 GPUs in a single system, for a total bandwidth going beyond 14 TB/s. 
+With NVLink2, it enables sixteen Nvidia V100-SXM3 GPUs in a single system, for a total bandwidth going beyond 14 TB/s.
 Featuring pair of Xeon 8168 CPUs, 1.5 TB of memory, and 30 TB of NVMe storage,
 we get a system that consumes 10 kW, weighs 163.29 kg, but offers perfomance in excess of 130TF.
@@ -30,17 +30,17 @@ NVIDIA likes to tout that this means it offers a total of ~2 PFLOPs of compute p
 AlexNET, the network that 'started' the latest machine learning revolution, now takes 18 minutes
 
-The topology of the DGX-2 means that all 16 GPUs are able to pool their memory into a unified memory space, 
+The topology of the DGX-2 means that all 16 GPUs are able to pool their memory into a unified memory space,
 though with the usual tradeoffs involved if going off-chip.
 
-The DGX-2 is able to complete the training process 
+The DGX-2 is able to complete the training process
 for FAIRSEQ – a neural network model for language translation – 10x faster than a DGX-1 system,
 bringing it down to less than two days total rather than 15.
 
-The DGX-2 is designed to be a powerful server in its own right. 
-On the storage side the DGX-2 comes with 30TB of NVMe-based solid state storage. 
+The DGX-2 is designed to be a powerful server in its own right.
+On the storage side the DGX-2 comes with 30TB of NVMe-based solid state storage.
 For clustering or further inter-system communications,
 it also offers InfiniBand and 100GigE connectivity, up to eight of them.
 
 { width=50% }
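
As a side note on the unified-memory claim in the hunk above: the pooled view across the 16 GPUs is what CUDA exposes through ordinary peer-to-peer access. Below is a minimal sketch, assuming only the stock CUDA runtime API; the device indices, buffer size, and program structure are illustrative and not part of the page being changed.

```cuda
// Hedged sketch: treating multiple GPUs' memory as one pooled address space
// via CUDA peer-to-peer access. Device indices and sizes are illustrative only.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int ndev = 0;
    cudaGetDeviceCount(&ndev);

    // Enable direct GPU<->GPU access for every pair that supports it
    // (on a DGX-2 every pair is reachable through NVSwitch).
    for (int i = 0; i < ndev; ++i) {
        cudaSetDevice(i);
        for (int j = 0; j < ndev; ++j) {
            if (i == j) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, i, j);
            if (canAccess) cudaDeviceEnablePeerAccess(j, 0);
        }
    }

    if (ndev < 2) return 0;  // nothing to demonstrate on a single-GPU box

    // A buffer allocated on GPU 1 ...
    cudaSetDevice(1);
    float *remote = nullptr;
    cudaMalloc(&remote, 1 << 20);

    // ... can be filled directly from GPU 0 without staging through host memory.
    cudaSetDevice(0);
    float *local = nullptr;
    cudaMalloc(&local, 1 << 20);
    cudaMemcpyPeer(remote, 1, local, 0, 1 << 20);
    cudaDeviceSynchronize();

    printf("peer-to-peer copy issued across %d visible GPUs\n", ndev);
    cudaFree(local);
    cudaSetDevice(1);
    cudaFree(remote);
    return 0;
}
```

With peer access enabled, the copy (or a kernel dereferencing a peer pointer) can stay on the NVLink/NVSwitch fabric rather than bouncing through host memory; the "usual tradeoffs involved if going off-chip" mentioned in the text show up as lower bandwidth than a GPU's local HBM.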