diff --git a/docs.it4i/dgx2/introduction.md b/docs.it4i/dgx2/introduction.md
index 2c9a1074c917ae675794ea9748dca29aca26d620..cd6731d736f8184dbde3f28af34b51343e23df0b 100644
--- a/docs.it4i/dgx2/introduction.md
+++ b/docs.it4i/dgx2/introduction.md
@@ -20,11 +20,11 @@ With NVLink2, it enables 16x Nvidia V100-SXM3 GPUs in a single system, for a tot
 Featuring a pair of Xeon 8168 CPUs, 1.5 TB of memory, and 30 TB of NVMe storage,
 we get a system that consumes 10 kW and weighs 163.29 kg, but offers double-precision performance in excess of 130 TF.
 
-Further, the DGX-2 offers  a total of ~2 PFLOPs of half precision performance in a single system, when using the tensor cores.
+The DGX-2 is designed to be a powerful server in its own right.
+On the storage side, the DGX-2 comes with 30 TB of NVMe-based solid-state storage.
+For clustering and other inter-system communication, it also offers InfiniBand and 100GigE connectivity, with up to eight links.
 
-<div align="center">
-  <iframe src="https://www.youtube.com/embed/OTOGw0BRqK0" width="50%" height="195" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
-</div>
+Further, the [DGX-2][b] offers a total of ~2 PFLOPs of half-precision performance in a single system when using the tensor cores.
 
 ![](../img/dgx1.png)
 
@@ -34,16 +34,15 @@ The DGX-2 is able to complete the training process
 for FAIRSEQ – a neural network model for language translation – 10x faster than a DGX-1 system,
 bringing it down to less than two days total rather than 15 days.
 
-![](../img/dgx3.png)
-
-The DGX-2 is designed to be a powerful server in its own right.
-On the storage side the DGX-2 comes with 30TB of NVMe-based solid state storage.
-For clustering or further inter-system communications, it also offers InfiniBand and 100GigE connectivity, up to eight of them.
 
-![](../img/dgx2-nvlink.png)
 
 The new NVSwitches mean that the PCIe lanes of the CPUs can be redirected elsewhere, most notably towards storage and networking connectivity.
 The topology of the DGX-2 means that all 16 GPUs are able to pool their memory into a unified memory space,
 though with the usual tradeoffs involved if going off-chip.
 
+![](../img/dgx2-nvlink.png)
+
 [a]: https://www.nvidia.com/content/dam/en-zz/es_em/Solutions/Data-Center/dgx-2/nvidia-dgx-2-datasheet.pdf
+[b]: https://www.youtube.com/embed/OTOGw0BRqK0