diff --git a/docs.it4i/anselm-cluster-documentation/compute-nodes.md b/docs.it4i/anselm-cluster-documentation/compute-nodes.md
index ce32d1d6bba053066155adfead818b7acc4dcdae..4ab91620251da545090def7ffbf1be0c894e7c4f 100644
--- a/docs.it4i/anselm-cluster-documentation/compute-nodes.md
+++ b/docs.it4i/anselm-cluster-documentation/compute-nodes.md
@@ -54,7 +54,7 @@ Anselm is cluster of x86-64 Intel based nodes built on Bull Extreme Computing bu
 
 ### Compute Nodes Summary
 
-  |Node type|Count|Range|Memory|Cores|[Access](resource-allocation-and-job-execution/resources-allocation-policy.html)|
+  |Node type|Count|Range|Memory|Cores|[Access](resource-allocation-and-job-execution/resources-allocation-policy/)|
   |---|---|---|---|---|---|
   |Nodes without accelerator|180|cn[1-180]|64GB|16 @ 2.4GHz|qexp, qprod, qlong, qfree|
   |Nodes with GPU accelerator|23|cn[181-203]|96GB|16 @ 2.3GHz|qgpu, qprod|
diff --git a/docs.it4i/anselm-cluster-documentation/hardware-overview.md b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
index 32110d310a1cb39b0f1cf4eb60b6cd95caee843e..8e461a0194f992f283431180d3a273d8405f0200 100644
--- a/docs.it4i/anselm-cluster-documentation/hardware-overview.md
+++ b/docs.it4i/anselm-cluster-documentation/hardware-overview.md
@@ -480,14 +480,14 @@ There are four types of compute nodes:
 
 [More about Compute nodes](compute-nodes.html).
 
-GPU and accelerated nodes are available upon request, see the [Resources Allocation Policy](resource-allocation-and-job-execution/resources-allocation-policy.html).
+GPU and accelerated nodes are available upon request; see the [Resources Allocation Policy](resource-allocation-and-job-execution/resources-allocation-policy/).
 
-All these nodes are interconnected by fast InfiniBand network and Ethernet network.  [More about the Network](network.html).
+All these nodes are interconnected by a fast InfiniBand network and an Ethernet network. [More about the Network](network/).
 Every chassis provides an InfiniBand switch, marked **isw**, connecting all nodes in the chassis as well as connecting the chassis to the upper-level switches.
 
-All nodes share 360TB /home disk storage to store user files. The 146TB shared /scratch storage is available for the scratch data. These file systems are provided by Lustre parallel file system. There is also local disk storage available on all compute nodes /lscratch.  [More about Storage](storage.html).
+All nodes share 360TB of /home disk storage for user files. The 146TB shared /scratch storage is available for scratch data. Both file systems are provided by the Lustre parallel file system. Local disk storage /lscratch is also available on all compute nodes. [More about Storage](storage/).
 
-The user access to the Anselm cluster is provided by two login nodes login1, login2, and data mover node dm1. [More about accessing cluster.](accessing-the-cluster.html)
+User access to the Anselm cluster is provided by two login nodes, login1 and login2, and the data mover node dm1. [More about accessing the cluster.](accessing-the-cluster/)
 
 The parameters are summarized in the following tables:
 
@@ -496,7 +496,7 @@ The parameters are summarized in the following tables:
 |Primary purpose|High Performance Computing|
 |Architecture of compute nodes|x86-64|
 |Operating system|Linux|
-|[**Compute nodes**](compute-nodes.html)||
+|[**Compute nodes**](compute-nodes/)||
 |Totally|209|
 |Processor cores|16 (2x8 cores)|
 |RAM|min. 64 GB, min. 4 GB per core|
@@ -518,5 +518,5 @@ The parameters are summarized in the following tables:
   |MIC accelerated|2x Intel Sandy Bridge E5-2470, 2.3GHz|96GB|Intel Xeon Phi P5110|
   |Fat compute node|2x Intel Sandy Bridge E5-2665, 2.4GHz|512GB|-|
 
-For more details please refer to the [Compute nodes](compute-nodes.html), [Storage](storage.html), and [Network](network.html).
+For more details, please refer to [Compute nodes](compute-nodes/), [Storage](storage/), and [Network](network/).