diff --git a/docs.it4i/barbora-ng/hardware-overview.md b/docs.it4i/barbora-ng/hardware-overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..7fa55ff1c46060e34dc48d5eeccb0be906c62cb8
--- /dev/null
+++ b/docs.it4i/barbora-ng/hardware-overview.md
@@ -0,0 +1,45 @@
+# Hardware Overview
+
+!!! important "Work in progress"
+    The Barbora NG documentation is a work in progress:
+    it is still being developed to reflect changes in the technical specifications and may be updated frequently.
+
+    The launch of Barbora NG is planned for October/November.
+    In the meantime, the first computational resources have already been allocated in the latest Open Access Grant Competition.
+
+Barbora NG consists of 141 non-accelerated compute nodes named **cn[?-???]**.
+Each node is a powerful x86-64 computer equipped with 192 cores
+(two Intel Xeon 6952P processors with 96 cores each) and 768 GB of RAM.
+User access to the Barbora NG cluster is provided by two login nodes, **login[1-2]**.
+The nodes are interlinked through high-speed InfiniBand NDR and Ethernet networks.
+
+The parameters are summarized in the following table:
+
+| **In general**                       |                       |
+| ------------------------------------ | --------------------- |
+| Architecture of compute nodes        | x86-64                |
+| Operating system                     | Linux                 |
+| [**Compute nodes**][1]               |                       |
+| Total                                | 141                   |
+| Processor Type                       | [Intel Xeon 6952P][b] |
+| Architecture                         | Granite Rapids        |
+| Processor cores                      | 96                    |
+| Processors per node                  | 2                     |
+| RAM                                  | 768 GB                |
+| Local disk drive                     | no                    |
+| Compute network                      | InfiniBand NDR        |
+| Non-accelerated                      | 141, cn[?-???]        |
+| **In total**                         |                       |
+| Theoretical peak performance (Rpeak) | ??? TFLOP/s           |
+| Cores                                | 27072                 |
+| RAM                                  | 108.288 TB            |
+
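+The totals follow from the per-node figures: 141 nodes × 2 processors × 96 cores = 27072 cores, and 141 nodes × 768 GB = 108288 GB, i.e. 108.288 TB of RAM.
+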
+[1]: compute-nodes.md
+[2]: ../general/resources-allocation-policy.md
+[3]: network.md
+[4]: storage.md
+[5]: ../general/shell-and-data-access.md
+[6]: visualization.md
+
+[a]: https://support.it4i.cz/rt
+[b]: https://www.intel.com/content/www/us/en/products/sku/241643/intel-xeon-6952p-processor-480m-cache-2-10-ghz/specifications.html
\ No newline at end of file
diff --git a/docs.it4i/barbora-ng/introduction.md b/docs.it4i/barbora-ng/introduction.md
new file mode 100644
index 0000000000000000000000000000000000000000..873b70ddc8ee2fd37bcd211b624dca7edec1532b
--- /dev/null
+++ b/docs.it4i/barbora-ng/introduction.md
@@ -0,0 +1,36 @@
+# Introduction
+
+!!! important "Work in progress"
+    The Barbora NG documentation is a work in progress:
+    it is still being developed to reflect changes in the technical specifications and may be updated frequently.
+
+    The launch of Barbora NG is planned for October/November.
+    In the meantime, the first computational resources have already been allocated in the latest Open Access Grant Competition.
+
+Welcome to the Barbora Next Gen (NG) supercomputer cluster.
+Barbora NG is our latest supercomputer, consisting of 141 compute nodes
+totaling 27072 compute cores and 108288 GB of RAM, with a theoretical peak performance of over ??? TFLOP/s.
+
+Nodes are interconnected through a fully non-blocking fat-tree InfiniBand NDR network
+and are equipped with Intel Granite Rapids processors.
+Read more in [Hardware Overview][1].
+
+The cluster runs an operating system compatible with the Red Hat [Linux family][a]. We have installed a wide range of software packages targeted at different scientific domains.
+These packages are accessible via the [modules environment][2].
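+
+For illustration only, a minimal Lmod session sketch; the module name below is a placeholder, as the actual software stack available on Barbora NG may differ:
+
+```console
+$ ml av        # list the modules available on the cluster
+$ ml GCC       # load a module (placeholder name)
+$ ml           # show the currently loaded modules
+```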
+
+Shared file systems for user data and for job data are available to users.
+
+The [Slurm][b] workload manager provides [computing resources allocation and job execution][3].
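+
+As an illustration only, a minimal batch script sketch; the project ID and queue name below are placeholders, and the actual Barbora NG queue names may differ:
+
+```bash
+#!/usr/bin/env bash
+#SBATCH --job-name=hello
+#SBATCH --account=OPEN-00-00   # placeholder project ID
+#SBATCH --partition=qcpu       # placeholder queue name
+#SBATCH --nodes=1
+#SBATCH --time=00:10:00
+
+# run one task per allocated node and print its hostname
+srun hostname
+```
+
+Such a script would be submitted with `sbatch job.sh` and its state inspected with `squeue --me`.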
+
+Read more on how to [apply for resources][4], [obtain login credentials][5] and [access the cluster][6].
+
+[1]: hardware-overview.md
+[2]: ../environment-and-modules.md
+[3]: ../general/resources-allocation-policy.md
+[4]: ../general/applying-for-resources.md
+[5]: ../general/obtaining-login-credentials/obtaining-login-credentials.md
+[6]: ../general/shell-and-data-access.md
+
+[a]: http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg
+[b]: https://slurm.schedmd.com/
diff --git a/mkdocs.yml b/mkdocs.yml
index 3b4e10055772ab407768f31950854f4b3af0825c..625307d526b2683970480d1018780ef5da9bd61b 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -144,6 +144,9 @@ nav:
       - Storage: barbora/storage.md
       - Network: barbora/network.md
       - Visualization Servers: barbora/visualization.md
+    - Barbora NG:
+      - Introduction: barbora-ng/introduction.md
+      - Hardware Overview: barbora-ng/hardware-overview.md
     - NVIDIA DGX-2:
       - Introduction: dgx2/introduction.md
       - Accessing DGX-2: dgx2/accessing.md