From 52cff47e65c02f01d0f70392f4b070910318d880 Mon Sep 17 00:00:00 2001
From: Lukas Krupcik <lukas.krupcik@vsb.cz>
Date: Mon, 3 Mar 2025 10:41:30 +0100
Subject: [PATCH] Replace mkdocs-style admonitions with MDX <Callout> components

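Convert the remaining mkdocs-style admonitions (`!!! note`, `!!! warning`,
`!!! tip`, `!!! info`, `!!! important`) into MDX <Callout> components,
mapping warning/important admonitions to `type="warn"`. A minimal
before/after sketch of the pattern applied throughout (the example text
mirrors the SCRATCH warning from the storage pages):

```md
!!! warning
    Files on the SCRATCH filesystem that are **not accessed for more than 90 days** will be automatically **deleted**.
```

becomes

```mdx
<Callout type="warn">
    Files on the SCRATCH filesystem that are **not accessed for more than 90 days** will be automatically **deleted**.
</Callout>
```
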
---
 content/docs/anselm/network.mdx               |  3 +-
 content/docs/anselm/storage.mdx               | 48 ++++++++++++-------
 content/docs/barbora/storage.mdx              | 21 +++++---
 content/docs/cloud/it4i-cloud.mdx             |  3 +-
 content/docs/cs/guides/amd.mdx                |  3 +-
 content/docs/cs/guides/grace.mdx              | 18 ++++---
 content/docs/cs/guides/hm_management.mdx      |  5 +-
 content/docs/cs/guides/horizon.mdx            |  3 +-
 content/docs/cs/guides/power10.mdx            |  3 +-
 content/docs/cs/guides/xilinx.mdx             |  3 +-
 content/docs/cs/job-scheduling.mdx            |  6 ++-
 content/docs/dgx2/accessing.mdx               |  6 ++-
 content/docs/dgx2/job_execution.mdx           |  3 +-
 content/docs/dice.mdx                         |  9 ++--
 content/docs/einfracz-migration.mdx           |  3 +-
 content/docs/environment-and-modules.mdx      |  9 ++--
 .../docs/general/access/einfracz-account.mdx  |  3 +-
 .../docs/general/access/project-access.mdx    |  3 +-
 .../graphical-user-interface/ood.mdx          |  3 +-
 .../graphical-user-interface/vnc.mdx          | 21 +++++---
 .../x-window-system.mdx                       | 15 ++++--
 .../graphical-user-interface/xorg.mdx         |  6 ++-
 .../shell-access-and-data-transfer/putty.mdx  |  3 +-
 .../ssh-key-management.mdx                    |  3 +-
 .../ssh-keys.mdx                              |  3 +-
 .../accessing-the-clusters/vpn-access.mdx     |  3 +-
 .../docs/general/applying-for-resources.mdx   |  6 ++-
 content/docs/general/barbora-partitions.mdx   |  3 +-
 content/docs/general/capacity-computing.mdx   |  3 +-
 content/docs/general/hyperqueue.mdx           |  6 ++-
 content/docs/general/job-arrays.mdx           |  3 +-
 content/docs/general/job-priority.mdx         |  3 +-
 .../general/job-submission-and-execution.mdx  | 10 ++--
 content/docs/general/karolina-mpi.mdx         |  3 +-
 content/docs/general/karolina-partitions.mdx  |  3 +-
 content/docs/general/karolina-slurm.mdx       |  6 ++-
 .../obtaining-login-credentials.mdx           | 12 +++--
 .../pbs-job-submission-and-execution.mdx      | 40 +++++++++++-----
 content/docs/general/resource-accounting.mdx  |  4 +-
 .../resource_allocation_and_job_execution.mdx |  3 +-
 .../general/resources-allocation-policy.mdx   |  6 ++-
 .../docs/general/shell-and-data-access.mdx    | 27 +++++++----
 .../slurm-job-submission-and-execution.mdx    | 10 ++--
 content/docs/general/tools/cicd.mdx           |  6 ++-
 content/docs/general/tools/portal-clients.mdx |  3 +-
 content/docs/index.mdx                        |  6 ++-
 content/docs/job-features.mdx                 | 33 ++++++++-----
 content/docs/karolina/introduction.mdx        |  3 --
 content/docs/karolina/storage.mdx             |  9 ++--
 content/docs/lumi/openfoam.mdx                |  4 +-
 content/docs/prace.mdx                        |  6 ++-
 .../software/numerical-libraries/Clp.mdx      |  3 +-
 content/docs/salomon/storage.mdx              | 25 ++++++----
 .../diagnostic-component-team.mdx             |  3 +-
 .../priorization-component-bierapp.mdx        |  3 +-
 .../cae/comsol/comsol-multiphysics.mdx        |  6 ++-
 content/docs/software/chemistry/gaussian.mdx  |  6 ++-
 content/docs/software/chemistry/molpro.mdx    |  3 +-
 content/docs/software/chemistry/orca.mdx      |  6 ++-
 content/docs/software/chemistry/vasp.mdx      |  3 +-
 content/docs/software/data-science/dask.mdx   |  6 ++-
 .../docs/software/debuggers/allinea-ddt.mdx   |  7 +--
 content/docs/software/debuggers/cube.mdx      |  3 +-
 .../debuggers/intel-vtune-profiler.mdx        |  6 ++-
 content/docs/software/debuggers/scalasca.mdx  |  3 +-
 .../docs/software/debuggers/total-view.mdx    |  7 +--
 .../intel/intel-suite/intel-compilers.mdx     |  9 ++--
 content/docs/software/isv_licenses.mdx        |  3 +-
 .../docs/software/karolina-compilation.mdx    |  6 ++-
 content/docs/software/lang/conda.mdx          |  3 +-
 content/docs/software/lang/csc.mdx            |  3 +-
 content/docs/software/lang/python.mdx         |  6 ++-
 .../docs/software/machine-learning/netket.mdx |  3 +-
 .../software/machine-learning/tensorflow.mdx  |  6 ++-
 content/docs/software/modules/lmod.mdx        | 24 ++++++----
 content/docs/software/mpi/mpi.mdx             | 10 ++--
 .../software/numerical-languages/matlab.mdx   |  3 +-
 .../numerical-languages/opencoarrays.mdx      |  6 ++-
 .../docs/software/numerical-languages/r.mdx   |  2 +-
 .../intel-numerical-libraries.mdx             |  3 +-
 content/docs/software/nvidia-cuda.mdx         |  3 +-
 content/docs/software/sdk/openacc-mpi.mdx     |  9 ++--
 content/docs/software/tools/apptainer.mdx     | 18 ++++---
 .../docs/software/tools/easybuild-images.mdx  |  6 ++-
 content/docs/software/tools/easybuild.mdx     |  3 +-
 content/docs/software/tools/singularity.mdx   |  3 +-
 content/docs/software/tools/spack.mdx         |  9 ++--
 .../docs/software/tools/virtualization.mdx    | 12 +++--
 content/docs/software/viz/NICEDCVsoftware.mdx | 15 ++++--
 content/docs/software/viz/gpi2.mdx            | 16 +++++--
 content/docs/software/viz/openfoam.mdx        |  9 ++--
 content/docs/software/viz/paraview.mdx        |  3 +-
 content/docs/software/viz/qtiplot.mdx         |  3 +-
 content/docs/software/viz/vesta.mdx           |  3 +-
 content/docs/software/viz/vgl.mdx             | 32 ++++++-------
 content/docs/storage/cesnet-s3.mdx            |  3 +-
 content/docs/storage/cesnet-storage.mdx       |  9 ++--
 content/docs/storage/project-storage.mdx      | 12 +++--
 98 files changed, 500 insertions(+), 272 deletions(-)

diff --git a/content/docs/anselm/network.mdx b/content/docs/anselm/network.mdx
index 7fa65d1f..b4055bc8 100644
--- a/content/docs/anselm/network.mdx
+++ b/content/docs/anselm/network.mdx
@@ -9,8 +9,9 @@ All of the compute and login nodes of Anselm are interconnected through a high-b
 
 The compute nodes may be accessed via the InfiniBand network using the ib0 network interface, in address range 10.2.1.1-209. The MPI may be used to establish native InfiniBand connection among the nodes.
 
-!!! note
+<Callout>
     The network provides **2170 MB/s** transfer rates via the TCP connection (single stream) and up to **3600 MB/s** via the native InfiniBand protocol.
+</Callout>
 
 The Fat tree topology ensures that peak transfer rates are achieved between any two nodes, independent of network traffic exchanged among other nodes concurrently.
 
diff --git a/content/docs/anselm/storage.mdx b/content/docs/anselm/storage.mdx
index caee0b11..c5781e57 100644
--- a/content/docs/anselm/storage.mdx
+++ b/content/docs/anselm/storage.mdx
@@ -25,8 +25,9 @@ There is a default stripe configuration for Anselm Lustre filesystems. However,
 1. stripe_count the number of OSTs to stripe across; default is 1 for Anselm Lustre filesystems one can specify -1 to use all OSTs in the filesystem.
 1. stripe_offset The index of the OST where the first stripe is to be placed; default is -1 which results in random selection; using a non-default value is NOT recommended.
 
-!!! note
+<Callout>
     Setting stripe size and stripe count correctly may significantly affect the I/O performance.
+</Callout>
 
 Use the lfs getstripe to get the stripe parameters. Use the lfs setstripe command to set the stripe parameters for optimal I/O performance. The correct stripe setting depends on your needs and file access patterns.
 
@@ -59,15 +60,17 @@ $ man lfs
 
 ### Hints on Lustre Stripping
 
-!!! note
+<Callout>
     Increase the stripe_count for parallel I/O to the same file.
+</Callout>
 
 When multiple processes are writing blocks of data to the same file in parallel, the I/O performance for large files will improve when the stripe_count is set to a larger value. The stripe count sets the number of OSTs to which the file will be written. By default, the stripe count is set to 1. While this default setting provides for efficient access of metadata (for example to support the ls -l command), large files should use stripe counts of greater than 1. This will increase the aggregate I/O bandwidth by using multiple OSTs in parallel instead of just one. A rule of thumb is to use a stripe count approximately equal to the number of gigabytes in the file.
 
 Another good practice is to make the stripe count be an integral factor of the number of processes performing the write in parallel, so that you achieve load balance among the OSTs. For example, set the stripe count to 16 instead of 15 when you have 64 processes performing the writes.
 
-!!! note
+<Callout>
     Using a large stripe size can improve performance when accessing very large files.
+</Callout>
 
 Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file.
 
@@ -101,8 +104,9 @@ The architecture of Lustre on Anselm is composed of two metadata servers (MDS) a
 
 The HOME filesystem is mounted in directory /home. Users home directories /home/username reside on this filesystem. Accessible capacity is 320TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250GB per user. If 250GB should prove as insufficient for particular user, contact [support][d], the quota may be lifted upon request.
 
-!!! note
+<Callout>
     The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active Projects.
+</Callout>
 
 The HOME filesystem should not be used to archive data of past Projects or other unrelated data.
 
@@ -113,8 +117,9 @@ The filesystem is backed up, such that it can be restored in case of catastrophi
 The HOME filesystem is realized as Lustre parallel filesystem and is available on all login and computational nodes.
 Default stripe size is 1MB, stripe count is 1. There are 22 OSTs dedicated for the HOME filesystem.
 
-!!! note
+<Callout>
     Setting stripe size and stripe count correctly for your needs may significantly affect the I/O performance.
+</Callout>
 
 | HOME filesystem      |        |
 | -------------------- | ------ |
@@ -131,18 +136,21 @@ Default stripe size is 1MB, stripe count is 1. There are 22 OSTs dedicated for t
 
 The SCRATCH filesystem is mounted in directory /scratch. Users may freely create subdirectories and files on the filesystem. Accessible capacity is 146TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 100TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and deny service to other users. If 100TB should prove as insufficient for particular user, contact [support][d], the quota may be lifted upon request.
 
-!!! note
+<Callout>
     The Scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory.
 
     Users are advised to save the necessary data from the SCRATCH filesystem to HOME filesystem after the calculations and clean up the scratch files.
+</Callout>
 
-!!! warning
+<Callout type="warn">
     Files on the SCRATCH filesystem that are **not accessed for more than 90 days** will be automatically **deleted**.
+</Callout>
 
 The SCRATCH filesystem is realized as Lustre parallel filesystem and is available from all login and computational nodes. Default stripe size is 1MB, stripe count is 1. There are 10 OSTs dedicated for the SCRATCH filesystem.
 
-!!! note
+<Callout>
     Setting stripe size and stripe count correctly for your needs may significantly affect the I/O performance.
+</Callout>
 
 | SCRATCH filesystem   |          |
 | -------------------- | -------- |
@@ -256,8 +264,9 @@ Default ACL mechanism can be used to replace setuid/setgid permissions on direct
 
 ### Local Scratch
 
-!!! note
+<Callout>
     Every computational node is equipped with 330GB local scratch disk.
+</Callout>
 
 Use local scratch in case you need to access large amount of small files during your calculation.
 
@@ -265,8 +274,9 @@ The local scratch disk is mounted as /lscratch and is accessible to user at /lsc
 
 The local scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs that access large number of small files within the calculation must use the local scratch filesystem as their working directory. This is required for performance reasons, as frequent access to number of small files may overload the metadata servers (MDS) of the Lustre filesystem.
 
-!!! note
+<Callout>
     The local scratch directory /lscratch/$PBS_JOBID will be deleted immediately after the calculation end. Users should take care to save the output data from within the jobscript.
+</Callout>
 
 | local SCRATCH filesystem |                      |
 | ------------------------ | -------------------- |
@@ -280,15 +290,17 @@ The local scratch filesystem is intended for temporary scratch data generated du
 
 Every computational node is equipped with filesystem realized in memory, so called RAM disk.
 
-!!! note
+<Callout>
     Use RAM disk in case you need a fast access to your data of limited size during your calculation. Be very careful, use of RAM disk filesystem is at the expense of operational memory.
+</Callout>
 
 The local RAM disk is mounted as /ramdisk and is accessible to user at /ramdisk/$PBS_JOBID directory.
 
 The local RAM disk filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. Size of RAM disk filesystem is limited. Be very careful, use of RAM disk filesystem is at the expense of operational memory.  It is not recommended to allocate large amount of memory and use large amount of data in RAM disk filesystem at the same time.
 
-!!! note
+<Callout>
     The local RAM disk directory /ramdisk/$PBS_JOBID will be deleted immediately after the calculation end. Users should take care to save the output data from within the jobscript.
+</Callout>
 
 | RAM disk    |                                                                                                          |
 | ----------- | -------------------------------------------------------------------------------------------------------- |
@@ -316,8 +328,9 @@ Each node is equipped with local /tmp directory of few GB capacity. The /tmp dir
 
 Do not use shared filesystems at IT4Innovations as a backup for large amount of data or long-term archiving purposes.
 
-!!! note
+<Callout>
     The IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use [CESNET Storage service][f].
+</Callout>
 
 The CESNET Storage service can be used for research purposes, mainly by academic staff and students of research institutions in the Czech Republic.
 
@@ -333,15 +346,17 @@ The procedure to obtain the CESNET access is quick and trouble-free.
 
 ### Understanding CESNET Storage
 
-!!! note
+<Callout>
     It is very important to understand the CESNET storage before uploading data. [Read][i] first.
+</Callout>
 
 Once registered for CESNET Storage, you may [access the storage][j] in number of ways. We recommend the SSHFS and RSYNC methods.
 
 ### SSHFS Access
 
-!!! note
+<Callout>
     SSHFS: The storage will be mounted like a local hard drive
+</Callout>
 
 The SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard drive. Files can be than copied in and out in a usual fashion.
 
@@ -385,8 +400,9 @@ $ fusermount -u cesnet
 
 ### RSYNC Access
 
-!!! info
+<Callout>
     RSYNC provides delta transfer for best performance and can resume interrupted transfers.
+</Callout>
 
 RSYNC is a fast and extraordinarily versatile file copying tool. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination.  RSYNC is widely used for backups and mirroring and as an improved copy command for everyday use.
 
diff --git a/content/docs/barbora/storage.mdx b/content/docs/barbora/storage.mdx
index cb33d49e..8fc87ad4 100644
--- a/content/docs/barbora/storage.mdx
+++ b/content/docs/barbora/storage.mdx
@@ -33,8 +33,9 @@ There is default stripe configuration for Barbora Lustre filesystems. However, u
 1. `stripe_count` the number of OSTs to stripe across; default is 1 for Barbora Lustre filesystems one can specify -1 to use all OSTs in the filesystem.
 1. `stripe_offset` the index of the OST where the first stripe is to be placed; default is -1 which results in random selection; using a non-default value is NOT recommended.
 
-!!! note
+<Callout>
     Setting stripe size and stripe count correctly for your needs may significantly affect the I/O performance.
+</Callout>
 
 Use the `lfs getstripe` command for getting the stripe parameters. Use `lfs setstripe` for setting the stripe parameters to get optimal I/O performance. The correct stripe setting depends on your needs and file access patterns.
 
@@ -62,15 +63,17 @@ $ man lfs
 
 ### Hints on Lustre Stripping
 
-!!! note
+<Callout>
     Increase the `stripe_count` for parallel I/O to the same file.
+</Callout>
 
 When multiple processes are writing blocks of data to the same file in parallel, the I/O performance for large files will improve when the `stripe_count` is set to a larger value. The stripe count sets the number of OSTs to which the file will be written. By default, the stripe count is set to 1. While this default setting provides for efficient access of metadata (for example to support the `ls -l` command), large files should use stripe counts of greater than 1. This will increase the aggregate I/O bandwidth by using multiple OSTs in parallel instead of just one. A rule of thumb is to use a stripe count approximately equal to the number of gigabytes in the file.
 
 Another good practice is to make the stripe count be an integral factor of the number of processes performing the write in parallel, so that you achieve load balance among the OSTs. For example, set the stripe count to 16 instead of 15 when you have 64 processes performing the writes.
 
-!!! note
+<Callout>
     Using a large stripe size can improve performance when accessing very large files
+</Callout>
 
 Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file.
 
@@ -99,8 +102,9 @@ The architecture of Lustre on Barbora is composed of two metadata servers (MDS)
 
 The HOME filesystem is mounted in directory /home. Users home directories /home/username reside on this filesystem. Accessible capacity is 28TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 25GB per user. Should 25GB prove insufficient, contact [support][d], the quota may be lifted upon request.
 
-!!! note
+<Callout>
     The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active Projects.
+</Callout>
 
 The HOME filesystem should not be used to archive data of past Projects or other unrelated data.
 
@@ -123,18 +127,21 @@ The SCRATCH is realized as Lustre parallel file system and is available from all
 
 The SCRATCH filesystem is mounted in the `/scratch/project/PROJECT_ID` directory created automatically with the `PROJECT_ID` project. Accessible capacity is 310TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 10TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and deny service to other users. Should 10TB prove insufficient, contact [support][d], the quota may be lifted upon request.
 
-!!! note
+<Callout>
     The Scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high-performance access to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory.
 
     Users are advised to save the necessary data from the SCRATCH filesystem to HOME filesystem after the calculations and clean up the scratch files.
+</Callout>
 
-!!! warning
+<Callout type="warn">
     Files on the SCRATCH filesystem that are **not accessed for more than 90 days** will be automatically **deleted**.
+</Callout>
 
 The SCRATCH filesystem is realized as Lustre parallel filesystem and is available from all login and computational nodes. Default stripe size is 1MB, stripe count is 1. There are 5 OSTs dedicated for the SCRATCH filesystem.
 
-!!! note
+<Callout>
     Setting stripe size and stripe count correctly for your needs may significantly affect the I/O performance.
+</Callout>
 
 | SCRATCH filesystem   |           |
 | -------------------- | --------- |
diff --git a/content/docs/cloud/it4i-cloud.mdx b/content/docs/cloud/it4i-cloud.mdx
index 0268d908..8d34f7eb 100644
--- a/content/docs/cloud/it4i-cloud.mdx
+++ b/content/docs/cloud/it4i-cloud.mdx
@@ -5,10 +5,11 @@ IT4I cloud consists of 14 nodes from the [Karolina][a] supercomputer.
 The cloud site is built on top of OpenStack,
 which is a free open standard cloud computing platform.
 
-!!! Note
+<Callout>
     The guide describes steps for personal projects.<br>
     Some steps may differ for large projects.<br>
     For large project, apply for resources to the [Allocation Committee][11].
+</Callout>
 
 ## Access
 
diff --git a/content/docs/cs/guides/amd.mdx b/content/docs/cs/guides/amd.mdx
index 61086d5e..11241b5a 100644
--- a/content/docs/cs/guides/amd.mdx
+++ b/content/docs/cs/guides/amd.mdx
@@ -26,9 +26,10 @@ salloc -N 1 -c 32 -A PROJECT-ID -p p03-amd --gres=gpu:2 --time=08:00:00
 salloc -N 1 -c 16 -A PROJECT-ID -p p03-amd --gres=gpu:1 --time=08:00:00
 ```
 
-!!! Note
+<Callout>
     p03-amd01 server has hyperthreading **enabled** therefore htop shows 128 cores.<br>
     p03-amd02 server has hyperthreading **disabled** therefore htop shows 64 cores.
+</Callout>
 
 ## Using AMD MI100 GPUs
 
diff --git a/content/docs/cs/guides/grace.mdx b/content/docs/cs/guides/grace.mdx
index e101fa1b..05c90528 100644
--- a/content/docs/cs/guides/grace.mdx
+++ b/content/docs/cs/guides/grace.mdx
@@ -23,8 +23,9 @@ The platform offers three toolchains:
 - [NVHPC](https://developer.nvidia.com/hpc-sdk) (as a module `ml NVHPC`)
 - [Clang for NVIDIA Grace](https://developer.nvidia.com/grace/clang) (installed in `/opt/nvidia/clang`)
 
-!!! note
+<Callout>
     The NVHPC toolchain showed strong results with minimal amount of tuning necessary in our initial evaluation.
+</Callout>
 
 ### GCC Toolchain
 
@@ -59,8 +60,9 @@ for(int i = 0; i < 1000000; ++i) {
 }
 ```
 
-!!! note
+<Callout>
     Our basic experiments show that fixed width vectorization (NEON) tends to perform better in the case of short (register-length) loops than SVE. In cases (like above), where specified `vectorize_width` is larger than availiable vector unit width, Clang will emit multiple NEON instructions (eg. 4 instructions will be emitted to process 8 64-bit operations in 128-bit units of Grace).
+</Callout>
 
 ### NVHPC Toolchain
 
@@ -70,8 +72,9 @@ The NVHPC toolchain handled aforementioned case without any additional tuning. S
 
 The basic libraries (BLAS and LAPACK) are included in NVHPC toolchain and can be used simply as `-lblas` and `-llapack` for BLAS and LAPACK respectively (`lp64` and `ilp64` versions are also included).
 
-!!! note
+<Callout>
     The Grace platform doesn't include CUDA-capable GPU, therefore `nvcc` will fail with an error. This means that `nvc`, `nvc++` and `nvfortran` should be used instead.
+</Callout>
 
 ### NVIDIA Performance Libraries
 
@@ -91,8 +94,9 @@ This package should be compatible with all availiable toolchains and includes CM
 
 We recommend to use the multi-threaded BLAS library from the NVPL package.
 
-!!! note
+<Callout>
     It is important to pin the processes using **OMP_PROC_BIND=spread**
+</Callout>
 
 Example:
 
@@ -274,8 +278,9 @@ nvfortran -O3 -march=native -fast -lblas main.f90 -o main.x
 OMP_NUM_THREADS=144 OMP_PROC_BIND=spread ./main
 ```
 
-!!! note
+<Callout>
     It may be advantageous to use NVPL libraries instead NVHPC ones. For example DGEMM BLAS 3 routine from NVPL is almost 30% faster than NVHPC one.
+</Callout>
 
 ### Using Clang (For Grace) Toolchain
 
@@ -286,8 +291,9 @@ ml NVHPC
 /opt/nvidia/clang/17.23.11/bin/clang++ -O3 -march=native -ffast-math -I$NVHPC/Linux_aarch64/$EBVERSIONNVHPC/compilers/include/lp64 -lnvpl_blas_lp64_gomp main.cpp -o main
 ```
 
-!!! note
+<Callout>
     NVHPC module is used just for the `cblas.h` include in this case. This can be avoided by changing the code to use `nvpl_blas.h` instead.
+</Callout>
 
 ## Additional Resources
 
diff --git a/content/docs/cs/guides/hm_management.mdx b/content/docs/cs/guides/hm_management.mdx
index c99de57b..074578df 100644
--- a/content/docs/cs/guides/hm_management.mdx
+++ b/content/docs/cs/guides/hm_management.mdx
@@ -69,8 +69,9 @@ memkind_free(NULL, pData); // "kind" parameter is deduced from the address
 
 Similarly other memory types can be chosen.
 
-!!! note
+<Callout>
     The allocation will return `NULL` pointer when memory of specified kind is not available.
+</Callout>
 
 ## High Bandwidth Memory (HBM)
 
@@ -277,4 +278,4 @@ Moving histogram bins data into HBM memory should speedup the algorithm more tha
 
 [1]: https://linux.die.net/man/8/numactl
 [2]: http://memkind.github.io/memkind/man_pages/memkind.html
-[3]: https://lenovopress.lenovo.com/lp1738-implementing-intel-high-bandwidth-memory
\ No newline at end of file
+[3]: https://lenovopress.lenovo.com/lp1738-implementing-intel-high-bandwidth-memory
diff --git a/content/docs/cs/guides/horizon.mdx b/content/docs/cs/guides/horizon.mdx
index ffce8b86..e90a14c7 100644
--- a/content/docs/cs/guides/horizon.mdx
+++ b/content/docs/cs/guides/horizon.mdx
@@ -10,8 +10,9 @@ including features such as session management, user authentication, and virtual
 
 ## How to Access VMware Horizon
 
-!!! important
+<Callout type="warn">
     Access to VMware Horizon requires IT4I VPN.
+</Callout>
 
 1. Contact [IT4I support][a] with a request for an access and VM allocation.
 1. [Download][1] and install the VMware Horizon Client for Windows.
diff --git a/content/docs/cs/guides/power10.mdx b/content/docs/cs/guides/power10.mdx
index 7d147848..43e7ec3d 100644
--- a/content/docs/cs/guides/power10.mdx
+++ b/content/docs/cs/guides/power10.mdx
@@ -77,8 +77,9 @@ xlf -lopenblas hello.f90 -o hello
 
 to build the application as usual.
 
-!!! note
+<Callout>
     Combination of `xlf` and `openblas` seems to cause severe performance degradation. Therefore `ESSL` library should be preferred (see below).
+</Callout>
 
 ### Using ESSL Library
 
diff --git a/content/docs/cs/guides/xilinx.mdx b/content/docs/cs/guides/xilinx.mdx
index 015b4dc0..4195b922 100644
--- a/content/docs/cs/guides/xilinx.mdx
+++ b/content/docs/cs/guides/xilinx.mdx
@@ -297,8 +297,9 @@ $ XCL_EMULATION_MODE=sw_emu <application>
 
 ### Hardware Synthesis Mode
 
-!!! note
+<Callout>
     The HLS of these simple applications **can take up to 2 hours** to finish.
+</Callout>
 
 To allow the application to utilize real hardware we have to synthetize FPGA design for the accelerator. This can be done by repeating same steps used to build kernels in emulation mode, but with `IT4I_BUILD_MODE` set to `hw` like so:
 
diff --git a/content/docs/cs/job-scheduling.mdx b/content/docs/cs/job-scheduling.mdx
index ed7cc173..b32c32fe 100644
--- a/content/docs/cs/job-scheduling.mdx
+++ b/content/docs/cs/job-scheduling.mdx
@@ -60,8 +60,9 @@ Run interactive job, with X11 forwarding
  $ salloc -A PROJECT-ID -p p01-arm --x11
 ```
 
-!!! warning
+<Callout type="warn">
     Do not use `srun` for initiating interactive jobs, subsequent `srun`, `mpirun` invocations would block forever.
+</Callout>
 
 ## Running Batch Jobs
 
@@ -377,8 +378,9 @@ $ scontrol -d show node p02-intel02 | grep ActiveFeatures
 
 Slurm supports the ability to define and schedule arbitrary resources - Generic RESources (GRES) in Slurm's terminology. We use GRES for scheduling/allocating GPUs and FPGAs.
 
-!!! warning
+<Callout type="warn">
     Use only allocated GPUs and FPGAs. Resource separation is not enforced. If you use non-allocated resources, you can observe strange behavior and get into troubles.
+</Callout>
 
 ### Node Resources
 
diff --git a/content/docs/dgx2/accessing.mdx b/content/docs/dgx2/accessing.mdx
index 2c4433b1..2fde5b38 100644
--- a/content/docs/dgx2/accessing.mdx
+++ b/content/docs/dgx2/accessing.mdx
@@ -3,8 +3,9 @@ title: "Accessing the DGX-2"
 ---
 ## Before You Access
 
-!!! warning
+<Callout type="warn">
     GPUs are single-user devices. GPU memory is not purged between job runs and it can be read (but not written) by any user. Consider the confidentiality of your running jobs.
+</Callout>
 
 ## How to Access
 
@@ -24,8 +25,9 @@ The HOME filesystem is realized as an NFS filesystem. This is a shared home from
 The SCRATCH is realized on an NVME storage. The SCRATCH filesystem is mounted in the `/scratch` directory.
 Accessible capacity is 22TB, shared among all users.
 
-!!! warning
+<Callout type="warn">
     Files on the SCRATCH filesystem that are not accessed for more than 60 days will be automatically deleted.
+</Callout>
 
 ### PROJECT
 
diff --git a/content/docs/dgx2/job_execution.mdx b/content/docs/dgx2/job_execution.mdx
index 9c3ab091..8168fa07 100644
--- a/content/docs/dgx2/job_execution.mdx
+++ b/content/docs/dgx2/job_execution.mdx
@@ -81,8 +81,9 @@ Wed Jun 16 07:46:32 2021
 kru0052@cn202:~$ exit
 ```
 
-!!! tip
+<Callout>
     Submit the interactive job using the `salloc` command.
+</Callout>
 
 ## Job Execution
 
diff --git a/content/docs/dice.mdx b/content/docs/dice.mdx
index 99767602..7bca3f20 100644
--- a/content/docs/dice.mdx
+++ b/content/docs/dice.mdx
@@ -148,8 +148,9 @@ Use the command `iput` for upload, `iget` for download, or `ihelp` for help.
 
 ## Access to iRODS Collection From Other Resource
 
-!!! note
+<Callout>
     This guide assumes you are uploading your data from your local PC/VM.
+</Callout>
 
 Use the password from [AAI][f].
 
@@ -173,8 +174,9 @@ For access, set PAM passwords at [AAI][f].
 
 ### Fuse
 
-!!!note "Linux client only"
+<Callout>
     This is a Linux client only, basic knowledge of the command line is necessary.
+</Callout>
 
 Fuse allows you to work with your iRODS collection like an ordinary directory.
 
@@ -227,8 +229,9 @@ To stop/unmount your collection, use:
 
 ### iCommands
 
-!!!note "Linux client only"
+<Callout>
     This is a Linux client only, basic knowledge of the command line is necessary.
+</Callout>
 
 We recommend Centos7, Ubuntu 20 is optional.
 
diff --git a/content/docs/einfracz-migration.mdx b/content/docs/einfracz-migration.mdx
index 586661e0..a65d20ec 100644
--- a/content/docs/einfracz-migration.mdx
+++ b/content/docs/einfracz-migration.mdx
@@ -30,8 +30,9 @@ After the migration, you must use your **e-INFRA CZ credentials** to access all
 
 Successfully migrated accounts tied to e-INFRA CZ can be self-managed at [e-INFRA CZ User profile][4].
 
-!!! tip "Recommendation"
+<Callout>
     We recommend [verifying your SSH keys][6] for cluster access.
+</Callout>
 
 ## Troubleshooting
 
diff --git a/content/docs/environment-and-modules.mdx b/content/docs/environment-and-modules.mdx
index 447c9680..486fb516 100644
--- a/content/docs/environment-and-modules.mdx
+++ b/content/docs/environment-and-modules.mdx
@@ -13,8 +13,9 @@ Note that bash is the only supported shell.
 | Barbora         | yes  | yes  | yes | yes | no   |
 | DGX-2           | yes  | no   | no  | no  | no   |
 
-!!! info
+<Callout>
     Bash is the default shell. Should you need a different shell, contact [support\[at\]it4i.cz][3].
+</Callout>
 
 ## Environment Customization
 
@@ -39,8 +40,9 @@ then
 fi
 ```
 
-!!! note
+<Callout>
     Do not run commands outputting to standard output (echo, module list, etc.) in .bashrc for non-interactive SSH sessions. It breaks the fundamental functionality (SCP) of your account. Take care for SSH session interactivity for such commands as stated in the previous example.
+</Callout>
 
 ### Application Modules
 
@@ -74,8 +76,9 @@ Application modules on clusters are built using [EasyBuild][1]. The modules are
  python: python packages
 ```
 
-!!! note
+<Callout>
     The modules set up the application paths, library paths and environment variables for running a particular application.
+</Callout>
 
 The modules may be loaded, unloaded, and switched according to momentary needs. For details, see [lmod][2].
 
diff --git a/content/docs/general/access/einfracz-account.mdx b/content/docs/general/access/einfracz-account.mdx
index bfbb2063..9888312a 100644
--- a/content/docs/general/access/einfracz-account.mdx
+++ b/content/docs/general/access/einfracz-account.mdx
@@ -5,8 +5,9 @@ title: "e-INFRA CZ Account"
 which provides capacities and resources for the transmission, storage and processing of scientific and research data.
 IT4Innovations has become a member of e-INFRA CZ on January 2022.
 
-!!! important
+<Callout type="warn">
     Only persons affiliated with an academic institution from the Czech Republic ([eduID.cz][6]) are eligible for an e-INFRA CZ account.
+</Callout>
 
 ## Request e-INFRA CZ Account
 
diff --git a/content/docs/general/access/project-access.mdx b/content/docs/general/access/project-access.mdx
index 37491c02..1adca13a 100644
--- a/content/docs/general/access/project-access.mdx
+++ b/content/docs/general/access/project-access.mdx
@@ -1,8 +1,9 @@
 ---
 title: "Get Project Membership"
 ---
-!!! note
+<Callout>
     You need to be named as a collaborator by a Primary Investigator (PI) in order to access and use the clusters.
+</Callout>
 
 ## Authorization by Web
 
diff --git a/content/docs/general/accessing-the-clusters/graphical-user-interface/ood.mdx b/content/docs/general/accessing-the-clusters/graphical-user-interface/ood.mdx
index 8837689f..1eb3a3c7 100644
--- a/content/docs/general/accessing-the-clusters/graphical-user-interface/ood.mdx
+++ b/content/docs/general/accessing-the-clusters/graphical-user-interface/ood.mdx
@@ -18,8 +18,9 @@ and launch interactive apps on login nodes.
 
 ## OOD Apps on IT4I Clusters
 
-!!! note
+<Callout>
     Barbora OOD offers Mate and XFCE Desktops on login node only. Other applications listed below are exclusive to Karolina OOD.
+</Callout>
 
 * Desktops
     * Karolina Login Mate
diff --git a/content/docs/general/accessing-the-clusters/graphical-user-interface/vnc.mdx b/content/docs/general/accessing-the-clusters/graphical-user-interface/vnc.mdx
index 6907d0af..34577b5b 100644
--- a/content/docs/general/accessing-the-clusters/graphical-user-interface/vnc.mdx
+++ b/content/docs/general/accessing-the-clusters/graphical-user-interface/vnc.mdx
@@ -9,8 +9,9 @@ The recommended clients are [TightVNC][b] or [TigerVNC][c] (free, open source, a
 
 ## Create VNC Server Password
 
-!!! note
+<Callout>
     VNC server password should be set before the first login. Use a strong password.
+</Callout>
 
 ```console
 $ vncpasswd
@@ -20,8 +21,9 @@ Verify:
 
 ## Start VNC Server
 
-!!! note
+<Callout>
     To access VNC, a remote VNC Server must be started first and a tunnel using SSH port forwarding must be established.
+</Callout>
 
 [See below][2] the details on SSH tunnels.
 
@@ -40,8 +42,9 @@ Generally, you can choose display number freely, *except these occupied numbers*
 Also remember that display number should be lower than or equal to 99.
 Based on this requirement, we have chosen the display number 61, as seen in the examples below.
 
-!!! note
+<Callout>
     Your situation may be different so the choice of your number may differ, as well. **Choose and use your own display number accordingly!**
+</Callout>
 
 Start your remote VNC server on the chosen display number (61):
 
@@ -74,13 +77,15 @@ username :61
 username :102
 ```
 
-!!! note
+<Callout>
     The VNC server runs on port 59xx, where xx is the display number. To get your port number, simply add 5900 + display number, in our example 5900 + 61 = 5961. Another example for display number 102 is calculation of TCP port 5900 + 102 = 6002, but note that TCP ports above 6000 are often used by X11. **Calculate your own port number and use it instead of 5961 from examples below**.
+</Callout>
 
 To access the remote VNC server you have to create a tunnel between the login node using TCP port 5961 and your local  machine using a free TCP port (for simplicity the very same) in next step. See examples for [Linux/Mac OS][2] and [Windows][3].
 
-!!! note
+<Callout>
     The tunnel must point to the same login node where you launched the VNC server, e.g. login2. If you use just cluster-name.it4i.cz, the tunnel might point to a different node due to DNS round robin.
+</Callout>
 
 ## Linux/Mac OS Example of Creating a Tunnel
 
@@ -121,8 +126,9 @@ You have to close the SSH tunnel which is still running in the background after
 kill 2022
 ```
 
-!!! note
+<Callout>
     You can watch the instruction video on how to make a VNC connection between a local Ubuntu desktop and the IT4I cluster [here][e].
+</Callout>
 
 ## Windows Example of Creating a Tunnel
 
@@ -215,8 +221,9 @@ or:
 $ pkill vnc
 ```
 
-!!! note
+<Callout>
     Also, do not forget to terminate the SSH tunnel, if it was used. For details, see the end of [this section][2].
+</Callout>
 
 ## GUI Applications on Compute Nodes Over VNC
 
diff --git a/content/docs/general/accessing-the-clusters/graphical-user-interface/x-window-system.mdx b/content/docs/general/accessing-the-clusters/graphical-user-interface/x-window-system.mdx
index 760a8487..92c6e805 100644
--- a/content/docs/general/accessing-the-clusters/graphical-user-interface/x-window-system.mdx
+++ b/content/docs/general/accessing-the-clusters/graphical-user-interface/x-window-system.mdx
@@ -3,8 +3,9 @@ title: "X Window System"
 ---
 The X Window system is a principal way to get GUI access to the clusters. The **X Window System** (commonly known as **X11**, based on its current major version being 11, or shortened to simply **X**, and sometimes informally **X-Windows**) is a computer software system and network protocol that provides a basis for graphical user interfaces (GUIs) and rich input device capability for networked computers.
 
-!!! tip
+<Callout>
     The X display forwarding must be activated and the X server running on client side
+</Callout>
 
 ## X Display
 
@@ -30,8 +31,9 @@ To enable the X display forwarding, log in using the `-X` option in the SSH clie
  local $ ssh -X username@cluster-name.it4i.cz
 ```
 
-!!! tip
+<Callout>
     If you are getting the "cannot open display" error message, try to export the DISPLAY variable, before attempting to log in:
+</Callout>
 
 ```console
  local $ export DISPLAY=localhost:0.0
@@ -51,8 +53,9 @@ To run Linux GuI on WSL, download, for example, [VcXsrv][a].
 
 1. After installation, run XLaunch and during the initial setup, check the `Disable access control`.
 
-    !!! tip
+    <Callout>
         Save the configuration and launch VcXsrv using the `config.xlaunch` file, so you won't have to check the option on every run.
+    </Callout>
 
 1. Allow VcXsrv in your firewall to communicate on private and public networks.
 
@@ -62,8 +65,9 @@ To run Linux GuI on WSL, download, for example, [VcXsrv][a].
         export DISPLAY="`grep nameserver /etc/resolv.conf | sed 's/nameserver //'`:0"
     ```
 
-    !!! tip
+    <Callout>
         Include the command at the end of the `/etc/bash.bashrc`, so you don't have to run it every time you run WSL.
+    </Callout>
 
 1. Test the configuration by running `echo $DISPLAY`:
 
@@ -82,8 +86,9 @@ There is a variety of X servers available for the Windows environment. The comme
 
 ## Running GUI Enabled Applications
 
-!!! note
+<Callout>
     Make sure that X forwarding is activated and the X server is running.
+</Callout>
 
 Then launch the application as usual. Use the `&` to run the application in background:
 
diff --git a/content/docs/general/accessing-the-clusters/graphical-user-interface/xorg.mdx b/content/docs/general/accessing-the-clusters/graphical-user-interface/xorg.mdx
index 26f07956..6ac71298 100644
--- a/content/docs/general/accessing-the-clusters/graphical-user-interface/xorg.mdx
+++ b/content/docs/general/accessing-the-clusters/graphical-user-interface/xorg.mdx
@@ -3,8 +3,9 @@ title: "Xorg"
 ---
 ## Introduction
 
-!!! note
+<Callout>
     Available only for Karolina accelerated nodes acn[01-72] and vizualization servers viz[1-2]
+</Callout>
 
 Some applications (e.g. Paraview, Ensight, Blender, Ovito) require not only visualization but also computational resources such as multiple cores or multiple graphics accelerators. For the processing of demanding tasks, more operating memory and more memory on the graphics card are also required. These requirements are met by all accelerated nodes on the Karolina cluster, which are equipped with eight graphics cards with 40GB GPU memory and 1TB CPU memory. To run properly, it is required to have the Xorg server running and the VirtualGL environment installed.
 
@@ -63,8 +64,9 @@ Some applications (e.g. Paraview, Ensight, Blender, Ovito) require not only visu
     [acnX.karolina]$ DISPLAY=:XX vglrun paraview
     ```
 
-!!! note
+<Callout>
     It is not necessary to run Xorg from the command line on the visualization servers viz[1-2]. Xorg runs without interruption and is started when the visualization server boots.<br> Another option is to use [vglclient][2] for visualization server.
+</Callout>
 
 ## Running Blender (Eevee) on the Background Without GUI and Without Interactive Job on Karolina
 
diff --git a/content/docs/general/accessing-the-clusters/shell-access-and-data-transfer/putty.mdx b/content/docs/general/accessing-the-clusters/shell-access-and-data-transfer/putty.mdx
index 9b061a0a..eb580c90 100644
--- a/content/docs/general/accessing-the-clusters/shell-access-and-data-transfer/putty.mdx
+++ b/content/docs/general/accessing-the-clusters/shell-access-and-data-transfer/putty.mdx
@@ -5,10 +5,11 @@ title: "PuTTY (Windows)"
 
 We recommend you to download "**A Windows installer for everything except PuTTYtel**" with **Pageant** (SSH authentication agent) and **PuTTYgen** (PuTTY key generator) which is available [here][a].
 
-!!! note
+<Callout>
     "Pageant" is optional.
 
     "Change Password for Existing Private Key" is optional.
+</Callout>
 
 ## PuTTY - How to Connect to the IT4Innovations Cluster
 
diff --git a/content/docs/general/accessing-the-clusters/shell-access-and-data-transfer/ssh-key-management.mdx b/content/docs/general/accessing-the-clusters/shell-access-and-data-transfer/ssh-key-management.mdx
index eb97f652..259d3198 100644
--- a/content/docs/general/accessing-the-clusters/shell-access-and-data-transfer/ssh-key-management.mdx
+++ b/content/docs/general/accessing-the-clusters/shell-access-and-data-transfer/ssh-key-management.mdx
@@ -6,8 +6,9 @@ SSH uses public-private key pair for authentication, allowing users to log in wi
 
 ## Private Key
 
-!!! note
+<Callout>
     The path to a private key is usually /home/username/.ssh/
+</Callout>
 
 A private key file in the `id_rsa` or `*.ppk` format is present locally on local side and used for example in the Pageant SSH agent (for Windows users). The private key should always be kept in a safe place.
 
diff --git a/content/docs/general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.mdx b/content/docs/general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.mdx
index 7853d5db..98b6d533 100644
--- a/content/docs/general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.mdx
+++ b/content/docs/general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.mdx
@@ -9,8 +9,9 @@ To generate a new keypair of your public and private key, use the `ssh-keygen` t
 local $ ssh-keygen -t ed25519 -C username@organization.example.com' -f additional_key
 ```
 
-!!! note
+<Callout>
     Enter a **strong** **passphrase** for securing your private key.
+</Callout>
 
 By default, your private key is saved to the `id_rsa` file in the `.ssh` directory
 and your public key is saved to the `id_rsa.pub` file.
diff --git a/content/docs/general/accessing-the-clusters/vpn-access.mdx b/content/docs/general/accessing-the-clusters/vpn-access.mdx
index 77e7b13e..bb0c30b7 100644
--- a/content/docs/general/accessing-the-clusters/vpn-access.mdx
+++ b/content/docs/general/accessing-the-clusters/vpn-access.mdx
@@ -24,8 +24,9 @@ Client Certificate  | None
 
 Optionally, you can describe the VPN connection and select Save Login under Authentication.
 
-!!! Note "Realms"
+<Callout>
     If you are member of a partner organization, we may ask you to use so called realm in your VPN connection. In the Remote Gateway field, include the realm path after the IP address or hostname. For example, for a realm `excellent`, the field would read as follows `reconnect.it4i.cz:443/excellent`.
+</Callout>
 
 ![](/it4i/img/fc_vpn_web_login_2_1.png)
 
diff --git a/content/docs/general/applying-for-resources.mdx b/content/docs/general/applying-for-resources.mdx
index 2fe5d49e..a5f5c9ba 100644
--- a/content/docs/general/applying-for-resources.mdx
+++ b/content/docs/general/applying-for-resources.mdx
@@ -38,8 +38,9 @@ In order to authorize a Collaborator to utilize the allocated resources, the PI
 1. Provide a list of people, including themself, who are authorized to use the resources allocated to the project. The list must include the full name, email and affiliation. If collaborators' login access already exists in the IT4I systems, provide their usernames as well.
 1. Include "Authorization to IT4Innovations" into the subject line.
 
-!!! warning
+<Callout type="warn">
     Should the above information be provided by email, the email **must be** digitally signed. Read more on [digital signatures][2].
+</Callout>
 
 Example (except the subject line which must be in English, you may use Czech or Slovak language for communication with us):
 
@@ -59,8 +60,9 @@ PI
 (Digitally signed)
 ```
 
-!!! note
+<Callout>
     Web-based email interfaces cannot be used for secure communication; external application, such as Thunderbird or Outlook must be used. This way, your new credentials will be visible only in applications that have access to your certificate.
+</Callout>
 
 [1]: obtaining-login-credentials/obtaining-login-credentials.md
 [2]: https://docs.it4i.cz/general/obtaining-login-credentials/obtaining-login-credentials/#certificates-for-digital-signatures
diff --git a/content/docs/general/barbora-partitions.mdx b/content/docs/general/barbora-partitions.mdx
index b51ca751..fb5093af 100644
--- a/content/docs/general/barbora-partitions.mdx
+++ b/content/docs/general/barbora-partitions.mdx
@@ -1,8 +1,9 @@
 ---
 title: "Barbora Partitions"
 ---
-!!! important
+<Callout type="warn">
     Active [project membership][1] is required to run jobs.
+</Callout>
 
 Below is the list of partitions available on the Barbora cluster:
 
diff --git a/content/docs/general/capacity-computing.mdx b/content/docs/general/capacity-computing.mdx
index cc5064bb..a4666065 100644
--- a/content/docs/general/capacity-computing.mdx
+++ b/content/docs/general/capacity-computing.mdx
@@ -13,8 +13,9 @@ and user experience for all users.
 
 [//]: # (For this reason, the number of jobs is **limited to 100 jobs per user, 4,000 jobs and subjobs per user, 1,500 subjobs per job array**.)
 
-!!! note
+<Callout>
     Follow one of the procedures below, in case you wish to schedule more than 100 jobs at a time.
+</Callout>
 
 You can use [HyperQueue][1] when running a huge number of jobs. HyperQueue can help efficiently
 load balance a large number of jobs amongst available computing nodes.
diff --git a/content/docs/general/hyperqueue.mdx b/content/docs/general/hyperqueue.mdx
index 1176a2d7..95841cfa 100644
--- a/content/docs/general/hyperqueue.mdx
+++ b/content/docs/general/hyperqueue.mdx
@@ -81,8 +81,9 @@ $ hq job <job-id>
 $ hq jobs
 ```
 
-!!! important
+<Callout type="warn">
     Before the jobs can start executing, you have to provide HyperQueue with some computational resources.
+</Callout>
 
 ### Providing Computational Resources
 
@@ -114,9 +115,10 @@ There are two ways of providing computational resources.
     $ salloc <salloc-params> -- /bin/bash -l -c "$(which hq) worker start"
     ```
 
-!!! tip
+<Callout>
     For debugging purposes, you can also start the worker e.g. on a login node, simply by running
     `$ hq worker start`. Do not use such worker for any long-running computations though!
+</Callout>
 
 ## Architecture
 
diff --git a/content/docs/general/job-arrays.mdx b/content/docs/general/job-arrays.mdx
index 645c81f8..ae793102 100644
--- a/content/docs/general/job-arrays.mdx
+++ b/content/docs/general/job-arrays.mdx
@@ -1,13 +1,14 @@
 ---
 title: "Job Arrays"
 ---
-!!!warning
+<Callout type="warn">
     This page has not been updated yet. The page does not reflect the transition from PBS to Slurm.
 A job array is a compact representation of many jobs called subjobs. Subjobs share the same job script, and have the same values for all attributes and resources, with the following exceptions:
 
 * each subjob has a unique index, $PBS_ARRAY_INDEX
 * job Identifiers of subjobs only differ by their indices
 * the state of subjobs can differ (R, Q, etc.)
+</Callout>
 
 All subjobs within a job array have the same scheduling priority and schedule as independent jobs. An entire job array is submitted through a single `qsub` command and may be managed by `qdel`, `qalter`, `qhold`, `qrls`, and `qsig` commands as a single job.
 
diff --git a/content/docs/general/job-priority.mdx b/content/docs/general/job-priority.mdx
index 6bb951c7..a6f12cfc 100644
--- a/content/docs/general/job-priority.mdx
+++ b/content/docs/general/job-priority.mdx
@@ -49,8 +49,9 @@ The scheduler makes a list of jobs to run in order of priority. The scheduler lo
 
 This means that jobs with lower priority can be run before jobs with higher priority.
 
-!!! note
+<Callout>
     It is **very beneficial to specify the timelimit** when submitting jobs.
+</Callout>
 
 Specifying more accurate timelimit enables better scheduling, better times, and better resource usage. Jobs with suitable (small) timelimit can be backfilled - and overtake job(s) with a higher priority.
 
diff --git a/content/docs/general/job-submission-and-execution.mdx b/content/docs/general/job-submission-and-execution.mdx
index 0d9e40b2..0b2ece24 100644
--- a/content/docs/general/job-submission-and-execution.mdx
+++ b/content/docs/general/job-submission-and-execution.mdx
@@ -1,10 +1,11 @@
 ---
 title: "Job Submission and Execution"
 ---
-!!! warning
+<Callout type="warn">
     Don't use the `#SBATCH --exclusive` parameter as it is already included in the SLURM configuration.<br><br>
     Use the `#SBATCH --mem=` parameter **on `qfat` only**. On `cpu_` queues, whole nodes are allocated.
     Accelerated nodes (`gpu_` queues) are divided each into eight parts with corresponding memory.
+</Callout>
 
 ## Introduction
 
@@ -84,8 +85,9 @@ $ salloc -A PROJECT-ID -p qcpu_exp --x11
 
 To finish the interactive job, use the Ctrl+D (`^D`) control sequence.
 
-!!! warning
+<Callout type="warn">
     Do not use `srun` for initiating interactive jobs, subsequent `srun`, `mpirun` invocations would block forever.
+</Callout>
 
 ## Running Batch Jobs
 
@@ -123,9 +125,9 @@ Script will:
 * load appropriate module
 * run command, `srun` serves as Slurm's native way of executing MPI-enabled applications, `hostname` is used in the example just for sake of simplicity
 
-!!! tip "Excluding Specific Nodes"
-
+<Callout>
     Use `#SBATCH --exclude=<node_name_list>` directive to exclude specific nodes from your job, e.g.: `#SBATCH --exclude=cn001,cn002,cn003`.
+</Callout>
 
 Submit directory will be used as working directory for submitted job,
 so there is no need to change directory in the job script.
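 
 As a minimal jobscript sketch only (the project ID, partition, module name, excluded node names, and walltime below are illustrative placeholders, not values taken from this page):
 
 ```bash
 #!/bin/bash
 #SBATCH --job-name=MyJobName
 #SBATCH --account=PROJECT-ID
 #SBATCH --partition=qcpu
 #SBATCH --nodes=4
 #SBATCH --time=02:00:00
 #SBATCH --exclude=cn001,cn002,cn003
 
 # load an MPI toolchain (module name is an example)
 ml OpenMPI/4.1.4-GCC-11.3.0
 
 # srun launches the tasks; hostname keeps the example simple
 srun hostname | sort | uniq -c
 ```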
diff --git a/content/docs/general/karolina-mpi.mdx b/content/docs/general/karolina-mpi.mdx
index 66525c40..bb10541e 100644
--- a/content/docs/general/karolina-mpi.mdx
+++ b/content/docs/general/karolina-mpi.mdx
@@ -1,12 +1,13 @@
 ---
 title: "Parallel Runs Setting on Karolina"
 ---
-!!!warning
+<Callout type=warn>
     This page has not been updated yet. The page does not reflect the transition from PBS to Slurm.
+</Callout>
 Important aspect of each parallel application is correct placement of MPI processes
 or threads to available hardware resources.
 Since incorrect settings can cause significant degradation of performance,
 all users should be familiar with basic principles explained below.
 
 At the beginning, a basic [hardware overview][1] is provided,
 since it influences settings of `mpirun` command.
diff --git a/content/docs/general/karolina-partitions.mdx b/content/docs/general/karolina-partitions.mdx
index b4617920..69549b89 100644
--- a/content/docs/general/karolina-partitions.mdx
+++ b/content/docs/general/karolina-partitions.mdx
@@ -1,8 +1,9 @@
 ---
 title: "Karolina Partitions"
 ---
-!!! important
+<Callout type=warn>
     Active [project membership][1] is required to run jobs.
+</Callout>
 
 Below is the list of partitions available on the Karolina cluster:
 
diff --git a/content/docs/general/karolina-slurm.mdx b/content/docs/general/karolina-slurm.mdx
index ebec8bc8..31558b3d 100644
--- a/content/docs/general/karolina-slurm.mdx
+++ b/content/docs/general/karolina-slurm.mdx
@@ -40,9 +40,10 @@ On Karolina cluster
 * all CPU queues/partitions provide full node allocation, whole nodes (all node resources) are allocated to a job.
 * other queues/partitions (gpu, fat, viz) provide partial node allocation. Jobs' resources (cpu, mem) are separated and dedicated for job.
 
-!!! important "Partial node allocation and security"
+<Callout type=warn>
     Division of nodes means that if two users allocate a portion of the same node, they can see each other's running processes.
     If this solution is inconvenient for you, consider allocating a whole node.
+</Callout>
 
 ## Using CPU Queues
 
@@ -62,9 +63,10 @@ There is no need to specify the number of cores and memory size.
 
 ## Using GPU Queues
 
-!!! important "Nodes per job limit"
+<Callout type=warn>
     Because we are still in the process of fine-tuning and setting optimal parameters for SLURM,
     we have temporarily limited the maximum number of nodes per job on `qgpu` and `qgpu_biz` to **16**.
+</Callout>
 
 Access [GPU accelerated nodes][5].
 Every GPU accelerated node is divided into eight parts, each part contains one GPU, 16 CPU cores and corresponding memory.
diff --git a/content/docs/general/obtaining-login-credentials/obtaining-login-credentials.mdx b/content/docs/general/obtaining-login-credentials/obtaining-login-credentials.mdx
index f841a4ef..9824cd9a 100644
--- a/content/docs/general/obtaining-login-credentials/obtaining-login-credentials.mdx
+++ b/content/docs/general/obtaining-login-credentials/obtaining-login-credentials.mdx
@@ -1,8 +1,9 @@
 ---
 title: "IT4I Account"
 ---
-!!! important
+<Callout type=warn>
     If you are affiliated with an academic institution from the Czech Republic ([eduID.cz][u]), create an [e-INFRA CZ account][8], instead.
+</Callout>
 
 If you are not eligible for an e-INFRA CZ account, contact the [IT4I support][a] (email: [support\[at\]it4i.cz][b]) and provide the following information:
 
@@ -54,8 +55,9 @@ Certificate generation process for academic purposes, utilizing the CESNET certi
 
 * [How to generate a personal TCS certificate in Mozilla Firefox ESR web browser.][k] (in Czech)
 
-!!! note
+<Callout>
     The certificate file can be installed into your email client. Web-based email interfaces cannot be used for secure communication, external application, such as Thunderbird or Outlook must be used. This way, your new credentials will be visible only in applications that have access to your certificate.
+</Callout>
 
 If you are not able to obtain the certificate from any of the respected certification authorities, follow the Alternative Way below.
 
@@ -63,9 +65,10 @@ FAQ about certificates can be found here: [Certificates FAQ][7].
 
 ## Alternative Way to Personal Certificate
 
-!!! important
+<Callout type=warn>
     Choose this alternative **only** if you cannot obtain your certificate in a standard way.
     Note that in this case **you must attach a scan of your photo ID** (personal ID, passport, or driver's license) when applying for login credentials.
+</Callout>
 
 An alternative to personal certificate is an S/MIME certificate allowing secure email communication,
 e.g. providing sensitive information such as ID scan or user login/password.
@@ -78,8 +81,9 @@ The following example is for Actalis free S/MIME certificate, but you can choose
 1. Import the certificate to one of the supported email clients.
 1. Attach a scan of photo ID (personal ID, passport, or driver license) to your email request for IT4I account.
 
-!!! note
+<Callout>
     Web-based email interfaces cannot be used for secure communication; external application, such as Thunderbird or Outlook must be used. This way, your new credentials will be visible only in applications that have access to your certificate.
+</Callout>
 
 [1]: ./obtaining-login-credentials.md#certificates-for-digital-signatures
 [2]: #authorization-by-web
diff --git a/content/docs/general/pbs-job-submission-and-execution.mdx b/content/docs/general/pbs-job-submission-and-execution.mdx
index c9627c1a..d2c3b52a 100644
--- a/content/docs/general/pbs-job-submission-and-execution.mdx
+++ b/content/docs/general/pbs-job-submission-and-execution.mdx
@@ -1,8 +1,10 @@
 ---
 title: "Job Submission and Execution"
 ---
-!!!warning
+<Callout type=warn>
     This page has not been updated yet. The page does not reflect the transition from PBS to Slurm.
+</Callout>
+
 ## Job Submission
 
 When allocating computational resources for the job, specify:
@@ -22,8 +24,9 @@ $ qsub -A Project_ID -q queue -l select=x:ncpus=y,walltime=[[hh:]mm:]ss[.ms] job
 
 The `qsub` command submits the job to the queue, i.e. it creates a request to the PBS Job manager for allocation of specified resources. The resources will be allocated when available, subject to the above described policies and constraints. **After the resources are allocated, the jobscript or interactive shell is executed on the first of the allocated nodes.**
 
-!!! note
+<Callout>
     `ncpus=y` is usually not required, because the smallest allocation unit is an entire node. The exception are corner cases for `qviz` and `qfat` on Karolina.
+</Callout>
 
 ### Job Submission Examples
 
@@ -123,8 +126,9 @@ For communication intensive jobs, it is possible to set stricter requirement - t
 
 Nodes directly connected to the same InfiniBand switch can communicate most efficiently. Using the same switch prevents hops in the network and provides for unbiased, most efficient network communication. There are 9 nodes directly connected to every InfiniBand switch.
 
-!!! note
+<Callout>
     We recommend allocating compute nodes of a single switch when the best possible computational network performance is required to run job efficiently.
+</Callout>
 
 Nodes directly connected to the one InfiniBand switch can be allocated using node grouping on the PBS resource attribute `switch`.
 
@@ -140,8 +144,9 @@ $ qsub -A OPEN-0-0 -q qprod -l select=9 -l place=group=switch ./myjob
 
 ### Selecting Turbo Boost Off
 
-!!! note
+<Callout>
     For Barbora only.
+</Callout>
 
 Intel Turbo Boost Technology is on by default. We strongly recommend keeping the default.
 
@@ -170,8 +175,9 @@ Although this example is somewhat artificial, it demonstrates the flexibility of
 
 ## Job Management
 
-!!! note
+<Callout>
     Check the status of your jobs using the `qstat` and `check-pbs-jobs` commands
+</Callout>
 
 ```console
 $ qstat -a
@@ -249,8 +255,9 @@ Run loop 3
 
 In this example, we see the actual output (some iteration loops) of the job `35141.dm2`.
 
-!!! note
+<Callout>
     Manage your queued or running jobs, using the `qhold`, `qrls`, `qdel`, `qsig`, or `qalter` commands
+</Callout>
 
 You may release your allocation at any time, using the `qdel` command
 
@@ -274,13 +281,15 @@ $ man pbs_professional
 
 ### Jobscript
 
-!!! note
+<Callout>
     Prepare the jobscript to run batch jobs in the PBS queue system
+</Callout>
 
 The Jobscript is a user made script controlling a sequence of commands for executing the calculation. It is often written in bash, though other scripts may be used as well. The jobscript is supplied to the PBS `qsub` command as an argument, and is executed by the PBS Professional workload manager.
 
-!!! note
+<Callout>
     The jobscript or interactive shell is executed on first of the allocated nodes.
+</Callout>
 
 ```console
 $ qsub -q qexp -l select=4 -N Name0 ./myjob
@@ -309,8 +318,9 @@ $ pwd
 
 In this example, 4 nodes were allocated interactively for 1 hour via the `qexp` queue. The interactive shell is executed in the `/home` directory.
 
-!!! note
+<Callout>
     All nodes within the allocation may be accessed via SSH. Unallocated nodes are not accessible to the user.
+</Callout>
 
 The allocated nodes are accessible via SSH from login nodes. The nodes may access each other via SSH as well.
 
@@ -341,8 +351,9 @@ In this example, the hostname program is executed via `pdsh` from the interactiv
 
 ### Example Jobscript for MPI Calculation
 
-!!! note
+<Callout>
     Production jobs must use the /scratch directory for I/O
+</Callout>
 
 The recommended way to run production jobs is to change to the `/scratch` directory early in the jobscript, copy all inputs to `/scratch`, execute the calculations, and copy outputs to the `/home` directory.
 
@@ -378,13 +389,15 @@ exit
 
 In this example, a directory in `/home` holds the input file input and the `mympiprog.x` executable. We create the `myjob` directory on the `/scratch` filesystem, copy input and executable files from the `/home` directory where the `qsub` was invoked (`$PBS_O_WORKDIR`) to `/scratch`, execute the MPI program `mympiprog.x` and copy the output file back to the `/home` directory. `mympiprog.x` is executed as one process per node, on all allocated nodes.
 
-!!! note
+<Callout>
     Consider preloading inputs and executables onto [shared scratch][6] memory before the calculation starts.
+</Callout>
 
 In some cases, it may be impractical to copy the inputs to the `/scratch` memory and the outputs to the `/home` directory. This is especially true when very large input and output files are expected, or when the files should be reused by a subsequent calculation. In such cases, it is the users' responsibility to preload the input files on the shared `/scratch` memory before the job submission, and retrieve the outputs manually after all calculations are finished.
 
-!!! note
+<Callout>
     Store the `qsub` options within the jobscript. Use the `mpiprocs` and `ompthreads` qsub options to control the MPI job execution.
+</Callout>
 
 ### Example Jobscript for MPI Calculation With Preloaded Inputs
 
@@ -421,8 +434,9 @@ Note the `mpiprocs` and `ompthreads` qsub options controlling the behavior of th
 
 ### Example Jobscript for Single Node Calculation
 
-!!! note
+<Callout>
     The local scratch directory is often useful for single node jobs. Local scratch memory will be deleted immediately after the job ends.
+</Callout>
 
 Example jobscript for single node calculation, using [local scratch][6] memory on the node:
 
diff --git a/content/docs/general/resource-accounting.mdx b/content/docs/general/resource-accounting.mdx
index b4f67054..af290c68 100644
--- a/content/docs/general/resource-accounting.mdx
+++ b/content/docs/general/resource-accounting.mdx
@@ -28,14 +28,14 @@ The same rule applies for unspent [reservations][10a].
 
 time: duration of the Slurm job in hours
 
-!!! important "CPU/GPU resources granularity"
-
+<Callout type=warn>
     Minimal granularity of all Barbora's partitions and Karolina's CPU partition is 1 node.
     This means that if you request, for example, 32 cores on Karolina's CPU partition,
     your job will still consume 1 NH \* time.
 
     All other Karolina's partitions (GPU, FAT, VIZ) provide partial node allocation;
     i.e.: if you request 4 GPUs on Karolina, you will consume only 0.5 NH \* time.
+</Callout>
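 
 A hedged worked example, applying the node-fraction times time relation described above:
 
 ```
 4 GPUs (0.5 of an 8-GPU node) for 10 hours:  0.5 * 10 h = 5 NH
 32 cores on a CPU node for 10 hours:         1.0 * 10 h = 10 NH
 ```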
 
 [1a]: job-submission-and-execution.md
 [2a]: #normalized-core-hours-nch
diff --git a/content/docs/general/resource_allocation_and_job_execution.mdx b/content/docs/general/resource_allocation_and_job_execution.mdx
index 6829ae90..5e4cabdc 100644
--- a/content/docs/general/resource_allocation_and_job_execution.mdx
+++ b/content/docs/general/resource_allocation_and_job_execution.mdx
@@ -15,8 +15,9 @@ Read more on the [Job Submission and Execution][5] page.
 
 Resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and resources available to the Project. [The Fair-share][3] ensures that individual users may consume approximately equal amount of resources per week. The resources are accessible via queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources.
 
-!!! note
+<Callout>
     See the queue status for [Karolina][d] or [Barbora][e].
+</Callout>
 
 Read more on the [Resource Allocation Policy][4] page.
 
diff --git a/content/docs/general/resources-allocation-policy.mdx b/content/docs/general/resources-allocation-policy.mdx
index 683b286b..1bc9e851 100644
--- a/content/docs/general/resources-allocation-policy.mdx
+++ b/content/docs/general/resources-allocation-policy.mdx
@@ -13,9 +13,10 @@ Queues provide prioritized and exclusive access to the computational resources.
 
 Computational resources are subject to [accounting policy][7].
 
-!!! important
+<Callout type=warn>
     Queues are divided based on a resource type: `qcpu_` for non-accelerated nodes and `qgpu_` for accelerated nodes. <br><br>
     EuroHPC queues are no longer available. If you are an EuroHPC user, use standard queues based on allocated/required type of resources.
+</Callout>
 
 ### Queues
 
@@ -49,8 +50,9 @@ however it cannot be changed for a running job.
 
 ## Queue Status
 
-!!! tip
+<Callout>
     Check the status of jobs, queues and compute nodes [here][c].
+</Callout>
 
 ![rsweb interface](/it4i/img/barbora_cluster_usage.png)
 
diff --git a/content/docs/general/shell-and-data-access.mdx b/content/docs/general/shell-and-data-access.mdx
index 5b9eaa4d..ca394d8d 100644
--- a/content/docs/general/shell-and-data-access.mdx
+++ b/content/docs/general/shell-and-data-access.mdx
@@ -5,11 +5,13 @@ title: "Accessing the Clusters"
 
 All IT4Innovations clusters are accessed by the SSH protocol via login nodes at the address **cluster-name.it4i.cz**. The login nodes may be addressed specifically, by prepending the loginX node name to the address.
 
-!!! note "Workgroups Access Limitation"
+<Callout>
     Projects from the **EUROHPC** workgroup can only access the **Karolina** cluster.
+</Callout>
 
-!!! important "Supported keys"
+<Callout type=warn>
     We accept only RSA or ED25519 keys for logging into our systems.
+</Callout>
 
 ### Karolina Cluster
 
@@ -80,8 +82,9 @@ barbora.it4i.cz, ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHUHvIrv7VUcGIcfsrcBjYfHp
 barbora.it4i.cz, ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOmUm4btn7OC0QLIT3xekKTTdg5ziby8WdxccEczEeE1
 ```
 
-!!! note
+<Callout>
     Barbora has identical SSH fingerprints on all login nodes.
+</Callout>
 
 ### Private Key Authentication:
 
@@ -101,8 +104,9 @@ On **Windows**, use the [PuTTY SSH client][2].
 
 After logging in, you will see the command prompt with the name of the cluster and the message of the day.
 
-!!! note
+<Callout>
     The environment is **not** shared between login nodes, except for shared filesystems.
+</Callout>
 
 ## Data Transfer
 
@@ -169,9 +173,10 @@ local $ rsync -r my-local-dir username@cluster-name.it4i.cz:directory
 
 ### Parallel Transfer
 
-!!! note
+<Callout>
     The data transfer speed is limited by the single TCP stream and single-core ssh encryption speed to about **250 MB/s** (750 MB/s in case of aes256-gcm@openssh.com cipher)
     Run **multiple** streams for unlimited transfers
+</Callout>
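 
 One possible sketch of running multiple parallel streams (the split into `part1` and `part2` subdirectories is illustrative only):
 
 ```console
 local $ rsync -r my-local-dir/part1 username@cluster-name.it4i.cz:directory &
 local $ rsync -r my-local-dir/part2 username@cluster-name.it4i.cz:directory &
 local $ wait
 ```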
 
 #### Many Files
 
@@ -274,8 +279,9 @@ Outgoing connections from cluster login nodes to the outside world are restricte
 | 443  | HTTPS    |
 | 873  | Rsync    |
 
-!!! note
+<Callout>
     Use **SSH port forwarding** and proxy servers to connect from cluster to all other remote ports.
+</Callout>
 
 Outgoing connections from cluster compute nodes are restricted to the internal network. Direct connections from compute nodes to the outside world are cut.
 
@@ -289,8 +295,9 @@ Outgoing connections from cluster compute nodes are restricted to the internal n
 
 ### Port Forwarding From Login Nodes
 
-!!! note
+<Callout>
     Port forwarding allows an application running on cluster to connect to arbitrary remote hosts and ports.
+</Callout>
 
 It works by tunneling the connection from cluster back to the user's workstations and forwarding from the workstation to the remote host.
 
@@ -310,8 +317,9 @@ Port forwarding may be established directly to the remote host. However, this re
 $ ssh -L 6000:localhost:1234 remote.host.com
 ```
 
-!!! note
+<Callout>
     Port number 6000 is chosen as an example only. Pick any free port.
+</Callout>
 
 ### Port Forwarding From Compute Nodes
 
@@ -331,8 +339,9 @@ In this example, we assume that port forwarding from `login1:6000` to `remote.ho
 
 Port forwarding is static; each single port is mapped to a particular port on a remote host. Connection to another remote host requires a new forward.
 
-!!! note
+<Callout>
     Applications with inbuilt proxy support experience unlimited access to remote hosts via a single proxy server.
+</Callout>
 
 To establish a local proxy server on your workstation, install and run the SOCKS proxy server software. On Linux, SSHD demon provides the functionality. To establish the SOCKS proxy server listening on port 1080 run:
 
diff --git a/content/docs/general/slurm-job-submission-and-execution.mdx b/content/docs/general/slurm-job-submission-and-execution.mdx
index 0d9e40b2..0b2ece24 100644
--- a/content/docs/general/slurm-job-submission-and-execution.mdx
+++ b/content/docs/general/slurm-job-submission-and-execution.mdx
@@ -1,10 +1,11 @@
 ---
 title: "Job Submission and Execution"
 ---
-!!! warning
+<Callout type=warn>
     Don't use the `#SBATCH --exclusive` parameter as it is already included in the SLURM configuration.<br><br>
     Use the `#SBATCH --mem=` parameter **on `qfat` only**. On `cpu_` queues, whole nodes are allocated.
     Accelerated nodes (`gpu_` queues) are divided each into eight parts with corresponding memory.
+</Callout>
 
 ## Introduction
 
@@ -84,8 +85,9 @@ $ salloc -A PROJECT-ID -p qcpu_exp --x11
 
 To finish the interactive job, use the Ctrl+D (`^D`) control sequence.
 
-!!! warning
+<Callout type=warn>
     Do not use `srun` for initiating interactive jobs, subsequent `srun`, `mpirun` invocations would block forever.
+</Callout>
 
 ## Running Batch Jobs
 
@@ -123,9 +125,9 @@ Script will:
 * load appropriate module
 * run command, `srun` serves as Slurm's native way of executing MPI-enabled applications, `hostname` is used in the example just for sake of simplicity
 
-!!! tip "Excluding Specific Nodes"
-
+<Callout>
     Use `#SBATCH --exclude=<node_name_list>` directive to exclude specific nodes from your job, e.g.: `#SBATCH --exclude=cn001,cn002,cn003`.
+</Callout>
 
 Submit directory will be used as working directory for submitted job,
 so there is no need to change directory in the job script.
diff --git a/content/docs/general/tools/cicd.mdx b/content/docs/general/tools/cicd.mdx
index 9ed4d7e0..30f964ba 100644
--- a/content/docs/general/tools/cicd.mdx
+++ b/content/docs/general/tools/cicd.mdx
@@ -27,8 +27,9 @@ The execution of CI pipelines works as follows. First, a user in the IT4I GitLab
 
 <img src="../../../img/it4i-ci.svg" title="IT4I CI" width="750">
 
-!!! note
+<Callout>
     The GitLab runners at Karolina and Barbora are able to submit (as a Slurm job) and execute 32 CI jobs concurrently, while the runner at Complementary systems can submit 16 jobs concurrently at most. Jobs above this limit are postponed in submission to respective slurm queue until a previous job has finished.
+</Callout>
 
 ### Virtual Environment (Docker Containers)
 
@@ -42,8 +43,9 @@ In addition, these runners have distributed caching enabled. This feature uses p
 
 To begin with, a CI pipeline of a project must be defined in a YAML file. The most common name of this file is `.gitlab-ci.yml` and it should be located in the repository top level. For detailed information, see [tutorial][7] on how to create your first pipeline. Additionally, [CI/CD YAML syntax reference][8] lists all possible keywords, that can be specified in the definition of CI/CD pipelines and jobs.
 
-!!! note
+<Callout>
     The default maximum time that a CI job can run for before it times out is 1 hour. This can be changed in [project's CI/CD settings][9]. When jobs exceed the specified timeout, they are marked as failed. Pending jobs are dropped after 24 hours of inactivity.
+</Callout>
 
 ### Execution of CI Pipelines at the HPC Clusters
 
diff --git a/content/docs/general/tools/portal-clients.mdx b/content/docs/general/tools/portal-clients.mdx
index d7f29aef..08203239 100644
--- a/content/docs/general/tools/portal-clients.mdx
+++ b/content/docs/general/tools/portal-clients.mdx
@@ -4,8 +4,9 @@ title: "it4i-portal-clients"
 it4i-portal-clients provides simple user-friendly shell interface
 to call [IT4I API](https://docs.it4i.cz/apiv1/) requests and display their respond.
 
-!!! important
+<Callout type=warn>
 	Python 2.7 is required.
+</Callout>
 
 Limits are placed on the number of requests you may make to [IT4I API](https://docs.it4i.cz/apiv1/).
 Rate limit can be changed without any warning at any time, but the default is **6 requests per minute**.
diff --git a/content/docs/index.mdx b/content/docs/index.mdx
index 73bc26c9..2a3d4950 100644
--- a/content/docs/index.mdx
+++ b/content/docs/index.mdx
@@ -14,8 +14,9 @@ The purpose of these pages is to provide comprehensive documentation of the hard
 
 ## Required Proficiency
 
-!!! note
+<Callout>
     Basic proficiency in Linux environments is required.
+</Callout>
 
 In order to use the system for your calculations, you need basic proficiency in Linux environments.
 To gain this proficiency, we recommend you read the [introduction to Linux][a] operating system environments,
@@ -23,8 +24,9 @@ and install a Linux distribution on your personal computer.
 For example, the [CentOS][b] distribution is similar to systems on the clusters at IT4Innovations and it is easy to install and use,
 but any Linux distribution would do.
 
-!!! note
+<Callout>
     Learn how to parallelize your code.
+</Callout>
 
 In many cases, you will run your own code on the cluster.
 In order to fully exploit the cluster, you will need to carefully consider how to utilize all the cores available on the node
diff --git a/content/docs/job-features.mdx b/content/docs/job-features.mdx
index 0c69f53e..fdcdcc94 100644
--- a/content/docs/job-features.mdx
+++ b/content/docs/job-features.mdx
@@ -41,8 +41,9 @@ $ salloc ... --comment "use:vtune=version_string"
 
 ## Global RAM Disk
 
-!!! warning
+<Callout type=warn>
     The feature has not been implemented on Slurm yet.
+</Callout>
 
 The Global RAM disk deploys BeeGFS On Demand parallel filesystem,
 using local (i.e. allocated nodes') RAM disks as a storage backend.
@@ -69,9 +70,10 @@ and files written to this RAM disk will be visible on all 4 nodes.
 The file system is private to a job and shared among the nodes,
 created when the job starts and deleted at the job's end.
 
-!!! warning
+<Callout type=warn>
     The Global RAM disk will be deleted immediately after the calculation end.
     Users should take care to save the output data from within the jobscript.
+</Callout>
 
 The files on the Global RAM disk will be equally striped across all the nodes, using 512k stripe size.
 Check the Global RAM disk status:
@@ -84,8 +86,9 @@ $ beegfs-ctl --mount=/mnt/global_ramdisk --getentryinfo /mnt/global_ramdisk
 Use Global RAM disk in case you need very large RAM disk space.
 The Global RAM disk allows for high performance sharing of data among compute nodes within a job.
 
-!!! warning
+<Callout type=warn>
      Use of Global RAM disk file system is at the expense of operational memory.
+</Callout>
 
 | Global RAM disk    |                                                                           |
 | ------------------ | --------------------------------------------------------------------------|
@@ -96,8 +99,9 @@ The Global RAM disk allows for high performance sharing of data among compute no
 
 N = number of compute nodes in the job.
 
-!!! Warning
+<Callout type=warn>
     Available on Barbora nodes only.
+</Callout>
 
 ## MSR-SAFE Support
 
@@ -110,11 +114,13 @@ $ salloc ... --comment "use:msr=version_string"
 
 `version_string` is MSR-SAFE version e.g. 1.4.0
 
-!!! Danger
+<Callout type=error>
     Hazardous, it causes CPU frequency disruption.
+</Callout>
 
-!!! Warning
+<Callout type=warn>
     Available on Barbora nodes only.
+</Callout>
 
 ## HDEEM Support
 
@@ -126,13 +132,15 @@ $ salloc ... --comment "use:hdeem=version_string"
 
 `version_string` is HDEEM version e.g. 2.2.8-1
 
-!!! Warning
+<Callout type=warn>
     Available on Barbora nodes only.
+</Callout>
 
 ## NVMe Over Fabrics File System
 
-!!! warning
+<Callout type=warn>
     The feature has not been implemented on Slurm yet.
+</Callout>
 
 Attach a volume from an NVMe storage and mount it as a file-system. File-system is mounted on /mnt/nvmeof (on the first node of the job).
 Barbora cluster provides two NVMeoF storage nodes equipped with NVMe disks. Each storage node contains seven 1.6TB NVMe disks and provides net aggregated capacity of 10.18TiB. Storage space is provided using the NVMe over Fabrics protocol; RDMA network i.e. InfiniBand is used for data transfers.
@@ -155,13 +163,15 @@ For example:
 $ salloc ... --comment "use:nvmeof=10t:shared"
 ```
 
-!!! Warning
+<Callout type=warn>
     Available on Barbora nodes only.
+</Callout>
 
 ## Smart Burst Buffer
 
-!!! warning
+<Callout type=warn>
     The feature has not been implemented on Slurm yet.
+</Callout>
 
 Accelerate SCRATCH storage using the Smart Burst Buffer (SBB) technology. A specific Burst Buffer process is launched and Burst Buffer resources (CPUs, memory, flash storage) are allocated on an SBB storage node for acceleration (I/O caching) of SCRATCH data operations. The SBB profile file `/lscratch/$SLURM_JOB_ID/sbb.sh` is created on the first allocated node of job. For SCRATCH acceleration, the SBB profile file has to be sourced into the shell environment - provided environment variables have to be defined in the process environment. Modified data is written asynchronously to a backend (Lustre) filesystem, writes might be proceeded after job termination.
 
@@ -179,8 +189,9 @@ Loading SBB profile:
 $ source /lscratch/$SLURM_JOB_ID/sbb.sh
 ```
 
-!!! Warning
+<Callout type=warn>
     Available on Barbora nodes only.
+</Callout>
 
 [1]: software/tools/virtualization.md#tap-interconnect
 [2]: general/accessing-the-clusters/graphical-user-interface/xorg.md
diff --git a/content/docs/karolina/introduction.mdx b/content/docs/karolina/introduction.mdx
index 3570e290..394af33a 100644
--- a/content/docs/karolina/introduction.mdx
+++ b/content/docs/karolina/introduction.mdx
@@ -1,9 +1,6 @@
 ---
 title: "Introduction"
 ---
-!!! important "Karolina Update"
-    Karolina has been updated. This includes updates to cluster management tools and new node images with **Rocky Linux 8.9**. Expect new versions of kernels, libraries, and drivers on compute nodes.
-
 Karolina is the latest and most powerful supercomputer cluster built for IT4Innovations in Q2 of 2021. The Karolina cluster consists of 829 compute nodes, totaling 106,752 compute cores with 313 TB RAM, giving over 15.7 PFLOP/s theoretical peak performance.
 
 Nodes are interconnected through a fully non-blocking fat-tree InfiniBand network, and are equipped with AMD Zen 2, Zen 3, and Intel Cascade Lake architecture processors. Seventy two nodes are also equipped with NVIDIA A100 accelerators. Read more in [Hardware Overview][1].
diff --git a/content/docs/karolina/storage.mdx b/content/docs/karolina/storage.mdx
index ea4d5095..e48f38d1 100644
--- a/content/docs/karolina/storage.mdx
+++ b/content/docs/karolina/storage.mdx
@@ -11,8 +11,9 @@ Shared filesystems should not be used as a backup for large amount of data or lo
 
 The HOME filesystem is an HA cluster of two active-passive NFS servers. This filesystem contains users' home directories `/home/username`. Accessible capacity is 31 TB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 25 GB per user. Should 25 GB prove insufficient, contact [support][d], the quota may be increased upon request.
 
-!!! note
+<Callout>
     The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active projects.
+</Callout>
 
 The files on HOME filesystem will not be deleted until the end of the [user's lifecycle][4].
 
@@ -69,13 +70,15 @@ Filesystem kbytes quota limit grace files quota limit grace
 /scratch/ 14356700796 0 19531250000 - 82841 0 20000000 -
 ```
 
-!!! note
+<Callout>
     The Scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high-performance access to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory.
 
     Users are advised to save the necessary data from the SCRATCH filesystem to HOME filesystem after the calculations and clean up the scratch files.
+</Callout>
 
-!!! warning
+<Callout type=warn>
     Files on the SCRATCH filesystem that are **not accessed for more than 90 days** will be automatically **deleted**.
+</Callout>
 
 | SCRATCH filesystem   |                                    |
 | -------------------- | ---------------------------------- |
diff --git a/content/docs/lumi/openfoam.mdx b/content/docs/lumi/openfoam.mdx
index f5cd8867..f6639783 100644
--- a/content/docs/lumi/openfoam.mdx
+++ b/content/docs/lumi/openfoam.mdx
@@ -12,10 +12,10 @@ from complex fluid flows involving chemical reactions, turbulence and heat trans
 
 ## Install 32bit/64bit
 
-!!! warning
-
+<Callout type=warn>
     There is a very small quota for maximum number of files on LUMI:
     projappl (100K), scratch (2.0M), flash (1.0M) - check: `lumi-quota`.
+</Callout>
 
 ```
 #!/bin/bash
diff --git a/content/docs/prace.mdx b/content/docs/prace.mdx
index 70944e4b..3c4dc79f 100644
--- a/content/docs/prace.mdx
+++ b/content/docs/prace.mdx
@@ -201,8 +201,9 @@ Generally, both shared file systems are available through GridFTP:
 
 More information about the shared file systems on Salomon is available [here][10].
 
-!!! hint
+<Callout>
     The `prace` directory is used for PRACE users on the SCRATCH file system.
+</Callout>
 
 Salomon cluster /scratch:
 
@@ -250,8 +251,9 @@ PRACE users should check their project accounting using the PRACE Accounting Too
 
 Users who have undergone the full local registration procedure (including signing the IT4Innovations Acceptable Use Policy) and who have received a local password may check at any time, how many core-hours they and their projects have consumed using the command "it4ifree". Note that you need to know your user password to use the command and that the displayed core hours are "system core hours" which differ from PRACE "standardized core hours".
 
-!!! note
+<Callout>
     The **it4ifree** command is a part of it4i.portal.clients package, [located here][pypi].
+</Callout>
 
 ```console
 $ it4ifree
diff --git a/content/docs/salomon/software/numerical-libraries/Clp.mdx b/content/docs/salomon/software/numerical-libraries/Clp.mdx
index 819f410b..8926e82a 100644
--- a/content/docs/salomon/software/numerical-libraries/Clp.mdx
+++ b/content/docs/salomon/software/numerical-libraries/Clp.mdx
@@ -19,8 +19,9 @@ The module sets up environment variables required for linking and running applic
 
 ## Compiling and Linking
 
-!!! note
+<Callout>
     Link with -lClp
+</Callout>
 
 Load the Clp module. Link using -lClp switch to link your code against Clp.
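 
 A hedged one-line example (the compiler choice and source file name are placeholders):
 
 ```console
 $ g++ -o myclpapp myclpapp.cpp -lClp
 ```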
 
diff --git a/content/docs/salomon/storage.mdx b/content/docs/salomon/storage.mdx
index b1f7df23..803a7577 100644
--- a/content/docs/salomon/storage.mdx
+++ b/content/docs/salomon/storage.mdx
@@ -9,14 +9,15 @@ All login and compute nodes may access same data on shared file systems. Compute
 
 ## Policy (In a Nutshell)
 
-!!! note
-
+<Callout>
     * Use [HOME][1] for your most valuable data and programs.
     * Use [WORK][3] for your large project files.
     * Use [TEMP][4] for large scratch data.
+</Callout>
 
-!!! warning
+<Callout type=warn>
     Do not use for [archiving][5]!
+</Callout>
 
 ## Archiving
 
@@ -149,8 +150,9 @@ For more information, see the [Access Control List][11] section of the documenta
 
 Users home directories /home/username reside on HOME file system. Accessible capacity is 0.5 PB, shared among all users. Individual users are restricted by file system usage quotas, set to 250 GB per user. If 250 GB should prove as insufficient for particular user, contact [support][d], the quota may be lifted upon request.
 
-!!! note
+<Callout>
     The HOME file system is intended for preparation, evaluation, processing and storage of data generated by active Projects.
+</Callout>
 
 The HOME should not be used to archive data of past Projects or other unrelated data.
 
@@ -177,22 +179,25 @@ Accessible capacity is 1.6PB, shared among all users on TEMP and WORK. Individua
 
 The WORK workspace resides on SCRATCH file system. Users may create subdirectories and files in the **/scratch/work/project/projectid** directory. The directory is accessible to all users involved in the `projectid` project.
 
-!!! note
+<Callout>
     The WORK workspace is intended to store users project data as well as for high performance access to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up.
 
     Files on the WORK file system are **persistent** (not automatically deleted) throughout duration of the project.
+</Callout>
 
 #### Temp
 
 The TEMP workspace resides on SCRATCH file system. The TEMP workspace accesspoint is  /scratch/temp.  Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK.
 
-!!! note
+<Callout>
     The TEMP workspace is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory.
 
     Users are advised to save the necessary data from the TEMP workspace to HOME or WORK after the calculations and clean up the scratch files.
+</Callout>
 
-!!! warning
+<Callout type=warn>
     Files on the TEMP file system that are **not accessed for more than 90 days** will be automatically **deleted**.
+</Callout>
 
 <table>
   <tr>
@@ -241,15 +246,17 @@ The local RAM disk is mounted as /ramdisk and is accessible to user at /ramdisk/
 
 The RAM disk is private to a job and local to node, created when the job starts and deleted at the job end.
 
-!!! note
+<Callout>
     The local RAM disk directory /ramdisk/$PBS_JOBID will be deleted immediately after the calculation end. Users should take care to save the output data from within the jobscript.
+</Callout>
 
 The local RAM disk file system is intended for  temporary scratch data generated during the calculation as well as
 for high-performance access to input and output files. Size of RAM disk file system is limited.
 It is not recommended to allocate large amount of memory and use large amount of data in RAM disk file system at the same time.
 
-!!! warning
+<Callout type=warn>
      Be very careful, use of RAM disk file system is at the expense of operational memory.
+</Callout>
 
 | Local RAM disk    |                                                                                                   |
 | ----------- | ------------------------------------------------------------------------------------------------------- |
diff --git a/content/docs/software/bio/omics-master/diagnostic-component-team.mdx b/content/docs/software/bio/omics-master/diagnostic-component-team.mdx
index 5396b1db..b998e164 100644
--- a/content/docs/software/bio/omics-master/diagnostic-component-team.mdx
+++ b/content/docs/software/bio/omics-master/diagnostic-component-team.mdx
@@ -5,8 +5,9 @@ title: "Diagnostic Component (TEAM)"
 
 TEAM is available at the [following address][a]
 
-!!! note
+<Callout>
     The address is accessible only via VPN.
+</Callout>
 
 ## Diagnostic Component
 
diff --git a/content/docs/software/bio/omics-master/priorization-component-bierapp.mdx b/content/docs/software/bio/omics-master/priorization-component-bierapp.mdx
index 95ba7f74..d6e77e89 100644
--- a/content/docs/software/bio/omics-master/priorization-component-bierapp.mdx
+++ b/content/docs/software/bio/omics-master/priorization-component-bierapp.mdx
@@ -5,8 +5,9 @@ title: "Prioritization Component (BiERapp)"
 
 BiERapp is available at the [following address][1].
 
-!!! note
+<Callout>
     The address is accessible only via VPN.
+</Callout>
 
 ## BiERapp
 
diff --git a/content/docs/software/cae/comsol/comsol-multiphysics.mdx b/content/docs/software/cae/comsol/comsol-multiphysics.mdx
index 40461901..7aab34eb 100644
--- a/content/docs/software/cae/comsol/comsol-multiphysics.mdx
+++ b/content/docs/software/cae/comsol/comsol-multiphysics.mdx
@@ -33,8 +33,9 @@ $ ml av COMSOL
 
 To prepare COMSOL jobs in the interactive mode, we recommend using COMSOL on the compute nodes.
 
-!!! Note
+<Callout>
     To run the COMSOL Desktop GUI on Windows, we recommend using the [Virtual Network Computing (VNC)][2].
+</Callout>
 
 Example for Karolina:
 
@@ -111,8 +112,9 @@ Starting a COMSOL server on a compute node and then connecting to it
 through a COMSOL Desktop GUI environment is a convenient way of running calculations from GUI.
 To do so, you first need to submit a job with which you'll start the COMSOL server, for example:
 
-!!! Note
+<Callout>
     You may be prompted to provide username and password. These can be different from your IT4Innovations credentials, and will be used during the authentication when trying to connect to the server from GUI.
+</Callout>
 
 ```bash
 $ salloc --account=PROJECT_ID --partition=qcpu_exp --nodes=1 --ntasks=36 --cpus-per-task=1
diff --git a/content/docs/software/chemistry/gaussian.mdx b/content/docs/software/chemistry/gaussian.mdx
index 51161d2c..08d5d134 100644
--- a/content/docs/software/chemistry/gaussian.mdx
+++ b/content/docs/software/chemistry/gaussian.mdx
@@ -16,8 +16,9 @@ Gaussian software package is available to all users that are
 not in direct or indirect competition with the Gaussian Inc. company and have a valid AUP with the IT4Innovations National Supercomputing Center.
 The license includes GPU support and Linda parallel environment for Gaussian multi-node parallel execution.
 
-!!! note
+<Callout>
     You need to be a member of the **gaussian group**. Contact [support\[at\]it4i.cz][b] in order to get included in the gaussian group.
+</Callout>
 
 Check your group membership:
 
@@ -51,8 +52,9 @@ Speedup may be observed on Barbora and DGX-2 systems when using the `CascadeLake
 Gaussian is compiled for single node parallel execution as well as multi-node parallel execution using Linda.
 GPU support for V100 cards is available on Barbora and DGX-2.
 
-!!! note
+<Callout>
     By default, the execution is single-core, single-node, and without GPU acceleration.
+</Callout>
 
 ### Shared-Memory Multiprocessor Parallel Execution (Single Node)
 
diff --git a/content/docs/software/chemistry/molpro.mdx b/content/docs/software/chemistry/molpro.mdx
index 84828215..95c14845 100644
--- a/content/docs/software/chemistry/molpro.mdx
+++ b/content/docs/software/chemistry/molpro.mdx
@@ -41,8 +41,9 @@ On the remaining allocated nodes, compute processes are launched, one process pe
 You can modify this behavior by using the `-n`, `-t`, and `helper-server` options.
 For more details, see the [MOLPRO documentation][c].
 
-!!! note
+<Callout>
     The OpenMP parallelization in MOLPRO is limited and has been observed to produce limited scaling. We therefore recommend to use MPI parallelization only. This can be achieved by passing the `--ntasks-per-node=128` and `--cpus-per-task=1` options to Slurm.
+</Callout>
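 
 A hedged submission sketch reflecting this recommendation (project ID, partition, and node count are placeholders):
 
 ```console
 $ salloc -A PROJECT-ID -p qcpu --nodes=2 --ntasks-per-node=128 --cpus-per-task=1
 ```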
 
 You are advised to use the `-d` option to point to a directory in SCRATCH file system.
 MOLPRO can produce a large amount of temporary data during its run,
diff --git a/content/docs/software/chemistry/orca.mdx b/content/docs/software/chemistry/orca.mdx
index 73c8c9f1..352e462f 100644
--- a/content/docs/software/chemistry/orca.mdx
+++ b/content/docs/software/chemistry/orca.mdx
@@ -91,8 +91,9 @@ Your serial computation can be easily converted to parallel.
 Simply specify the number of parallel processes by the `%pal` directive.
 In this example, 1 node, 16 cores are used.
 
-!!! warning
+<Callout type=warn>
     Do not use the `! PAL` directive as only PAL2 to PAL8 is recognized.
+</Callout>
 
 ```bash
     ! HF SVP
@@ -122,9 +123,10 @@ ml ORCA/6.0.0-gompi-2023a-avx2
 $(which orca) orca_parallel.inp > output.out
 ```
 
-!!! note
+<Callout>
     When running ORCA in parallel, ORCA should **NOT** be started with `mpirun` (e.g. `mpirun -np 4 orca`, etc.)
     like many MPI programs and **has to be called with a full pathname**.
+</Callout>
 
 Submit this job to the queue and see the output file.
 
diff --git a/content/docs/software/chemistry/vasp.mdx b/content/docs/software/chemistry/vasp.mdx
index 9ca744b8..2a39fc08 100644
--- a/content/docs/software/chemistry/vasp.mdx
+++ b/content/docs/software/chemistry/vasp.mdx
@@ -29,8 +29,9 @@ mpirun vasp_std
 
 ### VASP Compilations
 
-!!! note
+<Callout>
     Starting from version 6.3.1, we compile VASP with HDF5 support.
+</Callout>
 
 VASP can be ran using several different binaries, each being compiled for a specific purpose, for example:
 
diff --git a/content/docs/software/data-science/dask.mdx b/content/docs/software/data-science/dask.mdx
index c1043575..26475b4f 100644
--- a/content/docs/software/data-science/dask.mdx
+++ b/content/docs/software/data-science/dask.mdx
@@ -8,8 +8,9 @@ versions of popular Python data science libraries like
 [numpy](https://docs.dask.org/en/latest/array.html) or
 [Pandas](https://docs.dask.org/en/latest/dataframe.html).
 
-!!! tip
+<Callout>
     For links to Python documentation, style guide, and introductory tutorial, see the [Python page][a].
+</Callout>
 
 ## Installation
 
@@ -48,11 +49,12 @@ with the server.
 > There are some performance considerations to be taken into account regarding Dask cluster
 > deployment, see [below](#dask-performance-considerations) for more information.
 
-!!! note
+<Callout>
     All the following deployment methods assume that you are inside a Python environment that has
     Dask installed. Do not forget to load Python and activate the correct virtual environment at
     the beginning of your job! And also do the same after connecting to any worker nodes
     manually using SSH.
+</Callout>
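 
 A minimal sketch only; the module version and virtual environment path are illustrative, not taken from this documentation:
 
 ```console
 $ ml Python/3.10.8-GCCcore-12.2.0
 $ source ~/venvs/dask/bin/activate
 ```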
 
 ### Manual Deployment
 
diff --git a/content/docs/software/debuggers/allinea-ddt.mdx b/content/docs/software/debuggers/allinea-ddt.mdx
index 10af360d..7ad00c37 100644
--- a/content/docs/software/debuggers/allinea-ddt.mdx
+++ b/content/docs/software/debuggers/allinea-ddt.mdx
@@ -34,8 +34,9 @@ Load the Allinea DDT module:
 $ ml Forge
 ```
 
-!!! note
+<Callout>
     Loading default modules is [**not** recommended][2].
+</Callout>
 
 Compile the code:
 
@@ -48,10 +49,10 @@ $ mpif90 -g -O0 -o test_debug test.f
 
 Before debugging, you need to compile your code with these flags:
 
-!!! note
+<Callout>
     `-g`: Generates extra debugging information usable by GDB. `-g3` includes even more debugging information. This option is available for GNU and Intel C/C++ and Fortran compilers.
-
     `-O0`: Suppresses all optimizations.
+</Callout>
 
 ## Starting a Job With DDT
 
diff --git a/content/docs/software/debuggers/cube.mdx b/content/docs/software/debuggers/cube.mdx
index 70523b1c..974c4edd 100644
--- a/content/docs/software/debuggers/cube.mdx
+++ b/content/docs/software/debuggers/cube.mdx
@@ -29,8 +29,9 @@ $ ml av Cube
 
 CUBE is a graphical application. Refer to the Graphical User Interface documentation for a list of methods to launch graphical applications clusters.
 
-!!! note
+<Callout>
     Analyzing large data sets can consume large amount of CPU and RAM. Do not perform large analysis on login nodes.
+</Callout>
 
 After loading the appropriate module, simply launch the cube command, or alternatively you can use the `scalasca -examine` command to launch the GUI. Note that for Scalasca data sets, if you do not analyze the data with `scalasca -examine` before opening them with CUBE, not all performance data will be available.
 
diff --git a/content/docs/software/debuggers/intel-vtune-profiler.mdx b/content/docs/software/debuggers/intel-vtune-profiler.mdx
index 0451eecc..165d5ded 100644
--- a/content/docs/software/debuggers/intel-vtune-profiler.mdx
+++ b/content/docs/software/debuggers/intel-vtune-profiler.mdx
@@ -50,8 +50,9 @@ and launch the GUI:
 $ vtune-gui
 ```
 
-!!! Warning
+<Callout type=warn>
     The command line `amplxe-gui` is deprecated. Use `vtune-gui` instead.
+</Callout>
 
 The GUI will open in a new window. Click on "New Project..." to create a new project. After clicking OK, a new window with project properties will appear.  At "Application:", select the path to your binary you want to profile (the binary should be compiled with the `-g` flag). You can also select some additional options such as command line arguments. Click OK to create the project.
 
@@ -67,8 +68,9 @@ The command line will look like this:
 vtune -collect hotspots -app-working-dir /home/$USER/tmp -- /home/$USER/tmp/sgemm
 ```
 
-!!! Warning
+<Callout type=warn>
     The command line `amplxe-cl` is a relative link to command `vtune`.
+</Callout>
 
 Copy the line to clipboard and then you can paste it in your jobscript or in the command line. After the collection is run, open the GUI again, click the menu button in the upper right corner, and select "Open > Result...". The GUI will load the results from the run.
 
diff --git a/content/docs/software/debuggers/scalasca.mdx b/content/docs/software/debuggers/scalasca.mdx
index aab12982..5d2bfdb2 100644
--- a/content/docs/software/debuggers/scalasca.mdx
+++ b/content/docs/software/debuggers/scalasca.mdx
@@ -44,8 +44,9 @@ Some notable Scalasca options are:
 * `-t` enables trace data collection. By default, only summary data are collected.
 * `-e <directory>` specifies a directory to which the collected data is saved. By default, Scalasca saves the data to a directory with the scorep\_ prefix, followed by the name of the executable and the launch configuration.
 
-!!! note
+<Callout>
     Scalasca can generate a huge amount of data, especially if tracing is enabled. Consider saving the data to a scratch directory.
+</Callout>
 
 ### Analysis of Reports
 
diff --git a/content/docs/software/debuggers/total-view.mdx b/content/docs/software/debuggers/total-view.mdx
index 4a8135a7..c2ecfddf 100644
--- a/content/docs/software/debuggers/total-view.mdx
+++ b/content/docs/software/debuggers/total-view.mdx
@@ -60,10 +60,10 @@ $ mpif90 -g -O0 -o test_debug test.f
 
 Before debugging, you need to compile your code with theses flags:
 
-!!! note
+<Callout>
     `-g` Generates extra debugging information usable by GDB. `-g3` includes additional debugging information. This option is available for GNU, Intel C/C++, and Fortran compilers.
-
     `-O0` Suppresses all optimizations.
+</Callout>
 
 ## Starting a Job With TotalView
 
@@ -95,8 +95,9 @@ $ totalview test_debug
 
 To debug a parallel code compiled with **OpenMPI**, you need to setup your TotalView environment:
 
-!!! hint
+<Callout>
     To be able to run a parallel debugging procedure from the command line without stopping the debugger in the mpiexec source code, you have to add the following function to your **~/.tvdrc** file.
+</Callout>
 
 ```console
 proc mpi_auto_run_starter {loaded_id} {
diff --git a/content/docs/software/intel/intel-suite/intel-compilers.mdx b/content/docs/software/intel/intel-suite/intel-compilers.mdx
index fd09e94c..71ca4ba6 100644
--- a/content/docs/software/intel/intel-suite/intel-compilers.mdx
+++ b/content/docs/software/intel/intel-suite/intel-compilers.mdx
@@ -7,10 +7,11 @@ Intel compilers are compilers for Intel processor-based systems, available for M
 
 ## Installed Versions
 
-!!! important "New Compilers in intel/2023a Module"
+<Callout type=warn>
     An `intel/2023a` module has been installed on the Karolina and Barbora clusters.
     This module contains new compilers `icx`, `icpx`, and `ifx`.<br>
     See the porting guides for [ICC Users to DPCPP or ICX][b] or for [Intel® Fortran Compiler][c].
+</Callout>
 
 Intel compilers are available in multiple versions via the `intel` module. The compilers include the icc C and C++ compiler and the ifort Fortran 77/90/95 compiler.
 
@@ -34,8 +35,9 @@ Intel compilers provide vectorization of the code via the AVX-2/AVX-512 instruct
 
 For maximum performance on the Barbora cluster compute nodes, compile your programs using the AVX-512 instructions, with reporting where the vectorization was used. We recommend the following compilation options for high performance.
 
-!!! info
+<Callout>
     Barbora non-accelerated nodes support AVX-512 instructions (cn1-cn192).
+</Callout>
 
 ```console
 $ icc -ipo -O3 -xCORE-AVX512 -qopt-report1 -qopt-report-phase=vec myprog.c mysubroutines.c -o myprog.x
@@ -49,8 +51,9 @@ For maximum performance on the Barbora GPU nodes or Karolina cluster compute nod
 $ icc -ipo -O3 -xCORE-AVX2 -qopt-report1 -qopt-report-phase=vec myprog.c mysubroutines.c -o myprog.x
 ```
 
-!!! warning
+<Callout type=warn>
     Karolina cluster has AMD cpu, use compiler options `-march=core-avx2`.
+</Callout>
 
 In this example, we compile the program enabling interprocedural optimizations between source files (`-ipo`), aggressive loop optimizations (`-O3`), and vectorization (`-xCORE-AVX2`).
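 
 A hedged variant of the compile line above for Karolina's AMD nodes, swapping in the `-march=core-avx2` option from the warning (source file names as in the example above):
 
 ```console
 $ icc -ipo -O3 -march=core-avx2 -qopt-report1 -qopt-report-phase=vec myprog.c mysubroutines.c -o myprog.x
 ```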
 
diff --git a/content/docs/software/isv_licenses.mdx b/content/docs/software/isv_licenses.mdx
index 097d74d3..1d901b86 100644
--- a/content/docs/software/isv_licenses.mdx
+++ b/content/docs/software/isv_licenses.mdx
@@ -11,8 +11,9 @@ If an ISV application was purchased for educational (research) purposes and also
 
 ## Overview of the Licenses Usage
 
-!!! note
+<Callout>
     The overview is generated every minute and is accessible from the web or command line interface.
+</Callout>
 
 ### Web Interface
 
diff --git a/content/docs/software/karolina-compilation.mdx b/content/docs/software/karolina-compilation.mdx
index c8f37fad..dc3d6446 100644
--- a/content/docs/software/karolina-compilation.mdx
+++ b/content/docs/software/karolina-compilation.mdx
@@ -10,8 +10,9 @@ When compiling your code, it is important to select right compiler flags;
 otherwise, the code will not be SIMD vectorized, resulting in severely degraded performance.
 Depending on the compiler, you should use these flags:
 
-!!! important
+<Callout type=warn>
     `-Ofast` optimization may result in unpredictable behavior (e.g. a floating point overflow).
+</Callout>
 
 | Compiler | Module   | Command | Flags                   |
 | -------- |----------| --------|-------------------------|
@@ -85,8 +86,9 @@ This assumes you have allocated 2 full nodes on Karolina using SLURM's directive
 
 **Don't forget** before the run to ensure you have the correct modules and loaded and that you have set up the LD_LIBRARY_PATH environment variable set as shown above (e.g. part of your submission script for SLURM).
 
-!!! note
+<Callout>
     Most MPI libraries do the binding automatically. The binding of MPI ranks can be inspected for any MPI by running  `$ mpirun -n num_of_ranks numactl --show`. However, if the ranks spawn threads, binding of these threads should be done via the environment variables described above.
+</Callout>
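 
 As a sketch only, assuming the standard OpenMP variables are the binding variables referred to above (the rank count and binary name are placeholders):
 
 ```console
 $ export OMP_NUM_THREADS=16
 $ export OMP_PLACES=cores
 $ export OMP_PROC_BIND=close
 $ mpirun -n 8 ./myhybridprog.x
 ```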
 
 The choice of BLAS library and its performance may be verified with our benchmark,
 see  [Lorenz BLAS performance benchmark](https://code.it4i.cz/jansik/lorenz/-/blob/main/README.md).
diff --git a/content/docs/software/lang/conda.mdx b/content/docs/software/lang/conda.mdx
index 3c28d1ae..f9bce758 100644
--- a/content/docs/software/lang/conda.mdx
+++ b/content/docs/software/lang/conda.mdx
@@ -12,8 +12,9 @@ Anaconda supports Python 3.X. Default Python is 3.8, depending on which installe
 
 On the clusters, we have the Anaconda3 software installed. How to use these modules is shown below.
 
-!!! note
+<Callout>
     Use the `ml av conda` command to get up-to-date versions of the modules.
+</Callout>
 
 ```console
 $ ml av conda
diff --git a/content/docs/software/lang/csc.mdx b/content/docs/software/lang/csc.mdx
index ac2a72d6..242aedce 100644
--- a/content/docs/software/lang/csc.mdx
+++ b/content/docs/software/lang/csc.mdx
@@ -10,8 +10,9 @@ $ ml av mono
    Mono/6.12.0.122
 ```
 
-!!! note
+<Callout>
     Use the `ml av mono` command to get up-to-date versions of the modules.
+</Callout>
 
 Activate C# by loading the Mono module:
 
diff --git a/content/docs/software/lang/python.mdx b/content/docs/software/lang/python.mdx
index 22019a4a..870349da 100644
--- a/content/docs/software/lang/python.mdx
+++ b/content/docs/software/lang/python.mdx
@@ -13,11 +13,13 @@ Python features a dynamic type system and automatic memory management and suppor
 
 On the clusters, we have the Python 3.X software installed. How to use these modules is shown below.
 
-!!! note
+<Callout>
     Use the `ml av python/` command to get up-to-date versions of the modules.
+</Callout>
 
-!!! warn
+<Callout type=error>
     Python 2.7 is not supported - [EOL][b] January 1st, 2020.
+</Callout>
 
 ```console
 $ ml av python/
diff --git a/content/docs/software/machine-learning/netket.mdx b/content/docs/software/machine-learning/netket.mdx
index 64fd83b8..51066052 100644
--- a/content/docs/software/machine-learning/netket.mdx
+++ b/content/docs/software/machine-learning/netket.mdx
@@ -17,8 +17,9 @@ Load the `Python/3.8.6-GCC-10.2.0-NetKet` and `intel/2020b` modules.
 
 ### Example for Multi-GPU Node
 
-!!! important
+<Callout type=warn>
     Set the visible device in the environment variable before loading jax and NetKet, as NetKet loads jax.
+</Callout>
 
 ```code
 # J1-J2 model
diff --git a/content/docs/software/machine-learning/tensorflow.mdx b/content/docs/software/machine-learning/tensorflow.mdx
index 5a9cffc6..2bf00ecb 100644
--- a/content/docs/software/machine-learning/tensorflow.mdx
+++ b/content/docs/software/machine-learning/tensorflow.mdx
@@ -130,13 +130,15 @@ with strategy.scope():
 model.fit(train_dataset, epochs=100)
 ```
 
-!!! note
+<Callout>
     If using the `NCCL` strategy causes runtime errors, try to run your application with the
     environment variable `TF_FORCE_GPU_ALLOW_GROWTH` set to `true`.
+</Callout>
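 
 For instance, before launching the training script (the script name is a placeholder):
 
 ```console
 $ export TF_FORCE_GPU_ALLOW_GROWTH=true
 $ python train.py
 ```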
 
-!!! tip
+<Callout>
     For real-world multi-GPU training, it might be better to use a dedicated multi-GPU framework such
     as [Horovod](https://github.com/horovod/horovod).
+</Callout>
 
 <!---
 
diff --git a/content/docs/software/modules/lmod.mdx b/content/docs/software/modules/lmod.mdx
index 346ba9f9..b964446a 100644
--- a/content/docs/software/modules/lmod.mdx
+++ b/content/docs/software/modules/lmod.mdx
@@ -40,8 +40,9 @@ Currently Loaded Modules:
    S:  Module is Sticky, requires --force to unload or purge
 ```
 
-!!! tip
+<Callout>
     For more details on sticky modules, see the section on [ml purge][1].
+</Callout>
 
 ## Searching for Available Modules
 
@@ -64,8 +65,9 @@ In the current module naming scheme, each module name consists of two parts:
 * the part before the first /, corresponding to the software name
 * the remainder, corresponding to the software version, the compiler toolchain that was used to install the software, and a possible version suffix
 
-!!! tip
+<Callout>
     `(D)` indicates that this particular version of the module is the default, but we strongly recommend to not rely on this, as the default can change at any point. Usually, the default points to the latest version available.
+</Callout>
 
 ## Searching for Modules
 
@@ -109,8 +111,9 @@ $ ml spider gcc
 ---------------------------------------------------------------------------------
 ```
 
-!!! tip
+<Callout>
     `spider` is case-insensitive.
+</Callout>
 
 If you use `spider` on a full module name like `GCC/6.2.0-2.27`, it will tell you on which cluster(s) that module is available:
 
@@ -149,8 +152,9 @@ Use "module spider" to find all possible modules.
 Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".
 ```
 
-!!! tip
+<Callout>
     The specified software name is case-insensitive.
+</Callout>
 
 Lmod does a partial match on the module name, so sometimes you need to specify the end of the software name you are interested in:
 
@@ -197,15 +201,17 @@ setenv("EBDEVELPYTHON","/apps/all/Python/3.5.2/easybuild/Python-3.5.2-easybuild-
 setenv("EBEXTSLISTPYTHON","setuptools-20.1.1,pip-8.0.2,nose-1.3.7")
 ```
 
-!!! tip
+<Callout>
     Note that both the direct changes to the environment as well as other modules that will be loaded are shown.
+</Callout>
 
 ## Loading Modules
 
-!!! warning
+<Callout type=warn>
     Always specify the name **and** the version when loading a module.
     Loading a default module in your script (e.g. `$ ml intel`) will cause divergent results in the case the default module is upgraded.
     **IT4Innovations is not responsible for any loss of allocated core- or node-hours resulting from the use of improper modules in your calculations.**
+</Callout>
 
 To effectively apply the changes to the environment that are specified by a module, use `ml` and specify the name of the module.
 For example, to set up your environment to use Intel:
@@ -228,13 +234,15 @@ Currently Loaded Modules:
    H:  Hidden Module
 ```
 
-!!! tip
+<Callout>
     Note that even though we only loaded a single module, the output of `ml` shows that a whole set of modules was loaded. These are required dependencies for `intel/2017.00`.
+</Callout>
 
 ## Conflicting Modules
 
-!!! warning
+<Callout type=warn>
     It is important to note that **only modules that are compatible with each other can be loaded together. In particular, modules must be installed either with the same toolchain as the modules that are already loaded, or with a compatible (sub)toolchain**.
+</Callout>
 
 For example, once you have loaded one or more modules that were installed with the `intel/2017.00` toolchain, all other modules that you load should have been installed with the same toolchain.
 
diff --git a/content/docs/software/mpi/mpi.mdx b/content/docs/software/mpi/mpi.mdx
index 198d036a..7b0bca08 100644
--- a/content/docs/software/mpi/mpi.mdx
+++ b/content/docs/software/mpi/mpi.mdx
@@ -11,9 +11,9 @@ The Karolina cluster provides several implementations of the MPI library:
 
 MPI libraries are activated via the environment modules.
 
-!!! note
-
+<Callout>
     All OpenMPI modules are configured with `setenv("SLURM_MPI_TYPE", "pmix_v4")`.
+</Callout>
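+
+For example, a minimal sketch of verifying the setting (always specify a module version in real jobs; the executable name is illustrative):
+
+```console
+$ ml OpenMPI
+$ echo $SLURM_MPI_TYPE
+pmix_v4
+$ srun ./mympiprog.x
+```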
 
 Look up the modulefiles/mpi section in `ml av`:
 
@@ -156,18 +156,20 @@ The MPI program executable must be available within the same path on all nodes.
 
 The optimal way to run an MPI program depends on its memory requirements, memory access pattern and communication pattern.
 
-!!! note
+<Callout>
     Consider these ways to run an MPI program:
     1. One MPI process per node, 128 threads per process
     2. Two MPI processes per node, 64 threads per process
     3. 128 MPI processes per node, 1 thread per process.
+</Callout>
 
 **One MPI** process per node, using 128 threads, is most useful for memory demanding applications that make good use of processor cache memory and are not memory-bound. This is also a preferred way for communication intensive applications as one process per node enjoys full bandwidth access to the network interface.
 
 **Two MPI** processes per node, using 64 threads each, bound to processor socket is most useful for memory bandwidth-bound applications such as BLAS1 or FFT with scalable memory demand. However, note that the two processes will share access to the network interface. The 64 threads and socket binding should ensure maximum memory access bandwidth and minimize communication, migration, and NUMA effect overheads.
 
-!!! note
+<Callout type=warn>
     Important! Bind every OpenMP thread to a core!
+</Callout>
 
 In the previous two cases with one or two MPI processes per node, the operating system might still migrate OpenMP threads between cores. You want to avoid this by setting the `KMP_AFFINITY` or `GOMP_CPU_AFFINITY` environment variables.
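+
+For example, a minimal sketch of the two-processes-per-node variant on a single node, assuming OpenMPI and the Intel OpenMP runtime (the executable name is illustrative):
+
+```console
+$ export OMP_NUM_THREADS=64
+$ export KMP_AFFINITY=granularity=fine,compact
+$ mpirun -np 2 --map-by socket --bind-to socket ./mympiprog.x
+```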
 
diff --git a/content/docs/software/numerical-languages/matlab.mdx b/content/docs/software/numerical-languages/matlab.mdx
index 26f139cc..95725c12 100644
--- a/content/docs/software/numerical-languages/matlab.mdx
+++ b/content/docs/software/numerical-languages/matlab.mdx
@@ -250,9 +250,10 @@ For example, a job that needs eight workers will request nine CPU cores.
 
 Run the same simulation but increase the Pool size. This time, to retrieve the results later, keep track of the job ID.
 
-!!! note
+<Callout>
     For some applications, there will be a diminishing return when allocating too many workers,
     as the overhead may exceed computation time.
+</Callout>
 
 ```
 >> % Get a handle to the cluster
diff --git a/content/docs/software/numerical-languages/opencoarrays.mdx b/content/docs/software/numerical-languages/opencoarrays.mdx
index 7429f4fe..4414201f 100644
--- a/content/docs/software/numerical-languages/opencoarrays.mdx
+++ b/content/docs/software/numerical-languages/opencoarrays.mdx
@@ -74,9 +74,10 @@ end program synchronization_test
 * `sync images(*)` - Synchronize this image to all other
 * `sync images(index)` - Synchronize this image to image with `index`
 
-!!! note
+<Callout>
     `number` is the local variable while `number[index]` accesses the variable in the specific image.
     `number[this_image()]` is the same as `number`.
+</Callout>
 
 ## Compile and Run
 
@@ -95,9 +96,10 @@ The above mentioned *Hello World* program can be compiled as follows:
 $ caf hello_world.f90 -o hello_world.x
 ```
 
-!!! warning
+<Callout type=warn>
     The input file extensions **.f90** and **.F90** are interpreted as *Fortran 90*.
     If the input file extension is **.f** or **.F**, the source code will be interpreted as *Fortran 77*.
+</Callout>
 
 Another method for compiling is by invoking the `mpif90` compiler wrapper directly:
 
diff --git a/content/docs/software/numerical-languages/r.mdx b/content/docs/software/numerical-languages/r.mdx
index 0cbaeee8..b39721ad 100644
--- a/content/docs/software/numerical-languages/r.mdx
+++ b/content/docs/software/numerical-languages/r.mdx
@@ -96,7 +96,7 @@ More information and examples may be obtained directly by reading the documentat
 
 Forking is the simplest to use. The forking family of functions provides a parallelized, drop-in replacement for the serial `apply()` family of functions.
 
-!!! warning
+<Callout type=warn>
     Forking via the `parallel` package provides functionality similar to the OpenMP `omp parallel for` construct.
 
     Only the cores of a single node can be utilized this way!
+</Callout>
diff --git a/content/docs/software/numerical-libraries/intel-numerical-libraries.mdx b/content/docs/software/numerical-libraries/intel-numerical-libraries.mdx
index 130f8bfe..fb1755c5 100644
--- a/content/docs/software/numerical-libraries/intel-numerical-libraries.mdx
+++ b/content/docs/software/numerical-libraries/intel-numerical-libraries.mdx
@@ -15,8 +15,9 @@ $ ml av mkl
    imkl/2019.1.144-iimpi-2019a    imkl/2020.4.304-iompi-2020b
 ```
 
-!!! info
+<Callout>
     `imkl` ... with intel toolchain. `mkl` with system toolchain.
+</Callout>
 
 For more information, see the [Intel MKL][1] section.
 
diff --git a/content/docs/software/nvidia-cuda.mdx b/content/docs/software/nvidia-cuda.mdx
index b1a0e0c1..7d5433be 100644
--- a/content/docs/software/nvidia-cuda.mdx
+++ b/content/docs/software/nvidia-cuda.mdx
@@ -285,10 +285,11 @@ int main(int argc, char **argv)
 }
 ```
 
-!!! note
+<Callout>
     cuBLAS has its own function for data transfers between CPU and GPU memory:
     - [cublasSetVector][c] - transfers data from CPU to GPU memory
     - [cublasGetVector][d] - transfers data from GPU to CPU memory
+</Callout>
 
 To compile the code using the NVCC compiler, the `-lcublas` compiler flag has to be specified:
 
diff --git a/content/docs/software/sdk/openacc-mpi.mdx b/content/docs/software/sdk/openacc-mpi.mdx
index f3c6c331..0886b28b 100644
--- a/content/docs/software/sdk/openacc-mpi.mdx
+++ b/content/docs/software/sdk/openacc-mpi.mdx
@@ -6,9 +6,10 @@ All source code for this tutorial can be downloaded as part of this [tarball][2]
 `SEISMIC_CPML`, developed by Dimitri Komatitsch and Roland Martin from University of Pau, France,
 is a set of ten open-source Fortran 90 programs.
 
-!!!note
+<Callout>
     Before building and running each step,
     make sure that the compiler (`pgfortran`) and MPI wrappers (`mpif90`) are in your path.
+</Callout>
 
 ## Step 0: Evaluation
 
@@ -257,12 +258,13 @@ make verify
 
 ## Step 3: Adding Data Regions
 
-!!! tip
+<Callout>
     Set the environment variable `PGI_ACC_TIME=1` and run your executable.
     This option prints basic profile information such as the kernel execution time,
     data transfer time, initialization time, the actual launch configuration,
     and total time spent in a compute region.
     Note that the total time is measured from the host and includes time spent executing host code within a region.
+</Callout>
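+
+For example, a minimal sketch (the executable name and process count are illustrative):
+
+```console
+$ export PGI_ACC_TIME=1
+$ mpirun -np 4 ./seismic_cpml.x
+```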
 
 To improve performance, you should minimize the time spent transferring data,
 i.e., use the `data` directive.
@@ -409,8 +411,9 @@ For example, any update to the copy of a variable in device memory
 won't be reflected in the host copy until you specified
 using either an update directive or a `copy` clause at a data or compute region boundary.
 
-!!! important
+<Callout type=warn>
     Unintentional loss of coherence between the host and device copy of a variable is one of the most common causes of validation errors in OpenACC programs.
+</Callout>
 
 After making the above change to `SEISMIC_CPML`, the code generated incorrect results. After debugging, it was determined that the section of the time step loop
 that initializes boundary conditions was omitted from an OpenACC compute region.
diff --git a/content/docs/software/tools/apptainer.mdx b/content/docs/software/tools/apptainer.mdx
index 81365fa7..97f49124 100644
--- a/content/docs/software/tools/apptainer.mdx
+++ b/content/docs/software/tools/apptainer.mdx
@@ -16,8 +16,9 @@ Barbora             Karolina
       └── latest          └── latest
 ```
 
-!!! info
+<Callout>
     Current information about available Apptainer images can be obtained by the `ml av` command. The images are listed in the `OS` section.
+</Callout>
 
 The bootstrap scripts, wrappers, features, etc. are located on [it4i-singularity GitLab page][a].
 
@@ -37,13 +38,15 @@ After the module is loaded for the first time, the prepared image is copied into
 The next time you load the module, the version of the image is checked and an image update (if one exists) is offered.
 Then you can update your copy of the image by the `image-update` command.
 
-!!! warning
+<Callout type=warn>
     With an image update, all user changes to the image will be overridden.
+</Callout>
 
 The runscript inside the Apptainer image can be run by the `image-run` command.
 
-!!! note " CentOS/7 module only"
+<Callout title="CentOS/7 module only">
     This command automatically mounts the `/scratch` and `/apps` storage and invokes the image as writable, so user changes can be made.
+</Callout>
 
 Very similar to `image-run` is the `image-exec` command.
 The only difference is that `image-exec` runs a user-defined command instead of a runscript.
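+
+For example, a minimal sketch (the command run inside the image is illustrative):
+
+```console
+$ image-exec cat /etc/redhat-release
+```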
@@ -75,8 +78,9 @@ Preparing image CentOS-7_20230116143612.sif
 Your image of CentOS/7 is at location: /home/username/.apptainer/images/CentOS-7_20230116143612.sif
 ```
 
-!!! tip
+<Callout>
     After the module is loaded for the first time, the prepared image is copied into your home folder to the *.apptainer/images* subfolder.
+</Callout>
 
 ### Wrappers
 
@@ -137,8 +141,9 @@ In the following example, we are using a job submitted by the command:
 $ salloc -A PROJECT_ID -p qcpu --nodes=2 --ntasks-per-node=128 --time=00:30:00
 ```
 
-!!! note
+<Callout>
     We have seen no major performance impact for a job running in an Apptainer container.
+</Callout>
 
 With Apptainer, the MPI usage model is to call `mpirun` from outside the container
 and reference the container from your `mpirun` command.
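+
+For example, a minimal sketch for the allocation above (2 nodes × 128 tasks; the image and executable names are illustrative):
+
+```console
+$ mpirun -np 256 apptainer exec my_image.sif ./mympiprog.x
+```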
@@ -196,8 +201,9 @@ local:$ scp container.img login@login2.clustername.it4i.cz:~/.apptainer/image/co
 * Load module Apptainer (`ml apptainer`)
 * Use your image
 
-!!! note
+<Callout>
     If you want to use the Apptainer wrappers with your own images, load the `apptainer-wrappers/1.0` module and set the environment variable `IMAGE_PATH_LOCAL=/path/to/container.img`.
+</Callout>
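+
+For example, a minimal sketch (the command run inside the image is illustrative):
+
+```console
+$ ml apptainer-wrappers/1.0
+$ export IMAGE_PATH_LOCAL=~/.apptainer/image/container.img
+$ image-exec cat /etc/os-release
+```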
 
 ## How to Edit IT4Innovations Image?
 
diff --git a/content/docs/software/tools/easybuild-images.mdx b/content/docs/software/tools/easybuild-images.mdx
index bfd10631..6ce75d22 100644
--- a/content/docs/software/tools/easybuild-images.mdx
+++ b/content/docs/software/tools/easybuild-images.mdx
@@ -9,9 +9,10 @@ To generate container recipes, use `eb --containerize`, or `eb -C` for short.
 
 The resulting container recipe will leverage EasyBuild to build and install the software that corresponds to the easyconfig files that are specified as arguments to the eb command (and all required dependencies, if needed).
 
-!!! note
+<Callout>
     EasyBuild will refuse to overwrite existing container recipes.
     To re-generate an already existing recipe file, use the `--force` command line option.
+</Callout>
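+
+For example, a minimal sketch of re-generating an existing recipe (the easyconfig name is illustrative; the base-image options described below are still required):
+
+```console
+$ eb Python-3.6.4-foss-2018a.eb -C --force
+```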
 
 ## Base Container Image
 
@@ -97,13 +98,14 @@ ml Python/3.6.4-foss-2018a OpenMPI/2.1.2-GCC-6.4.0-2.28
 %labels
 ```
 
-!!! note
+<Callout>
     We also specify the easyconfig file for the OpenMPI component of `foss/2018a` here, because it requires specific OS dependencies to be installed (see the second `yum ... install` line in the generated container recipe).
     We intend to let EasyBuild take into account the OS dependencies of the entire software stack automatically in a future update.
 
     The generated container recipe includes `pip install -U easybuild` to ensure that the latest version of EasyBuild is used to build the software in the container image, regardless of whether EasyBuild was already present in the container and which version it was.
 
     In addition, the generated module files will follow the default module-naming scheme (EasyBuildMNS). The modules that correspond to the easyconfig files that were specified on the command line will be loaded automatically; see the statements in the %environment section of the generated container recipe.
+</Callout>
 
 ## Example of Building Container Image
 
diff --git a/content/docs/software/tools/easybuild.mdx b/content/docs/software/tools/easybuild.mdx
index cbd80555..cf7d70d6 100644
--- a/content/docs/software/tools/easybuild.mdx
+++ b/content/docs/software/tools/easybuild.mdx
@@ -465,8 +465,9 @@ silence-deprecation-warnings=True
 trace=True
 ```
 
-!!! note
+<Callout>
     Do not forget to add the path to your modules to MODULEPATH using the `module use` command in your `~/.bashrc` so that you can look up and use your installed modules.
+</Callout>
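+
+For example, a minimal sketch of the line to add to `~/.bashrc` (the exact path depends on your EasyBuild configuration):
+
+```console
+$ echo 'module use /path/to/easybuild/modules/all' >> ~/.bashrc
+```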
 
 Template requires you to fill in the `EASYBUILD_ROOT`, `CUDA_CC`, and `USER` variables. `EASYBUILD_ROOT` is the top level directory which will hold all of your EasyBuild related data. `CUDA_CC` defines the CUDA compute capabilities of graphics cards, and `USER` should preferably be set to your username.
 
diff --git a/content/docs/software/tools/singularity.mdx b/content/docs/software/tools/singularity.mdx
index f9cb5ffa..f665e895 100644
--- a/content/docs/software/tools/singularity.mdx
+++ b/content/docs/software/tools/singularity.mdx
@@ -135,8 +135,9 @@ $ apptainer exec lolcow.sif cowsay moo
 
 A user home directory is mounted inside the container automatically. If you need access to the **/SCRATCH** storage for your computation, this must be mounted by the `-B | --bind` option.
 
-!!!Warning
+<Callout type=warn>
       The mounted folder has to exist inside the container or the container image has to be writable!
+</Callout>
 
 ```console
 $ apptainer shell -B /scratch ubuntu.sif
diff --git a/content/docs/software/tools/spack.mdx b/content/docs/software/tools/spack.mdx
index 2271ce8d..2758c64b 100644
--- a/content/docs/software/tools/spack.mdx
+++ b/content/docs/software/tools/spack.mdx
@@ -15,8 +15,9 @@ $ ml av Spack
    Spack/0.16.2 (D)
 ```
 
-!!! note
+<Callout>
     Spack/default is the rule for setting up local installation
+</Callout>
 
 ## First Usage Module Spack/Default
 
@@ -328,8 +329,9 @@ $ spack install git@2.29.0
 [+] /home/kru0052/Spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/git-2.29.0-cabbbb7qozeijgspy2wl3hf6on6f4b4c
 ```
 
-!!! warning
+<Callout type=warn>
     `FTP` on the cluster is not allowed; you must edit the source link.
+</Callout>
 
 ### Edit Rule
 
@@ -337,8 +339,9 @@ $ spack install git@2.29.0
 $ spack edit git
 ```
 
-!!! note
+<Callout>
     To change the source link (`ftp://` to `http://`), use `spack create URL -f` to regenerate rules.
+</Callout>
 
 ## Available Spack Module
 
diff --git a/content/docs/software/tools/virtualization.mdx b/content/docs/software/tools/virtualization.mdx
index 5d2ebcfa..3ae8ea0f 100644
--- a/content/docs/software/tools/virtualization.mdx
+++ b/content/docs/software/tools/virtualization.mdx
@@ -31,11 +31,13 @@ Virtualization has some drawbacks, as well. It is not so easy to set up an effic
 
 The solution described in the [HOWTO][2] section is suitable for single node tasks; it does not introduce virtual machine clustering.
 
-!!! note
+<Callout>
     Consider virtualization as a last resort solution for your needs.
+</Callout>
 
-!!! warning
+<Callout type=warn>
     Consult use of virtualization with IT4Innovations' support.
+</Callout>
 
 For running a Windows application (when the source code and Linux native application are not available), consider use of Wine, Windows compatibility layer. Many Windows applications can be run using Wine with less effort and better performance than when using virtualization.
 
@@ -43,8 +45,9 @@ For running a Windows application (when the source code and Linux native applica
 
 IT4Innovations does not provide any licenses for operating systems and software of virtual machines. Users are (in accordance with [Acceptable use policy document][a]) fully responsible for licensing all software running on virtual machines on clusters. Be aware of complex conditions of licensing software in virtual environments.
 
-!!! note
+<Callout>
     Users are responsible for licensing OS (e.g. MS Windows) and all software running on their virtual machines.
+</Callout>
 
 ## Howto
 
@@ -249,8 +252,9 @@ $ qemu-system-x86_64 -drive file=win.img,media=disk,if=virtio -enable-kvm -cpu h
 
 Port forwarding allows you to access the virtual machine via SSH (Linux) or RDP (Windows) by connecting to the IP address of the compute node (port 2222 for SSH). You must use the VPN network.
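+
+For example, a minimal sketch of connecting to the guest over the forwarded port (the user and compute node are placeholders):
+
+```console
+$ ssh -p 2222 user@<compute-node>
+```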
 
-!!! note
+<Callout>
     Keep in mind that if you use virtio devices, you must have virtio drivers installed on your virtual machine.
+</Callout>
 
 ### Networking and Data Sharing
 
diff --git a/content/docs/software/viz/NICEDCVsoftware.mdx b/content/docs/software/viz/NICEDCVsoftware.mdx
index 1cb45612..11648421 100644
--- a/content/docs/software/viz/NICEDCVsoftware.mdx
+++ b/content/docs/software/viz/NICEDCVsoftware.mdx
@@ -36,8 +36,9 @@ You are connected to Vizserv1 now.
 
 * Create a VNC Server Password
 
-!!! note
+<Callout>
     A VNC server password should be set before the first login to VNC server. Use a strong password.
+</Callout>
 
 ```console
 [yourusername@vizserv1 ~]$ vncpasswd
@@ -54,13 +55,15 @@ username :61
 .....
 ```
 
-!!! note
+<Callout>
     The VNC server runs on port 59xx, where xx is the display number. To get your port number, simply add 5900 + display number, in our example 5900 + 11 = 5911. **Calculate your own port number and use it instead of 5911 from examples below**.
+</Callout>
 
 * Start your remote VNC server
 
-!!! note
+<Callout>
     Choose a display number that is different from those of other users. Also remember that the display number must be lower than or equal to 99.
+</Callout>
 
 ```console
 [yourusername@vizserv1 ~]$ vncserver :11 -geometry 1600x900 -depth 24
@@ -80,8 +83,9 @@ yourusername :11
 .....
 ```
 
-!!! note
+<Callout>
     You started a new VNC server. The server is listening on port 5911 (5900 + 11 = 5911).
+</Callout>
 
 Your VNC server is listening on port 59xx, in our example on port 5911.
 
@@ -152,8 +156,9 @@ ssh     5675  user    5u  IPv6 571419      0t0  TCP ip6-localhost:5911 (LISTEN)
 ssh     5675  user    6u  IPv4 571420      0t0  TCP localhost:5911 (LISTEN)
 ```
 
-!!! note
+<Callout>
     PID in our example is 5675. You also need to use the correct port number for both commands above. In this example, the port number is 5911. Your PID and port number may differ.
+</Callout>
 
 * Kill the process
 
diff --git a/content/docs/software/viz/gpi2.mdx b/content/docs/software/viz/gpi2.mdx
index be4ffa56..c5ac43cb 100644
--- a/content/docs/software/viz/gpi2.mdx
+++ b/content/docs/software/viz/gpi2.mdx
@@ -1,8 +1,10 @@
 ---
 title: "GPI-2"
 ---
-!!!warning
+<Callout type=warn>
     This page has not been updated yet. The page does not reflect the transition from PBS to Slurm.
+</Callout>
+
 ## Introduction
 
 Programming Next Generation Supercomputers: GPI-2 is an API library for asynchronous interprocess, cross-node communication. It provides a flexible, scalable, and fault tolerant interface for parallel applications.
@@ -21,8 +23,9 @@ The module sets up environment variables required for linking and running GPI-2
 
 ## Linking
 
-!!! note
+<Callout>
     Link with -lGPI2 -libverbs
+</Callout>
 
 Load the `gpi2` module. Link using `-lGPI2` and `-libverbs` switches to link your code against GPI-2. The GPI-2 requires the OFED InfiniBand communication library ibverbs.
 
@@ -44,8 +47,9 @@ $ gcc myprog.c -o myprog.x -Wl,-rpath=$LIBRARY_PATH -lGPI2 -libverbs
 
 ## Running the GPI-2 Codes
 
-!!! note
+<Callout>
     `gaspi_run` starts the GPI-2 application
+</Callout>
 
 The `gaspi_run` utility is used to start and run GPI-2 applications:
 
@@ -79,8 +83,9 @@ machinefle:
 
 This machinefile will run 4 GPI-2 processes, two on node cn79 and two on node cn80.
 
-!!! note
+<Callout>
     Use `mpiprocs` to control how many GPI-2 processes will run per node.
+</Callout>
 
 Example:
 
@@ -92,8 +97,9 @@ This example will produce $PBS_NODEFILE with 16 entries per node.
 
 ### Gaspi_logger
 
-!!! note
+<Callout>
     `gaspi_logger` views the output from GPI-2 application ranks.
+</Callout>
 
 The `gaspi_logger` utility is used to view the output from all nodes except the master node (rank 0). `gaspi_logger` is started, on another session, on the master node - the node where the `gaspi_run` is executed. The output of the application, when called with `gaspi_printf()`, will be redirected to the `gaspi_logger`. Other I/O routines (e.g. `printf`) will not.
 
diff --git a/content/docs/software/viz/openfoam.mdx b/content/docs/software/viz/openfoam.mdx
index 1086c4b2..99518d41 100644
--- a/content/docs/software/viz/openfoam.mdx
+++ b/content/docs/software/viz/openfoam.mdx
@@ -53,8 +53,9 @@ $ ml openfoam
 $ source $FOAM_BASHRC
 ```
 
-!!! note
+<Callout>
     Load the correct module for your requirements (compiler: GCC/ICC, precision: DP/SP).
+</Callout>
 
 Create a project directory within the $HOME/OpenFOAM directory named `<USER>-<OFversion>` and create a directory named `run` within it:
 
@@ -117,8 +118,9 @@ Run the second case, for example external incompressible turbulent flow - case -
 
 First, we must run the serial applications blockMesh and decomposePar to prepare for the parallel computation.
 
-!!! note
+<Callout>
     Create a test.sh Bash script:
+</Callout>
 
 ```bash
 #!/bin/bash
@@ -143,8 +145,9 @@ $ sbatch -A PROJECT_ID -p qcpu --nodes=1 --ntasks=16 --time=03:00:00 test.sh
 
 This job creates a simple block mesh and domain decomposition. Check your decomposition and submit parallel computation:
 
-!!! note
+<Callout>
     Create a testParallel.slurm script:
+</Callout>
 
 ```bash
 #!/bin/bash
diff --git a/content/docs/software/viz/paraview.mdx b/content/docs/software/viz/paraview.mdx
index 77c1d62b..de8ab8b3 100644
--- a/content/docs/software/viz/paraview.mdx
+++ b/content/docs/software/viz/paraview.mdx
@@ -17,8 +17,9 @@ Currently, version 5.1.2 compiled with Intel/2017a against the Intel MPI library
 
 On the clusters, ParaView is to be used in the client-server mode. A parallel ParaView server is launched on compute nodes by the user and the client is launched on your desktop PC to control and view the visualization. Download the ParaView client application for your OS [here][b].
 
-!!!Warning
+<Callout type=warn>
     Your version must match the version number installed on the cluster.
+</Callout>
 
 ### Launching Server
 
diff --git a/content/docs/software/viz/qtiplot.mdx b/content/docs/software/viz/qtiplot.mdx
index 88d95767..a2732775 100644
--- a/content/docs/software/viz/qtiplot.mdx
+++ b/content/docs/software/viz/qtiplot.mdx
@@ -20,8 +20,9 @@ $ ml av QtiPlot
 
 ## Running QtiPlot
 
-!!! important
+<Callout type=warn>
     You must first enable the [VNC][a] or [X Window System][b] GUI environment.
+</Callout>
 
 To run QtiPlot, use:
 
diff --git a/content/docs/software/viz/vesta.mdx b/content/docs/software/viz/vesta.mdx
index b41e1922..196498dd 100644
--- a/content/docs/software/viz/vesta.mdx
+++ b/content/docs/software/viz/vesta.mdx
@@ -31,8 +31,9 @@ For the current list of installed versions, use:
 $ ml av vesta
 ```
 
-!!!note Module Availability
+<Callout>
     This module is currently available on the Barbora cluster only.
+</Callout>
 
 [1]: https://jp-minerals.org/vesta/en/
 [2]: https://jp-minerals.org/vesta/archives/VESTA_Manual.pdf
diff --git a/content/docs/software/viz/vgl.mdx b/content/docs/software/viz/vgl.mdx
index 6005d37b..efeddafb 100644
--- a/content/docs/software/viz/vgl.mdx
+++ b/content/docs/software/viz/vgl.mdx
@@ -5,8 +5,9 @@ VirtualGL is an open source program that redirects the 3D rendering commands fro
 
 See the documentation [here][a].
 
-!!! info
+<Callout>
     VirtualGL is available on Barbora and Karolina.
+</Callout>
 
 ## How to Use
 
@@ -18,7 +19,6 @@ Read our documentation on [VNC server][1].
 
 ```console
 Warning: No xauth data; using fake authentication data for X11 forwarding.
-Last login: Tue Mar  3 14:20:18 2020 from vpn-kru0052.it4i.cz
                   ____             _
                  |  _ \           | |
                  | |_) | __ _ _ __| |__   ___  _ __ __ _
@@ -29,20 +29,15 @@ Last login: Tue Mar  3 14:20:18 2020 from vpn-kru0052.it4i.cz
 
                   ...running on Red Hat Enterprise Linux 7.x
 
-kru0052@login1:~$ vncserver :99
+login@login1:~$ vncserver :99
 
-New 'login1.barbora.it4i.cz:99 (kru0052)' desktop is login1.barbora.it4i.cz:99
-
-Starting applications specified in /home/kru0052/.vnc/xstartup
-Log file is /home/kru0052/.vnc/login1.barbora.it4i.cz:99.log
-
-kru0052@login1:~$
+login@login1:~$
 ```
 
 * VNC Client (your local machine)
 
 ```console
-root@toshiba:~# ssh -L 5999:localhost:5999 kru0052@login1.barbora.it4i.cz -X
+root@toshiba:~# ssh -L 5999:localhost:5999 login@login1.barbora.it4i.cz -X
 ```
 
 * Connect to a VNC server from a VNC client (your local machine)
@@ -56,14 +51,15 @@ Or via GUI.
 
 ![](/it4i/img/vnc.jpg)
 
-!!! tip
+<Callout>
     To resize the window scale, use the `xrandr -s 1920x1200` command.
+</Callout>
 
 **Run vglclient on the login server (use the terminal in the local machine VNC window)**
 
 ```console
-kru0052@login1:~$ ml VirtualGL
-kru0052@login1:~$ vglclient
+login@login1:~$ ml VirtualGL
+login@login1:~$ vglclient
 
 VirtualGL Client 64-bit v2.6.1 (Build 20200228)
 Listening for unencrypted connections on port 4242
@@ -75,12 +71,12 @@ Listening for unencrypted connections on port 4242
 **Execute an interactive job on vizserv (use another terminal in the local machine VNC window)**
 
 ```console
-[kru0052@login1.barbora ~]$ salloc -p qviz -A PROJECT_ID --x11
+[login@login1.barbora ~]$ salloc -p qviz -A PROJECT_ID --x11
 salloc: Granted job allocation 694741
 salloc: Waiting for resource configuration
 salloc: Nodes vizserv1 are ready for job
 
-[kru0052@vizserv1.barbora ~]$
+[login@vizserv1.barbora ~]$
 ```
 
 ![](/it4i/img/job.jpg)
@@ -88,9 +84,9 @@ salloc: Nodes vizserv1 are ready for job
 **New SSH connection on vizserv - elimination of the Slurm setting (use another terminal in the local machine VNC window)**
 
 ```console
-kru0052@login1:~$ ssh vizserv1 -X
+login@login1:~$ ssh vizserv1 -X
 Last login: Tue Mar  3 13:54:33 2020 from login1.barbora.it4i.cz
-kru0052@vizserv1:~$
+login@vizserv1:~$
 ```
 
 ![](/it4i/img/ssh.jpg)
@@ -98,7 +94,7 @@ kru0052@vizserv1:~$
 **Run the graphical application**
 
 ```console
-kru0052@vizserv1:~$ vglrun /apps/easybuild/glxgears
+login@vizserv1:~$ vglrun /apps/easybuild/glxgears
 [VGL] NOTICE: Automatically setting VGL_CLIENT environment variable to
 [VGL]    10.32.2.1, the IP address of your SSH client.
 libGL error: unable to load driver: swrast_dri.so
diff --git a/content/docs/storage/cesnet-s3.mdx b/content/docs/storage/cesnet-s3.mdx
index 623dcc39..2df46f7d 100644
--- a/content/docs/storage/cesnet-s3.mdx
+++ b/content/docs/storage/cesnet-s3.mdx
@@ -20,8 +20,9 @@ To access to S3 you must:
 
 IT4I offers two tools for object storage management on Karolina and Barbora:
 
-!!! Note
+<Callout>
     We recommend using the default versions installed.
+</Callout>
 
 * [S3cmd][1]
 * [AWS CLI][2]
diff --git a/content/docs/storage/cesnet-storage.mdx b/content/docs/storage/cesnet-storage.mdx
index 773b833c..c331d9c9 100644
--- a/content/docs/storage/cesnet-storage.mdx
+++ b/content/docs/storage/cesnet-storage.mdx
@@ -16,15 +16,17 @@ The procedure to obtain the CESNET access is quick and simple.
 
 ### Understanding CESNET Storage
 
-!!! note
+<Callout>
     It is very important to understand the CESNET storage before uploading data. [Read][d] first.
+</Callout>
 
 Once registered for CESNET Storage, you may [access the storage][e] in a number of ways. We recommend the SSHFS and RSYNC methods.
 
 ### SSHFS Access
 
-!!! note
+<Callout>
     SSHFS: The storage will be mounted like a local hard drive
+</Callout>
 
 SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it were a local removable hard drive. Files can then be copied in and out in the usual fashion.
 
@@ -69,8 +71,9 @@ $ fusermount -u cesnet
 
 ### Rsync Access
 
-!!! note
+<Callout>
     Rsync provides delta transfer for best performance and can resume interrupted transfers.
+</Callout>
 
 Rsync is a fast and extraordinarily versatile file copying tool. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.
 
diff --git a/content/docs/storage/project-storage.mdx b/content/docs/storage/project-storage.mdx
index d95bea28..956aa39f 100644
--- a/content/docs/storage/project-storage.mdx
+++ b/content/docs/storage/project-storage.mdx
@@ -32,8 +32,9 @@ The project directory is removed after the project's data expiration.
 
 ### POSIX File Access
 
-!!!note "Mountpoints"
+<Callout>
     PROJECT file storages are accessible at mountpoints `/mnt/proj1`, `/mnt/proj2`, and `/mnt/proj3`.
+</Callout>
 
 The PROJECT storage can be accessed via the following nodes:
 
@@ -76,9 +77,10 @@ Project        open-ZZ-ZZ       proj2          844.4 GB      20.0 TB        797
 
 The information can also be found in IT4Innovations' [SCS information system][b].
 
-!!!note
+<Callout>
     At this time, only PIs can see the quotas of their respective projects in IT4Innovations' SCS information system.
     We are working on making this information available to all users assigned to their projects.
+</Callout>
 
 #### Increasing Project Quotas
 
@@ -92,8 +94,9 @@ Default file permissions and ACLs are set by IT4Innovations during project direc
 
 ## Backup and Safety
 
-!!!important "Data Backup"
+<Callout type=warn>
     Data on the PROJECT storage is **not** backed up.
+</Callout>
 
 The PROJECT storage utilizes fully redundant design, redundant devices, highly available services, data redundancy, and snapshots. For increased data protection, disks in each disk array are connected in Distributed RAID6 with two hot-spare disks, meaning the disk array can recover full redundancy after two simultaneous disk failures.
 
@@ -129,8 +132,9 @@ drwxrws---. 16 vop999 open-XX-XX 4096 led 20 16:36 2021-03-07-021446
 
 ## Computing on PROJECT
 
-!!!important "I/O Intensive Jobs"
+<Callout type=error>
     Stage files for intensive I/O calculations onto the SCRATCH storage.
+</Callout>
 
 The PROJECT storage is not primarily intended for computing, and in the majority of cases we strongly recommend avoiding its direct use for computations.
 
-- 
GitLab