diff --git a/docs.it4i/general/resources-allocation-policy.md b/docs.it4i/general/resources-allocation-policy.md
index e7474a077604f9870b21d5511d85a6f35e665be5..e22f0f43cd03644f092177e49dc089f6be35bb62 100644
--- a/docs.it4i/general/resources-allocation-policy.md
+++ b/docs.it4i/general/resources-allocation-policy.md
@@ -1,6 +1,3 @@
-!!!Warning
-    This page has not been fully updated yet. The page does not reflect the transition from PBS to Slurm.
-
 # Resource Allocation Policy
 
 ## Job Queue Policies
@@ -17,7 +14,7 @@ Computational resources are subject to [accounting policy][7].
 
 !!! important
     Queues are divided based on a resource type: `qcpu_` for non-accelerated nodes and `qgpu_` for accelerated nodes. <br><br>
-    On the Karolina's `qgpu` queue, **you can allocate 1/8 of the node - 1 GPU and 16 cores**. <br><br>
+    On Karolina's `qgpu` queue, **you can allocate 1/8 of a node (1 GPU and 16 cores)**.
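+
+For illustration, a minimal interactive allocation of 1/8 of an accelerated node might look like the sketch below, using standard Slurm options (`PROJECT-ID` is a placeholder; whether the 16 cores are assigned automatically with the GPU depends on site configuration, so see [Karolina qgpu allocation][4] for the authoritative syntax):
+
+```console
+$ salloc -A PROJECT-ID -p qgpu --gpus=1
+```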
 
 ### Queues
 
@@ -26,7 +23,6 @@ Computational resources are subject to [accounting policy][7].
 | `qcpu`                           | Production queue for non-accelerated nodes intended for standard production runs. Requires an active project with nonzero remaining resources. Full nodes are allocated. Identical to `qprod`. |
 | `qgpu`                           | Dedicated queue for accessing the NVIDIA accelerated nodes. Requires an active project with nonzero remaining resources. Each node is equipped with 8x NVIDIA A100 GPUs with a total of 320GB HBM2 memory. The PI must explicitly ask support to authorize all users associated with their project to enter the queue. **On Karolina, you can allocate 1/8 of a node (1 GPU and 16 cores)**. For more information, see [Karolina qgpu allocation][4]. |
 | `qcpu_biz`<br>`qgpu_biz`         | Commercial queues, slightly higher priority.                   |
-| `qcpu_eurohpc`<br>`qgpu_eurohpc` | EuroHPC queues, slightly higher priority, **Karolina only**.   |
 | `qcpu_exp`<br>`qgpu_exp`         | Express queues for testing and running very small jobs. Two nodes (without accelerators) are always reserved; a maximum of 8 nodes is available per user. The nodes may be allocated on a per-core basis. Per user, one job may run and up to five jobs may wait in the queue. |
 | `qcpu_free`<br>`qgpu_free`       | Intended for the utilization of free resources after a project has exhausted all of its allocated resources. Note that the queue is **not free of charge**; [normal accounting][2] applies. (This does not apply to DD projects by default; DD projects must request permission after exhausting their computational resources.) Consumed resources will be accounted to the project. Access to the queue is removed if consumed resources exceed 150% of the allocation. Full nodes are allocated. |
 | `qcpu_long`       | Queue for long production runs. Requires an active project with nonzero remaining resources. Only 200 non-accelerated nodes may be accessed. Full nodes are allocated. |
@@ -42,9 +38,12 @@ See the following subsections for the list of queues:
 
 ## Queue Notes
 
-The job time limit defaults to **half the maximum time**, see the table above. Longer time limits can be [set manually, see examples][3].
+The job time limit defaults to **half the maximum time**; see the table above.
+Longer time limits can be [set manually, see examples][3].
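+
+For instance, a longer limit can be requested at submission with Slurm's standard `--time` option (a sketch; `job_script.sh` is a placeholder, and the linked examples are authoritative):
+
+```console
+$ sbatch --time=48:00:00 job_script.sh
+```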
 
-Jobs that exceed the reserved time limit get killed automatically. The time limit can be changed for queuing jobs (state Q) using the `scontrol modify job` command, however it cannot be changed for a running job.
+Jobs that exceed the reserved time limit are killed automatically.
+The time limit of a pending job (state PD) can be changed using the `scontrol update` command;
+it cannot be changed for a running job.
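+
+For example, extending a pending job's limit to 24 hours might look like this (the job ID is a placeholder):
+
+```console
+$ scontrol update JobId=123456 TimeLimit=24:00:00
+```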
 
 ## Queue Status