diff --git a/docs.it4i/general/resources-allocation-policy.md b/docs.it4i/general/resources-allocation-policy.md
index 15ab9735fa0c5ffafa346e17ae39776bcf0fe4a8..65fc765c0151d6ca424a2ac073d80d1fcfafee6a 100644
--- a/docs.it4i/general/resources-allocation-policy.md
+++ b/docs.it4i/general/resources-allocation-policy.md
@@ -4,8 +4,14 @@
 
 The resources are allocated to the job in a fair-share fashion, subject to constraints set by the queue and the resources available to the Project. The Fair-share system of Anselm ensures that individual users may consume approximately equal amounts of resources per week. Detailed information can be found in the [Job scheduling][1] section. The resources are accessible via several queues for queueing the jobs. The queues provide prioritized and exclusive access to the computational resources. The following table provides the queue partitioning overview:
 
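+Jobs are submitted to a particular queue with the `-q` option of `qsub`, for example (a sketch; the project ID and resource request are illustrative):
+
+```console
+$ qsub -q qprod -A OPEN-0-0 -l select=4:ncpus=16 -l walltime=24:00:00 ./myjob.sh
+```
+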
+!!! hint
+    **The qexp queue is configured to run one job per user only.**
+
+!!! note
+    **The qfree queue is not free of charge**. [Normal accounting][2] applies. However, it allows for utilization of free resources once a project has exhausted all of its allocated computational resources. This does not apply to Director's Discretion (DD) projects by default; for DD projects, usage of qfree after exhaustion of computational resources may be allowed upon request.
+
 !!! note
-    Check the queue status [here][z].
+    **The qexp queue is equipped with nodes which do not have exactly the same CPU clock speed.** Should you need nodes with exactly the same CPU speed, select the proper nodes during PBS job submission.
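+
+    Specific hosts can be requested at submission time, for example (a sketch; the node names are illustrative):
+
+    ```console
+    $ qsub -q qexp -l select=1:ncpus=16:host=cn201+1:ncpus=16:host=cn202 ./myjob.sh
+    ```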
 
 ### Anselm
 
@@ -18,11 +24,6 @@ The resources are allocated to the job in a fair-share fashion, subject to const
 | qfat                | yes            | > 0                  | 2 fat nodes                                          | 16        | 200      | yes           | 24/144 h |
 | qfree               | yes            | < 120% of allocation | 180 w/o accelerator                                  | 16        | -1024    | no            | 12 h     |
 
-!!! note
- **The qfree queue is not free of charge**. [Normal accounting][2] applies. However, it allows for utilization of free resources, once a project has exhausted all its allocated computational resources. This does not apply to Director's Discretion projects (DD projects) by default. Usage of qfree after exhaustion of DD projects' computational resources is allowed after request for this queue.
-
-**The qexp queue is equipped with nodes which do not have exactly the same CPU clock speed.** Should you need the nodes to have exactly the same CPU speed, you have to select the proper nodes during the PSB job submission.
-
-* **qexp**, the Express queue: This queue is dedicated to testing and running very small jobs. It is not required to specify a project to enter the qexp. There are always 2 nodes reserved for this queue (w/o accelerators), a maximum 8 nodes are available via the qexp for a particular user, from a pool of nodes containing Nvidia accelerated nodes (cn181-203), MIC accelerated nodes (cn204-207) and Fat nodes with 512GB of RAM (cn208-209). This enables us to test and tune accelerated code and code with higher RAM requirements. The nodes may be allocated on a per core basis. No special authorization is required to use qexp. The maximum runtime in qexp is 1 hour.
+* **qexp**, the Express queue: This queue is dedicated to testing and running very small jobs. It is not required to specify a project to enter the qexp. There are always 2 nodes reserved for this queue (w/o accelerators), and a maximum of 8 nodes are available via the qexp for a particular user, from a pool of nodes containing NVIDIA accelerated nodes (cn181-203), MIC accelerated nodes (cn204-207), and fat nodes with 512 GB of RAM (cn208-209). This enables us to test and tune accelerated code and code with higher RAM requirements. The nodes may be allocated on a per-core basis. No special authorization is required to use qexp. The maximum runtime in qexp is 1 hour.
 * **qprod**, the Production queue: This queue is intended for normal production runs. It is required that an active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, except the reserved ones. 178 nodes without accelerators are included. Full nodes, 16 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
 * **qlong**, the Long queue: This queue is intended for long production runs. It is required that an active project with nonzero remaining resources is specified to enter the qlong. Only 60 nodes without acceleration may be accessed via the qlong queue. Full nodes, 16 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times that of the standard qprod time - 3 x 48 h).
@@ -42,9 +43,6 @@ The resources are allocated to the job in a fair-share fashion, subject to const
 | **qviz** Visualization queue    | yes            | none required        | 2 (with NVIDIA Quadro K5000)                                  | 4         | 150      | no            | 1 / 8h    |
 | **qmic** Intel Xeon Phi cards   | yes            | > 0                  | 864 Intel Xeon Phi cards, max 8 mic per job                   | 0         | 0        | no            | 24 / 48h  |
 
-!!! note
-    **The qfree queue is not free of charge**. [Normal accounting][2] applies. However, it allows for utilization of free resources, once a Project exhausted all its allocated computational resources. This does not apply to Directors Discretion (DD projects) but may be allowed upon request.
-
-* **qexp**, the Express queue: This queue is dedicated for testing and running very small jobs. It is not required to specify a project to enter the qexp. There are 2 nodes always reserved for this queue (w/o accelerator), maximum 8 nodes are available via the qexp for a particular user. The nodes may be allocated on per core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
-* **qprod**, the Production queue: This queue is intended for normal production runs. It is required that active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, however only 86 per job. Full nodes, 24 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
-* **qlong**, the Long queue: This queue is intended for long production runs. It is required that active project with nonzero remaining resources is specified to enter the qlong. Only 336 nodes without acceleration may be accessed via the qlong queue. Full nodes, 24 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times of the standard qprod time - 3 \* 48 h)
+* **qexp**, the Express queue: This queue is dedicated to testing and running very small jobs. It is not required to specify a project to enter the qexp. There are always 2 nodes reserved for this queue (w/o accelerators), and a maximum of 8 nodes are available via the qexp for a particular user. The nodes may be allocated on a per-core basis. No special authorization is required to use it. The maximum runtime in qexp is 1 hour.
+* **qprod**, the Production queue: This queue is intended for normal production runs. It is required that an active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, however, only 86 per job. Full nodes, 24 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
+* **qlong**, the Long queue: This queue is intended for long production runs. It is required that an active project with nonzero remaining resources is specified to enter the qlong. Only 336 nodes without acceleration may be accessed via the qlong queue. Full nodes, 24 cores per node, are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times the standard qprod time - 3 x 48 h).
@@ -68,9 +66,6 @@ The resources are allocated to the job in a fair-share fashion, subject to const
-| qfat                | yes            | > 0                  | 1 fat nodes                                          | 8         | 200      | yes           | 24/144 h |
+| qfat                | yes            | > 0                  | 1 fat node                                           | 8         | 200      | yes           | 24/144 h |
 | qfree               | yes            | < 120% of allocation | 189 w/o accelerator                                  | 36        | -1024    | no            | 12 h     |
 
-!!! note
- **The qfree queue is not free of charge**. [Normal accounting][2] applies. However, it allows for utilization of free resources, once a project has exhausted all its allocated computational resources. This does not apply to Director's Discretion projects (DD projects) by default. Usage of qfree after exhaustion of DD projects' computational resources is allowed after request for this queue.
-
-**The qexp queue is equipped with nodes which do not have exactly the same CPU clock speed.** Should you need the nodes to have exactly the same CPU speed, you have to select the proper nodes during the PSB job submission.
+**The qexp queue is equipped with nodes which do not have exactly the same CPU clock speed.** Should you need nodes with exactly the same CPU speed, select the proper nodes during PBS job submission.
 
-* **qexp**, the Express queue: This queue is dedicated to testing and running very small jobs. It is not required to specify a project to enter the qexp. There are always 2 nodes reserved for this queue (w/o accelerators), a maximum 8 nodes are available via the qexp for a particular user. The nodes may be allocated on a per core basis. No special authorization is required to use qexp. The maximum runtime in qexp is 1 hour.
+* **qexp**, the Express queue: This queue is dedicated to testing and running very small jobs. It is not required to specify a project to enter the qexp. There are always 2 nodes reserved for this queue (w/o accelerators), and a maximum of 8 nodes are available via the qexp for a particular user. The nodes may be allocated on a per-core basis. No special authorization is required to use qexp. The maximum runtime in qexp is 1 hour.
diff --git a/docs.it4i/salomon/storage.md b/docs.it4i/salomon/storage.md
index e22cb52dd49369fa32a14c87ee415f88e1205470..f9bf6b24cd976b9f4a00c2c4a6ae020e059ff1ab 100644
--- a/docs.it4i/salomon/storage.md
+++ b/docs.it4i/salomon/storage.md
@@ -4,14 +4,21 @@
 
-There are two main shared file systems on Salomon cluster, the [HOME][1] and [SCRATCH][2].
+There are two main shared file systems on the Salomon cluster, the [HOME][1] and [SCRATCH][2].
 
+!!! important
+    Storage mounted at /scratch/work/user/ will be decommissioned very soon.
+
+    * will be remounted read-only on 2020-01-31
+    * will be removed from cluster on 2020-02-29
+
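+    Data still needed from this workspace should be moved before the read-only date, for example (a sketch; the destination path is illustrative):
+
+    ```console
+    $ rsync -av /scratch/work/user/$USER/ /scratch/temp/$USER/
+    ```
+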
-All login and compute nodes may access same data on shared file systems. Compute nodes are also equipped with local (non-shared) scratch, ramdisk and tmp file systems.
+All login and compute nodes may access the same data on the shared file systems. Compute nodes are also equipped with local (non-shared) scratch, ramdisk, and tmp file systems.
 
 ## Policy (In a Nutshell)
 
 !!! note
-    \* Use [HOME][1] for your most valuable data and programs.
-    \* Use [WORK][3] for your large project files.
-    \* Use [TEMP][4] for large scratch data.
+
+    * Use [HOME][1] for your most valuable data and programs.
+    * Use [WORK][3] for your large project files.
+    * Use [TEMP][4] for large scratch data.
 
 !!! warning
     Do not use for [archiving][5]!
@@ -48,7 +55,7 @@ Configuration of the SCRATCH Lustre storage
 
 A user file on the Lustre file system can be divided into multiple chunks (stripes) and stored across a subset of the object storage targets (OSTs) (disks). The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing.
 
-When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server ( MDS) and the metadata target ( MDT) for the layout and location of the file's stripes. Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval.
+When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server (MDS) and the metadata target (MDT) for the layout and location of the file's stripes. Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval.
 
 If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency so that all clients see consistent results.
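+
+Striping for a particular file or directory can be inspected and adjusted with the `lfs` utility, for example (a sketch; the paths and stripe count are illustrative):
+
+```console
+$ lfs getstripe /scratch/temp/myfile
+$ lfs setstripe -c 8 /scratch/temp/mydir
+```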
 
diff --git a/docs.it4i/software/isv_licenses.md b/docs.it4i/software/isv_licenses.md
index 7a231eed56dbfe801cc0de1889b310001761cf55..e781d93bb41b2fc05b1e702e9dbf6e08f66d88ba 100644
--- a/docs.it4i/software/isv_licenses.md
+++ b/docs.it4i/software/isv_licenses.md
@@ -73,7 +73,7 @@ Supported application license features:
-Do not hesitate to ask IT4I support for support of additional license features you want to use in your jobs.
+Do not hesitate to ask IT4I support about additional license features you want to use in your jobs.
 
-!!! warnig
-Resource names in PBS Pro are case sensitive.
+!!! warning
+    Resource names in PBS Pro are case sensitive.
 
 ### Example of qsub Statement