diff --git a/docs.it4i/anselm/job-submission-and-execution.md b/docs.it4i/anselm/job-submission-and-execution.md
index d6584a8a96256e5a5ca02747de8b00587671cd05..63490e6a8123229d4b413847f86130fdec396441 100644
--- a/docs.it4i/anselm/job-submission-and-execution.md
+++ b/docs.it4i/anselm/job-submission-and-execution.md
@@ -22,6 +22,9 @@ $ qsub -A Project_ID -q queue -l select=x:ncpus=y,walltime=[[hh:]mm:]ss[.ms] job
 
 The qsub submits the job into the queue, in another words the qsub command creates a request to the PBS Job manager for allocation of specified resources. The resources will be allocated when available, subject to above described policies and constraints. **After the resources are allocated the jobscript or interactive shell is executed on first of the allocated nodes.**
 
+!!! note
+    The PBS nodes statement (qsub -l nodes=nodespec) is not supported on the Anselm cluster. Use the select statement instead, as shown in the example below.
+
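+For illustration, a minimal sketch of the supported select form (assuming Anselm's 16 cores per node; Project_ID, the queue, walltime, and the jobscript name are placeholders):
+
+```console
+$ qsub -A Project_ID -q qprod -l select=2:ncpus=16,walltime=03:00:00 ./myjob
+```
+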
 ### Job Submission Examples
 
 ```console
diff --git a/docs.it4i/anselm/software/intel-xeon-phi.md b/docs.it4i/anselm/software/intel-xeon-phi.md
index f22027b8151eae3f3155a2e7725bbbacebb09fb3..d879361135e715e4af6862ed6636adb45a895fb1 100644
--- a/docs.it4i/anselm/software/intel-xeon-phi.md
+++ b/docs.it4i/anselm/software/intel-xeon-phi.md
@@ -883,18 +883,21 @@ A possible output of the MPI "hello-world" example executed on two hosts and two
 !!! note
     At this point the MPI communication between MIC accelerators on different nodes uses 1Gb Ethernet only.
 
-### Using the PBS Automatically Generated Node-Files
+### Using Automatically Generated Node-Files
 
-PBS also generates a set of node-files that can be used instead of manually creating a new one every time. Three node-files are genereated:
+For user convenience, a set of node-files is generated automatically, so there is no need to create one manually every time. Six node-files are provided:
 
 !!! note
-    **Host only node-file:**
+    **Node-files:**
 
-     - /lscratch/${PBS_JOBID}/nodefile-cn MIC only node-file:
-     - /lscratch/${PBS_JOBID}/nodefile-mic Host and MIC node-file:
-     - /lscratch/${PBS_JOBID}/nodefile-mix
+     - /lscratch/${PBS_JOBID}/nodefile-cn - Hosts only node-file
+     - /lscratch/${PBS_JOBID}/nodefile-mic - MICs only node-file
+     - /lscratch/${PBS_JOBID}/nodefile-mix - Hosts and MICs node-file
+     - /lscratch/${PBS_JOBID}/nodefile-cn-sn - Hosts only node-file, using short names
+     - /lscratch/${PBS_JOBID}/nodefile-mic-sn - MICs only node-file, using short names
+     - /lscratch/${PBS_JOBID}/nodefile-mix-sn - Hosts and MICs node-file, using short names
 
-Each host or accelerator is listed only per files. User has to specify how many jobs should be executed per node using `-n` parameter of the mpirun command.
+Each host or accelerator is listed only once per file. The user has to specify how many processes should be executed per node using the `-n` parameter of the mpirun command.
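+
+As a minimal, hedged sketch, one of the generated node-files might be combined with the `-n` option like this (the binary name and process count are placeholders):
+
+```console
+$ mpirun -machinefile /lscratch/${PBS_JOBID}/nodefile-mix -n 4 ./mpi-test
+```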
 
 ## Optimization
 
diff --git a/docs.it4i/salomon/job-submission-and-execution.md b/docs.it4i/salomon/job-submission-and-execution.md
index dea86065b70048af16b40dc9252525cfc0816de0..9c7ce35a6ba00c469e2b2c63a480e42e5b36e85c 100644
--- a/docs.it4i/salomon/job-submission-and-execution.md
+++ b/docs.it4i/salomon/job-submission-and-execution.md
@@ -72,17 +72,17 @@ In this example, we allocate 4 nodes, with 24 cores per node (totalling 96 cores
 ### UV2000 SMP
 
 !!! note
-    14 NUMA nodes available on UV2000
+    13 NUMA nodes available on UV2000
     Per NUMA node allocation.
     Jobs are isolated by cpusets.
 
-The UV2000 (node uv1) offers 3328GB of RAM and 112 cores, distributed in 14 NUMA nodes. A NUMA node packs 8 cores and approx. 236GB RAM. In the PBS the UV2000 provides 14 chunks, a chunk per NUMA node (see [Resource allocation policy](resources-allocation-policy/)). The jobs on UV2000 are isolated from each other by cpusets, so that a job by one user may not utilize CPU or memory allocated to a job by other user. Always, full chunks are allocated, a job may only use resources of the NUMA nodes allocated to itself.
+The UV2000 (node uv1) offers 3TB of RAM and 104 cores, distributed across 13 NUMA nodes. A NUMA node packs 8 cores and approx. 247GB RAM (with the exception of NUMA node 11, which has only 123GB RAM). In PBS, the UV2000 provides 13 chunks, one chunk per NUMA node (see [Resource allocation policy](resources-allocation-policy/)). The jobs on UV2000 are isolated from each other by cpusets, so that a job by one user may not utilize CPU or memory allocated to a job by another user. Full chunks are always allocated; a job may only use resources of the NUMA nodes allocated to it.
 
 ```console
- $ qsub -A OPEN-0-0 -q qfat -l select=14 ./myjob
+ $ qsub -A OPEN-0-0 -q qfat -l select=13 ./myjob
 ```
 
-In this example, we allocate all 14 NUMA nodes (corresponds to 14 chunks), 112 cores of the SGI UV2000 node for 72 hours. Jobscript myjob will be executed on the node uv1.
+In this example, we allocate all 13 NUMA nodes (corresponding to 13 chunks), i.e. 104 cores of the SGI UV2000 node, for 72 hours. The jobscript myjob will be executed on the node uv1.
 
 ```console
 $ qsub -A OPEN-0-0 -q qfat -l select=1:mem=2000GB ./myjob
diff --git a/docs.it4i/salomon/resources-allocation-policy.md b/docs.it4i/salomon/resources-allocation-policy.md
index 296844a23f2b567b11f15ad2d79b630ac81f8fb5..98f9d34f8dad971e7b09141da657666881c87a00 100644
--- a/docs.it4i/salomon/resources-allocation-policy.md
+++ b/docs.it4i/salomon/resources-allocation-policy.md
@@ -22,7 +22,7 @@ The resources are allocated to the job in a fair-share fashion, subject to const
 * **qprod**, the Production queue: This queue is intended for normal production runs. It is required that active project with nonzero remaining resources is specified to enter the qprod. All nodes may be accessed via the qprod queue, however only 86 per job. Full nodes, 24 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprod is 48 hours.
 * **qlong**, the Long queue: This queue is intended for long production runs. It is required that active project with nonzero remaining resources is specified to enter the qlong. Only 336 nodes without acceleration may be accessed via the qlong queue. Full nodes, 24 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qlong is 144 hours (three times of the standard qprod time - 3 \* 48 h)
 * **qmpp**, the massively parallel queue. This queue is intended for massively parallel runs. It is required that active project with nonzero remaining resources is specified to enter the qmpp. All nodes may be accessed via the qmpp queue. Full nodes, 24 cores per node are allocated. The queue runs with medium priority and no special authorization is required to use it.  The maximum runtime in qmpp is 4 hours. An PI needs explicitly ask support for authorization to enter the queue for all users associated to her/his Project.
-* **qfat**, the UV2000 queue. This queue is dedicated to access the fat SGI UV2000 SMP machine. The machine (uv1) has 112 Intel IvyBridge cores at 3.3GHz and 3.25TB RAM. An PI needs explicitly ask support for authorization to enter the queue for all users associated to her/his Project.
+* **qfat**, the UV2000 queue. This queue is dedicated to accessing the fat SGI UV2000 SMP machine. The machine (uv1) has 112 Intel IvyBridge cores at 3.3GHz and 3.25TB RAM (8 cores and 128GB RAM are dedicated to the system). A PI needs to explicitly ask support for authorization to enter the queue for all users associated with her/his Project. A submission example is shown below this list.
 * **qfree**, the Free resource queue: The queue qfree is intended for utilization of free resources, after a Project exhausted all its allocated computational resources (Does not apply to DD projects by default. DD projects have to request for persmission on qfree after exhaustion of computational resources.). It is required that active project is specified to enter the queue, however no remaining resources are required. Consumed resources will be accounted to the Project. Only 178 nodes without accelerator may be accessed from this queue. Full nodes, 24 cores per node are allocated. The queue runs with very low priority and no special authorization is required to use it. The maximum runtime in qfree is 12 hours.
 * **qviz**, the Visualization queue: Intended for pre-/post-processing using OpenGL accelerated graphics. Currently when accessing the node, each user gets 4 cores of a CPU allocated, thus approximately 73 GB of RAM and 1/7 of the GPU capacity (default "chunk"). If more GPU power or RAM is required, it is recommended to allocate more chunks (with 4 cores each) up to one whole node per user, so that all 28 cores, 512 GB RAM and whole GPU is exclusive. This is currently also the maximum allowed allocation per one user. One hour of work is allocated by default, the user may ask for 2 hours maximum.
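+
+A minimal qfat submission sketch, mirroring the example in [Job submission and execution](job-submission-and-execution/) (the project ID and jobscript name are placeholders):
+
+```console
+$ qsub -A OPEN-0-0 -q qfat -l select=13 ./myjob
+```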
 
diff --git a/docs.it4i/salomon/software/intel-xeon-phi.md b/docs.it4i/salomon/software/intel-xeon-phi.md
index dbcc5fad47a654d760b6e4fc5476ace34337334f..6d161439b7871e097ae095a4103a1f37ab490a0e 100644
--- a/docs.it4i/salomon/software/intel-xeon-phi.md
+++ b/docs.it4i/salomon/software/intel-xeon-phi.md
@@ -990,18 +990,21 @@ Hello world from process 7 of 8 on host r38u32n1001-mic0
 !!! note
     At this point the MPI communication between MIC accelerators on different nodes uses 1Gb Ethernet only.
 
-#### Using the PBS Automatically Generated Node-Files
+### Using Automatically Generated Node-Files
 
-PBS also generates a set of node-files that can be used instead of manually creating a new one every time. Three node-files are genereated:
+For user convenience, a set of node-files is generated automatically, so there is no need to create one manually every time. Six node-files are provided:
 
 !!! note
-    **Host only node-file:**
+    **Node-files:**
 
-     - /lscratch/${PBS_JOBID}/nodefile-cn MIC only node-file:
-     - /lscratch/${PBS_JOBID}/nodefile-mic Host and MIC node-file:
-     - /lscratch/${PBS_JOBID}/nodefile-mix
+     - /lscratch/${PBS_JOBID}/nodefile-cn - Hosts only node-file
+     - /lscratch/${PBS_JOBID}/nodefile-mic - MICs only node-file
+     - /lscratch/${PBS_JOBID}/nodefile-mix - Hosts and MICs node-file
+     - /lscratch/${PBS_JOBID}/nodefile-cn-sn - Hosts only node-file, using short names
+     - /lscratch/${PBS_JOBID}/nodefile-mic-sn - MICs only node-file, using short names
+     - /lscratch/${PBS_JOBID}/nodefile-mix-sn - Hosts and MICs node-file, using short names
 
-Each host or accelerator is listed only per files. User has to specify how many jobs should be executed per node using "-n" parameter of the mpirun command.
+Each host or accelerator is listed only once per file. The user has to specify how many processes should be executed per node using the `-n` parameter of the mpirun command.
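+
+As a minimal, hedged sketch, one of the generated node-files might be combined with the `-n` option like this (the binary name and process count are placeholders):
+
+```console
+$ mpirun -machinefile /lscratch/${PBS_JOBID}/nodefile-mix -n 4 ./mpi-test
+```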
 
 ## Optimization
 
diff --git a/docs.it4i/software/easybuild.md b/docs.it4i/software/easybuild.md
index 3bbcae0ebb82d1c0a52933cb686d9f76bbadd082..344b04ce895052d2b34883b6043d0c9f8cebcdea 100644
--- a/docs.it4i/software/easybuild.md
+++ b/docs.it4i/software/easybuild.md
@@ -16,7 +16,7 @@ All builds and installations are performed at user level, so you don't need the
 EasyBuild relies on two main concepts
 
  * Toolchains
- * EasyConfig file
+ * EasyConfig file (our easyconfigs are available [here](https://code.it4i.cz/sccs/easyconfigs-it4i); a usage sketch follows below)
 
 Detailed documentations is available [here](http://easybuild.readthedocs.io).
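+
+As an illustration only, an easyconfig from that repository might be built roughly like this (the clone location and easyconfig name are placeholders, and the `eb` command assumes an EasyBuild module is already loaded):
+
+```console
+$ git clone https://code.it4i.cz/sccs/easyconfigs-it4i.git
+$ eb easyconfigs-it4i/<some-easyconfig>.eb --robot
+```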