diff --git a/docs.it4i/anselm/environment-and-modules.md b/docs.it4i/anselm/environment-and-modules.md
deleted file mode 100644
index 1b6fd9a6485a169d8e3a5c5180a736afed23b29a..0000000000000000000000000000000000000000
--- a/docs.it4i/anselm/environment-and-modules.md
+++ /dev/null
@@ -1,78 +0,0 @@
-# Environment and Modules
-
-## Environment Customization
-
-After logging in, you may want to configure the environment. Write your preferred path definitions, aliases, functions and module loads in the .bashrc file
-
-```console
-$ cat ./bashrc
-
-# ./bashrc
-
-# Source global definitions
-if [ -f /etc/bashrc ]; then
-      . /etc/bashrc
-fi
-
-# User specific aliases and functions
-alias qs='qstat -a'
-module load PrgEnv-gnu
-
-# Display information to standard output - only in interactive ssh session
-if [ -n "$SSH_TTY" ]
-then
- module list # Display loaded modules
-fi
-```
-
-!!! note
-	Do not run commands outputting to standard output (echo, module list, etc) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (SCP, PBS) of your account! Consider utilization of SSH session interactivity for such commands as stated in the previous example.
-
-## Application Modules
-
-In order to configure your shell for running particular application on Anselm we use Module package interface.
-
-!!! note
-    The modules set up the application paths, library paths and environment variables for running particular application.
-
-    We have also second modules repository. This modules repository is created using tool called EasyBuild. On Salomon cluster, all modules will be build by this tool. If you want to use software from this modules repository, please follow instructions in section [Application Modules Path Expansion](environment-and-modules/#application-modules-path-expansion).
-
-The modules may be loaded, unloaded and switched, according to momentary needs.
-
-To check available modules use
-
-```console
-$ ml av
-```
-
-To load a module, for example the octave module use
-
-```console
-$ ml octave
-```
-
-loading the octave module will set up paths and environment variables of your active shell such that you are ready to run the octave software
-
-To check loaded modules use
-
-```console
-$ ml
-```
-
- To unload a module, for example the octave module use
-
-```console
-$ ml -octave
-```
-
-Following modules set up the development environment
-
-PrgEnv-gnu sets up the GNU development environment in conjunction with the bullx MPI library
-
-PrgEnv-intel sets up the INTEL development environment in conjunction with the Intel MPI library
-
-## Application Modules Path Expansion
-
-All application modules on Anselm cluster (and further) will be build using tool called [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild").
-
-This command expands your searched paths to modules. You can also add this command to the .bashrc file to expand paths permanently. After this command, you can use same commands to list/add/remove modules as is described above.
diff --git a/docs.it4i/anselm/prace.md b/docs.it4i/anselm/prace.md
deleted file mode 100644
index 9b24551b7b2fdea37177ccf11e42fe8b1cff3ef8..0000000000000000000000000000000000000000
--- a/docs.it4i/anselm/prace.md
+++ /dev/null
@@ -1,260 +0,0 @@
-# PRACE User Support
-
-## Intro
-
-PRACE users coming to Anselm as to TIER-1 system offered through the DECI calls are in general treated as standard users and so most of the general documentation applies to them as well. This section shows the main differences for quicker orientation, but often uses references to the original documentation. PRACE users who don't undergo the full procedure (including signing the IT4I AuP on top of the PRACE AuP) will not have a password and thus access to some services intended for regular users. This can lower their comfort, but otherwise they should be able to use the TIER-1 system as intended. Please see the [Obtaining Login Credentials section](../general/obtaining-login-credentials/obtaining-login-credentials/), if the same level of access is required.
-
-All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation/) should be read before continuing reading the local documentation here.
-
-## Help and Support
-
-If you have any troubles, need information, request support or want to install additional software, please use [PRACE Helpdesk](http://www.prace-ri.eu/helpdesk-guide264/).
-
-Information about the local services are provided in the [introduction of general user documentation](introduction/). Please keep in mind, that standard PRACE accounts don't have a password to access the web interface of the local (IT4Innovations) request tracker and thus a new ticket should be created by sending an e-mail to support[at]it4i.cz.
-
-## Obtaining Login Credentials
-
-In general PRACE users already have a PRACE account setup through their HOMESITE (institution from their country) as a result of rewarded PRACE project proposal. This includes signed PRACE AuP, generated and registered certificates, etc.
-
-If there's a special need a PRACE user can get a standard (local) account at IT4Innovations. To get an account on the Anselm cluster, the user needs to obtain the login credentials. The procedure is the same as for general users of the cluster, so please see the corresponding section of the general documentation here.
-
-## Accessing the Cluster
-
-### Access With GSI-SSH
-
-For all PRACE users the method for interactive access (login) and data transfer based on grid services from Globus Toolkit (GSI SSH and GridFTP) is supported.
-
-The user will need a valid certificate and to be present in the PRACE LDAP (please contact your HOME SITE or the primary investigator of your project for LDAP account creation).
-
-Most of the information needed by PRACE users accessing the Anselm TIER-1 system can be found here:
-
-* [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)
-* [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)
-* [Interactive access using GSISSH](http://www.prace-ri.eu/Interactive-Access-Using-gsissh)
-* [Data transfer with GridFTP](http://www.prace-ri.eu/Data-Transfer-with-GridFTP-Details)
-* [Data transfer with gtransfer](http://www.prace-ri.eu/Data-Transfer-with-gtransfer)
-
-Before you start to use any of the services don't forget to create a proxy certificate from your certificate:
-
-```console
-$ grid-proxy-init
-```
-
-To check whether your proxy certificate is still valid (by default it's valid 12 hours), use:
-
-```console
-$ grid-proxy-info
-```
-
-To access Anselm cluster, two login nodes running GSI SSH service are available. The service is available from public Internet as well as from the internal PRACE network (accessible only from other PRACE partners).
-
-#### Access From PRACE Network:
-
-It is recommended to use the single DNS name anselm-prace.it4i.cz which is distributed between the two login nodes. If needed, user can login directly to one of the login nodes. The addresses are:
-
-| Login address               | Port | Protocol | Login node       |
-| --------------------------- | ---- | -------- | ---------------- |
-| anselm-prace.it4i.cz        | 2222 | gsissh   | login1 or login2 |
-| login1-prace.anselm.it4i.cz | 2222 | gsissh   | login1           |
-| login2-prace.anselm.it4i.cz | 2222 | gsissh   | login2           |
-
-```console
-$ gsissh -p 2222 anselm-prace.it4i.cz
-```
-
-When logging from other PRACE system, the prace_service script can be used:
-
-```console
-$ gsissh `prace_service -i -s anselm`
-```
-
-#### Access From Public Internet:
-
-It is recommended to use the single DNS name anselm.it4i.cz which is distributed between the two login nodes. If needed, user can login directly to one of the login nodes. The addresses are:
-
-| Login address         | Port | Protocol | Login node       |
-| --------------------- | ---- | -------- | ---------------- |
-| anselm.it4i.cz        | 2222 | gsissh   | login1 or login2 |
-| login1.anselm.it4i.cz | 2222 | gsissh   | login1           |
-| login2.anselm.it4i.cz | 2222 | gsissh   | login2           |
-
-```console
-$ gsissh -p 2222 anselm.it4i.cz
-```
-
-When logging from other PRACE system, the prace_service script can be used:
-
-```console
-$ gsissh `prace_service -e -s anselm`
-```
-
-Although the preferred and recommended file transfer mechanism is [using GridFTP](prace/#file-transfers), the GSI SSH implementation on Anselm supports also SCP, so for small files transfer gsiscp can be used:
-
-```console
-$ gsiscp -P 2222 _LOCAL_PATH_TO_YOUR_FILE_ anselm.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_
-
-$ gsiscp -P 2222 anselm.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_ _LOCAL_PATH_TO_YOUR_FILE_
-
-$ gsiscp -P 2222 _LOCAL_PATH_TO_YOUR_FILE_ anselm-prace.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_
-
-$ gsiscp -P 2222 anselm-prace.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_ _LOCAL_PATH_TO_YOUR_FILE_
-```
-
-### Access to X11 Applications (VNC)
-
-If the user needs to run X11 based graphical application and does not have a X11 server, the applications can be run using VNC service. If the user is using regular SSH based access, please see the section in general documentation.
-
-If the user uses GSI SSH based access, then the procedure is similar to the SSH based access, only the port forwarding must be done using GSI SSH:
-
-```console
-$ gsissh -p 2222 anselm.it4i.cz -L 5961:localhost:5961
-```
-
-### Access With SSH
-
-After successful obtainment of login credentials for the local IT4Innovations account, the PRACE users can access the cluster as regular users using SSH. For more information please see the section in general documentation.
-
-## File Transfers
-
-PRACE users can use the same transfer mechanisms as regular users (if they've undergone the full registration procedure). For information about this, please see the section in the general documentation.
-
-Apart from the standard mechanisms, for PRACE users to transfer data to/from Anselm cluster, a GridFTP server running Globus Toolkit GridFTP service is available. The service is available from public Internet as well as from the internal PRACE network (accessible only from other PRACE partners).
-
-There's one control server and three backend servers for striping and/or backup in case one of them would fail.
-
-### Access From PRACE Network
-
-| Login address                | Port | Node role                   |
-| ---------------------------- | ---- | --------------------------- |
-| gridftp-prace.anselm.it4i.cz | 2812 | Front end /control server   |
-| login1-prace.anselm.it4i.cz  | 2813 | Backend / data mover server |
-| login2-prace.anselm.it4i.cz  | 2813 | Backend / data mover server |
-| dm1-prace.anselm.it4i.cz     | 2813 | Backend / data mover server |
-
-Copy files **to** Anselm by running the following commands on your local machine:
-
-```console
-$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp-prace.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
-```
-
-Or by using prace_service script:
-
-```console
-$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -i -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
-```
-
-Copy files **from** Anselm:
-
-```console
-$ globus-url-copy gsiftp://gridftp-prace.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
-```
-
-Or by using prace_service script:
-
-```console
-$ globus-url-copy gsiftp://`prace_service -i -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
-```
-
-### Access From Public Internet
-
-| Login address          | Port | Node role                   |
-| ---------------------- | ---- | --------------------------- |
-| gridftp.anselm.it4i.cz | 2812 | Front end /control server   |
-| login1.anselm.it4i.cz  | 2813 | Backend / data mover server |
-| login2.anselm.it4i.cz  | 2813 | Backend / data mover server |
-| dm1.anselm.it4i.cz     | 2813 | Backend / data mover server |
-
-Copy files **to** Anselm by running the following commands on your local machine:
-
-```console
-$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
-```
-
-Or by using prace_service script:
-
-```console
-$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -e -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
-```
-
-Copy files **from** Anselm:
-
-```console
-$ globus-url-copy gsiftp://gridftp.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
-```
-
-Or by using prace_service script:
-
-```console
-$ globus-url-copy gsiftp://`prace_service -e -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
-```
-
-Generally both shared file systems are available through GridFTP:
-
-| File system mount point | Filesystem | Comment                                                        |
-| ----------------------- | ---------- | -------------------------------------------------------------- |
-| /home                   | Lustre     | Default HOME directories of users in format /home/prace/login/ |
-| /scratch                | Lustre     | Shared SCRATCH mounted on the whole cluster                    |
-
-More information about the shared file systems is available [here](storage/).
-
-## Usage of the Cluster
-
-There are some limitations for PRACE user when using the cluster. By default PRACE users aren't allowed to access special queues in the PBS Pro to have high priority or exclusive access to some special equipment like accelerated nodes and high memory (fat) nodes. There may be also restrictions obtaining a working license for the commercial software installed on the cluster, mostly because of the license agreement or because of insufficient amount of licenses.
-
-For production runs always use scratch file systems, either the global shared or the local ones. The available file systems are described [here](hardware-overview/).
-
-### Software, Modules and PRACE Common Production Environment
-
-All system wide installed software on the cluster is made available to the users via the modules. The information about the environment and modules usage is in this [section of general documentation](environment-and-modules/).
-
-PRACE users can use the "prace" module to use the [PRACE Common Production Environment](http://www.prace-ri.eu/prace-common-production-environment/).
-
-```console
-$ ml prace
-```
-
-### Resource Allocation and Job Execution
-
-General information about the resource allocation, job queuing and job execution is in this [section of general documentation](resources-allocation-policy/).
-
-For PRACE users, the default production run queue is "qprace". PRACE users can also use two other queues "qexp" and "qfree".
-
-| queue                         | Active project | Project resources | Nodes               | priority | authorization | walltime  |
-| ----------------------------- | -------------- | ----------------- | ------------------- | -------- | ------------- | --------- |
-| **qexp** Express queue        | no             | none required     | 2 reserved, 8 total | high     | no            | 1 / 1h    |
-| **qprace** Production queue   | yes            | > 0               | 178 w/o accelerator | medium   | no            | 24 / 48 h |
-| **qfree** Free resource queue | yes            | none required     | 178 w/o accelerator | very low | no            | 12 / 12 h |
-
-**qprace**, the PRACE: This queue is intended for normal production runs. It is required that active project with nonzero remaining resources is specified to enter the qprace. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprace is 12 hours. If the job needs longer time, it must use checkpoint/restart functionality.
-
-### Accounting & Quota
-
-The resources that are currently subject to accounting are the core hours. The core hours are accounted on the wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. See [example in the general documentation](resources-allocation-policy/).
-
-PRACE users should check their project accounting using the [PRACE Accounting Tool (DART)](http://www.prace-ri.eu/accounting-report-tool/).
-
-Users who have undergone the full local registration procedure (including signing the IT4Innovations Acceptable Use Policy) and who have received local password may check at any time, how many core-hours have been consumed by themselves and their projects using the command "it4ifree".
-
-!!! note
-    You need to know your user password to use the command. Displayed core hours are "system core hours" which differ from PRACE "standardized core hours".
-
-!!! hint
-    The **it4ifree** command is a part of it4i.portal.clients package, [located here](https://pypi.python.org/pypi/it4i.portal.clients).
-
-```console
-$ it4ifree
-    Password:
-         PID    Total   Used   ...by me Free
-       -------- ------- ------ -------- -------
-       OPEN-0-0 1500000 400644   225265 1099356
-       DD-13-1    10000   2606     2606    7394
-```
-
-By default file system quota is applied. To check the current status of the quota use
-
-```console
-$ lfs quota -u USER_LOGIN /home
-$ lfs quota -u USER_LOGIN /scratch
-```
-
-If the quota is insufficient, please contact the [support](prace/#help-and-support) and request an increase.
diff --git a/docs.it4i/environment-and-modules.md b/docs.it4i/environment-and-modules.md
new file mode 100644
index 0000000000000000000000000000000000000000..ada0058777e07e65f753253ec1d6517c5ce59f95
--- /dev/null
+++ b/docs.it4i/environment-and-modules.md
@@ -0,0 +1,64 @@
+# Environment and Modules
+
+## Environment Customization
+
+After logging in, you may want to configure the environment. Write your preferred path definitions, aliases, functions, and module loads in the .bashrc file.
+
+```console
+# .bashrc
+
+# add your locally built (EasyBuild) modules to the module search path
+export MODULEPATH=${MODULEPATH}:/home/$USER/.local/easybuild/modules/all
+
+# User specific aliases and functions
+alias qs='qstat -a'
+
+# load the default Intel compiler module (loading modules in .bashrc is not recommended)
+ml intel
+
+# Display information to standard output - only in interactive ssh session
+if [ -n "$SSH_TTY" ]
+then
+ ml # Display loaded modules
+fi
+```
+
+!!! note
+    Do not run commands outputting to standard output (echo, module list, etc.) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (SCP, PBS) of your account! Guard such commands with a test for an interactive SSH session, as shown in the example above.
+
+### Application Modules
+
+To configure your shell for running a particular application on the clusters, we use the Modules package interface.
+
+Application modules on the clusters are built using [EasyBuild](software/tools/easybuild/). The modules are organized into the following classes:
+
+```console
+ base: Default module class
+ bio: Bioinformatics, biology and biomedical
+ cae: Computer Aided Engineering (incl. CFD)
+ chem: Chemistry, Computational Chemistry and Quantum Chemistry
+ compiler: Compilers
+ data: Data management & processing tools
+ debugger: Debuggers
+ devel: Development tools
+ geo: Earth Sciences
+ ide: Integrated Development Environments (e.g. editors)
+ lang: Languages and programming aids
+ lib: General purpose libraries
+ math: High-level mathematical software
+ mpi: MPI stacks
+ numlib: Numerical Libraries
+ perf: Performance tools
+ phys: Physics and physical systems simulations
+ system: System utilities (e.g. highly depending on system OS and hardware)
+ toolchain: EasyBuild toolchains
+ tools: General purpose tools
+ vis: Visualization, plotting, documentation and typesetting
+ OS: Singularity images
+ python: Python packages
+```
+
+!!! note
+    The modules set up the application paths, library paths, and environment variables for running a particular application.
+
+The modules may be loaded, unloaded, and switched according to momentary needs. A few basic commands are shown below; for details see [here](software/modules/lmod/).
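+
+For example, a typical session might look like this (a brief sketch; OpenMPI is only an illustrative module name, and the available modules and versions differ per cluster):
+
+```console
+$ ml av        # list available modules
+$ ml OpenMPI   # load a module, e.g. OpenMPI
+$ ml           # list currently loaded modules
+$ ml -OpenMPI  # unload a module
+```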
diff --git a/docs.it4i/salomon/prace.md b/docs.it4i/prace.md
similarity index 60%
rename from docs.it4i/salomon/prace.md
rename to docs.it4i/prace.md
index fe6b1dfd24c08cca1b3e1b3e1f6d931b6bf89a26..261489971847e428efff23ba03d51293e483f538 100644
--- a/docs.it4i/salomon/prace.md
+++ b/docs.it4i/prace.md
@@ -1,8 +1,8 @@
 # PRACE User Support
 
-## Intro
+## Introduction
 
-PRACE users coming to Salomon as to TIER-1 system offered through the DECI calls are in general treated as standard users and so most of the general documentation applies to them as well. This section shows the main differences for quicker orientation, but often uses references to the original documentation. PRACE users who don't undergo the full procedure (including signing the IT4I AuP on top of the PRACE AuP) will not have a password and thus access to some services intended for regular users. This can lower their comfort, but otherwise they should be able to use the TIER-1 system as intended. Please see the [Obtaining Login Credentials section](../general/obtaining-login-credentials/obtaining-login-credentials/), if the same level of access is required.
+PRACE users coming to the TIER-1 systems offered through the DECI calls are in general treated as standard users, so most of the general documentation applies to them as well. This section shows the main differences for quicker orientation, but often uses references to the original documentation. PRACE users who don't undergo the full procedure (including signing the IT4I AuP on top of the PRACE AuP) will not have a password and thus no access to some services intended for regular users. This can lower their comfort, but otherwise they should be able to use the TIER-1 systems as intended. Please see the [Obtaining Login Credentials section](general/obtaining-login-credentials/obtaining-login-credentials/) if the same level of access is required.
 
 All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation/) should be read before continuing reading the local documentation here.
 
@@ -10,13 +10,13 @@ All general [PRACE User Documentation](http://www.prace-ri.eu/user-documentation
 
 If you have any troubles, need information, request support or want to install additional software, please use [PRACE Helpdesk](http://www.prace-ri.eu/helpdesk-guide264/).
 
-Information about the local services are provided in the [introduction of general user documentation](introduction/). Please keep in mind, that standard PRACE accounts don't have a password to access the web interface of the local (IT4Innovations) request tracker and thus a new ticket should be created by sending an e-mail to support[at]it4i.cz.
+Information about the local services is provided in the [introduction of the general user documentation for Salomon](salomon/introduction/) and the [introduction of the general user documentation for Anselm](anselm/introduction/). Please keep in mind that standard PRACE accounts don't have a password to access the web interface of the local (IT4Innovations) request tracker, and thus a new ticket should be created by sending an e-mail to support[at]it4i.cz.
 
 ## Obtaining Login Credentials
 
 In general PRACE users already have a PRACE account setup through their HOMESITE (institution from their country) as a result of rewarded PRACE project proposal. This includes signed PRACE AuP, generated and registered certificates, etc.
 
-If there's a special need a PRACE user can get a standard (local) account at IT4Innovations. To get an account on the Salomon cluster, the user needs to obtain the login credentials. The procedure is the same as for general users of the cluster, so please see the corresponding [section of the general documentation here](../general/obtaining-login-credentials/obtaining-login-credentials/).
+If there's a special need, a PRACE user can get a standard (local) account at IT4Innovations. To get an account on a cluster, the user needs to obtain login credentials. The procedure is the same as for general users of the cluster, so please see the corresponding [section of the general documentation here](general/obtaining-login-credentials/obtaining-login-credentials/).
 
 ## Accessing the Cluster
 
@@ -26,7 +26,7 @@ For all PRACE users the method for interactive access (login) and data transfer
 
 The user will need a valid certificate and to be present in the PRACE LDAP (please contact your HOME SITE or the primary investigator of your project for LDAP account creation).
 
-Most of the information needed by PRACE users accessing the Salomon TIER-1 system can be found here:
+Most of the information needed by PRACE users accessing the TIER-1 systems can be found here:
 
 * [General user's FAQ](http://www.prace-ri.eu/Users-General-FAQs)
 * [Certificates FAQ](http://www.prace-ri.eu/Certificates-FAQ)
@@ -46,11 +46,13 @@ To check whether your proxy certificate is still valid (by default it's valid 12
 $ grid-proxy-info
 ```
 
-To access Salomon cluster, four login nodes running GSI SSH service are available. The service is available from public Internet as well as from the internal PRACE network (accessible only from other PRACE partners).
+To access the clusters, several login nodes running the GSI SSH service are available. The service is available from the public Internet as well as from the internal PRACE network (accessible only from other PRACE partners).
 
 #### Access From PRACE Network:
 
-It is recommended to use the single DNS name salomon-prace.it4i.cz which is distributed between the four login nodes. If needed, user can login directly to one of the login nodes. The addresses are:
+It is recommended to use the single DNS name **cluster-name**-prace.it4i.cz, which is distributed among the cluster's login nodes. If needed, the user can log in directly to one of the login nodes. The addresses are:
+
+For the Salomon cluster:
 
 | Login address                | Port | Protocol | Login node                       |
 | ---------------------------- | ---- | -------- | -------------------------------- |
@@ -64,15 +66,33 @@ It is recommended to use the single DNS name salomon-prace.it4i.cz which is dist
 $ gsissh -p 2222 salomon-prace.it4i.cz
 ```
 
+For the Anselm cluster:
+
+| Login address               | Port | Protocol | Login node       |
+| --------------------------- | ---- | -------- | ---------------- |
+| anselm-prace.it4i.cz        | 2222 | gsissh   | login1 or login2 |
+| login1-prace.anselm.it4i.cz | 2222 | gsissh   | login1           |
+| login2-prace.anselm.it4i.cz | 2222 | gsissh   | login2           |
+
+```console
+$ gsissh -p 2222 anselm-prace.it4i.cz
+```
+
 When logging from other PRACE system, the prace_service script can be used:
 
 ```console
 $ gsissh `prace_service -i -s salomon`
 ```
 
+```console
+$ gsissh `prace_service -i -s anselm`
+```
+
 #### Access From Public Internet:
 
-It is recommended to use the single DNS name salomon.it4i.cz which is distributed between the four login nodes. If needed, user can login directly to one of the login nodes. The addresses are:
+It is recommended to use the single DNS name **cluster-name**.it4i.cz, which is distributed among the cluster's login nodes. If needed, the user can log in directly to one of the login nodes. The addresses are:
+
+For the Salomon cluster:
 
 | Login address                | Port | Protocol | Login node                       |
 | ---------------------------- | ---- | -------- | -------------------------------- |
@@ -86,14 +106,30 @@ It is recommended to use the single DNS name salomon.it4i.cz which is distribute
 $ gsissh -p 2222 salomon.it4i.cz
 ```
 
+For the Anselm cluster:
+
+| Login address         | Port | Protocol | Login node       |
+| --------------------- | ---- | -------- | ---------------- |
+| anselm.it4i.cz        | 2222 | gsissh   | login1 or login2 |
+| login1.anselm.it4i.cz | 2222 | gsissh   | login1           |
+| login2.anselm.it4i.cz | 2222 | gsissh   | login2           |
+
+```console
+$ gsissh -p 2222 anselm.it4i.cz
+```
+
 When logging from other PRACE system, the prace_service script can be used:
 
 ```console
 $ gsissh `prace_service -e -s salomon`
 ```
 
+```console
+$ gsissh `prace_service -e -s anselm`
+```
+
 Although the preferred and recommended file transfer mechanism is [using GridFTP](prace/#file-transfers), the GSI SSH
-implementation on Salomon supports also SCP, so for small files transfer gsiscp can be used:
+implementation also supports SCP, so gsiscp can be used for small file transfers:
 
 ```console
 $ gsiscp -P 2222 _LOCAL_PATH_TO_YOUR_FILE_ salomon.it4i.cz:_SALOMON_PATH_TO_YOUR_FILE_
@@ -102,11 +138,18 @@ $ gsiscp -P 2222 _LOCAL_PATH_TO_YOUR_FILE_ salomon-prace.it4i.cz:_SALOMON_PATH_T
 $ gsiscp -P 2222 salomon-prace.it4i.cz:_SALOMON_PATH_TO_YOUR_FILE_ _LOCAL_PATH_TO_YOUR_FILE_
 ```
 
+```console
+$ gsiscp -P 2222 _LOCAL_PATH_TO_YOUR_FILE_ anselm.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_
+$ gsiscp -P 2222 anselm.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_ _LOCAL_PATH_TO_YOUR_FILE_
+$ gsiscp -P 2222 _LOCAL_PATH_TO_YOUR_FILE_ anselm-prace.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_
+$ gsiscp -P 2222 anselm-prace.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_ _LOCAL_PATH_TO_YOUR_FILE_
+```
+
 ### Access to X11 Applications (VNC)
 
-If the user needs to run X11 based graphical application and does not have a X11 server, the applications can be run using VNC service. If the user is using regular SSH based access, please see the [section in general documentation](../general/accessing-the-clusters/graphical-user-interface/x-window-system/).
+If the user needs to run an X11-based graphical application and does not have an X11 server, the applications can be run using the VNC service. If the user is using regular SSH-based access, please see the [section in the general documentation](general/accessing-the-clusters/graphical-user-interface/x-window-system/).
 
-If the user uses GSI SSH based access, then the procedure is similar to the SSH based access ([look here](../general/accessing-the-clusters/graphical-user-interface/x-window-system/)), only the port forwarding must be done using GSI SSH:
+If the user uses GSI SSH-based access, then the procedure is similar to the SSH-based access ([look here](general/accessing-the-clusters/graphical-user-interface/x-window-system/)), except that the port forwarding must be done using GSI SSH:
 
 ```console
 $ gsissh -p 2222 salomon.it4i.cz -L 5961:localhost:5961
@@ -114,11 +157,11 @@ $ gsissh -p 2222 salomon.it4i.cz -L 5961:localhost:5961
 
 ### Access With SSH
 
-After successful obtainment of login credentials for the local IT4Innovations account, the PRACE users can access the cluster as regular users using SSH. For more information please see the [section in general documentation](shell-and-data-access/).
+After obtaining login credentials for the local IT4Innovations account, PRACE users can access the cluster as regular users using SSH. For more information, please see [the section in the general documentation for Salomon](salomon/shell-and-data-access/) and [the section in the general documentation for Anselm](anselm/shell-and-data-access/).
 
 ## File Transfers
 
-PRACE users can use the same transfer mechanisms as regular users (if they've undergone the full registration procedure). For information about this, please see [the section in the general documentation](shell-and-data-access/).
+PRACE users can use the same transfer mechanisms as regular users (if they've undergone the full registration procedure). For information about this, please see [the section in the general documentation for Salomon](salomon/shell-and-data-access/) and [the section in the general documentation for Anselm](anselm/shell-and-data-access/).
 
 Apart from the standard mechanisms, for PRACE users to transfer data to/from Salomon cluster, a GridFTP server running Globus Toolkit GridFTP service is available. The service is available from public Internet as well as from the internal PRACE network (accessible only from other PRACE partners).
 
@@ -126,6 +169,8 @@ There's one control server and three backend servers for striping and/or backup
 
 ### Access From PRACE Network
 
+For the Salomon cluster:
+
 | Login address                 | Port | Node role                   |
 | ----------------------------- | ---- | --------------------------- |
 | gridftp-prace.salomon.it4i.cz | 2812 | Front end /control server   |
@@ -139,26 +184,57 @@ Copy files **to** Salomon by running the following commands on your local machin
 $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp-prace.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_
 ```
 
+For the Anselm cluster:
+
+| Login address                | Port | Node role                   |
+| ---------------------------- | ---- | --------------------------- |
+| gridftp-prace.anselm.it4i.cz | 2812 | Front end /control server   |
+| login1-prace.anselm.it4i.cz  | 2813 | Backend / data mover server |
+| login2-prace.anselm.it4i.cz  | 2813 | Backend / data mover server |
+| dm1-prace.anselm.it4i.cz     | 2813 | Backend / data mover server |
+
+Copy files **to** Anselm by running the following commands on your local machine:
+
+```console
+$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp-prace.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
+```
+
 Or by using prace_service script:
 
 ```console
 $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -i -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_
 ```
 
+```console
+$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -i -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
+```
+
 Copy files **from** Salomon:
 
 ```console
 $ globus-url-copy gsiftp://gridftp-prace.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
 ```
 
+Copy files **from** Anselm:
+
+```console
+$ globus-url-copy gsiftp://gridftp-prace.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
+```
+
 Or by using prace_service script:
 
 ```console
 $ globus-url-copy gsiftp://`prace_service -i -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
 ```
 
+```console
+$ globus-url-copy gsiftp://`prace_service -i -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
+```
+
 ### Access From Public Internet
 
+For the Salomon cluster:
+
 | Login address           | Port | Node role                   |
 | ----------------------- | ---- | --------------------------- |
 | gridftp.salomon.it4i.cz | 2812 | Front end /control server   |
@@ -172,24 +248,53 @@ Copy files **to** Salomon by running the following commands on your local machin
 $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_
 ```
 
+For the Anselm cluster:
+
+| Login address          | Port | Node role                   |
+| ---------------------- | ---- | --------------------------- |
+| gridftp.anselm.it4i.cz | 2812 | Front end /control server   |
+| login1.anselm.it4i.cz  | 2813 | Backend / data mover server |
+| login2.anselm.it4i.cz  | 2813 | Backend / data mover server |
+| dm1.anselm.it4i.cz     | 2813 | Backend / data mover server |
+
+Copy files **to** Anselm by running the following commands on your local machine:
+
+```console
+$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
+```
+
 Or by using prace_service script:
 
 ```console
 $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -e -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_
 ```
 
+```console
+$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -e -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
+```
+
 Copy files **from** Salomon:
 
 ```console
 $ globus-url-copy gsiftp://gridftp.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
 ```
 
+Copy files **from** Anselm:
+
+```console
+$ globus-url-copy gsiftp://gridftp.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
+```
+
 Or by using prace_service script:
 
 ```console
 $ globus-url-copy gsiftp://`prace_service -e -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
 ```
 
+```console
+$ globus-url-copy gsiftp://`prace_service -e -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
+```
+
 Generally both shared file systems are available through GridFTP:
 
 | File system mount point | Filesystem | Comment                                                        |
@@ -197,11 +302,13 @@ Generally both shared file systems are available through GridFTP:
 | /home                   | Lustre     | Default HOME directories of users in format /home/prace/login/ |
 | /scratch                | Lustre     | Shared SCRATCH mounted on the whole cluster                    |
 
-More information about the shared file systems is available [here](storage/).
+More information about the shared file systems is available [for Salomon here](salomon/storage/) and [for Anselm here](anselm/storage/).
 
 !!! hint
     `prace` directory is used for PRACE users on the SCRATCH file system.
 
+The following applies to the Salomon cluster /scratch only:
+
 | Data type                    | Default path                    |
 | ---------------------------- | ------------------------------- |
 | large project files          | /scratch/work/user/prace/login/ |
@@ -211,7 +318,7 @@ More information about the shared file systems is available [here](storage/).
 
 There are some limitations for PRACE user when using the cluster. By default PRACE users aren't allowed to access special queues in the PBS Pro to have high priority or exclusive access to some special equipment like accelerated nodes and high memory (fat) nodes. There may be also restrictions obtaining a working license for the commercial software installed on the cluster, mostly because of the license agreement or because of insufficient amount of licenses.
 
-For production runs always use scratch file systems. The available file systems are described [here](storage/).
+For production runs, always use the scratch file systems. The available file systems are described [for Salomon here](salomon/storage/) and [for Anselm here](anselm/storage/).
 
 ### Software, Modules and PRACE Common Production Environment
 
@@ -220,26 +327,36 @@ All system wide installed software on the cluster is made available to the users
 PRACE users can use the "prace" module to use the [PRACE Common Production Environment](http://www.prace-ri.eu/prace-common-production-environment/).
 
 ```console
-$ module load prace
+$ ml prace
 ```
 
 ### Resource Allocation and Job Execution
 
-General information about the resource allocation, job queuing and job execution is in this [section of general documentation](resources-allocation-policy/).
+General information about resource allocation, job queuing, and job execution is in the [section of the general documentation for Salomon](salomon/resources-allocation-policy/) and the [section of the general documentation for Anselm](anselm/resources-allocation-policy/).
 
 For PRACE users, the default production run queue is "qprace". PRACE users can also use two other queues "qexp" and "qfree".
 
+For Salomon:
+
 | queue                         | Active project | Project resources | Nodes                      | priority | authorization | walltime  |
 | ----------------------------- | -------------- | ----------------- | -------------------------- | -------- | ------------- | --------- |
 | **qexp** Express queue        | no             | none required     | 32 nodes, max 8 per user   | 150      | no            | 1 / 1 h   |
 | **qprace** Production queue   | yes            | >0                | 1006 nodes, max 86 per job | 0        | no            | 24 / 48 h |
 | **qfree** Free resource queue | yes            | none required     | 752 nodes, max 86 per job  | -1024    | no            | 12 / 12 h |
 
+For Anselm:
+
+| queue                         | Active project | Project resources | Nodes               | priority | authorization | walltime  |
+| ----------------------------- | -------------- | ----------------- | ------------------- | -------- | ------------- | --------- |
+| **qexp** Express queue        | no             | none required     | 2 reserved, 8 total | high     | no            | 1 / 1h    |
+| **qprace** Production queue   | yes            | > 0               | 178 w/o accelerator | medium   | no            | 24 / 48 h |
+| **qfree** Free resource queue | yes            | none required     | 178 w/o accelerator | very low | no            | 12 / 12 h |
+
 **qprace**, the PRACE queue: This queue is intended for normal production runs. It is required that an active project with nonzero remaining resources is specified to enter qprace. The queue runs with medium priority and no special authorization is required to use it. The maximum runtime in qprace is 48 hours. If the job needs a longer time, it must use checkpoint/restart functionality.
 
 ### Accounting & Quota
 
-The resources that are currently subject to accounting are the core hours. The core hours are accounted on the wall clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. See [example in the general documentation](resources-allocation-policy/).
+The resources that are currently subject to accounting are the core hours. The core hours are accounted on a wall-clock basis. The accounting runs whenever the computational cores are allocated or blocked via the PBS Pro workload manager (the qsub command), regardless of whether the cores are actually used for any calculation. See the [example in the general documentation for Salomon](salomon/resources-allocation-policy/) and the [example in the general documentation for Anselm](anselm/resources-allocation-policy/).
 
 PRACE users should check their project accounting using the [PRACE Accounting Tool (DART)](http://www.prace-ri.eu/accounting-report-tool/).
 
diff --git a/docs.it4i/salomon/environment-and-modules.md b/docs.it4i/salomon/environment-and-modules.md
deleted file mode 100644
index 56a095c7e359aeae48186c28a494e8995d542178..0000000000000000000000000000000000000000
--- a/docs.it4i/salomon/environment-and-modules.md
+++ /dev/null
@@ -1,124 +0,0 @@
-# Environment and Modules
-
-## Environment Customization
-
-After logging in, you may want to configure the environment. Write your preferred path definitions, aliases, functions and module loads in the .bashrc file
-
-```console
-# ./bashrc
-
-# Source global definitions
-if [ -f /etc/bashrc ]; then
-      . /etc/bashrc
-fi
-
-# User specific aliases and functions
-alias qs='qstat -a'
-module load intel/2015b
-
-# Display information to standard output - only in interactive ssh session
-if [ -n "$SSH_TTY" ]
-then
- module list # Display loaded modules
-fi
-```
-
-!!! note
-    Do not run commands outputting to standard output (echo, module list, etc) in .bashrc  for non-interactive SSH sessions. It breaks fundamental functionality (SCP, PBS) of your account! Take care for SSH session interactivity for such commands as stated in the previous example.
-
-### Application Modules
-
-In order to configure your shell for running particular application on Salomon we use Module package interface.
-
-Application modules on Salomon cluster are built using [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). The modules are divided into the following structure:
-
-```console
- base: Default module class
- bio: Bioinformatics, biology and biomedical
- cae: Computer Aided Engineering (incl. CFD)
- chem: Chemistry, Computational Chemistry and Quantum Chemistry
- compiler: Compilers
- data: Data management & processing tools
- debugger: Debuggers
- devel: Development tools
- geo: Earth Sciences
- ide: Integrated Development Environments (e.g. editors)
- lang: Languages and programming aids
- lib: General purpose libraries
- math: High-level mathematical software
- mpi: MPI stacks
- numlib: Numerical Libraries
- perf: Performance tools
- phys: Physics and physical systems simulations
- system: System utilities (e.g. highly depending on system OS and hardware)
- toolchain: EasyBuild toolchains
- tools: General purpose tools
- vis: Visualization, plotting, documentation and typesetting
-```
-
-!!! note
-    The modules set up the application paths, library paths and environment variables for running particular application.
-
-The modules may be loaded, unloaded and switched, according to momentary needs.
-
-To check available modules use
-
-```console
-$ ml av
-```
-
-To load a module, for example the Open MPI module use
-
-```console
-$ ml OpenMPI
-```
-
-loading the Open MPI module will set up paths and environment variables of your active shell such that you are ready to run the Open MPI software
-
-To check loaded modules use
-
-```console
-$ ml
-```
-
-To unload a module, for example the Open MPI module use
-
-```console
-$ ml -OpenMPI
-```
-
-Learn more on modules by reading the module man page
-
-```console
-$ man module
-```
-
-### EasyBuild Toolchains
-
-As we wrote earlier, we are using EasyBuild for automatized software installation and module creation.
-
-EasyBuild employs so-called **compiler toolchains** or, simply toolchains for short, which are a major concept in handling the build and installation processes.
-
-A typical toolchain consists of one or more compilers, usually put together with some libraries for specific functionality, e.g., for using an MPI stack for distributed computing, or which provide optimized routines for commonly used math operations, e.g., the well-known BLAS/LAPACK APIs for linear algebra routines.
-
-For each software package being built, the toolchain to be used must be specified in some way.
-
-The EasyBuild framework prepares the build environment for the different toolchain components, by loading their respective modules and defining environment variables to specify compiler commands (e.g., via `$F90`), compiler and linker options (e.g., via `$CFLAGS` and `$LDFLAGS`), the list of library names to supply to the linker (via `$LIBS`), etc. This enables making easyblocks largely toolchain-agnostic since they can simply rely on these environment variables; that is, unless they need to be aware of, for example, the particular compiler being used to determine the build configuration options.
-
-Recent releases of EasyBuild include out-of-the-box toolchain support for:
-
-* various compilers, including GCC, Intel, Clang, CUDA
-* common MPI libraries, such as Intel MPI, MPICH, MVAPICH2, Open MPI
-* various numerical libraries, including ATLAS, Intel MKL, OpenBLAS, ScaLAPACK, FFTW
-
-On Salomon, we have currently following toolchains installed:
-
-| Toolchain | Module(s)                                      |
-| --------- | ---------------------------------------------- |
-| GCC       | GCC                                            |
-| ictce     | icc, ifort, imkl, impi                         |
-| intel     | GCC, icc, ifort, imkl, impi                    |
-| gompi     | GCC, OpenMPI                                   |
-| goolf     | BLACS, FFTW, GCC, OpenBLAS, OpenMPI, ScaLAPACK |
-| iompi     | OpenMPI, icc, ifort                            |
-| iccifort  | icc, ifort                                     |
diff --git a/mkdocs.yml b/mkdocs.yml
index f7c7db34867743e589978f855d74fb3f08c7f40b..18c7eec90cef0fd50bceeba3d4740e1d90f07dab 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -26,11 +26,11 @@ pages:
       - VNC: general/accessing-the-clusters/graphical-user-interface/vnc.md
       - VPN Access: general/accessing-the-clusters/vpn-access.md
     - Resource Allocation and Job Execution: general/resource_allocation_and_job_execution.md
+  - PRACE User Support: prace.md
   - Salomon Cluster:
     - Introduction: salomon/introduction.md
     - Hardware Overview: salomon/hardware-overview.md
     - Accessing the Cluster: salomon/shell-and-data-access.md
-    - Environment and Modules: salomon/environment-and-modules.md
     - Resource Allocation and Job Execution:
       - Resources Allocation Policy: salomon/resources-allocation-policy.md
       - Job Scheduling: salomon/job-priority.md
@@ -42,12 +42,10 @@ pages:
       - IB Single-Plane Topology: salomon/ib-single-plane-topology.md
       - 7D Enhanced Hypercube: salomon/7d-enhanced-hypercube.md
     - Storage: salomon/storage.md
-    - PRACE User Support: salomon/prace.md
   - Anselm Cluster:
     - Introduction: anselm/introduction.md
     - Hardware Overview: anselm/hardware-overview.md
     - Accessing the Cluster: anselm/shell-and-data-access.md
-    - Environment and Modules: anselm/environment-and-modules.md
     - Resource Allocation and Job Execution:
       - Resource Allocation Policy: anselm/resources-allocation-policy.md
       - Job Priority: anselm/job-priority.md
@@ -56,8 +54,8 @@ pages:
     - Compute Nodes: anselm/compute-nodes.md
     - Storage: anselm/storage.md
     - Network: anselm/network.md
-    - PRACE User Support: anselm/prace.md
   - Software:
+    - Environment and Modules: environment-and-modules.md
     - Modules:
       - Lmod Environment: software/modules/lmod.md
       - Intel Xeon Phi Environment: software/mic/mic_environment.md
@@ -198,4 +196,4 @@ markdown_extensions:
 
 google_analytics:
   - 'UA-90498826-1'
-  - 'auto'
\ No newline at end of file
+  - 'auto'