diff --git a/docs.it4i/anselm-cluster-documentation/capacity-computing.md b/docs.it4i/anselm-cluster-documentation/capacity-computing.md
index 3addeac9ee24bef2d48e5da71c8e4fd3d23edd2c..5913a27fd33101fb57c4e5b26f9861bb44ec14ad 100644
--- a/docs.it4i/anselm-cluster-documentation/capacity-computing.md
+++ b/docs.it4i/anselm-cluster-documentation/capacity-computing.md
@@ -12,15 +12,17 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 
 -   Use [Job arrays](capacity-computing/#job-arrays) when running huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
 -   Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
--   Combine[GNU parallel with Job arrays](capacity-computing/#combining-job-arrays-and-gnu-parallel) when running huge number of single core jobs
+-   Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running huge number of single core jobs
 
 Policy
 ------
+
 1.  A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
 2.  The array size is at most 1000 subjobs.
 
 Job arrays
 --------------
+
 !!! Note "Note"
 	Huge number of jobs may be easily submitted and managed as a job array.
 
@@ -146,10 +148,11 @@ Display status information for all user's subjobs.
 $ qstat -u $USER -tJ
 ```
 
-Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/sitemap/).
+Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/).
 
 GNU parallel
 ----------------
+
 !!! Note "Note"
 	Use GNU parallel to run many single core tasks on one node.
 
@@ -220,7 +223,8 @@ In this example, we submit a job of 101 tasks. 16 input files will be processed
 Please note the #PBS directives in the beginning of the jobscript file, dont' forget to set your valid PROJECT_ID and desired queue.
 
 Job arrays and GNU parallel
--------------------------------
+---------------------------
+
 !!! Note "Note"
 	Combine the Job arrays and GNU parallel for best throughput of single core jobs
 
@@ -305,6 +309,7 @@ Please note the #PBS directives in the beginning of the jobscript file, dont' fo
 
 Examples
 --------
+
 Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run huge number of jobs. We recommend to try out the examples, before using this for running production jobs.
 
 Unzip the archive in an empty directory on Anselm and follow the instructions in the README file
diff --git a/docs.it4i/anselm-cluster-documentation/job-priority.md b/docs.it4i/anselm-cluster-documentation/job-priority.md
index 347926c31b3b54f41a62f1a9ccc6b474052507d3..cb378282a7ae6c389de74fb6722219cbb7d3d471 100644
--- a/docs.it4i/anselm-cluster-documentation/job-priority.md
+++ b/docs.it4i/anselm-cluster-documentation/job-priority.md
@@ -28,13 +28,17 @@ Fair-share priority is used for ranking jobs with equal queue priority.
 
 Fair-share priority is calculated as
 
-![](../../img/fairshare_formula.png)
+![](../img/fairshare_formula.png)
 
-where MAX_FAIRSHARE has value 1E6, usage~Project~ is cumulated usage by all members of selected project, usage~Total~ is total usage by all users, by all projects.
+where MAX_FAIRSHARE has the value 1E6,
+usage*Project* is the cumulated usage by all members of the selected project,
+usage*Total* is the total usage by all users, across all projects.
 
-Usage counts allocated core hours (ncpus*walltime). Usage is decayed, or cut in half periodically, at the interval 168 hours (one week). Jobs queued in queue qexp are not calculated to project's usage.
+Usage counts allocated core-hours (`ncpus x walltime`). Usage is decayed, or cut in half periodically, at an interval of 168 hours (one week).
+Jobs queued in the qexp queue are not counted toward the project's usage.
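+
+As a LaTeX sketch (our reading of the pictured formula; the image above remains authoritative):
+
+```latex
+\mathrm{fairshare\_priority} = \mathrm{MAX\_FAIRSHARE} \times \left( 1 - \frac{\mathrm{usage}_{Project}}{\mathrm{usage}_{Total}} \right)
+```
+
+For example, a job allocating 2 nodes (2 x 16 = 32 cores) for 10 hours adds 320 core-hours to usage*Project*; after one week of decay only 160 of those core-hours still count, after two weeks 80, and so on.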
 
->Calculated usage and fair-share priority can be seen at <https://extranet.it4i.cz/anselm/projects>.
+!!! Note "Note"
+	Calculated usage and fair-share priority can be seen at <https://extranet.it4i.cz/anselm/projects>.
 
 Calculated fair-share priority can be also seen as Resource_List.fairshare attribute of a job.
 
@@ -50,7 +54,7 @@ Eligible time can be seen as eligible_time attribute of job.
 
 Job execution priority (job sort formula) is calculated as:
 
-![](../../img/job_sort_formula.png)
+![](../img/job_sort_formula.png)
 
 ### Job backfilling
 
diff --git a/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md b/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md
index 3d3bb34e62130c48205c35956ac487c80aee8055..b42196620b299fbc61bb2b83b85fa99b7ef78ec2 100644
--- a/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md
+++ b/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md
@@ -79,7 +79,7 @@ In this example, we allocate nodes cn171 and cn172, all 16 cores per node, for 2
 Nodes equipped with Intel Xeon E5-2665 CPU have base clock frequency 2.4GHz, nodes equipped with Intel Xeon E5-2470 CPU have base frequency 2.3 GHz (see section Compute Nodes for details).  Nodes may be selected via the PBS resource attribute cpu_freq .
 
 |CPU Type|base freq.|Nodes|cpu_freq attribute|
-|---|---|
+|---|---|---|---|
 |Intel Xeon E5-2665|2.4GHz|cn[1-180], cn[208-209]|24|
 |Intel Xeon E5-2470|2.3GHz|cn[181-207]|23|
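+
+For example, a submission restricted to the 2.4 GHz nodes could look like this (a sketch; PROJECT_ID, queue, node count and jobscript are placeholders):
+
+```bash
+$ qsub -A PROJECT_ID -q qprod -l select=4:ncpus=16:cpu_freq=24 ./myjob
+```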
 
diff --git a/docs.it4i/anselm-cluster-documentation/remote-visualization.md b/docs.it4i/anselm-cluster-documentation/remote-visualization.md
index a7de22ab470a65b4c628856fecf98939a52e7a1b..0de276a0a1f432b62d48844b5de5ec5eced32487 100644
--- a/docs.it4i/anselm-cluster-documentation/remote-visualization.md
+++ b/docs.it4i/anselm-cluster-documentation/remote-visualization.md
@@ -38,7 +38,7 @@ The procedure is:
 
 #### 1. Connect to a login node.
 
-Please follow the documentation.
+Please [follow the documentation](shell-and-data-access/).
 
 #### 2. Run your own instance of TurboVNC server.
 
@@ -133,8 +133,8 @@ Access the visualization node
 **To access the node use a dedicated PBS Professional scheduler queue
 qviz**. The queue has following properties:
 
- |queue |active project |project resources |nodes|min ncpus*|priority|authorization|walltime |
- | --- | --- |
+ |queue |active project |project resources |nodes|min ncpus|priority|authorization|walltime |
+ | --- | --- | --- | --- | --- | --- | --- | --- |
  |**qviz** Visualization queue |yes |none required |2 |4 |150 |no |1 hour / 8 hours |
 
 Currently when accessing the node, each user gets 4 cores of a CPU allocated, thus approximately 16 GB of RAM and 1/4 of the GPU capacity. *If more GPU power or RAM is required, it is recommended to allocate one whole node per user, so that all 16 cores, whole RAM and whole GPU is exclusive. This is currently also the maximum allowed allocation per one user. One hour of work is allocated by default, the user may ask for 2 hours maximum.*
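+
+A minimal interactive request for the qviz queue might look as follows (a sketch; PROJECT_ID is a placeholder, and the walltime may be raised up to the 2 hour maximum mentioned above):
+
+```bash
+$ qsub -I -q qviz -A PROJECT_ID -l select=1:ncpus=4 -l walltime=01:00:00
+```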
diff --git a/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md b/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
index 90edd08ccd20e6c0af2490f66c6f91107b32d128..762904ae552c1f43f8969d3b4f7fc52a48cdc344 100644
--- a/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
+++ b/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md
@@ -17,7 +17,7 @@ The resources are allocated to the job in a fair-share fashion, subject to const
  |<strong>qfree</strong> |yes |none required |178 w/o accelerator |16 |-1024 |no |12h |
 
 !!! Note "Note"
-	**The qfree queue is not free of charge**. [Normal accounting](resources-allocation-policy.html#resources-accounting-policy) applies. However, it allows for utilization of free resources, once a Project exhausted all its allocated computational resources. This does not apply for Directors Discreation's projects (DD projects) by default. Usage of qfree after exhaustion of DD projects computational resources is allowed after request for this queue.
+	**The qfree queue is not free of charge**. [Normal accounting](#resources-accounting-policy) applies. However, it allows for utilization of free resources, once a Project has exhausted all its allocated computational resources. This does not apply to Director's Discretion projects (DD projects) by default. Usage of qfree after exhaustion of a DD project's computational resources is allowed upon request for this queue.
 
     **The qexp queue is equipped with the nodes not having the very same CPU clock speed.** Should you need the very same CPU speed, you have to select the proper nodes during the PSB job submission.
 
diff --git a/docs.it4i/anselm-cluster-documentation/storage.md b/docs.it4i/anselm-cluster-documentation/storage.md
index 545bc2d4a9fb0004c52519e24c8414c6e559c6a1..c93d61246011a6a4b5189092f9f8659f73df4737 100644
--- a/docs.it4i/anselm-cluster-documentation/storage.md
+++ b/docs.it4i/anselm-cluster-documentation/storage.md
@@ -110,7 +110,7 @@ The HOME filesystem is mounted in directory /home. Users home directories /home/
 
 The HOME filesystem should not be used to archive data of past Projects or other unrelated data.
 
-The files on HOME filesystem will not be deleted until end of the [users lifecycle](../../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/).
+The files on the HOME filesystem will not be deleted until the end of the [user's lifecycle](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/).
 
 The filesystem is backed up, such that it can be restored in case of catasthropic failure resulting in significant data loss. This backup however is not intended to restore old versions of user data or to restore (accidentaly) deleted files.
 
@@ -314,7 +314,7 @@ Summary
 ----------
 
 |Mountpoint|Usage|Protocol|Net Capacity|Throughput|Limitations|Access|Services|
-|---|---|
+|---|---|---|---|---|---|---|---|
 |/home|home directory|Lustre|320 TiB|2 GB/s|Quota 250GB|Compute and login nodes|backed up|
 |/scratch|cluster shared jobs' data|Lustre|146 TiB|6 GB/s|Quota 100TB|Compute and login nodes|files older 90 days removed|
 |/lscratch|node local jobs' data|local|330 GB|100 MB/s|none|Compute nodes|purged after job ends|
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md
index 63b3ec92072ba31166da07b1e99ca9a7d4b5496a..657cc412b2daaeff5c496f32becf1f56da4ac210 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md
@@ -19,6 +19,7 @@ Verify:
 
 Start vncserver
 ---------------
+
 !!! Note "Note"
 	To access VNC a local vncserver must be  started first and also a tunnel using SSH port forwarding must be established.
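+
+In outline (a sketch only; the display number, port and hostname are placeholders, and the detailed cluster-specific steps are described in the sections below):
+
+```bash
+# on the cluster login node: start a VNC server on display :1 (TCP port 5901)
+$ vncserver :1 -geometry 1600x900 -depth 16
+
+# on your local machine: forward local port 5901 to the VNC server on the login node
+$ ssh -L 5901:localhost:5901 username@cluster-login-node
+```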
 
diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/introduction.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/introduction.md
index e2d83a559056c8bd6f2843595668831dc0366dd5..6ebeac33092d3ae6ce76ca7cc9aac6ba2c1cf825 100644
--- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/introduction.md
+++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/introduction.md
@@ -16,3 +16,9 @@ SSH keys
 
 Read more about [SSH keys management](shell-access-and-data-transfer/ssh-keys/).
 
+Graphical User Interface
+------------------------
+
+Read more about [X Window System](./graphical-user-interface/x-window-system/).
+
+Read more about [Virtual Network Computing (VNC)](./graphical-user-interface/vnc/).
diff --git a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
index a401ccb4c389c2470b370c6caf11f28df45c40c1..8998d931afbd15a9aef4ab13d61911dc016cc370 100644
--- a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
+++ b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md
@@ -155,12 +155,16 @@ Installation of the Certificate Into Your Mail Client
 -----------------------------------------------------
 
 The procedure is similar to the following guides:
--   MS Outlook 2010
-    -   [How to Remove, Import, and Export Digital certificates](http://support.microsoft.com/kb/179380)
-    -   [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/outl-cert-imp)
--   Mozilla Thudnerbird
-    -   [Installing an SMIME certificate](http://kb.mozillazine.org/Installing_an_SMIME_certificate)
-    -   [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-imp)
+
+MS Outlook 2010
+
+-   [How to Remove, Import, and Export Digital certificates](http://support.microsoft.com/kb/179380)
+-   [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/outl-cert-imp)
+
+Mozilla Thunderbird
+
+-   [Installing an SMIME certificate](http://kb.mozillazine.org/Installing_an_SMIME_certificate)
+-   [Importing a PKCS #12 certificate (in Czech)](http://idoc.vsb.cz/xwiki/wiki/infra/view/uzivatel/moz-cert-imp)
 
 End of User Account Lifecycle
 -----------------------------
@@ -173,4 +177,4 @@ User will get 3 automatically generated warning e-mail messages of the pending r
 -   Second message will be sent 1 month before the removal
 -   Third message will be sent 1 week before the removal.
 
-The messages will inform about the projected removal date and will challenge the user to migrate her/his data
\ No newline at end of file
+The messages will inform the user of the projected removal date and will prompt her/him to migrate the data.
diff --git a/docs.it4i/salomon/capacity-computing.md b/docs.it4i/salomon/capacity-computing.md
index 28c57a64e27567b5670b59a84330782fd47bd563..831a1451f514e1b023ade80b0f89e12857b2e3a5 100644
--- a/docs.it4i/salomon/capacity-computing.md
+++ b/docs.it4i/salomon/capacity-computing.md
@@ -3,6 +3,7 @@ Capacity computing
 
 Introduction
 ------------
+
 In many cases, it is useful to submit huge (100+) number of computational jobs into the PBS queue system. Huge number of (small) jobs is one of the most effective ways to execute embarrassingly parallel calculations, achieving best runtime, throughput and computer utilization.
 
 However, executing huge number of jobs via the PBS queue may strain the system. This strain may result in slow response to commands, inefficient scheduling and overall degradation of performance and user experience, for all users. For this reason, the number of jobs is **limited to 100 per user, 1500 per job array**
@@ -12,15 +13,17 @@ However, executing huge number of jobs via the PBS queue may strain the system.
 
 -   Use [Job arrays](capacity-computing.md#job-arrays) when running huge number of [multithread](capacity-computing/#shared-jobscript-on-one-node) (bound to one node only) or multinode (multithread across several nodes) jobs
 -   Use [GNU parallel](capacity-computing/#gnu-parallel) when running single core jobs
--   Combine[GNU parallel with Job arrays](capacity-computing/#combining-job-arrays-and-gnu-parallel) when running huge number of single core jobs
+-   Combine [GNU parallel with Job arrays](capacity-computing/#job-arrays-and-gnu-parallel) when running huge number of single core jobs
 
 Policy
 ------
+
 1.  A user is allowed to submit at most 100 jobs. Each job may be [a job array](capacity-computing/#job-arrays).
 2.  The array size is at most 1000 subjobs.
 
 Job arrays
 --------------
+
 !!! Note "Note"
 	Huge number of jobs may be easily submitted and managed as a job array.
 
@@ -152,6 +155,7 @@ Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/)
 
 GNU parallel
 ----------------
+
 !!! Note "Note"
 	Use GNU parallel to run many single core tasks on one node.
 
@@ -223,6 +227,7 @@ Please note the #PBS directives in the beginning of the jobscript file, dont' fo
 
 Job arrays and GNU parallel
 -------------------------------
+
 !!! Note "Note"
 	Combine the Job arrays and GNU parallel for best throughput of single core jobs
 
@@ -307,6 +312,7 @@ Please note the #PBS directives in the beginning of the jobscript file, dont' fo
 
 Examples
 --------
+
 Download the examples in [capacity.zip](capacity.zip), illustrating the above listed ways to run huge number of jobs. We recommend to try out the examples, before using this for running production jobs.
 
 Unzip the archive in an empty directory on Anselm and follow the instructions in the README file
diff --git a/docs.it4i/salomon/job-priority.md b/docs.it4i/salomon/job-priority.md
index e84eb0a2b1e5b7c3ffd80d41e8b6303f839646fe..bb7d3e6b48aed5222285b5338bd55f09236f73c3 100644
--- a/docs.it4i/salomon/job-priority.md
+++ b/docs.it4i/salomon/job-priority.md
@@ -27,11 +27,15 @@ Fair-share priority is used for ranking jobs with equal queue priority.
 
 Fair-share priority is calculated as
 
-![](../../img/fairshare_formula.png)
+![](../img/fairshare_formula.png)
 
-where MAX_FAIRSHARE has value 1E6, usage~Project~ is cumulated usage by all members of selected project, usage~Total~ is total usage by all users, by all projects.
+where MAX_FAIRSHARE has the value 1E6,
+usage*Project* is the cumulated usage by all members of the selected project,
+usage*Total* is the total usage by all users, across all projects.
 
-Usage counts allocated core hours (ncpus*walltime). Usage is decayed, or cut in half periodically, at the interval 168 hours (one week). Jobs queued in queue qexp are not calculated to project's usage.
+Usage counts allocated core-hours (`ncpus x walltime`). Usage is decayed, or cut in half periodically, at an interval of 168 hours (one week).
+Jobs queued in the qexp queue are not counted toward the project's usage.
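+
+As a LaTeX sketch (our reading of the pictured formula; the image above remains authoritative):
+
+```latex
+\mathrm{fairshare\_priority} = \mathrm{MAX\_FAIRSHARE} \times \left( 1 - \frac{\mathrm{usage}_{Project}}{\mathrm{usage}_{Total}} \right)
+```
+
+For example, a job allocating 48 cores for 10 hours adds 480 core-hours to usage*Project*; after one week of decay only 240 of those core-hours still count, after two weeks 120, and so on.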
 
 !!! Note "Note"
 	Calculated usage and fair-share priority can be seen at <https://extranet.it4i.cz/rsweb/salomon/projects>.
@@ -50,7 +54,7 @@ Eligible time can be seen as eligible_time attribute of job.
 
 Job execution priority (job sort formula) is calculated as:
 
-![](../../img/job_sort_formula.png)
+![](../img/job_sort_formula.png)
 
 ### Job backfilling
 
diff --git a/docs.it4i/salomon/storage.md b/docs.it4i/salomon/storage.md
index 35d125823be1206e7f7c823ebaf8c8603a8b09bc..8d0df248986296701302e3ee571c9f7c48091e11 100644
--- a/docs.it4i/salomon/storage.md
+++ b/docs.it4i/salomon/storage.md
@@ -3,34 +3,36 @@ Storage
 
 Introduction
 ------------
-There are two main shared file systems on Salomon cluster, the [HOME](#home)and [SCRATCH](#shared-filesystems).
 
-All login and compute nodes may access same data on shared filesystems. Compute nodes are also equipped with local (non-shared) scratch, ramdisk and tmp filesystems.
+There are two main shared file systems on the Salomon cluster, the [HOME](#home) and [SCRATCH](#shared-filesystems).
+
+All login and compute nodes may access the same data on the shared file systems. Compute nodes are also equipped with local (non-shared) scratch, ramdisk and tmp file systems.
 
 Policy (in a nutshell)
 ----------------------
-!!! Note "Note"
-	Use [ for your most valuable data and programs.
-    Use [WORK](#work) for your large project files.
-    Use [TEMP](#temp) for large scratch data.
-
+!!! note
+	* Use [HOME](#home) for your most valuable data and programs.
+	* Use [WORK](#work) for your large project files.
+	* Use [TEMP](#temp) for large scratch data.
+
+!!! warning
 	Do not use for [archiving](#archiving)!
 
 Archiving
 -------------
-Please don't use shared filesystems as a backup for large amount of data or long-term archiving mean. The academic staff and students of research institutions in the Czech Republic can use [CESNET storage service](#cesnet-data-storage), which is available via SSHFS.
+
+Please don't use the shared file systems as a backup for large amounts of data or for long-term archiving purposes. The academic staff and students of research institutions in the Czech Republic can use the [CESNET storage service](#cesnet-data-storage), which is available via SSHFS.
 
 Shared Filesystems
 ----------------------
-Salomon computer provides two main shared filesystems, the [ HOME filesystem](#home-filesystem) and the [SCRATCH filesystem](#scratch-filesystem). The SCRATCH filesystem is partitioned to [WORK and TEMP workspaces](#shared-workspaces). The HOME filesystem is realized as a tiered NFS disk storage. The SCRATCH filesystem is realized as a parallel Lustre filesystem. Both shared file systems are accessible via the Infiniband network. Extended ACLs are provided on both HOME/SCRATCH filesystems for the purpose of sharing data with other users using fine-grained control.
+
+The Salomon computer provides two main shared file systems, the [HOME file system](#home-filesystem) and the [SCRATCH file system](#scratch-filesystem). The SCRATCH file system is partitioned into the [WORK and TEMP workspaces](#shared-workspaces). The HOME file system is realized as a tiered NFS disk storage. The SCRATCH file system is realized as a parallel Lustre file system. Both shared file systems are accessible via the InfiniBand network. Extended ACLs are provided on both HOME/SCRATCH file systems for the purpose of sharing data with other users using fine-grained control.
 
-###HOME filesystem
+###HOME file system
 
-The HOME filesystem is realized as a Tiered filesystem, exported via NFS. The first tier has capacity 100 TB, second tier has capacity 400 TB. The filesystem is available on all login and computational nodes. The Home filesystem hosts the [HOME workspace](#home).
+The HOME file system is realized as a tiered file system, exported via NFS. The first tier has a capacity of 100 TB, the second tier has a capacity of 400 TB. The file system is available on all login and computational nodes. The HOME file system hosts the [HOME workspace](#home).
 
-###SCRATCH filesystem
+###SCRATCH file system
 
-The  architecture of Lustre on Salomon is composed of two metadata servers (MDS) and six data/object storage servers (OSS). Accessible capacity is 1.69 PB, shared among all users. The SCRATCH filesystem hosts the [WORK and TEMP workspaces](#shared-workspaces).
+The architecture of Lustre on Salomon is composed of two metadata servers (MDS) and six data/object storage servers (OSS). Accessible capacity is 1.69 PB, shared among all users. The SCRATCH file system hosts the [WORK and TEMP workspaces](#shared-workspaces).
 
 Configuration of the SCRATCH Lustre storage
 
@@ -48,16 +50,16 @@ Configuration of the SCRATCH Lustre storage
 
 (source <http://www.nas.nasa.gov>)
 
-A user file on the Lustre filesystem can be divided into multiple chunks (stripes) and stored across a subset of the object storage targets (OSTs) (disks). The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing.
+A user file on the Lustre file system can be divided into multiple chunks (stripes) and stored across a subset of the object storage targets (OSTs) (disks). The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing.
 
 When a client (a  compute  node from your job) needs to create or access a file, the client queries the metadata server ( MDS) and the metadata target ( MDT) for the layout and location of the [file's stripes](http://www.nas.nasa.gov/hecc/support/kb/Lustre_Basics_224.html#striping). Once the file is opened and the client obtains the striping information, the  MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval.
 
 If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency so that all clients see consistent results.
 
-There is default stripe configuration for Salomon Lustre filesystems. However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance:
+There is a default stripe configuration for the Salomon Lustre file systems. However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance:
 
-1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; default is 1MB for all Salomon Lustre filesystems
-2. stripe_count the number of OSTs to stripe across; default is 1 for Salomon Lustre filesystems  one can specify -1 to use all OSTs in the filesystem.
+1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; the default is 1 MB for all Salomon Lustre file systems.
+2. stripe_count: the number of OSTs to stripe across; the default is 1 for Salomon Lustre file systems; one can specify -1 to use all OSTs in the file system.
 3. stripe_offset The index of the OST where the first stripe is to be placed; default is -1 which results in random selection; using a non-default value is NOT recommended.
 
 !!! Note "Note"
@@ -85,7 +87,7 @@ stripe_count:  -1 stripe_size:    1048576 stripe_offset:  -1
 
 In this example, we view current stripe setting of the /scratch/username/ directory. The stripe count is changed to all OSTs, and verified. All files written to this directory will be striped over all (54) OSTs
 
-Use lfs check OSTs to see the number and status of active OSTs for each filesystem on Salomon. Learn more by reading the man page
+Use `lfs check osts` to see the number and status of active OSTs for each file system on Salomon. Learn more by reading the man page:
 
 ```bash
 $ lfs check osts
@@ -218,14 +220,14 @@ Shared Workspaces
 
 ###HOME
 
-Users home directories /home/username reside on HOME filesystem. Accessible capacity is 0.5 PB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250 GB per user. If 250 GB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request.
+Users' home directories /home/username reside on the HOME file system. Accessible capacity is 0.5 PB, shared among all users. Individual users are restricted by file system usage quotas, set to 250 GB per user. If 250 GB proves insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.
 
 !!! Note "Note"
-	The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active Projects.
+	The HOME file system is intended for preparation, evaluation, processing and storage of data generated by active Projects.
 
 The HOME  should not be used to archive data of past Projects or other unrelated data.
 
-The files on HOME will not be deleted until end of the [users lifecycle](../../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/).
+The files on HOME will not be deleted until the end of the [user's lifecycle](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/).
 
 The workspace is backed up, such that it can be restored in case of catasthropic failure resulting in significant data loss. This backup however is not intended to restore old versions of user data or to restore (accidentaly) deleted files.
 
@@ -239,14 +241,14 @@ The workspace is backed up, such that it can be restored in case of catasthropi
 
 ### WORK
 
-The WORK workspace resides on SCRATCH filesystem.  Users may create subdirectories and files in directories **/scratch/work/user/username** and **/scratch/work/project/projectid. **The /scratch/work/user/username is private to user, much like the home directory. The /scratch/work/project/projectid is accessible to all users involved in project projectid.
+The WORK workspace resides on the SCRATCH file system. Users may create subdirectories and files in directories **/scratch/work/user/username** and **/scratch/work/project/projectid**. The /scratch/work/user/username is private to the user, much like the home directory. The /scratch/work/project/projectid is accessible to all users involved in project projectid.
 
 !!! Note "Note"
 	The WORK workspace is intended  to store users project data as well as for high performance access to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up.
 
-	Files on the WORK filesystem are **persistent** (not automatically deleted) throughout duration of the project.
+	Files on the WORK file system are **persistent** (not automatically deleted) throughout the duration of the project.
 
-The WORK workspace is hosted on SCRATCH filesystem. The SCRATCH is realized as Lustre parallel filesystem and is available from all login and computational nodes. Default stripe size is 1 MB, stripe count is 1. There are 54 OSTs dedicated for the SCRATCH filesystem.
+The WORK workspace is hosted on the SCRATCH file system. The SCRATCH is realized as a Lustre parallel file system and is available from all login and computational nodes. Default stripe size is 1 MB, stripe count is 1. There are 54 OSTs dedicated to the SCRATCH file system.
 
 !!! Note "Note"
 	Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
@@ -264,16 +266,16 @@ The WORK workspace is hosted on SCRATCH filesystem. The SCRATCH is realized as L
 
 ### TEMP
 
-The TEMP workspace resides on SCRATCH filesystem. The TEMP workspace accesspoint is  /scratch/temp.  Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by filesystem usage quotas, set to 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and deny service to other users. >If 100 TB should prove as insufficient for particular user, please contact [support](https://support.it4i.cz/rt), the quota may be lifted upon request.
+The TEMP workspace resides on the SCRATCH file system. The TEMP workspace access point is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire file system and denying service to other users. If 100 TB proves insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.
 
 !!! Note "Note"
 	The TEMP workspace is intended  for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory.
 
 	Users are advised to save the necessary data from the TEMP workspace to HOME or WORK after the calculations and clean up the scratch files.
 
-    Files on the TEMP filesystem that are **not accessed for more than 90 days** will be automatically **deleted**.
+    Files on the TEMP file system that are **not accessed for more than 90 days** will be automatically **deleted**.
 
-The TEMP workspace is hosted on SCRATCH filesystem. The SCRATCH is realized as Lustre parallel filesystem and is available from all login and computational nodes. Default stripe size is 1 MB, stripe count is 1. There are 54 OSTs dedicated for the SCRATCH filesystem.
+The TEMP workspace is hosted on the SCRATCH file system. The SCRATCH is realized as a Lustre parallel file system and is available from all login and computational nodes. Default stripe size is 1 MB, stripe count is 1. There are 54 OSTs dedicated to the SCRATCH file system.
 
 !!! Note "Note"
 	Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
@@ -291,14 +293,14 @@ The TEMP workspace is hosted on SCRATCH filesystem. The SCRATCH is realized as L
 
 RAM disk
 --------
-Every computational node is equipped with filesystem realized in memory, so called RAM disk.
+Every computational node is equipped with a file system realized in memory, the so-called RAM disk.
 
 !!! Note "Note"
-	Use RAM disk in case you need really fast access to your data of limited size during your calculation. Be very careful, use of RAM disk filesystem is at the expense of operational memory.
+	Use the RAM disk in case you need really fast access to your data of limited size during your calculation. Be very careful, use of the RAM disk file system is at the expense of operational memory.
 
 The local RAM disk is mounted as /ramdisk and is accessible to user at /ramdisk/$PBS_JOBID directory.
 
-The local RAM disk filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. Size of RAM disk filesystem is limited. Be very careful, use of RAM disk filesystem is at the expense of operational memory.  It is not recommended to allocate large amount of memory and use large amount of data in RAM disk filesystem at the same time.
+The local RAM disk file system is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. The size of the RAM disk file system is limited. Be very careful, use of the RAM disk file system is at the expense of operational memory. It is not recommended to allocate a large amount of memory and use a large amount of data in the RAM disk file system at the same time.
 
 !!! Note "Note"
 	The local RAM disk directory /ramdisk/$PBS_JOBID will be deleted immediately after the calculation end. Users should take care to save the output data from within the jobscript.
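+
+A minimal jobscript sketch using the RAM disk (the program and file names are placeholders):
+
+```bash
+#!/bin/bash
+# stage input data to the per-job RAM disk, compute there, then copy results back before the job ends
+cp $PBS_O_WORKDIR/input.dat /ramdisk/$PBS_JOBID/
+cd /ramdisk/$PBS_JOBID
+$PBS_O_WORKDIR/myprogram input.dat > output.dat
+cp output.dat $PBS_O_WORKDIR/
+```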
@@ -324,7 +326,7 @@ Summary
 
 CESNET Data Storage
 ------------
-Do not use shared filesystems at IT4Innovations as a backup for large amount of data or long-term archiving purposes.
+Do not use shared file systems at IT4Innovations as a backup for large amounts of data or for long-term archiving purposes.
 
 !!! Note "Note"
 	The IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use [CESNET Storage service](https://du.cesnet.cz/).
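+
+As a generic illustration of working with a remote storage over SSHFS (the host name is a placeholder, not the actual CESNET endpoint; consult the CESNET documentation for the real server names):
+
+```bash
+# mount the remote storage into a local directory, work with the files, then unmount
+$ mkdir -p ~/cesnet
+$ sshfs USERNAME@CESNET-STORAGE-HOST: ~/cesnet
+$ ls ~/cesnet
+$ fusermount -u ~/cesnet
+```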