diff --git a/docs.it4i/anselm-cluster-documentation/remote-visualization.md b/docs.it4i/anselm-cluster-documentation/remote-visualization.md
index d6466340a5daea218cdd9c65e21423f5cc8130bd..2c13509de18fe5b3cedac42e91b48c0605c6d124 100644
--- a/docs.it4i/anselm-cluster-documentation/remote-visualization.md
+++ b/docs.it4i/anselm-cluster-documentation/remote-visualization.md
@@ -38,7 +38,7 @@ The procedure is:
 
 #### 1. Connect to a login node.
 
-Please follow the documentation.
+Please [follow the documentation](shell-and-data-access/).
 
 #### 2. Run your own instance of TurboVNC server.
 
@@ -133,8 +133,8 @@ Access the visualization node
 **To access the node use a dedicated PBS Professional scheduler queue qviz**. The queue has following properties:
 
- |queue |active project |project resources |nodes|min ncpus*|priority|authorization|walltime |
- | --- | --- |
+ |queue |active project |project resources |nodes|min ncpus|priority|authorization|walltime |
+ | --- | --- | --- | --- | --- | --- | --- | --- |
  |**qviz** Visualization queue |yes |none required |2 |4 |150 |no |1 hour / 8 hours |
 
 Currently when accessing the node, each user gets 4 cores of a CPU allocated, thus approximately 16 GB of RAM and 1/4 of the GPU capacity. *If more GPU power or RAM is required, it is recommended to allocate one whole node per user, so that all 16 cores, whole RAM and whole GPU is exclusive. This is currently also the maximum allowed allocation per one user. One hour of work is allocated by default, the user may ask for 2 hours maximum.*
 
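
For context on the corrected `qviz` table: the queue is requested through PBS Professional. A minimal sketch of how an interactive allocation might look, assuming the standard `qsub` interface; `PROJECT_ID` is a placeholder, and the exact select/walltime syntax should be checked against the page's own job submission examples:

```bash
# Default per-user share: 4 cores (~16 GB RAM, 1/4 of the GPU),
# one hour of walltime on the dedicated qviz queue.
$ qsub -I -q qviz -A PROJECT_ID -l select=1:ncpus=4 -l walltime=01:00:00

# Whole-node allocation (all 16 cores, full RAM and GPU) -- the
# maximum allowed per user; the user may ask for up to 2 hours.
$ qsub -I -q qviz -A PROJECT_ID -l select=1:ncpus=16 -l walltime=02:00:00
```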