Commit 91acd06b authored by David Hrbáč

Merge branch '50-ukazky-z-konzoly-nemaji-byt-bash' into 'master'

Resolve "Ukázky z konzoly nemají být bash" (Console examples should not be bash)

Closes #50 and #44

See merge request !117
parents 9a3e252c 520237a8
......@@ -28,7 +28,7 @@ Mellanox
## Mathematical Formulae
Formulas are made with:
### Formulas are made with:
* https://facelessuser.github.io/pymdown-extensions/extensions/arithmatex/
* https://www.mathjax.org/
......@@ -41,7 +41,7 @@ Assume we have 900 input files with name beginning with "file" (e. g. file001, .
First, we create a tasklist file (or subjobs list), listing all tasks (subjobs) - all input files in our example:
```bash
```console
$ find . -name 'file*' > tasklist
```
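For illustration, a minimal jobscript sketch that consumes this tasklist might look as follows. The program `./myprog`, the project ID and the resource request are placeholders, and this is only a sketch of the pattern, not necessarily the exact jobscript from the full page:

```bash
#!/bin/bash
#PBS -A PROJECT_ID
#PBS -q qprod
#PBS -l select=1:ncpus=16,walltime=02:00:00

# each subjob picks the line of tasklist that matches its array index (1-based)
cd $PBS_O_WORKDIR
TASK=$(sed -n "${PBS_ARRAY_INDEX}p" tasklist)

# run the hypothetical serial program on the selected input file
./myprog < "$TASK" > "$TASK.out"
```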
......@@ -78,7 +78,7 @@ If huge number of parallel multicore (in means of multinode multithread, e. g. M
To submit the job array, use the qsub -J command. The 900 jobs of the [example above](capacity-computing/#array_example) may be submitted like this:
```bash
```console
$ qsub -N JOBNAME -J 1-900 jobscript
12345[].dm2
```
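An individual subjob may later be referenced by its array index, e.g. (using the hypothetical job ID returned above):

```console
$ qstat -f 12345[1].dm2
```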
......@@ -87,7 +87,7 @@ In this example, we submit a job array of 900 subjobs. Each subjob will run on f
Sometimes, for testing purposes, you may need to submit a one-element array. This is not allowed by PBSPro, but there is a workaround:
```bash
```console
$ qsub -N JOBNAME -J 9-10:2 jobscript
```
......@@ -97,7 +97,7 @@ This will only choose the lower index (9 in this example) for submitting/running
Check the status of the job array using the qstat command.
```bash
```console
$ qstat -a 12345[].dm2
dm2:
......@@ -110,7 +110,7 @@ Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time
The status B means that some subjobs are already running.
Check the status of the first 100 subjobs using the qstat command.
```bash
```console
$ qstat -a 12345[1-100].dm2
dm2:
......@@ -128,20 +128,20 @@ Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time
Delete the entire job array. Running subjobs will be killed, queueing subjobs will be deleted.
```bash
```console
$ qdel 12345[].dm2
```
Deleting large job arrays may take a while.
Display status information for all user's jobs, job arrays, and subjobs.
```bash
```console
$ qstat -u $USER -t
```
Display status information for all user's subjobs.
```bash
```console
$ qstat -u $USER -tJ
```
......@@ -156,7 +156,7 @@ GNU parallel is a shell tool for executing jobs in parallel using one or more co
For more information and examples see the parallel man page:
```bash
```console
$ module add parallel
$ man parallel
```
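A trivial smoke test of GNU parallel, using plain echo just for illustration (the -k option keeps the output in input order):

```console
$ seq 3 | parallel -k echo Processing task {}
Processing task 1
Processing task 2
Processing task 3
```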
......@@ -171,7 +171,7 @@ Assume we have 101 input files with name beginning with "file" (e. g. file001, .
First, we create a tasklist file, listing all tasks - all input files in our example:
```bash
```console
$ find . -name 'file*' > tasklist
```
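Inside the jobscript, the tasklist is then typically fed to GNU parallel, one task per core. A minimal sketch, assuming a hypothetical serial program `./myprog` and 16 cores on the node:

```bash
#!/bin/bash
#PBS -A PROJECT_ID
#PBS -q qprod
#PBS -l select=1:ncpus=16,walltime=02:00:00

cd $PBS_O_WORKDIR
# {} expands to one line (input file name) from tasklist; 16 tasks run at a time
cat tasklist | parallel -j 16 './myprog < {} > {}.out'
```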
......@@ -209,7 +209,7 @@ In this example, tasks from tasklist are executed via the GNU parallel. The jobs
To submit the job, use the qsub command. The 101-task job of the [example above](capacity-computing/#gp_example) may be submitted like this:
```bash
```console
$ qsub -N JOBNAME jobscript
12345.dm2
```
......@@ -239,13 +239,13 @@ Assume we have 992 input files with name beginning with "file" (e. g. file001, .
First, we create a tasklist file, listing all tasks - all input files in our example:
```bash
```console
$ find . -name 'file*' > tasklist
```
Next, we create a file controlling how many tasks will be executed in one subjob
```bash
```console
$ seq 32 > numtasks
```
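Each subjob of the combined job array then processes its own 32-task slice of the tasklist. One possible way to map the array index and numtasks to global task numbers inside the jobscript, shown as a sketch with a hypothetical program `./myprog` (not the exact script from the examples archive):

```bash
# PBS_ARRAY_INDEX is 1, 33, 65, ... (array submitted with step 32); numtasks holds 1..32
cd $PBS_O_WORKDIR
cat numtasks | parallel -j 16 \
  'TASK=$(sed -n "$(( {} + PBS_ARRAY_INDEX - 1 ))p" tasklist); [ -n "$TASK" ] && ./myprog < "$TASK" > "$TASK.out"'
```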
......@@ -294,7 +294,7 @@ When deciding this values, think about following guiding rules:
To submit the job array, use the qsub -J command. The 992-task job of the [example above](capacity-computing/#combined_example) may be submitted like this:
```bash
```console
$ qsub -N JOBNAME -J 1-992:32 jobscript
12345[].dm2
```
......@@ -310,7 +310,7 @@ Download the examples in [capacity.zip](capacity.zip), illustrating the above li
Unzip the archive in an empty directory on Anselm and follow the instructions in the README file
```bash
```console
$ unzip capacity.zip
$ cat README
```
......@@ -85,7 +85,7 @@ Anselm is equipped with Intel Sandy Bridge processors Intel Xeon E5-2665 (nodes
Nodes equipped with the Intel Xeon E5-2665 CPU have the PBS resource attribute cpu_freq set to 24; nodes equipped with the Intel Xeon E5-2470 CPU have cpu_freq set to 23.
```bash
```console
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16:cpu_freq=24 -I
```
......@@ -93,8 +93,8 @@ In this example, we allocate 4 nodes, 16 cores at 2.4GHhz per node.
Intel Turbo Boost Technology is used by default; you can disable it for all nodes of the job by using the resource attribute cpu_turbo_boost.
```bash
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16 -l cpu_turbo_boost=0 -I
```console
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16 -l cpu_turbo_boost=0 -I
```
## Memory Architecture
......@@ -4,7 +4,9 @@
After logging in, you may want to configure the environment. Write your preferred path definitions, aliases, functions and module loads in the .bashrc file
```bash
```console
$ cat .bashrc
# .bashrc
# Source global definitions
......@@ -39,33 +41,33 @@ The modules may be loaded, unloaded and switched, according to momentary needs.
To check the available modules, use
```bash
$ module avail
```console
$ module avail **or** ml av
```
To load a module, for example the octave module, use
```bash
$ module load octave
```console
$ module load octave **or** ml octave
```
Loading the octave module will set up paths and environment variables of your active shell so that you are ready to run the octave software.
To check the loaded modules, use
```bash
$ module list
```console
$ module list **or** ml
```
To unload a module, for example the octave module, use
```bash
$ module unload octave
```console
$ module unload octave **or** ml -octave
```
Learn more about modules by reading the module man page
```bash
```console
$ man module
```
......@@ -79,7 +81,7 @@ PrgEnv-intel sets up the INTEL development environment in conjunction with the I
All application modules on the Salomon cluster (and, in the future, on the other clusters) will be built using a tool called [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). If you want to use applications that are already built by EasyBuild, you have to modify your MODULEPATH environment variable.
```bash
```console
export MODULEPATH=$MODULEPATH:/apps/easybuild/modules/all/
```
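To make the change persistent across logins, the same line can be appended to your ~/.bashrc, for example (the single quotes keep the variable unexpanded until the file is sourced):

```console
$ echo 'export MODULEPATH=$MODULEPATH:/apps/easybuild/modules/all/' >> ~/.bashrc
```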
......@@ -16,7 +16,7 @@ When allocating computational resources for the job, please specify
Submit the job using the qsub command:
```bash
```console
$ qsub -A Project_ID -q queue -l select=x:ncpus=y,walltime=[[hh:]mm:]ss[.ms] jobscript
```
......@@ -24,25 +24,25 @@ The qsub submits the job into the queue, in another words the qsub command creat
### Job Submission Examples
```bash
```console
$ qsub -A OPEN-0-0 -q qprod -l select=64:ncpus=16,walltime=03:00:00 ./myjob
```
In this example, we allocate 64 nodes, 16 cores per node, for 3 hours. We allocate these resources via the qprod queue; consumed resources will be accounted to the project identified by Project ID OPEN-0-0. The jobscript myjob will be executed on the first node in the allocation.
```bash
```console
$ qsub -q qexp -l select=4:ncpus=16 -I
```
In this example, we allocate 4 nodes, 16 cores per node, for 1 hour. We allocate these resources via the qexp queue. The resources will be available interactively.
```bash
```console
$ qsub -A OPEN-0-0 -q qnvidia -l select=10:ncpus=16 ./myjob
```
In this example, we allocate 10 NVIDIA-accelerated nodes, 16 cores per node, for 24 hours. We allocate these resources via the qnvidia queue. The jobscript myjob will be executed on the first node in the allocation.
```bash
```console
$ qsub -A OPEN-0-0 -q qfree -l select=10:ncpus=16 ./myjob
```
......@@ -50,13 +50,13 @@ In this example, we allocate 10 nodes, 16 cores per node, for 12 hours. We alloc
All qsub options may be [saved directly into the jobscript](#example-jobscript-for-mpi-calculation-with-preloaded-inputs). In such a case, no options to qsub are needed.
```bash
```console
$ qsub ./myjob
```
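For example, the options from the first submission example above could be embedded directly as #PBS directives at the top of the jobscript. This is only a sketch reusing the earlier example values; `./myprog` is a hypothetical program:

```bash
#!/bin/bash
#PBS -A OPEN-0-0
#PBS -q qprod
#PBS -l select=64:ncpus=16,walltime=03:00:00
#PBS -N JOBNAME

# commands executed on the first allocated node follow
cd $PBS_O_WORKDIR
./myprog
```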
By default, the PBS batch system sends an e-mail only when the job is aborted. Disabling mail events completely can be done like this:
```bash
```console
$ qsub -m n
```
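Conversely, the standard PBS mail options can request notification when the job aborts, begins, and ends, for example (the address is a placeholder):

```console
$ qsub -m abe -M your_email@example.com ./myjob
```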
......@@ -66,8 +66,8 @@ $ qsub -m n
Specific nodes may be allocated via the PBS
```bash
qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16:host=cn171+1:ncpus=16:host=cn172 -I
```console
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16:host=cn171+1:ncpus=16:host=cn172 -I
```
In this example, we allocate nodes cn171 and cn172, all 16 cores per node, for 24 hours. Consumed resources will be accounted to the Project identified by Project ID OPEN-0-0. The resources will be available interactively.
......@@ -81,7 +81,7 @@ Nodes equipped with Intel Xeon E5-2665 CPU have base clock frequency 2.4GHz, nod
| Intel Xeon E5-2665 | 2.4GHz | cn[1-180], cn[208-209] | 24 |
| Intel Xeon E5-2470 | 2.3GHz | cn[181-207] | 23 |
```bash
```console
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16:cpu_freq=24 -I
```
......@@ -95,8 +95,8 @@ Nodes sharing the same switch may be selected via the PBS resource attribute ibs
We recommend allocating the compute nodes of a single switch when the best possible computational network performance is required to run the job efficiently:
```bash
qsub -A OPEN-0-0 -q qprod -l select=18:ncpus=16:ibswitch=isw11 ./myjob
```console
$ qsub -A OPEN-0-0 -q qprod -l select=18:ncpus=16:ibswitch=isw11 ./myjob
```
In this example, we request all the 18 nodes sharing the isw11 switch for 24 hours. Full chassis will be allocated.
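One way to check which switch a particular node belongs to is to inspect its PBS attributes, for example with the standard pbsnodes command (an assumption about how the ibswitch resource is exposed, shown for illustration):

```console
$ pbsnodes cn171 | grep ibswitch
```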
......@@ -109,8 +109,8 @@ Intel Turbo Boost Technology is on by default. We strongly recommend keeping the
If necessary (such as in the case of benchmarking), you can disable Turbo Boost for all nodes of the job by using the PBS resource attribute cpu_turbo_boost
```bash
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16 -l cpu_turbo_boost=0 -I
```console
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16 -l cpu_turbo_boost=0 -I
```
More about Intel Turbo Boost can be found in the TurboBoost section
......@@ -119,8 +119,8 @@ More about the Intel Turbo Boost in the TurboBoost section
In the following example, we select an allocation for benchmarking a very special and demanding MPI program. We request Turbo off, 2 full chassis of compute nodes (nodes sharing the same IB switches) for 30 minutes:
```bash
$ qsub -A OPEN-0-0 -q qprod
```console
$ qsub -A OPEN-0-0 -q qprod \
  -l select=18:ncpus=16:ibswitch=isw10:mpiprocs=1:ompthreads=16+18:ncpus=16:ibswitch=isw20:mpiprocs=16:ompthreads=1 \
  -l cpu_turbo_boost=0,walltime=00:30:00 \
  -N Benchmark ./mybenchmark
......@@ -135,7 +135,7 @@ Although this example is somewhat artificial, it demonstrates the flexibility of
!!! note
Check status of your jobs using the **qstat** and **check-pbs-jobs** commands
```bash
```console
$ qstat -a
$ qstat -a -u username
$ qstat -an -u username
......@@ -144,7 +144,7 @@ $ qstat -f 12345.srv11
Example:
```bash
```console
$ qstat -a
srv11:
......@@ -160,19 +160,17 @@ In this example user1 and user2 are running jobs named job1, job2 and job3x. The
Check the status of your jobs using the check-pbs-jobs command. It checks for the presence of the user's PBS job processes on the execution hosts, displays load and processes, displays the job's standard and error output, and can continuously display (tail -f) the job's standard or error output.
```bash
```console
$ check-pbs-jobs --check-all
$ check-pbs-jobs --print-load --print-processes
$ check-pbs-jobs --print-job-out --print-job-err
$ check-pbs-jobs --jobid JOBID --check-all --print-all
$ check-pbs-jobs --jobid JOBID --tailf-job-out
```
Examples:
```bash
```console
$ check-pbs-jobs --check-all
JOB 35141.dm2, session_id 71995, user user2, nodes cn164,cn165
Check session id: OK
......@@ -183,7 +181,7 @@ cn165: No process
In this example we see that job 35141.dm2 currently runs no process on allocated node cn165, which may indicate an execution error.
```bash
```console
$ check-pbs-jobs --print-load --print-processes
JOB 35141.dm2, session_id 71995, user user2, nodes cn164,cn165
Print load
......@@ -199,7 +197,7 @@ cn164: 99.7 run-task
In this example we see that job 35141.dm2 currently runs process run-task on node cn164, using one thread only, while node cn165 is empty, which may indicate an execution error.
```bash
```console
$ check-pbs-jobs --jobid 35141.dm2 --print-job-out
JOB 35141.dm2, session_id 71995, user user2, nodes cn164,cn165
Print job standard output:
......@@ -218,19 +216,19 @@ In this example, we see actual output (some iteration loops) of the job 35141.dm
You may release your allocation at any time, using the qdel command
```bash
```console
$ qdel 12345.srv11
```
You may kill a running job by force, using the qsig command
```bash
```console
$ qsig -s 9 12345.srv11
```
Learn more by reading the pbs man page
```bash
```console
$ man pbs_professional
```
......@@ -246,7 +244,7 @@ The Jobscript is a user made script, controlling sequence of commands for execut
!!! note
The jobscript or interactive shell is executed on first of the allocated nodes.
```bash
```console
$ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob
$ qstat -n -u username
......@@ -262,7 +260,7 @@ In this example, the nodes cn17, cn108, cn109 and cn110 were allocated for 1 hou
The jobscript or interactive shell is by default executed in the home directory
```bash
```console
$ qsub -q qexp -l select=4:ncpus=16 -I
qsub: waiting for job 15210.srv11 to start
qsub: job 15210.srv11 ready
......@@ -280,7 +278,7 @@ The allocated nodes are accessible via ssh from login nodes. The nodes may acces
Calculations on allocated nodes may be executed remotely via MPI, ssh, pdsh, or clush. You may find out which nodes belong to the allocation by reading the $PBS_NODEFILE file
```bash
```console
qsub -q qexp -l select=4:ncpus=16 -I
qsub: waiting for job 15210.srv11 to start
qsub: job 15210.srv11 ready
......@@ -19,7 +19,7 @@ The compute nodes may be accessed via the regular Gigabit Ethernet network inter
## Example
```bash
```console
$ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob
$ qstat -n -u username
Req'd Req'd Elap
......@@ -36,14 +36,14 @@ Most of the information needed by PRACE users accessing the Anselm TIER-1 system
Before you start to use any of the services don't forget to create a proxy certificate from your certificate:
```bash
$ grid-proxy-init
```console
$ grid-proxy-init
```
To check whether your proxy certificate is still valid (by default it is valid for 12 hours), use:
```bash
$ grid-proxy-info
```console
$ grid-proxy-info
```
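If the default 12-hour lifetime is too short, the proxy validity can be set explicitly when the proxy is created, using the standard grid-proxy-init option (24:00 is just an example value):

```console
$ grid-proxy-init -valid 24:00
```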
To access Anselm cluster, two login nodes running GSI SSH service are available. The service is available from public Internet as well as from the internal PRACE network (accessible only from other PRACE partners).
......@@ -58,14 +58,14 @@ It is recommended to use the single DNS name anselm-prace.it4i.cz which is distr
| login1-prace.anselm.it4i.cz | 2222 | gsissh | login1 |
| login2-prace.anselm.it4i.cz | 2222 | gsissh | login2 |
```bash
$ gsissh -p 2222 anselm-prace.it4i.cz
```console
$ gsissh -p 2222 anselm-prace.it4i.cz
```
When logging in from another PRACE system, the prace_service script can be used:
```bash
$ gsissh `prace_service -i -s anselm`
```console
$ gsissh `prace_service -i -s anselm`
```
#### Access From Public Internet:
......@@ -78,26 +78,26 @@ It is recommended to use the single DNS name anselm.it4i.cz which is distributed
| login1.anselm.it4i.cz | 2222 | gsissh | login1 |
| login2.anselm.it4i.cz | 2222 | gsissh | login2 |
```bash
$ gsissh -p 2222 anselm.it4i.cz
```console
$ gsissh -p 2222 anselm.it4i.cz
```
When logging in from another PRACE system, the prace_service script can be used:
```bash
$ gsissh `prace_service -e -s anselm`
```console
$ gsissh `prace_service -e -s anselm`
```
Although the preferred and recommended file transfer mechanism is [using GridFTP](prace/#file-transfers), the GSI SSH implementation on Anselm also supports SCP, so for small file transfers gsiscp can be used:
```bash
$ gsiscp -P 2222 _LOCAL_PATH_TO_YOUR_FILE_ anselm.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_
```console
$ gsiscp -P 2222 _LOCAL_PATH_TO_YOUR_FILE_ anselm.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_
$ gsiscp -P 2222 anselm.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_ _LOCAL_PATH_TO_YOUR_FILE_
$ gsiscp -P 2222 anselm.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_ _LOCAL_PATH_TO_YOUR_FILE_
$ gsiscp -P 2222 _LOCAL_PATH_TO_YOUR_FILE_ anselm-prace.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_
$ gsiscp -P 2222 _LOCAL_PATH_TO_YOUR_FILE_ anselm-prace.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_
$ gsiscp -P 2222 anselm-prace.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_ _LOCAL_PATH_TO_YOUR_FILE_
$ gsiscp -P 2222 anselm-prace.it4i.cz:_ANSELM_PATH_TO_YOUR_FILE_ _LOCAL_PATH_TO_YOUR_FILE_
```
### Access to X11 Applications (VNC)
......@@ -106,8 +106,8 @@ If the user needs to run X11 based graphical application and does not have a X11
If the user uses GSI SSH based access, then the procedure is similar to the SSH based access, only the port forwarding must be done using GSI SSH:
```bash
$ gsissh -p 2222 anselm.it4i.cz -L 5961:localhost:5961
```console
$ gsissh -p 2222 anselm.it4i.cz -L 5961:localhost:5961
```
### Access With SSH
......@@ -133,26 +133,26 @@ There's one control server and three backend servers for striping and/or backup
Copy files **to** Anselm by running the following commands on your local machine:
```bash
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp-prace.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
```console
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp-prace.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
```
Or by using prace_service script:
```bash
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -i -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
```console
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -i -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
```
Copy files **from** Anselm:
```bash
$ globus-url-copy gsiftp://gridftp-prace.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```console
$ globus-url-copy gsiftp://gridftp-prace.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```
Or by using prace_service script:
```bash
$ globus-url-copy gsiftp://`prace_service -i -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```console
$ globus-url-copy gsiftp://`prace_service -i -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```
### Access From Public Internet
......@@ -166,26 +166,26 @@ Or by using prace_service script:
Copy files **to** Anselm by running the following commands on your local machine:
```bash
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
```console
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
```
Or by using prace_service script:
```bash
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -e -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
```console
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -e -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
```
Copy files **from** Anselm:
```bash
$ globus-url-copy gsiftp://gridftp.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```console
$ globus-url-copy gsiftp://gridftp.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```
Or by using prace_service script:
```bash
$ globus-url-copy gsiftp://`prace_service -e -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```console
$ globus-url-copy gsiftp://`prace_service -e -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```
Generally both shared file systems are available through GridFTP:
......@@ -209,8 +209,8 @@ All system wide installed software on the cluster is made available to the users
PRACE users can load the "prace" module to use the [PRACE Common Production Environment](http://www.prace-ri.eu/prace-common-production-environment/).
```bash
$ module load prace
```console
$ module load prace
```
### Resource Allocation and Job Execution
......@@ -241,8 +241,8 @@ Users who have undergone the full local registration procedure (including signin
!!! hint
The **it4ifree** command is a part of it4i.portal.clients package, [located here](https://pypi.python.org/pypi/it4i.portal.clients).
```bash
$ it4ifree
```console
$ it4ifree
Password:
PID Total Used ...by me Free
-------- ------- ------ -------- -------
......@@ -252,9 +252,9 @@ Users who have undergone the full local registration procedure (including signin
By default, a file system quota is applied. To check the current status of the quota, use
```bash
$ lfs quota -u USER_LOGIN /home
$ lfs quota -u USER_LOGIN /scratch
```console
$ lfs quota -u USER_LOGIN /home
$ lfs quota -u USER_LOGIN /scratch
```
If the quota is insufficient, please contact the [support](prace/#help-and-support) and request an increase.
......@@ -46,7 +46,7 @@ To have the OpenGL acceleration, **24 bit color depth must be used**. Otherwise
This example defines a desktop with dimensions of 1200x700 pixels and a 24 bit color depth.
```bash
```console
$ module load turbovnc/1.2.2
$ vncserver -geometry 1200x700 -depth 24
......@@ -58,7 +58,7 @@ Log file is /home/username/.vnc/login2:1.log
#### 3. Remember Which Display Number Your VNC Server Runs On (You Will Need It in the Future to Stop the Server)
```bash
```console
$ vncserver -list
TurboVNC server sessions:
......@@ -71,7 +71,7 @@ In this example the VNC server runs on display **:1**.
#### 4. Remember the Exact Login Node Where Your VNC Server Runs
```bash
```console
$ uname -n
login2
```
......@@ -82,7 +82,7 @@ In this example the VNC server runs on **login2**.
To get the port, you have to look into the log file of your VNC server.
```bash
```console
$ grep -E "VNC.*port" /home/username/.vnc/login2:1.log
20/02/2015 14:46:41 Listening for VNC connections on TCP port 5901
```
......@@ -93,7 +93,7 @@ In this example the VNC server listens on TCP port **5901**.
Tunnel the TCP port on which your VNC server is listening.
```bash
```console
$ ssh login2.anselm.it4i.cz -L 5901:localhost:5901
```
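If you prefer to keep the tunnel running in the background without opening a remote shell, the standard ssh flags -N and -f can be used with the same port numbers:

```console
$ ssh -N -f login2.anselm.it4i.cz -L 5901:localhost:5901
```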
......@@ -109,7 +109,7 @@ Get it from: <http://sourceforge.net/projects/turbovnc/>
Mind that you should connect through the SSH tunneled port. In this example it is 5901 on your workstation (localhost).
```bash
```console
$ vncviewer localhost:5901
```
......@@ -123,7 +123,7 @@ Now you should have working TurboVNC session connected to your workstation.
Don't forget to correctly shut down your own VNC server on the login node!
```bash
```console
$ vncserver -kill :1
```
......@@ -147,13 +147,13 @@ To access the visualization node, follow these steps:
This step is necessary to allow you to proceed with the next steps.
```bash
```console
$ qsub -I -q qviz -A PROJECT_ID
```
In this example the default values for CPU cores and usage time are used.
```bash
```console
$ qsub -I -q qviz -A PROJECT_ID -l select=1:ncpus=16 -l walltime=02:00:00
```
......@@ -163,7 +163,7 @@ In this example a whole node for 2 hours is requested.
If there are free resources for your request, you will have a shell running on an assigned node. Please remember the name of the node.
```bash
```console
$ uname -n
srv8
```
......@@ -174,7 +174,7 @@ In this example the visualization session was assigned to node **srv8**.
Set up the VirtualGL connection to the node that PBSPro allocated for our job.
```bash