Commit 24ab3cf0 authored by Lukáš Krupčík's avatar Lukáš Krupčík

add auto filter

parent ef3b6417
# Conversion of the HTML documentation to MD format
html_md.sh ... converts HTML to MD and creates an info folder that records the links to files in the individual folders
html_md.sh -d -html ... removes all HTML files
html_md.sh -d -md ... removes all MD files
filter.txt ... the filtered text
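For example, a minimal usage sketch, assuming the script is run from the repository root:
```
# convert the HTML documentation to MD and create the info folder
$ ./html_md.sh

# clean up: remove all generated MD files
$ ./html_md.sh -d -md
```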
Introduction
============
Welcome to the Anselm supercomputer cluster. The Anselm cluster consists of
209 compute nodes, totaling 3344 compute cores with 15TB RAM and giving
......
Shell access and data transfer
==============================
Interactive Login
-----------------
......@@ -7,7 +8,7 @@ The Anselm cluster is accessed by SSH protocol via login nodes login1
and login2 at address anselm.it4i.cz. The login nodes may be addressed
specifically by prepending the login node name to the address.
| Login address         | Port | Protocol | Login node                                   |
| --------------------- | ---- | -------- | -------------------------------------------- |
| anselm.it4i.cz        | 22   | ssh      | round-robin DNS record for login1 and login2 |
| login1.anselm.it4i.cz | 22   | ssh      | login1                                       |
| login2.anselm.it4i.cz | 22   | ssh      | login2                                       |
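For example, to reach login1 directly (username stands for your own login name):
```
local $ ssh username@login1.anselm.it4i.cz
```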
......@@ -20,12 +21,12 @@ d4:6f:5c:18:f4:3f:70:ef:bc:fc:cc:2b:fd:13:36:b7 (RSA)
 
Private key authentication:
On **Linux** or **Mac**, use
```
local $ ssh -i /path/to/id_rsa username@anselm.it4i.cz
```
If you see the warning message "UNPROTECTED PRIVATE KEY FILE!", use the
following command to restrict the permissions on the private key file:
```
local $ chmod 600 /path/to/id_rsa
```
On **Windows**, use [PuTTY ssh
......@@ -51,7 +52,7 @@ protocols. (Not available yet.) In case large
volumes of data are transferred, use the dedicated data mover node
dm1.anselm.it4i.cz for increased performance.
| Address               | Port | Protocol  |
| --------------------- | ---- | --------- |
| anselm.it4i.cz        | 22   | scp, sftp |
| login1.anselm.it4i.cz | 22   | scp, sftp |
| login2.anselm.it4i.cz | 22   | scp, sftp |
......@@ -67,26 +68,26 @@ be expected. Fast cipher (aes128-ctr) should be used.
If you experience degraded data transfer performance, consult your local
network provider.
On Linux or Mac, use the scp or sftp client to transfer data to Anselm:
```
local $ scp -i /path/to/id_rsa my-local-file username@anselm.it4i.cz:directory/file
```
```
local $ scp -i /path/to/id_rsa -r my-local-dir username@anselm.it4i.cz:directory
```
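As noted above, the fast aes128-ctr cipher should be used. A hedged variant of the transfer forcing that cipher explicitly (-c selects the cipher):
```
local $ scp -c aes128-ctr -i /path/to/id_rsa my-local-file username@anselm.it4i.cz:directory/file
```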
or
```
local $ sftp -o IdentityFile=/path/to/id_rsa username@anselm.it4i.cz
```
A very convenient way to transfer files in and out of Anselm is via the
FUSE filesystem
[sshfs](http://linux.die.net/man/1/sshfs):
```
local $ sshfs -o IdentityFile=/path/to/id_rsa username@anselm.it4i.cz:. mountpoint
```
Using sshfs, the user's Anselm home directory will be mounted on the
local computer, just like an external disk.
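When you are finished, unmount the directory again. A minimal sketch, assuming the standard FUSE userland tools on your local Linux machine:
```
local $ fusermount -u mountpoint
```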
Learn more about ssh, scp and sshfs by reading the manpages:
```
$ man ssh
$ man scp
$ man sshfs
......
Outgoing connections
====================
Connection restrictions
-----------------------
Outgoing connections from Anselm cluster login nodes to the outside
world are restricted to the following ports:
| Port | Protocol |
| ---- | -------- |
......
Storage
=======
There are two main shared file systems on the Anselm cluster, the
[HOME](#home) and [SCRATCH](#scratch). All
......@@ -64,12 +65,12 @@ Use the lfs getstripe command to read the stripe parameters. Use the lfs
setstripe command to set the stripe parameters and obtain optimal I/O
performance. The correct stripe setting depends on your needs and file
access patterns.
```
$ lfs getstripe dir|filename
$ lfs setstripe -s stripe_size -c stripe_count -o stripe_offset dir|filename
```
Example:
```
$ lfs getstripe /scratch/username/
/scratch/username/
stripe_count 1 stripe_size 1048576 stripe_offset -1
......@@ -84,7 +85,7 @@ and verified. All files written to this directory will be striped over
10 OSTs.
Use the lfs check osts command to see the number and status of active OSTs
for each filesystem on Anselm. Learn more by reading the man page:
```
$ lfs check osts
$ man lfs
```
......@@ -223,11 +224,11 @@ Number of OSTs
### Disk usage and quota commands
User quotas on the file systems can be checked and reviewed using the
following command:
```
$ lfs quota dir
```
Example for Lustre HOME directory:
```
$ lfs quota /home
Disk quotas for user user001 (uid 1234):
Filesystem kbytes quota limit grace files quota limit grace
......@@ -239,7 +240,7 @@ Disk quotas for group user001 (gid 1234):
In this example, we see the current quota limit of 250GB, with 300MB
currently used by user001.
Example for Lustre SCRATCH directory:
```
$ lfs quota /scratch
Disk quotas for user user001 (uid 1234):
Filesystem kbytes quota limit grace files quota limit grace
......@@ -253,11 +254,11 @@ currently used by user001.
 
To better understand where exactly the space is used, run the
following command:
```
$ du -hs dir
```
Example for your HOME directory:
```
$ cd /home
$ du -hs * .[a-zA-z0-9]* | grep -E "[0-9]*G|[0-9]*M" | sort -hr
258M cuda-samples
......@@ -272,10 +273,10 @@ is sorted in descending order from largest to smallest
files/directories.
To better understand the previous commands, you can read the
manpages:
```
$ man lfs
```
```
$ man du
```
### Extended ACLs
......@@ -287,7 +288,7 @@ number of named user and named group entries.
ACLs on a Lustre file system work exactly like ACLs on any Linux file
system. They are manipulated with the standard tools in the standard
manner. Below, we create a directory and allow a specific user access.
```
[vop999@login1.anselm ~]$ umask 027
[vop999@login1.anselm ~]$ mkdir test
[vop999@login1.anselm ~]$ ls -ld test
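# A hedged continuation sketch; the login johnsm is hypothetical and
# stands for the user being granted access to the directory.
[vop999@login1.anselm ~]$ setfacl -m user:johnsm:rwx test
[vop999@login1.anselm ~]$ getfacl test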
......@@ -388,7 +389,7 @@ files in /tmp directory are automatically purged.
| Mountpoint  | Usage                     | Protocol | Net Capacity | Throughput | Limitations | Access                  | Services                    |
| ----------- | ------------------------- | -------- | ------------ | ---------- | ----------- | ----------------------- | --------------------------- |
| `/home`     | home directory            | Lustre   | 320 TiB      | 2 GB/s     | Quota 250GB | Compute and login nodes | backed up                   |
| `/scratch`  | cluster shared jobs' data | Lustre   | 146 TiB      | 6 GB/s     | Quota 100TB | Compute and login nodes | files older 90 days removed |
| `/lscratch` | node local jobs' data     | local    | 330 GB       | 100 MB/s   | none        | Compute nodes           | purged after job ends       |
......
VPN Access
==========
Accessing IT4Innovations internal resources via VPN
---------------------------------------------------
**Failed to initialize connection subsystem Win 8.1 - 02-10-15 MS
patch**
A workaround can be found at
......@@ -20,7 +21,7 @@ the following operating systems:
- MacOS
It is impossible to connect to VPN from other operating systems.
VPN client installation
------------------------------------
You can install the VPN client from the web interface after a successful
login with LDAP credentials at <https://vpn1.it4i.cz/anselm>
![](https://docs.it4i.cz/anselm-cluster-documentation/login.jpg/@@images/30271119-b392-4db9-a212-309fb41925d6.jpeg)
......@@ -48,6 +49,7 @@ successfull](https://docs.it4i.cz/anselm-cluster-documentation/downloadfilesucce
After successfully downloading the installation file, you have to execute
the installer with administrator's rights and install the VPN client manually.
Working with VPN client
-----------------------
You can use either the graphical user interface or the command line
interface to run the VPN client on all supported operating systems. We
suggest using the GUI.
![Icon](https://docs.it4i.cz/anselm-cluster-documentation/icon.jpg "Icon")
......
Graphical User Interface
========================
X Window System
---------------
......
Compute Nodes
=============
Nodes Configuration
-------------------
......@@ -100,7 +101,7 @@ nodes.
**Figure: Anselm bullx B510 servers**
### Compute Nodes Summary
| Node type | Count | Range | Memory | Cores | [Access](https://docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/resources-allocation-policy) |
| --- | --- | --- | --- | --- | --- |
| Nodes without accelerator | 180 | cn[1-180] | 64GB | 16 @ 2.4GHz | qexp, qprod, qlong, qfree |
| Nodes with GPU accelerator | 23 | cn[181-203] | 96GB | 16 @ 2.3GHz | qgpu, qprod |
| Nodes with MIC accelerator | 4 | cn[204-207] | 96GB | 16 @ 2.3GHz | qmic, qprod |
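For instance, a hedged sketch of requesting one GPU-accelerated node from the qgpu queue interactively (PROJECT_ID stands for your own project identifier):
```
$ qsub -A PROJECT_ID -q qgpu -l select=1:ncpus=16 -I
```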
......@@ -137,7 +138,7 @@ with accelerator). Processors support Advanced Vector Extensions (AVX)
Nodes equipped with the Intel Xeon E5-2665 CPU have the PBS resource
attribute cpu_freq = 24 set; nodes equipped with the Intel Xeon E5-2470 CPU
have the PBS resource attribute cpu_freq = 23 set.
```
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16:cpu_freq=24 -I
```
In this example, we allocate 4 nodes, 16 cores at 2.4GHz per node.
......
Environment and Modules
=======================
### Environment Customization
After logging in, you may want to configure the environment. Write your
preferred path definitions, aliases, functions and module loads in the
.bashrc file
```
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
......@@ -39,25 +40,25 @@ Modules Path Expansion](#EasyBuild).
The modules may be loaded, unloaded and switched according to momentary
needs.
To check the available modules, use:
```
$ module avail
```
To load a module, for example the octave module, use:
```
$ module load octave
```
Loading the octave module will set up paths and environment variables of
your active shell such that you are ready to run the octave software.
To check the loaded modules, use:
```
$ module list
```
To unload a module, for example the octave module, use:
```
$ module unload octave
```
Learn more about modules by reading the module man page:
```
$ man module
```
The following modules set up the development environment:
......@@ -72,7 +73,7 @@ using tool called
In case you want to use applications that are already built by
EasyBuild, you have to modify your MODULEPATH environment
variable:
```
export MODULEPATH=$MODULEPATH:/apps/easybuild/modules/all/
```
This command expands your searched paths to modules. You can also add
......
Hardware Overview
=================
The Anselm cluster consists of 209 computational nodes named cn[1-209]
of which 180 are regular compute nodes, 23 GPU Kepler K20 accelerated
......@@ -353,7 +354,7 @@ Total max. LINPACK performance  (Rmax)
Total amount of RAM
15.136 TB
| Node | Processor | Memory | Accelerator |
| --- | --- | --- | --- |
| w/o accelerator | 2x Intel Sandy Bridge E5-2665, 2.4GHz | 64GB | - |
| GPU accelerated | 2x Intel Sandy Bridge E5-2470, 2.3GHz | 96GB | NVIDIA Kepler K20 |
| MIC accelerated | 2x Intel Sandy Bridge E5-2470, 2.3GHz | 96GB | Intel Xeon Phi P5110 |
......
Network
=======
All compute and login nodes of Anselm are interconnected by
[Infiniband](http://en.wikipedia.org/wiki/InfiniBand)
......@@ -29,7 +30,7 @@ aliases cn1-cn209.
The network provides **114MB/s** transfer rates via the TCP connection.
Example
-------
```
$ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob
$ qstat -n -u username
Req'd Req'd Elap
......
PRACE User Support
==================
Intro
-----
......@@ -31,7 +32,7 @@ to access the web interface of the local (IT4Innovations) request
tracker and thus a new ticket should be created by sending an e-mail to
support[at]it4i.cz.
Obtaining Login Credentials
---------------------------
In general, PRACE users already have a PRACE account set up through their
HOMESITE (institution from their country) as a result of a rewarded PRACE
project proposal. This includes a signed PRACE AuP, generated and
......@@ -80,7 +81,7 @@ anselm-prace.it4i.cz which is distributed
between the two login nodes. If needed, users can log in directly to one
of the login nodes. The addresses are:
| Login address | Port | Protocol | Login node |
| --- | --- | --- | --- |
| anselm-prace.it4i.cz | 2222 | gsissh | login1 or login2 |
| login1-prace.anselm.it4i.cz | 2222 | gsissh | login1 |
| login2-prace.anselm.it4i.cz | 2222 | gsissh | login2 |
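A hedged connection sketch, assuming a valid GSI proxy certificate on the local machine:
```
local $ gsissh -p 2222 anselm-prace.it4i.cz
```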
......@@ -96,7 +97,7 @@ anselm.it4i.cz which is distributed between the
two login nodes. If needed, users can log in directly to one of the login
nodes. The addresses are:
| Login address | Port | Protocol | Login node |
| --- | --- | --- | --- |
| anselm.it4i.cz | 2222 | gsissh | login1 or login2 |
| login1.anselm.it4i.cz | 2222 | gsissh | login1 |
| login2.anselm.it4i.cz | 2222 | gsissh | login2 |
......@@ -145,7 +146,7 @@ There's one control server and three backend servers for striping and/or
backup in case one of them fails.
**Access from PRACE network:**
| Login address | Port | Node role |
| --- | --- | --- |
| gridftp-prace.anselm.it4i.cz | 2812 | Front end / control server |
| login1-prace.anselm.it4i.cz | 2813 | Backend / data mover server |
| login2-prace.anselm.it4i.cz | 2813 | Backend / data mover server |
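A hedged transfer sketch using a standard GridFTP client; the file paths and the login directory are placeholders:
```
local $ globus-url-copy file:///home/user/datafile gsiftp://gridftp-prace.anselm.it4i.cz:2812/home/prace/login/datafile
```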
Or by using the `prace_service` script:
 
**Access from public Internet:**
| Login address | Port | Node role |
| --- | --- | --- |
| gridftp.anselm.it4i.cz | 2812 | Front end / control server |
| login1.anselm.it4i.cz | 2813 | Backend / data mover server |
| login2.anselm.it4i.cz | 2813 | Backend / data mover server |
Or by using the `prace_service` script:
 
Generally both shared file systems are available through GridFTP:
| File system mount point | Filesystem | Comment |
| --- | --- | --- |
| /home | Lustre | Default HOME directories of users in format /home/prace/login/ |
| /scratch | Lustre | Shared SCRATCH mounted on the whole cluster |
More information about the shared file systems is available
......@@ -212,7 +213,7 @@ execution is in this [section of general
documentation](https://docs.it4i.cz/anselm-cluster-documentation/resource-allocation-and-job-execution/introduction).
For PRACE users, the default production run queue is "qprace". PRACE
users can also use two other queues "qexp" and "qfree".
| queue | Active project | Project resources | Nodes | priority | authorization | walltime default/max |
| ----- | -------------- | ----------------- | ----- | -------- | ------------- | -------------------- |
......@@ -223,7 +224,7 @@ users can also use two other queues "qexp" and "qfree".
| **qfree**, the Free resource queue | yes | none required | 178 w/o accelerator | very low | no | 12 / 12h |
**qprace**, the PRACE Production queue: This queue is intended for
normal production runs. An active project with nonzero remaining
resources is required to enter the qprace queue. The queue runs
......
Remote visualization service
============================
Introduction
------------
The goal of this service is to provide users GPU-accelerated use
of OpenGL applications, especially for pre- and post-processing work,
......@@ -28,7 +28,7 @@ Schematic overview
------------------
![rem_vis_scheme](https://docs.it4i.cz/anselm-cluster-documentation/scheme.png "rem_vis_scheme")
![rem_vis_legend](https://docs.it4i.cz/anselm-cluster-documentation/legend.png "rem_vis_legend")
How to use the service
----------------------
### Setup and start your own TurboVNC server.
TurboVNC is designed and implemented for cooperation with VirtualGL and
......@@ -48,7 +48,7 @@ Otherwise only the geometry (desktop size) definition is needed.
*At the first VNC server run, you need to define a password.*
This example defines a desktop with dimensions of 1200x700 pixels and
24-bit color depth.
```
$ module load turbovnc/1.2.2
$ vncserver -geometry 1200x700 -depth 24
Desktop 'TurboVNClogin2:1 (username)' started on display login2:1
......@@ -56,7 +56,7 @@ Starting applications specified in /home/username/.vnc/xstartup.turbovnc
Log file is /home/username/.vnc/login2:1.log
```
#### 3. Remember which display number your VNC server runs (you will need it in the future to stop the server). {#3-remember-which-display-number-your-vnc-server-runs-you-will-need-it-in-the-future-to-stop-the-server}
```
$ vncserver -list
TurboVNC server sessions
X DISPLAY # PROCESS ID
......@@ -64,21 +64,21 @@ X DISPLAY # PROCESS ID
```
In this example the VNC server runs on display **:1**.
#### 4. Remember the exact login node, where your VNC server runs. {#4-remember-the-exact-login-node-where-your-vnc-server-runs}
```
$ uname -n
login2
```
In this example the VNC server runs on **login2**.
#### 5. Remember on which TCP port your own VNC server is running. {#5-remember-on-which-tcp-port-your-own-vnc-server-is-running}
To get the port, look in the log file of your VNC server:
```
$ grep -E "VNC.*port" /home/username/.vnc/login2:1.log
20/02/2015 14:46:41 Listening for VNC connections on TCP port 5901
```
In this example the VNC server listens on TCP port **5901**.
#### 6. Connect to the login node where your VNC server runs with SSH to tunnel your VNC session. {#6-connect-to-the-login-node-where-your-vnc-server-runs-with-ssh-to-tunnel-your-vnc-session}
Tunnel the TCP port on which your VNC server is listening:
```
$ ssh login2.anselm.it4i.cz -L 5901:localhost:5901
```
*If you use Windows and PuTTY, please refer to port forwarding setup
......@@ -89,7 +89,7 @@ Get it from <http://sourceforge.net/projects/turbovnc/>
#### 8. Run TurboVNC Viewer from your workstation. {#8-run-turbovnc-viewer-from-your-workstation}
Mind that you should connect through the SSH tunneled port. In this
example it is 5901 on your workstation (localhost).
```
$ vncviewer localhost:5901
```
*If you use the Windows version of TurboVNC Viewer, just run the Viewer and
......@@ -100,11 +100,11 @@ workstation.*
#### 10. After you end your visualization session. {#10-after-you-end-your-visualization-session}
*Don't forget to correctly shut down your own VNC server on the login
node!*
```
$ vncserver -kill :1
```
Access the visualization node
-----------------------------
To access the node, use the dedicated PBS Professional scheduler queue
**qviz**. The queue has the following properties:
<table>
......@@ -154,12 +154,12 @@ hours maximum.*
To access the visualization node, follow these steps:
#### 1. In your VNC session, open a terminal and allocate a node using PBSPro qsub command. {#1-in-your-vnc-session-open-a-terminal-and-allocate-a-node-using-pbspro-qsub-command}
*This step is necessary to allow you to proceed with the next steps.*
```
$ qsub -I -q qviz -A PROJECT_ID
```
In this example the default values for CPU cores and usage time are
used.
```
$ qsub -I -q qviz -A PROJECT_ID -l select=1:ncpus=16 -l walltime=02:00:00
```
*Substitute **PROJECT_ID** with the assigned project identification
......@@ -167,7 +167,7 @@ string.*
In this example a whole node for 2 hours is requested.
If there are free resources for your request, you will have a shell
running on an assigned node. Please remember the name of the node.
```
$ uname -n
srv8
```
......@@ -175,24 +175,24 @@ In this example the visualization session was assigned to node **srv8**.
#### 2. In your VNC session open another terminal (keep the one with interactive PBSPro job open). {#2-in-your-vnc-session-open-another-terminal-keep-the-one-with-interactive-pbspro-job-open}
Setup the VirtualGL connection to the node, which PBSPro allocated for
your job.
``` {.code .highlight .white .shell}
```
$ vglconnect srv8
```
You will be connected through the created VirtualGL tunnel to the
visualization node, where you will have a shell.
#### 3. Load the VirtualGL module. {#3-load-the-virtualgl-module}
```
$ module load virtualgl/2.4
```
#### 4. Run your desired OpenGL accelerated application using VirtualGL script "vglrun". {#4-run-your-desired-opengl-accelerated-application-using-virtualgl-script-vglrun}
```
$ vglrun glxgears
```
Please note that if you want to run an OpenGL application that is
available through modules, you first need to load the respective module.
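For example, a minimal sketch; the paraview module name is hypothetical and stands for any OpenGL application provided as a module:
```
$ module load paraview
$ vglrun paraview
```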