The goal of this service is to provide users with GPU-accelerated OpenGL applications, especially for pre- and post-processing work, where not only GPU performance is needed, but also fast access to the shared file systems of the cluster and a reasonable amount of RAM.
The service is based on the integration of the open source tools VirtualGL and TurboVNC with the cluster's job scheduler, PBS Professional.
Currently, two compute nodes are dedicated to this service, each with the following configuration:
| [**Visualization node configuration**](compute-nodes/) | |
| ------------------------------------------------------ | --------------------------------------- |
| CPU | 2 x Intel Sandy Bridge E5-2670, 2.6 GHz |
| Processor cores | 16 (2 x 8 cores) |
| RAM | 64 GB, min. 4 GB per core |
| GPU | NVIDIA Quadro 4000, 2 GB RAM |
| Local disk drive | yes - 500 GB |
| Compute network | InfiniBand QDR |
TurboVNC is designed to cooperate with VirtualGL and is available free of charge for all major platforms. For more information and downloads, please refer to: <http://sourceforge.net/projects/turbovnc/>
**Always use TurboVNC on both sides** (server and client); **don't mix TurboVNC with other VNC implementations** (TightVNC, TigerVNC, ...), as the VNC protocol implementations may differ slightly and diminish your user experience by introducing picture artifacts, etc.
To have OpenGL acceleration, **24-bit color depth must be used**. Otherwise, only the geometry (desktop size) definition is needed.
This example defines a desktop with dimensions of 1200x700 pixels and 24-bit color depth.
```bash
$ module load turbovnc/1.2.2
$ vncserver -geometry 1200x700 -depth 24
Desktop 'TurboVNC: login2:1 (username)' started on display login2:1
Starting applications specified in /home/username/.vnc/xstartup.turbovnc
Log file is /home/username/.vnc/login2:1.log
```
#### 3. Remember which display number your VNC server runs on (you will need it later to stop the server)
In this example the VNC server runs on display **:1**.
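If you forget the display number later, the running sessions can be listed; a sketch, assuming your TurboVNC version supports the `-list` option:

```bash
$ # list your running TurboVNC sessions and their display numbers
$ vncserver -list
```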
#### 4. Remember the exact login node, where your VNC server runs
#### 5. Remember on which TCP port your own VNC server is running
To get the port, look into the log file of your VNC server.
```bash
$ grep -E "VNC.*port" /home/username/.vnc/login2:1.log
20/02/2015 14:46:41 Listening for VNC connections on TCP port 5901
```
In this example the VNC server listens on TCP port **5901** (VNC servers listen on TCP port 5900 plus the display number).
#### 6. Connect to the login node where your VNC server runs with SSH to tunnel your VNC session
Tunnel the TCP port on which your VNC server is listening.
```bash
$ ssh login2.anselm.it4i.cz -L 5901:localhost:5901
```
If you use Windows and PuTTY, please refer to the port forwarding setup in the documentation:
[x-window-and-vnc#section-12](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/)
#### 7. If you don't have TurboVNC installed on your workstation
Get it from: <http://sourceforge.net/projects/turbovnc/>
#### 8. Run the TurboVNC Viewer from your workstation

Mind that you should connect through the SSH-tunneled port. In this example it is 5901 on your workstation (localhost).
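On Linux, the viewer can be started from the command line; a sketch, assuming the same turbovnc module as on the server side:

```bash
$ module load turbovnc/1.2.2
$ # connect to the locally forwarded end of the SSH tunnel
$ vncviewer localhost:5901
```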
If you use the Windows version of the TurboVNC Viewer, just run the Viewer and use the address **localhost:5901**.
#### 9. Proceed to the chapter "Access the visualization node"
Now you should have a working TurboVNC session connected to your workstation.
Don't forget to correctly shut down your own VNC server on the login node!
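For example, to stop the server from this example (display **:1**) on the login node:

```bash
$ # terminate the VNC server running on display :1
$ vncserver -kill :1
```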
To access the node, use the dedicated PBS Professional scheduler queue **qviz**:
| queue | active project | project resources | nodes | min ncpus | priority | authorization | walltime default/max |
| ---------------------------- | -------------- | ----------------- | ----- | --------- | -------- | ------------- | ---------------- |
| **qviz** Visualization queue | yes | none required | 2 | 4 | 150 | no | 1 hour / 8 hours |
Currently, when accessing the node, each user is allocated 4 CPU cores, and thus approximately 16 GB of RAM and 1/4 of the GPU capacity.
!!! Note
    If more GPU power or RAM is required, it is recommended to allocate one whole node per user, so that all 16 cores, the whole RAM, and the whole GPU are exclusive. This is currently also the maximum allowed allocation per user. One hour of work is allocated by default; the user may ask for 2 hours at maximum.
To access the visualization node, follow these steps:
#### 1. In your VNC session, open a terminal and allocate a node using PBSPro qsub command
In this example, a whole node (all 16 cores) is requested for two hours.
```bash
$ qsub -I -q qviz -A PROJECT_ID -l select=1:ncpus=16 -l walltime=02:00:00
```
Substitute **PROJECT_ID** with the assigned project identification string.
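If the default allocation (4 cores for one hour, as described above) is sufficient, the resource options may simply be omitted; a minimal sketch:

```bash
$ # rely on the qviz queue defaults for cores and walltime
$ qsub -I -q qviz -A PROJECT_ID
```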
If there are free resources for your request, you will get a shell running on the assigned node. Please remember the name of the node.
In this example the visualization session was assigned to node **srv8**.
#### 2. In your VNC session open another terminal (keep the one with interactive PBSPro job open)
Set up the VirtualGL connection to the node that PBSPro allocated for your job.
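Assuming the visualization job from the previous step was assigned to node **srv8**, the connection can be established with the `vglconnect` script shipped with VirtualGL:

```bash
$ # open an SSH connection with the VirtualGL image transport to the allocated node
$ vglconnect srv8
```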
You will be connected through the created VirtualGL tunnel to the visualization node, where you will have a shell.
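#### 3. Load the VirtualGL module

Before running VirtualGL commands on the node, make sure they are on your PATH. A sketch, assuming VirtualGL is provided as an environment module like the other tools in this guide (the exact module name and version may differ):

```bash
$ # assumed module name; check "module avail" for the exact version
$ module load virtualgl
```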
#### 4. Run your desired OpenGL accelerated application using VirtualGL script "vglrun"
Please note that if you want to run an OpenGL application which is available through modules, you need to first load the respective module. E.g., to run the **Mentat** OpenGL application from the **MARC** software package, use:
```bash
$ module load marc/2013.1
$ vglrun mentat
```
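To quickly verify that the OpenGL acceleration works, you can first try a simple test application through `vglrun`; a sketch, assuming `glxgears` is installed on the node:

```bash
$ # render the classic spinning-gears OpenGL demo through VirtualGL
$ vglrun glxgears
```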
#### 5. After you end your work with the OpenGL application
Just log out from the visualization node, exit both opened terminals, and end your VNC server session as described above.
If you want to increase the responsiveness of the visualization, please adjust your TurboVNC client settings in this way:

To get an idea of how the settings affect the resulting picture quality, three levels of "JPEG image quality" are demonstrated:
