The goal of this service is to provide users with GPU-accelerated use of OpenGL applications, especially for pre- and post-processing work, where not only GPU performance is needed but also fast access to the cluster's shared file systems and a reasonable amount of RAM.
The service is based on integration of the open source tools VirtualGL and TurboVNC together with the cluster's job scheduler PBS Professional.
Currently there are two dedicated compute nodes for this service with the following configuration for each node:
| [**Visualization node configuration**](compute-nodes/) | |
| ------------------------------------------------------ | --------------------------------------- |
| CPU | 2 x Intel Sandy Bridge E5-2670, 2.6 GHz |
| Processor cores | 16 (2 x 8 cores) |
| RAM | 64 GB, min. 4 GB per core |
| GPU | NVIDIA Quadro 4000, 2 GB RAM |
TurboVNC is designed and implemented for cooperation with VirtualGL and is available for free for all major platforms. For more information and download, please refer to: <http://sourceforge.net/projects/turbovnc/>
**Always use TurboVNC on both sides** (server and client); **don't mix TurboVNC with other VNC implementations** (TightVNC, TigerVNC, etc.), as the VNC protocol implementations may differ slightly and degrade your user experience by introducing picture artifacts.
To have OpenGL acceleration, **24-bit color depth must be used**. Otherwise, only the geometry (desktop size) definition is needed.
The first time the VNC server is run you need to define a password.
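With TurboVNC, the password is typically set with the `vncpasswd` utility shipped alongside the server; a sketch of the interactive prompts:

```
$ vncpasswd
Password:
Verify:
```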
This example defines a desktop with the dimensions of 1200x700 pixels and 24 bit color depth.
```
$ module load turbovnc/1.2.2
$ vncserver -geometry 1200x700 -depth 24

Desktop 'TurboVNC: login2:1 (username)' started on display login2:1
Starting applications specified in /home/username/.vnc/xstartup.turbovnc
Log file is /home/username/.vnc/login2:1.log
```
#### 3. Remember Which Display Number Your VNC Server Runs (You Will Need It in the Future to Stop the Server)
In this example the VNC server runs on display **:1**.
#### 4. Remember the Exact Login Node Where Your VNC Server Runs
#### 5. Remember on Which TCP Port Your Own VNC Server Is Running
To get the port, look into the log file of your VNC server.

```
$ grep -E "VNC.*port" /home/username/.vnc/login2:1.log
20/02/2015 14:46:41 Listening for VNC connections on TCP port 5901
```
In this example the VNC server listens on TCP port **5901** (5900 plus the display number).
#### 6. Connect to the Login Node Where Your VNC Server Runs With SSH to Tunnel Your VNC Session
Tunnel the TCP port on which your VNC server is listening.
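On Linux or macOS, a minimal sketch of such a tunnel using OpenSSH's `-L` option; the login-node hostname is a placeholder for your cluster's actual address, and the ports match this example's VNC session:

```
$ ssh -L 5901:localhost:5901 username@login2.cluster.example.com
```

Keep this SSH session open for as long as you use the VNC session.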
If you use Windows and Putty, please refer to port forwarding setup in the documentation:
[x-window-and-vnc#section-12](../general/accessing-the-clusters/graphical-user-interface/x-window-system/)
#### 7. If You Don't Have TurboVNC Installed on Your Workstation
Get it from: <http://sourceforge.net/projects/turbovnc/>
#### 8. Run the TurboVNC Viewer From Your Workstation
Mind that you should connect through the SSH-tunneled port. In this example it is 5901 on your workstation (localhost).
If you use the Windows version of TurboVNC Viewer, just run the Viewer and use the address **localhost:5901**.
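On Linux, assuming TurboVNC is installed on your workstation (or available as a module, mirroring the server-side example), the viewer can be started from the command line and pointed at the tunneled port:

```
$ vncviewer localhost:5901
```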
#### 9. Proceed to the Chapter "Access the Visualization Node"
Now you should have a working TurboVNC session connected to your workstation.
Don't forget to correctly shut down your own VNC server on the login node!
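Assuming the display number from this example, the server can be stopped with the `vncserver -kill` option on the login node where it runs:

```
$ vncserver -kill :1
```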
**To access the node, use the dedicated PBS Professional scheduler queue qviz**. The queue has the following properties:
| queue | active project | project resources | nodes | min ncpus | priority | authorization | walltime default/max |
| ---------------------------- | -------------- | ----------------- | ----- | --------- | -------- | ------------- | ---------------- |
| **qviz** Visualization queue | yes | none required | 2 | 4 | 150 | no | 1 hour / 8 hours |
Currently, when accessing the node, each user is allocated 4 CPU cores, and with them approximately 16 GB of RAM and 1/4 of the GPU capacity.
If more GPU power or RAM is required, it is recommended to allocate one whole node per user, so that all 16 cores, the whole RAM, and the whole GPU are exclusive. This is currently also the maximum allocation allowed per user. One hour of work is allocated by default; the user may ask for 8 hours maximum.
To access the visualization node, follow these steps:
#### 1. In Your VNC Session, Open a Terminal and Allocate a Node Using the PBSPro qsub Command
This step is necessary to allow you to proceed with the next steps.
In this example the default values for CPU cores and usage time are used.
```
$ qsub -I -q qviz -A PROJECT_ID -l select=1:ncpus=16 -l walltime=02:00:00
```
Substitute **PROJECT_ID** with the assigned project identification string.
In this example a whole node is requested for 2 hours.
If there are free resources for your request, you will have a shell running on an assigned node. Please remember the name of the node.
In this example the visualization session was assigned to node **srv8**.
#### 2. In Your VNC Session Open Another Terminal (Keep the One With Interactive PBSPro Job Open)
Set up the VirtualGL connection to the node that PBSPro allocated for your job.
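A sketch of this step using the `vglconnect` script shipped with VirtualGL; the module name and the node name **srv8** follow this example's assumptions and may differ on your cluster:

```
$ module load virtualgl
$ vglconnect srv8
```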
You will be connected with the created VirtualGL tunnel to the visualization node, where you will have a shell.
#### 4. Run Your Desired OpenGL Accelerated Application Using the VirtualGL Script "vglrun"
If you want to run an OpenGL application that is available through modules, you need to load the respective module first. For example, to run the **Mentat** OpenGL application from the **MARC** software package:
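A sketch of such an invocation; the module name and version are assumptions and depend on the cluster's module tree:

```
$ module load marc/2013.1
$ vglrun mentat
```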
#### 5. After You End Your Work With the OpenGL Application
Just log out from the visualization node, exit both opened terminals, and end your VNC server session as described above.
If you want to improve the responsiveness of the visualization, adjust your TurboVNC client settings in this way:

To give an idea of how the settings affect the resulting picture quality, three levels of "JPEG image quality" are demonstrated:
