# In situ visualization

## Introduction

In situ visualization allows you to visualize your data while your computation is still running on multiple nodes of a cluster. It is a visualization pipeline that can be used on our [Salomon][salomon_web] supercomputer. The pipeline is based on the [ParaView Catalyst][catalyst_web] library.

To leverage the in situ visualization capabilities of the Catalyst library, you have to write an adaptor code that takes the actual data from your simulation and processes it so that it can be passed to ParaView for visualization. We provide a simple example of such simulator/adaptor code, bound together to provide the in situ visualization.
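To give an idea of what such an adaptor does, below is a sketch of the adaptor interface in the style of FEAdaptor.cxx, assuming the legacy Catalyst C++ API shipped with ParaView 5.6. The names follow the stock ParaView CxxFullExample; `Grid`, `Attributes`, and the helper `BuildVTKDataStructures` are placeholders for the simulation-side data structures, so details may differ from the actual files in the package:

```c++
// Sketch of a Catalyst adaptor (FEAdaptor-style), assuming the legacy
// Catalyst C++ API of ParaView 5.6. Grid and Attributes stand for the
// simulation's own data structures (cf. FEDataStructures.h).
#include <vtkCPDataDescription.h>
#include <vtkCPInputDataDescription.h>
#include <vtkCPProcessor.h>
#include <vtkCPPythonScriptPipeline.h>
#include <vtkNew.h>

namespace FEAdaptor
{
static vtkCPProcessor* Processor = nullptr;

// Create the coprocessor and register one pipeline per Catalyst Python script
void Initialize(int numScripts, char* scripts[])
{
  if (Processor == nullptr)
  {
    Processor = vtkCPProcessor::New();
    Processor->Initialize();
  }
  for (int i = 0; i < numScripts; i++)
  {
    vtkNew<vtkCPPythonScriptPipeline> pipeline;
    pipeline->Initialize(scripts[i]);
    Processor->AddPipeline(pipeline.GetPointer());
  }
}

// Called every simulation time step; hands data over to ParaView only
// when some registered pipeline actually requests output for this step
void CoProcess(Grid& grid, Attributes& attributes, double time,
               unsigned int timeStep, bool lastTimeStep)
{
  vtkNew<vtkCPDataDescription> dataDescription;
  dataDescription->AddInput("input");
  dataDescription->SetTimeData(time, timeStep);
  if (lastTimeStep)
  {
    dataDescription->ForceOutputOn();
  }
  if (Processor->RequestDataDescription(dataDescription.GetPointer()) != 0)
  {
    // Build a VTK grid from the simulation data and attach it
    // (BuildVTKDataStructures is a hypothetical helper for this sketch)
    vtkDataObject* vtkGrid = BuildVTKDataStructures(grid, attributes);
    dataDescription->GetInputDescriptionByName("input")->SetGrid(vtkGrid);
    Processor->CoProcess(dataDescription.GetPointer());
  }
}

void Finalize()
{
  if (Processor)
  {
    Processor->Delete();
    Processor = nullptr;
  }
}
} // namespace FEAdaptor
```

The simulator then only needs to call `Initialize`, `CoProcess`, and `Finalize`; everything ParaView-specific stays inside the adaptor.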

A detailed description of the Catalyst API can be found [here][catalyst_guide]. Here we restrict ourselves to an overall description of the code, together with instructions for building it and an explanation of how to run it on the cluster.

## Installed Version

The Catalyst library is part of the ParaView module; more about ParaView can be found [here][paraview_web]. We use version 5.6.0, compiled with intel/2017a and installed on the Salomon cluster.

## Usage

All the simulator/adaptor code is available to download from [here][code]. It is a package with the following files: CMakeLists.txt, FEAdaptor.h, FEAdaptor.cxx, FEDataStructures.h, FEDataStructures.cxx, FEDriver.cxx, and feslicescript.py.

First, unpack the [code][code]:

```console
$ tar xvf package_name
```

Use CMake to manage the build process, but first load the appropriate modules (CMake, compiler, ParaView):

```console
$ ml CMake intel/2017a ParaView/5.6.0-intel-2017a-mpi
```

Then create a build directory and run CMake:

```console
$ mkdir build
$ cd build
$ cmake ../
```

Now you can build the simulator/adaptor code using make:

```console
$ make
```

This will generate the CxxFullExample executable file. It can later be run together with ParaView to provide the in situ visualization example.

## Code explanation

The provided example is a simple MPI program. The main executing part is written in FEDriver.cxx. It is a simulator code that creates a computational grid and calls the Catalyst adaptor at each simulation time step:

```c++
#include "FEDataStructures.h"
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>
#include <iostream>
#include <stdlib.h>

#ifdef USE_CATALYST
#include "FEAdaptor.h"
#endif

// Example of a C++ adaptor for a simulation code

int main(int argc, char** argv)
{
  // Check the input arguments for area size
  if (argc < 4) {
    printf("Not all arguments for grid definition supplied\n");
    return 1;
  }

  unsigned int pointsX = abs(std::stoi(argv[1])); 
  unsigned int pointsY = abs(std::stoi(argv[2]));
  unsigned int pointsZ = abs(std::stoi(argv[3]));
  
  //MPI_Init(&argc, &argv);
  MPI_Init(NULL, NULL);
  Grid grid;

  unsigned int numPoints[3] = { pointsX, pointsY, pointsZ };
  double spacing[3] = { 1, 1.1, 1.3 };
  grid.Initialize(numPoints, spacing);
  Attributes attributes;
  attributes.Initialize(&grid);

#ifdef USE_CATALYST
  // Pass the remaining arguments (the Catalyst Python scripts) to the adaptor
  FEAdaptor::Initialize(argc - 4, &argv[4]);
#endif
  unsigned int numberOfTimeSteps = 1000;
  for (unsigned int timeStep = 0; timeStep < numberOfTimeSteps; timeStep++)
  {
    // use a time step length of 0.1
    double time = timeStep * 0.1;
    attributes.UpdateFields(time);
#ifdef USE_CATALYST
    FEAdaptor::CoProcess(grid, attributes, time, timeStep, timeStep == numberOfTimeSteps - 1);
#endif
    
    // Get the name of the processor
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    printf("This is processor %s, time: %0.3f\n", processor_name, time);
    usleep(500000);
  }

#ifdef USE_CATALYST
  FEAdaptor::Finalize();
#endif
  MPI_Finalize();

  return 0;
}
```

The simulator is run with the grid dimensions and the Catalyst Python script as arguments, for example:

```console
$ mpirun -n 2 ./CxxFullExample 30 30 30 ../SampleScripts/feslicescript.py
```


### Launching Server

To launch the server, you must first allocate compute nodes, for example

```console
$ qsub -I -q qprod -A OPEN-0-0 -l select=2
```

to launch an interactive session on 2 nodes. Refer to [Resource Allocation and Job Execution][1] for details.

After the interactive session is opened, load the ParaView module (the following examples are for Salomon; Anselm variants are shown where they differ):

```console
$ ml ParaView/5.6.0-intel-2017a-mpi
```

Now launch the parallel server with the number of nodes times 24 processes (16 per node on Anselm):

```console
$ mpirun -np 48 pvserver --use-offscreen-rendering
    Waiting for client...
    Connection URL: cs://r37u29n1006:11111
    Accepting connection(s): r37u29n1006:11111

Anselm:
$ mpirun -np 32 pvserver --use-offscreen-rendering
    Waiting for client...
    Connection URL: cs://cn77:11111
    Accepting connection(s): cn77:11111
```

Note that the server is listening on compute node r37u29n1006 in this case; we shall use this information later.

### Client Connection

Because a direct connection to the compute nodes is not allowed on Salomon, you must establish an SSH tunnel to connect to the server. Choose a port number on your PC to be forwarded to the ParaView server, for example 12345. If your PC is running Linux, use this command to establish the SSH tunnel:

```console
Salomon: $ ssh -TN -L 12345:r37u29n1006:11111 username@salomon.it4i.cz
Anselm: $ ssh -TN -L 12345:cn77:11111 username@anselm.it4i.cz
```

Replace username with your login and r37u29n1006 (cn77) with the name of the compute node your ParaView server is running on (see the previous step).

If you use PuTTY on Windows, load Salomon connection configuration, then go to *Connection* -> *SSH* -> *Tunnels* to set up the port forwarding.

Fill in the Source port and Destination fields. **Do not forget to click the Add button.**

![](../../img/paraview_ssh_tunnel_salomon.png "SSH Tunnel in PuTTY")

Now launch the ParaView client installed on your desktop PC. Select *File* -> *Connect...* and fill in the following:

![](../../img/paraview_connect_salomon.png "ParaView - Connect to server")

The configuration is now saved for later use. Click Connect to connect to the ParaView server. In the terminal where the interactive session with the ParaView server is running, you should see:

```console
Client connected.
```

You can now use Parallel ParaView.

### Close Server

Remember to close the interactive session after you finish working with the ParaView server, as it will remain running even after your client disconnects and will continue to consume resources.

[salomon_web]: https://docs.it4i.cz/salomon/introduction/
[catalyst_web]: https://www.paraview.org/in-situ/
[paraview_web]: http://www.paraview.org/
[catalyst_guide]: https://www.paraview.org/files/catalyst/docs/ParaViewCatalystUsersGuide_v2.pdf