diff --git a/docs.it4i/software/viz/insitu.md b/docs.it4i/software/viz/insitu.md
index 782d682103451f6c0bcd31f925f2d2854bc91505..2a9b34104b19053f0e59e634796d1234278b2d67 100644
--- a/docs.it4i/software/viz/insitu.md
+++ b/docs.it4i/software/viz/insitu.md
@@ -15,123 +15,142 @@ The Catalyst library is part of the ParaView module. More about ParaView can be
 
 ## Usage
 
-All code concerning the simulator/adaptor are available to download from [here][code]. It is a package with the following files: CMakeLists.txt, FEAdaptor.h, FEAdaptor.cxx, FEDataStructures.h, FEDataStructures.cxx, FEDriver.cxx and feslicescript.py.
+All code concerning the simulator/adaptor is available to download from [here][code]. It is a package with the following files: CMakeLists.txt, FEAdaptor.h, FEAdaptor.cxx, FEDataStructures.h, FEDataStructures.cxx, FEDriver.cxx and feslicescript.py.
 
-First unpack the [code][code]. You can do it by
+After the download, unpack the code by
 
 ```console
 $ tar xvf package_name
 ```
 
-Use CMake to manage the build process but before that load the appropriate modules (CMake, compiler, ParaView) by
+Then use CMake to manage the build process, but first load the appropriate modules (CMake, ParaView) by
 
 ```console
-$ ml CMake intel/2017a ParaView/5.6.0-intel-2017a-mpi
+$ ml CMake ParaView/5.6.0-intel-2017a-mpi
 ```
 
+This module set also loads the necessary Intel compiler as a dependency. Then create a build directory and configure the build in it
+
 ```console
 $ mkdir build
 $ cd build
 $ cmake ../ 
 ```
 
-Now you can build the simulator/adaptor code using make
+Now you can build the simulator/adaptor code using make
 
 ```console
 $ make
 ```
 
-It will generate the CxxFullExampleAdaptor executable file. This can be later run together with ParaView and it will provide the in situ visualization example.
+It will generate the CxxFullExampleAdaptor executable file. This will later be run together with ParaView to provide the in situ visualization example.
 
 ## Code explanation
 
-Provided example is a simple MPI program. Main executing part is written in FEDriver.cxx. It is a simulator code that creates computational grid and performs simulation-adaptor interaction (see below). 
-
-Dimensions of the computational grid in terms of number of points in x, y, z direction are supplied as input parameters to the *main* function (see lines 22-24 in code below). Based on the number of MPI ranks each MPI process creates different part of the overall grid. This is done by grid initialization, see line 30. The respective code for this is in FEDataStructures.cxx.
+The provided example is a simple MPI program. The main executing part is written in FEDriver.cxx. It is a simulator code that creates a computational grid and performs the simulator/adaptor interaction (see below).
 
-The fourth parametr in *main* is for the name of a Python script (we use feslicescript.py). It sets up the ParaView-Catalyst pipeline. After the Adaptor initialization on line 36 we start the simulation by linearly progressing *timeStep* value in the *for* loop. Each iteration of the loop upates the grid attributes (Velocity and Pressure) by calling the i*UpdateFields* method from *Attributes* class
+The dimensions of the computational grid, in terms of the number of points in the x, y, z directions, are supplied as input parameters to the *main* function (see lines 22-24 in the code below). Based on the number of MPI ranks, each MPI process creates a different part of the overall grid. This is done by the grid initialization, see line 30. The respective code is in FEDataStructures.cxx. The fourth parameter of *main* is the name of a Python script (we use feslicescript.py). It sets up the ParaView-Catalyst pipeline, see line 36.
+
+The simulation starts by linearly progressing the *timeStep* value in the *for* loop. Each iteration of the loop updates the grid attributes (Velocity and Pressure) by calling the *UpdateFields* method of the *Attributes* class. Next in the loop, the adaptor's CoProcess function is called with the actual parameters (grid, time). To provide some information about the state of the simulation, the actual time step is printed together with the name of the processor that handles the computation inside the allocated MPI rank. Before the loop ends, an appropriate sleep time gives the visualization some time to update. After the simulation loop ends, the cleanup is done by calling the *Finalize* function on the adaptor and *MPI_Finalize* on the MPI processes.
  
 ![](insitu/img/FEDriver.png "FEDriver.cxx")
 
+The adaptor's initialization performs several necessary steps, see the code below. It creates a vtkCPProcessor using the Catalyst library and adds a pipeline to it. The pipeline is initialized by the referenced Python script.
+
+![](insitu/img/Initialize.png "Initialization of the adaptor, within FEAdaptor.cxx")
+
+As the Python script to initialize the Catalyst pipeline we use feslicescript.py. Here you enable the live visualization and set the proper connection port. You can also use other commands and functions to configure saving the data during the visualization, or other tasks available from the ParaView environment. For more details, see the [Catalyst guide][catalyst_guide].
+
+![](insitu/img/feslicescript.png "Catalyst pipeline setup by Python script")
 
-In the *UpdateFields* method the velocity progresses with the value of *time* and with the specific value of *setting* which depends on the actual MPI rank. In this way, different processes can be visually distinguished during the simulation.   
+The *UpdateFields* method from the *Attributes* class updates the velocity value with respect to the value of *time* and the value of *setting*, which depends on the actual MPI rank, see the code below. In this way, different processes can be visually distinguished during the simulation.
 
-![](insitu/img/UpdateFields.png "UpdateFields method of the Attributes class")
+![](insitu/img/UpdateFields.png "UpdateFields method of the Attributes class from FEDataStructures.cxx")
 
-Further in the simulation loop, the adaptor's CoProcess function is called by using actual parameters of the grid in time.
+As mentioned before, further in the simulation loop the adaptor's CoProcess function is called with the actual parameters of the *grid*, *time*, and *timeStep*. In the function, a proper representation and description of the data is created. Such data are then passed to the processor that was created during the adaptor's initialization. The code of the CoProcess function is shown below.
 
-![](insitu/img/CoProcess.png "CoProcess function of the adaptor")
+![](insitu/img/CoProcess.png "CoProcess function of the adaptor, within FEAdaptor.cxx")
 
-mpirun -n 2 ./CxxFullExample 30 30 30 ../SampleScripts/feslicescript.py 
 
+### Launching the simulator/adaptor with ParaView
 
-### Launching Server
+To launch ParaView you have two standard options on Salomon. You can run ParaView from your local machine in the client-server mode by following the information [here][paraview_it4i], or you can connect to the cluster using VNC and a graphical environment by following the information on [VNC][vnc_it4i]. In both cases we will use ParaView version 5.6.0 and its respective module.
+
+Whether you use the client-server mode or VNC for ParaView, you have to allocate some computing resources. This is done by the console commands below (supply a valid Project ID).
+
+For client-server mode of ParaView we allocate the resources by
+
+```console
+$ qsub -I -q qprod -A PROJECT-ID -l select=2
+```
 
-To launch the server, you must first allocate compute nodes, for example
+In the case of a VNC connection, we use X11 forwarding via the -X option to allow a graphical environment in the interactive session.
 
 ```console
-$ qsub -I -q qprod -A OPEN-0-0 -l select=2
+$ qsub -IX -q qprod -A PROJECT-ID -l select=2
 ```
 
-to launch an interactive session on 2 nodes. Refer to [Resource Allocation and Job Execution][1] for details.
+The issued console commands launch an interactive session on 2 nodes. This is the minimal setup to test that the simulator/adaptor code runs on multiple nodes.
 
-After the interactive session is opened, load the ParaView module (following examples for Salomon, Anselm instructions in comments):
+After the interactive session is opened, load the ParaView module
 
 ```console
-$ ml ParaView/5.1.2-intel-2017a-mpi
+$ ml ParaView/5.6.0-intel-2017a-mpi
 ```
 
-Now launch the parallel server, with number of nodes times 24 (16 on Anselm) processes:
+When the module is loaded and you run the client-server mode, launch the mpirun command for pvserver as described in [ParaView client-server][paraview_it4i], but add the *&* sign at the end of the command so that you can later use the console for running the simulator/adaptor code. If you run the VNC session, after loading the ParaView module set the environment variable for correct keyboard input and then run ParaView in the background using the *&* sign
 
 ```console
-$ mpirun -np 48 pvserver --use-offscreen-rendering
-    Waiting for client...
-    Connection URL: cs://r37u29n1006:11111
-    Accepting connection(s): r37u29n1006:11111i
-
-Anselm:
-$ mpirun -np 32 pvserver --use-offscreen-rendering
-    Waiting for client...
-    Connection URL: cs://cn77:11111
-    Accepting connection(s): cn77:11111
+$ export QT_XKB_CONFIG_ROOT=/usr/share/X11/xkb
+$ paraview &
 ```
 
-Note the that the server is listening on compute node r37u29n1006 in this case, we shall use this information later.
+If you have proceeded to the end of the description of the ParaView client-server mode and run ParaView locally, or you have launched ParaView remotely using VNC, the further steps are the same for both options. In ParaView go to *Catalyst* -> *Connect* and hit *OK* in the pop-up window asking for the connection port. After that, ParaView starts listening for incoming data for the in situ visualization.
 
-### Client Connection
+![](insitu/img/Catalyst_connect.png "Starting Catalyst connection in ParaView")
 
-Because a direct connection is not allowed to compute nodes on Salomon, you must establish a SSH tunnel to connect to the server. Choose a port number on your PC to be forwarded to ParaView server, for example 12345. If your PC is running Linux, use this command to establish a SSH tunnel:
+Go to your build directory and run the built simulator/adaptor code from the console by
 
 ```console
-Salomon: $ ssh -TN -L 12345:r37u29n1006:11111 username@salomon.it4i.cz
-Anselm: $ ssh -TN -L 12345:cn77:11111 username@anselm.it4i.cz
+$ mpirun -n 2 ./CxxFullExample 30 30 30 ../feslicescript.py
 ```
 
-replace username with your login and r37u29n1006 (cn77) with the name of compute node your ParaView server is running on (see previous step).
+The program starts to compute on the allocated nodes and prints out the response
 
-If you use PuTTY on Windows, load Salomon connection configuration, then go to *Connection* -> *SSH* -> *Tunnels* to set up the port forwarding.
+![](insitu/img/Simulator_response.png "Simulator/adaptor response")
 
-Fill the Source port and Destination fields. **Do not forget to click the Add button.**
+In ParaView you should see a new pipeline called *input* in the *Pipeline Browser*.
 
-![](../../img/paraview_ssh_tunnel_salomon.png "SSH Tunnel in PuTTY")
+![](insitu/img/Input_pipeline.png "Input pipeline")
 
-Now launch ParaView client installed on your desktop PC. Select *File* -> *Connect...* and fill in the following :
+After clicking on it, another item called *Extract: input* is added.
 
-![](../../img/paraview_connect_salomon.png "ParaView - Connect to server")
+![](insitu/img/Extract_input.png "Extract: input")
 
-The configuration is now saved for later use. Now click Connect to connect to the ParaView server. In your terminal where you have interactive session with ParaView server launched, you should see:
+If you click on the eye icon to the left of the *Extract: input* item, the data finally appears.
 
-```console
-Client connected.
-```
+![](insitu/img/Data_shown.png "Sent data")
+
+To visualize the velocity property on the geometry, go to the *Properties* tab and choose *velocity* instead of *Solid Color* in *Coloring* menu.
+
+![](insitu/img/Show_velocity.png "Show velocity data")
+
+The final result looks like the image below, where the different domains, dependent on the number of allocated resources, can be seen progressing through time.
 
-You can now use Parallel ParaView.
+![](insitu/img/Result.png "Velocity results")
 
-### Close Server
+### Cleanup
 
-Remember to close the interactive session after you finish working with ParaView server, as it will remain launched even after your client is disconnected and will continue to consume resources.
+After you finish the in situ visualization, you should perform a proper cleanup.
+
+Terminate the simulation if it is still running. 
+
+In the VNC session, close ParaView and the interactive job. Close the VNC client. Kill the VNC server as described [here][vnc_it4i].
+
+In the client-server mode of ParaView, disconnect from the server in ParaView and close the interactive job.
 
 [salomon_web]: https://docs.it4i.cz/salomon/introduction/
 [catalyst_web]: https://www.paraview.org/in-situ/
-[paraview_web]: http://www.paraview.org/
 [catalyst_guide]: https://www.paraview.org/files/catalyst/docs/ParaViewCatalystUsersGuide_v2.pdf
+[paraview_web]: http://www.paraview.org/
+[code]: insitu/insitu.tar.gz
+[paraview_it4i]: https://docs.it4i.cz/software/viz/paraview/
+[vnc_it4i]: https://docs.it4i.cz/general/accessing-the-clusters/graphical-user-interface/vnc/
diff --git a/docs.it4i/software/viz/insitu/CMakeLists.txt b/docs.it4i/software/viz/insitu/CMakeLists.txt
index 387365643ce8b85929ad22f24ec22c7dd8e06909..b0b7d630a00e55b1ca20af86995cc7dae5648194 100644
--- a/docs.it4i/software/viz/insitu/CMakeLists.txt
+++ b/docs.it4i/software/viz/insitu/CMakeLists.txt
@@ -33,6 +33,6 @@ option(BUILD_TESTING "Build Testing" OFF)
 # Setup testing.
 if (BUILD_TESTING)
   include(CTest)
-  add_test(NAME CxxFullExampleTest COMMAND CxxFullExample ${CMAKE_CURRENT_SOURCE_DIR}/SampleScripts/feslicescript.py)
+  add_test(NAME CxxFullExampleTest COMMAND CxxFullExample ${CMAKE_CURRENT_SOURCE_DIR}/feslicescript.py)
   set_tests_properties(CxxFullExampleTest PROPERTIES LABELS "PARAVIEW;CATALYST")
 endif()
diff --git a/docs.it4i/software/viz/insitu/FEDriver.cxx b/docs.it4i/software/viz/insitu/FEDriver.cxx
index ed817265f16a1c1830643b0423d61e4fc3f52fbd..90848a04d6790dec836ec115abb03126e48e9458 100644
--- a/docs.it4i/software/viz/insitu/FEDriver.cxx
+++ b/docs.it4i/software/viz/insitu/FEDriver.cxx
@@ -11,9 +11,9 @@
 
 int main(int argc, char** argv)
 {
-  // Check the input arguments for area size
-  if (argc < 4) {
-	printf("Not all arguments for grid definition supplied\n");
+  // Check the input arguments
+  if (argc < 5) {
+	printf("Not all arguments supplied (grid definition, Python script name)\n");
 	return 0;
   }
 
@@ -21,7 +21,7 @@ int main(int argc, char** argv)
   unsigned int pointsY = abs(std::stoi(argv[2]));
   unsigned int pointsZ = abs(std::stoi(argv[3]));
   
-  //MPI_Init(&argc, &argv);
+  // MPI_Init(&argc, &argv);
   MPI_Init(NULL, NULL);
   Grid grid;
 
@@ -32,13 +32,13 @@ int main(int argc, char** argv)
   attributes.Initialize(&grid);
 
 #ifdef USE_CATALYST
-  // The first argument is the program name
+  // The fourth argument is the name of the Python script
   FEAdaptor::Initialize(argv[4]);
 #endif
   unsigned int numberOfTimeSteps = 1000;
   for (unsigned int timeStep = 0; timeStep < numberOfTimeSteps; timeStep++)
   {
-    // use a time step length of 0.1
+    // Use a time step of length 0.1
     double time = timeStep * 0.1;
     attributes.UpdateFields(time);
 #ifdef USE_CATALYST
@@ -50,6 +50,7 @@ int main(int argc, char** argv)
     int name_len;
     MPI_Get_processor_name(processor_name, &name_len);
 
+    // Print actual time step and processor name that handles the calculation
     printf("This is processor %s, time step: %0.3f\n", processor_name, time);
     usleep(500000);
   }
diff --git a/docs.it4i/software/viz/insitu/img/Catalyst_connect.png b/docs.it4i/software/viz/insitu/img/Catalyst_connect.png
new file mode 100644
index 0000000000000000000000000000000000000000..f9a4901c52728b552ce8f4000d7906836e7aee20
Binary files /dev/null and b/docs.it4i/software/viz/insitu/img/Catalyst_connect.png differ
diff --git a/docs.it4i/software/viz/insitu/img/Data_shown.png b/docs.it4i/software/viz/insitu/img/Data_shown.png
new file mode 100644
index 0000000000000000000000000000000000000000..2342962bd946dae806625f8c692688e9c3a57da7
Binary files /dev/null and b/docs.it4i/software/viz/insitu/img/Data_shown.png differ
diff --git a/docs.it4i/software/viz/insitu/img/Extract_input.png b/docs.it4i/software/viz/insitu/img/Extract_input.png
new file mode 100644
index 0000000000000000000000000000000000000000..6bdcba1b6609f5aa4def63c91c261ec31210dd58
Binary files /dev/null and b/docs.it4i/software/viz/insitu/img/Extract_input.png differ
diff --git a/docs.it4i/software/viz/insitu/img/FEDriver.png b/docs.it4i/software/viz/insitu/img/FEDriver.png
index ad497ce5bd9d72c5c1763fb2ded181827e75c251..59c3b8b18d930106c77a51c33f8a7b624b054c2b 100644
Binary files a/docs.it4i/software/viz/insitu/img/FEDriver.png and b/docs.it4i/software/viz/insitu/img/FEDriver.png differ
diff --git a/docs.it4i/software/viz/insitu/img/Finalize.png b/docs.it4i/software/viz/insitu/img/Finalize.png
new file mode 100644
index 0000000000000000000000000000000000000000..9d7ae9fb1853898a4a4c8b3a0a097b98b5fc40a7
Binary files /dev/null and b/docs.it4i/software/viz/insitu/img/Finalize.png differ
diff --git a/docs.it4i/software/viz/insitu/img/Initialize.png b/docs.it4i/software/viz/insitu/img/Initialize.png
new file mode 100644
index 0000000000000000000000000000000000000000..19ff205cc2b257a4334e227dce28d2208dd99ea4
Binary files /dev/null and b/docs.it4i/software/viz/insitu/img/Initialize.png differ
diff --git a/docs.it4i/software/viz/insitu/img/Input_pipeline.png b/docs.it4i/software/viz/insitu/img/Input_pipeline.png
new file mode 100644
index 0000000000000000000000000000000000000000..82f8ca1c9952fc94bb05095dc5a65eb83e455955
Binary files /dev/null and b/docs.it4i/software/viz/insitu/img/Input_pipeline.png differ
diff --git a/docs.it4i/software/viz/insitu/img/Result.png b/docs.it4i/software/viz/insitu/img/Result.png
new file mode 100644
index 0000000000000000000000000000000000000000..f198073a0209e4afc562708c0bb68e3b94d85dd1
Binary files /dev/null and b/docs.it4i/software/viz/insitu/img/Result.png differ
diff --git a/docs.it4i/software/viz/insitu/img/Show_velocity.png b/docs.it4i/software/viz/insitu/img/Show_velocity.png
new file mode 100644
index 0000000000000000000000000000000000000000..5870f929f30fdb201687705a46e79122abcc619c
Binary files /dev/null and b/docs.it4i/software/viz/insitu/img/Show_velocity.png differ
diff --git a/docs.it4i/software/viz/insitu/img/Simulator_response.png b/docs.it4i/software/viz/insitu/img/Simulator_response.png
new file mode 100644
index 0000000000000000000000000000000000000000..c52f48a69dc20bc9966f05e35181c95485cdcc34
Binary files /dev/null and b/docs.it4i/software/viz/insitu/img/Simulator_response.png differ
diff --git a/docs.it4i/software/viz/insitu/img/feslicescript.png b/docs.it4i/software/viz/insitu/img/feslicescript.png
new file mode 100644
index 0000000000000000000000000000000000000000..29d5b7fa395b007990d49e0acab120f4f72ddf26
Binary files /dev/null and b/docs.it4i/software/viz/insitu/img/feslicescript.png differ
diff --git a/docs.it4i/software/viz/insitu/insitu.tar.gz b/docs.it4i/software/viz/insitu/insitu.tar.gz
new file mode 100644
index 0000000000000000000000000000000000000000..a58ee6017e061765ba7dcbf405a8ff60a8a2c222
Binary files /dev/null and b/docs.it4i/software/viz/insitu/insitu.tar.gz differ