
Simulator Addon Guide



Content

  1. Add-on Description
  2. Installation
  3. Add-on functionality
  4. Generated Outputs
  5. Contributors
  6. License
  7. Acknowledgement


Add-on Description

  • Creates a 3D virtual environment of a railway track as a virtual replica of a real track.
  • Creates movement of a virtual train respecting user defined conditions.
  • Generates various sensory outputs such as RGB image, LIDAR, GPS, thermovision, segmentation map with a ground truth classification, etc.
  • Simulates multiple critical scenarios in the virtual environment that might arise on a real track, e.g., collision events, different light and weather conditions (rain, snow, and clouds), etc.
  • Generated environment can be populated by static or dynamic objects.
  • Outputs from the simulator can be used to develop object detection methods applicable in the train environment.


Installation

The add-on is compatible with Blender 3.3 LTS.

For add-on installation, see the Installation and Setup Guide.



Add-on functionality

Basic Usage:

  • Go to 3D View Sidebar (N) > Simulator tab.
  • In the Create scene panel, select a Scenario from the dropdown. Click Load scenario and then click Create scene.

Scene

  • Save scene as JSON
    • Enabled once the scene has been created.
    • Saves the scene, including any changes, to a new .json file.
  • Save in scenario json
    • If enabled, saves the scene into the existing .json file that was used to load the scene.
  • Clean all deletes and cleans everything in the scene.

Json Structure
UI Scene

Create scene

  • Scenario: all .json files from the Json folder are offered in this dropdown menu.
  • Load scenario loads all parameters from the selected .json file and
    updates them in the UI.
    The user can either change the .json file prior to clicking Load scenario,
    or change the parameters in the UI after clicking Load scenario and
    before creating the scene.
    The button is disabled after the scene is created. To re-enable it, clean
    the scene using the Clean all button.
  • Create scene is enabled after a scenario is loaded. It creates the scene based on the scenario.
UI Create Scene
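The Load scenario step can be sketched in plain Python, outside Blender. The file name and parameter keys below are illustrative assumptions only; the real schema is defined by the files in the add-on's Json folder:

```python
import json

# Write a minimal example scenario file (illustrative keys only).
with open("example_scenario.json", "w", encoding="utf-8") as fh:
    json.dump({"season": "winter", "cloud_type": "overcast"}, fh)

def load_scenario(path):
    """Load all parameters from the selected scenario .json file and
    fill in a default for a key the file omits (hypothetical default)."""
    with open(path, encoding="utf-8") as fh:
        params = json.load(fh)
    params.setdefault("hour", 12)
    return params

scenario = load_scenario("example_scenario.json")
```

Loaded values can then be tweaked before scene creation, mirroring the UI workflow of editing parameters between Load scenario and Create scene.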

Manual changes

Hour of the day


Hour affects the sky and changes the clouds available in the Cloud Type dropdown.

UI I Hour

UI Hour

Sky selection


Cloud Type dropdown can be used to change the cloud type.
Sky Rotation can be used to change the rotation of cloud HDRI in world shader.

UI I SkySel

UI SkySel

Weather


  • Season
    The following seasons can be selected from the dropdown: summer, winter, spring and autumn.
  • Precipitation density [0,1]
    • rain in spring, summer and autumn
    • snow in winter
  • Snow cover amount [0,1]
    • snow displacement on terrain in winter
UI Wea1
UI Wea2
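The weather rules above can be summarized in a small sketch. The function names are illustrative, not the add-on's actual API; the point is that precipitation type follows the season and both sliders are confined to [0, 1]:

```python
def precipitation_kind(season):
    """Rain falls in spring, summer and autumn; snow only in winter."""
    return "snow" if season == "winter" else "rain"

def clamp01(value):
    """Both weather sliders are restricted to the [0, 1] range."""
    return max(0.0, min(1.0, value))

# Out-of-range input is clipped back into the valid range.
density = clamp01(1.7)
```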

Vegetation


Vegetation is scattered using geometry nodes. Pebbles are scattered using the same vegetation geometry node group.

  • Show dummy vegetation shows dummy vegetation in the 3D Viewport.
  • Vegetation: all vegetation types from the .json file are offered in the menu.
  • Breed applies to tree vegetation only.
  • Vegetation density changes the density (count/m²).
  • Distance Cull culls vegetation based on its distance from the active camera.
  • Edit & Close edits the vegetation weight map. The vegetation raster is used to create the weight map (stored in a vertex group).

UI I Veg
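The Distance Cull and Vegetation density settings can be illustrated with a minimal sketch. The function and parameter names are hypothetical; in the add-on the culling happens inside the vegetation geometry-node group:

```python
import math

def is_culled(obj_location, camera_location, cull_distance):
    """Return True when a vegetation instance lies farther from the
    active camera than cull_distance (and is therefore hidden)."""
    return math.dist(obj_location, camera_location) > cull_distance

def expected_count(density_per_m2, area_m2):
    """Vegetation density is given in instances per square metre, so the
    expected instance count is density times the scattered area."""
    return density_per_m2 * area_m2
```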

Objects


  • Add map adds and shrinks the selected raster map onto the terrain to help position added objects.
  • Show map shows/hides the added raster map.
  • Place object grabs the active object and helps snap it to a new location.
  • Add static object opens a file manager to select a .blend file from the 3D_Assets directory and adds the main object to the scene. The added objects are linked to the scene.
  • Add tweakable object adds tweakable (so-called dynamic) objects. The dynamic objects can only be added from the following locations:

  • '3D_Assets/tweakable_assets/curve' - APPEND
  • '3D_Assets/tweakable_assets/curve_append' - APPEND
  • '3D_Assets/tweakable_assets/animated' - LINK and OVERRIDE if a nonlinear asset is added, and APPEND if a linear model is added. The shrinkwrap curve can be edited as required.
  • '3D_Assets/tweakable_assets/library_override' - LINK and OVERRIDE

Advanced assets (i.e., .blend files with the advanced keyword in their names) cannot be added to the scene by users. However, those assets may already be present in the scene.
From a high-level point of view, we distinguish the following object types:

  • Mesh object based on geometry nodes, e.g., a gate or a milestone. The inner structure differs per asset.
  • Mesh object with curve and array modifiers, e.g., rails and roads. These objects are advanced and only experienced users should add them to the scene.
  • Mesh object with a shrinkwrap modifier, e.g., a rail heater. When loaded, the shrinkwrap modifier must be created and applied. This asset is advanced.
  • Curve object with or without animation, e.g., a walker.
UI I Objects

Sensor Setup


Camera Setup switches among the predefined cameras from the .json file and tweaks camera parameters:
field of view, position and rotation.

  • Location X => Front/back,
  • Location Y => Left/right,
  • Location Z => Up/down,
  • Rotation X => Pitch,
  • Rotation Y => Roll,
  • Rotation Z => Yaw

Lidar Setup can be used to change lidar depth and downsample voxel size.

UI Sensor
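The lidar downsample voxel size can be illustrated with a plain-Python sketch of voxel-grid downsampling: the point cloud is binned into cubes of the given edge length and each occupied cube keeps one averaged point. This is a common interpretation of such a setting, not a copy of the add-on's implementation:

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Reduce a point cloud to one averaged point per occupied voxel.
    A larger voxel_size yields a sparser cloud."""
    buckets = defaultdict(list)
    for p in points:
        # Integer voxel index of the point along each axis.
        key = tuple(int(c // voxel_size) for c in p)
        buckets[key].append(p)
    # Replace each voxel's points with their centroid.
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for pts in buckets.values()]

cloud = [(0.1, 0.1, 0.0), (0.2, 0.2, 0.0), (5.0, 5.0, 5.0)]
sparse = voxel_downsample(cloud, voxel_size=1.0)  # two nearby points merge
```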

Bake scene


Bake adds a postprocessing handler which generates additional outputs after standard Blender rendering (GT, GPS, Lidar, Thermo). The user should save the scene before baking.
Unbake removes the postprocessing handler.

UI Bake1

UI Bake2


Generated Outputs

The following images show the rendered/generated outputs, which are saved in the out folder: Albedo, Atmosphere, Depth, Glossiness, GPS (.txt), GT, Lidar (.pcd), Normal, RGB and Thermo. The out folder is created one level above the folder where the scene is saved. The node group for generating these render passes can be found in the compositor.
The lidar output is created using the GT and depth maps. The thermo output is a prediction made by a neural network model implemented in the PyTorch framework and based on the Pix2Pix architecture.
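Deriving lidar-like points from a rendered depth map can be sketched with a pinhole back-projection. This is a simplified illustration under the assumption of a standard pinhole camera model; the add-on's actual GT-plus-depth pipeline may differ in detail:

```python
import math

def depth_to_points(depth, fov_deg, width, height):
    """Back-project a depth map (row-major, metres) into camera-space
    3D points using a pinhole camera model."""
    # Focal length in pixels from the horizontal field of view.
    f = (width / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    cx, cy = width / 2.0, height / 2.0  # principal point at image centre
    points = []
    for v in range(height):
        for u in range(width):
            z = depth[v][u]
            points.append(((u - cx) * z / f, (v - cy) * z / f, z))
    return points

# A tiny 2x2 depth map: every pixel is 1 m from the camera plane.
pts = depth_to_points([[1.0, 1.0], [1.0, 1.0]], fov_deg=90.0, width=2, height=2)
```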


GPS output:


Contributors

  • Petr Strakos (petr.strakos@vsb.cz)
  • Khyati Sethia
  • Marta Jaros
  • Alena Jesko
  • Roman Machacek
  • David Ciz
  • Alfred Koci
  • Vyomkesh Jha
  • Ada Bohm
  • Petr Jelen
  • Tomas Kulich
  • Jakub Sipr


License



Acknowledgement

Created within the project FW01010274 Research and development of a functional sample of a railway vehicle with the ability to collect data and software - a simulator with the ability to generate data for obstacle detection training in simulated conditions. Co-financed by state support from the Technology Agency of the Czech Republic within the TREND programme.