RGB-D Simulation

The algorithm simulates a setup of RGB-D cameras that capture meshes in the scene. The setup can either consist of one or more freely placed cameras or emulate a rig of poles surrounding the object, each carrying a set of cameras.

Example of a simulation setup with three poles with two cameras each

Input

The algorithm takes any number of meshes as input. The meshes can be colored or textured, but do not have to be.

Output

A point cloud, optionally with colored vertices, and/or the corresponding depth/color images or an RGB-D sequence.
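
To illustrate how the depth images and the point cloud relate, the sketch below back-projects every pixel of a simulated depth image into 3D using pinhole intrinsics. The intrinsic values, the depth image, and the Python/NumPy formulation are assumptions for the example, not part of the tool.

```python
# Minimal sketch: back-projecting a depth image (in meters) into a point cloud
# in the camera's coordinate system. fx, fy, cx, cy are assumed pinhole intrinsics.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert an HxW depth image to an Nx3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop pixels without depth

# Example: a flat 640x480 depth image 1.5 m in front of the camera
depth = np.full((480, 640), 1.5, dtype=np.float32)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)   # (307200, 3)
```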

Description

The algorithm lets the user either place any number of RGB-D cameras freely in the scene or automatically create a setup of multiple camera poles distributed around the object (here, an object is a rigid configuration of one or more meshes).

General Controls

The controller provides the following controls:

  • Load Pole Config: Load a configuration file for a given pole setup

  • Create Pole Setup: After the user specifies the number of poles, the number of cameras per pole, and the distance of the poles from the object, the algorithm automatically creates a configuration that distributes the poles with their attached cameras around the object (see the sketch after this list for the underlying placement idea). Those cameras then appear in the camera list, where their parameters can be fine-tuned (see Camera Parameters Control), and the last added camera is selected.

  • Add Camera: Adds a single camera with default parameters at the origin and selects it in the camera list (see Camera Parameters Control).

  • Remove Camera: Removes the currently selected camera.
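
The sketch below shows one plausible way such a pole setup can be generated: the poles are spaced evenly on a circle of the given radius around the object center, and the cameras of each pole are spread over the pole height and oriented towards the center. The pole height, the look-at/axis conventions, and the code itself are illustrative assumptions, not the tool's actual implementation.

```python
# Minimal sketch of a pole setup: n_poles evenly spaced on a circle around the
# object, cams_per_pole cameras per pole, all looking at the object center.
# pole_height and the camera axis convention (x right, y down, z forward) are
# assumptions for illustration.
import numpy as np

def create_pole_setup(n_poles, cams_per_pole, distance,
                      pole_height=2.0, center=(0.0, 0.0, 0.0)):
    """Return one 4x4 camera-to-world transform per camera."""
    center = np.asarray(center, dtype=float)
    transforms = []
    for p in range(n_poles):
        angle = 2.0 * np.pi * p / n_poles
        base = center + distance * np.array([np.cos(angle), np.sin(angle), 0.0])
        for c in range(cams_per_pole):
            # spread the cameras along the pole's height
            height = pole_height * (c + 1) / (cams_per_pole + 1)
            position = base + np.array([0.0, 0.0, height])
            # build a look-at frame: the camera's z-axis points at the center
            forward = center - position
            forward /= np.linalg.norm(forward)
            right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
            right /= np.linalg.norm(right)
            down = np.cross(forward, right)   # camera y-axis (roughly downward)
            T = np.eye(4)
            T[:3, 0], T[:3, 1], T[:3, 2] = right, down, forward
            T[:3, 3] = position
            transforms.append(T)
    return transforms

cams = create_pole_setup(n_poles=3, cams_per_pole=2, distance=2.5)
print(len(cams))   # 6 cameras, as in the example setup above
```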

Camera Parameters Control

Whenever a camera is selected in the camera dropdown menu, a control group with camera parameters appears, allowing the user to set the following parameters for the selected camera:

  • Width and Height: The resolution of the camera in pixels, separately for depth and color camera.

  • Intrinsics: The camera intrinsics, separately for depth and color camera.

  • Depth to Color T: A transformation that maps points from the coordinate system of the depth camera to the coordinate system of the color camera (see the sketch after this list).

  • T: A transformation that positions and orients the virtual sensor in world space.

  • Manipulation: Allows the user to interactively move the virtual sensor around in the 3D view, either by translating or rotating it. Automatically updates T while doing so.
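
As an illustration of how Depth to Color T is typically used, the sketch below maps a point from the depth camera's coordinate system into the color camera's frame and projects it with assumed color intrinsics to find the pixel whose color it would receive. All numeric values, the pinhole model, and the matrix conventions are assumptions for the example, not defaults of the tool.

```python
# Minimal sketch: sampling a color for a point measured by the depth camera.
# depth_to_color_T and K_color are illustrative placeholder values.
import numpy as np

K_color = np.array([[525.0,   0.0, 319.5],      # assumed color-camera intrinsics
                    [  0.0, 525.0, 239.5],
                    [  0.0,   0.0,   1.0]])

depth_to_color_T = np.eye(4)                    # depth -> color transformation
depth_to_color_T[:3, 3] = [0.025, 0.0, 0.0]     # color camera 2.5 cm to the side

p_depth = np.array([0.1, -0.05, 1.5, 1.0])      # homogeneous point, depth frame
p_color = (depth_to_color_T @ p_depth)[:3]      # same point in the color frame
uv = (K_color @ p_color)[:2] / p_color[2]       # pixel where the color is sampled
print(uv)                                       # approx. [363.25, 222.0]
```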

Animation

Rotates the object 360° around the z-axis in steps given by Angle Step.
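
A minimal sketch of the resulting sequence of object poses, assuming the rotation is about the world z-axis through the origin and one pose is produced per angle step:

```python
# Minimal sketch: the object poses generated by a full 360-degree turn
# about the z-axis with a given angle step (assumed to pass through the origin).
import numpy as np

def animation_poses(angle_step_deg):
    """Yield one 4x4 object transform per animation step."""
    for angle_deg in np.arange(0.0, 360.0, angle_step_deg):
        a = np.radians(angle_deg)
        rotation_z = np.array([[np.cos(a), -np.sin(a), 0.0],
                               [np.sin(a),  np.cos(a), 0.0],
                               [0.0,        0.0,       1.0]])
        T = np.eye(4)
        T[:3, :3] = rotation_z
        yield T

poses = list(animation_poses(angle_step_deg=10.0))
print(len(poses))   # 36 poses
```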

Export

Adds the selected outputs to the data view (or the annotation view, if applicable) and optionally exports them to the given target directory (Output Folder). If no output folder is given, nothing is exported.