RGB-D Reconstruction

An RGB-D SLAM algorithm that takes one or more RGB-D streams as input and performs a 3D reconstruction of the observed scene.

Input

One or more RGB-D streams. These can be either live sensor streams or recorded sequences.

Output

A mesh representation of the scene.

Description

Sensor Selection

When using multiple sensors, select the sensors in the order in which they should later be displayed in the user interface and click OK. After the sensors have been initialized, their images and depth maps are shown. If the order of the sensors is not as intended, close the algorithm and reselect the sensors in the correct order.

Reconstruction Settings

Before beginning a scan, the reconstruction volume needs to be defined. The scan will contain all objects located within this volume, which is visualized as the box shown in the point cloud view. For better visualization, everything located inside the cube is colored green in the depth map view of each sensor. Adjust the size and position of the volume so that it contains all objects that are to be reconstructed. When reconstructing objects on a rotating turntable, it is important to select only the object on the turntable and not any non-rotating parts of the scene such as walls or the floor.

../../_images/RecVolume1.png
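The green highlighting described above is effectively an axis-aligned containment test on each 3D point. A minimal sketch in Python, assuming a cubic volume given by a center and side length (all names are illustrative, not part of the actual software):

```python
import numpy as np

def points_in_volume(points, center, side_length):
    """Axis-aligned containment test: True for points inside the cube."""
    half = side_length / 2.0
    offsets = np.abs(points - center)       # per-axis distance from the center
    return np.all(offsets <= half, axis=1)  # inside on all three axes

pts = np.array([[0.0, 0.0, 0.5],   # inside the volume
                [0.0, 0.0, 2.0]])  # behind the volume
mask = points_in_volume(pts, center=np.array([0.0, 0.0, 0.5]), side_length=1.0)
# mask → [True, False]
```

Points for which the mask is True would be the ones drawn in green in the depth map views.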

The size and location of the reconstruction volume can be modified in several ways. The size of the volume can be changed through the Volume Size tab. Alternatively, it can be changed by clicking and dragging vertically with the middle mouse button in the depth view of any sensor, which scales the reconstruction box.

The location of the volume can be changed either by setting the position values in the Volume Position tab or by clicking the Translate button and dragging the axis handles that are then shown in the point cloud view. Another option is to click and drag in the depth view with the left or right mouse button.

Next, the volume resolution needs to be defined. It should be set so that the voxel size displayed in the Volume Resolution tab is less than 1.5 mm. For live reconstruction, keep in mind that higher resolutions require more processing power. If the framerate during reconstruction is too low (i.e. too many frames have to be dropped), consider using a lower resolution. For offline reconstruction this is not an issue. The maximum achievable resolution is limited by the available GPU memory; if there is insufficient memory for the chosen resolution, a warning is shown and a lower resolution should be chosen.
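The voxel size shown in the Volume Resolution tab follows directly from the volume size divided by the resolution, and the GPU memory footprint grows cubically with the resolution. A rough sketch of both relations (the 8 bytes per voxel is an assumption for illustration; the actual per-voxel storage depends on the volumetric representation used):

```python
def voxel_size_mm(volume_side_mm, resolution):
    """Edge length of one voxel for a cubic volume."""
    return volume_side_mm / resolution

def volume_memory_bytes(resolution, bytes_per_voxel=8):
    """Rough memory footprint of a dense voxel grid (assumed 8 B/voxel)."""
    return resolution ** 3 * bytes_per_voxel

# A 600 mm volume at resolution 512 gives ~1.17 mm voxels,
# comfortably below the recommended 1.5 mm, at roughly 1 GiB.
size = voxel_size_mm(600, 512)               # → 1.171875 mm
mem_gib = volume_memory_bytes(512) / 2**30   # → 1.0 GiB
```

This makes the trade-off concrete: halving the voxel size doubles the resolution and multiplies the memory requirement by eight.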

Optionally, a start delay can be set before the reconstruction begins. This can be done through the Timer tab, where it is also possible to set a fixed time after which the reconstruction stops, e.g. the duration of one turntable rotation.

Some additional settings are found in the Keyframe Settings tab. They control at which rotation and translation increments from the last keyframe a new keyframe is captured. Keyframes are only used for texturing and do not affect the tracking.
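The keyframe criterion described above can be sketched as a threshold on the relative motion since the last keyframe. The poses, thresholds, and function name below are illustrative assumptions, not the actual implementation:

```python
import numpy as np

def is_new_keyframe(pose, last_keyframe_pose,
                    trans_thresh_m=0.1, rot_thresh_deg=10.0):
    """Capture a new keyframe once the camera has translated or rotated
    far enough from the last keyframe (poses as 4x4 homogeneous matrices)."""
    delta = np.linalg.inv(last_keyframe_pose) @ pose
    translation = np.linalg.norm(delta[:3, 3])
    # Rotation angle recovered from the trace of the relative rotation matrix.
    cos_angle = np.clip((np.trace(delta[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_angle))
    return translation > trans_thresh_m or angle_deg > rot_thresh_deg
```

Raising the thresholds yields fewer keyframes (and less texture data); lowering them yields denser texture coverage.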

Some RGB-D data sources include ground-truth poses with the data. When the option Use frame pose if available is enabled, these poses are used for reconstruction and no tracking is performed.

With the Show advanced options check box, all reconstruction settings can be shown and configured. This is meant for experts only; changing these parameters is not recommended.

Reconstruction

Once the reconstruction volume has been defined, the scan can be started by clicking on Reconstruct. During the scan the reconstruction view is shown. Instead of the 3D point cloud view a live preview of the current reconstruction result is shown on the right.

../../_images/UIRec.png

Typical error sources during scanning are moving the sensor too fast or leaving the reconstruction volume.

Export

After the reconstruction the mesh is directly exported to the data model and the RGB-D stream view is hidden automatically. Using the Export Tracking button, the tracked camera poses can be exported to the data model as well. The recorded keyframes can be exported via Export keyframes. To start a new reconstruction, click the Show live view button to show the RGB-D stream again.

Sensor configuration

Expand the Sensor Settings tab to view the sensor settings for the current sensor. You can change the current sensor by changing the selection in the sensor drop-down menu. Using the Orientation drop-down menu the orientation of the displayed sensor image can be matched to the physical orientation of the sensor.

To remove visual clutter that is not part of the scan volume from the color, depth, and point cloud views (e.g. a wall in the background), a Depth Cutoff can be set. Enabling this removes all depth measurements beyond the specified depth.
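Conceptually, the depth cutoff is a simple thresholding of the depth map. A minimal sketch, assuming metric depth values with 0 marking invalid pixels (a common but not universal convention):

```python
import numpy as np

def apply_depth_cutoff(depth_map, cutoff):
    """Invalidate (set to 0) all depth measurements beyond the cutoff."""
    filtered = depth_map.copy()
    filtered[filtered > cutoff] = 0.0  # assumption: 0 means "no measurement"
    return filtered

depth = np.array([[0.8, 1.2],
                  [2.5, 0.0]])              # metres; 0.0 already invalid
near = apply_depth_cutoff(depth, cutoff=1.5)
# near → [[0.8, 1.2], [0.0, 0.0]]
```

Everything beyond the cutoff disappears from the views, which also keeps distant background geometry out of the point cloud display.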

The Load calibration and Save calibration buttons allow the calibration settings of the current sensor to be loaded from and saved to disk.

Multi-Sensor calibration

../../_images/CalibrationPattern.png ../../_images/SchematicMultiSensor.png ../../_images/TurntableSetup.png

Before calibrating the sensors, the point clouds in the point cloud view are not aligned. To align them, the sensors must be calibrated, i.e. their relative positions must be determined. This requires a calibration pattern: using the known structure of the pattern, the positions of the sensors with respect to each other can be determined.

../../_images/BeforeCalib.png
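Determining the relative position of two sensors from a shared view of the pattern amounts to composing their two pattern poses. A sketch with hypothetical 4x4 homogeneous transforms, where `T_cam_pattern` maps pattern coordinates into a camera's frame:

```python
import numpy as np

def relative_pose(T_cam_a_pattern, T_cam_b_pattern):
    """Pose of sensor B expressed in sensor A's frame.

    Both inputs are 4x4 pattern-to-camera transforms; the pattern's known
    geometry anchors both sensors to a common reference frame."""
    return T_cam_a_pattern @ np.linalg.inv(T_cam_b_pattern)
```

Each per-sensor pattern pose would come from detecting the pattern in that sensor's image; the composition above is what aligns the point clouds afterwards.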

The parameters of the calibration marker can be configured via the marker configuration settings. It is important that the marker be placed on a rigid and flat surface, e.g. a wooden board. The pattern must not be bent or distorted in any way.

There are two calibration procedures, designed to allow calibrating all kinds of multi-sensor setups provided that there is at least some overlap between some of the sensors. The first method requires the calibration pattern to be positioned so that it can be seen by all sensors at the same time. The second method (sequential calibration) only requires the pattern to be visible to two consecutive sensors (as defined by the order in which they are shown in the user interface). Sequential calibration is particularly useful for setups in which the sensors are arranged vertically.
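In the sequential method, each pattern placement yields the transform between one consecutive sensor pair; chaining these pairwise results expresses every sensor in the first sensor's frame. A sketch under that assumption (names and conventions are illustrative):

```python
import numpy as np

def chain_to_first(pairwise_transforms):
    """Compose pairwise transforms T_(i)_(i+1), each giving sensor i+1's
    pose in sensor i's frame, into poses relative to the first sensor."""
    poses = [np.eye(4)]  # sensor 0 defines the reference frame
    for T in pairwise_transforms:
        poses.append(poses[-1] @ T)
    return poses
```

This is why only consecutive sensors need overlapping views: the full chain of pairwise calibrations connects even sensor pairs that never see the pattern simultaneously.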

After clicking on Calibrate or Calibrate Seq., depending on which calibration method you want to use, one image per sensor will be acquired and used for calibration. It is important that the calibration pattern is not moved during this time.

For the standard calibration, the pattern needs to be placed so that it is seen well by all sensors; for the sequential calibration, it first needs to be seen by the first and second sensors. The pattern should appear as large as possible in all sensor images and should not be placed too close to the image border. For the sequential calibration, a message will be displayed after the pattern has been detected by the first sensor pair, asking for the pattern to be moved so that it is visible to the second and third sensors. This is repeated until the pattern has been seen by all consecutive sensor pairs.

../../_images/SeqCalibration12.png ../../_images/SeqCalibration23.png

If the calibration succeeds, a success message is displayed. If the calibration pattern was very small in the image, the calibration might be inaccurate; in this case the pattern should be moved closer to the sensors if possible, or printed on larger paper (e.g. DIN A2). When printing a larger version of the pattern, the calibration pattern side length in the multi-sensor tab needs to be adjusted accordingly. After a successful calibration, the point clouds in the point cloud view on the right are aligned.

../../_images/AfterCalib.png ../../_images/AfterCalibSide.png

The point cloud view should be carefully examined to make sure that there are no misalignments.