ImFusion C++ SDK 4.4.0
RGBD Reconstruction

Comprehensive guide to surface reconstruction from RGB-D data using the RGBDReconstructionAlgorithm.


This page provides detailed information, usage notes, and code examples for reconstructing 3D surfaces from one or multiple RGB-D streams. The algorithm supports live sensor input, playback, and multi-sensor setups, and is designed for robust, scalable volumetric fusion and surfel-based reconstruction.

Overview

The RGBDReconstructionAlgorithm class is the central entry point for RGB-D surface reconstruction in ImFusion. It takes one or more RGBDStream objects and reconstructs a 3D volume, mesh, and tracking sequences. The algorithm is suitable for medical imaging, robotics, and general 3D vision applications.

Key features:

  • Supports multiple RGB-D sensors and playback streams
  • Volumetric fusion (OpenCL, GPU) backend supported
  • Automatic memory management and frame buffering
  • Relocalization and keyframe management (SurfelSlam only)
  • Live preview and tracking output
  • Extensive configuration via SurfaceReconstructionData

Basic Usage

Single Stream Example

The following example demonstrates basic usage for reconstructing from a single RGB-D stream. Note: Reconstruction runs asynchronously. You must call stop() before retrieving the output with takeOutput(). This ensures all frames are processed and results are finalized.

#include <ImFusion/RGBD/RGBDReconstructionAlgorithm.h>
#include <ImFusion/RGBD/RGBDStream.h>
#include <ImFusion/Base/DataList.h>
#include <ImFusion/RGBD/SurfaceReconstructionData.h>
using namespace ImFusion;
// Prepare input stream(s)
std::vector<RGBDStream*> streams = { rgbdStream };
// Create the reconstruction algorithm from the input streams
auto reco = std::make_unique<RGBDReconstructionAlgorithm>(streams);
// Access and configure reconstruction parameters via SurfaceReconstructionData
SurfaceReconstructionData* recoData = reco->surfaceReconstructionData();
// Example: Set volume resolution and size
recoData->setVolumeResolution(vec3i(256, 256, 256));
recoData->setVolumeSize(vec3(1000, 1000, 1000)); // in mm
recoData->setIcpLevels(3);
// Set additional parameters as needed
// recoData->setIntegrationMaxWeight(128);
// recoData->setColorScene(true);
// Optionally configure parameters
reco->setMaxFrameBufferSize(200); // Maximum buffered frames
// Start reconstruction (runs asynchronously)
reco->compute();
// ... perform other tasks, or wait for user interaction ...
// When ready to finalize, call stop() to finish processing
reco->stop();
// Now retrieve output mesh and tracking sequences
OwningDataList output = reco->takeOutput();
auto mesh = output.get<Mesh>();
auto tracking = output.get<TrackingSequence>();

Multi-Sensor Example

For multi-sensor setups, simply pass all streams to the algorithm:

// For each stream, set the extrinsic transformation before creating the algorithm.
// The calibration results sensorTransform1, sensorTransform2, sensorTransform3
// must be obtained via extrinsic calibration; each transform maps the depth
// sensor coordinate system to a common reference coordinate system.
mat4 sensorTransform1, sensorTransform2, sensorTransform3;
stream1->setMatrix(sensorTransform1);
stream2->setMatrix(sensorTransform2);
stream3->setMatrix(sensorTransform3);
std::vector<RGBDStream*> streams = { stream1, stream2, stream3 };
// Create the reconstruction algorithm from all input streams
auto reco = std::make_unique<RGBDReconstructionAlgorithm>(streams);
// Access and configure reconstruction parameters via SurfaceReconstructionData
SurfaceReconstructionData* recoData = reco->surfaceReconstructionData();
// Example: Set volume resolution and size
recoData->setVolumeResolution(vec3i(256, 256, 256));
recoData->setVolumeSize(vec3(1000, 1000, 1000)); // in mm
recoData->setIcpLevels(3);
// Set additional parameters as needed
// recoData->setIntegrationMaxWeight(128);
// recoData->setColorScene(true);
// Start reconstruction (runs asynchronously)
reco->compute();
// ... perform other tasks, or wait for user interaction ...
// When ready to finalize, call stop() to finish processing
reco->stop();
auto mesh = reco->getMesh();

Advanced Configuration

Relocalization

You can set a custom relocalization instance before starting reconstruction:

reco->setRelocalization(std::move(reloc));

Frame Buffering

Control memory usage and frame dropping:

reco->setMaxFrameBufferSize(500); // Maximum number of frames in buffer
reco->setReconstructAllFrames(true); // Process all frames (default for playback)
reco->setSkipEverySecondFrame(true); // Reduce load for low-powered hardware

Callbacks

You can register callbacks to process frames before and after reconstruction:

class MyCallback : public RGBDReconstructionCallback {
public:
    bool preProcessFrame(int sensor, RGBDFrame* frame) override {
        // Inspect or modify the frame before reconstruction
        return true; // Return false to drop the frame
    }
    void onProcessedFrame(int sensor, int status, bool stablePose, bool trackLost, SharedImage* imgScene, RGBDFrame* frame) override {
        // Handle the processed frame, e.g. update the UI
    }
};
MyCallback cb;
reco->addListener(&cb);

Parameter Tuning Guidelines

  • Frame Buffer Size: Set according to available RAM and expected frame rate.
  • Keyframe Thresholds: Adjust rotation/translation thresholds to control keyframe density for ReconstructionMethod::SurfelSlam.
  • Laser Synchronization: Enable for multi-sensor setups to avoid interference.
  • Relocalization: Use for robust tracking in challenging scenarios.
  • Preprocessing: Bilateral filter and normal computation can be adjusted via SurfaceReconstructionData.
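As a starting point, the guidelines above might translate into a configuration like the following sketch. It uses only setters shown elsewhere on this page; the parameter values are illustrative placeholders, not recommendations.

```cpp
// Illustrative tuning sketch; values are placeholders, not recommendations.
SurfaceReconstructionData* recoData = reco->surfaceReconstructionData();
recoData->setPreprocessingUseBilateralFilter(true); // smooth depth noise before fusion
recoData->setIcpLevels(3);                          // coarse-to-fine ICP alignment
reco->setMaxFrameBufferSize(300);                   // size according to available RAM and frame rate
```

For SurfelSlam, keyframe rotation/translation thresholds are configured via SurfaceReconstructionData as well; consult the class reference for the exact setter names.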

Error Analysis and Validation

  • Mesh Output: Use getMesh() or takeOutput() to retrieve the reconstructed mesh for validation.
  • Tracking Output: Use trackingSequence() to analyze sensor poses over time.
  • Live Preview: Use livePointCloud() and liveTrackingSequence() for real-time feedback.
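Putting these accessors together, a polling loop for live feedback might look like the sketch below. The accessor names come from this page; the exact return types and the surrounding loop control are assumptions.

```cpp
// Sketch: poll live outputs while reconstruction runs asynchronously.
reco->compute();
while (running) { // 'running' is assumed to be controlled elsewhere, e.g. by the UI
    auto preview   = reco->livePointCloud();       // current partial point cloud
    auto livePoses = reco->liveTrackingSequence(); // sensor poses estimated so far
    // ... render the preview / plot the poses ...
}
reco->stop();
auto mesh  = reco->getMesh();          // final mesh for validation
auto poses = reco->trackingSequence(); // full pose history for error analysis
```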

Best Practices

  1. Preprocessing: Ensure input streams are calibrated and synchronized.
  2. Memory Management: Monitor frame buffer size to avoid dropping frames.
  3. Parameter Selection: Start with default parameters and tune for your application.
  4. Validation: Visually inspect mesh and tracking results, and compare against ground truth if available.
  5. Reconstruction Mode: For small scenes and high speed use VolumetricFusion; for large scenes and flexible keyframe management use SurfelSlam.
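Selecting the mode could look like the following sketch. The enum values ReconstructionMethod::VolumetricFusion and ReconstructionMethod::SurfelSlam appear above; the setter name setReconstructionMethod is hypothetical and should be verified against the SurfaceReconstructionData reference.

```cpp
// Hypothetical setter name; verify against SurfaceReconstructionData.
recoData->setReconstructionMethod(ReconstructionMethod::VolumetricFusion); // small scene, high speed
// For large scenes with flexible keyframe management and relocalization:
// recoData->setReconstructionMethod(ReconstructionMethod::SurfelSlam);
```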

Troubleshooting

Common Issues and Solutions

  • Dropped Frames: Increase frame buffer size or reduce frame rate.
  • Poor Mesh Quality: Check sensor calibration and input data quality.
  • Tracking Loss: Enable relocalization or adjust keyframe thresholds.
  • Slow Performance: Skip frames or reduce buffer size for real-time applications.
See also
RGBDReconstructionAlgorithm Class