Comprehensive guide to surface reconstruction from RGB-D data using the RGBDReconstructionAlgorithm.
This page provides detailed information, usage notes, and code examples for reconstructing 3D surfaces from one or multiple RGB-D streams. The algorithm supports live sensor input, playback, and multi-sensor setups, and is designed for robust, scalable volumetric fusion and surfel-based reconstruction.
Overview
The RGBDReconstructionAlgorithm class is the central entry point for RGB-D surface reconstruction in ImFusion. It takes one or more RGBDStream objects and reconstructs a 3D volume, mesh, and tracking sequences. The algorithm is suitable for medical imaging, robotics, and general 3D vision applications.
Key features:
- Supports multiple RGB-D sensors and playback streams
- Volumetric fusion (OpenCL, GPU) backend supported
- Automatic memory management and frame buffering
- Relocalization and keyframe management (SurfelSlam only)
- Live preview and tracking output
- Extensive configuration via SurfaceReconstructionData
Basic Usage
Single Stream Example
The following example demonstrates basic usage for reconstructing from a single RGB-D stream. Note: Reconstruction runs asynchronously. You must call stop() before retrieving the output with takeOutput(). This ensures all frames are processed and the results are finalized.
#include <ImFusion/RGBD/RGBDReconstructionAlgorithm.h>
#include <ImFusion/RGBD/RGBDStream.h>
#include <ImFusion/Base/DataList.h>
#include <ImFusion/RGBD/SurfaceReconstructionData.h>
// stream: a previously opened RGBDStream (live sensor or playback);
// the construction of the algorithm from the stream is shown schematically here
auto reco = std::make_unique<RGBDReconstructionAlgorithm>(DataList(stream));
reco->setMaxFrameBufferSize(200);
reco->compute(); // runs asynchronously
reco->stop();    // finalize so all buffered frames are processed
auto output = reco->takeOutput();
auto mesh = output.get<Mesh>();
The SurfaceReconstructionData object (the data model for surface reconstruction) exposes the main parameters, including:
- setVolumeSize(const vec3& value): set the volume size in world units (usually mm) in each spatial dimension.
- setVolumeResolution(const vec3i& value): set the volume resolution in voxels in each spatial dimension.
- setPreprocessingUseBilateralFilter(bool use): control whether to use the bilateral filter.
- setIcpLevels(int value): set the number of pyramid levels for ICP alignment; each pyramid level is downsampled from the previous level.
Multi-Sensor Example
For multi-sensor setups, simply pass all streams to the algorithm:
// extrinsic calibration of each sensor (assumed known)
mat4 sensorTransform1, sensorTransform2, sensorTransform3;
stream1->setMatrix(sensorTransform1);
stream2->setMatrix(sensorTransform2);
stream3->setMatrix(sensorTransform3);
// construct the algorithm from all streams (shown schematically)
auto reco = std::make_unique<RGBDReconstructionAlgorithm>(DataList({stream1, stream2, stream3}));
reco->compute();
reco->stop();
auto mesh = reco->getMesh();
Advanced Configuration
Relocalization
You can set a custom relocalization instance before starting reconstruction:
// reloc: a std::unique_ptr to a custom relocalization instance
reco->setRelocalization(std::move(reloc));
Frame Buffering
Control memory usage and frame dropping:
reco->setMaxFrameBufferSize(500);    // cap the number of buffered frames
reco->setReconstructAllFrames(true); // process every frame instead of dropping when falling behind
reco->setSkipEverySecondFrame(true); // process only every second frame to reduce load
Callbacks
You can register callbacks to process frames before and after reconstruction:
// Derive from the algorithm's callback interface; the base class name
// is assumed here, see RGBDReconstructionAlgorithm.h for the exact type
class MyCallback : public RGBDReconstructionAlgorithm::Listener
{
public:
	// Called before a frame enters reconstruction; return false to skip it
	bool preProcessFrame(int sensor, RGBDFrame* frame) override
	{
		return true;
	}

	// Called after each frame has been processed
	void onProcessedFrame(int sensor, int status, bool stablePose, bool trackLost,
	                      SharedImage* imgScene, RGBDFrame* frame) override
	{
	}
};

MyCallback cb;
reco->addListener(&cb);
Parameter Tuning Guidelines
- Frame Buffer Size: Set according to available RAM and expected frame rate.
- Keyframe Thresholds: Adjust rotation/translation thresholds to control keyframe density for ReconstructionMethod::SurfelSlam.
- Laser Synchronization: Enable for multi-sensor setups to avoid interference.
- Relocalization: Use for robust tracking in challenging scenarios.
- Preprocessing: Bilateral filter and normal computation can be adjusted via SurfaceReconstructionData.
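The tuning parameters above map onto SurfaceReconstructionData setters. A minimal sketch, assuming the data object is configured before reconstruction starts (setter names are taken from SurfaceReconstructionData.h; the chosen values are illustrative, not recommendations):

```cpp
// Sketch: tuning the main SurfaceReconstructionData parameters.
SurfaceReconstructionData params;
params.setVolumeSize(vec3(512.0, 512.0, 512.0));   // world units, usually mm
params.setVolumeResolution(vec3i(256, 256, 256));  // voxels per spatial dimension
params.setPreprocessingUseBilateralFilter(true);   // edge-preserving depth smoothing
params.setIcpLevels(3);                            // coarse-to-fine ICP pyramid
```

Larger volume sizes at fixed resolution lower the effective voxel density, so increase the resolution together with the volume size when scanning larger scenes.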
Error Analysis and Validation
- Mesh Output: Use getMesh() or takeOutput() to retrieve the reconstructed mesh for validation.
- Tracking Output: Use trackingSequence() to analyze sensor poses over time.
- Live Preview: Use livePointCloud() and liveTrackingSequence() for real-time feedback.
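Since compute() runs asynchronously, the live accessors can be polled while reconstruction is in progress. A minimal sketch, assuming the accessors may be called from the application's update loop (return types and threading rules should be checked against the headers):

```cpp
// Sketch: polling live results during an asynchronous reconstruction.
reco->compute();               // returns while reconstruction keeps running
while (acquisitionRunning) {   // application-defined flag (assumption)
    auto* preview = reco->livePointCloud();         // current point cloud preview
    auto* liveTrack = reco->liveTrackingSequence(); // poses estimated so far
    // ... render preview and inspect liveTrack ...
}
reco->stop();                  // finalize before takeOutput()/getMesh()
```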
Best Practices
- Preprocessing: Ensure input streams are calibrated and synchronized.
- Memory Management: Monitor frame buffer size to avoid dropping frames.
- Parameter Selection: Start with default parameters and tune for your application.
- Validation: Visually inspect mesh and tracking results, and compare against ground truth if available.
- Reconstruction Mode: For small scenes and high speed use VolumetricFusion; for large scenes and flexible keyframe management use SurfelSlam.
Troubleshooting
Common Issues and Solutions
- Dropped Frames: Increase frame buffer size or reduce frame rate.
- Poor Mesh Quality: Check sensor calibration and input data quality.
- Tracking Loss: Enable relocalization or adjust keyframe thresholds.
- Slow Performance: Skip frames or reduce buffer size for real-time applications.
See also
- RGBDReconstructionAlgorithm Class