Endoscopic Tool Segmentation

Performs endoscopic tool segmentation using selectable algorithm backends, including feature-based and machine learning models.

Input

One 2D image set. Only 8-bit RGB images with a single slice per image are supported.
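These input constraints can be expressed as a small validation sketch. This is a hypothetical helper for illustration only (it is not part of the module's API); it checks for 8-bit data and a single RGB slice:

```python
import numpy as np

def check_input(image: np.ndarray) -> None:
    """Reject images that are not 8-bit RGB with a single slice.

    Hypothetical helper mirroring the stated input requirements;
    not part of the module itself.
    """
    if image.dtype != np.uint8:
        raise ValueError(f"expected 8-bit data, got {image.dtype}")
    if image.ndim != 3 or image.shape[-1] != 3:
        raise ValueError(f"expected a single RGB slice (H, W, 3), got {image.shape}")

# A single 480x736 RGB frame passes; a 16-bit image would raise.
check_input(np.zeros((480, 736, 3), dtype=np.uint8))
```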

Output

Label image with two classes: “Background” and “Endoscopic Tool”. Visualization includes label names and colors.

Description

The algorithm supports multiple segmentation backends, selectable via the parameter endoscopicToolSegmentationAlgorithm. Available backends include feature-based methods and machine learning models (e.g., “Custom Model”).

For the “Custom Model” backend, a machine learning model (Torch, ONNX, or TensorRT) is loaded from a user-specified path (modelConfigurationPath). The model processes the input image set and outputs a segmentation label image. An example model is the MONAI model (model.ts) at https://huggingface.co/MONAI/endoscopic_tool_segmentation/tree/0.6.2/models/. An example model configuration:

```yaml
Version: '8.0'
Type: NeuralNetwork
Name: CustomEndoscopicToolSegmentation
Description:
Engine:
  Name: torch
  ModelFile: model.ts
  ForceCPU: false
  Verbose: false
  InputFields: [Image]
  OutputFields: [ToolSeg]
  DisableJITRecompile: true
PreprocessingInputFields: [Image]
PredictionOutput: [Image]
Sampling:
  - SkipUnpadding: True
PreProcessing:
  - ResampleDims:
      target_dims: 736 480 1
  - MakeFloat:
  - NormalizeUniform:
      min: 0
      max: 1
PostProcessing:
  - Softmax:
  - ArgMax:
  - ResampleToInput:
MaxBatchSize: 2
```
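The NormalizeUniform, Softmax, and ArgMax stages of the configuration can be sketched in NumPy. This is a simplified illustration of what those stages compute, not the module's implementation; the resampling steps (ResampleDims, ResampleToInput) are omitted:

```python
import numpy as np

def normalize_uniform(image: np.ndarray, lo: float = 0.0, hi: float = 1.0) -> np.ndarray:
    """Map 8-bit intensities into the [min, max] range from the config."""
    x = image.astype(np.float32) / 255.0
    return lo + x * (hi - lo)

def softmax_argmax(logits: np.ndarray) -> np.ndarray:
    """Collapse per-class logits (H, W, C) into a label image (H, W)."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # numerically stable softmax
    probs = e / e.sum(axis=-1, keepdims=True)
    return probs.argmax(axis=-1).astype(np.uint8)  # 0 = Background, 1 = Endoscopic Tool

# Dummy two-class logits for a 4x4 region: class 1 dominates everywhere.
logits = np.stack([np.zeros((4, 4)), np.ones((4, 4))], axis=-1)
labels = softmax_argmax(logits)  # label image of shape (4, 4), all ones
```

Applying argmax after softmax yields the same labels as argmax on the raw logits; the softmax stage additionally provides per-pixel class probabilities before the hard decision.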

Configuration parameters:

  • endoscopicToolSegmentationAlgorithm: Selects the segmentation backend.

  • modelConfigurationPath (for Custom Model): Path to the model configuration file.

Output labels:

  • Background (label 0)

  • Endoscopic Tool (label 1, color: green)

The output is a label image with these classes, suitable for visualization and further analysis.
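A minimal sketch of the label-to-color mapping described above, assuming black for the background (the module's actual background color is not specified here) and green for the tool class; the module handles visualization itself:

```python
import numpy as np

def labels_to_rgb(labels: np.ndarray) -> np.ndarray:
    """Color a two-class label image for display.

    Illustrative palette: label 0 (Background) -> black (assumed),
    label 1 (Endoscopic Tool) -> green, per the documented label colors.
    """
    palette = np.array([[0, 0, 0],      # label 0: Background
                        [0, 255, 0]],   # label 1: Endoscopic Tool (green)
                       dtype=np.uint8)
    return palette[labels]

labels = np.array([[0, 1], [1, 0]], dtype=np.uint8)
rgb = labels_to_rgb(labels)  # shape (2, 2, 3); tool pixels are green
```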