Reference
imfusion
imfusion - ImFusion SDK for Medical Imaging
This module provides Python bindings for the C++ ImFusion libraries.
- exception imfusion.AlgorithmExecutionError
Bases:
RuntimeError
- exception imfusion.FileNotFoundError
Bases:
FileNotFoundError
- exception imfusion.IncompatibleError
Bases:
ValueError
- exception imfusion.MissingLicenseError
Bases:
RuntimeError
- class imfusion.Algorithm(self: BaseAlgorithm, actions: list[object])
Bases:
BaseAlgorithm
Base class for Algorithms.
An Algorithm accepts certain Data as input and performs some computation on it.
Example for an algorithm that takes exactly one image and prints its name:
>>> class MyAlgorithm(imfusion.Algorithm):
...     def __init__(self, image):
...         super().__init__()
...         self.image = image
...
...     @classmethod
...     def convert_input(cls, data):
...         images = data.images()
...         if len(images) == 1 and len(data) == 1:
...             return [images[0]]
...         raise IncompatibleError('Requires exactly one image')
...
...     def compute(self):
...         print(self.image.name)
In order to make an Algorithm available to the ImFusion Suite (i.e. the context menu when right-clicking on selected data), it has to be registered to the ApplicationController:
>>> imfusion.register_algorithm('Python.MyAlgorithm','My Algorithm', MyAlgorithm)
If the Algorithm is created through the ImFusion Suite, the convert_input() method is called to determine if the Algorithm is compatible with the desired input data. If this method does not raise an exception, the Algorithm is initialized with the data returned by convert_input(). The implementation is similar to this:
try:
    input = MyAlgorithm.convert_input(some_data)
    return MyAlgorithm(*input)
except IncompatibleError:
    return None
The Algorithm class also provides default implementations for the configuration() and configure() methods that automatically serialize attributes created with add_param().
- class action(display_name: str)
Bases:
object
Decorator to mark a method as an "action". Actions are displayed as additional buttons when creating an AlgorithmController in the Suite and can be run generically, using their id, through run_action().
- Parameters:
display_name (str) – Text that should be shown on the Controller button.
- __call__(method)
Call self as a function.
- static action_wrapper(func: Callable[[BaseAlgorithm], Status | None]) Callable[[BaseAlgorithm], Status]
Helper that returns UNKNOWN automatically if the wrapped method did not return a status.
- Parameters:
func (Callable[[BaseAlgorithm], Status | None]) –
- Return type:
Callable[[BaseAlgorithm], Status]
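A minimal sketch of defining an action on a custom Algorithm (assuming the decorator is exposed as imfusion.Algorithm.action):
>>> class MyAlgorithm(imfusion.Algorithm):
...     def __init__(self, image):
...         super().__init__()
...         self.image = image
...
...     @imfusion.Algorithm.action("Print Name")
...     def print_name(self):
...         # runs when the corresponding controller button is clicked,
...         # or generically via run_action() with the action's id
...         print(self.image.name)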
- add_param(name, value, attributes='')
Add a new parameter to the object.
The parameter is available as a new attribute with the given name and value. The attribute will be configured automatically.
>>> class MyAlgorithm(imfusion.Algorithm):
...     def __init__(self):
...         super().__init__()
...         self.add_param('x', 5)
>>> a = MyAlgorithm()
>>> a.x
5
- configuration()
Returns a copy of the current algorithm configuration.
- configure(p)
Sets the current algorithm configuration with the given Properties.
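A minimal sketch of the round trip for a parameter added via add_param(), assuming Properties can be constructed from a dict (as in the DataComponent example further below) and that configure() maps the parameter name directly to the attribute:
>>> a = MyAlgorithm()                           # the class from the add_param() example above
>>> p = a.configuration()                       # contains the 'x' parameter
>>> a.configure(imfusion.Properties({'x': 7}))
>>> a.x
7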
- classmethod convert_input(data: List[Data]) List[Data]
Convert the given DataList to a valid input for the algorithm.
Must be overridden in derived classes. Raise an IncompatibleError if the given data does not exactly match the required input of the algorithm. Should return a list, a dict or a generator.
- output()
Return the output generated by the previous call to compute(). The return value must be a list of Data objects. The default implementation returns an empty list.
- class imfusion.Annotation
Bases:
pybind11_object
- class AnnotationType(self: AnnotationType, value: int)
Bases:
pybind11_object
Members:
BOX
CIRCLE
LINE
POINT
POLY_LINE
RECTANGLE
- BOX = <AnnotationType.BOX: 0>
- CIRCLE = <AnnotationType.CIRCLE: 1>
- LINE = <AnnotationType.LINE: 2>
- POINT = <AnnotationType.POINT: 3>
- POLY_LINE = <AnnotationType.POLY_LINE: 4>
- RECTANGLE = <AnnotationType.RECTANGLE: 5>
- property name
- property value
- on_editing_finished(self: Annotation, callback: object) SignalConnection
Register a callback which is called when the annotation has been fully defined by the user.
The callback must not require any arguments.
>>> a = imfusion.app.annotation_model.create_annotation(imfusion.Annotation.LINE)
>>> def callback():
...     print("All points are defined")
>>> a.on_editing_finished(callback)
>>> a.start_editing()
- on_points_changed(self: Annotation, callback: object) SignalConnection
Register a callback which is called when any of the points have changed their position.
The callback must not require any arguments.
>>> a = imfusion.app.annotation_model.create_annotation(imfusion.Annotation.LINE)
>>> def callback():
...     print("Points changed")
>>> a.on_points_changed(callback)
>>> a.start_editing()
- start_editing(self: Annotation) None
Start interactive placement of the annotation.
This can currently only be called once.
- BOX = <AnnotationType.BOX: 0>
- CIRCLE = <AnnotationType.CIRCLE: 1>
- LINE = <AnnotationType.LINE: 2>
- POINT = <AnnotationType.POINT: 3>
- POLY_LINE = <AnnotationType.POLY_LINE: 4>
- RECTANGLE = <AnnotationType.RECTANGLE: 5>
- property color
Color of the annotation as a normalized RGB tuple.
- property editable
Whether the annotation can be manipulated by the user.
- property label_text
The text of the label of the annotation.
- property label_visible
Whether the label of the annotation needs to be drawn or not.
- property line_width
The line width used to draw the annotation.
- property max_points
The maximum amount of points this annotation supports.
A -1 indicates that this annotation supports any number of points.
- property name
The name of the annotation.
- property points
The points which define the annotation in world coordinates.
It is possible to set only some of the points that the specific annotation requires; in this case the annotation must be completed manually with the mouse in the ImFusion Suite. Partially setting the points of multiple annotations at once is not supported: complete the current partially defined annotation before setting the points of another one.
It is also not possible to set more points than the specific annotation requires.
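A minimal sketch of pre-setting the two points of a line annotation (the coordinates are placeholders; the setter is assumed to accept a list of 3D world coordinates):
>>> a = imfusion.app.annotation_model.create_annotation(imfusion.Annotation.LINE)
>>> a.points = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]]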
- property type
Return the type of this annotation.
Raises an error if the annotation is no longer valid or if the annotation type is not supported in Python.
- property visible
Whether the annotation needs to be drawn or not.
- class imfusion.AnnotationModel
Bases:
pybind11_object
- create_annotation(self: AnnotationModel, arg0: AnnotationType) Annotation
- property annotations
- class imfusion.ApplicationController
Bases:
pybind11_object
An ApplicationController instance serves as the center of the ImFusion SDK.
It provides an OpenGL context, a DataModel, executes algorithms and more. While multiple instances are possible, in general there is only one instance.
- add_algorithm(self: ApplicationController, id: str, data: list = [], properties: Properties = None) object
Add the algorithm with the given name to the application.
The algorithm will only be created if it is compatible with the given data. The optional Properties object will be used to configure the algorithm. Returns the created algorithm or None if no compatible algorithm could be found.
>>> app.add_algorithm("Create Synthetic Data", [])
<imfusion.BaseAlgorithm object at ...>
- close_all(self: ApplicationController) None
Delete all algorithms and datasets. Make sure to not reference any deleted objects after calling this!
- execute_algorithm(self: ApplicationController, id: str, data: list = [], properties: Properties = None) list
Execute the algorithm with the given name and return its output.
The algorithm will only be executed if it is compatible with the given data. The optional Properties object will be used to configure the algorithm before executing it. Any data created by the algorithm is added to the DataModel before being returned.
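For example, mirroring the add_algorithm() example above; the returned list contains the data created by the algorithm, which has also been added to the DataModel:
>>> output = app.execute_algorithm("Create Synthetic Data", [])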
- load_workspace(self: ApplicationController, path: str, **kwargs) bool
Loads a workspace file and returns True if the loading was successful. Placeholders can be specified as keyword arguments, for example:
>>> app.load_workspace("path/to/workspace.iws", sweep=sweep, case=case)
- open(self: ApplicationController, path: str) list
Tries to open the given filepath as data. If successful, the data is added to the DataModel and returned. Otherwise raises a FileNotFoundError.
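A minimal sketch (the file path is a placeholder):
>>> loaded = app.open("/path/to/volume.dcm")   # placeholder path; returns the list of loaded Data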
- remove_algorithm(self: ApplicationController, algorithm: BaseAlgorithm) None
Removes and deletes the given algorithm from the application. Don't reference the given algorithm afterwards!
- save_workspace(self: ApplicationController, path: str) bool
Saves the current workspace to an .iws file.
- select_data(*args, **kwargs)
Overloaded function.
select_data(self: imfusion.ApplicationController, arg0: imfusion.Data) -> None
select_data(self: imfusion.ApplicationController, arg0: imfusion.DataList) -> None
select_data(self: imfusion.ApplicationController, arg0: list) -> None
- update(self: ApplicationController) None
- update_display(self: ApplicationController) None
- property algorithms
Return a list of all open algorithms.
- property annotation_model
- property data_model
- property display
- property selected_data
- class imfusion.BaseAlgorithm(self: BaseAlgorithm, actions: list[object])
Bases:
Configurable
Low-level base class for all algorithms.
This interface mirrors the C++ interface very closely. Instances of this class are returned when you create an Algorithm that exists in the C++ SDK, either through add_algorithm() or create_algorithm(). If you want to implement your own Algorithms from Python, see Algorithm instead.
- class Status(self: Status, value: int)
Bases:
pybind11_object
Members:
UNKNOWN
SUCCESS
ERROR
INVALID_INPUT
INCOMPLETE_INPUT
OUT_OF_MEMORY_HOST
OUT_OF_MEMORY_GPU
UNSUPPORTED_GPU
UNKNOWN_ACTION
USER
- ERROR = <Status.ERROR: 1>
- INCOMPLETE_INPUT = <Status.INCOMPLETE_INPUT: 3>
- INVALID_INPUT = <Status.INVALID_INPUT: 2>
- OUT_OF_MEMORY_GPU = <Status.OUT_OF_MEMORY_GPU: 5>
- OUT_OF_MEMORY_HOST = <Status.OUT_OF_MEMORY_HOST: 4>
- SUCCESS = <Status.SUCCESS: 0>
- UNKNOWN = <Status.UNKNOWN: -1>
- UNKNOWN_ACTION = <Status.UNKNOWN_ACTION: 7>
- UNSUPPORTED_GPU = <Status.UNSUPPORTED_GPU: 6>
- USER = <Status.USER: 1000>
- property name
- property value
- __call__(self: BaseAlgorithm) None
Delegates to compute().
- compute(self: BaseAlgorithm) None
- output(self: BaseAlgorithm) list
- output_annotations(self: BaseAlgorithm) list[Annotation]
- run_action(self: BaseAlgorithm, id: str) Status
Run one of the registered actions.
- Parameters:
id (str) – Identifier of the action to run.
- ERROR = <Status.ERROR: 1>
- INCOMPLETE_INPUT = <Status.INCOMPLETE_INPUT: 3>
- INVALID_INPUT = <Status.INVALID_INPUT: 2>
- OUT_OF_MEMORY_GPU = <Status.OUT_OF_MEMORY_GPU: 5>
- OUT_OF_MEMORY_HOST = <Status.OUT_OF_MEMORY_HOST: 4>
- SUCCESS = <Status.SUCCESS: 0>
- UNKNOWN = <Status.UNKNOWN: -1>
- UNKNOWN_ACTION = <Status.UNKNOWN_ACTION: 7>
- UNSUPPORTED_GPU = <Status.UNSUPPORTED_GPU: 6>
- USER = <Status.USER: 1000>
- property actions
List of registered actions.
- property id
- property input
- property name
- property status
- class imfusion.Configurable
Bases:
pybind11_object
- configuration(self: Configurable) Properties
- configure(self: Configurable, properties: Properties) None
- configure_defaults(self: Configurable) None
- class imfusion.ConsoleController(self: ConsoleController, name: str = 'ImFusion Python Module')
Bases:
ApplicationController
ApplicationController without a UI interface.
This class is not available in the embedded Python interpreter in the ImFusion Suite.
- class imfusion.CroppingMask(self: CroppingMask, dimensions: ndarray[numpy.int32[3, 1]])
Bases:
Mask
Simple axis-aligned cropping mask with optional roundness.
- class RoundDims(self: RoundDims, value: int)
Bases:
pybind11_object
Members:
XY
YZ
XZ
XYZ
- XY = <RoundDims.XY: 0>
- XYZ = <RoundDims.XYZ: 3>
- XZ = <RoundDims.XZ: 2>
- YZ = <RoundDims.YZ: 1>
- property name
- property value
- XY = <RoundDims.XY: 0>
- XYZ = <RoundDims.XYZ: 3>
- XZ = <RoundDims.XZ: 2>
- YZ = <RoundDims.YZ: 1>
- property border
Number of pixels cropped away
- property inverted
Whether the mask is inverted
- property roundness
Roundness in percent (100 means an ellipse, 0 a rectangle)
- property roundness_dims
Which dimensions the roundness parameter should be applied to
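A minimal usage sketch (dimensions are placeholders):
>>> import numpy as np
>>> mask = imfusion.CroppingMask(np.array([64, 64, 32], dtype=np.int32))
>>> mask.roundness = 100                                   # crop region becomes an ellipse
>>> mask.roundness_dims = imfusion.CroppingMask.RoundDims.XY
>>> mask.inverted = False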
- class imfusion.Data
Bases:
pybind11_object
- class Kind(self: Kind, value: int)
Bases:
pybind11_object
Members:
UNKNOWN
IMAGE
VOLUME
IMAGE_SET
VOLUME_SET
IMAGE_STREAM
VOLUME_STREAM
POINT_SET
SURFACE
TRACKING_STREAM
TRACKING_DATA
STEREOIMAGESET
- IMAGE = <Kind.IMAGE: 1>
- IMAGE_SET = <Kind.IMAGE_SET: 3>
- IMAGE_STREAM = <Kind.IMAGE_STREAM: 5>
- POINT_SET = <Kind.POINT_SET: 7>
- STEREOIMAGESET = <Kind.STEREOIMAGESET: 14>
- SURFACE = <Kind.SURFACE: 8>
- TRACKING_DATA = <Kind.TRACKING_DATA: 10>
- TRACKING_STREAM = <Kind.TRACKING_STREAM: 9>
- UNKNOWN = <Kind.UNKNOWN: 0>
- VOLUME = <Kind.VOLUME: 2>
- VOLUME_SET = <Kind.VOLUME_SET: 4>
- VOLUME_STREAM = <Kind.VOLUME_STREAM: 6>
- property name
- property value
- class Modality(self: Modality, value: int)
Bases:
pybind11_object
Members:
NA
XRAY
CT
MRI
ULTRASOUND
VIDEO
NM
OCT
LABEL
- CT = <Modality.CT: 2>
- LABEL = <Modality.LABEL: 8>
- MRI = <Modality.MRI: 3>
- NA = <Modality.NA: 0>
- NM = <Modality.NM: 6>
- OCT = <Modality.OCT: 7>
- ULTRASOUND = <Modality.ULTRASOUND: 4>
- VIDEO = <Modality.VIDEO: 5>
- XRAY = <Modality.XRAY: 1>
- property name
- property value
- property components
- property kind
- property name
- class imfusion.DataComponent(self: DataComponent)
Bases:
pybind11_object
Data components provide a way to generically attach custom information to
Data
.Data and StreamData are the two main classes that hold a list of data components, allowing custom information (for example optional data or configuration settings) to be attached to instances of these classes. Data components are meant to be used for information that is bound to a specific Data instance and that can not be represented by the usual ImFusion data types.
Data components should implement the Configurable methods, in order to support generic (de)serialization.
Note
Data components are supposed to act as generic storage for custom information. When subclassing DataComponent, you should not implement any heavy evaluation logic since this is the domain of Algorithms or other classes accessing the DataComponents.
Example
class MyComponent(imfusion.DataComponent, accessor_name="my_component"):
    def __init__(self, a=""):
        imfusion.DataComponent.__init__(self)
        self.a = a

    @property
    def a(self):
        return self._a

    @a.setter
    def a(self, value):
        if value and not isinstance(value, str):
            raise TypeError("`a` must be of type `str`")
        self._a = value

    def configure(self, properties: imfusion.Properties) -> None:
        self.a = str(properties["a"])

    def configuration(self) -> imfusion.Properties:
        return imfusion.Properties({"a": self.a})

    def __eq__(self, other: "MyComponent") -> bool:
        return self.a == other.a
- configuration(self: DataComponent) Properties
- configure(self: DataComponent, properties: Properties) None
- property id
Returns a unique string identifier for this type of data component
- class imfusion.DataComponentBase
Bases:
Configurable
- property id
Returns the unique string identifier of this component class.
- class imfusion.DataComponentList
Bases:
pybind11_object
A list of DataComponent. The list contains properties for specific DataComponent types. Each DataComponent type can only occur once.
- __getitem__(*args, **kwargs)
Overloaded function.
__getitem__(self: imfusion.DataComponentList, index: int) -> object
__getitem__(self: imfusion.DataComponentList, indices: list[int]) -> list[object]
__getitem__(self: imfusion.DataComponentList, slice: slice) -> list[object]
__getitem__(self: imfusion.DataComponentList, id: str) -> object
- add(*args, **kwargs)
Overloaded function.
add(self: imfusion.DataComponentList, component: imfusion.DataComponent) -> object
Adds the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.ImageInfoDataComponent) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.DisplayOptions2d) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.DisplayOptions3d) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.TransformationStashDataComponent) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.DataSourceComponent) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.LabelDataComponent) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.DatasetLicenseComponent) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.RealWorldMappingDataComponent) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.dicom.GeneralEquipmentModuleDataComponent) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.dicom.SourceInfoComponent) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.dicom.ReferencedInstancesComponent) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.dicom.RTStructureDataComponent) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.machinelearning.TargetTag) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.machinelearning.ProcessingRecordComponent) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.ReferenceImageDataComponent) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.PatchesFromImageDataComponent) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.InversionComponent) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.ultrasound.FrameGeometryMetadata) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
add(self: imfusion.DataComponentList, arg0: imfusion.ultrasound.UltrasoundMetadata) -> imfusion.DataComponentBase
Adds a copy of the component to the component list and returns a reference to the copy.
- property data_source
- property dataset_license
- property display_options_2d
- property display_options_3d
- property frame_geometry_metadata
- property general_equipment_module
- property image_info
- property inversion
- property label
- property patches_from_image
- property processing_record
- property real_world_mapping
- property reference_image
- property referenced_instances
- property rt_structure
- property source_info
- property target_tag
- property transformation_stash
- property ultrasound_metadata
- class imfusion.DataList(*args, **kwargs)
Bases:
pybind11_object
List of Data. Is implicitly converted from and to regular Python lists.
Deprecated since version 2.15: Use a regular list instead.
Overloaded function.
__init__(self: imfusion.DataList) -> None
__init__(self: imfusion.DataList, list: list) -> None
- __getitem__(*args, **kwargs)
Overloaded function.
__getitem__(self: imfusion.DataList, index: int) -> imfusion.Data
__getitem__(self: imfusion.DataList, indices: list[int]) -> list[imfusion.Data]
__getitem__(self: imfusion.DataList, slice: slice) -> list[imfusion.Data]
- class imfusion.DataModel
Bases:
pybind11_object
The DataModel instance holds all datasets of an ApplicationController.
- __getitem__(*args, **kwargs)
Overloaded function.
__getitem__(self: imfusion.DataModel, index: int) -> imfusion.Data
__getitem__(self: imfusion.DataModel, indices: list[int]) -> list[imfusion.Data]
__getitem__(self: imfusion.DataModel, slice: slice) -> list[imfusion.Data]
- add(*args, **kwargs)
Overloaded function.
add(self: imfusion.DataModel, data: imfusion.Data, name: str = '') -> imfusion.Data
Add data to the model. The data will be copied and a reference to the copy is returned. If the data cannot be added, a ValueError is raised.
add(self: imfusion.DataModel, data_list: list[imfusion.Data]) -> list
Add multiple pieces of data to the model. The data will be copied and references to the copies are returned. If the data cannot be added, a ValueError is raised.
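A minimal sketch, assuming image is an existing imfusion.Data instance (for example returned by ApplicationController.open()):
>>> stored = app.data_model.add(image, name="My Image")   # returns a reference to the copy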
- create_group(self: DataModel, arg0: DataList) DataGroup
Groups a list of Data in the model. Only Data that is already part of the model can be grouped.
- get_common_parent(self: DataModel, data_list: DataList) DataGroup
Return the closest common parent DataGroup of all given Data
- get_parent(self: DataModel, data: Data) DataGroup
Return the parent DataGroup of the given Data or None if it is not part of the model. For top-level data this function will return get_root_node().
- index(self: DataModel, data: Data) int
Return index of data. The index is depth-first for all groups.
- remove(self: DataModel, data: Data) None
Remove and delete data from the model. Afterwards, the data must not be referenced anymore!
- property root_node
Return the root DataGroup of the model
- property size
Return the total amount of data in the model
- class imfusion.DataSourceComponent
Bases:
DataComponentBase
- class DataSourceInfo(self: DataSourceInfo, arg0: str, arg1: str, arg2: Properties, arg3: int, arg4: list[DataSourceInfo])
Bases:
Configurable
- update(self: DataSourceInfo, arg0: DataSourceInfo) None
- property filename
- property history
- property index_in_file
- property io_algorithm_config
- property io_algorithm_name
- property filenames
- property sources
- class imfusion.DatasetLicenseComponent(*args, **kwargs)
Bases:
DataComponentBase
Overloaded function.
__init__(self: imfusion.DatasetLicenseComponent) -> None
__init__(self: imfusion.DatasetLicenseComponent, infos: list[imfusion.DatasetLicenseComponent.DatasetInfo]) -> None
- class DatasetInfo(*args, **kwargs)
Bases:
pybind11_object
Overloaded function.
__init__(self: imfusion.DatasetLicenseComponent.DatasetInfo) -> None
__init__(self: imfusion.DatasetLicenseComponent.DatasetInfo, name: str, authors: str, website: str, license: str, attribution_required: bool, commercial_use_allowed: bool) -> None
- property attribution_required
- property authors
- property commercial_use_allowed
- property license
- property name
- property website
- infos(self: DatasetLicenseComponent) list[DatasetInfo]
- class imfusion.Deformation
Bases:
pybind11_object
- configuration(self: Deformation) Properties
- configure(self: Deformation, properties: Properties) None
- displace_point(self: Deformation, at: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]
- displace_points(self: Deformation, at: list[ndarray[numpy.float64[3, 1]]]) list[ndarray[numpy.float64[3, 1]]]
- displacement(*args, **kwargs)
Overloaded function.
displacement(self: imfusion.Deformation, at: numpy.ndarray[numpy.float64[3, 1]]) -> numpy.ndarray[numpy.float64[3, 1]]
displacement(self: imfusion.Deformation, at: numpy.ndarray[numpy.float64[2, 1]]) -> numpy.ndarray[numpy.float64[3, 1]]
displacement(self: imfusion.Deformation, at: list[numpy.ndarray[numpy.float64[3, 1]]]) -> list[numpy.ndarray[numpy.float64[3, 1]]]
- class imfusion.DisplayOptions2d(self: DisplayOptions2d, arg0: Data)
Bases:
DataComponentBase
- property gamma
- property invert
- property level
- property window
- class imfusion.DisplayOptions3d(self: DisplayOptions3d, arg0: Data)
Bases:
DataComponentBase
- property alpha
- property invert
- property level
- property window
- class imfusion.ExplicitIntensityMask(self: ExplicitIntensityMask, ref_image: SharedImage, mask_image: SharedImage)
Bases:
Mask
Combination of an ExplicitMask and an IntensityMask.
- property border_clamp
If true, set sampler wrapping mode to CLAMP_TO_BORDER (default). If false, set to CLAMP_TO_EDGE.
- property border_color
Border color (normalized for integer images)
- property intensity_range
Range of allowed pixel values
- class imfusion.ExplicitMask(*args, **kwargs)
Bases:
Mask
Mask holding an individual mask value for every pixel.
Overloaded function.
__init__(self: imfusion.ExplicitMask, width: int, height: int, slices: int, initial: int = 0) -> None
__init__(self: imfusion.ExplicitMask, dimensions: numpy.ndarray[numpy.int32[3, 1]], initial: int = 0) -> None
__init__(self: imfusion.ExplicitMask, mask_image: imfusion.MemImage) -> None
- mask_image(self: ExplicitMask) SharedImage
Returns a copy of the mask image held by the mask.
- class imfusion.FrameworkInfo
Bases:
pybind11_object
Provides general information about the framework.
- property license
- property opengl
- property plugins
- class imfusion.FreeFormDeformation
Bases:
Deformation
- configuration(self: FreeFormDeformation) Properties
- configure(self: FreeFormDeformation, arg0: Properties) None
- control_points(self: FreeFormDeformation) list[ndarray[numpy.float64[3, 1]]]
Get current control point locations (including displacement)
- property displacements
Displacement in mm of all control points
- property grid_spacing
Spacing of the control point grid
- property grid_transformation
Transformation matrix of the control point grid
- property subdivisions
Subdivisions of the control point grid
- class imfusion.GlPlatformInfo
Bases:
pybind11_object
Provides information about the underlying OpenGL driver.
- property extensions
- property renderer
- property vendor
- property version
- class imfusion.ImageDescriptor(*args, **kwargs)
Bases:
pybind11_object
Struct describing the essential properties of an image.
The ImFusion framework distinguishes two main image pixel value domains, which are indicated by the shift and scale parameters of this image descriptor:
Original pixel value domain: Pixel values are the same as in their original source (e.g. when loaded from a file). Same as the storage pixel value domain if the image’s scale is 1 and the shift is 0
Storage pixel value domain: Pixel values as they are stored in a MemImage. The user may decide to apply such a rescaling in order to better use the available limits of the underlying type.
The following conversion rules apply:
OV = (SV / scale) - shift
SV = (OV + shift) * scale
Overloaded function.
__init__(self: imfusion.ImageDescriptor) -> None
__init__(self: imfusion.ImageDescriptor, type: imfusion.PixelType, dimensions: numpy.ndarray[numpy.int32[3, 1]], channels: int = 1) -> None
__init__(self: imfusion.ImageDescriptor, type: imfusion.PixelType, width: int, height: int, slices: int = 1, channels: int = 1) -> None
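A small numeric illustration of the conversion rules above (assuming shift and scale can be assigned directly on the descriptor):
>>> desc = imfusion.ImageDescriptor(imfusion.PixelType.BYTE, 10, 10)
>>> desc.shift = 10
>>> desc.scale = 2
>>> desc.original_to_storage(50.0)    # (50 + 10) * 2
120.0
>>> desc.storage_to_original(120.0)   # 120 / 2 - 10
50.0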
- configure(self: ImageDescriptor, properties: Properties) None
Deserialize an image descriptor from Properties
- coord(self: ImageDescriptor, index: int) ndarray[numpy.int32[4, 1]]
Return the pixel/voxel coordinate (x,y,z,c) for a given index
- has_index(self: ImageDescriptor, x: int, y: int, z: int = 0, c: int = 0) int
Return true if the pixel at (x,y,z) exists, false otherwise
- image_to_pixel(self: ImageDescriptor, world: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]
Convert 3D image coordinates to pixel/voxel position
- index(self: ImageDescriptor, x: int, y: int, z: int = 0, c: int = 0) int
Return a linear memory index for a pixel or voxel
- is_compatible(self: ImageDescriptor, other: ImageDescriptor, ignore_type: bool = False, ignore_3D: bool = False, ignore_channels: bool = False, ignore_spacing: bool = True) bool
Convenience function to perform partial comparison of two image descriptors. Two descriptors are compatible if their width and height, and optionally number of slices, number of channels and type are the same
- is_valid(self: ImageDescriptor) bool
Return if the descriptor is valid (a size of one is allowed)
- original_to_storage(self: ImageDescriptor, value: float) float
Apply the image’s shift and scale in order to convert a value from original pixel value domain to storage pixel value domain
- pixel_to_image(self: ImageDescriptor, pixel: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]
Convert a 3D pixel/voxel position to image coordinates
- set_dimensions(self: ImageDescriptor, dimensions: ndarray[numpy.int32[3, 1]], channels: int = 0) None
Convenience function for specifying the image dimensions and channels at once. If channels is 0, the number of channels will remain unchanged.
- set_spacing(self: ImageDescriptor, spacing: ndarray[numpy.float64[3, 1]], is_metric: bool) None
Convenience function for specifying spacing and metric flag at the same time
- storage_to_original(self: ImageDescriptor, value: float) float
Apply the image’s shift and scale in order to convert a value from storage pixel value domain to original pixel value domain
- property byte_size
Return the size of the image in bytes
- property channels
- property configuration
Serialize an image descriptor to Properties
- property dimension
- property dimensions
- property extent
- property height
- property image_to_pixel_matrix
Return a 4x4 matrix to transform from image space to pixel space
- property image_to_texture_matrix
Return a 4x4 matrix to transform from image space to texture space
- property is_metric
- property pixel_to_image_matrix
Return a 4x4 matrix to transform from pixel space to image space
- property pixel_type
- property scale
- property shift
- property size
Return the size (number of elements) of the image
- property slices
- property spacing
Access the image descriptor spacing. When setting the spacing, it is always assumed that the given spacing is metric. If you want to specify a non-metric spacing, use desc.set_spacing(new_spacing, is_metric=False).
- property texture_to_image_matrix
Return a 4x4 matrix to transform from texture space to image space
- property type_size
Return the nominal size in bytes of the current component type, zero if unknown
- property width
- class imfusion.ImageDescriptorWorld(*args, **kwargs)
Bases:
pybind11_object
Convenience struct extending an ImageDescriptor to also include a matrix describing the image orientation in world coordinates.
This struct can be useful for describing the geometrical properties of an image without the need to hold the (heavy) image content. As such it can be used for representing reference geometries (see ImageResamplingAlgorithm), or for one-line creation of a new SharedImage.
Overloaded function.
__init__(self: imfusion.ImageDescriptorWorld, descriptor: imfusion.ImageDescriptor, matrix_to_world: numpy.ndarray[numpy.float64[4, 4]]) -> None
__init__(self: imfusion.ImageDescriptorWorld, shared_image: imfusion.SharedImage) -> None
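A minimal construction sketch (identity pose, placeholder dimensions):
>>> import numpy as np
>>> desc = imfusion.ImageDescriptor(imfusion.PixelType.BYTE, 128, 128, 64)
>>> world_desc = imfusion.ImageDescriptorWorld(desc, np.eye(4))
>>> p = world_desc.pixel_to_world(np.array([0.0, 0.0, 0.0]))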
- image_to_pixel(self: ImageDescriptorWorld, world: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]
Convert 3D image coordinates to pixel/voxel position
- is_spatially_compatible(self: ImageDescriptorWorld, other: ImageDescriptorWorld) bool
Convenience function to compare two image world descriptors (for instance to know whether a resampling is necessary). Two descriptors are compatible if their dimensions, matrix and spacing are identical.
- pixel_to_image(self: ImageDescriptorWorld, pixel: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]
Convert a 3D pixel/voxel position to image coordinates
- pixel_to_world(self: ImageDescriptorWorld, pixel: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]
Convert a 3D pixel/voxel position to world coordinates
- world_to_pixel(self: ImageDescriptorWorld, world: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]
Convert 3D world coordinates to pixel/voxel position
- property descriptor
- property image_to_pixel_matrix
Return a 4x4 matrix to transform from image space to pixel space
- property matrix_from_world
- property matrix_to_world
- property pixel_to_image_matrix
Return a 4x4 matrix to transform from pixel space to image space
- property pixel_to_texture_matrix
Return a 4x4 matrix to transform from pixel space to texture space
- property pixel_to_world_matrix
Return a 4x4 matrix to transform from pixel space to world space
- property texture_to_pixel_matrix
Return a 4x4 matrix to transform from texture space to pixel space
- property texture_to_world_matrix
Return a 4x4 matrix to transform from texture space to world space
- property world_to_pixel_matrix
Return a 4x4 matrix to transform from world space to pixel space
- property world_to_texture_matrix
Return a 4x4 matrix to transform from world space to texture space
- class imfusion.ImageInfoDataComponent(self: ImageInfoDataComponent)
Bases:
DataComponentBase
DataComponent storing general information on the image origin.
Modeled after the DICOM patient-study-series hierarchy, it stores information on the patient, study and series the data set belongs to.
- class AnatomicalOrientationType(self: AnatomicalOrientationType, value: int)
Bases:
pybind11_object
The anatomical orientation type used in Instances generated by this equipment.
Members:
UNKNOWN
BIPED
QUADRUPED
- BIPED = <AnatomicalOrientationType.BIPED: 1>
- QUADRUPED = <AnatomicalOrientationType.QUADRUPED: 2>
- UNKNOWN = <AnatomicalOrientationType.UNKNOWN: 0>
- property name
- property value
- class Laterality(self: Laterality, value: int)
Bases:
pybind11_object
Laterality of (paired) body part examined
Members:
UNKNOWN
LEFT
RIGHT
- LEFT = <Laterality.LEFT: 1>
- RIGHT = <Laterality.RIGHT: 2>
- UNKNOWN = <Laterality.UNKNOWN: 0>
- property name
- property value
- class PatientSex(self: PatientSex, value: int)
Bases:
pybind11_object
Gender of the patient
Members:
UNKNOWN
MALE
FEMALE
OTHER
- FEMALE = <PatientSex.FEMALE: 2>
- MALE = <PatientSex.MALE: 1>
- OTHER = <PatientSex.OTHER: 3>
- UNKNOWN = <PatientSex.UNKNOWN: 0>
- property name
- property value
- property frame_of_reference_uid
Uniquely identifies the Frame of Reference for a Series. Multiple Series within a Study may share a Frame of Reference UID.
- property laterality
Laterality of (paired) body part examined
- property modality
DICOM modality string specifying the method used to create this series
- property orientation_type
DICOM Anatomical Orientation Type
- property patient_birth_date
Patient date of birth in yyyyMMdd format
- property patient_comment
Additional information about the Patient
- property patient_id
DICOM Patient ID
- property patient_name
Patient name
- property patient_position
Specifies position of the Patient relative to the imaging equipment.
- property patient_sex
Patient sex
- property photometric_interpretation
Specifies the intended interpretation of the pixel data (e.g. RGB, HSV, …).
- property responsible_person
Name of person with medical or welfare decision making authority for the Patient.
- property series_date
Series date in yyyyMMdd format
- property series_description
Series description
- property series_instance_uid
Unique identifier of the Series
- property series_number
DICOM Series number. The value of this attribute should be unique for all Series in a Study created on the same equipment.
- property series_time
Series time in HHmmss format
- property series_time_exact
Series time in microseconds. 0 if the original series time was empty.
- property study_date
Study date in yyyyMMdd format
- property study_description
Study description
- property study_id
DICOM Study ID
- property study_instance_uid
Unique identifier for the Study
- property study_time
Study time in HHmmss format, optionally with time zone offset &ZZXX
- property study_time_exact
Study time in microseconds. 0 if the original study time was empty.
- property study_timezone
Study time zone abbreviation
- class imfusion.ImageResamplingAlgorithm(*args, **kwargs)
Bases:
BaseAlgorithm
Algorithm for resampling an image to a target dimension or resolution, optionally with respect to another image.
If a reference image is not provided the size of the output can be either explicitly specified, or implicitly determined by setting a target spacing, binning or relative size w.r.t. the input (in percentage). Only one of these strategies can be active at a time, as specified by the resamplingMode field. The value of the other target fields will be ignored. The algorithm offers convenience methods to jointly update the value of a target field and change the resampling mode accordingly.
If you provide a reference image, its pixel grid (dimensions, spacing, pose matrix) will be used for the output. However, the pixel type as well as shift/scale will remain the same as in the input image.
The algorithm supports Linear and Nearest interpolation modes. In the Linear case (default), when accessing the input image at a fractional coordinate, the obtained value will be computed by linearly interpolating between the closest pixels/voxels. In the Nearest case, the value of the closest pixel/voxel will be used instead.
Furthermore, multiple reduction modes are also supported. In contrast to the interpolation mode, which affects how the value of the input image at a given (potentially fractional) coordinate is extracted, this determines what happens when multiple input pixels/voxels contribute to the value of a single output pixel/voxel. In Nearest mode, the value of the closest input pixel/voxel is used as-is. Alternatively, the Minimum, Maximum or Average value of the neighboring pixel/voxels can be used.
By default, the image will be modified in-place; a new one can be created instead by changing the value of the createNewImage parameter.
By default, the resulting image will have an altered physical extent, since the original extent may not be divisible by the target spacing. The algorithm can modify the target spacing to exactly maintain the physical extent, by toggling the preserveExtent parameter.
If the keepZeroValues parameter is set to true, the input pixels/voxels having zero value will not be modified by the resampling process.
Overloaded function.
__init__(self: imfusion.ImageResamplingAlgorithm, input_images: imfusion.SharedImageSet, reference_images: imfusion.SharedImageSet = None) -> None
__init__(self: imfusion.ImageResamplingAlgorithm, input_images: imfusion.SharedImageSet, reference_world_descriptors: list[imfusion.ImageDescriptorWorld]) -> None
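A minimal usage sketch, assuming sis is an existing imfusion.SharedImageSet and that target_spacing accepts a 3-element sequence:
>>> resampling = imfusion.ImageResamplingAlgorithm(sis)
>>> resampling.resampling_mode = imfusion.ImageResamplingAlgorithm.ResamplingMode.TARGET_SPACING
>>> resampling.target_spacing = [1.0, 1.0, 1.0]
>>> resampling.create_new_image = True
>>> resampling.compute()
>>> resampled = resampling.output()[0]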
- class ResamplingMode(self: ResamplingMode, value: int)
Bases:
pybind11_object
Members:
TARGET_DIM
TARGET_PERCENT
TARGET_SPACING
TARGET_BINNING
- TARGET_BINNING = <ResamplingMode.TARGET_BINNING: 3>
- TARGET_DIM = <ResamplingMode.TARGET_DIM: 0>
- TARGET_PERCENT = <ResamplingMode.TARGET_PERCENT: 1>
- TARGET_SPACING = <ResamplingMode.TARGET_SPACING: 2>
- property name
- property value
- resampling_needed(self: ImageResamplingAlgorithm, frame: int = -1) bool
Return whether resampling is needed or the specified settings result in the same image size and spacing
- set_input(*args, **kwargs)
Overloaded function.
set_input(self: imfusion.ImageResamplingAlgorithm, new_input_images: imfusion.SharedImageSet, new_reference_images: imfusion.SharedImageSet, reconfigure_from_new_data: bool) -> None
Replaces the input of the algorithm. If reconfigureFromNewData is true, the algorithm reconfigures itself based on meta data of the new input
set_input(self: imfusion.ImageResamplingAlgorithm, new_input_images: imfusion.SharedImageSet, new_reference_world_descriptors: list[imfusion.ImageDescriptorWorld], reconfigure_from_new_data: bool) -> None
Replaces the input of the algorithm. If reconfigureFromNewData is true, the algorithm reconfigures itself based on meta data of the new input
- set_target_min_spacing(*args, **kwargs)
Overloaded function.
set_target_min_spacing(self: imfusion.ImageResamplingAlgorithm, min_spacing: float) -> bool
Set the target spacing with the spacing of the input image, replacing the value in each dimension with the maximum between the original and the provided value.
- Parameters:
min_spacing – the minimum value that the target spacing should have in each direction
- Returns:
True if the final target spacing is different than the input image spacing
set_target_min_spacing(self: imfusion.ImageResamplingAlgorithm, min_spacing: numpy.ndarray[numpy.float64[3, 1]]) -> bool
Set the target spacing with the spacing of the input image, replacing the value in each dimension with the maximum between the original and the provided value.
- Parameters:
min_spacing – the minimum value that the target spacing should have in each direction
- Returns:
True if the final target spacing is different than the input image spacing
- TARGET_BINNING = <ResamplingMode.TARGET_BINNING: 3>
- TARGET_DIM = <ResamplingMode.TARGET_DIM: 0>
- TARGET_PERCENT = <ResamplingMode.TARGET_PERCENT: 1>
- TARGET_SPACING = <ResamplingMode.TARGET_SPACING: 2>
- property clone_deformation
Whether to clone deformation from original image before attaching to result
- property create_new_image
Whether to compute the result in-place or in a newly allocated image
- property force_cpu
Whether to force the computation on the CPU
- property interpolation_mode
Mode for image interpolation
- property keep_zero_values
Whether input pixels/voxels with zero value should be left unmodified by the resampling
- property preserve_extent
Whether to update the target spacing to keep exactly the physical dimensions of the input image
- property reduction_mode
Mode for image reduction (e.g. downsampling, resampling, binning)
- property resampling_mode
How the output image size should be obtained (explicit dimensions, percentage relative to the input image, …)
- property target_binning
How many pixels from the input image should be combined into an output pixel
- property target_dimensions
Target dimensions for the new image
- property target_percent
Target dimensions for the new image, relatively to the input one
- property target_spacing
Target spacing for the new image
- property verbose
Whether to enable advanced logging
- class imfusion.IntensityMask(*args, **kwargs)
Bases:
Mask
Masks pixels with a specific value or values outside a specific range.
Overloaded function.
__init__(self: imfusion.IntensityMask, type: imfusion.PixelType, value: float = 0.0) -> None
__init__(self: imfusion.IntensityMask, image: imfusion.MemImage, value: float = 0.0) -> None
- property masked_value
Specific value that should be masked
- property masked_value_range
Half-open range [min, max) of allowed pixel values
- property type
- property use_range
Whether the mask should operate in range mode (true) or single-value mode (false)
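A minimal sketch that masks all pixels outside an allowed intensity window (assuming the range can be assigned as a (min, max) pair):
>>> mask = imfusion.IntensityMask(imfusion.PixelType.BYTE, value=0.0)
>>> mask.use_range = True
>>> mask.masked_value_range = (10.0, 200.0)   # half-open range of allowed values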
- class imfusion.InterpolationMode(self: InterpolationMode, value: int)
Bases:
pybind11_object
Members:
NEAREST
LINEAR
- LINEAR = <InterpolationMode.LINEAR: 1>
- NEAREST = <InterpolationMode.NEAREST: 0>
- property name
- property value
- class imfusion.InversionComponent
Bases:
DataComponentBase
Data component for storing the information needed to invert an operation.
- class InversionInfo
Bases:
pybind11_object
Struct for storing the information needed to invert an operation.
- property context_properties
- property identifier
- property operation_name
- property operation_properties
- get_all_inversion_infos(self: InversionComponent, arg0: str) list[InversionInfo]
- get_inversion_info(self: InversionComponent, arg0: str) InversionInfo
- class imfusion.LabelDataComponent(self: LabelDataComponent, label_map: SharedImageSet = None)
Bases:
pybind11_object
Stores metadata for a label map, supporting up to 255 labels.
Creates a LabelDataComponent. If a label map of type uint8 is provided, detects labels in the label map.
- class LabelConfig(self: LabelConfig, name: str = '', color: ndarray[numpy.float64[4, 1]] = array([0., 0., 0., 0.]), is_visible2d: bool = True, is_visible3d: bool = True)
Bases:
pybind11_object
Encapsulates metadata for a label value in a label map.
Constructor for LabelConfig.
- Parameters:
name – Name of the label.
color – RGBA color used for rendering the label.
is_visible2d – Visibility flag for 2D/MPR views.
is_visible3d – Visibility flag for 3D views.
- property color
RGBA color used for rendering the label. Values should be in the range [0, 1].
- property is_visible2d
Visibility flag for 2D/MPR views.
- property is_visible3d
Visibility flag for 3D views.
- property name
Name of the label.
- property segmentation_algorithm_name
Name of the algorithm used to generate the segmentation.
- property segmentation_algorithm_type
Type of algorithm used to generate the segmentation.
- property snomed_category_code_meaning
Human-readable meaning of the category code.
- property snomed_category_code_value
SNOMED CT code for the category this label represents.
- property snomed_type_code_meaning
Human-readable meaning of the type code.
- property snomed_type_code_value
SNOMED CT code for the type this label represents.
- class SegmentationAlgorithmType(self: SegmentationAlgorithmType, value: int)
Bases:
pybind11_object
Members:
UNKNOWN
AUTOMATIC
SEMI_AUTOMATIC
MANUAL
- AUTOMATIC = <SegmentationAlgorithmType.AUTOMATIC: 1>
- MANUAL = <SegmentationAlgorithmType.MANUAL: 3>
- SEMI_AUTOMATIC = <SegmentationAlgorithmType.SEMI_AUTOMATIC: 2>
- UNKNOWN = <SegmentationAlgorithmType.UNKNOWN: 0>
- property name
- property value
- detect_labels(self: LabelDataComponent, image: SharedImageSet) None
Detects labels present in an image of type uint8 and creates configurations for non-existing labels using default configurations.
- has_label(self: LabelDataComponent, pixel_value: int) bool
Checks if a label configuration exists for a pixel value.
- label_config(self: LabelDataComponent, pixel_value: int) LabelConfig | None
Gets label configuration for a pixel value.
- label_configs(self: LabelDataComponent) dict[int, LabelConfig]
Returns known label configurations.
- remove_label(self: LabelDataComponent, pixel_value: int) None
Removes label configuration for a pixel value.
- remove_unused_labels(self: LabelDataComponent, image: SharedImageSet) None
Removes configurations for non-existing labels in an image.
- set_default_label_config(self: LabelDataComponent, pixel_value: int) None
Sets default label configuration for a pixel value.
- set_label_config(self: LabelDataComponent, pixel_value: int, config: LabelConfig) None
Sets label configuration for a pixel value.
- set_label_configs(self: LabelDataComponent, configs: dict[int, LabelConfig]) None
Sets known label configurations from a dictionary mapping pixel values to LabelConfig objects.
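A minimal sketch of attaching a configuration for label value 1 (the color is passed as an RGBA sequence, which is assumed to convert to the expected array type):
>>> comp = imfusion.LabelDataComponent()
>>> liver = imfusion.LabelDataComponent.LabelConfig(name="Liver", color=[0.5, 0.1, 0.1, 1.0])
>>> comp.set_label_config(1, liver)
>>> comp.has_label(1)
True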
- AUTOMATIC = <SegmentationAlgorithmType.AUTOMATIC: 1>
- MANUAL = <SegmentationAlgorithmType.MANUAL: 3>
- SEMI_AUTOMATIC = <SegmentationAlgorithmType.SEMI_AUTOMATIC: 2>
- UNKNOWN = <SegmentationAlgorithmType.UNKNOWN: 0>
- class imfusion.LayoutMode(self: LayoutMode, value: int)
Bases:
pybind11_object
Members:
LAYOUT_ROWS
LAYOUT_FOCUS_PLUS_STACK
LAYOUT_FOCUS_PLUS_ROWS
LAYOUT_SIDE_BY_SIDE
LAYOUT_CUSTOM
- LAYOUT_CUSTOM = <LayoutMode.LAYOUT_CUSTOM: 100>
- LAYOUT_FOCUS_PLUS_ROWS = <LayoutMode.LAYOUT_FOCUS_PLUS_ROWS: 2>
- LAYOUT_FOCUS_PLUS_STACK = <LayoutMode.LAYOUT_FOCUS_PLUS_STACK: 1>
- LAYOUT_ROWS = <LayoutMode.LAYOUT_ROWS: 0>
- LAYOUT_SIDE_BY_SIDE = <LayoutMode.LAYOUT_SIDE_BY_SIDE: 3>
- property name
- property value
- class imfusion.LicenseInfo
Bases:
pybind11_object
Provides information about the currently used license.
- property expiration_date
Date until the license is valid in ISO format or None if the license won’t expire.
- property key
- class imfusion.Mask
Bases:
pybind11_object
Base interface for implementing polymorphic image masks.
- class CreateOption(self: CreateOption, value: int)
Bases:
pybind11_object
Enumeration of available behavior for Mask::create_explicit_mask().
Members:
DEEP_COPY
SHALLOW_COPY_IF_POSSIBLE
- DEEP_COPY = <CreateOption.DEEP_COPY: 0>
- SHALLOW_COPY_IF_POSSIBLE = <CreateOption.SHALLOW_COPY_IF_POSSIBLE: 1>
- property name
- property value
- create_explicit_mask(self: imfusion.Mask, image: imfusion.SharedImage, create_option: imfusion.Mask.CreateOption = <CreateOption.DEEP_COPY: 0>) MemImage
Creates an explicit mask representation of this mask for a given image.
- is_compatible(self: Mask, arg0: SharedImage) bool
Returns True if the mask can be used with the given image or False otherwise.
- mask_value(*args, **kwargs)
Overloaded function.
mask_value(self: imfusion.Mask, coord: numpy.ndarray[numpy.int32[3, 1]], color: numpy.ndarray[numpy.float32[4, 1]]) -> int
Returns 0 if the given pixel is outside the mask (i.e. invisible/to be ignored) or a non-zero value if it is inside the mask (i.e. visible/to be considered).
mask_value(self: imfusion.Mask, coord: numpy.ndarray[numpy.int32[3, 1]], value: float) -> int
Returns 0 if the given pixel is outside the mask (i.e. invisible/to be ignored) or a non-zero value if it is inside the mask (i.e. visible/to be considered).
- DEEP_COPY = <CreateOption.DEEP_COPY: 0>
- SHALLOW_COPY_IF_POSSIBLE = <CreateOption.SHALLOW_COPY_IF_POSSIBLE: 1>
- property requires_pixel_value
Returns True if mask_value() relies on the pixel value. If this method returns False, the mask_value() method can be safely used with only the coordinate.
- class imfusion.MemImage(*args, **kwargs)
Bases:
pybind11_object
A MemImage instance represents an image which resides in main memory.
The MemImage class supports the Buffer Protocol. This means that the underlying buffer can be wrapped in e.g. numpy without a copy:
>>> mem = imfusion.MemImage(imfusion.PixelType.BYTE, 10, 10)
>>> arr = np.array(mem, copy=False)
>>> arr.fill(0)
>>> np.sum(arr)
0
Be aware that most numpy operations create a copy of the data and don't affect the original data:
>>> np.sum(np.add(arr, 1))
100
>>> np.sum(arr)
0
To update the buffer of a MemImage, use np.copyto:
>>> np.copyto(arr, np.add(arr, 1))
>>> np.sum(arr)
100
Alternatively use the out argument of certain numpy functions:
>>> np.add(arr, 1, out=arr)
array(...)
>>> np.sum(arr)
200
Overloaded function.
__init__(self: imfusion.MemImage, type: imfusion.PixelType, width: int, height: int, slices: int = 1, channels: int = 1) -> None
__init__(self: imfusion.MemImage, desc: imfusion.ImageDescriptor) -> None
Factory method to instantiate a MemImage from an ImageDescriptor. Note: this method does not initialize the underlying buffer.
__init__(self: imfusion.MemImage, array: numpy.ndarray[numpy.int8], greyscale: bool = False) -> None
Create a MemImage from a numpy.array.
The array must be contiguous and must have between 2 and 4 dimensions. The dimensions are interpreted as (slices, height, width, channels). Missing dimensions are set to one. The color dimension must always be present, even for greyscale images, in which case it is 1.
Use the optional greyscale argument to specify that the color dimension is missing and the buffer should be interpreted as greyscale.
The actual array data is copied into the MemImage.
__init__(self: imfusion.MemImage, array: numpy.ndarray[numpy.uint8], greyscale: bool = False) -> None
__init__(self: imfusion.MemImage, array: numpy.ndarray[numpy.int16], greyscale: bool = False) -> None
__init__(self: imfusion.MemImage, array: numpy.ndarray[numpy.uint16], greyscale: bool = False) -> None
__init__(self: imfusion.MemImage, array: numpy.ndarray[numpy.int32], greyscale: bool = False) -> None
__init__(self: imfusion.MemImage, array: numpy.ndarray[numpy.uint32], greyscale: bool = False) -> None
__init__(self: imfusion.MemImage, array: numpy.ndarray[numpy.float32], greyscale: bool = False) -> None
__init__(self: imfusion.MemImage, array: numpy.ndarray[numpy.float64], greyscale: bool = False) -> None
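For example, creating a single-slice greyscale image from a numpy array (note the explicit channel dimension):
>>> import numpy as np
>>> arr = np.zeros((1, 64, 64, 1), dtype=np.uint8)   # (slices, height, width, channels)
>>> mem = imfusion.MemImage(arr)
>>> mem.shape
(1, 64, 64, 1)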
- apply_shift_and_scale(arr)
Return a copy of the array with storage values converted to original values. The dtype of the returned array is always DOUBLE.
- astype(self: MemImage, image_type: object) MemImage
Create a copy of the current MemImage instance with the requested Image format.
This function accepts either:
- an Image type (e.g. imfusion.Image.UINT);
- most numpy dtypes (e.g. np.uint);
- Python's float or int types.
If the requested Image format already matches the Image format of the current instance, then a clone of the current instance is returned.
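A short sketch, reusing the mem image from the examples above:
>>> float_copy = mem.astype(np.float32)   # from a numpy dtype
>>> int_copy = mem.astype(int)            # from a Python built-in type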
- create_float(self: object, normalize: bool = True, calc_min_max: bool = True, apply_scale_shift: bool = False) object
- crop(self: MemImage, width: int, height: int, slices: int = -1, ox: int = -1, oy: int = -1, oz: int = -1) MemImage
- downsample(self: imfusion.MemImage, dx: int, dy: int, dz: int = 1, zero_mask: bool = False, reduction_mode: imfusion.ReductionMode = <ReductionMode.AVERAGE: 1>) MemImage
- image_to_pixel(self: MemImage, world: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]
Convert a 3D image coordinate to a pixel position.
- numpy()
Convenience method for converting a MemImage or a SharedImage into a newly created numpy array with scale and shift already applied.
Shift and scale may determine a complex change of pixel type prior to the conversion into a numpy array:
as a first rule, even if the type of shift and scale is float, they will still be considered as integers if they represent integers (e.g. a shift of 2.000 will be treated as 2);
if shift and scale are such that the range of pixel values (determined by the pixel_type) would not fit into the pixel_type, e.g. a negative pixel value but the type is unsigned, then the pixel_type will be promoted to a signed type if possible, otherwise to a single precision floating point type;
if shift and scale are such that the range of pixel values (determined by the pixel_type) would fit into a demoted pixel_type, e.g. the type is signed but the range of pixel values is unsigned, then the pixel_type will be demoted;
if shift and scale do not guarantee that all the possible pixel values (in the range determined by the pixel_type) would become integers, then the pixel_type will be promoted to a single precision floating point type;
in any case, the returned numpy array will have a type of at most 32-bit integers. If the integer type would require more bits, then the resulting pixel_type will be DOUBLE.
- Parameters:
self – instance of a MemImage or of a SharedImage
- Returns:
numpy.ndarray
- pad(*args, **kwargs)
Overloaded function.
pad(self: imfusion.MemImage, pad_lower_left_front: numpy.ndarray[numpy.int32[3, 1]], pad_upper_right_back: numpy.ndarray[numpy.int32[3, 1]], padding_mode: imfusion.PaddingMode, legacy_mirror_padding: bool = True) -> imfusion.MemImage
pad(self: imfusion.MemImage, pad_size_x: tuple[int, int], pad_size_y: tuple[int, int], pad_size_z: tuple[int, int], padding_mode: imfusion.PaddingMode, legacy_mirror_padding: bool = True) -> imfusion.MemImage
- pixel_to_image(self: MemImage, pixel: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]
Convert a 3D pixel position to an image coordinate.
- range_threshold(self: MemImage, inside_range: bool, lower_value: float, upper_value: float, use_original: bool = True, replace_with: float = 0) MemImage
- resample(*args, **kwargs)
Overloaded function.
resample(self: imfusion.MemImage, spacing_adjustment: imfusion.SpacingMode, spacing: numpy.ndarray[numpy.float64[3, 1]], zero_mask: bool = False, reduction_mode: imfusion.ReductionMode = <ReductionMode.AVERAGE: 1>, interpolation_mode: imfusion.InterpolationMode = <InterpolationMode.LINEAR: 1>, allowed_dimension_change: bool = False) -> imfusion.MemImage
resample(self: imfusion.MemImage, dimensions: numpy.ndarray[numpy.int32[3, 1]], zero_mask: bool = False, reduction_mode: imfusion.ReductionMode = <ReductionMode.AVERAGE: 1>, interpolation_mode: imfusion.InterpolationMode = <InterpolationMode.LINEAR: 1>) -> imfusion.MemImage
- rotate(*args, **kwargs)
Overloaded function.
rotate(self: imfusion.MemImage, angle: int = 90, flip_dim: int = -1, axis: int = 2) -> imfusion.MemImage
rotate(self: imfusion.MemImage, rot: numpy.ndarray[numpy.float64[3, 3]], tolerance: float = 0.0) -> imfusion.MemImage
- threshold(self: MemImage, value: float, below: bool, apply_shift_scale: bool = True, merge_channels: bool = False, replace_with: float = 0) MemImage
- static zeros(desc: ImageDescriptor) MemImage
Factory method to create a zero-initialized image
- property channels
- property dimension
- property dimensions
- property extent
- property height
- property image_to_pixel_matrix
- property metric
- property ndim
- property pixel_to_image_matrix
- property scale
- property shape
Return a numpy-compatible shape describing the dimensions of this image.
The returned tuple has 4 entries: slices, height, width, channels
- property shift
- property slices
- property spacing
- property type
- property width
- class imfusion.Mesh(*args, **kwargs)
Bases:
Data
Overloaded function.
__init__(self: imfusion.Mesh, mesh: imfusion.Mesh) -> None
__init__(self: imfusion.Mesh, name: str = ‘’) -> None
- set_halfedge_color(*args, **kwargs)
Overloaded function.
set_halfedge_color(self: imfusion.Mesh, vertex_index: int, face_index: int, color: numpy.ndarray[numpy.float32[4, 1]]) -> None
set_halfedge_color(self: imfusion.Mesh, vertex_index: int, face_index: int, color: numpy.ndarray[numpy.float32[3, 1]], alpha: float) -> None
- set_halfedge_normal(self: Mesh, vertex_index: int, face_index: int, normal: ndarray[numpy.float64[3, 1]]) None
- set_vertex_color(*args, **kwargs)
Overloaded function.
set_vertex_color(self: imfusion.Mesh, index: int, color: numpy.ndarray[numpy.float32[4, 1]]) -> None
set_vertex_color(self: imfusion.Mesh, index: int, color: numpy.ndarray[numpy.float32[3, 1]], alpha: float) -> None
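For illustration, a hedged sketch of the second overload (assuming mesh is an existing Mesh with at least one vertex):
>>> import numpy as np
>>> mesh.set_vertex_color(0, np.array([1.0, 0.0, 0.0], dtype=np.float32), 1.0)  # vertex 0 to opaque red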
- property center
- property extent
- property filename
- property has_halfedge_colors
- property has_halfedge_normals
- property has_vertex_colors
- property has_vertex_normals
- property number_of_faces
- property number_of_vertices
- class imfusion.Optimizer
Bases:
pybind11_object
Object for non-linear optimization.
The current bindings are work in progress and therefore limited. They are so far mostly meant to be used for changing an existing optimizer rather than creating one from scratch.
- class Mode(self: Mode, value: int)
Bases:
pybind11_object
Mode of operation when execute is called.
Members:
OPT : Standard optimization.
STUDY : Randomized study.
PLOT : Evaluate for 1D or 2D plot generation.
EVALUATE : Single evaluation.
- EVALUATE = <Mode.EVALUATE: 3>
- OPT = <Mode.OPT: 0>
- PLOT = <Mode.PLOT: 2>
- STUDY = <Mode.STUDY: 1>
- property name
- property value
- configuration(self: Optimizer) Properties
Retrieves the configuration of the object.
- configure(self: Optimizer, arg0: Properties) None
Configures the object.
- execute(self: Optimizer, x: list[float]) list[float]
Execute the optimization given a vector of initial parameters of full dimensionality.
- set_bounds(*args, **kwargs)
Overloaded function.
set_bounds(self: imfusion.Optimizer, bounds: float) -> None
Set the same symmetric bounds in all parameters. A value of zero disables the bounds.
set_bounds(self: imfusion.Optimizer, lower_bounds: float, upper_bounds: float) -> None
Set the same lower and upper bounds for all parameters.
set_bounds(self: imfusion.Optimizer, bounds: list[float]) -> None
Set individual symmetric bounds.
set_bounds(self: imfusion.Optimizer, lower_bounds: list[float], upper_bounds: list[float]) -> None
Set individual lower and upper bounds.
set_bounds(self: imfusion.Optimizer, bounds: list[tuple[float, float]]) -> None
Set individual lower and upper bounds as a list of pairs.
- set_logging_level(self: Optimizer, file_level: int | None = None, console_level: int | None = None) None
Set level of detail for logging to text file and the console. 0 = none (default), 1 = init/result, 2 = every evaluation, 3 = only final result after study.
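A minimal usage sketch, assuming opt is an existing three-parameter Optimizer obtained elsewhere (the bindings are mainly meant for adjusting such an instance):
>>> opt.set_bounds(5.0)                                      # same symmetric bounds for all parameters
>>> opt.set_bounds([(0.0, 1.0), (-1.0, 1.0), (0.0, 10.0)])   # per-parameter (lower, upper) pairs
>>> opt.set_logging_level(file_level=0, console_level=1)     # log init/result to the console only
>>> result = opt.execute([0.5, 0.0, 5.0])                    # initial parameters of full dimensionality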
- property abort_eval
Abort after a certain number of cost function evaluations.
- property abort_fun_tol
Abort if the change in cost function value becomes too small.
- property abort_fun_val
Abort if this function value is reached.
- property abort_par_tol
Abort if change in parameter values becomes too small.
- property abort_time
Abort after a certain elapsed number of seconds.
- property aborted
Whether the optimizer was aborted.
- property best_val
Return best cost function value.
- property dimension
Total number of parameters. The selection gets cleared when the dimension value is modified.
- property first_val
Return cost function value of first evaluation.
- property minimizing
Whether the optimizer has a loss function (that it should minimize) or an objective function (that it should maximize).
- property mode
Mode of operation when execute is called.
- property num_eval
Return number of cost function evaluations computed so far.
- property param_names
Names of the parameters.
- property selection
Selected parameters.
- property type
Type of optimizer (see doc/header).
- class imfusion.PaddingMode(*args, **kwargs)
Bases:
pybind11_object
Members:
CLAMP
MIRROR
ZERO
Overloaded function.
__init__(self: imfusion.PaddingMode, value: int) -> None
__init__(self: imfusion.PaddingMode, arg0: str) -> None
- CLAMP = <PaddingMode.CLAMP: 2>
- MIRROR = <PaddingMode.MIRROR: 1>
- ZERO = <PaddingMode.ZERO: 0>
- property name
- property value
- class imfusion.ParametricDeformation
Bases:
Deformation
- set_parameters(self: ParametricDeformation, parameters: list[float]) None
- class imfusion.PatchInfo
Bases:
pybind11_object
Struct for storing the descriptor of the image a patch was extracted from and the region of interest in the original image.
- property original_image_descriptor
- property roi
- class imfusion.PatchesFromImageDataComponent
Bases:
DataComponentBase
Data component for keeping track of the original location of a patch in the original image. This is set for instance by the SplitIntoPatchesOperation when extracting patches from the input image.
- add(self: PatchesFromImageDataComponent, arg0: PatchInfo) None
- property patch_infos
- class imfusion.PixelType(self: PixelType, value: int)
Bases:
pybind11_object
Members:
BYTE
UBYTE
SHORT
USHORT
INT
UINT
FLOAT
DOUBLE
HFLOAT
- BYTE = <PixelType.BYTE: 5120>
- DOUBLE = <PixelType.DOUBLE: 5130>
- FLOAT = <PixelType.FLOAT: 5126>
- HFLOAT = <PixelType.HFLOAT: 5131>
- INT = <PixelType.INT: 5124>
- SHORT = <PixelType.SHORT: 5122>
- UBYTE = <PixelType.UBYTE: 5121>
- UINT = <PixelType.UINT: 5125>
- USHORT = <PixelType.USHORT: 5123>
- property name
- property value
- class imfusion.PluginInfo
Bases:
pybind11_object
Provides information about a framework plugin.
- property name
- property path
- class imfusion.PointCloud(self: PointCloud, points: list[ndarray[numpy.float64[3, 1]]] = [], *, normals: list[ndarray[numpy.float64[3, 1]]] = [], colors: list[ndarray[numpy.float64[3, 1]]] = [])
Bases:
Data
Data structure representing a point cloud in 3d space. Each point can have an associated color and normal vector.
Constructs a point cloud with the specified points, normals and colors. If the number of colors / normals does not match the number of points, they will be ignored with a warning.
- Parameters:
points – Vertices of the point cloud.
normals – Normals of the point cloud. If the length does not match points, normals will be dropped with a warning.
colors – Colors (RGB) of the point cloud. If the length does not match points, colors will be dropped with a warning.
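A small construction sketch (the coordinates are arbitrary):
>>> import numpy as np
>>> points = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
>>> pc = imfusion.PointCloud(points)
>>> pc.transform_point_cloud(np.eye(4))  # identity transform leaves the points unchanged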
- clone(self: PointCloud) PointCloud
Create a new point cloud by deep copying all data from this instance.
- transform_point_cloud(self: PointCloud, transformation: ndarray[numpy.float64[4, 4]]) None
- property colors
- property has_normals
- property is_dense
- property normals
- property points
- property weights
- class imfusion.PointCorrespondences(self: PointCorrespondences, first: PointsOnData, second: PointsOnData)
Bases:
pybind11_object
Class that handles point correspondences. Points at corresponding indices on the two PointsOnData instances are considered correspondences. When creating PointCorrespondences(first, second) names of points in first and second are made uniform giving precedence to names in first. Logic for matching points based on their names should be implemented outside of this class. The class supports both full-set and subset-based rigid fitting, allowing users to select specific point correspondences for the transformation calculation. The class supports estimation of a fitting error for any correspondence using complementary correspondences, which can assist with the identification of inconsistent correspondences.
- class PointCorrespondenceIterator
Bases:
pybind11_object
Iterator for PointCorrespondences
- __next__(self: PointCorrespondenceIterator) object
- class Reduction(self: Reduction, value: int)
Bases:
pybind11_object
Reduction type used during distance evaluation
Members:
MEAN : Mean reduction
MEDIAN : Median reduction
MAX : Max reduction
MIN : Min reduction
- MAX = <Reduction.MAX: 2>
- MEAN = <Reduction.MEAN: 0>
- MEDIAN = <Reduction.MEDIAN: 1>
- MIN = <Reduction.MIN: 3>
- property name
- property value
- __getitem__(self: PointCorrespondences, index: int) tuple
Get a point correspondences pair (in world coordinates) by index
- __iter__(self: PointCorrespondences) PointCorrespondenceIterator
Iterate over all point correspondences in world coordinates
- clear(self: PointCorrespondences) None
Remove all correspondences.
- compute_pairwise_distances(self: imfusion.PointCorrespondences, reduction: imfusion.PointCorrespondences.Reduction = <Reduction.MEAN: 0>, weights: list[float] = None) float
Compute the distance between pairs of correspondences.
- Parameters:
reduction (PointCorrespondences.Reduction) – The reduction method to use (MEAN, MEDIAN, MIN, MAX).
weights (list of float) – Optional weights for each correspondence. If provided, the individual errors are multiplied with these weights before reduction.
- fit_rigid(self: PointCorrespondences, subset_indices: list[int] | None = None) object
Fit a rigid transformation that aligns the selected correspondences.
- Parameters:
subset_indices – Optional list of indices to use for fitting. If not provided, all selected correspondences are used.
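A hedged usage sketch (assuming corr is an existing PointCorrespondences with at least three selected correspondences; the exact type of the returned transformation is not asserted here):
>>> t_all = corr.fit_rigid()                          # fit using all selected correspondences
>>> t_sub = corr.fit_rigid(subset_indices=[0, 1, 2])  # fit using only the first three pairs
>>> err = corr.compute_pairwise_distances(imfusion.PointCorrespondences.Reduction.MEDIAN)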
- is_selected(self: PointCorrespondences, index: int) bool
Check if a correspondence is selected.
- name(self: PointCorrespondences, index: int) str
Return the name of a correspondence.
- set_name(self: PointCorrespondences, index: int, name: str) None
Set the name of a correspondence at index ‘index’.
- set_selected(self: PointCorrespondences, index: int, selected: bool) None
Set whether a correspondence is selected.
- property point_handler
The PointsOnData handlers.
- class imfusion.PointsOnData
Bases:
pybind11_object
Base interface for points linked to Data.
- clear(self: PointsOnData) None
Remove all the points
- empty(self: PointsOnData) bool
Check if the list of points is empty
- find(self: PointsOnData, name: str, start: int = 0) int
Get the first index of a point with the given name, starting from index start. Return -1 if not found
- numpy(self: PointsOnData) ndarray[numpy.float64]
Convert all points to a numpy array
- property selected_points
Return the selected points in world coordinates
- class imfusion.PointsOnImage(self: PointsOnImage, image: SharedImageSet)
Bases:
PointsOnData
Class that holds a list of points on a volume or image. Points are automatically updated when the matrix or deformation changes.
Create a PointsOnImage object for the given SharedImageSet.
- __getitem__(*args, **kwargs)
Overloaded function.
__getitem__(self: imfusion.PointsOnImage, index: int) -> imfusion.PyPointsOnImagePoint
__getitem__(self: imfusion.PointsOnImage, indices: list[int]) -> list[imfusion.PyPointsOnImagePoint]
__getitem__(self: imfusion.PointsOnImage, slice: slice) -> list[imfusion.PyPointsOnImagePoint]
__getitem__(self: imfusion.PointsOnImage, name: str) -> imfusion.PyPointsOnImagePoint
Get a point in world coordinates by name.
- __iter__(self: PointsOnImage) PyPointsIterator
Iterate over all points in world coordinates
- __setitem__(self: PointsOnImage, arg0: int, arg1: ndarray[numpy.float64[3, 1]]) None
Set a point in world coordinates
- add_image_point(self: PointsOnImage, point: ndarray[numpy.float64[3, 1]], frame: int) None
Add a new point in image coordinates for a given frame.
- add_world_point(self: PointsOnImage, point: ndarray[numpy.float64[3, 1]], find_closest_frame: bool = True) None
Add a new point in world coordinates. If find_closest_frame is true, assigns to closest frame.
- image_point(self: PointsOnImage, index: int) tuple[ndarray[numpy.float64[3, 1]], int]
Return the point in image coordinates, paired with its associated frame.
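A short sketch that adds points to a small synthetic image set (shapes and coordinates are arbitrary):
>>> import numpy as np
>>> sis = imfusion.SharedImageSet(np.zeros([1, 1, 32, 32, 1], dtype='uint8'))
>>> points = imfusion.PointsOnImage(sis)
>>> points.add_image_point(np.array([5.0, 5.0, 0.0]), 0)  # pixel coordinates on frame 0
>>> points.add_world_point(np.array([1.0, 2.0, 0.0]))     # assigned to the closest frame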
- property all_points
Return the points in world coordinates.
- property image_points
Return the points in image coordinates, paired with their associated frame.
- property selected_image_points
Return only the selected points in image coordinates, paired with their associated frame.
- class imfusion.Properties(*args, **kwargs)
Bases:
pybind11_object
Properties objects store arbitrary key-value pairs as strings in a hierarchical fashion. Properties are extensively used within the ImFusion frameworks for purposes such as:
Saving and loading the state of an application to and from files.
Configuring the View instances of the UI.
Configuring the Algorithm instances.
The bindings provide two interfaces: a C++-like one based on param() and set_param(), and a more Pythonic interface using the [] operator. Both interfaces are equivalent and interchangeable.
Parameters can be set with the set_param() method, e.g.:
>>> p = imfusion.Properties()
>>> p.set_param('Spam', 5)
The parameter type will be set depending on the type of the Python value similar to C++. To retrieve a parameter, a value of the desired return type must be passed:
>>> spam = 0
>>> p.param('Spam', spam)
5
If the parameter doesn't exist, the value of the second argument is returned:
>>> foo = 8
>>> p.param('Foo', foo)
8
The Properties object also exposes all its parameters as items, e.g. to add a new parameter just add a new key:
>>> p = imfusion.Properties()
>>> p['spam'] = 5
When using the dictionary-like syntax with the basic types (bool, int, float, str and list), the returned values are typed accordingly:
>>> type(p['spam'])
<class 'int'>
However, for matrix and vector types, the param() method needs to be used, which receives an extra variable of the same type that has to be returned:
>>> import numpy as np
>>> np_array = np.ones(3)
>>> p['foo'] = np_array
>>> p.param('foo', np_array)
array([1., 1., 1.])
In fact, the dictionary-like syntax would just return it as a string instead:
>>> p['foo']
'1 1 1 '
Additionally, the attributes of parameters are available through the param_attributes() method:
>>> p.set_param_attributes('spam', 'max: 10')
>>> p.param_attributes('spam')
[('max', '10')]
A Properties object can be obtained from a dict:
>>> p = imfusion.Properties({'spam': 5, 'eggs': True, 'sub/0': { 'eggs': False }})
>>> p['eggs']
True
>>> p['sub/0']['eggs']
False
There are two possible, but slightly different, ways to convert a Properties instance into a dict. The first method is by dict casting, which returns a dict made by nested Properties. The second method is by calling the asdict() method, which returns a dict expanding also the nested Properties instances:
>>> dict(p)
{'spam': 5, 'eggs': True, 'sub/0': <Properties object at ...>}
>>> p.asdict()
{'spam': 5, 'eggs': True, 'sub/0': {'eggs': False}}
When a parameter needs to take values among a set of possible choices, the parameter can be assigned to an EnumStringParam:
>>> p["choice"] = imfusion.Properties.EnumStringParam(value="choice2", admitted_values={"choice1", "choice2"})
>>> p["choice"]
Properties.EnumStringParam(value="choice2", admitted_values={...})
Please refer to EnumStringParam for more information.
Overloaded function.
__init__(self: imfusion.Properties, name: str = '') -> None
__init__(self: imfusion.Properties, dictionary: dict) -> None
- class EnumStringParam(self: EnumStringParam, *, value: str, admitted_values: set[str])
Bases:
pybind11_object
Parameter that can assume a certain value among a set of str possibilities.
A first way to instantiate this class is to provide the value and the set of admitted values:
>>> p = imfusion.Properties()
>>> p["choice"] = imfusion.Properties.EnumStringParam(value="choice2", admitted_values={"choice1", "choice2"})
>>> p["choice"]
Properties.EnumStringParam(value="choice2", admitted_values={...})
If EnumStringParam is assigned a value that is not in the set of possible choices, then a ValueError is raised:
>>> p["choice"] = imfusion.Properties.EnumStringParam(value="choice3", admitted_values={"choice1", "choice2"})
Traceback (most recent call last):
...
ValueError: EnumStringParam was assigned to 'choice3' but it is not in the set of admitted values: ...
An EnumStringParam instance can be constructed from an Enum member by using the from_enum() method, in which case the EnumStringParam instance gets its value from the given Enum member, and gets its admitted_values from the set of Enum members:
>>> import enum
>>> class Choices(enum.Enum):
...     CHOICE_1: str = "choice1"
...     CHOICE_2: str = "choice2"
...
>>> p["choice"] = imfusion.Properties.EnumStringParam.from_enum(Choices.CHOICE_2)
>>> p["choice"]
Properties.EnumStringParam(value="CHOICE_2", admitted_values={...})
An EnumStringParam instance that corresponds 1-to-1 to an Enum can be converted into the Enum member that corresponds to its current value:
>>> p["choice"].to_enum(Choices)
<Choices.CHOICE_2: 'choice2'>
In the example above, the Enum members were used to populate the admitted_values. However, it is also possible to populate the admitted_values from the Enum values:
>>> p["choice"] = imfusion.Properties.EnumStringParam.from_enum(Choices.CHOICE_2, take_enum_values=True)
>>> p["choice"]
Properties.EnumStringParam(value="choice2", admitted_values={...})
>>> p["choice"].to_enum(Choices)
<Choices.CHOICE_2: 'choice2'>
- Parameters:
value – a choice among the set of admitted_values.
- classmethod from_enum(cls: object, enum_member: object, take_enum_values: bool = False) imfusion.Properties.EnumStringParam
Construct an EnumStringParam automatically out of the provided instance of an enumeration class.
- Parameters:
enum_member – a member of an enumeration class. The current value will be assigned to this argument, while the admitted_values will be automatically constructed from the members of the enumeration class.
take_enum_values – if False, then the enumeration members are taken as values. If True, then the enumeration values are taken as values: please note that in this case all the enumeration values must be unique and of str type.
- to_enum(self: EnumStringParam, enum_type: object) object
Casts into the corresponding member of the enum_type type. It raises when this is not possible.
- Parameters:
enum_type – the enumeration class into which to cast the current value. Please note that this enumeration class must be compatible, which means it must correspond to the set of admitted_values.
- property admitted_values
The current set of admitted values.
- property value
The current value that is assumed among the current set of admitted values.
- __getitem__(self: Properties, arg0: str) object
- __setitem__(*args, **kwargs)
Overloaded function.
__setitem__(self: imfusion.Properties, name: str, value: bool) -> None
__setitem__(self: imfusion.Properties, name: str, value: int) -> None
__setitem__(self: imfusion.Properties, name: str, value: float) -> None
__setitem__(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 3]]) -> None
__setitem__(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[4, 4]]) -> None
__setitem__(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 4]]) -> None
__setitem__(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[3, 3]]) -> None
__setitem__(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[4, 4]]) -> None
__setitem__(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[2, 1]]) -> None
__setitem__(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 1]]) -> None
__setitem__(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[4, 1]]) -> None
__setitem__(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[5, 1]]) -> None
__setitem__(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[2, 1]]) -> None
__setitem__(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[3, 1]]) -> None
__setitem__(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[4, 1]]) -> None
__setitem__(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.int32[2, 1]]) -> None
__setitem__(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.int32[3, 1]]) -> None
__setitem__(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.int32[4, 1]]) -> None
__setitem__(self: imfusion.Properties, name: str, value: str) -> None
__setitem__(self: imfusion.Properties, name: str, value: os.PathLike) -> None
__setitem__(self: imfusion.Properties, name: str, value: list[str]) -> None
__setitem__(self: imfusion.Properties, name: str, value: list[os.PathLike]) -> None
__setitem__(self: imfusion.Properties, name: str, value: list[bool]) -> None
__setitem__(self: imfusion.Properties, name: str, value: list[int]) -> None
__setitem__(self: imfusion.Properties, name: str, value: list[float]) -> None
__setitem__(self: imfusion.Properties, name: str, value: list[numpy.ndarray[numpy.float64[2, 1]]]) -> None
__setitem__(self: imfusion.Properties, name: str, value: list[numpy.ndarray[numpy.float64[3, 1]]]) -> None
__setitem__(self: imfusion.Properties, name: str, value: list[numpy.ndarray[numpy.float64[4, 1]]]) -> None
__setitem__(self: imfusion.Properties, name: str, value: imfusion.Properties.EnumStringParam) -> None
__setitem__(self: imfusion.Properties, name: str, value: object) -> None
- add_sub_properties(self: Properties, name: str) Properties
- asdict(self: Properties) dict
Return the Properties as a dict.
The dictionary values have the correct type when they are basic (bool, int, float, str and list), all other param types are returned with a str type. Subproperties are turned into nested dicts.
- clear(self: Properties) None
- copy_from(self: Properties, arg0: Properties) None
- get(self: Properties, key: str, default_value: object = None) object
- get_name(self: Properties) str
- items(self: Properties) list
- keys(self: Properties) list
- static load_from_json(path: str) Properties
- static load_from_xml(path: str) Properties
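For illustration, a round trip through JSON using save_to_json() and load_from_json() (the file name is a placeholder):
>>> p = imfusion.Properties({'spam': 5, 'eggs': True})
>>> p.save_to_json('params.json')
>>> q = imfusion.Properties.load_from_json('params.json')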
- param(*args, **kwargs)
Overloaded function.
param(self: imfusion.Properties, name: str, value: bool) -> bool
param(self: imfusion.Properties, name: str, value: int) -> int
param(self: imfusion.Properties, name: str, value: float) -> float
param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 3]]) -> numpy.ndarray[numpy.float64[3, 3]]
param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[4, 4]]) -> numpy.ndarray[numpy.float64[4, 4]]
param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 4]]) -> numpy.ndarray[numpy.float64[3, 4]]
param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[3, 3]]) -> numpy.ndarray[numpy.float32[3, 3]]
param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[4, 4]]) -> numpy.ndarray[numpy.float32[4, 4]]
param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[2, 1]]) -> numpy.ndarray[numpy.float64[2, 1]]
param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 1]]) -> numpy.ndarray[numpy.float64[3, 1]]
param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[4, 1]]) -> numpy.ndarray[numpy.float64[4, 1]]
param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[5, 1]]) -> numpy.ndarray[numpy.float64[5, 1]]
param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[2, 1]]) -> numpy.ndarray[numpy.float32[2, 1]]
param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[3, 1]]) -> numpy.ndarray[numpy.float32[3, 1]]
param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[4, 1]]) -> numpy.ndarray[numpy.float32[4, 1]]
param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.int32[2, 1]]) -> numpy.ndarray[numpy.int32[2, 1]]
param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.int32[3, 1]]) -> numpy.ndarray[numpy.int32[3, 1]]
param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.int32[4, 1]]) -> numpy.ndarray[numpy.int32[4, 1]]
param(self: imfusion.Properties, name: str, value: str) -> str
param(self: imfusion.Properties, name: str, value: os.PathLike) -> os.PathLike
param(self: imfusion.Properties, name: str, value: list[str]) -> list[str]
param(self: imfusion.Properties, name: str, value: list[os.PathLike]) -> list[os.PathLike]
param(self: imfusion.Properties, name: str, value: list[bool]) -> list[bool]
param(self: imfusion.Properties, name: str, value: list[int]) -> list[int]
param(self: imfusion.Properties, name: str, value: list[float]) -> list[float]
param(self: imfusion.Properties, name: str, value: list[numpy.ndarray[numpy.float64[2, 1]]]) -> list[numpy.ndarray[numpy.float64[2, 1]]]
param(self: imfusion.Properties, name: str, value: list[numpy.ndarray[numpy.float64[3, 1]]]) -> list[numpy.ndarray[numpy.float64[3, 1]]]
param(self: imfusion.Properties, name: str, value: list[numpy.ndarray[numpy.float64[4, 1]]]) -> list[numpy.ndarray[numpy.float64[4, 1]]]
param(self: imfusion.Properties, name: str, value: imfusion.Properties.EnumStringParam) -> imfusion.Properties.EnumStringParam
- params(self: Properties) list[str]
Return a list of all param names.
Params inside sub-properties will be prefixed with the name of the sub-properties (e.g. 'sub/var'). If with_sub_params is false, only the top-level params are returned.
- remove_param(self: Properties, name: str) None
- save_to_json(self: Properties, path: str) None
- save_to_xml(self: Properties, path: str) None
- set_name(self: Properties, name: str) None
- set_param(*args, **kwargs)
Overloaded function.
set_param(self: imfusion.Properties, name: str, value: bool) -> None
set_param(self: imfusion.Properties, name: str, value: bool, default: bool) -> None
set_param(self: imfusion.Properties, name: str, value: int) -> None
set_param(self: imfusion.Properties, name: str, value: int, default: int) -> None
set_param(self: imfusion.Properties, name: str, value: float) -> None
set_param(self: imfusion.Properties, name: str, value: float, default: float) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 3]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 3]], default: numpy.ndarray[numpy.float64[3, 3]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[4, 4]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[4, 4]], default: numpy.ndarray[numpy.float64[4, 4]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 4]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 4]], default: numpy.ndarray[numpy.float64[3, 4]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[3, 3]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[3, 3]], default: numpy.ndarray[numpy.float32[3, 3]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[4, 4]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[4, 4]], default: numpy.ndarray[numpy.float32[4, 4]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[2, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[2, 1]], default: numpy.ndarray[numpy.float64[2, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 1]], default: numpy.ndarray[numpy.float64[3, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[4, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[4, 1]], default: numpy.ndarray[numpy.float64[4, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[5, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float64[5, 1]], default: numpy.ndarray[numpy.float64[5, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[2, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[2, 1]], default: numpy.ndarray[numpy.float32[2, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[3, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[3, 1]], default: numpy.ndarray[numpy.float32[3, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[4, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.float32[4, 1]], default: numpy.ndarray[numpy.float32[4, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.int32[2, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.int32[2, 1]], default: numpy.ndarray[numpy.int32[2, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.int32[3, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.int32[3, 1]], default: numpy.ndarray[numpy.int32[3, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.int32[4, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: numpy.ndarray[numpy.int32[4, 1]], default: numpy.ndarray[numpy.int32[4, 1]]) -> None
set_param(self: imfusion.Properties, name: str, value: str) -> None
set_param(self: imfusion.Properties, name: str, value: str, default: str) -> None
set_param(self: imfusion.Properties, name: str, value: os.PathLike) -> None
set_param(self: imfusion.Properties, name: str, value: os.PathLike, default: os.PathLike) -> None
set_param(self: imfusion.Properties, name: str, value: list[str]) -> None
set_param(self: imfusion.Properties, name: str, value: list[str], default: list[str]) -> None
set_param(self: imfusion.Properties, name: str, value: list[os.PathLike]) -> None
set_param(self: imfusion.Properties, name: str, value: list[os.PathLike], default: list[os.PathLike]) -> None
set_param(self: imfusion.Properties, name: str, value: list[bool]) -> None
set_param(self: imfusion.Properties, name: str, value: list[bool], default: list[bool]) -> None
set_param(self: imfusion.Properties, name: str, value: list[int]) -> None
set_param(self: imfusion.Properties, name: str, value: list[int], default: list[int]) -> None
set_param(self: imfusion.Properties, name: str, value: list[float]) -> None
set_param(self: imfusion.Properties, name: str, value: list[float], default: list[float]) -> None
set_param(self: imfusion.Properties, name: str, value: list[numpy.ndarray[numpy.float64[2, 1]]]) -> None
set_param(self: imfusion.Properties, name: str, value: list[numpy.ndarray[numpy.float64[2, 1]]], default: list[numpy.ndarray[numpy.float64[2, 1]]]) -> None
set_param(self: imfusion.Properties, name: str, value: list[numpy.ndarray[numpy.float64[3, 1]]]) -> None
set_param(self: imfusion.Properties, name: str, value: list[numpy.ndarray[numpy.float64[3, 1]]], default: list[numpy.ndarray[numpy.float64[3, 1]]]) -> None
set_param(self: imfusion.Properties, name: str, value: list[numpy.ndarray[numpy.float64[4, 1]]]) -> None
set_param(self: imfusion.Properties, name: str, value: list[numpy.ndarray[numpy.float64[4, 1]]], default: list[numpy.ndarray[numpy.float64[4, 1]]]) -> None
set_param(self: imfusion.Properties, name: str, value: imfusion.Properties.EnumStringParam) -> None
set_param(self: imfusion.Properties, name: str, value: imfusion.Properties.EnumStringParam, default: imfusion.Properties.EnumStringParam) -> None
- set_param_attributes(self: Properties, name: str, attributes: str) None
- sub_properties(*args, **kwargs)
Overloaded function.
sub_properties(self: imfusion.Properties, name: str, create_if_doesnt_exist: bool = False) -> imfusion.Properties
sub_properties(self: imfusion.Properties) -> list[imfusion.Properties]
- sub_properties_all(self: Properties, name: str) list
- values(self: Properties) list
- class imfusion.PyPointsIterator
Bases:
pybind11_object
- __iter__(self: PyPointsIterator) PyPointsIterator
- __next__(self: PyPointsIterator) PyPointsOnImagePoint
- class imfusion.PyPointsOnImagePoint
Bases:
pybind11_object
- property image_frame
Gets/sets the image frame of a point.
- property image_position
Gets/sets the image position of a point.
- property name
Gets/sets the name of a point.
- property selected
Gets/sets whether a point is selected. By default all points are selected.
- property world_position
Gets/sets the world position of a point.
- class imfusion.RealWorldMappingDataComponent(self: RealWorldMappingDataComponent)
Bases:
DataComponentBase
- class Mapping(self: Mapping)
Bases:
pybind11_object
- storage_to_real_world(self: Mapping, image_descriptor: ImageDescriptor, value: float) float
- property intercept
- property slope
- property type
- property unit
- class MappingType(self: MappingType, value: int)
Bases:
pybind11_object
Members:
REAL_WORLD_VALUES
STANDARDIZED_UPTAKE_VALUES
- REAL_WORLD_VALUES = <MappingType.REAL_WORLD_VALUES: 0>
- STANDARDIZED_UPTAKE_VALUES = <MappingType.STANDARDIZED_UPTAKE_VALUES: 1>
- property name
- property value
- REAL_WORLD_VALUES = <MappingType.REAL_WORLD_VALUES: 0>
- STANDARDIZED_UPTAKE_VALUES = <MappingType.STANDARDIZED_UPTAKE_VALUES: 1>
- property mappings
- property units
- class imfusion.ReductionMode(self: ReductionMode, value: int)
Bases:
pybind11_object
Members:
LOOKUP
AVERAGE
MINIMUM
MAXIMUM
- AVERAGE = <ReductionMode.AVERAGE: 1>
- LOOKUP = <ReductionMode.LOOKUP: 0>
- MAXIMUM = <ReductionMode.MAXIMUM: 3>
- MINIMUM = <ReductionMode.MINIMUM: 2>
- property name
- property value
- class imfusion.ReferenceImageDataComponent
Bases:
DataComponentBase
Data component used to store a reference image. The reference image is used to keep track of the input of a processing pipeline or a machine learning model, and can be used to set the correct image descriptor for the output of the pipeline.
- property reference
- class imfusion.RegionOfInterest(self: RegionOfInterest, arg0: ndarray[numpy.int32[3, 1]], arg1: ndarray[numpy.int32[3, 1]])
Bases:
pybind11_object
- property offset
- property size
- class imfusion.Selection(*args, **kwargs)
Bases:
Configurable
Utility class for describing a selection of elements out of a set. Conceptually, a Selection pairs a list of bools describing selected items with the index of a “focus” item and provides syntactic sugar on top. For instance, the set of selected items could define which ones to show in general while the focus item is additionally highlighted. The class is fully separate from the item set of which it describes the selection. This means for instance that it cannot know the actual number of items in the set and the user/parent class must manually make sure that they match. Also, a Selection only manages indices and offers no way of accessing the underlying elements. In order to iterate over all selected indices, you can do for instance the following:
>>> for index in range(selection.start, selection.stop):
...     if selection[index]:
...         pass
The same effect can also be achieved in a much more terse fashion:
>>> for selected_index in selection.selected_indices:
...     pass
For convenience, the selection can also be converted to a slice object (if the selection has a regular spacing, see below):
>>> selected_subset = container[selection.to_slice()]
Sometimes it can be more convenient to "thin out" a selection by only selecting every N-th element. To this end, the Selection constructor takes the arguments start, stop and step. Setting step to N will only select every N-th element, mimicking the signature of range, slice, etc.
Overloaded function.
__init__(self: imfusion.Selection) -> None
__init__(self: imfusion.Selection, stop: int) -> None
__init__(self: imfusion.Selection, start: int, stop: int, step: int = 1) -> None
__init__(self: imfusion.Selection, indices: list[int]) -> None
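A brief construction sketch mirroring the range-like signature (expected indices are given as comments rather than asserted output):
>>> sel = imfusion.Selection(0, 10, 2)            # every 2nd element in [0, 10)
>>> explicit = imfusion.Selection([1, 3, 5])      # explicit list of selected indices
>>> for selected_index in sel.selected_indices:   # iterate over the selected indices
...     pass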
- class NonePolicy(self: NonePolicy, value: int)
Bases:
pybind11_object
Members:
EMPTY
FOCUS
ALL
- ALL = <NonePolicy.ALL: 2>
- EMPTY = <NonePolicy.EMPTY: 0>
- FOCUS = <NonePolicy.FOCUS: 1>
- property name
- property value
- is_selected(self: Selection, index: int, none_policy: NonePolicy) bool
- ALL = <NonePolicy.ALL: 2>
- EMPTY = <NonePolicy.EMPTY: 0>
- FOCUS = <NonePolicy.FOCUS: 1>
- property first_selected
- property focus
- property has_regular_skip
- property is_none
- property last_selected
- property range
- property selected_indices
- property size
- property skip
- property start
- property step
- property stop
- class imfusion.SharedImage(*args, **kwargs)
Bases:
pybind11_object
A SharedImage instance represents an image that resides in different memory locations, i.e. in CPU memory or GPU memory.
A SharedImage can be directly converted from and to a numpy array:
>>> img = imfusion.SharedImage(np.ones([10, 10, 1], dtype='uint8'))
>>> arr = np.array(img)
See MemImage for details.
Overloaded function.
__init__(self: imfusion.SharedImage, mem_image: imfusion.MemImage) -> None
__init__(self: imfusion.SharedImage, desc: imfusion.ImageDescriptor) -> None
__init__(self: imfusion.SharedImage, desc: imfusion.ImageDescriptorWorld) -> None
__init__(self: imfusion.SharedImage, type: imfusion.PixelType, width: int, height: int, slices: int = 1, channels: int = 1) -> None
__init__(self: imfusion.SharedImage, array: numpy.ndarray[numpy.int8], greyscale: bool = False) -> None
__init__(self: imfusion.SharedImage, array: numpy.ndarray[numpy.uint8], greyscale: bool = False) -> None
__init__(self: imfusion.SharedImage, array: numpy.ndarray[numpy.int16], greyscale: bool = False) -> None
__init__(self: imfusion.SharedImage, array: numpy.ndarray[numpy.uint16], greyscale: bool = False) -> None
__init__(self: imfusion.SharedImage, array: numpy.ndarray[numpy.int32], greyscale: bool = False) -> None
__init__(self: imfusion.SharedImage, array: numpy.ndarray[numpy.uint32], greyscale: bool = False) -> None
__init__(self: imfusion.SharedImage, array: numpy.ndarray[numpy.float32], greyscale: bool = False) -> None
__init__(self: imfusion.SharedImage, array: numpy.ndarray[numpy.float64], greyscale: bool = False) -> None
Returns True if all pixels / voxels are non-zero
Returns True if at least one pixel / voxel is non-zero
Return a copy of the array with storage values converted to original values. The dtype of the returned array is always DOUBLE.
Return a list of the indices of maximum values, channel-wise. The indices are represented as (x, y, z, image index).
Return a list of the indices of minimum values, channel-wise. The indices are represented as (x, y, z, image index).
Copies the contents of arr to the SharedImage. Automatically calls setDirtyMem.
The casting parameter behaves like that of numpy.copyto.
Create a copy of the current SharedImage instance with the requested Image format.
This function accepts either: a PixelType (e.g. imfusion.PixelType.UINT), most numpy dtypes (e.g. np.uint), or Python's float or int types.
If the requested PixelType already matches the PixelType of the provided SharedImage, then a clone of the current instance is returned.
Reorders the channels of an image based on the input indices, e.g. indices[0] will correspond to the first channel of the output image.
- Parameters:
indices (List[int]) – List of channel indices used to swizzle the channels of the SharedImage.
Clear representations that are not CPU memory
Return the list of the maximum elements of images, channel-wise.
Return a list of channel-wise average of image elements.
Return the list of the minimum elements of images, channel-wise.
Returns the norm of an image instance, channel-wise.
Convenience method for converting a MemImage or a SharedImage into a newly created numpy array with scale and shift already applied.
Shift and scale may require a change of pixel type prior to the conversion into a numpy array:
even if shift and scale are stored as floating point numbers, they are treated as integers whenever they represent integer values (e.g. a shift of 2.000 is treated as 2);
if shift and scale map the pixel value range (determined by the pixel_type) outside of the pixel_type, e.g. a negative pixel value for an unsigned type, the pixel_type is promoted to a signed type if possible, otherwise to a single-precision floating point type;
if shift and scale map the pixel value range into a smaller pixel_type, e.g. the type is signed but the range of pixel values is unsigned, the pixel_type is demoted;
if shift and scale do not guarantee that all possible pixel values (in the range determined by the pixel_type) become integers, the pixel_type is promoted to a single-precision floating point type.
In any case, the returned numpy array uses integer types of at most 32 bits; if the integer type would require more bits, the resulting pixel_type is DOUBLE.
- Parameters:
self – instance of a MemImage or of a SharedImage
- Returns:
numpy.ndarray
- Prepare the image:
Integral types are converted to unsigned representation if applicable, double-precision will be converted to single-precision float. Furthermore, if shift_only is False it will rescale the present intensity range to [0..1] for floating point types or to the entire available value range for integral types.
Return a list of channel-wise product of image elements.
Return a list of channel-wise sum of image elements.
Convert SharedImageSet or a SharedImage to a torch.Tensor.
- Parameters:
self (DataElement | SharedImageSet | SharedImage) – Instance of SharedImageSet or SharedImage (this function bound as a method to SharedImageSet and SharedImage)
device (device) – Target device for the new torch.Tensor
dtype (dtype) – Type of the new torch.Tensor
same_as (Tensor) – Template tensor whose device and dtype configuration should be matched. device and dtype are still applied afterwards.
- Returns:
New torch.Tensor
- Return type:
torch.Tensor
Numpy-compatible shape describing the dimensions of this image, stored as a namedtuple.
- Returns:
Named tuple with slices, height, width, and channels attributes.
- Return type:
collections.namedtuple
Physical extent of each voxel in [mm] stored as a namedtuple. Spacing for a specific dimension can be accessed via the x, y, and z attributes.
- Returns:
Named tuple with x, y, and z attributes.
- Return type:
collections.namedtuple
- class imfusion.SharedImageSet(*args, **kwargs)
Bases:
Data
Set of images independent of their storage location.
This class is the main high-level container for image data consisting of one or multiple images or volumes, and should be used both in algorithms and visualization classes. Both a single focus and multiple selection is featured, as well as providing transformation matrices for each image.
The focus image of a SharedImageSet can be directly converted from and to a numpy array:
>>> img = imfusion.SharedImageSet(np.ones([1, 10, 10, 10, 1], dtype='uint8'))
>>> arr = np.array(img)
See MemImage for details.
Overloaded function.
__init__(self: imfusion.SharedImageSet) -> None
Creates an empty SharedImageSet.
__init__(self: imfusion.SharedImageSet, mem_image: imfusion.MemImage) -> None
__init__(self: imfusion.SharedImageSet, shared_image: imfusion.SharedImage) -> None
__init__(self: imfusion.SharedImageSet, array: numpy.ndarray[numpy.int8], greyscale: bool = False) -> None
__init__(self: imfusion.SharedImageSet, array: numpy.ndarray[numpy.uint8], greyscale: bool = False) -> None
__init__(self: imfusion.SharedImageSet, array: numpy.ndarray[numpy.int16], greyscale: bool = False) -> None
__init__(self: imfusion.SharedImageSet, array: numpy.ndarray[numpy.uint16], greyscale: bool = False) -> None
__init__(self: imfusion.SharedImageSet, array: numpy.ndarray[numpy.int32], greyscale: bool = False) -> None
__init__(self: imfusion.SharedImageSet, array: numpy.ndarray[numpy.uint32], greyscale: bool = False) -> None
__init__(self: imfusion.SharedImageSet, array: numpy.ndarray[numpy.float32], greyscale: bool = False) -> None
__init__(self: imfusion.SharedImageSet, array: numpy.ndarray[numpy.float64], greyscale: bool = False) -> None
Overloaded function.
add(self: imfusion.SharedImageSet, shared_image: imfusion.SharedImage) -> None
add(self: imfusion.SharedImageSet, mem_image: imfusion.MemImage) -> None
Returns True if all pixels / voxels are non-zero
Returns True if at least one pixel / voxel is non-zero
Return a copy of the array with storage values converted to original values.
- Parameters:
self – instance of a SharedImageSet which provides shift and scale
arr – array to be converted from storage values into original values
- Returns:
numpy.ndarray
Return a list of the indices of maximum values, channel-wise. The indices are represented as (x, y, z, image index).
Return a list of the indices of minimum values, channel-wise. The indices are represented as (x, y, z, image index).
Copies the contents of arr to the MemImage. Automatically calls setDirtyMem.
Returns a new SharedImageSet formed by new SharedImage instances obtained by converting the original ones into the requested PixelType.
This function accepts either: a PixelType (e.g. imfusion.PixelType.UINT), most numpy dtypes (e.g. np.uint), or Python's float or int types.
If the requested type already matches the input type, the returned SharedImageSet will contain clones of the original images.
Reorders the channels of an image based on the input indices, e.g. indices[0] will correspond to the first channel of the output image.
- Parameters:
indices (List[int]) – List of channel indices used to swizzle the channels of the SharedImageSet.
Overloaded function.
from_images(path: str) -> imfusion.SharedImageSet
Load different images as a single SharedImageSet.
Currently supported image formats are: [bmp, pgm, png, ppm, jpg, jpeg, tif, tiff, jp2].
- Args:
folder_path: The directory where all images are located to be loaded as a SharedImageSet.
- Raises:
IOError if the file cannot be opened or if the extension is not supported.
from_images(path: list[str]) -> imfusion.SharedImageSet
Load different images as a single SharedImageSet.
Currently supported image formats are: [bmp, pgm, png, ppm, jpg, jpeg, tif, tiff, jp2].
- Args:
file_paths: paths to image files to be loaded as a SharedImageSet.
- Raises:
IOError if the file cannot be opened or if the extension is not supported.
Create a SharedImageSet from a torch Tensor. If you want to copy metadata from an existing SharedImageSet you can pass it as the get_metadata_from argument. If you are using this, make sure that the size of the tensor's batch dimension and the number of images in the SIS are equal. If get_metadata_from is provided, properties will be copied from the SIS and world_to_image_matrix, spacing and modality from the contained SharedImages.
- Parameters:
cls – Instance of type i.e. SharedImageSet (this function is bound as a classmethod to SharedImageSet)
tensor (Tensor) – Instance of torch.Tensor
get_metadata_from (SharedImageSet | None) – Instance of SharedImageSet from which metadata should be copied.
- Returns:
New instance of SharedImageSet
- Return type:
SharedImageSet
Return the list of the maximum elements of images, channel-wise.
Return a list of channel-wise average of image elements.
Return the list of the minimum elements of images, channel-wise.
Returns the norm of an image instance, channel-wise.
Convenience method for reading a SharedImageSet as original values, with shift and scale already applied.
- Parameters:
self – instance of a SharedImageSet
- Returns:
numpy.ndarray
Return a list of channel-wise product of image elements.
Removes and deletes the SharedImage from the set.
Return a list of channel-wise sum of image elements.
Convert SharedImageSet or a SharedImage to a torch.Tensor.
- Parameters:
self (DataElement | SharedImageSet | SharedImage) – Instance of SharedImageSet or SharedImage (this function bound as a method to SharedImageSet and SharedImage)
device (device) – Target device for the new torch.Tensor
dtype (dtype) – Type of the new torch.Tensor
same_as (Tensor) – Template tensor whose device and dtype configuration should be matched. device and dtype are still applied afterwards.
- Returns:
New torch.Tensor
- Return type:
torch.Tensor
Return a numpy-compatible shape describing the dimensions of this image.
The returned tuple has 5 entries: #frames, slices, height, width, channels
- class imfusion.SignalConnection
Bases:
pybind11_object
- disconnect(self: SignalConnection) bool
- property is_active
- property is_blocked
- property is_connected
- class imfusion.SkippingMask(self: SkippingMask, shape: ndarray[numpy.int32[3, 1]], skip: ndarray[numpy.int32[3, 1]])
Bases:
Mask
Basic mask where only every N-th pixel is considered inside.
- property skip
Step size in pixels for the mask
- class imfusion.SpacingMode(self: SpacingMode, value: int)
Bases:
pybind11_object
Members:
EXACT
ADJUST
- ADJUST = <SpacingMode.ADJUST: 1>
- EXACT = <SpacingMode.EXACT: 0>
- property name
- property value
Bases:
SharedImageSet
- class imfusion.TrackerID(*args, **kwargs)
Bases:
pybind11_object
Overloaded function.
__init__(self: imfusion.TrackerID) -> None
__init__(self: imfusion.TrackerID, id: str = ‘’, model_number: str = ‘’, name: str = ‘’) -> None
- property id
- property model_number
- property name
- class imfusion.TrackingSequence(self: TrackingSequence, name: str = '')
Bases:
Data
- add(*args, **kwargs)
Overloaded function.
add(self: imfusion.TrackingSequence, mat: numpy.ndarray[numpy.float64[4, 4]]) -> None
add(self: imfusion.TrackingSequence, mat: numpy.ndarray[numpy.float64[4, 4]], timestamp: float) -> None
add(self: imfusion.TrackingSequence, mat: numpy.ndarray[numpy.float64[4, 4]], timestamp: float, quality: float) -> None
add(self: imfusion.TrackingSequence, mat: numpy.ndarray[numpy.float64[4, 4]], timestamp: float, quality: float, flags: int) -> None
- clear(self: TrackingSequence) None
- flags(self: TrackingSequence, num: int = -1) int
- matrix(*args, **kwargs)
Overloaded function.
matrix(self: imfusion.TrackingSequence, num: int) -> numpy.ndarray[numpy.float64[4, 4]]
matrix(self: imfusion.TrackingSequence, time: float) -> numpy.ndarray[numpy.float64[4, 4]]
- quality(*args, **kwargs)
Overloaded function.
quality(self: imfusion.TrackingSequence, num: int) -> float
quality(self: imfusion.TrackingSequence, time: float, check_distance: bool = True, ignore_relative: bool = False) -> float
- raw_matrix(self: TrackingSequence, num: int) ndarray[numpy.float64[4, 4]]
- remove(self: TrackingSequence, pos: int, count: int = 1) None
- set_raw_matrix(self: TrackingSequence, idx: int, value: ndarray[numpy.float64[4, 4]]) None
- set_timestamp(self: TrackingSequence, idx: int, value: float) None
- shift_timestamps(self: TrackingSequence, shift: float) None
- timestamp(self: TrackingSequence, num: int = -1) float
- property calibration
- property center
- property filename
- property filter_mode
- property filter_size
- property has_timestamps
- property instrument_id
- property instrument_model
- property instrument_name
- property invert
- property median_time_step
- property registration
- property relative_to_first
- property relative_tracking
- property size
- property temporal_offset
- property tracker_id
- class imfusion.TransformationStashDataComponent(self: TransformationStashDataComponent)
Bases:
DataComponentBase
- property original
- property transformations
- class imfusion.VisualizerHandle
Bases:
pybind11_object
The handle to a visualizer. It allows closing a specific visualizer when needed. Example:
>>> visualizer_handle = imfusion.show(data_list, title="MyData") >>> assert visualizer_handle.title() == "MyData" >>> ... >>> visualizer_handle.close()
- close(self: VisualizerHandle) None
Close the visualizer associated to this handle.
- title(self: VisualizerHandle) str
Get the title of the visualizer associated to this handle.
- class imfusion.VitalsDataComponent
Bases:
DataComponentBase
DataComponent for storing a collection of time dependent vital signs like ECG, heart rate or pulse oximeter measurements.
- class VitalsKind(self: VitalsKind, value: int)
Bases:
pybind11_object
Members:
ECG
PULSE_OXIMETER
HEARTH_RATE
OTHER
- ECG = <VitalsKind.ECG: 0>
- HEARTH_RATE = <VitalsKind.HEARTH_RATE: 2>
- OTHER = <VitalsKind.OTHER: 3>
- PULSE_OXIMETER = <VitalsKind.PULSE_OXIMETER: 1>
- property name
- property value
- __getitem__(self: VitalsDataComponent, kind: VitalsKind) list[VitalsTimeSeries]
- ECG = <VitalsKind.ECG: 0>
- HEARTH_RATE = <VitalsKind.HEARTH_RATE: 2>
- OTHER = <VitalsKind.OTHER: 3>
- PULSE_OXIMETER = <VitalsKind.PULSE_OXIMETER: 1>
- property kinds
- imfusion.algorithm_properties(id: str, data: list) Properties
Returns the default properties of the given algorithm. This is useful to figure out what properties are supported by an algorithm.
- imfusion.auto_window(image: SharedImageSet, change2d: bool = True, change3d: bool = True, lower_limit: float = 0.0, upper_limit: float = 0.0) None
Update window/level of input image to show the entire intensity range of the image.
- Parameters:
image (SharedImageSet) – Image to change the windowing for.
change2d (bool) – Flag indicating whether to update the DisplayOptions2d attached to the image.
change3d (bool) – Flag indicating whether to update the DisplayOptions3d attached to the image.
lower_limit (double) – Ratio of lower values removed by the auto windowing.
upper_limit (double) – Ratio of upper values removed by the auto windowing.
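A hedged usage sketch, reusing the image loaded in the load() example below (the 0.01 limits clip roughly 1% of the lowest and highest intensities):
>>> image = imfusion.load('ct_image.png')[0]
>>> imfusion.auto_window(image, lower_limit=0.01, upper_limit=0.01)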
- imfusion.available_algorithms(sub_string: str = '', case_sensitive: bool = False) list[str]
Return a list of all available algorithm ids.
Optionally, a substring can be given to filter the list (case-insensitive by default).
- imfusion.available_data_components() list[str]
Returns the Unique IDs of all DataComponents registered in DataComponentFactory.
- imfusion.create_algorithm(id: str, data: list = [], properties: Properties = None) object
Create the algorithm with the given id but without executing it.
The algorithm will only be created if it is compatible with the given data. The optional Properties object will be used to configure the algorithm.
- Parameters:
id – String identifier of the Algorithm to create.
data – List of input data that the Algorithm expects.
properties – Configuration for the Algorithm in the form of a Properties instance.
Example
>>> create_algorithm("Create Synthetic Data", [])
<imfusion.BaseAlgorithm object at ...>
- imfusion.create_data_component(id: str, properties: Properties = None) object
Instantiates a DataComponent specified by the given ID.
- Parameters:
id – Unique ID of the DataComponent to create.
properties – Optional Properties object. If not None, it will be used to configure the newly created DataComponent.
- imfusion.deinit() None
De-initializes the framework.
Deletes the main OpenGL context and unloads all plugins.
This should only be called at the end of the application. Automatically called when the module is unloaded.
Does nothing if the framework was not initialized yet.
- imfusion.execute_algorithm(id: str, data: list = [], properties: Properties = None) list
Execute the algorithm with the given id and returns its output.
The algorithm will only be executed if it is compatible with the given data. The optional Properties object will be used to configure the algorithm before executing it.
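For example, mirroring the create_algorithm() example above (the algorithm id is taken from there; the contents of the returned list depend on the algorithm):
>>> output = imfusion.execute_algorithm("Create Synthetic Data", [])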
- imfusion.gpu_info() str | None
Return string with information about GPU to check if hardware support for OpenGL is available.
- imfusion.info() FrameworkInfo
Provides general information about the framework.
- imfusion.keep_data_alive(cls)
- imfusion.list_viewers() list[VisualizerHandle]
Return a list of visualization handles that were created with
show()
. Please note that this method may return viewers that have been closed without
VisualizerHandle.close()
or
close_viewers()
.
- imfusion.load(path: str | PathLike) list
Load the content of a file or folder as a list of
Data
. The list can contain instances of any class deriving from Data, e.g.
SharedImage
,
Mesh
,
PointCloud
, etc.
- Parameters:
path – Path to a file in a supported file format, or to a folder containing Dicom data if the imfusion package was built with Dicom support.
Note
An IOError is raised if the file cannot be opened or a ValueError if the filetype is not supported. Some filetypes (like workspaces) cannot be opened by this function, but must be opened with
imfusion.ApplicationController.open()
.
Example
>>> imfusion.load('ct_image.png') [imfusion.SharedImageSet(size: 1, [imfusion.SharedImage(USHORT width: 512 height: 512 spacing: 0.661813x0.661813x1 mm)])] >>> imfusion.load('multi_label_segmentation.nii.gz') [imfusion.SharedImageSet(size: 1, [imfusion.SharedImage(UBYTE width: 128 height: 128 slices: 128 channels: 3 spacing: 1x1x1 mm)])] >>> imfusion.load('us_sweep.dcm') [imfusion.SharedImageSet(size: 20, [ imfusion.SharedImage(UBYTE width: 164 height: 552 spacing: 0.228659x0.0724638x1 mm), imfusion.SharedImage(UBYTE width: 164 height: 552 spacing: 0.228659x0.0724638x1 mm), ... imfusion.SharedImage(UBYTE width: 164 height: 552 spacing: 0.228659x0.0724638x1 mm) >>> imfusion.load('path_to_folder_containing_multiple_dcm_datasets') [imfusion.SharedImageSet(size: 1, [imfusion.SharedImage(FLOAT width: 400 height: 400 slices: 300 spacing: 2.03642x2.03642x3 mm)])]
- imfusion.load_plugin(path: str) None
Load a single ImFusionLib plugin from the given file. WARNING: This might execute arbitrary code. Only use with trusted files!
- imfusion.load_plugins(folder: str) None
Load all ImFusionLib plugins from the given folder. WARNING: This might execute arbitrary code. Only use with trusted folders!
- imfusion.log_level() int
Returns the level of the logging in the ImFusionSDK (Trace = 0, Debug = 1, Info = 2, Warning = 3, Error = 4, Fatal = 5, Quiet = 6)
- imfusion.open(file: str) list
Open a file and load it as data.
Return a list of loaded datasets. An IOError is raised if the file cannot be opened or a ValueError if the filetype is not supported.
Some filetypes (like workspaces) cannot be opened by this function, but must be opened with
imfusion.ApplicationController.open()
.
- imfusion.open_in_suite(data: list[Data]) None
Starts the ImFusion Suite with the input data list. The ImFusionSuite executable must be in your PATH.
- imfusion.register_algorithm(id, name, cls)
Register an Algorithm to the framework.
The Algorithm will be accessible through the given id. If the id is already used, the registration will fail.
cls must derive from Algorithm, otherwise a TypeError is raised.
- imfusion.save(*args, **kwargs)
Overloaded function.
save(shared_image_set: imfusion.SharedImageSet, path: Union[str, os.PathLike], **kwargs) -> None
Save a
SharedImageSet
to the specified file or folder path. The path extension is used to determine which file format to save to. If a folder path is provided instead, then images are saved in the directory as separate
png
files. Currently supported file formats are:
ImFusion File, extension
imf
NIfTI File, extensions [
nii
,
nii.gz
]
Folder path
- Parameters:
shared_image_set – Instance of
SharedImageSet
.
path – Path to output file or folder. The path extension is used to determine the file format.
**kwargs –
keep_ras_coordinates (
bool
) – NIfTI only. Sets whether to keep the RAS (Right, Anterior, Superior) coordinate system.
compression_level (
int
) – Folder only. Compression level of the output
png
files. Valid values range from 0-9 (0 - no compression, 9 - "maximal" compression).
- Raises:
RuntimeError if path extension is not supported. Currently supported extensions are ['imf', 'nii', 'nii.gz'], or no extension (save to folder). –
Example
>>> image_set = imfusion.SharedImageSet(np.ones((1,8,8,1))) >>> imfusion.save(image_set, tmp_path / 'file.imf') # saves an ImFusion file >>> imfusion.save(image_set, tmp_path / 'file.nii.gz', keep_ras_coordinates=True) # saves a NIfTI file
save(mesh: imfusion.Mesh, file_path: Union[str, os.PathLike]) -> None
Save a
imfusion.Mesh
to the specified file path. The path extension is used to determine which file format to save to. Currently supported file formats are:
ImFusion File, extension
imf
Polygon File Format or the Stanford Triangle Format, extension
ply
STL file format used for 3D printing and computer-aided design (CAD), extension
stl
Object File Format, extension
off
OBJ file format developed by Wavefront, extension
obj
Virtual Reality Modeling Language file format, extension
wrl
Standard Starlink NDF (SUN/33) file format, extension
surf
Raster GIS file format developed by Esri, extension
grid
3D Manufacturing Format, extension
3mf
- Parameters:
mesh – Instance of
imfusion.Mesh
.
file_path – Path to output file. The path extension is used to determine the file format.
- Raises:
RuntimeError if file_path extension is not supported. Currently supported extensions are ["ply", "stl", "off", "obj", "wrl", "surf", "grid", "3mf"]. –
Example
>>> mesh = imfusion.mesh.create(imfusion.mesh.Primitive.SPHERE) >>> imfusion.save(mesh, tmp_path / 'mesh.imf')
save(point_cloud: imfusion.PointCloud, file_path: Union[str, os.PathLike]) -> None
Save a
imfusion.PointCloud
to the specified file path. The path extension is used to determine which file format to save to. Currently supported file formats are:
ImFusion File, extension
imf
Point Cloud Data used inside Point Cloud Library (PCL), extension
pcd
OBJ file format developed by Wavefront, extension
obj
Polygon File Format or the Stanford Triangle Format, extension
ply
- Parameters:
point_cloud – Instance of
imfusion.PointCloud
.
file_path – Path to output file. The path extension is used to determine the file format.
- Raises:
RuntimeError if file_path extension is not supported. Currently supported extensions are ['imf', 'pcd', 'obj', 'ply', 'txt', 'xyz']. –
Example
>>> pc = imfusion.PointCloud([(0,0,0), (1,1,1), (-1,-1,-1)]) >>> imfusion.save(pc, tmp_path / 'point_cloud.pcd')
save(data: imfusion.Data, file_path: Union[str, os.PathLike]) -> None
Save a
Data
instance to the specified file path as an ImFusion file.
- Parameters:
data – any instance of a class deriving from
Data
can be saved with this method; examples are
SharedImageSet
,
Mesh
and
PointCloud
.
file_path – Path to ImFusion file. The data is saved in a single file. File path must end with .imf.
Note
Raises a RuntimeError on failure or if file_path doesn’t end with .imf extension.
Example
>>> mesh = imfusion.mesh.create(imfusion.mesh.Primitive.SPHERE) >>> imfusion.save(mesh, tmp_path / 'mesh.imf')
save(data_list: list[imfusion.Data], file_path: Union[str, os.PathLike]) -> None
Save a list of data to the specified file path as an ImFusion file.
- Parameters:
data_list – List of
Data
. Any class deriving from Data can be saved with this method. Examples of Data are
SharedImageSet
,
Mesh
,
PointCloud
, etc.
file_path – Path to ImFusion file. The entire list of Data is saved in a single file. File path must end with .imf.
Note
Raises a RuntimeError on failure or if file_path doesn’t end with .imf extension.
Example
>>> image_set = imfusion.SharedImageSet(np.ones((1,8,8,1))) >>> mesh = imfusion.mesh.create(imfusion.mesh.Primitive.SPHERE) >>> point_cloud = imfusion.PointCloud([(0,0,0), (1,1,1), (-1,-1,-1)]) >>> another_image_set = imfusion.SharedImageSet(np.ones((1,8,8,1))) >>> imfusion.save([image_set, mesh, point_cloud, another_image_set], tmp_path / 'file.imf')
- imfusion.set_log_level(level: int) None
Sets the level of the logging in the ImFusionSDK (Trace = 0, Debug = 1, Info = 2, Warning = 3, Error = 4, Fatal = 5, Quiet = 6).
The initial log level is 3 (Warning), but can be set explicitly with the IMFUSION_LOG_LEVEL environment variable.
Note
After calling
transfer_logging_to_python()
this function has no effect.
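Example (a minimal sketch; assumes transfer_logging_to_python() has not been called and uses the level mapping given above):
>>> imfusion.set_log_level(2)  # switch to Info-level logging
>>> imfusion.log_level()
2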
- imfusion.show(*args, **kwargs)
Overloaded function.
show(data: imfusion.Data, *, title: Optional[str] = None) -> imfusion.VisualizerHandle
Launch a visualizer displaying the input data (e.g. a SharedImageSet). A title can also optionally be assigned.
show(data_list: list[imfusion.Data], *, title: Optional[str] = None) -> imfusion.VisualizerHandle
Launch a visualizer displaying the input list of data. A title can also optionally be assigned.
show(filepath: Union[str, os.PathLike], *, title: Optional[str] = None) -> imfusion.VisualizerHandle
Launch a visualizer displaying the content of the filepath. Only .imf files are supported at this point. A title can also optionally be assigned.
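Example (a minimal sketch, assuming a previously loaded SharedImageSet; the returned handle can be closed again via VisualizerHandle.close()):
>>> image_set, *_ = imfusion.load('ct_image.png')
>>> handle = imfusion.show(image_set, title='Preview')
>>> handle.close()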
- imfusion.transfer_logging_to_python() None
Transfers control of logging from ImFusionLib to the "ImFusion" logger, which can be obtained through Python's logging module with
logging.getLogger("ImFusion")
.
After calling
transfer_logging_to_python
, configuring the logger is possible exclusively through Python's logging module interface, e.g. using
logging.getLogger("ImFusion").setLevel
. In addition, all imfusion logs emitted after calling this function but before importing the
logging
module will not be captured.
Note
Please note that this redirection cannot be cancelled and that any subsequent calls to this function will have no effect.
Warning
Due to the GIL, log messages from internal threads won’t be forwarded to the logger.
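Example (a minimal sketch of handing logging over to Python's logging module, as described above):
>>> import logging
>>> logging.basicConfig()
>>> imfusion.transfer_logging_to_python()
>>> logging.getLogger("ImFusion").setLevel(logging.DEBUG)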
- imfusion.try_import_imfusion_plugin(plugin: str) None
- Parameters:
plugin (str) –
- Return type:
None
- imfusion.unregister_algorithm(name: str) None
Unregister a previously registered algorithm.
This only works for algorithms that were registered through the Python interface, not for built-in algorithms.
- imfusion.wraps(wrapped, assigned=('__module__', '__name__', '__qualname__', '__doc__', '__annotations__', '__type_params__'), updated=('__dict__',))
Decorator factory to apply update_wrapper() to a wrapper function
Returns a decorator that invokes update_wrapper() with the decorated function as the wrapper argument and the arguments to wraps() as the remaining arguments. Default arguments are as for update_wrapper(). This is a convenience function to simplify applying partial() to update_wrapper().
imfusion.computed_tomography
ImFusion Computed Tomography (CT) Python Bindings
Core Functionality Areas
Geometry:
Cone beam geometry representation and manipulation
Parametric geometry generators for scanner configurations
Geometry calibration and validation tools
Simulation:
Forward projection and cone beam simulation
GPU-accelerated projection operators
Synthetic data generation from volumes and meshes
Scanner-specific simulation presets
2D/3D Registration:
X-ray to volume registration algorithms
Multiple initialization strategies (manual, keypoint, point-direction)
GPU-accelerated registration optimization
Reconstruction:
Analytical reconstruction (FDK)
Iterative reconstruction (MLEM, SART, SQS, CG)
Advanced regularization techniques
Example Usage
Basic cone beam geometry setup:
>>> import imfusion.computed_tomography as ct
>>> ct.make_cone_beam_data(projections)
>>> metadata = ct.ConeBeamMetadata.get(projections)
>>> metadata.enable_modern_geometry()
>>> param_gen = ct.ParametricGeometryGenerator()
>>> param_gen.source_det_distance = 1200.0
>>> param_gen.source_pat_distance = 800.0
>>> param_gen.angle_range = 180.0
>>> metadata.add_generator(param_gen, select=True)
Forward projection simulation:
>>> projections = ct.simulate_cone_beam_projections(
... volume,
... geometry_preset=ct.GeometryPreset.FULL_SCAN,
... proj_type=ct.ProjectionType.LOG_CONVERTED_ATTENUATION,
... width=1024, height=1024, frames=360,
... add_poisson_noise=False
... )
2D/3D registration workflow:
>>> ct.register_2d_3d_xray(
... projections, volume,
... initialization_mode=ct.InitializationMode.KEYPOINTS,
... num_resolution_levels=4,
... anatomy_name="spine"
... )
Reconstruction with iterative solver:
>>> volume = ct.reconstruct_cone_beam_ct(
... projections,
... solver_mode="MLEM",
... max_iterations=50,
... subset_size=10,
... force_positivity=True
... )
For detailed documentation of specific classes and functions, use Python’s built-in help() function or access the docstrings directly.
Note: This module requires the ImFusion CT plugin to be properly installed and licensed.
- class imfusion.computed_tomography.CBCTProjector
Bases:
LinearOperator
Base class for Cone Beam CT projectors.
Provides the interface for forward projection (volume to projections) and backprojection (projections to volume) operations that are fundamental to CT reconstruction.
- property use_mask_input_apply
Apply input masking during forward projection.
- property use_mask_input_apply_adjoint
Apply input masking during backprojection (adjoint operation).
- property use_mask_output_apply
Apply output masking during forward projection.
- property use_mask_output_apply_adjoint
Apply output masking during backprojection (adjoint operation).
- class imfusion.computed_tomography.CTStatus(self: CTStatus, value: int)
Bases:
pybind11_object
Status codes for CT operations.
Members:
SUCCESS : Operation completed successfully.
NOT_IMPLEMENTED : Operation not implemented.
ILL_FORMED_INPUT : Invalid input parameters provided.
CANCELLED : Operation was cancelled.
ERROR : General error occurred during operation.
- CANCELLED = <CTStatus.CANCELLED: 3>
- ERROR = <CTStatus.ERROR: 4>
- ILL_FORMED_INPUT = <CTStatus.ILL_FORMED_INPUT: 2>
- NOT_IMPLEMENTED = <CTStatus.NOT_IMPLEMENTED: 1>
- SUCCESS = <CTStatus.SUCCESS: 0>
- property name
- property value
- class imfusion.computed_tomography.ComputationPhase(self: ComputationPhase, value: int)
Bases:
pybind11_object
Phases of computation during 2D/3D registration.
Members:
UNDEF : Undefined phase.
ORIGINAL : Original geometry.
INITIALIZATION : Initialization phase.
OPTIMIZATION : Optimization phase.
- INITIALIZATION = <ComputationPhase.INITIALIZATION: 2>
- OPTIMIZATION = <ComputationPhase.OPTIMIZATION: 3>
- ORIGINAL = <ComputationPhase.ORIGINAL: 1>
- UNDEF = <ComputationPhase.UNDEF: 0>
- property name
- property value
- class imfusion.computed_tomography.ConeBeamGeometry(self: ConeBeamGeometry, *, source_det_distance: float = 0.0, source_pat_distance: float = 0.0, det_size_x: float = 0.0, det_size_y: float = 0.0, det_offset_x: float = 0.0, det_rotation: float = 0.0, det_shear_x: float = 0.0, det_shear_y: float = 0.0, recon_size: float = 0.0, recon_offset_x: float = 0.0, recon_offset_y: float = 0.0, recon_offset_z: float = 0.0, recon_rot_x: float = 0.0, recon_rot_y: float = 0.0, angle_start: float = 0.0, angle_range: float = 0.0, angle_tilt: float = 0.0, use_matrices: bool = False, enable_frame_pars: bool = False, use_fan_beam: bool = False, jitter: list[float] = [], angles: list[float] = [], offsets: list[ndarray[numpy.float64[3, 1]]] = [], iso_mat_rot_ctr: ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]))
Bases:
Configurable
Legacy cone beam geometry representation.
This class provides the traditional parametric representation of cone beam CT geometry using intrinsic and extrinsic parameters. It defines the geometric relationship between X-ray source, detector, and reconstruction volume for cone beam acquisition systems.
Key Features:
Source-detector distance and source-patient distance
Detector size, offset, rotation, and shear parameters
Reconstruction volume size and positioning
Support for circular acquisition trajectories
Matrix serialization in multiple formats
Example
Create and configure a cone beam geometry:
>>> geometry = ct.ConeBeamGeometry() >>> geometry.source_det_distance = 1200.0 # mm >>> geometry.source_pat_distance = 600.0 # mm >>> geometry.det_size_x = 400.0 # mm >>> geometry.det_size_y = 300.0 # mm >>> geometry.recon_size = 256.0 # mm
Create a ConeBeamGeometry fully configured via parameters.
- Parameters:
source_det_distance – Source to detector center distance in mm
source_pat_distance – Source to patient distance in mm
det_size_x – Detector width in mm
det_size_y – Detector height in mm
det_offset_x – Horizontal detector offset in mm
det_rotation – Detector rotation in degrees
det_shear_x – Detector shear in X (mm)
det_shear_y – Detector shear in Y (mm)
recon_size – Reconstruction volume size in mm (all dimensions)
recon_offset_x – Reconstruction X offset in mm
recon_offset_y – Reconstruction Y offset in mm
recon_offset_z – Reconstruction Z offset in mm
recon_rot_x – Reconstruction X rotation in degrees
recon_rot_y – Reconstruction Y rotation in degrees
angle_start – Rotation angle of first frame in degrees
angle_range – Entire rotation range in degrees
angle_tilt – Vertical tilt in degrees
use_matrices – Use per-frame matrices instead of parametric geometry
enable_frame_pars – Consider per-frame motion parameters
use_fan_beam – Fan-beam projection along z-direction
jitter – Additional per-frame jitter values
angles – Individual rotation angles per frame (degrees)
offsets – Iso-center offset per frame (vec3, mm)
iso_mat_rot_ctr – Rotation center of the global iso-matrix (vec3, mm)
- to_per_frame_geometry(self: ConeBeamGeometry, num_frames: int) list[FullGeometryRepresentation]
Convert this ConeBeamGeometry to a per-frame geometry.
This function returns a vector of FullGeometryRepresentation instances where all parametric geometry is expanded into explicit per-frame matrices for the specified number of frames. Useful for algorithms that require explicit geometry for each frame.
- Parameters:
num_frames – Number of frames to expand the geometry to.
- Returns:
Vector of FullGeometryRepresentation with computed geometry parameters
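Example (a minimal sketch with illustrative parameter values; one FullGeometryRepresentation is returned per requested frame):
>>> geometry = ct.ConeBeamGeometry(source_det_distance=1200.0, source_pat_distance=600.0, angle_range=360.0)
>>> frames = geometry.to_per_frame_geometry(180)
>>> len(frames)
180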
- property angles
Individual rotation angles per frame in degrees.
If specified, these override the parametric angle calculation.
- property enable_frame_pars
Whether per-frame motion parameters are considered (default is false).
- Type:
- property iso_mat_rot_ctr
Optional rotation center of global iso-matrix.
3D point defining the rotation center for iso-matrix transformations.
- property jitter
Additional per-frame jitter parameters.
Vector of jitter values applied to individual frames for motion simulation.
- property offsets
Individual iso-center offset per frame.
Vector of 3D offsets applied to the iso-center for each frame.
- property recon_rot_x
X rotation of the reconstruction volume in degrees (default is 0.0).
- Type:
- property recon_rot_y
Y rotation of the reconstruction volume in degrees (default is 0.0).
- Type:
- property source_det_distance
Source to center of detector distance in mm (default is 0.0).
- Type:
- property use_fan_beam
Whether this is a fan beam projection along the z-direction (default is false).
Warning
Not all methods support fan beam geometry.
- Type:
- class imfusion.computed_tomography.ConeBeamMetadata(self: ConeBeamMetadata)
Bases:
pybind11_object
Container for cone beam projection metadata.
This class stores all the metadata associated with cone beam projection data, including geometry information, intensity mode, and other acquisition parameters.
Create an empty ConeBeamMetadata container.
- add_generator(self: ConeBeamMetadata, generator: GeometryGenerator, select: bool, store_current_geometry_in_history: bool = False) str
Add a geometry generator with history management.
- Parameters:
generator – GeometryGenerator to add (will be cloned)
select – Whether to select this generator as active
store_current_geometry_in_history – Whether to store current geometry before switching
- Returns:
String key of the added generator
- clear_generators(self: ConeBeamMetadata) None
Clear all geometry generators.
Removes all generators from the metadata.
- disable_modern_geometry(self: ConeBeamMetadata) None
Disable modern geometry representation.
Switches back to legacy ConeBeamGeometry representation.
- enable_modern_geometry(self: ConeBeamMetadata) None
Enable modern geometry representation.
Switches from legacy ConeBeamGeometry to modern geometry components.
- geometry(*args, **kwargs)
Overloaded function.
geometry(self: imfusion.computed_tomography.ConeBeamMetadata) -> imfusion.computed_tomography.ConeBeamGeometry
Access the cone beam geometry parameters.
- Returns:
Reference to the ConeBeamGeometry object containing all geometric parameters
geometry(self: imfusion.computed_tomography.ConeBeamMetadata) -> imfusion.computed_tomography.ConeBeamGeometry
Access the cone beam geometry parameters.
- Returns:
Const reference to the ConeBeamGeometry object
- static get(*args, **kwargs)
Overloaded function.
get(projections: imfusion.SharedImageSet) -> imfusion.computed_tomography.ConeBeamMetadata
Get ConeBeamMetadata from a SharedImageSet.
- Parameters:
projections – SharedImageSet containing the metadata
- Returns:
Reference to the ConeBeamMetadata component
get(projections: imfusion.SharedImageSet) -> imfusion.computed_tomography.ConeBeamMetadata
Get ConeBeamMetadata from a SharedImageSet.
- Parameters:
projections – SharedImageSet containing the metadata
- Returns:
Reference to the ConeBeamMetadata component
- id(self: ConeBeamMetadata) str
Get the data component identifier.
- Returns:
String identifier for the metadata component
- remove_generator(self: ConeBeamMetadata, key: str) None
Remove a geometry generator by its key.
- Parameters:
key – String key of the generator to remove
- sync_to_modern_geometry(self: ConeBeamMetadata) None
Synchronize from legacy geometry to modern geometry components.
Converts legacy ConeBeamGeometry parameters to modern SourceDataComponent and DetectorDataComponent representations.
- property folder
Legacy parameter storing the folder path.
Path to the folder from which the projection data was loaded.
- property generator_uses_selection
Whether the generator should use frame selection.
Controls if the geometry generator respects frame selection settings.
- property global_scaling
Legacy global scaling factor for reconstruction.
Multiplicative factor applied to reconstruction values.
- property i0
Legacy i0 parameter for intensity mode.
Reference intensity value, typically air intensity.
- property intensity_mode
Intensity mode for the projection data.
Specifies whether air appears bright (Absorption) or dark (LinearAttenuation).
- property num_generators
Get the number of geometry generators.
- Returns:
Number of available geometry generators
- property recon_size
Suggested reconstruction size in mm.
Recommended size for the reconstruction volume in all dimensions.
- property using_legacy_geometry
Check if using legacy geometry representation.
- Returns:
True if using legacy ConeBeamGeometry, False if using modern geometry components
- class imfusion.computed_tomography.ConeBeamSimulation(self: imfusion.computed_tomography.ConeBeamSimulation, volume: imfusion.SharedImage, *, geometry_preset: imfusion.computed_tomography.GeometryPreset = <GeometryPreset.HALF_SCAN: 0>, motion_preset: imfusion.computed_tomography.MotionPreset = <MotionPreset.NO_MOTION: 0>, proj_type: imfusion.computed_tomography.ProjectionType = <ProjectionType.PHOTON_COUNT: 0>, width: int = 512, height: int = 512, frames: int = 180, data_type: imfusion.PixelType = <PixelType.FLOAT: 5126>, i0: float = 1.0, add_poisson_noise: bool = False, scaling: float = 1.0, subtract: bool = False, physical_units: bool = False, stream_proj: bool = False, duration: float = 1.0, continuous_updates: bool = False, reference_projections: imfusion.SharedImageSet = None)
Bases:
BaseAlgorithm
Cone beam X-ray projection simulation algorithm.
This algorithm simulates cone beam X-ray projections from a 3D volume using GPU-accelerated ray casting. It supports various acquisition geometries, motion patterns, and realistic noise models for CT simulation studies.
Key Features:
Multiple geometry presets (half-scan, full-scan, C-arm, etc.)
Motion simulation (jitter, patient motion, device motion)
Realistic noise modeling with Poisson statistics
Physical units support for density-based simulation
Reference projection integration
Example
Simulate basic cone beam projections:
>>> volume, *_ = imf.load("volume.nii") >>> simulator = ct.ConeBeamSimulation( ... volume, ... geometry_preset=ct.GeometryPreset.FULL_SCAN, ... width=1024, ... height=1024, ... frames=360 ... ) >>> projections = simulator() >>> print(f"Simulated {projections.size()} projections")
Create cone beam simulation algorithm.
- Parameters:
volume – Input volume to simulate projections from
geometry_preset – Predefined geometry configuration
motion_preset – Motion pattern for simulation
proj_type – Type of projection (photon count or log-converted attenuation)
width – Width of simulated detector in pixels
height – Height of simulated detector in pixels
frames – Number of projection frames to simulate
data_type – Data type for simulated projections
i0 – Incident intensity for transmission projections
add_poisson_noise – Add realistic Poisson noise to projections
scaling – Scaling factor applied to all projection values
subtract – Subtract reference data from simulation
physical_units – Use physical units for simulation calculations
stream_proj – Sync simulated frames to CPU after simulation
duration – Duration of the motion in seconds
continuous_updates – Enable continuous updates when volume changes
reference_projections – Optional reference projection data
- compute(self: ConeBeamSimulation) SharedImageSet
Run simulation and return the result.
- Returns:
SharedImageSet containing the simulated cone beam projections
- geometry(self: ConeBeamSimulation) ConeBeamGeometry
Access the geometry instance for updating it.
- Returns:
Reference to the ConeBeamGeometry object
- property add_poisson_noise
Add Poisson noise if photon count projections are simulated.
- property continuous_updates
If set, the last projections created by compute() are updated continuously when the volume/geometry changes.
- property data_type
Image data type to use for simulated images.
- property duration
Duration of the motion in seconds.
- property frames
Number of frames to be simulated.
- property geometry_preset
Geometry preset of the projections to be simulated.
- property height
Height of the projections to be simulated.
- property i0
Maximum X-Ray intensity, i.e. air intensity value.
- property motion_preset
Motion preset of the projections to be simulated.
- property physical_units
Whether the input volume should be interpreted as a density map in kg/m^3.
- property proj_type
Projection type of the projections to be simulated.
- property scaling
Global intensity scaling factor.
- property stream_proj
If enabled the simulated frames will always be synced to the CPU after simulation.
- property subtract
Subtract reference data from simulation.
- property width
Width of the projections to be simulated.
- class imfusion.computed_tomography.DetectorCurvature(self: DetectorCurvature, value: int)
Bases:
pybind11_object
Types of detector curvature for cone beam geometry.
Different detector curvatures require specific geometric correction algorithms.
Members:
FLAT : Flat panel detector with no curvature correction.
CYLINDRICAL : Cylindrical detector curvature model.
FANFLAT : Fan-beam flat detector geometry.
- CYLINDRICAL = <DetectorCurvature.CYLINDRICAL: 1>
- FANFLAT = <DetectorCurvature.FANFLAT: 2>
- FLAT = <DetectorCurvature.FLAT: 0>
- property name
- property value
- class imfusion.computed_tomography.DetectorDataComponent(self: DetectorDataComponent, *, matrix_world_to_detector: ndarray[numpy.float64[4, 4]] = array([[1., 0., 0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]]), persistent_index_and_range: ndarray[numpy.int32[2, 1]] = array([-1, -1], dtype=int32))
Bases:
DataComponentBase
Data component storing detector transformation for a single frame.
This component defines the detector position and orientation in world coordinates for cone beam geometry calculations. Used with SourceDataComponent to provide complete frame geometry information.
Create DetectorDataComponent with world-to-detector transform and persistent index/range.
- Parameters:
matrix_world_to_detector – 4x4 transform from world to detector space
persistent_index_and_range – vec2i where x=index, y=range for tracking
- static get(*args, **kwargs)
Overloaded function.
get(projections: imfusion.SharedImageSet, frame: int) -> imfusion.computed_tomography.DetectorDataComponent
Get DetectorDataComponent for a specific frame.
- Parameters:
projections – SharedImageSet to access
frame – Frame index
- Returns:
Pointer to DetectorDataComponent or None if not found
get(projections: imfusion.SharedImageSet, frame: int) -> imfusion.computed_tomography.DetectorDataComponent
Get DetectorDataComponent for a specific frame.
- Parameters:
projections – SharedImageSet to access
frame – Frame index
- Returns:
Pointer to DetectorDataComponent or None if not found
- static get_or_create(projections: SharedImageSet, frame: int) DetectorDataComponent
Get or create DetectorDataComponent for a specific frame.
- Parameters:
projections – SharedImageSet to access
frame – Frame index to get/create component for
- Returns:
Reference to the DetectorDataComponent
- property matrix_world_to_detector
Matrix transforming from world to detector space.
4x4 transformation matrix that maps coordinates from world space to detector coordinate space. To get to image coordinates, the image matrix must be multiplied as well.
- property persistent_index_and_range
Persistent index and range for geometry tracking.
Stores the original frame index (x component) and range (y component) used by geometry generators to track the original location of frames.
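Example (a minimal sketch, assuming projections is a SharedImageSet with cone beam geometry; the transform is a 4x4 matrix as described above):
>>> det = ct.DetectorDataComponent.get_or_create(projections, 0)
>>> det.matrix_world_to_detector.shape
(4, 4)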
- class imfusion.computed_tomography.DetectorPropertiesDataComponent(self: DetectorPropertiesDataComponent)
Bases:
pybind11_object
Container for detector properties and calibration information.
This data component stores detector-specific parameters including physical dimensions, curvature properties, and calibration data required for accurate cone beam CT reconstruction.
Features:
Detector curvature modeling (flat, cylindrical, fan-flat)
Per-frame curved detector offsets
Curvature radii configuration
Integration with cone beam metadata
Automatic creation and retrieval utilities
Create an empty DetectorPropertiesDataComponent (default curvature=FLAT).
- static get(*args, **kwargs)
Overloaded function.
get(projections: imfusion.SharedImageSet) -> imfusion.computed_tomography.DetectorPropertiesDataComponent
Get DetectorPropertiesDataComponent from SharedImageSet.
- Parameters:
projections – SharedImageSet containing the component
- Returns:
Pointer to DetectorPropertiesDataComponent or None if not found
get(projections: imfusion.SharedImageSet) -> imfusion.computed_tomography.DetectorPropertiesDataComponent
Get DetectorPropertiesDataComponent from SharedImageSet.
- Parameters:
projections – SharedImageSet containing the component
- Returns:
Pointer to DetectorPropertiesDataComponent or None if not found
- static get_or_create(*args, **kwargs)
Overloaded function.
get_or_create(projections: imfusion.SharedImageSet) -> imfusion.computed_tomography.DetectorPropertiesDataComponent
Get or create DetectorPropertiesDataComponent from SharedImageSet.
- Parameters:
projections – SharedImageSet to get/create component for
- Returns:
Reference to the DetectorPropertiesDataComponent
get_or_create(projections: imfusion.SharedImageSet) -> imfusion.computed_tomography.DetectorPropertiesDataComponent
Get or create DetectorPropertiesDataComponent from SharedImageSet.
- Parameters:
projections – SharedImageSet to get/create component for
- Returns:
Reference to the DetectorPropertiesDataComponent
- id(self: DetectorPropertiesDataComponent) str
Get the data component identifier.
- Returns:
String identifier for the detector properties component
- property curvature
Detector curvature type (default is FLAT).
Specifies the physical curvature of the detector:
- FLAT: Flat panel detector
- CYLINDRICAL: Curved in x-direction
- FANFLAT: Fan beam flat detector
- Type:
enum
- property curved_offsets
Per-frame offsets (in mm) on the curved detector to the center of the image.
Vector of 2D offsets for each frame, specifying the displacement from the geometric center of the curved detector to the actual image center. Only relevant for curved detector geometries.
- property radii
Radii of the curvature per frame (optional).
If not set, the source-detector distance will be used as the radius. When specified, provides the radius of curvature for each frame, allowing for variable curvature in dynamic acquisitions.
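Example (a minimal sketch, assuming projections carries cone beam metadata and that the curvature property is writable, as its documented default suggests):
>>> props = ct.DetectorPropertiesDataComponent.get_or_create(projections)
>>> props.curvature = ct.DetectorCurvature.CYLINDRICAL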
- class imfusion.computed_tomography.Eos2D3DRegistration(self: imfusion.computed_tomography.Eos2D3DRegistration, shots: imfusion.SharedImageSet, volumes: imfusion.SharedImageSet, *, initialization_mode: imfusion.computed_tomography.InitializationMode = <InitializationMode.POINT_DIRECTION: 1>, anatomy_name: str = 'default', num_resolution_levels: int = 3)
Bases:
XRay2D3DRegistration
Specialized 2D/3D registration algorithm for EOS imaging systems.
This algorithm extends XRay2D3DRegistration with specific preprocessing and handling for EOS bi-planar X-ray systems, providing optimized registration for clinical EOS workflows.
Features:
EOS-specific geometry preprocessing
Bi-planar projection handling
Clinical workflow integration
Inherits all XRay2D3DRegistration features
Example
EOS system registration:
>>> eos_shots, *_ = imf.load("eos_projections.dcm") >>> volume, *_ = imf.load("patient_ct.nii") >>> reg = ct.Eos2D3DRegistration( ... eos_shots, ... volume, ... anatomy_name="spine" ... ) >>> reg.initialize() >>> reg()
Create EOS-specific 2D/3D registration algorithm.
- Parameters:
shots – EOS bi-planar projection shots
volumes – 3D volume data to register to the projections
initialization_mode – Registration initialization strategy (default: InitializationMode.POINT_DIRECTION — initialization using point and direction information)
anatomy_name – Identifier for storing registration matrices
num_resolution_levels – Number of multi-resolution levels
- compute(self: Eos2D3DRegistration) None
Run EOS-specific registration algorithm.
This performs EOS-specific preprocessing and then standard 2D/3D registration.
- class imfusion.computed_tomography.FullGeometryRepresentation(self: FullGeometryRepresentation, source_location: ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), world_to_detector_matrix: ndarray[numpy.float64[4, 4]] = array([[1., 0., 0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]]), persistent_index_and_range: ndarray[numpy.int32[2, 1]] = array([0, 0], dtype=int32), image_desc_world: ImageDescriptorWorld | None = None)
Bases:
pybind11_object
Complete geometry representation for a single frame.
This structure contains all the geometric information needed to describe the relationship between source, detector, and world coordinates for a single projection frame.
Create full geometry representation.
- Parameters:
source_location – Location of X-ray source in detector coordinates
world_to_detector_matrix – Transformation matrix from world to detector
persistent_index_and_range – Persistent index and range information
- static from_opencv_matrix(matrix: ndarray[numpy.float64[3, 4]], detector_width_px: int, detector_height_px: int, pixel_spacing: ndarray[numpy.float64[2, 1]]) FullGeometryRepresentation
Create full geometry representation from OpenCV matrix.
- Parameters:
matrix – OpenCV projection matrix P = K*[R|t] (3x4)
detector_width_px – Detector width in pixels
detector_height_px – Detector height in pixels
pixel_spacing – Pixel size in mm (x_spacing, y_spacing)
- Returns:
FullGeometryRepresentation with computed geometry parameters
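Example (a minimal sketch with placeholder matrix values only; the 3x4 matrix plays the role of an OpenCV P = K*[R|t]):
>>> import numpy as np
>>> P = np.array([[1000.0, 0.0, 512.0, 0.0],
...               [0.0, 1000.0, 512.0, 0.0],
...               [0.0, 0.0, 1.0, 1000.0]])
>>> geom = ct.FullGeometryRepresentation.from_opencv_matrix(P, 1024, 1024, np.array([0.2, 0.2]))
>>> K, R, t = geom.to_matrix_components_opencv_pixel()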
- static from_opengl_matrix(*args, **kwargs)
Overloaded function.
from_opengl_matrix(matrix: numpy.ndarray[numpy.float64[4, 4]], det_size: numpy.ndarray[numpy.float64[2, 1]]) -> imfusion.computed_tomography.FullGeometryRepresentation
Create full geometry representation from OpenGL matrix.
- Parameters:
matrix – OpenGL projection matrix
det_size – Size of the detector in mm
- Returns:
FullGeometryRepresentation instance
from_opengl_matrix(matrix: numpy.ndarray[numpy.float64[4, 4]], detector_width_px: int, detector_height_px: int, pixel_spacing: numpy.ndarray[numpy.float64[2, 1]]) -> imfusion.computed_tomography.FullGeometryRepresentation
Create full geometry representation from OpenGL matrix.
- Parameters:
matrix – OpenGL projection matrix
detector_width_px – Width of the detector in pixels
detector_height_px – Height of the detector in pixels
pixel_spacing – Pixel spacing in mm
- Returns:
FullGeometryRepresentation instance
- to_matrix_components_gl(self: FullGeometryRepresentation) object
Get OpenGL projection and modelview matrix components.
- Returns:
PM: 4x4 OpenGL projection matrix
MV: 4x4 OpenGL modelview matrix
- Return type:
Named tuple with fields (PM, MV)
- to_matrix_components_opencv_image(self: FullGeometryRepresentation) object
Get OpenCV camera matrix components in image coordinates.
- Returns:
K: 3x3 intrinsic camera matrix (mm coordinates, origin at center)
R: 3x3 rotation matrix
t: 3D translation vector
- Return type:
Named tuple with fields (K, R, t)
- to_matrix_components_opencv_pixel(self: FullGeometryRepresentation) object
Get OpenCV camera matrix components in pixel coordinates.
- Returns:
K: 3x3 intrinsic camera matrix (pixel coordinates, origin at top-left)
R: 3x3 rotation matrix
t: 3D translation vector
- Return type:
Named tuple with fields (K, R, t)
- to_matrix_gl_image(self: FullGeometryRepresentation) ndarray[numpy.float64[4, 4]]
Get full OpenGL projection matrix (PM * MV).
- Returns:
4x4 OpenGL projection matrix
- to_matrix_gl_image_top_left(self: FullGeometryRepresentation) ndarray[numpy.float64[4, 4]]
Get OpenGL projection matrix with y-axis flipped for top-left origin.
- Returns:
4x4 OpenGL projection matrix with flipped y-axis for image coordinates
- to_matrix_image_to_world(self: FullGeometryRepresentation) ndarray[numpy.float64[4, 4]]
Get transformation matrix from image to world coordinates.
- Returns:
4x4 transformation matrix (image -> world)
- to_matrix_opencv_image(self: FullGeometryRepresentation) ndarray[numpy.float64[3, 4]]
Get OpenCV projection matrix P = K * [R|t] in image coordinates.
- Returns:
3x4 projection matrix in image coordinates (mm, origin at center)
- to_matrix_opencv_pixel(self: FullGeometryRepresentation) ndarray[numpy.float64[3, 4]]
Get OpenCV projection matrix P = K * [R|t] in pixel coordinates.
- Returns:
3x4 projection matrix in pixel coordinates (origin at top-left)
- to_matrix_world_to_image(self: FullGeometryRepresentation) ndarray[numpy.float64[4, 4]]
Get projective transformation matrix from world to image coordinates.
- Returns:
4x4 transformation matrix (world -> image)
- property location_source_in_detector_space
Location of the X-ray source in detector coordinates.
- property matrix_world_to_detector
Matrix from world to detector coordinates.
- property persistent_index_and_range
Persistent index and range for geometry tracking.
- class imfusion.computed_tomography.GeometryGenerator
Bases:
pybind11_object
Base class for X-ray geometry generators.
Geometry generators are used to compute and manage the geometric configuration of cone beam CT systems. They can generate full geometry representations from parametric descriptions or fit parametric models to existing geometries.
- clone(self: GeometryGenerator) GeometryGenerator
Create a copy of this geometry generator.
- Returns:
Cloned GeometryGenerator
- id(self: GeometryGenerator) str
Get the identifier string for this generator type.
- Returns:
String identifier for the generator
- class imfusion.computed_tomography.GeometryPreset(self: GeometryPreset, value: int)
Bases:
pybind11_object
Predefined scanner geometry configurations for cone beam simulation.
Members:
NONE : No specific geometry preset.
HALF_SCAN : Half-rotation scan mode.
FULL_SCAN : Full-rotation scan mode.
SHORT_SCAN : Short scan mode.
SINGLE_XRAY_SHOT : Single X-ray shot (AP shot at 180 degrees).
BIPLANAR_SHOT : Biplanar shot (AP and LAT).
EOS_SCANNER : EOS scanner geometry.
C_ARM_SCANNER : C-Arm scanner geometry.
CARDIOVASCULAR_C_ARM : Cardiovascular C-Arm scanner geometry.
- BIPLANAR_SHOT = <GeometryPreset.BIPLANAR_SHOT: 4>
- CARDIOVASCULAR_C_ARM = <GeometryPreset.CARDIOVASCULAR_C_ARM: 7>
- C_ARM_SCANNER = <GeometryPreset.C_ARM_SCANNER: 6>
- EOS_SCANNER = <GeometryPreset.EOS_SCANNER: 5>
- FULL_SCAN = <GeometryPreset.FULL_SCAN: 1>
- HALF_SCAN = <GeometryPreset.HALF_SCAN: 0>
- NONE = <GeometryPreset.NONE: -1>
- SHORT_SCAN = <GeometryPreset.SHORT_SCAN: 2>
- SINGLE_XRAY_SHOT = <GeometryPreset.SINGLE_XRAY_SHOT: 3>
- property name
- property value
- class imfusion.computed_tomography.GlCBCTProjector(self: GlCBCTProjector, domain_ref: SharedImageSet, range_ref: SharedImageSet)
Bases:
CBCTProjector
GPU-accelerated Cone Beam CT projector using OpenGL compute shaders.
This projector provides high-performance forward projection and backprojection operations using GPU acceleration. It’s the preferred projector for reconstruction when GPU resources are available.
Features:
GPU-accelerated forward and backprojection
Support for complex detector geometries
Optimized memory usage with streaming
Compatible with all reconstruction solvers
Automatic fallback to CPU if needed
Example
Create a GPU projector for reconstruction:
>>> projector = ct.GlCBCTProjector(volume_ref, projections_ref) >>> # Used internally by reconstruction algorithms
Create a GPU projector with domain and range references.
- Parameters:
domain_ref – Reference SharedImageSet for the volume domain
range_ref – Reference SharedImageSet for the projection range
- clone(self: GlCBCTProjector) LinearOperator
Create a copy of this GPU projector.
- Returns:
Cloned GlCBCTProjector
- class imfusion.computed_tomography.InitializationMode(self: InitializationMode, value: int)
Bases:
pybind11_object
Initialization methods for 2D/3D registration.
Members:
NOOP : Manual initialization.
POINT_DIRECTION : Initialization using point and direction information.
KEYPOINTS : Automatic initialization using keypoint detection.
CUSTOM : Custom initialization method.
- CUSTOM = <InitializationMode.CUSTOM: 3>
- KEYPOINTS = <InitializationMode.KEYPOINTS: 2>
- NOOP = <InitializationMode.NOOP: 0>
- POINT_DIRECTION = <InitializationMode.POINT_DIRECTION: 1>
- property name
- property value
- class imfusion.computed_tomography.IntensityMode(self: IntensityMode, value: int)
Bases:
pybind11_object
Intensity mode for cone beam projections.
Defines how intensity values in cone beam projections should be interpreted for visualization and processing purposes.
Members:
ABSORPTION : Absorption mode - air appears bright
LINEAR_ATTENUATION : Linear attenuation mode - air appears dark
- ABSORPTION = <IntensityMode.ABSORPTION: 0>
- LINEAR_ATTENUATION = <IntensityMode.LINEAR_ATTENUATION: 1>
- property name
- property value
- class imfusion.computed_tomography.LinearOperator
Bases:
pybind11_object
Abstract base class for linear operators in CT reconstruction.
Linear operators represent the forward projection and backprojection operations that form the core of CT reconstruction algorithms.
- apply(*args, **kwargs)
Overloaded function.
apply(self: imfusion.computed_tomography.LinearOperator, expr_in: imfusion.imagemath.lazy.Expression, input: imfusion.SharedImageSet, output: imfusion.SharedImageSet, expr_out: imfusion.imagemath.lazy.Expression = None) -> None
Apply the linear operator with input and output expressions.
Computes: expr_out(A * expr_in(input))
- Parameters:
expr_in – Input expression to apply to input data
input – Input SharedImageSet (reference for domain)
output – Output SharedImageSet to store results
expr_out – Optional output expression to apply to results
- Raises:
AlgorithmExecutionError – If the operation fails
apply(self: imfusion.computed_tomography.LinearOperator, input: imfusion.SharedImageSet, output: imfusion.SharedImageSet, expr_out: imfusion.imagemath.lazy.Expression = None) -> None
Apply the linear operator directly to SharedImageSet.
- Parameters:
input – Input SharedImageSet
output – Output SharedImageSet to store results
expr_out – Optional output expression to apply to results
- Raises:
AlgorithmExecutionError – If the operation fails
- apply_adjoint(*args, **kwargs)
Overloaded function.
apply_adjoint(self: imfusion.computed_tomography.LinearOperator, expr_in: imfusion.imagemath.lazy.Expression, input: imfusion.SharedImageSet, output: imfusion.SharedImageSet, expr_out: imfusion.imagemath.lazy.Expression = None) -> None
Apply the adjoint (transpose) of the linear operator with expressions.
Computes: expr_out(A^T * expr_in(input))
- Parameters:
expr_in – Input expression to apply to input data
input – Input SharedImageSet (reference for range)
output – Output SharedImageSet to store results
expr_out – Optional output expression to apply to results
- Raises:
AlgorithmExecutionError – If the operation fails
apply_adjoint(self: imfusion.computed_tomography.LinearOperator, input: imfusion.SharedImageSet, output: imfusion.SharedImageSet, expr_out: imfusion.imagemath.lazy.Expression = None) -> None
Apply the adjoint (transpose) of the linear operator.
- Parameters:
input – Input SharedImageSet
output – Output SharedImageSet to store results
expr_out – Optional output expression to apply to results
- Raises:
AlgorithmExecutionError – If the operation fails
- clone(self: LinearOperator) LinearOperator
Create a copy of this linear operator.
- Returns:
Copy of the linear operator
- create_domain_sis(self: LinearOperator) SharedImageSet
Create a new SharedImageSet in the operator’s domain.
- Returns:
New SharedImageSet suitable for operator input
- create_range_sis(self: LinearOperator) SharedImageSet
Create a new SharedImageSet in the operator’s range.
- Returns:
New SharedImageSet suitable for operator output
- domain_ref(self: LinearOperator) SharedImageSet
Get the domain reference SharedImageSet.
- Returns:
Reference to domain SharedImageSet or None
- range_ref(self: LinearOperator) SharedImageSet
Get the range reference SharedImageSet.
- Returns:
Reference to range SharedImageSet or None
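Example (a minimal sketch of the forward/adjoint interface using the GlCBCTProjector documented below; volume and projections stand for SharedImageSets with valid cone beam geometry):
>>> projector = ct.GlCBCTProjector(volume, projections)
>>> simulated = projector.create_range_sis()
>>> projector.apply(volume, simulated)                   # forward projection: volume -> projections
>>> backprojected = projector.create_domain_sis()
>>> projector.apply_adjoint(projections, backprojected)  # backprojection: projections -> volume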
- class imfusion.computed_tomography.Mesh2D3DRegistration(self: imfusion.computed_tomography.Mesh2D3DRegistration, projections: imfusion.SharedImageSet, mesh: imfusion.Mesh, *, initialization_mode: imfusion.computed_tomography.InitializationMode = <InitializationMode.POINT_DIRECTION: 1>, anatomy_name: str = 'default', num_resolution_levels: int = 3)
Bases:
BaseAlgorithm
2D/3D registration algorithm using mesh-based synthetic CT generation.
This algorithm combines mesh-to-volume conversion with 2D/3D registration, allowing registration of X-ray projections to 3D mesh models by first generating a synthetic CT volume from the mesh.
Features:
Automatic synthetic CT generation from mesh
Full 2D/3D registration pipeline
Support for complex mesh geometries
Optimized for anatomical mesh models
Example
Register projections to a mesh model:
>>> projections, *_ = imf.load("projections.dcm") >>> mesh, *_ = imf.load("anatomy.obj") >>> reg = ct.Mesh2D3DRegistration( ... projections, ... mesh, ... num_resolution_levels=3 ... ) >>> reg()
Create mesh-based 2D/3D registration algorithm.
- Parameters:
projections – 2D X-ray projections with cone beam geometry
mesh – 3D mesh to register to the projections
initialization_mode – Registration initialization strategy (default: InitializationMode.POINT_DIRECTION — initialization using point and direction information)
anatomy_name – Identifier for storing registration matrices
num_resolution_levels – Number of multi-resolution levels
- compute(self: Mesh2D3DRegistration) None
Run mesh-based registration algorithm.
This generates a synthetic CT from the mesh and performs 2D/3D registration.
- reg_alg(self: Mesh2D3DRegistration) XRay2D3DRegistration
Get the nested XRay2D3DRegistrationAlgorithm.
- Returns:
Reference to the nested registration algorithm
- class imfusion.computed_tomography.MotionModelGenerator(*args, **kwargs)
Bases:
GeometryGenerator
Generator for motion model based on projection data.
This generator creates a motion model from the given projection data.
Overloaded function.
__init__(self: imfusion.computed_tomography.MotionModelGenerator, projections: imfusion.SharedImageSet) -> None
Create motion model generator from projection data.
- Parameters:
projections – SharedImageSet to create motion model from
__init__(self: imfusion.computed_tomography.MotionModelGenerator, *, base_generator: imfusion.computed_tomography.GeometryGenerator, transformation_config: imfusion.computed_tomography.RelativeTransformationConfig) -> None
Create MotionModelGenerator from a base generator and transformation configuration.
- Parameters:
base_generator – GeometryGenerator to wrap (will be cloned)
transformation_config – Transformation configuration and reference handling
- class imfusion.computed_tomography.MotionPreset(self: MotionPreset, value: int)
Bases:
pybind11_object
Predefined motion patterns for simulation.
Members:
NONE : No motion preset.
NO_MOTION : No motion - static acquisition.
FOLLOWING_DETECTOR : Following detector motion.
NOD_AFTER_25 : Nod motion after 25%.
NOD_AFTER_50 : Nod motion after 50%.
NOD_AFTER_75 : Nod motion after 75%.
DEVICE_JITTER : Device jitter motion.
RAMP : Ramp motion.
ISO_JITTER : Iso-jitter motion.
- DEVICE_JITTER = <MotionPreset.DEVICE_JITTER: 5>
- FOLLOWING_DETECTOR = <MotionPreset.FOLLOWING_DETECTOR: 1>
- ISO_JITTER = <MotionPreset.ISO_JITTER: 7>
- NOD_AFTER_25 = <MotionPreset.NOD_AFTER_25: 2>
- NOD_AFTER_50 = <MotionPreset.NOD_AFTER_50: 3>
- NOD_AFTER_75 = <MotionPreset.NOD_AFTER_75: 4>
- NONE = <MotionPreset.NONE: -1>
- NO_MOTION = <MotionPreset.NO_MOTION: 0>
- RAMP = <MotionPreset.RAMP: 6>
- property name
- property value
- class imfusion.computed_tomography.ParametricGeometryGenerator(*args, **kwargs)
Bases:
GeometryGenerator
Parametric geometry generator for regular CBCT acquisition trajectories.
This generator creates cone beam geometries using parametric descriptions suitable for standard circular or helical cone beam CT acquisitions. It provides a high-level interface for common scanning patterns.
Features:
Parametric trajectory description
Support for circular and helical trajectories
Configurable detector and source parameters
Automatic geometry computation for all frames
Overloaded function.
__init__(self: imfusion.computed_tomography.ParametricGeometryGenerator, geometry: imfusion.computed_tomography.ConeBeamGeometry) -> None
Create parametric generator from legacy geometry.
- Parameters:
geometry – ConeBeamGeometry to convert to parametric form
__init__(self: imfusion.computed_tomography.ParametricGeometryGenerator, projections: imfusion.SharedImageSet) -> None
Create parametric generator by fitting to projection data.
- Parameters:
projections – SharedImageSet with existing geometry to fit
__init__(self: imfusion.computed_tomography.ParametricGeometryGenerator, *, source_det_distance: float = 0.0, source_pat_distance: float = 0.0, angle_range: float = 0.0, det_src_x_shift: float = 0.0, det_rotation: float = 0.0, det_shear: numpy.ndarray[numpy.float64[2, 1]] = array([0., 0.]), transformation_setup: Optional[imfusion.computed_tomography.RelativeTransformationSetupWrapper] = None) -> None
Create ParametricGeometryGenerator and initialize all parameters.
- Parameters:
source_det_distance – Source to detector center distance (mm)
source_pat_distance – Source to patient distance (mm)
angle_range – Entire rotation range in degrees
det_src_x_shift – Combined detector and source shift in X (mm)
det_rotation – Detector rotation in degrees
det_shear – Detector shear (vec2, mm)
transformation_setup – Optional additional transformation setup
- static get_or_fit_parametric_geometry(projections: SharedImageSet) ParametricGeometryGenerator
Get or fit a parametric geometry from projection data.
- Parameters:
projections – SharedImageSet with cone beam data
- Returns:
ParametricGeometryGenerator instance
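Example (a minimal sketch; projections stands for a SharedImageSet with existing cone beam geometry):
>>> gen = ct.ParametricGeometryGenerator.get_or_fit_parametric_geometry(projections)
>>> sdd = gen.source_det_distance  # fitted source-detector distance in mm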
- property source_det_distance
Source to center of detector distance parameter (default is 0.0 mm).
- Type:
- property transformation_setup
Transformation setup for additional geometric transformations.
Provides access to relative transformation configuration including ISO-center parameters and direct transformation matrices. This allows applying additional transformations on top of the basic parametric geometry.
- class imfusion.computed_tomography.ProjectionType(self: ProjectionType, value: int)
Bases:
pybind11_object
Types of projection computation methods.
Members:
PHOTON_COUNT : Photon count projection.
LOG_CONVERTED_ATTENUATION : Log-converted attenuation projection.
- LOG_CONVERTED_ATTENUATION = <ProjectionType.LOG_CONVERTED_ATTENUATION: 1>
- PHOTON_COUNT = <ProjectionType.PHOTON_COUNT: 0>
- property name
- property value
- class imfusion.computed_tomography.Reconstruction(self: Reconstruction, projections: SharedImageSet, *, problem_mode: str = 'LeastSquaresProblem', solver_mode: str = 'FDK', region_of_interest_enabled: bool = False, shift: float = 0.0, scale: float = 1.0, subset_size: int = -1, nesterov: bool = False, max_iterations: int = 50, crop_fan: bool = False, force_positivity: bool = False, initial_reconstruction: SharedImageSet = None)
Bases:
BaseAlgorithm
Cone beam CT reconstruction algorithm.
This algorithm reconstructs a 3D volume from cone beam CT projection data using various optimization problems and solvers. It supports both analytical (FDK) and iterative reconstruction methods with customizable regularization.
Key Features:
Multiple reconstruction solvers (FDK, MLEM, SART, CG, SQS)
Various optimization problems (Least Squares, Tikhonov, TV regularization)
Region of Interest (ROI) reconstruction
GPU acceleration support
Automatic solver parameter optimization
Example
Basic FDK reconstruction:
>>> projections, *_ = imf.load("projections.dcm") >>> reconstructor = ct.Reconstruction( ... projections, ... problem_mode="LeastSquaresProblem", ... solver_mode="FDK" ... ) >>> volume = reconstructor() >>> print(f"Reconstructed volume: {volume.get().size()}")
Iterative reconstruction:
>>> reconstructor = ct.Reconstruction( ... projections, ... solver_mode="MLEM", ... max_iterations=100, ... force_positivity=True ... ) >>> volume = reconstructor()
- Parameters:
projections – Cone beam projection data with geometry information
problem_mode – Optimization problem formulation
solver_mode – Reconstruction solver to use
region_of_interest_enabled – Enable ROI reconstruction for focused regions
shift – Calibrated shift parameter
scale – Calibrated scale parameter
subset_size – Number of projections per subset for iterative methods (-1 = auto)
nesterov – Enable Nesterov acceleration for faster convergence
max_iterations – Maximum number of iterations for iterative solvers
crop_fan – Apply fan beam cropping to reduce artifacts
force_positivity – Enforce positivity constraint in reconstruction
initial_reconstruction – Optional initial volume for iterative methods
- compute(self: Reconstruction) SharedImageSet
Run reconstruction and return the result.
- Returns:
SharedImageSet containing the reconstructed volume
- property problem_mode
Optimization problem formulation to use for reconstruction.
- property region_of_interest_enabled
Enable ROI reconstruction for focused regions.
- property scale
Calibrated scale parameter.
- property shift
Calibrated shift parameter.
- property solver_mode
Reconstruction solver to use.
- property subset_size
Number of projections per subset for FDK and iterative methods
- class imfusion.computed_tomography.RelativeGlobalTransformationGenerator(*args, **kwargs)
Bases:
GeometryGenerator
Generator for one relative transformation applied to all frames.
This generator applies a single transformation to all frames of a cone beam acquisition relative to an existing geometry. It wraps an existing generator and applies the transformation on top.
Features:
Single transformation for all frames
Relative to existing geometry (wraps base generator)
Global transformation setup with ISO-center parameters
Automatic base generator selection or snapshot creation
Overloaded function.
__init__(self: imfusion.computed_tomography.RelativeGlobalTransformationGenerator, projections: imfusion.SharedImageSet) -> None
Create relative global transformation generator from projection data.
- Parameters:
projections – SharedImageSet to create relative transformations for
__init__(self: imfusion.computed_tomography.RelativeGlobalTransformationGenerator, *, base_generator: imfusion.computed_tomography.GeometryGenerator, transformation_setup: imfusion.computed_tomography.RelativeTransformationSetupWrapper) -> None
Create RelativeGlobalTransformationGenerator from a base generator and transformation setup.
- Parameters:
base_generator – GeometryGenerator to wrap (will be cloned)
transformation_setup – Transformation applied to all frames
- property transformation_setup
Transformation setup for the global transformation.
Configuration for the transformation applied to all frames, including ISO-center parameters and transformation matrix.
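Example (illustrative sketch; assumes ct is imfusion.computed_tomography and projections is cone beam data as in the Reconstruction example; the ISO-center values are arbitrary placeholders):
>>> setup = ct.RelativeTransformationSetupWrapper(
...     iso_center_offset=[0.0, 0.0, 10.0],
...     iso_rotation=[0.0, 5.0, 0.0],
...     iso_rotation_center=[0.0, 0.0, 0.0]
... )
>>> generator = ct.RelativeGlobalTransformationGenerator(
...     base_generator=ct.SnapshotGenerator(projections),
...     transformation_setup=setup
... )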
- class imfusion.computed_tomography.RelativeTransformationConfig(self: imfusion.computed_tomography.RelativeTransformationConfig, *, keep_source_fixed_relative_to_detector: bool = True, reference: imfusion.computed_tomography.TransformationReference = <TransformationReference.WORLD: 0>)
Bases:
pybind11_object
Configuration for relative transformations in geometry generators.
This structure defines how transformations are applied relative to different coordinate systems and how the source position is handled during transformations.
Create RelativeTransformationConfig with explicit settings.
- Parameters:
keep_source_fixed_relative_to_detector – If true, source moves with detector
reference – Reference coordinate system (WORLD or DETECTOR)
- property keep_source_fixed_relative_to_detector
Keep the source fixed relative to the detector (default is True).
When True, the source moves with the detector during transformations. When False, the world position of the source stays fixed.
- Type:
bool
- property reference
Reference coordinate system for the transformation (default is WORLD).
Specifies whether transformations are applied relative to world coordinates or detector coordinates.
- Type:
enum
- class imfusion.computed_tomography.RelativeTransformationSetupWrapper(*args, **kwargs)
Bases:
RelativeTransformationConfig
Wrapper for relative transformation setup with ISO-center parameters.
This class provides both direct transformation matrix access and ISO-center based parametric transformation definition. It handles signal connections to automatically update the transformation matrix when ISO-center parameters change.
Features:
Direct transformation matrix specification
ISO-center based parametric transformation
Automatic matrix computation from ISO parameters
Reference coordinate system control
Overloaded function.
__init__(self: imfusion.computed_tomography.RelativeTransformationSetupWrapper, *, transformation: numpy.ndarray[numpy.float64[4, 4]] = array([[1., 0., 0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])) -> None
Create RelativeTransformationSetupWrapper with direct transformation matrix.
- Parameters:
transformation – 4x4 transformation matrix
__init__(self: imfusion.computed_tomography.RelativeTransformationSetupWrapper, *, iso_center_offset: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), iso_rotation: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), iso_rotation_center: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.])) -> None
Create RelativeTransformationSetupWrapper with ISO-center parameters.
- Parameters:
iso_center_offset – ISO-center offset in mm (world)
iso_rotation – ISO-center rotation Euler angles in degrees
iso_rotation_center – Center of ISO rotation in mm (world)
- property iso_center_offset
ISO-center offset in mm in world coordinates (default is [0,0,0]).
3D translation offset applied to the ISO-center position. Only used when use_iso_center_parameters is True.
- Type:
ndarray[numpy.float64[3, 1]]
- property iso_rotation
ISO-center rotation in Euler angles in degrees (default is [0,0,0]).
3D rotation applied around the ISO-center rotation center. Only used when use_iso_center_parameters is True.
- Type:
ndarray[numpy.float64[3, 1]]
- property iso_rotation_center
Center of ISO-center rotation in mm in world coordinates (default is [0,0,0]).
3D point around which ISO-center rotations are applied. Only used when use_iso_center_parameters is True.
- Type:
ndarray[numpy.float64[3, 1]]
- property transformation
Direct transformation matrix (default is Identity).
4x4 transformation matrix applied to the geometry. This is the primary interface - changes to ISO-center parameters are automatically propagated to this matrix.
- Type:
ndarray[numpy.float64[4, 4]]
- property use_iso_center_parameters
Whether to use ISO-center parameters instead of direct transformation matrix (default is False).
When True, the transformation is computed from ISO-center offset, rotation, and rotation center. When False, the direct transformation matrix is used. Only available when reference is set to WORLD.
- Type:
bool
- class imfusion.computed_tomography.ShotTargetPointsType
Bases:
pybind11_object
Parameter type for storing shot target points.
This parameter type is used to store a list of 3D points that represent the target points for a shot.
- class imfusion.computed_tomography.SnapshotGenerator(self: SnapshotGenerator, projections: SharedImageSet)
Bases:
GeometryGenerator
Geometry generator that stores snapshots of existing geometries.
This generator captures and stores the complete geometry state of a SharedImageSet, allowing exact reproduction of complex or irregular geometric configurations.
Create snapshot generator from projection data.
- Parameters:
projections – SharedImageSet to capture geometry from
- class imfusion.computed_tomography.SourceDataComponent(self: SourceDataComponent, *, location_source_in_detector_space: ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]))
Bases:
DataComponentBase
Data component storing X-ray source position for a single frame.
This component represents the location of the X-ray source in detector coordinate space for cone beam geometry calculations. Used in conjunction with DetectorDataComponent to define complete frame geometry.
Create SourceDataComponent with source position for one frame.
- Parameters:
location_source_in_detector_space – Position of X-ray source in detector space (z is negative by convention).
- static get(*args, **kwargs)
Overloaded function.
get(projections: imfusion.SharedImageSet, frame: int) -> imfusion.computed_tomography.SourceDataComponent
Get SourceDataComponent for a specific frame.
- Parameters:
projections – SharedImageSet to access
frame – Frame index
- Returns:
Pointer to SourceDataComponent or None if not found
get(projections: imfusion.SharedImageSet, frame: int) -> imfusion.computed_tomography.SourceDataComponent
Get SourceDataComponent for a specific frame.
- Parameters:
projections – SharedImageSet to access
frame – Frame index
- Returns:
Pointer to SourceDataComponent or None if not found
- static get_or_create(projections: SharedImageSet, frame: int) SourceDataComponent
Get or create SourceDataComponent for a specific frame.
- Parameters:
projections – SharedImageSet to access
frame – Frame index to get/create component for
- Returns:
Reference to the SourceDataComponent
- property location_source_in_detector_space
Position of the X-ray source in detector coordinate space.
This position is used for geometric calibration and projection calculations.
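Example (illustrative sketch; assumes ct is imfusion.computed_tomography and projections is a cone beam SharedImageSet; the source position is an arbitrary placeholder):
>>> comp = ct.SourceDataComponent.get_or_create(projections, 0)
>>> comp.location_source_in_detector_space = [0.0, 0.0, -1000.0]  # z negative by convention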
- class imfusion.computed_tomography.SyntheticCTFromMesh(self: SyntheticCTFromMesh, mesh: Mesh = None, *, spacing: float = 1.0, inside_decay: float = 0.5, first_outside_decay: float = 10.0, first_outside_amplitude: float = 1.0, second_outside_decay: float = 0.2, second_outside_amplitude: float = 0.2)
Bases:
BaseAlgorithm
Synthetic CT from Mesh algorithm.
This algorithm generates synthetic CT images from a Mesh object. It does so by setting intensity values as a configurable function of the (signed) distance function to the mesh.
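Example (illustrative sketch; assumes mesh is an already loaded Mesh, ct is imfusion.computed_tomography, and that the algorithm instance is callable like the Reconstruction example above; parameter values are arbitrary):
>>> algo = ct.SyntheticCTFromMesh(mesh, spacing=0.5, inside_decay=0.3)
>>> synthetic_ct = algo()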
Create SyntheticCTFromMeshAlgorithm instance.
- Parameters:
mesh – The input Mesh from which to create a synthetic CT
spacing – Spacing of the output volume in mm
inside_decay – Decay of intensity values inside the mesh (rate constant)
first_outside_decay – Decay of intensity values outside the mesh for the first component (rate constant)
first_outside_amplitude – Amplitude of the first component outside the mesh
second_outside_decay – Decay of intensity values outside the mesh for the second component (rate constant)
second_outside_amplitude – Amplitude of the second component outside the mesh
- property first_outside_amplitude
Amplitude of the first component outside the mesh.
- property first_outside_decay
Decay of intensity values outside the mesh for the first component (rate constant).
- property inside_decay
Decay of intensity values inside the mesh (rate constant).
- property second_outside_amplitude
Amplitude of the second component outside the mesh.
- property second_outside_decay
Decay of intensity values outside the mesh for the second component (rate constant).
- property spacing
Spacing of the output volume in mm.
- class imfusion.computed_tomography.TransformationReference(self: TransformationReference, value: int)
Bases:
pybind11_object
Reference frame for geometry transformations.
This enum defines the reference frame used for relative transformations in cone beam geometry computations.
Members:
WORLD : Transformation relative to world coordinates
DETECTOR : Transformation relative to detector coordinates
- DETECTOR = <TransformationReference.DETECTOR: 1>
- WORLD = <TransformationReference.WORLD: 0>
- property name
- property value
- class imfusion.computed_tomography.XRay2D3DRegistration(self: imfusion.computed_tomography.XRay2D3DRegistration, projections: imfusion.SharedImageSet, volumes: imfusion.SharedImageSet, *, initialization_mode: imfusion.computed_tomography.InitializationMode = <InitializationMode.POINT_DIRECTION: 1>, anatomy_name: str = 'default', num_resolution_levels: int = 3)
Bases:
BaseAlgorithm
High-level 2D/3D X-ray registration algorithm for cone beam projections.
This algorithm performs registration between 2D X-ray projections and a 3D volume using various initialization strategies and multi-resolution optimization. It acts as a wrapper around lower-level registration components providing a convenient high-level interface.
Key Features:
Multiple initialization strategies (point-direction, keypoints, manual)
Multi-resolution registration for robustness
Support for cone beam and fan beam geometries
In-place geometry correction of projections
History tracking for debugging and analysis
Example
Basic 2D/3D registration:
>>> projections, *_ = imf.load("projections.dcm")
>>> volume, *_ = imf.load("volume.nii")
>>> reg = ct.XRay2D3DRegistration(
...     projections,
...     volume,
...     initialization_mode=ct.InitializationMode.POINT_DIRECTION,
...     num_resolution_levels=3
... )
>>> reg.initialize()
>>> reg()
Create 2D/3D X-ray registration algorithm.
- Parameters:
projections – 2D X-ray projections with cone beam geometry
volumes – 3D volume data to register to the projections
initialization_mode – Registration initialization strategy (default: InitializationMode.POINT_DIRECTION — initialization using point and direction information)
anatomy_name – Identifier for storing registration matrices
num_resolution_levels – Number of multi-resolution levels
- compute(self: XRay2D3DRegistration) None
Run registration algorithm and modify projections in-place.
Note: This modifies the input projections’ geometry directly.
- property anatomy_name
Name used for storing registration matrices in volume transformation stash.
- property current_resolution_level
Current resolution level being processed.
- property initialization_mode
Registration initialization strategy to use.
- property num_resolution_levels
Number of multi-resolution levels for coarse-to-fine optimization.
- class imfusion.computed_tomography.XRay2D3DRegistrationHistoryEntry(self: XRay2D3DRegistrationHistoryEntry)
Bases:
Configurable
Entry in the 2D/3D registration history.
Tracks the progress and state of registration computations, including transformation parameters, similarity values, and computation phases.
Create an empty XRay2D3DRegistrationHistoryEntry.
- property comment
Additional comments or metadata for this history entry.
- property computation_phase
Phase of computation when this entry was recorded.
- property id
Unique identifier for this history entry.
- class imfusion.computed_tomography.XRay2D3DRegistrationInitialization
Bases:
Configurable
Base class for 2D/3D registration initialization strategies.
This class defines the interface for custom initialization methods that can be used to provide initial pose estimates for registration.
- can_initialize(self: XRay2D3DRegistrationInitialization) bool
Check if all required parameters are set and the initialization can be performed.
- Returns:
True if initialization is possible, False otherwise
- class imfusion.computed_tomography.XRay2D3DRegistrationInitializationKeyPoints
Bases:
XRay2D3DRegistrationInitialization
Keypoint-based initialization for 2D/3D registration.
This initialization method uses arbitrary numbers of keypoint correspondences between projections and volumes to compute an initial transformation using least-squares optimization.
Features:
Support for multiple named keypoints
Robust optimization using Levenberg-Marquardt
Automatic reprojection error minimization
- set_shot_keypoint(self: XRay2D3DRegistrationInitializationKeyPoints, shot_num: int, key: str, value: ndarray[numpy.float64[3, 1]]) None
Set a keypoint on a specific shot.
- Parameters:
shot_num – Shot index (0-based)
key – Unique name/identifier for this keypoint
value – 3D position of the keypoint on the shot (image coordinates)
Example
>>> keypoint_init.set_shot_keypoint(0, "landmark1", [100.0, 150.0, 0.0])
- set_volume_keypoint(self: XRay2D3DRegistrationInitializationKeyPoints, key: str, value: ndarray[numpy.float64[3, 1]]) None
Set a keypoint on the volume.
- Parameters:
key – Unique name/identifier for this keypoint (should match shot keypoint)
value – 3D position of the keypoint on the volume (world coordinates)
Example
>>> keypoint_init.set_volume_keypoint("landmark1", [10.5, -5.2, 45.8])
- unset_shot_keypoint(self: XRay2D3DRegistrationInitializationKeyPoints, shot_num: int, key: str) None
Remove a keypoint from a specific shot.
- Parameters:
shot_num – Shot index (0-based)
key – Name/identifier of the keypoint to remove
- unset_volume_keypoint(self: XRay2D3DRegistrationInitializationKeyPoints, key: str) None
Remove a keypoint from the volume.
- Parameters:
key – Name/identifier of the keypoint to remove
- class imfusion.computed_tomography.XRay2D3DRegistrationInitializationPointDirection
Bases:
XRay2D3DRegistrationInitialization
Point-direction based initialization for 2D/3D registration.
This initialization method uses user-defined point correspondences between the 2D projections and 3D volume to compute an initial transformation estimate.
The method requires:
Two points on each projection image
Two corresponding points on the 3D volume
Optional second direction for full pose estimation
- shot_target_points(self: XRay2D3DRegistrationInitializationPointDirection, index: int) ShotTargetPointsType
Get target points parameter for a specific shot.
- Parameters:
index – Shot index (0-based)
- Returns:
Parameter object for setting target points on this shot
Example
>>> init.shot_target_points(0).setValue([point1, point2])
- property volume_second_direction_points
Points on the volume specifying the direction of the first shot.
These points help constrain the rotational degrees of freedom by providing directional information for pose estimation.
- property volume_target_points
Target points on the 3D volume (in world coordinates).
Usually two points that correspond to points marked on the projections. These points define the anatomical landmarks for registration.
- imfusion.computed_tomography.apply_full_geometry_representation(*args, **kwargs)
Overloaded function.
apply_full_geometry_representation(projections: imfusion.SharedImageSet, geometry: imfusion.computed_tomography.FullGeometryRepresentation, frame: int) -> None
Apply geometry representation to a single frame.
- Parameters:
projections – SharedImageSet to modify
geometry – FullGeometryRepresentation to apply
frame – Frame index to apply geometry to
apply_full_geometry_representation(projections: imfusion.SharedImageSet, geometry_list: list[imfusion.computed_tomography.FullGeometryRepresentation]) -> None
Apply geometry representations to all frames.
- Parameters:
projections – SharedImageSet to modify
geometry_list – List of FullGeometryRepresentation for each frame
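Example (illustrative sketch combining this with per_frame_geometry(); assumes ct is imfusion.computed_tomography and projections is cone beam data):
>>> geometries = ct.per_frame_geometry(projections)                        # one entry per frame
>>> ct.apply_full_geometry_representation(projections, geometries[0], 0)   # single frame
>>> ct.apply_full_geometry_representation(projections, geometries)         # all frames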
- imfusion.computed_tomography.backproject(cone_beam_data: imfusion.SharedImageSet, frame: int, points_2d: list[numpy.ndarray[numpy.float64[2, 1]]], coordinate_type_2d: imfusion.imagemath.CoordinateType = <CoordinateType.TEXTURE: 2>) list[tuple[ndarray[numpy.float64[3, 1]], ndarray[numpy.float64[3, 1]]]]
Compute rays from source through detector points.
- Parameters:
cone_beam_data – SharedImageSet with cone beam data
frame – Frame index for backprojection
points_2d – List of 2D points on detector
coordinate_type_2d – Input coordinate system
- Returns:
List of ray line segments from source through detector points
Computes the 3D rays in world coordinates from the X-ray source through the specified detector points.
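Example (illustrative sketch; assumes ct is imfusion.computed_tomography and projections is cone beam data; the detector point uses the default TEXTURE coordinates, and each returned segment is read here as a pair of 3D world points along the ray):
>>> rays = ct.backproject(projections, 0, [[0.5, 0.5]])   # ray through the detector centre
>>> start, end = rays[0]                                   # two 3D points in world coordinates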
- imfusion.computed_tomography.convert_cone_beam_geometry_to_per_frame_geometry(geometry: ConeBeamGeometry, num_frames: int) list[FullGeometryRepresentation]
Convert legacy ConeBeamGeometry to modern per-frame geometry.
- Parameters:
geometry – Legacy ConeBeamGeometry object
num_frames – Number of projection frames
- Returns:
List of FullGeometryRepresentation objects for each frame
- imfusion.computed_tomography.frame_geometry_from_opencv_matrix(matrix: ndarray[numpy.float64[3, 4]], detector_width_px: int, detector_height_px: int, pixel_spacing: ndarray[numpy.float64[2, 1]]) FullGeometryRepresentation
Create geometry representation from OpenCV projection matrix.
- Parameters:
matrix – OpenCV projection matrix P = K*[R|t] (3x4)
detector_width_px – Detector width in pixels
detector_height_px – Detector height in pixels
pixel_spacing – Pixel size in mm (x_spacing, y_spacing)
- Returns:
FullGeometryRepresentation with computed geometry parameters
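Example (illustrative sketch with made-up intrinsics and extrinsics; assumes numpy and ct = imfusion.computed_tomography):
>>> import numpy as np
>>> K = np.array([[1000.0, 0.0, 256.0],
...               [0.0, 1000.0, 256.0],
...               [0.0, 0.0, 1.0]])                          # hypothetical pixel intrinsics
>>> Rt = np.hstack([np.eye(3), [[0.0], [0.0], [500.0]]])     # extrinsics [R|t]
>>> geo = ct.frame_geometry_from_opencv_matrix(K @ Rt, 512, 512, [0.5, 0.5])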
- imfusion.computed_tomography.frame_geometry_from_opengl_matrix(*args, **kwargs)
Overloaded function.
frame_geometry_from_opengl_matrix(matrix: numpy.ndarray[numpy.float64[4, 4]], detector_size: numpy.ndarray[numpy.float64[2, 1]]) -> imfusion.computed_tomography.FullGeometryRepresentation
Create geometry representation from OpenGL projection matrix and detector size.
- Parameters:
matrix – OpenGL projection matrix (4x4)
detector_size – Physical detector size in mm (width, height)
- Returns:
FullGeometryRepresentation with computed geometry parameters
frame_geometry_from_opengl_matrix(matrix: numpy.ndarray[numpy.float64[4, 4]], detector_width_px: int, detector_height_px: int, pixel_spacing: numpy.ndarray[numpy.float64[2, 1]]) -> imfusion.computed_tomography.FullGeometryRepresentation
Create geometry representation from OpenGL matrix with pixel information.
- Parameters:
matrix – OpenGL projection matrix (4x4)
detector_width_px – Detector width in pixels
detector_height_px – Detector height in pixels
pixel_spacing – Pixel size in mm (x_spacing, y_spacing)
- Returns:
FullGeometryRepresentation with proper image descriptor
- imfusion.computed_tomography.get_detector_scaling(projection_matrix: ndarray[numpy.float64[4, 4]], geometry: ConeBeamGeometry) object
Get detector width and source-detector distance from projection matrix.
- Parameters:
projection_matrix – 4x4 projection matrix
geometry – ConeBeamGeometry object
- Returns:
Named tuple with fields (detector_width, source_detector_distance)
- imfusion.computed_tomography.is_cone_beam_data(shared_image_set: SharedImageSet) bool
Check if SharedImageSet fulfills the ConeBeamData concept.
- Parameters:
shared_image_set – SharedImageSet to check
- Returns:
True if the SharedImageSet has ConeBeamMetadata, topDown=true, and modality=XRAY, making it valid cone beam data.
- imfusion.computed_tomography.make_cone_beam_data(shared_image_set: SharedImageSet) None
Make an existing SharedImageSet fulfill the ConeBeamData concept.
- Parameters:
shared_image_set – SharedImageSet to modify
Modifies the SharedImageSet in-place to add ConeBeamMetadata and set proper configuration for cone beam data usage.
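Example (illustrative sketch; assumes sis is a SharedImageSet of X-ray projections and ct is imfusion.computed_tomography):
>>> if not ct.is_cone_beam_data(sis):
...     ct.make_cone_beam_data(sis)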
- imfusion.computed_tomography.matrix_components_gl(*args, **kwargs)
Overloaded function.
matrix_components_gl(geometry: imfusion.computed_tomography.FullGeometryRepresentation) -> object
Get OpenGL projection and modelview matrix components.
- Parameters:
geometry – FullGeometryRepresentation
- Returns:
PM: 4x4 OpenGL projection matrix
MV: 4x4 OpenGL modelview matrix
- Return type:
Named tuple with fields (PM, MV)
matrix_components_gl(projections: imfusion.SharedImageSet, frame: int) -> object
Get OpenGL matrix components from SharedImageSet frame.
- Parameters:
projections – SharedImageSet with cone beam data
frame – Frame index
- Returns:
Named tuple with fields (PM, MV) OpenGL matrices
- imfusion.computed_tomography.matrix_components_opencv_to_image(*args, **kwargs)
Overloaded function.
matrix_components_opencv_to_image(geometry: imfusion.computed_tomography.FullGeometryRepresentation) -> object
Get OpenCV camera matrix components in image coordinates.
- Parameters:
geometry – FullGeometryRepresentation
- Returns:
K: 3x3 intrinsic camera matrix (mm coordinates, origin at center)
R: 3x3 rotation matrix
t: 3D translation vector
- Return type:
Named tuple with fields (K, R, t)
matrix_components_opencv_to_image(projections: imfusion.SharedImageSet, frame: int) -> object
Get OpenCV camera matrix components from SharedImageSet frame.
- Parameters:
projections – SharedImageSet with cone beam data
frame – Frame index
- Returns:
Named tuple with fields (K, R, t) in image coordinates
- imfusion.computed_tomography.matrix_components_opencv_to_pixel(*args, **kwargs)
Overloaded function.
matrix_components_opencv_to_pixel(geometry: imfusion.computed_tomography.FullGeometryRepresentation) -> object
Get OpenCV camera matrix components in pixel coordinates.
- Parameters:
geometry – FullGeometryRepresentation
- Returns:
K: 3x3 intrinsic camera matrix (pixel coordinates, origin at top-left)
R: 3x3 rotation matrix
t: 3D translation vector
- Return type:
Named tuple with fields (K, R, t)
matrix_components_opencv_to_pixel(projections: imfusion.SharedImageSet, frame: int) -> object
Get OpenCV camera matrix components from SharedImageSet frame.
- Parameters:
projections – SharedImageSet with cone beam data
frame – Frame index
- Returns:
Named tuple with fields (K, R, t) in pixel coordinates
- imfusion.computed_tomography.matrix_from_image_to_world(*args, **kwargs)
Overloaded function.
matrix_from_image_to_world(geometry: imfusion.computed_tomography.FullGeometryRepresentation) -> numpy.ndarray[numpy.float64[4, 4]]
Get transformation matrix from image to world coordinates.
- Parameters:
geometry – FullGeometryRepresentation
- Returns:
4x4 transformation matrix (image -> world)
matrix_from_image_to_world(projections: imfusion.SharedImageSet, frame: int) -> numpy.ndarray[numpy.float64[4, 4]]
Get image-to-world transformation from SharedImageSet frame.
- Parameters:
projections – SharedImageSet with cone beam data
frame – Frame index
- Returns:
4x4 transformation matrix (image -> world)
- imfusion.computed_tomography.matrix_from_world_to_image(*args, **kwargs)
Overloaded function.
matrix_from_world_to_image(geometry: imfusion.computed_tomography.FullGeometryRepresentation) -> numpy.ndarray[numpy.float64[4, 4]]
Get projective transformation matrix from world to image coordinates.
- Parameters:
geometry – FullGeometryRepresentation
- Returns:
4x4 transformation matrix (world -> image)
matrix_from_world_to_image(projections: imfusion.SharedImageSet, frame: int) -> numpy.ndarray[numpy.float64[4, 4]]
Get projective world-to-image transformation from SharedImageSet frame.
- Parameters:
projections – SharedImageSet with cone beam data
frame – Frame index
- Returns:
4x4 transformation matrix (world -> image)
- imfusion.computed_tomography.matrix_gl_to_image(*args, **kwargs)
Overloaded function.
matrix_gl_to_image(geometry: imfusion.computed_tomography.FullGeometryRepresentation) -> numpy.ndarray[numpy.float64[4, 4]]
Get full OpenGL projection matrix (PM * MV).
- Parameters:
geometry – FullGeometryRepresentation
- Returns:
4x4 OpenGL projection matrix
matrix_gl_to_image(projections: imfusion.SharedImageSet, frame: int) -> numpy.ndarray[numpy.float64[4, 4]]
Get OpenGL projection matrix from SharedImageSet frame.
- Parameters:
projections – SharedImageSet with cone beam data
frame – Frame index
- Returns:
4x4 OpenGL projection matrix (supports legacy geometry)
- imfusion.computed_tomography.matrix_gl_to_image_top_left(*args, **kwargs)
Overloaded function.
matrix_gl_to_image_top_left(geometry: imfusion.computed_tomography.FullGeometryRepresentation) -> numpy.ndarray[numpy.float64[4, 4]]
Get OpenGL projection matrix with y-axis flipped for top-left origin.
- Parameters:
geometry – FullGeometryRepresentation
- Returns:
4x4 OpenGL projection matrix with flipped y-axis for image coordinates
matrix_gl_to_image_top_left(projections: imfusion.SharedImageSet, frame: int) -> numpy.ndarray[numpy.float64[4, 4]]
Get OpenGL projection matrix with y-axis flip from SharedImageSet.
- Parameters:
projections – SharedImageSet with cone beam data
frame – Frame index
- Returns:
4x4 OpenGL projection matrix with flipped y-axis (supports legacy geometry)
- imfusion.computed_tomography.matrix_opencv_to_image(*args, **kwargs)
Overloaded function.
matrix_opencv_to_image(geometry: imfusion.computed_tomography.FullGeometryRepresentation) -> numpy.ndarray[numpy.float64[3, 4]]
Get OpenCV projection matrix P = K * [R|t] in image coordinates.
- Parameters:
geometry – FullGeometryRepresentation
- Returns:
3x4 projection matrix in image coordinates (mm, origin at center)
matrix_opencv_to_image(projections: imfusion.SharedImageSet, frame: int) -> numpy.ndarray[numpy.float64[3, 4]]
Get OpenCV projection matrix from SharedImageSet frame.
- Parameters:
projections – SharedImageSet with cone beam data
frame – Frame index
- Returns:
3x4 projection matrix in image coordinates
- imfusion.computed_tomography.matrix_opencv_to_pixel(*args, **kwargs)
Overloaded function.
matrix_opencv_to_pixel(geometry: imfusion.computed_tomography.FullGeometryRepresentation) -> numpy.ndarray[numpy.float64[3, 4]]
Get OpenCV projection matrix P = K * [R|t] in pixel coordinates.
- Parameters:
geometry – FullGeometryRepresentation
- Returns:
3x4 projection matrix in pixel coordinates (origin at top-left)
matrix_opencv_to_pixel(projections: imfusion.SharedImageSet, frame: int) -> numpy.ndarray[numpy.float64[3, 4]]
Get OpenCV projection matrix from SharedImageSet frame.
- Parameters:
projections – SharedImageSet with cone beam data
frame – Frame index
- Returns:
3x4 projection matrix in pixel coordinates
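Example (illustrative sketch of projecting a 3D world point manually with the 3x4 matrix; assumes numpy, cone beam projections as above, and a hypothetical 3-vector point_world):
>>> import numpy as np
>>> P = ct.matrix_opencv_to_pixel(projections, 0)    # 3x4 matrix, pixel coordinates
>>> p = P @ np.append(point_world, 1.0)              # homogeneous projection
>>> pixel = p[:2] / p[2]                             # perspective divide to get (u, v)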
- imfusion.computed_tomography.per_frame_geometry(projections: SharedImageSet) list[FullGeometryRepresentation]
Collect the per-frame geometry for all frames in a SharedImageSet.
- Parameters:
projections – SharedImageSet with cone beam data
- Returns:
List of FullGeometryRepresentation objects, one per frame
- imfusion.computed_tomography.project(cone_beam_data: imfusion.SharedImageSet, frame: int, points_3d: list[numpy.ndarray[numpy.float64[3, 1]]], coordinate_type_2d: imfusion.imagemath.CoordinateType = <CoordinateType.TEXTURE: 2>) list[ndarray[numpy.float64[2, 1]]]
Project 3D points onto 2D detector plane.
- Parameters:
cone_beam_data – SharedImageSet with cone beam data
frame – Frame index for projection
points_3d – List of 3D points to project
coordinate_type_2d – Output coordinate system (Texture, Image, World, or Pixel)
- Returns:
List of 2D points on detector plane
Projects 3D world coordinates to 2D detector coordinates using the geometry information stored in the cone beam data.
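Example (illustrative sketch; assumes ct is imfusion.computed_tomography and projections is cone beam data; the 3D point is a placeholder):
>>> from imfusion import imagemath
>>> pts = ct.project(projections, 0, [[0.0, 0.0, 0.0]],
...                  coordinate_type_2d=imagemath.CoordinateType.PIXEL)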
- imfusion.computed_tomography.reset_persistent_index_and_range(projections: SharedImageSet) None
Reset persistent index and range for a SharedImageSet.
- Parameters:
projections – SharedImageSet to reset indices for
- imfusion.computed_tomography.source_position_world(*args, **kwargs)
Overloaded function.
source_position_world(geometry: imfusion.computed_tomography.FullGeometryRepresentation) -> numpy.ndarray[numpy.float64[3, 1]]
Get X-ray source position in world coordinates.
- Parameters:
geometry – FullGeometryRepresentation
- Returns:
3D position of X-ray source in world coordinates
source_position_world(projections: imfusion.SharedImageSet, frame: int) -> numpy.ndarray[numpy.float64[3, 1]]
Get source position from SharedImageSet frame.
- Parameters:
projections – SharedImageSet with cone beam data
frame – Frame index
- Returns:
3D position of X-ray source in world coordinates
- imfusion.computed_tomography.source_to_detector_distance(*args, **kwargs)
Overloaded function.
source_to_detector_distance(geometry: imfusion.computed_tomography.FullGeometryRepresentation) -> float
Calculate source-to-detector distance.
- Parameters:
geometry – FullGeometryRepresentation
- Returns:
Source-to-detector distance in mm
source_to_detector_distance(projections: imfusion.SharedImageSet, frame: int) -> float
Calculate source-to-detector distance from SharedImageSet frame.
- Parameters:
projections – SharedImageSet with cone beam data
frame – Frame index
- Returns:
Source-to-detector distance in mm
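Example (illustrative sketch; assumes ct is imfusion.computed_tomography and projections is cone beam data):
>>> src = ct.source_position_world(projections, 0)          # 3D source position in mm
>>> sdd = ct.source_to_detector_distance(projections, 0)    # distance in mm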
imfusion.dicom
Submodules containing DICOM related functionalities.
To load a single DICOM file, use imfusion.dicom.load_file(); to load all series contained in a folder, use imfusion.dicom.load_folder().
Both functions return a list of results.
In general, each DICOM series is loaded as one Data.
This is not always possible though. For example, DICOM slices might not stack up in a way representable by a SharedImageSet.
Besides loading DICOMs from the local filesystem, PACS and DicomWeb are supported as well through the imfusion.dicom.load_url()
function.
To load a series from PACS, use an URL with the following format:
pacs://<hostname>:<port>/<PACS AE title>?series=<series instance uid>&study=<study instance uid>
To receive DICOMs from the PACS, a temporary server will be started on the port defined
by imfusion.dicom.set_pacs_client_config()
.
To load a series from a DicomWeb compatible server, use the DicomWeb endpoint (depends on the server), e.g.:
https://<hostname>:<port>/dicom-web/studies/<study instance uid>/series/<series instance uid>
If the server requires authentication, an imfusion.dicom.AuthorizationProvider has to be registered.
The authentication scheme depends on the server, but here is an example for HTTP Basic Auth with username and password:
import base64
import getpass

import imfusion

class AuthProvider(imfusion.dicom.AuthorizationProvider):
    def __init__(self):
        imfusion.dicom.AuthorizationProvider.__init__(self)
        self.token = ""

    def authorization(self, url):
        return self.token

    def refresh_authorization(self, url, num_failed_requests):
        # Basic Auth cannot be refreshed silently; ask the user again.
        if self.acquire_authorization(url, ""):
            return True
        else:
            self.token = ""
            return False

    def acquire_authorization(self, url, message):
        print("Please provide authorization for accessing", url)
        if message:
            print(message)
        try:
            username = input("Username: ")
            password = getpass.getpass()
        except KeyboardInterrupt:
            return False
        self.token = "Basic " + base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("utf-8")
        return True
imfusion.dicom.set_default_authorization_provider(AuthProvider())
imfusion.dicom.load_url("https://example.com/dicom-web/studies/1.2.3.4/series/5.6.7.8")
- class imfusion.dicom.AuthorizationProvider(self: AuthorizationProvider)
Bases:
pybind11_object
- acquire_authorization(self: AuthorizationProvider, url: str, message: str) bool
Acquire authorization by e.g. asking the user.
This method might get called from another thread. In this case, implementations that require the main thread to show a GUI should just return false. An optional message can be provided (e.g. to display an error from a previous login attempt).
- authorization(self: AuthorizationProvider, url: str) str
Get the Authorization header for the given url.
The url is the complete url for the request that is going to be performed. Implementations should cache the value according to the server URL (see extract_server_url). When an empty string is returned, no Authorization header will be sent. This method will be called from multiple threads.
- extract_server_url(self: AuthorizationProvider, url: str) str
Extract the server part of the URL.
E.g. http://example.com:8080/dicomweb/studies becomes http://example.com:8080.
- refresh_authorization(self: AuthorizationProvider, url: str, num_failed_requests: int) bool
Try to refresh the authorization without user interaction.
Implementations should stop retrying after a certain number of failed attempts. This method will be called from multiple threads.
- remove_authorization(self: AuthorizationProvider, url: str) None
Remove any cached authorization for the given server.
This should essentially log out the user and let the user re-authenticate with the next acquire_authorization call.
- class imfusion.dicom.GeneralEquipmentModuleDataComponent(self: GeneralEquipmentModuleDataComponent)
Bases:
DataComponentBase
- property anatomical_orientation_type
- property device_serial_number
- property gantry_id
- property institution_address
- property institution_name
- property institutional_departmentname
- property manufacturer
- property manufacturers_model_name
- property software_versions
- property spatial_resolution
- property station_name
- class imfusion.dicom.RTStructureDataComponent(self: RTStructureDataComponent)
Bases:
DataComponentBase
DataComponent for PointClouds loaded from a DICOM RTStructureSet.
Provides information about the original structure/grouping of the points. See RTStructureIoAlgorithm for details about how RTStructureSets are loaded.
Warning
Since this component uses fixed indices into the PointCloud’s points structure, it can only be used if the PointCloud remains unchanged!
- class Contour
Bases:
pybind11_object
Represents a single item in the original ‘Contour Sequence’ (3006,0040).
- property length
- property start_index
- property type
- class GeometryType(self: GeometryType, value: int)
Bases:
pybind11_object
Defines how the points of a contour should be interpreted.
Members:
POINT
OPEN_PLANAR
CLOSED_PLANAR
OPEN_NONPLANAR
- CLOSED_PLANAR = <GeometryType.CLOSED_PLANAR: 2>
- OPEN_NONPLANAR = <GeometryType.OPEN_NONPLANAR: 3>
- OPEN_PLANAR = <GeometryType.OPEN_PLANAR: 1>
- POINT = <GeometryType.POINT: 0>
- property name
- property value
- class ROIGenerationAlgorithm(self: ROIGenerationAlgorithm, value: int)
Bases:
pybind11_object
Defines how the RT structure was generated
Members:
UNKNOWN
AUTOMATIC
SEMI_AUTOMATIC
MANUAL
- AUTOMATIC = <ROIGenerationAlgorithm.AUTOMATIC: 1>
- MANUAL = <ROIGenerationAlgorithm.MANUAL: 3>
- SEMI_AUTOMATIC = <ROIGenerationAlgorithm.SEMI_AUTOMATIC: 2>
- UNKNOWN = <ROIGenerationAlgorithm.UNKNOWN: 0>
- property name
- property value
- property color
- property contours
- property generation_algorithm
- property referenced_frame_of_reference_UID
- class imfusion.dicom.ReferencedInstancesComponent(self: ReferencedInstancesComponent)
Bases:
DataComponentBase
DataComponent to store DICOM instances that are referenced by the dataset.
A DICOM dataset can reference a number of other DICOM datasets that are somehow related. The references in this component are determined by the ReferencedSeriesSequence.
- is_referencing(*args, **kwargs)
Overloaded function.
is_referencing(self: imfusion.dicom.ReferencedInstancesComponent, arg0: imfusion.dicom.SourceInfoComponent) -> bool
Returns true if the instances of the given SourceInfoComponent are referenced by this component.
The instances and references only have to intersect for this to return true. This way, e.g. a segmentation would be considered referencing a CT even if it only overlaps in a few slices.
is_referencing(self: imfusion.dicom.ReferencedInstancesComponent, arg0: imfusion.SharedImageSet) -> bool
Convenience method that calls the above overload with the SourceInfoComponent of the given SharedImageSet.
Only returns true if all elementwise SourceInfoComponents are referenced.
- class imfusion.dicom.SourceInfoComponent(self: SourceInfoComponent)
Bases:
DataComponentBase
- property sop_class_uids
- property sop_instance_uids
- property source_uris
- imfusion.dicom.load_file(file_path: str) list
Load a single file as DICOM.
Depending on the SOPClassUID of the DICOM file, this can result in:
a 2D or 3D SharedImageSet containing one or multiple frames
a segmentation labelmap (i.e. an 8-bit SharedImageSet with a LabelDataComponent)
an RT Structure Set (i.e. a PointCloud with an RTStructureDataComponent)
For regular images, usually only one result is generated. If not, it is usually an indication that the file could not be entirely reconstructed as a volume (e.g. the spacing between slices is not uniform).
For segmentations, multiple labelmaps will be returned if labels overlap (i.e. one pixel has at least 2 labels).
For RT Structure Sets, one
PointCloud
is returned per structure.
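Example (illustrative sketch; the file path is a placeholder):
>>> results = imfusion.dicom.load_file("/path/to/file.dcm")
>>> for data in results:
...     print(type(data))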
- imfusion.dicom.load_folder(folder_path: str, recursive: bool = True, ignore_non_dicom: bool = True) list
Load all DICOM files from a folder.
Generally this produces one dataset per DICOM series, however, this might not always be the case. Check ImageInfoDataComponent for the actual series UID.
See imfusion.dicom.load_file() for a list of datasets that can be generated.
- Parameters:
folder_path (str) – A path to a folder or an URL.
recursive (bool) – Whether subfolders should be scanned recursively for all DICOM files.
ignore_non_dicom (bool) – Whether files without a valid DICOM header should be ignored. This is usually faster and produces fewer warnings/errors, but technically the DICOM header is optional and might be missing. This is very rare though.
- imfusion.dicom.load_url(url: str, recursive: bool = True, ignore_non_dicom: bool = True) list
Load all DICOM files from a URL.
Generally this produces one dataset per DICOM series, however, this might not always be the case. Check ImageInfoDataComponent for the actual series UID.
The URL supports the file://, http(s):// and pacs:// schemes.
To load a series from PACS, use a URL with the following format:
pacs://<hostname>:<port>/<PACS AE title>?series=<series instance uid>&study=<study instance uid>
To receive DICOMs from the PACS, a temporary server will be started on the port defined by imfusion.dicom.set_pacs_client_config().
- Parameters:
url (str) – An URL.
recursive (bool) – Whether subfolders should be scanned recursively for all DICOM files. Only used for file:// URLs.
ignore_non_dicom (bool) – Whether files without a valid DICOM header should be ignored. This is usually faster and produces fewer warnings/errors, but technically the DICOM header is optional and might be missing. This is very rare though. Only used for file:// URLs.
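Example (illustrative sketch for PACS; hostname, port, AE titles and UIDs are placeholders):
>>> imfusion.dicom.set_pacs_client_config("MY_AE", 11112)
>>> datasets = imfusion.dicom.load_url(
...     "pacs://pacs.example.com:104/SERVER_AE?series=1.2.3.4&study=5.6.7.8")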
- imfusion.dicom.rtstruct_to_labelmap(rtstruct_set: list[PointCloud], referenced_image: SharedImageSet, combine_label_maps: bool = False) list[SharedImageSet]
Algorithm to convert a PointCloud with an RTStructureDataComponent data component to a labelmap.
This is currently only supported for CLOSED_PLANAR contours in the RTStructureDataComponent. The algorithm requires a reference volume that determines the size of the labelmap. Each contour is expected to be planar on a slice in the reference volume. This algorithm works best when using the volume that is referenced by the original DICOM RTStructureDataSet (see imfusion.RTStructureDataComponent.referenced_frame_of_reference_UID).
Returns one labelmap per input RT Structure.
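Example (illustrative sketch; file names are placeholders and the reference is assumed to be the volume the RT Structure Set was drawn on):
>>> structures = imfusion.dicom.load_file("rtstruct.dcm")        # one PointCloud per structure
>>> reference = imfusion.dicom.load_file("ct_volume.dcm")[0]     # referenced image
>>> labelmaps = imfusion.dicom.rtstruct_to_labelmap(structures, reference)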
- imfusion.dicom.save_file(image: SharedImageSet, file_path: str, referenced_image: SharedImageSet = None) None
Save an image as a single DICOM file.
The SOP Class that is used for the export is determined based on the modality of the image. For example, CT images will be exported as ‘Enhanced CT Image Storage’ and LABEL images as ‘Segmentation Storage’.
When exporting volumes, note that older software might not be able to load them. Use
imfusion.dicom.save_folder()
instead.
Optionally, the generated DICOMs can also reference another DICOM image, which is passed with the referenced_image argument. This referenced_image must have been loaded from DICOM and/or contain an elementwise SourceInfoComponent and an ImageInfoDataComponent containing a valid series instance UID. With such a reference, other software can determine whether different DICOMs are related. This is especially important when exporting segmentations with modality LABEL. The exported segmentations must reference the data that was used to generate the segmentation. If this reference is missing, the exported segmentations cannot be loaded in some software.
For saving RT Structures, see
imfusion.dicom.save_rtstruct().
- Parameters:
image (SharedImageSet) – The image to export
file_path (str) – File to write the resulting DICOM to. Existing files will be overwritten!
referenced_image (SharedImageSet) – An optional image that the exported image should reference.
Warning
At the moment, only exporting single frame CT and MR volumes is well supported. Since DICOM is an extensive standard, any other kind of image might lead to a non-standard or invalid DICOM.
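Example (illustrative sketch; the path is a placeholder, and labelmap / ct_volume are assumed to be a LABEL SharedImageSet and the DICOM image it was derived from):
>>> imfusion.dicom.save_file(labelmap, "/tmp/segmentation.dcm", referenced_image=ct_volume)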
- imfusion.dicom.save_folder(image: SharedImageSet, folder_path: str, referenced_image: SharedImageSet = None) None
Save an image as a DICOM folder containing potentially multiple files.
The SOP Class that is used for the export is determined based on the modality of the image. For example, CT images will be exported as ‘CT Image Storage’.
Works like
imfusion.dicom.save_file()
except for using different SOP Class UIDs.
- imfusion.dicom.save_rtstruct(*args, **kwargs)
Overloaded function.
save_rtstruct(labelmap: imfusion.SharedImageSet, referenced_image: imfusion.SharedImageSet, file_path: str) -> None
Save a labelmap as a RT Structure Set.
The contours of a label inside the labelmap will be used as contours in the RT Structure. Each slice of the labelmap generates separate contours (RT Structure does not support 3D contours).
save_rtstruct(rtstruct_set: list[imfusion.PointCloud], referenced_image: imfusion.SharedImageSet, file_path: str) -> None
Save a list of PointCloud as an RT Structure Set.
Each PointCloud must provide an RTStructureDataComponent.
- imfusion.dicom.set_default_authorization_provider(arg0: AuthorizationProvider) None
- imfusion.dicom.set_pacs_client_config(ae_title: str, port: int) None
Set the client configuration when connecting to a PACS.
To receive DICOMs from a PACS server, the AE title and port need to be registered with the PACS as well (vendor specific and not done by this function!).
Warning
The values will be persisted on the system and will be restored when the application is restarted.
imfusion.imagemath
imfusion.imagemath - Bindings for ImageMath Operations
This module provides element-wise arithmetic operations for SharedImage
and SharedImageSet
. You can apply these imagemath
functionalities directly to objects of SharedImage
and SharedImageSet
with eager evaluation. Alternatively, the module offers lazy evaluation functionality through the submodule lazy
. You can create wrapper expressions using the Expression
provided by lazy
.
See Expression
for details.
Example for eager evaluation:
>>> from imfusion import imagemath
Add si1 and si2, which are SharedImage
instances:
>>> si1 = imfusion.load(ct_image_png)[0][0]
>>> si2 = si1.clone()
>>> res = si1 + si2
res is a SharedImage
instance.
>>> print(res)
imfusion.SharedImage(USHORT width: 512 height: 512 spacing: 0.661813x0.661813x1 mm)
Example for lazy evaluation:
>>> from imfusion.imagemath import lazy
Create expressions from SharedImage
instances:
>>> expr1 = lazy.Expression(si1)
>>> expr2 = lazy.Expression(si2)
Add expr1 and expr2:
>>> expr3 = expr1 + expr2
Alternatively, you could add expr1 and si2 or si1 and expr2. Any expression containing an instance of Expression
will be converted to a lazy evaluation expression.
>>> expr3 = expr1 + si2
Find the result with lazy evaluation:
>>> res = expr3.evaluate()
res is a SharedImage
instance similar to eager evaluation case.
>>> print(res)
imfusion.SharedImage(USHORT width: 512 height: 512 spacing: 0.661813x0.661813x1 mm)
- class imfusion.imagemath.CoordinateType(self: CoordinateType, value: int)
Bases:
pybind11_object
Coordinate system types for image operations.
Defines which coordinate system to use for various image operations, particularly useful for projection and geometric transformations.
Members:
WORLD : World coordinate system.
IMAGE : Image coordinate system.
TEXTURE : Texture coordinate system (normalized 0-1).
PIXEL : Pixel coordinate system.
- __index__(self: CoordinateType) int
- __init__(self: CoordinateType, value: int) None
- __int__(self: CoordinateType) int
- __setstate__(self: CoordinateType, state: int) None
- IMAGE = <CoordinateType.IMAGE: 1>
- PIXEL = <CoordinateType.PIXEL: 3>
- TEXTURE = <CoordinateType.TEXTURE: 2>
- WORLD = <CoordinateType.WORLD: 0>
- __annotations__ = {}
- __members__ = {'IMAGE': <CoordinateType.IMAGE: 1>, 'PIXEL': <CoordinateType.PIXEL: 3>, 'TEXTURE': <CoordinateType.TEXTURE: 2>, 'WORLD': <CoordinateType.WORLD: 0>}
- __module__ = 'imfusion.imagemath'
- property name
- property value
- imfusion.imagemath.absolute(*args, **kwargs)
Overloaded function.
absolute(x: imfusion.SharedImage) -> imfusion.SharedImage
Absolute value, element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
absolute(x: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Absolute value, element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.add(*args, **kwargs)
Overloaded function.
add(x1: imfusion.SharedImage, x2: imfusion.SharedImage) -> imfusion.SharedImage
Addition, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImage
) –SharedImage
instance.
add(x1: imfusion.SharedImage, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Addition, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
add(x1: imfusion.SharedImage, x2: float) -> imfusion.SharedImage
Addition, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (float) – scalar value.
add(x1: imfusion.SharedImageSet, x2: imfusion.SharedImage) -> imfusion.SharedImageSet
Addition, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImage
) –SharedImage
instance.
add(x1: imfusion.SharedImageSet, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Addition, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
add(x1: imfusion.SharedImageSet, x2: float) -> imfusion.SharedImageSet
Addition, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (float) – scalar value.
add(x1: float, x2: imfusion.SharedImage) -> imfusion.SharedImage
Addition, element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImage
) –SharedImage
instance.
add(x1: float, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Addition, element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.arctan2(*args, **kwargs)
Overloaded function.
arctan2(x1: imfusion.SharedImage, x2: imfusion.SharedImage) -> imfusion.SharedImage
Trigonometric inverse tangent, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImage
) –SharedImage
instance.
arctan2(x1: imfusion.SharedImage, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Trigonometric inverse tangent, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
arctan2(x1: imfusion.SharedImage, x2: float) -> imfusion.SharedImage
Trigonometric inverse tangent, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (float) – scalar value.
arctan2(x1: imfusion.SharedImageSet, x2: imfusion.SharedImage) -> imfusion.SharedImageSet
Trigonometric inverse tangent, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImage
) –SharedImage
instance.
arctan2(x1: imfusion.SharedImageSet, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Trigonometric inverse tangent, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
arctan2(x1: imfusion.SharedImageSet, x2: float) -> imfusion.SharedImageSet
Trigonometric inverse tangent, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (float) – scalar value.
arctan2(x1: float, x2: imfusion.SharedImage) -> imfusion.SharedImage
Trigonometric inverse tangent, element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImage
) –SharedImage
instance.
arctan2(x1: float, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Trigonometric inverse tangent, element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.argmax(*args, **kwargs)
Overloaded function.
argmax(x: imfusion.SharedImage) -> list[numpy.ndarray[numpy.int32[4, 1]]]
Return a list of the indices of maximum values, channel-wise. The indices are represented as (x, y, z, image index).
- Parameters:
x (
SharedImage
) –SharedImage
instance.
argmax(x: imfusion.SharedImageSet) -> list[numpy.ndarray[numpy.int32[4, 1]]]
Return a list of the indices of maximum values, channel-wise. The indices are represented as (x, y, z, image index).
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.argmin(*args, **kwargs)
Overloaded function.
argmin(x: imfusion.SharedImage) -> list[numpy.ndarray[numpy.int32[4, 1]]]
Return a list of the indices of minimum values, channel-wise. The indices are represented as (x, y, z, image index).
- Parameters:
x (
SharedImage
) –SharedImage
instance.
argmin(x: imfusion.SharedImageSet) -> list[numpy.ndarray[numpy.int32[4, 1]]]
Return a list of the indices of minimum values, channel-wise. The indices are represented as (x, y, z, image index).
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.channel_swizzle(*args, **kwargs)
Overloaded function.
channel_swizzle(x: imfusion.SharedImage, indices: list[int]) -> imfusion.SharedImage
Reorders the channels of an image based on the input indices, e.g. indices[0] will correspond to the first channel of the output image.
- Parameters:
x (
SharedImage
) –SharedImage
instance.indices (List[int]) – List of channels indices to swizzle the channels of
SharedImage
.
channel_swizzle(x: imfusion.SharedImageSet, indices: list[int]) -> imfusion.SharedImageSet
Reorders the channels of an image based on the input indices, e.g. indices[0] will correspond to the first channel of the output image.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.indices (List[int]) – List of channels indices to swizzle the channels of
SharedImageSet
.
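Example (illustrative sketch; assumes si is a 3-channel SharedImage):
>>> from imfusion import imagemath
>>> reversed_channels = imagemath.channel_swizzle(si, [2, 1, 0])   # swap first and third channel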
- imfusion.imagemath.cos(*args, **kwargs)
Overloaded function.
cos(x: imfusion.SharedImage) -> imfusion.SharedImage
Cosine, element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
cos(x: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Cosine, element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.divide(*args, **kwargs)
Overloaded function.
divide(x1: imfusion.SharedImage, x2: imfusion.SharedImage) -> imfusion.SharedImage
Division, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImage
) –SharedImage
instance.
divide(x1: imfusion.SharedImage, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Division, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
divide(x1: imfusion.SharedImage, x2: float) -> imfusion.SharedImage
Division, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (float) – scalar value.
divide(x1: imfusion.SharedImageSet, x2: imfusion.SharedImage) -> imfusion.SharedImageSet
Division, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImage
) –SharedImage
instance.
divide(x1: imfusion.SharedImageSet, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Division, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
divide(x1: imfusion.SharedImageSet, x2: float) -> imfusion.SharedImageSet
Division, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (float) – scalar value.
divide(x1: float, x2: imfusion.SharedImage) -> imfusion.SharedImage
Division, element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImage
) –SharedImage
instance.
divide(x1: float, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Division, element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.equal(*args, **kwargs)
Overloaded function.
equal(x1: imfusion.SharedImage, x2: imfusion.SharedImage) -> imfusion.SharedImage
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImage
) –SharedImage
instance.
equal(x1: imfusion.SharedImage, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
equal(x1: imfusion.SharedImage, x2: float) -> imfusion.SharedImage
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (float) – scalar value.
equal(x1: imfusion.SharedImageSet, x2: imfusion.SharedImage) -> imfusion.SharedImageSet
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImage
) –SharedImage
instance.
equal(x1: imfusion.SharedImageSet, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
equal(x1: imfusion.SharedImageSet, x2: float) -> imfusion.SharedImageSet
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (float) – scalar value.
equal(x1: float, x2: imfusion.SharedImage) -> imfusion.SharedImage
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImage
) –SharedImage
instance.
equal(x1: float, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.exp(*args, **kwargs)
Overloaded function.
exp(x: imfusion.SharedImage) -> imfusion.SharedImage
Exponential operation, element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
exp(x: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Exponential operation, element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.greater(*args, **kwargs)
Overloaded function.
greater(x1: imfusion.SharedImage, x2: imfusion.SharedImage) -> imfusion.SharedImage
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImage
) –SharedImage
instance.
greater(x1: imfusion.SharedImage, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
greater(x1: imfusion.SharedImage, x2: float) -> imfusion.SharedImage
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (float) – scalar value.
greater(x1: imfusion.SharedImageSet, x2: imfusion.SharedImage) -> imfusion.SharedImageSet
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImage
) –SharedImage
instance.
greater(x1: imfusion.SharedImageSet, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
greater(x1: imfusion.SharedImageSet, x2: float) -> imfusion.SharedImageSet
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (float) – scalar value.
greater(x1: float, x2: imfusion.SharedImage) -> imfusion.SharedImage
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImage
) –SharedImage
instance.
greater(x1: float, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.greater_equal(*args, **kwargs)
Overloaded function.
greater_equal(x1: imfusion.SharedImage, x2: imfusion.SharedImage) -> imfusion.SharedImage
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImage
) –SharedImage
instance.
greater_equal(x1: imfusion.SharedImage, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
greater_equal(x1: imfusion.SharedImage, x2: float) -> imfusion.SharedImage
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (float) – scalar value.
greater_equal(x1: imfusion.SharedImageSet, x2: imfusion.SharedImage) -> imfusion.SharedImageSet
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImage
) –SharedImage
instance.
greater_equal(x1: imfusion.SharedImageSet, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
greater_equal(x1: imfusion.SharedImageSet, x2: float) -> imfusion.SharedImageSet
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (float) – scalar value.
greater_equal(x1: float, x2: imfusion.SharedImage) -> imfusion.SharedImage
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImage
) –SharedImage
instance.
greater_equal(x1: float, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.less(*args, **kwargs)
Overloaded function.
less(x1: imfusion.SharedImage, x2: imfusion.SharedImage) -> imfusion.SharedImage
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImage
) –SharedImage
instance.
less(x1: imfusion.SharedImage, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
less(x1: imfusion.SharedImage, x2: float) -> imfusion.SharedImage
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (float) – scalar value.
less(x1: imfusion.SharedImageSet, x2: imfusion.SharedImage) -> imfusion.SharedImageSet
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImage
) –SharedImage
instance.
less(x1: imfusion.SharedImageSet, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
less(x1: imfusion.SharedImageSet, x2: float) -> imfusion.SharedImageSet
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (float) – scalar value.
less(x1: float, x2: imfusion.SharedImage) -> imfusion.SharedImage
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImage
) –SharedImage
instance.
less(x1: float, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.less_equal(*args, **kwargs)
Overloaded function.
less_equal(x1: imfusion.SharedImage, x2: imfusion.SharedImage) -> imfusion.SharedImage
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImage
) –SharedImage
instance.
less_equal(x1: imfusion.SharedImage, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
less_equal(x1: imfusion.SharedImage, x2: float) -> imfusion.SharedImage
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (float) – scalar value.
less_equal(x1: imfusion.SharedImageSet, x2: imfusion.SharedImage) -> imfusion.SharedImageSet
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImage
) –SharedImage
instance.
less_equal(x1: imfusion.SharedImageSet, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
less_equal(x1: imfusion.SharedImageSet, x2: float) -> imfusion.SharedImageSet
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (float) – scalar value.
less_equal(x1: float, x2: imfusion.SharedImage) -> imfusion.SharedImage
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImage
) –SharedImage
instance.
less_equal(x1: float, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.log(*args, **kwargs)
Overloaded function.
log(x: imfusion.SharedImage) -> imfusion.SharedImage
Natural logarithm, element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
log(x: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Natural logarithm, element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.max(*args, **kwargs)
Overloaded function.
max(x: imfusion.SharedImage) -> numpy.ndarray[numpy.float64[m, 1]]
Return the list of the maximum elements of images, channel-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
max(x: imfusion.SharedImageSet) -> numpy.ndarray[numpy.float64[m, 1]]
Return the list of the maximum elements of images, channel-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.maximum(*args, **kwargs)
Overloaded function.
maximum(x1: imfusion.SharedImage, x2: imfusion.SharedImage) -> imfusion.SharedImage
Return element-wise maximum of arguments.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImage
) –SharedImage
instance.
maximum(x1: imfusion.SharedImage, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return element-wise maximum of arguments.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
maximum(x1: imfusion.SharedImage, x2: float) -> imfusion.SharedImage
Return element-wise maximum of arguments.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (float) – scalar value.
maximum(x1: imfusion.SharedImageSet, x2: imfusion.SharedImage) -> imfusion.SharedImageSet
Return element-wise maximum of arguments.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImage
) –SharedImage
instance.
maximum(x1: imfusion.SharedImageSet, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return element-wise maximum of arguments.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
maximum(x1: imfusion.SharedImageSet, x2: float) -> imfusion.SharedImageSet
Return element-wise maximum of arguments.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (float) – scalar value.
maximum(x1: float, x2: imfusion.SharedImage) -> imfusion.SharedImage
Return element-wise maximum of arguments.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImage
) –SharedImage
instance.
maximum(x1: float, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return element-wise maximum of arguments.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.mean(*args, **kwargs)
Overloaded function.
mean(x: imfusion.SharedImage) -> numpy.ndarray[numpy.float64[m, 1]]
Return a list of channel-wise average of image elements.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
mean(x: imfusion.SharedImageSet) -> numpy.ndarray[numpy.float64[m, 1]]
Return a list of channel-wise average of image elements.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.min(*args, **kwargs)
Overloaded function.
min(x: imfusion.SharedImage) -> numpy.ndarray[numpy.float64[m, 1]]
Return the list of the minimum elements of images, channel-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
min(x: imfusion.SharedImageSet) -> numpy.ndarray[numpy.float64[m, 1]]
Return the list of the minimum elements of images, channel-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
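A quick sketch of gathering per-channel statistics with these reductions (sis is an assumed, pre-existing SharedImageSet); each call returns a NumPy vector with one entry per channel:
>>> from imfusion import imagemath
>>> lo = imagemath.min(sis)    # channel-wise minima over the whole set
>>> hi = imagemath.max(sis)    # channel-wise maxima
>>> avg = imagemath.mean(sis)  # channel-wise averages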
- imfusion.imagemath.minimum(*args, **kwargs)
Overloaded function.
minimum(x1: imfusion.SharedImage, x2: imfusion.SharedImage) -> imfusion.SharedImage
Return element-wise minimum of arguments.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImage
) –SharedImage
instance.
minimum(x1: imfusion.SharedImage, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return element-wise minimum of arguments.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
minimum(x1: imfusion.SharedImage, x2: float) -> imfusion.SharedImage
Return element-wise minimum of arguments.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (float) – scalar value.
minimum(x1: imfusion.SharedImageSet, x2: imfusion.SharedImage) -> imfusion.SharedImageSet
Return element-wise minimum of arguments.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImage
) –SharedImage
instance.
minimum(x1: imfusion.SharedImageSet, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return element-wise minimum of arguments.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
minimum(x1: imfusion.SharedImageSet, x2: float) -> imfusion.SharedImageSet
Return element-wise minimum of arguments.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (float) – scalar value.
minimum(x1: float, x2: imfusion.SharedImage) -> imfusion.SharedImage
Return element-wise minimum of arguments.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImage
) –SharedImage
instance.
minimum(x1: float, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return element-wise minimum of arguments.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImageSet
) –SharedImageSet
instance.
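Since maximum and minimum also accept scalars, they can be composed to clamp intensities to a window; a minimal sketch assuming si is an existing SharedImage:
>>> from imfusion import imagemath
>>> clamped = imagemath.minimum(imagemath.maximum(si, 0.0), 255.0)  # clamp to [0, 255]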
- imfusion.imagemath.multiply(*args, **kwargs)
Overloaded function.
multiply(x1: imfusion.SharedImage, x2: imfusion.SharedImage) -> imfusion.SharedImage
Multiplication, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImage
) –SharedImage
instance.
multiply(x1: imfusion.SharedImage, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Multiplication, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
multiply(x1: imfusion.SharedImage, x2: float) -> imfusion.SharedImage
Multiplication, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (float) – scalar value.
multiply(x1: imfusion.SharedImageSet, x2: imfusion.SharedImage) -> imfusion.SharedImageSet
Multiplication, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImage
) –SharedImage
instance.
multiply(x1: imfusion.SharedImageSet, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Multiplication, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
multiply(x1: imfusion.SharedImageSet, x2: float) -> imfusion.SharedImageSet
Multiplication, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (float) – scalar value.
multiply(x1: float, x2: imfusion.SharedImage) -> imfusion.SharedImage
Multiplication, element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImage
) –SharedImage
instance.
multiply(x1: float, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Multiplication, element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.negative(*args, **kwargs)
Overloaded function.
negative(x: imfusion.SharedImage) -> imfusion.SharedImage
Numerical negative, element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
negative(x: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Numerical negative, element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.norm(*args, **kwargs)
Overloaded function.
norm(x: imfusion.SharedImage, order: object = 2) -> numpy.ndarray[numpy.float64[m, 1]]
Returns the norm of an image instance, channel-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.order (int, float, 'inf') – Order of the norm. Default is L2 norm.
norm(x: imfusion.SharedImageSet, order: object = 2) -> numpy.ndarray[numpy.float64[m, 1]]
Returns the norm of an image instance, channel-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.order (int, float, 'inf') – Order of the norm. Default is L2 norm.
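A short sketch of requesting different norms via the order argument (si is an assumed existing SharedImage):
>>> from imfusion import imagemath
>>> l2 = imagemath.norm(si)                 # default L2 norm, one value per channel
>>> l1 = imagemath.norm(si, order=1)        # L1 norm
>>> linf = imagemath.norm(si, order='inf')  # maximum absolute value per channel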
- imfusion.imagemath.not_equal(*args, **kwargs)
Overloaded function.
not_equal(x1: imfusion.SharedImage, x2: imfusion.SharedImage) -> imfusion.SharedImage
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImage
) –SharedImage
instance.
not_equal(x1: imfusion.SharedImage, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
not_equal(x1: imfusion.SharedImage, x2: float) -> imfusion.SharedImage
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (float) – scalar value.
not_equal(x1: imfusion.SharedImageSet, x2: imfusion.SharedImage) -> imfusion.SharedImageSet
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImage
) –SharedImage
instance.
not_equal(x1: imfusion.SharedImageSet, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
not_equal(x1: imfusion.SharedImageSet, x2: float) -> imfusion.SharedImageSet
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (float) – scalar value.
not_equal(x1: float, x2: imfusion.SharedImage) -> imfusion.SharedImage
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImage
) –SharedImage
instance.
not_equal(x1: float, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.power(*args, **kwargs)
Overloaded function.
power(x1: imfusion.SharedImage, x2: imfusion.SharedImage) -> imfusion.SharedImage
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImage
) –SharedImage
instance.
power(x1: imfusion.SharedImage, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
power(x1: imfusion.SharedImage, x2: float) -> imfusion.SharedImage
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (float) – scalar value.
power(x1: imfusion.SharedImageSet, x2: imfusion.SharedImage) -> imfusion.SharedImageSet
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImage
) –SharedImage
instance.
power(x1: imfusion.SharedImageSet, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
power(x1: imfusion.SharedImageSet, x2: float) -> imfusion.SharedImageSet
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (float) – scalar value.
power(x1: float, x2: imfusion.SharedImage) -> imfusion.SharedImage
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImage
) –SharedImage
instance.
power(x1: float, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImageSet
) –SharedImageSet
instance.
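For instance, power with a scalar exponent yields a simple gamma adjustment; a sketch assuming si is an existing SharedImage with intensities scaled to [0, 1]:
>>> from imfusion import imagemath
>>> gamma_corrected = imagemath.power(si, 0.5)  # element-wise si ** 0.5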
- imfusion.imagemath.prod(*args, **kwargs)
Overloaded function.
prod(x: imfusion.SharedImage) -> numpy.ndarray[numpy.float64[m, 1]]
Return a list of the channel-wise product of image elements.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
prod(x: imfusion.SharedImageSet) -> numpy.ndarray[numpy.float64[m, 1]]
Return a list of the channel-wise product of image elements.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.sign(*args, **kwargs)
Overloaded function.
sign(x: imfusion.SharedImage) -> imfusion.SharedImage
Element-wise indication of the sign of image elements.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
sign(x: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Element-wise indication of the sign of image elements.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.sin(*args, **kwargs)
Overloaded function.
sin(x: imfusion.SharedImage) -> imfusion.SharedImage
Sine, element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
sin(x: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Sine, element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.sqrt(*args, **kwargs)
Overloaded function.
sqrt(x: imfusion.SharedImage) -> imfusion.SharedImage
Square-root operation, element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
sqrt(x: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Square-root operation, element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.square(*args, **kwargs)
Overloaded function.
square(x: imfusion.SharedImage) -> imfusion.SharedImage
Square operation, element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
square(x: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Square operation, element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
- imfusion.imagemath.subtract(*args, **kwargs)
Overloaded function.
subtract(x1: imfusion.SharedImage, x2: imfusion.SharedImage) -> imfusion.SharedImage
Subtraction, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImage
) –SharedImage
instance.
subtract(x1: imfusion.SharedImage, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Subtraction, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
subtract(x1: imfusion.SharedImage, x2: float) -> imfusion.SharedImage
Subtraction, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (float) – scalar value.
subtract(x1: imfusion.SharedImageSet, x2: imfusion.SharedImage) -> imfusion.SharedImageSet
Subtraction, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImage
) –SharedImage
instance.
subtract(x1: imfusion.SharedImageSet, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Subtraction, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
SharedImageSet
) –SharedImageSet
instance.
subtract(x1: imfusion.SharedImageSet, x2: float) -> imfusion.SharedImageSet
Subtraction, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (float) – scalar value.
subtract(x1: float, x2: imfusion.SharedImage) -> imfusion.SharedImage
Subtraction, element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImage
) –SharedImage
instance.
subtract(x1: float, x2: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Subtraction, element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
SharedImageSet
) –SharedImageSet
instance.
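Combined with the channel-wise reductions documented above, subtract and multiply allow a simple min-max normalization; a sketch assuming si is an existing single-channel SharedImage:
>>> from imfusion import imagemath
>>> lo = float(imagemath.min(si)[0])  # channel-wise minimum of the first channel
>>> hi = float(imagemath.max(si)[0])  # channel-wise maximum of the first channel
>>> normalized = imagemath.multiply(imagemath.subtract(si, lo), 1.0 / (hi - lo))  # rescale to [0, 1]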
- imfusion.imagemath.sum(*args, **kwargs)
Overloaded function.
sum(x: imfusion.SharedImage) -> numpy.ndarray[numpy.float64[m, 1]]
Return a list of channel-wise sum of image elements.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
sum(x: imfusion.SharedImageSet) -> numpy.ndarray[numpy.float64[m, 1]]
Return a list of channel-wise sum of image elements.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
imfusion.imagemath.lazy
Lazy evaluation (imagemath.lazy)
- class imfusion.imagemath.lazy.Expression(*args, **kwargs)
Bases:
pybind11_object
Expressions to be used for lazy evaluation.
This class serves as a wrapper for SharedImage, SharedImageSet, and scalar values to be used for lazy evaluation. The lazy evaluation approach delays the actual computation until the result is needed. If you prefer the eager evaluation approach, you can directly invoke operations on SharedImage and SharedImageSet objects.
Here is an example of how to use the lazy evaluation approach:
>>> from imfusion import imagemath
Create expressions from SharedImage instances:
>>> expr1 = imagemath.lazy.Expression(si1)
>>> expr2 = imagemath.lazy.Expression(si2)
Any operation on expressions returns another expression. The operands and operations are stored in the expression tree but are not evaluated yet.
>>> expr3 = expr1 + expr2
Expressions must be explicitly evaluated to get results. Use the evaluate() method for this purpose:
>>> res = expr3.evaluate()
Here, the result is a SharedImage instance:
>>> print(res)
imfusion.SharedImage(USHORT width: 512 height: 512 spacing: 0.661813x0.661813x1 mm)
Overloaded function.
__init__(self: imfusion.imagemath.lazy.Expression, shared_image_set: imfusion.SharedImageSet) -> None
Creates an expression wrapping
SharedImageSet
instance.- Parameters:
shared_image_set (
SharedImageSet
) –SharedImageSet
instance to be wrapped byExpression
.
__init__(self: imfusion.imagemath.lazy.Expression, shared_image: imfusion.SharedImage) -> None
Creates an expression wrapping
SharedImage
instance.- Parameters:
shared_image (
SharedImage
) –SharedImage
instance to be wrapped byExpression
.
__init__(self: imfusion.imagemath.lazy.Expression, value: float) -> None
Creates an expression wrapping a scalar value.
- Parameters:
value (float) – Scalar value to be wrapped by
Expression
.
__init__(self: imfusion.imagemath.lazy.Expression, channel: int) -> None
Creates an expression wrapping a variable, e.g. the result of another computation that is not yet available when the expression is created. Currently, only one variable per expression is allowed.
- Parameters:
channel (int) – The channel of the variable wrapped by
Expression
.
- __abs__(self: Expression) Expression
Expression for absolute value, element-wise.
- __add__(*args, **kwargs)
Overloaded function.
__add__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Addition, element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
__add__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Addition, element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
__add__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Addition, element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
__add__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression
Addition, element-wise.
- Parameters:
x (float) – scalar value.
- __eq__(*args, **kwargs)
Overloaded function.
__eq__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
__eq__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
__eq__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
__eq__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x (float) – scalar value.
- __ge__(*args, **kwargs)
Overloaded function.
__ge__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
__ge__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
__ge__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
__ge__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x (float) – scalar value.
- __getitem__(self: Expression, index: int) Expression
This method only works with SharedImageSet Expression instances. Returns a SharedImage Expression from a SharedImageSet Expression.
- Parameters:
index (int) – The index of the SharedImage Expression.
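A minimal sketch of extracting one frame from a set expression before further processing (sis is an assumed existing SharedImageSet):
>>> from imfusion import imagemath
>>> set_expr = imagemath.lazy.Expression(sis)
>>> first = set_expr[0]                 # Expression wrapping the first SharedImage
>>> doubled = (first * 2.0).evaluate()  # evaluates to a SharedImage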
- __gt__(*args, **kwargs)
Overloaded function.
__gt__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
__gt__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
__gt__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
__gt__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x (float) – scalar value.
- __init__(*args, **kwargs)
Overloaded function.
__init__(self: imfusion.imagemath.lazy.Expression, shared_image_set: imfusion.SharedImageSet) -> None
Creates an expression wrapping
SharedImageSet
instance.- Parameters:
shared_image_set (
SharedImageSet
) –SharedImageSet
instance to be wrapped byExpression
.
__init__(self: imfusion.imagemath.lazy.Expression, shared_image: imfusion.SharedImage) -> None
Creates an expression wrapping
SharedImage
instance.- Parameters:
shared_image (
SharedImage
) –SharedImage
instance to be wrapped byExpression
.
__init__(self: imfusion.imagemath.lazy.Expression, value: float) -> None
Creates an expression wrapping a scalar value.
- Parameters:
value (float) – Scalar value to be wrapped by
Expression
.
__init__(self: imfusion.imagemath.lazy.Expression, channel: int) -> None
Creates an expression wrapping a variable, e.g. the result of another computation that is not yet available when the expression is created. Currently, only one variable per expression is allowed.
- Parameters:
channel (int) – The channel of the variable wrapped by
Expression
.
- __le__(*args, **kwargs)
Overloaded function.
__le__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
__le__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
__le__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
__le__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x (float) – scalar value.
- __lt__(*args, **kwargs)
Overloaded function.
__lt__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
__lt__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
__lt__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
__lt__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x (float) – scalar value.
- __mul__(*args, **kwargs)
Overloaded function.
__mul__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Multiplication, element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
__mul__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Multiplication, element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
__mul__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Multiplication, element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
__mul__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression
Multiplication, element-wise.
- Parameters:
x (float) – scalar value.
- __ne__(*args, **kwargs)
Overloaded function.
__ne__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
__ne__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
__ne__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
__ne__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x (float) – scalar value.
- __neg__(self: Expression) Expression
Expression for numerical negative, element-wise.
- __pow__(*args, **kwargs)
Overloaded function.
__pow__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
__pow__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
__pow__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
__pow__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x (float) – scalar value.
- __radd__(*args, **kwargs)
Overloaded function.
__radd__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
__radd__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
__radd__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression
- __req__(*args, **kwargs)
Overloaded function.
__req__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
__req__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
__req__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression
- __rge__(*args, **kwargs)
Overloaded function.
__rge__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
__rge__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
__rge__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression
- __rgt__(*args, **kwargs)
Overloaded function.
__rgt__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
__rgt__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
__rgt__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression
- __rle__(*args, **kwargs)
Overloaded function.
__rle__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
__rle__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
__rle__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression
- __rlt__(*args, **kwargs)
Overloaded function.
__rlt__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
__rlt__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
__rlt__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression
- __rmul__(*args, **kwargs)
Overloaded function.
__rmul__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
__rmul__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
__rmul__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression
- __rne__(*args, **kwargs)
Overloaded function.
__rne__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
__rne__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
__rne__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression
- __rpow__(*args, **kwargs)
Overloaded function.
__rpow__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
__rpow__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
__rpow__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression
- __rsub__(*args, **kwargs)
Overloaded function.
__rsub__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
__rsub__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
__rsub__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression
- __rtruediv__(*args, **kwargs)
Overloaded function.
__rtruediv__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
__rtruediv__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
__rtruediv__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression
- __sub__(*args, **kwargs)
Overloaded function.
__sub__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Subtraction, element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
__sub__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Subtraction, element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
__sub__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Subtraction, element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
__sub__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression
Subtraction, element-wise.
- Parameters:
x (float) – scalar value.
- __truediv__(*args, **kwargs)
Overloaded function.
__truediv__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Division, element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
__truediv__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Division, element-wise.
- Parameters:
x (
SharedImage
) –SharedImage
instance.
__truediv__(self: imfusion.imagemath.lazy.Expression, x: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Division, element-wise.
- Parameters:
x (
SharedImageSet
) –SharedImageSet
instance.
__truediv__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression
Division, element-wise.
- Parameters:
x (float) – scalar value.
- argmax(self: Expression) list[ndarray[numpy.int32[4, 1]]]
Return the expression for computing a list of the indices of maximum values, channel-wise. The indices are represented as (x, y, z, image index).
- argmin(self: Expression) list[ndarray[numpy.int32[4, 1]]]
Return the expression for computing a list of the indices of minimum values, channel-wise. The indices are represented as (x, y, z, image index).
- channel_swizzle(self: Expression, indices: list[int]) Expression
Returns the expression which reorders the channels of an image based on the input indices, e.g. indices[0] will correspond to the first channel of the output image.
- Parameters:
indices (List[int]) – List of channel indices to swizzle the channels of the
SharedImage
orSharedImageSet
expressions.
- evaluate(self: Expression) object
Evaluate the expression into an image object, i.e. a SharedImage or SharedImageSet instance. Scalar expressions return None when evaluated. Until this method is called, the operands and operations are stored in an expression tree but are not evaluated.
Returns:
SharedImage or SharedImageSet instance, depending on the end result of the expression tree.
- max(self: Expression) ndarray[numpy.float64[m, 1]]
Return the expression for computing the list of the maximum elements of images, channel-wise.
- mean(self: Expression) ndarray[numpy.float64[m, 1]]
Return the expression for computing a list of channel-wise average of image elements.
- min(self: Expression) ndarray[numpy.float64[m, 1]]
Return the expression for computing the list of the minimum elements of images, channel-wise.
- norm(self: Expression, order: object = 2) ndarray[numpy.float64[m, 1]]
Returns the expression for computing the norm of an image, channel-wise.
- prod(self: Expression) ndarray[numpy.float64[m, 1]]
Return the expression for computing a list of the channel-wise product of image elements.
- sum(self: Expression) ndarray[numpy.float64[m, 1]]
Return the expression for computing a list of channel-wise sum of image elements.
- __annotations__ = {}
- __hash__ = None
- __module__ = 'imfusion.imagemath.lazy'
- imfusion.imagemath.lazy.absolute(x: Expression) Expression
Expression for absolute value, element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.add(*args, **kwargs)
Overloaded function.
add(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Addition, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
add(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Addition, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImage
) –SharedImage
instance.
add(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Addition, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImageSet
) –SharedImageSet
instance.
add(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression
Addition, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (float) – scalar value.
add(x1: imfusion.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Addition, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
add(x1: imfusion.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Addition, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
add(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Addition, element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.arctan2(*args, **kwargs)
Overloaded function.
arctan2(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Trigonometric inverse tangent of x1/x2, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
arctan2(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Trigonometric inverse tangent of x1/x2, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImage
) –SharedImage
instance.
arctan2(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Trigonometric inverse tangent of x1/x2, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImageSet
) –SharedImageSet
instance.
arctan2(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression
Trigonometric inverse tangent of x1/x2, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (float) – scalar value.
arctan2(x1: imfusion.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Trigonometric inverse tangent of x1/x2, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
arctan2(x1: imfusion.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Trigonometric inverse tangent of x1/x2, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
arctan2(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Trigonometric inverse tangent of x1/x2, element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
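As an illustration, the gradient direction can be assembled lazily from two component images; gy and gx are assumed, hypothetical SharedImage instances holding the vertical and horizontal gradient components:
>>> from imfusion import imagemath
>>> expr = imagemath.lazy.arctan2(imagemath.lazy.Expression(gy), imagemath.lazy.Expression(gx))
>>> angle = expr.evaluate()  # SharedImage with the element-wise angle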
- imfusion.imagemath.lazy.argmax(x: Expression) list[ndarray[numpy.int32[4, 1]]]
Return the expression for computing a list of the indices of maximum values, channel-wise. The indices are represented as (x, y, z, image index).
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.argmin(x: Expression) list[ndarray[numpy.int32[4, 1]]]
Return the expression for computing a list of the indices of minimum values, channel-wise. The indices are represented as (x, y, z, image index).
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
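A sketch of locating extreme intensities (si is an assumed existing SharedImage); each returned entry is an (x, y, z, image index) vector per channel:
>>> from imfusion import imagemath
>>> expr = imagemath.lazy.Expression(si)
>>> max_locations = imagemath.lazy.argmax(expr)
>>> min_locations = imagemath.lazy.argmin(expr)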
- imfusion.imagemath.lazy.astype(x: Expression, image_type: object) Expression
Expression for an element-wise cast of image elements to the given image type.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.channel_swizzle(x: Expression, indices: list[int]) Expression
Returns the expression which reorders the channels of an image based on the input indices, e.g. indices[0] will correspond to the first channel of the output image.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.indices (List[int]) – List of channel indices to swizzle the channels of the
SharedImage
orSharedImageSet
expressions.
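For example, the channels of an RGB image can be reordered into BGR; a sketch assuming rgb is an existing 3-channel SharedImage:
>>> from imfusion import imagemath
>>> expr = imagemath.lazy.Expression(rgb)
>>> bgr = imagemath.lazy.channel_swizzle(expr, [2, 1, 0]).evaluate()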
- imfusion.imagemath.lazy.cos(x: Expression) Expression
Expression for cosine, element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.divide(*args, **kwargs)
Overloaded function.
divide(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Division, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
divide(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Division, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImage
) –SharedImage
instance.
divide(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Division, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImageSet
) –SharedImageSet
instance.
divide(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression
Division, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (float) – scalar value.
divide(x1: imfusion.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Division, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
divide(x1: imfusion.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Division, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
divide(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Division, element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.equal(*args, **kwargs)
Overloaded function.
equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImage
) –SharedImage
instance.
equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImageSet
) –SharedImageSet
instance.
equal(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (float) – scalar value.
equal(x1: imfusion.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
equal(x1: imfusion.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
equal(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 == x2), element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.exp(x: Expression) Expression
Expression for exponential operation, element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.greater(*args, **kwargs)
Overloaded function.
greater(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
greater(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImage
) –SharedImage
instance.
greater(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImageSet
) –SharedImageSet
instance.
greater(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (float) – scalar value.
greater(x1: imfusion.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
greater(x1: imfusion.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
greater(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 > x2), element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
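A sketch of a lazily evaluated threshold mask (si is an assumed existing SharedImage); the comparison is only computed when evaluate() is called:
>>> from imfusion import imagemath
>>> expr = imagemath.lazy.Expression(si)
>>> mask = imagemath.lazy.greater(expr, 100.0).evaluate()  # truth value of (si > 100), element-wise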
- imfusion.imagemath.lazy.greater_equal(*args, **kwargs)
Overloaded function.
greater_equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
greater_equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImage
) –SharedImage
instance.
greater_equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImageSet
) –SharedImageSet
instance.
greater_equal(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (float) – scalar value.
greater_equal(x1: imfusion.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
greater_equal(x1: imfusion.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
greater_equal(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 >= x2), element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.less(*args, **kwargs)
Overloaded function.
less(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
less(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImage
) –SharedImage
instance.
less(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImageSet
) –SharedImageSet
instance.
less(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (float) – scalar value.
less(x1: imfusion.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
less(x1: imfusion.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
less(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 < x2), element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.less_equal(*args, **kwargs)
Overloaded function.
less_equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
less_equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImage
) –SharedImage
instance.
less_equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImageSet
) –SharedImageSet
instance.
less_equal(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (float) – scalar value.
less_equal(x1: imfusion.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
less_equal(x1: imfusion.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
less_equal(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 <= x2), element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.log(x: Expression) Expression
Expression for natural logarithm, element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.max(x: Expression) ndarray[numpy.float64[m, 1]]
Return the expression for computing the list of the maximum elements of images, channel-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.maximum(*args, **kwargs)
Overloaded function.
maximum(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return element-wise maximum of arguments.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
maximum(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Return element-wise maximum of arguments.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImage
) –SharedImage
instance.
maximum(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Return element-wise maximum of arguments.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImageSet
) –SharedImageSet
instance.
maximum(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression
Return element-wise maximum of arguments.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (float) – scalar value.
maximum(x1: imfusion.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return element-wise maximum of arguments.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
maximum(x1: imfusion.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return element-wise maximum of arguments.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
maximum(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return element-wise maximum of arguments.
- Parameters:
x1 (float) – scalar value.
x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.mean(x: Expression) ndarray[numpy.float64[m, 1]]
Return the expression for computing a list of channel-wise average of image elements.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.min(x: Expression) ndarray[numpy.float64[m, 1]]
Return the expression for computing the list of the minimum elements of images, channel-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.minimum(*args, **kwargs)
Overloaded function.
minimum(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return element-wise minimum of arguments.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
minimum(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Return element-wise minimum of arguments.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImage
) –SharedImage
instance.
minimum(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Return element-wise minimum of arguments.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImageSet
) –SharedImageSet
instance.
minimum(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression
Return element-wise minimum of arguments.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (float) – scalar value.
minimum(x1: imfusion.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return element-wise minimum of arguments.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
minimum(x1: imfusion.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return element-wise minimum of arguments.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
minimum(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return element-wise minimum of arguments.
- Parameters:
x1 (float) – scalar value.
x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.multiply(*args, **kwargs)
Overloaded function.
multiply(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Multiplication, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
multiply(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Multiplication, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImage
) –SharedImage
instance.
multiply(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Multiplication, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImageSet
) –SharedImageSet
instance.
multiply(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression
Multiplication, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (float) – scalar value.
multiply(x1: imfusion.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Multiplication, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
multiply(x1: imfusion.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Multiplication, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
multiply(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Multiplication, element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.negative(x: Expression) Expression
Expression for numerical negative, element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.norm(x: Expression, order: object = 2) ndarray[numpy.float64[m, 1]]
Returns the expression for computing the norm of an image, channel-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.order (int, float, 'inf') – Order of the norm. Default is L2 norm.
- imfusion.imagemath.lazy.not_equal(*args, **kwargs)
Overloaded function.
not_equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
not_equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImage
) –SharedImage
instance.
not_equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImageSet
) –SharedImageSet
instance.
not_equal(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (float) – scalar value.
not_equal(x1: imfusion.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
not_equal(x1: imfusion.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
not_equal(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Return the truth value of (x1 != x2), element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.power(*args, **kwargs)
Overloaded function.
power(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
power(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImage
) –SharedImage
instance.
power(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImageSet
) –SharedImageSet
instance.
power(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (float) – scalar value.
power(x1: imfusion.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
power(x1: imfusion.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
power(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
The first argument is raised to powers of the second argument, element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.prod(x: Expression) ndarray[numpy.float64[m, 1]]
Return the expression for computing a list of channel-wise products of image elements.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.sign(x: Expression) Expression
Expression for element-wise indication of the sign of image elements.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.sin(x: Expression) Expression
Expression for sine, element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.sqrt(x: Expression) Expression
Expression for the square-root operation, element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.square(x: Expression) Expression
Expression for the square operation, element-wise.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.subtract(*args, **kwargs)
Overloaded function.
subtract(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Subtraction, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
subtract(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImage) -> imfusion.imagemath.lazy.Expression
Subtraction, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImage
) –SharedImage
instance.
subtract(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.SharedImageSet) -> imfusion.imagemath.lazy.Expression
Subtraction, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (
SharedImageSet
) –SharedImageSet
instance.
subtract(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression
Subtraction, element-wise.
- Parameters:
x1 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.x2 (float) – scalar value.
subtract(x1: imfusion.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Subtraction, element-wise.
- Parameters:
x1 (
SharedImage
) –SharedImage
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
subtract(x1: imfusion.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Subtraction, element-wise.
- Parameters:
x1 (
SharedImageSet
) –SharedImageSet
instance.x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
subtract(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression
Subtraction, element-wise.
- Parameters:
x1 (float) – scalar value.
x2 (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
- imfusion.imagemath.lazy.sum(x: Expression) ndarray[numpy.float64[m, 1]]
Return the expression for computing a list of channel-wise sum of image elements.
- Parameters:
x (
Expression
) –Expression
instance wrappingSharedImage
instance,SharedImageSet
instance, or scalar value.
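The functions in this module do not compute anything by themselves; they only build Expression objects that describe the computation and can be combined further. The following sketch is illustrative only: it assumes x is already an Expression wrapping a SharedImageSet, and it leaves out both how that initial Expression is obtained and how the final Expression is evaluated into an image, since neither is covered by the listings above.
import imfusion.imagemath.lazy as lazy
# Assumption: x is an existing lazy.Expression wrapping a SharedImageSet.
x = ...
# Compose lazy operations; nothing is computed yet.
mask = lazy.greater(x, 100.0)      # element-wise truth value of (x > 100)
masked = lazy.multiply(mask, x)    # keep only intensities above the threshold
expr = lazy.sqrt(masked)           # element-wise square root, still an Expression
# Channel-wise reductions such as sum/mean/min/max return NumPy vectors per their signatures.
means = lazy.mean(expr)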
imfusion.labels
This module offers a way of interacting with Labels projects from Python.
The central class in this module is the Project
class. It allows you to create a new local project or load an existing local or remote project:
import imfusion
from imfusion import labels
new_project = labels.Project('New Project', 'path/to/new/project/folder')
existing_project = labels.Project.load('path/to/existing/project')
remote_project = labels.Project.load('http://example.com', '1', 'username', 'password123')
From there you can add new tag definitions, annotation definitions and data to the project:
project.add_tag('NewTag', labels.TagKind.Bool)
project.add_labelmap_layer('NewLabelmap')
project.add_descriptor(imfusion.io.open('/path/to/image')[0])
The Project
instance is also the central way to access this kind of data:
new_tag = project.tags['NewTag'] # can also be indexed with an integer, i.e. tags[0]
new_labelmap = project.labelmap_layers['NewLabelmap'] # can also be indexed with an integer, i.e. labelmap_layers[0]
new_descriptor = project.descriptors[0]
The DataDescriptor
class represents an entry in the project’s database and can be used to access the entry’s metadata, tags and annotations.
The interface for accessing tags and annotations is the same as in Project
but also offers the additional value
attribute to get the value of the tag / annotation:
name = descriptor.name
shape = (descriptor.n_images, descriptor.n_channels, descriptor.n_slices, descriptor.height, descriptor.width)
new_tag = descriptor.tags['NewTag']
tag_value = descriptor.tags['NewTag'].value
labelmap = descriptor.labelmap_layers['NewLabelmap'].load()
roi = descriptor.roi
image = descriptor.load_image(crop_to_roi=True)
Note
Keep in mind that all modifications made to a local project are stored in memory and will only be saved to disk if you call Project.save()
.
Modifications to remote projects are applied immediately.
Alternatively, you can also use the Project
as a context manager:
with Project('SomeName', '/some/path') as project:
... # will automatically save the project when exiting the context if there was no exception
Warning
Changing annotation data is the only exception to this rule. It is written to disk immediately (see LabelMapLayer.save_new_data(), LandmarkLayer.save_new_data(), BoundingBoxLayer.save_new_data()).
- class imfusion.labels.BoundingBox
Bases:
pybind11_object
- property color
- property descriptor
- property index
- property name
- property project
- class imfusion.labels.BoundingBoxAccessor
Bases:
pybind11_object
- __getitem__(*args, **kwargs)
Overloaded function.
__getitem__(self: imfusion.labels._bindings.BoundingBoxAccessor, index: int) -> imfusion.labels._bindings.BoundingBox
Retrieve an entry from this
BoundingBoxAccessor
by its index.- Parameters:
index – Integer index of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.BoundingBoxAccessor, name: str) -> imfusion.labels._bindings.BoundingBox
Retrieve an entry from this
BoundingBoxAccessor
by its name.- Parameters:
name – Name of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.BoundingBoxAccessor, slice: slice) -> imfusion.labels._bindings.BoundingBoxAccessor
Retrieve multiple entries from this
BoundingBoxAccessor
using Python’s slice notation ([start:stop:step]
).- Parameters:
slice –
slice
instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.
__getitem__(self: imfusion.labels._bindings.BoundingBoxAccessor, selection: list[int]) -> imfusion.labels._bindings.BoundingBoxAccessor
Retrieve multiple entries from this
BoundingBoxAccessor
by using a list of indices.- Parameters:
selection – List of integer indices of the entries to be retrieved.
- __setitem__(*args, **kwargs)
Overloaded function.
__setitem__(self: imfusion.labels._bindings.BoundingBoxAccessor, index: int, value: object) -> None
Change an existing entry by index.
- Parameters:
index – Index of the entry to be changed.
value – Value to set at
index
.
__setitem__(self: imfusion.labels._bindings.BoundingBoxAccessor, name: str, value: object) -> None
Change an existing entry by name.
- Parameters:
name – Name of the entry to be changed.
value – Value to set at
name
.
__setitem__(self: imfusion.labels._bindings.BoundingBoxAccessor, index: slice, value: list) -> None
Change multiple entries denoted using Python’s slice notation (
[start:stop:step]
).
- size(self: BoundingBoxAccessor) int
- property names
List of the names of
BoundingBox
s available through thisBoundingBoxAccessor
- class imfusion.labels.BoundingBoxLayer
Bases:
pybind11_object
- add_annotation(self: BoundingBoxLayer, name: str, color: tuple[int, int, int] = (255, 255, 255)) BoundingBox
Define a new entry in this boundingbox layer. The definition consists only of the name; the actual box coordinates are stored in a BoxSet (see the example after the BoxSet class below).
- add_boundingbox()
Delegates to:
add_annotation()
- load(self: BoundingBoxLayer) object
- save_new_data(self: BoundingBoxLayer, value: object, lock_token: LockToken = LockToken(token='')) None
Change the data of this layer.
Warning
Beware that, unlike other modifications, new layer data is immediately written to disk, regardless of calls to
Project.save()
.
- property annotations
- property boundingboxes
- property descriptor
- property folder
- property id
- property index
- property name
- property project
- class imfusion.labels.BoundingBoxLayersAccessor
Bases:
pybind11_object
- __getitem__(*args, **kwargs)
Overloaded function.
__getitem__(self: imfusion.labels._bindings.BoundingBoxLayersAccessor, index: int) -> imfusion.labels._bindings.BoundingBoxLayer
Retrieve an entry from this
BoundingBoxLayersAccessor
by its index.- Parameters:
index – Integer index of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.BoundingBoxLayersAccessor, name: str) -> imfusion.labels._bindings.BoundingBoxLayer
Retrieve an entry from this
BoundingBoxLayersAccessor
by its name.- Parameters:
name – Name of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.BoundingBoxLayersAccessor, slice: slice) -> imfusion.labels._bindings.BoundingBoxLayersAccessor
Retrieve multiple entries from this
BoundingBoxLayersAccessor
using Python’s slice notation ([start:stop:step]
).- Parameters:
slice –
slice
instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.
__getitem__(self: imfusion.labels._bindings.BoundingBoxLayersAccessor, selection: list[int]) -> imfusion.labels._bindings.BoundingBoxLayersAccessor
Retrieve multiple entries from this
BoundingBoxLayersAccessor
by using a list of indices.- Parameters:
selection – List of integer indices of the entries to be retrieved.
- size(self: BoundingBoxLayersAccessor) int
- property active
Return the currently active layer or None if no layer is active.
The active layer is usually only relevant when using Python inside the application. It can be set by the user to define the layer that can be modified with e.g. the brush tool.
The active layer can currently only be changed in the UI, not through the Python API.
- property names
List of the names of
BoundingBoxLayer
s available through thisBoundingBoxLayersAccessor
- class imfusion.labels.BoxSet(self: BoxSet, names: list[str], n_frames: int)
Bases:
pybind11_object
- add(self: BoxSet, type: str, frame: int, top_left: ndarray[numpy.float64[3, 1]], lower_right: ndarray[numpy.float64[3, 1]]) None
Add a box to the set.
- asdict(self: BoxSet) dict
Convert this AnnotationSet into a dict. Modifying the dict does not reflect on the AnnotationSet.
- static from_descriptor(descriptor: Descriptor, layer_name: str) BoxSet
Create a BoxSet tailored to a specific annotation layer in a descriptor.
- type(*args, **kwargs)
Overloaded function.
type(self: imfusion.labels._bindings.BoxSet, type: str) -> imfusion.labels._bindings.BoxSet
Select only the points that belong to the specified type.
type(self: imfusion.labels._bindings.BoxSet, type: int) -> imfusion.labels._bindings.BoxSet
Select only the points that belong to the specified type.
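A minimal sketch of annotating bounding boxes with the classes above. It assumes project is an open labels.Project whose descriptors have a bounding-box layer named 'Lesions' with an annotation type called 'Lesion' (both names illustrative); the exact coordinate convention of top_left/lower_right follows the project setup and is not specified here.
import numpy as np
from imfusion import labels
descriptor = project.descriptors[0]
layer = descriptor.boundingbox_layers['Lesions']
# Build a BoxSet matching this descriptor/layer and add one box on frame 0.
boxes = labels.BoxSet.from_descriptor(descriptor, 'Lesions')
boxes.add('Lesion', 0,
          np.array([10.0, 20.0, 5.0]),    # top-left corner
          np.array([40.0, 60.0, 15.0]))   # lower-right corner
layer.save_new_data(boxes)  # new layer data is written to disk immediately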
- class imfusion.labels.DataType(self: DataType, value: int)
Bases:
pybind11_object
Enum for specifying what is considered valid data in the project.
Members:
SingleChannelImages : Consider 2D greyscale images as valid data.
MultiChannelImages : Consider 2D color images as valid data.
SingleChannelVolumes : Consider 3D greyscale images as valid data.
MultiChannelVolumes : Consider 3D color images as valid data.
AnyDataType : Consider any kind of image data as valid data.
- AnyDataType = DataType.AnyDataType
- MultiChannelImages = DataType.MultiChannelImages
- MultiChannelVolumes = DataType.MultiChannelVolumes
- SingleChannelImages = DataType.SingleChannelImages
- SingleChannelVolumes = DataType.SingleChannelVolumes
- property name
- property value
- class imfusion.labels.Descriptor
Bases:
pybind11_object
Class representing an entry in the project’s database. It holds, amongst other things, meta data about the image, annotations and the location of the image.
- consider_frame_annotated(self: Descriptor, frame: int, annotated: bool) None
- is_considered_annotated(self: Descriptor, frame: object = None) bool
- load_image(self: Descriptor, crop_to_roi: bool) SharedImageSet
- load_thumbnail(self: Descriptor, generate: bool = True) SharedImageSet
Return the image thumbnail as a SharedImageSet.
- Parameters:
generate (bool) – Whether to generate the thumbnail if it’s missing. If this is False, the method will return None for missing thumbnails.
- lock(self: Descriptor) LockToken
- property boundingbox_layers
- property byte_size
- property comments
- property grouping
- property has_data
- property height
- property identifier
- property import_time
- property is_locked
- property labelmap_layers
- property landmark_layers
- property latest_edit_time
- property load_path
- property modality
- property n_channels
- property n_images
- property n_slices
- property name
- property original_data_path
- property own_copy
- property patient_name
- property project
- property region_of_interest
- property roi
- property scale
- property series_instance_uid
- property shift
- property spacing
- property sub_file_id
- property tags
- property thumbnail_path
- property top_down
- property type
- property width
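A short sketch of typical Descriptor usage based on the methods listed above, assuming project is an open labels.Project with at least one entry:
# Assumption: project is an already opened labels.Project.
descriptor = project.descriptors[0]
# Load the full image, or only the region of interest if one is defined.
image = descriptor.load_image(crop_to_roi=False)
thumb = descriptor.load_thumbnail()          # generated on demand if missing
# Frame-level annotation status can be queried and changed explicitly.
if not descriptor.is_considered_annotated(0):
    descriptor.consider_frame_annotated(0, True)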
- class imfusion.labels.GeometryKind(self: GeometryKind, value: int)
Bases:
pybind11_object
The kind of geometry that can be used for annotating inside a GEOMETRIC_ANNOTATION layer.
Members:
LANDMARK
BOUNDING_BOX
- BOUNDING_BOX = <GeometryKind.BOUNDING_BOX: 1>
- LANDMARK = <GeometryKind.LANDMARK: 0>
- property name
- property value
- class imfusion.labels.Label(self: Label, name: str, kind: LayerKind, color: tuple[int, int, int] | None = None, value: int | None = None)
Bases:
pybind11_object
A single Label of
Layer
that defines its name and color among other things.- property color
- property geometry
- property id
- property kind
- property name
- property value
- class imfusion.labels.LabelLegacy
Bases:
pybind11_object
- property color
- property descriptor
- property index
- property name
- property project
- property value
- class imfusion.labels.LabelMapLayer
Bases:
pybind11_object
- add_annotation(self: LabelMapLayer, name: str, value: int, color: tuple[int, int, int] | None = None) LabelLegacy
Define a new entry in this labelmap layer. A label is represented by a name and a corresponding integer value for designating voxels in the labelmap.
- add_label()
Delegates to:
add_annotation()
- create_empty_labelmap(self: LabelMapLayer) object
Create an empty labelmap that is compatible with this layer. The labelmap will have the same size and meta data as the image. The labelmap is completely independent of the layer and does not replace the existing labelmap of the layer! To use this labelmap for the layer, call
LabelMapLayer.save_new_data()
.
- has_data(self: LabelMapLayer) bool
Return whether the labelmap exists and is not empty.
- load(self: LabelMapLayer) object
Load the labelmap as a SharedImageSet. If the labelmap is completely empty, None is returned. To create a new labelmap use
LabelMapLayer.create_empty_labelmap()
.
- path(self: LabelMapLayer) str
Returns the path where the labelmap is stored on disk. Empty for remote projects.
- save_new_data(self: LabelMapLayer, value: object, lock_token: LockToken = LockToken(token='')) None
Change the data of this layer.
Warning
Beware that, unlike other modifications, new layer data is immediately written to disk, regardless of calls to
Project.save()
.
- thumbnail_path(self: LabelMapLayer) str
Returns the path where the labelmap thumbnail is stored on disk. Empty for remote projects.
- property annotations
- property descriptor
- property folder
- property id
- property index
- property labels
- property name
- property project
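A minimal sketch of the labelmap workflow described in create_empty_labelmap() and save_new_data() above, assuming descriptor is a labels.Descriptor whose project defines a labelmap layer named 'NewLabelmap' (name illustrative):
layer = descriptor.labelmap_layers['NewLabelmap']
if not layer.has_data():
    # Create a labelmap with the same size/metadata as the image; it is not
    # attached to the layer until save_new_data() is called.
    labelmap = layer.create_empty_labelmap()
    # ... fill the labelmap voxels here, e.g. with the output of a segmentation ...
    layer.save_new_data(labelmap)   # written to disk immediately
else:
    labelmap = layer.load()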
- class imfusion.labels.LabelMapsAccessor
Bases:
pybind11_object
- __getitem__(*args, **kwargs)
Overloaded function.
__getitem__(self: imfusion.labels._bindings.LabelMapsAccessor, index: int) -> imfusion.labels._bindings.LabelMapLayer
Retrieve an entry from this
LabelMapsAccessor
by its index.- Parameters:
index – Integer index of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.LabelMapsAccessor, name: str) -> imfusion.labels._bindings.LabelMapLayer
Retrieve an entry from this
LabelMapsAccessor
by its name.- Parameters:
name – Name of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.LabelMapsAccessor, slice: slice) -> imfusion.labels._bindings.LabelMapsAccessor
Retrieve multiple entries from this
LabelMapsAccessor
using Python’s slice notation ([start:stop:step]
).- Parameters:
slice –
slice
instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.
__getitem__(self: imfusion.labels._bindings.LabelMapsAccessor, selection: list[int]) -> imfusion.labels._bindings.LabelMapsAccessor
Retrieve multiple entries from this
LabelMapsAccessor
by using a list of indices.- Parameters:
selection – List of integer indices of the entries to be retrieved.
- size(self: LabelMapsAccessor) int
- property active
Return the currently active layer or None if no layer is active.
The active layer is usually only relevant when using Python inside the application. It can be set by the user to define the layer that can be modified with e.g. the brush tool.
The active layer can currently only be changed in the UI, not through the Python API.
- property names
List of the names of
LabelMap
s available through thisLabelMapsAccessor
- class imfusion.labels.LabelsAccessor
Bases:
pybind11_object
Like a
list
ofLabel
, but allows indexing by index or name.- __getitem__(*args, **kwargs)
Overloaded function.
__getitem__(self: imfusion.labels._bindings.LabelsAccessor, index: int) -> imfusion.labels._bindings.Label
Retrieve an entry from this
LabelsAccessor
by its index.- Args:
index: Integer index of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.LabelsAccessor, name: str) -> imfusion.labels._bindings.Label
Retrieve an entry from this
LabelsAccessor
by its name.- Args:
name: Name of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.LabelsAccessor, slice: slice) -> imfusion.labels._bindings.LabelsAccessor
Retrieve multiple entries from this
LabelsAccessor
using Python’s slice notation ([start:stop:step]
).- Args:
slice:
slice
instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.
- property names
List of the names of
Label
s available through thisLabelsAccessor
- class imfusion.labels.LabelsAccessorLegacy
Bases:
pybind11_object
- __getitem__(*args, **kwargs)
Overloaded function.
__getitem__(self: imfusion.labels._bindings.LabelsAccessorLegacy, index: int) -> imfusion.labels._bindings.LabelLegacy
Retrieve an entry from this
LabelsAccessorLegacy
by its index.- Parameters:
index – Integer index of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.LabelsAccessorLegacy, name: str) -> imfusion.labels._bindings.LabelLegacy
Retrieve an entry from this
LabelsAccessorLegacy
by its name.- Parameters:
name – Name of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.LabelsAccessorLegacy, slice: slice) -> imfusion.labels._bindings.LabelsAccessorLegacy
Retrieve multiple entries from this
LabelsAccessorLegacy
using Python’s slice notation ([start:stop:step]
).- Parameters:
slice –
slice
instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.
__getitem__(self: imfusion.labels._bindings.LabelsAccessorLegacy, selection: list[int]) -> imfusion.labels._bindings.LabelsAccessorLegacy
Retrieve multiple entries from this
LabelsAccessorLegacy
by using a list of indices.- Parameters:
selection – List of integer indices of the entries to be retrieved.
- __setitem__(*args, **kwargs)
Overloaded function.
__setitem__(self: imfusion.labels._bindings.LabelsAccessorLegacy, index: int, value: object) -> None
Change an existing entry by index.
- Parameters:
index – Index of the entry to be changed.
value – Value to set at
index
.
__setitem__(self: imfusion.labels._bindings.LabelsAccessorLegacy, name: str, value: object) -> None
Change an existing entry by name.
- Parameters:
name – Name of the entry to be changed.
value – Value to set at
name
.
__setitem__(self: imfusion.labels._bindings.LabelsAccessorLegacy, index: slice, value: list) -> None
Change multiple entries denoted using Python’s slice notation (
[start:stop:step]
).
- size(self: LabelsAccessorLegacy) int
- property names
List of the names of
LabelLegacy
s available through thisLabelsAccessorLegacy
- class imfusion.labels.Landmark
Bases:
pybind11_object
- property color
- property descriptor
- property index
- property name
- property project
- class imfusion.labels.LandmarkLayer
Bases:
pybind11_object
- add_annotation(self: LandmarkLayer, name: str, color: tuple[int, int, int] = (255, 255, 255)) Landmark
Define a new entry in this landmark layer. The definition consists only of the name; the actual coordinates are stored in a LandmarkSet (see the example after the LandmarkSet class below).
- add_landmark()
Delegates to:
add_annotation()
- load(self: LandmarkLayer) object
- save_new_data(self: LandmarkLayer, value: object, lock_token: LockToken = LockToken(token='')) None
Change the data of this layer.
Warning
Beware that, unlike other modifications, new layer data is immediately written to disk, regardless of calls to
Project.save()
.
- property annotations
- property descriptor
- property folder
- property id
- property index
- property landmarks
- property name
- property project
- class imfusion.labels.LandmarkLayersAccessor
Bases:
pybind11_object
- __getitem__(*args, **kwargs)
Overloaded function.
__getitem__(self: imfusion.labels._bindings.LandmarkLayersAccessor, index: int) -> imfusion.labels._bindings.LandmarkLayer
Retrieve an entry from this
LandmarkLayersAccessor
by its index.- Parameters:
index – Integer index of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.LandmarkLayersAccessor, name: str) -> imfusion.labels._bindings.LandmarkLayer
Retrieve an entry from this
LandmarkLayersAccessor
by its name.- Parameters:
name – Name of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.LandmarkLayersAccessor, slice: slice) -> imfusion.labels._bindings.LandmarkLayersAccessor
Retrieve multiple entries from this
LandmarkLayersAccessor
using Python’s slice notation ([start:stop:step]
).- Parameters:
slice –
slice
instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.
__getitem__(self: imfusion.labels._bindings.LandmarkLayersAccessor, selection: list[int]) -> imfusion.labels._bindings.LandmarkLayersAccessor
Retrieve multiple entries from this
LandmarkLayersAccessor
by using a list of indices.- Parameters:
selection – List of integer indices of the entries to be retrieved.
- size(self: LandmarkLayersAccessor) int
- property active
Return the currently active layer or None if no layer is active.
The active layer is usually only relevant when using Python inside the application. It can be set by the user to define the layer that can be modified with e.g. the brush tool.
The active layer can currently only be changed in the UI, not through the Python API.
- property names
List of the names of
LandmarkLayer
s available through thisLandmarkLayersAccessor
- class imfusion.labels.LandmarkSet(self: LandmarkSet, names: list[str], n_frames: int)
Bases:
pybind11_object
- add(self: LandmarkSet, type: str, frame: int, world: ndarray[numpy.float64[3, 1]]) None
Add a keypoint to the set.
- asdict(self: LandmarkSet) dict
Convert this AnnotationSet into a dict. Modifying the dict does not reflect on the AnnotationSet.
- frame(self: LandmarkSet, which: int) LandmarkSet
Select only the points that belong to the specified frame.
- static from_descriptor(descriptor: Descriptor, layer_name: str) LandmarkSet
Create a LandmarkSet tailored to a specific annotation layer in a descriptor.
- type(*args, **kwargs)
Overloaded function.
type(self: imfusion.labels._bindings.LandmarkSet, type: str) -> imfusion.labels._bindings.LandmarkSet
Select only the points that belong to the specified type.
type(self: imfusion.labels._bindings.LandmarkSet, type: int) -> imfusion.labels._bindings.LandmarkSet
Select only the points that belong to the specified type.
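A minimal sketch of annotating landmarks with the classes above, assuming descriptor is a labels.Descriptor with a landmark layer named 'Anatomy' that defines a landmark called 'ApexPoint' (both names illustrative):
import numpy as np
from imfusion import labels
layer = descriptor.landmark_layers['Anatomy']
# Build a LandmarkSet for this descriptor/layer and add one point on frame 0.
points = labels.LandmarkSet.from_descriptor(descriptor, 'Anatomy')
points.add('ApexPoint', 0, np.array([12.5, -4.0, 30.0]))  # world coordinates
layer.save_new_data(points)  # written to disk immediately
stored = layer.load()        # retrieve whatever is currently stored for this layer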
- class imfusion.labels.LandmarksAccessor
Bases:
pybind11_object
- __getitem__(*args, **kwargs)
Overloaded function.
__getitem__(self: imfusion.labels._bindings.LandmarksAccessor, index: int) -> imfusion.labels._bindings.Landmark
Retrieve an entry from this
LandmarksAccessor
by its index.- Parameters:
index – Integer index of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.LandmarksAccessor, name: str) -> imfusion.labels._bindings.Landmark
Retrieve an entry from this
LandmarksAccessor
by its name.- Parameters:
name – Name of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.LandmarksAccessor, slice: slice) -> imfusion.labels._bindings.LandmarksAccessor
Retrieve multiple entries from this
LandmarksAccessor
using Python’s slice notation ([start:stop:step]
).- Parameters:
slice –
slice
instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.
__getitem__(self: imfusion.labels._bindings.LandmarksAccessor, selection: list[int]) -> imfusion.labels._bindings.LandmarksAccessor
Retrieve multiple entries from this
LandmarksAccessor
by using a list of indices.- Parameters:
selection – List of integer indices of the entries to be retrieved.
- __setitem__(*args, **kwargs)
Overloaded function.
__setitem__(self: imfusion.labels._bindings.LandmarksAccessor, index: int, value: object) -> None
Change an existing entry by index.
- Parameters:
index – Index of the entry to be changed.
value – Value to set at
index
.
__setitem__(self: imfusion.labels._bindings.LandmarksAccessor, name: str, value: object) -> None
Change an existing entry by name.
- Parameters:
name – Name of the entry to be changed.
value – Value to set at
name
.
__setitem__(self: imfusion.labels._bindings.LandmarksAccessor, index: slice, value: list) -> None
Change multiple entries denoted using Python’s slice notation (
[start:stop:step]
).
- size(self: LandmarksAccessor) int
- property names
List of the names of
Landmark
s available through thisLandmarksAccessor
- class imfusion.labels.Layer(self: Layer, name: str, kind: LayerKind, labels: list[Label] = [])
Bases:
pybind11_object
A single layer that defines which labels can be annotated for each
Descriptor
.- property id
- property kind
- property labels
- property name
- class imfusion.labels.LayerKind(self: LayerKind, value: int)
Bases:
pybind11_object
The kind of a layer defines what can be labelled in that layer.
Members:
PIXELWISE
BOUNDINGBOX
LANDMARK
GEOMETRIC_ANNOTATION
- BOUNDINGBOX = <LayerKind.BOUNDINGBOX: 1>
- GEOMETRIC_ANNOTATION = <LayerKind.GEOMETRIC_ANNOTATION: 3>
- LANDMARK = <LayerKind.LANDMARK: 2>
- PIXELWISE = <LayerKind.PIXELWISE: 0>
- property name
- property value
- class imfusion.labels.LayersAccessor
Bases:
pybind11_object
Like a
list
ofLayer
, but allows indexing by index or name.- __getitem__(*args, **kwargs)
Overloaded function.
__getitem__(self: imfusion.labels._bindings.LayersAccessor, index: int) -> imfusion.labels._bindings.Layer
Retrieve an entry from this
LayersAccessor
by its index.- Args:
index: Integer index of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.LayersAccessor, name: str) -> imfusion.labels._bindings.Layer
Retrieve an entry from this
LayersAccessor
by its name.- Args:
name: Name of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.LayersAccessor, slice: slice) -> imfusion.labels._bindings.LayersAccessor
Retrieve multiple entries from this
LayersAccessor
using Python’s slice notation ([start:stop:step]
).- Args:
slice:
slice
instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.
- property names
List of the names of
Layer
s available through thisLayersAccessor
- class imfusion.labels.LockToken
Bases:
pybind11_object
A token representing a lock of a DataDescriptor.
Only the holder of the token can modify the layers of a locked
Descriptor
. Locking is only supported in remote projects. Local projects ignore the locking mechanism. A LockToken can be acquired throughlock()
. It can be used as a context manager so that it is unlocked automatically when exiting the context. Tokens expire automatically after a certain time depending on the server (default: after 5 minutes).
descriptor = project.descriptors[0]
with descriptor.lock() as lock:
    ...
- class imfusion.labels.Project(self: imfusion.labels._bindings.Project, name: str, project_path: str, data_type: imfusion.labels._bindings.DataType = <DataType.AnyDataType: 15>)
Bases:
pybind11_object
Class that represents a Labels project. A project holds all information regarding defined annotations and data samples
Create a new local project. Doing so will also create a new project folder on disk.
- Parameters:
- add_boundingbox_layer(self: Project, name: str) BoundingBoxLayer
Define a new boundingbox layer for this project.
- Parameters:
name (str) – Name of the new boundingbox layer.
- add_descriptor(*args, **kwargs)
Overloaded function.
add_descriptor(self: imfusion.labels._bindings.Project, shared_image_set: imfusion.SharedImageSet, name: str = ‘’, own_copy: bool = False) -> object
Create a new entry in the project’s database from a given image. For local projects, the descriptor of the dataset is returned immediately. For remote projects, only the identifier of the descriptor is returned; the actual dataset will only become available after a call to sync().
- Parameters:
name (str) – Name of the new database entry.
shared_image_set (SharedImageSet) – Image for which the new entry will be created.
own_copy (bool) – If True, Labels will save a copy of the image in the project folder. Automatically set to True if the image does not have a DataSourceComponent, as this implies that it was created rather than loaded.
add_descriptor(self: imfusion.labels._bindings.Project, name: str, shared_image_set: imfusion.SharedImageSet, own_copy: bool = False) -> object
Create a new entry in the project’s database from a given image. For local projects, the descriptor of the dataset is returned immediately. For remote projects, only the identifier of the descriptor is returned; the actual dataset will only become available after a call to sync().
- Parameters:
name (str) – Name of the new database entry.
shared_image_set (SharedImageSet) – Image for which the new entry will be created.
own_copy (bool) – If True, Labels will save a copy of the image in the project folder. Automatically set to True if the image does not have a DataSourceComponent, as this implies that it was created rather than loaded.
- add_labelmap_layer(self: Project, name: str) LabelMapLayer
Define a new labelmap layer for this project.
- Parameters:
name (str) – Name of the new labelmap layer.
- add_landmark_layer(self: Project, name: str) LandmarkLayer
Define a new landmark layer for this project.
- Parameters:
name (str) – Name of the new landmark layer.
- add_tag(self: Project, name: str, kind: TagKind, color: tuple[int, int, int] = (255, 255, 255), options: list[str] = []) TagLegacy
Define a new tag for this project.
- static create(settings: ProjectSettings, path: str = '', username: str = '', password: str = '') Project
Create a new project with the given settings.
path
can be either a path or URL.Passing a folder will create a local project. The folder must be empty otherwise an exception is raised.
When passing a http(s) URL, it must point to the base URL of a Labels server (e.g. https://example.com and not https://example.com/api/v1/projects). Additionally, a valid username and password must be specified. The server might reject a project, e.g. because a project with the same name already exists. In this case, an exception is raised.
- delete_descriptors(self: Project, descriptors: list[Descriptor]) None
Remove the given descriptors from the project.
- Parameters:
descriptors (list[Descriptor]) – list of descriptors that should be deleted from the project.
- edit(self: Project, arg0: ProjectSettings) None
Edit the project settings by applying the given settings.
Editing a project is a potentially destructive action that cannot be reverted.
When adding new tags, layers or labels, their “id” field should be empty (an id will be assigned automatically).
Warning
Remote projects are not edited in place at the moment. After calling this method, you need to reload the project from the server; otherwise, the project settings will be out of sync with the server.
- static load(path: str, project_id: str | None = None, username: str | None = None, password: str | None = None) Project
Load an existing project from disk or from a remote server.
- refresh_access_token(self: Project) None
Refresh the access token of a remote project. Access tokens expire after a predefined period of time, and need to be refreshed in order to make further requests.
- settings(self: Project) ProjectSettings
Return the current settings of a project.
The settings are not connected to the project, so changing the settings object does not change the project. Use
edit()
to apply new settings.
- sync(self: Project) int
Synchronize the local state of a remote project. Any “event” that occurred between the last sync() call and this one is replayed locally, such that the local Project reflects the last known state of the project on the server. An “event” refers to any change made to the project data by any client (including this one), such as a dataset being added or deleted, a new label map being uploaded, a tag value being changed, etc.
Returns the number of events applied to the project.
- property boundingbox_layers
Returns a
BoundingBoxLayersAccessor
to the boundingbox layers defined in the project.
- property configuration
- property data_type
- property descriptors
- property grouping_hierachy
- property id
Return the unique id of a remote project.
- property is_local
Returns whether the project is local
- property is_remote
Returns whether the project is remote
- property labelmap_layers
Returns an Accessor to the labelmap layers defined in the project.
- property landmark_layers
Returns a
LandmarkLayersAccessor
to the landmark layers defined in the project.
- property path
- property tags
Returns an Accessor to the tags defined in the project.
- class imfusion.labels.ProjectSettings(self: ProjectSettings, name: str, tags: list[Tag] = [], layers: list[Layer] = [])
Bases:
pybind11_object
Contains the individual elements that make up a project definition.
- add_layer(self: ProjectSettings, arg0: Layer) None
Add a new layer to the settings.
- add_tag(self: ProjectSettings, arg0: Tag) None
Add a new tag to the settings.
- remove_layer(self: ProjectSettings, arg0: Layer) None
- remove_tag(self: ProjectSettings, arg0: Tag) None
- property layers
- property name
- property tags
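A sketch of building a project definition from these classes and applying later changes through settings()/edit(); all names and the path below are illustrative:
from imfusion import labels
settings = labels.ProjectSettings(
    'SpineStudy',
    tags=[labels.Tag('Reviewed', labels.TagKind.Bool)],
    layers=[labels.Layer('Vertebrae', labels.LayerKind.PIXELWISE,
                         labels=[labels.Label('L1', labels.LayerKind.PIXELWISE, value=1)])],
)
project = labels.Project.create(settings, '/path/to/new/project/folder')
# Later edits go through the same settings type: fetch, modify, apply.
updated = project.settings()
updated.add_tag(labels.Tag('Modality', labels.TagKind.Enum, options=['CT', 'MR']))
project.edit(updated)   # ids of newly added tags/layers/labels must be left empty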
- class imfusion.labels.Tag(self: Tag, name: str, kind: TagKind, color: tuple[int, int, int] | None = None, options: list = [], readonly: bool = False)
Bases:
pybind11_object
A Tag definition. Tag values can be set on a
Descriptor
according to this definition.- property color
- property id
- property kind
- property name
- property options
- property readonly
- class imfusion.labels.TagKind(self: TagKind, value: int)
Bases:
pybind11_object
Enum for differentiating different kinds of tags.
Members:
Bool : Tag that stores a single boolean value.
Enum : Tag that stores a list of string options.
Float : Tag that stores a single float value.
- Bool = <TagKind.Bool: 0>
- Enum = <TagKind.Enum: 1>
- Float = <TagKind.Float: 2>
- property name
- property value
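Example (a sketch of defining tags and assembling project settings; the names, options, and color are arbitrary):
>>> from imfusion import labels
>>> grade = labels.Tag("grade", labels.TagKind.Enum, options=["low", "high"])
>>> reviewed = labels.Tag("reviewed", labels.TagKind.Bool, color=(0, 255, 0))
>>> settings = labels.ProjectSettings("Demo Project", tags=[grade, reviewed])
>>> settings.add_tag(labels.Tag("confidence", labels.TagKind.Float))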
- class imfusion.labels.TagLegacy
Bases:
pybind11_object
- add_option(self: TagLegacy, option: str) None
Add a new value option for this tag (only works with enum tags).
- Parameters:
option – New option to be added to this tag.
- property color
- property descriptor
- property id
- property index
- property kind
- property locked
- property name
- property options
- property project
- property value
- class imfusion.labels.TagsAccessor
Bases:
pybind11_object
Like a
list
ofTag
, but allows indexing by index or name.
- __getitem__(*args, **kwargs)
Overloaded function.
__getitem__(self: imfusion.labels._bindings.TagsAccessor, index: int) -> imfusion.labels._bindings.Tag
Retrieve an entry from this
TagsAccessor
by its index.
- Args:
index: Integer index of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.TagsAccessor, name: str) -> imfusion.labels._bindings.Tag
Retrieve an entry from this
TagsAccessor
by its name.
- Args:
name: Name of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.TagsAccessor, slice: slice) -> imfusion.labels._bindings.TagsAccessor
Retrieve multiple entries from this
TagsAccessor
using Python’s slice notation ([start:stop:step]
).
- Args:
slice:
slice
instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.
- property names
List of the names of
Tag
s available through this TagsAccessor
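Example (a sketch, assuming project is a loaded Project whose settings define a tag named “grade”):
>>> tags = project.tags      # TagsAccessor
>>> tags[0]                  # access by index
>>> tags["grade"]            # access by name
>>> tags[0:2]                # slicing returns another TagsAccessor
>>> tags.names               # list of tag names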
- class imfusion.labels.TagsAccessorLegacy
Bases:
pybind11_object
- __getitem__(*args, **kwargs)
Overloaded function.
__getitem__(self: imfusion.labels._bindings.TagsAccessorLegacy, index: int) -> imfusion.labels._bindings.TagLegacy
Retrieve an entry from this
TagsAccessorLegacy
by its index.- Parameters:
index – Integer index of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.TagsAccessorLegacy, name: str) -> imfusion.labels._bindings.TagLegacy
Retrieve an entry from this
TagsAccessorLegacy
by its name.- Parameters:
name – Name of the entry to be retrieved.
__getitem__(self: imfusion.labels._bindings.TagsAccessorLegacy, slice: slice) -> imfusion.labels._bindings.TagsAccessorLegacy
Retrieve multiple entries from this
TagsAccessorLegacy
using Python’s slice notation ([start:stop:step]
).- Parameters:
slice –
slice
instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.
__getitem__(self: imfusion.labels._bindings.TagsAccessorLegacy, selection: list[int]) -> imfusion.labels._bindings.TagsAccessorLegacy
Retrieve multiple entries from this
TagsAccessorLegacy
by using a list of indices.- Parameters:
selection – List of integer indices of the entries to be retrieved.
- __setitem__(*args, **kwargs)
Overloaded function.
__setitem__(self: imfusion.labels._bindings.TagsAccessorLegacy, index: int, value: object) -> None
Change an existing entry by index.
- Parameters:
index – Index of the entry to be changed.
value – Value to set at
index
.
__setitem__(self: imfusion.labels._bindings.TagsAccessorLegacy, name: str, value: object) -> None
Change an existing entry by name.
- Parameters:
name – Name of the entry to be changed.
value – Value to set at
name
.
__setitem__(self: imfusion.labels._bindings.TagsAccessorLegacy, index: slice, value: list) -> None
Change multiple entries denoted using Python’s slice notation (
[start:stop:step]
).
- size(self: TagsAccessorLegacy) int
- property names
List of the names of
TagLegacy
s available through this TagsAccessorLegacy
- imfusion.labels.wraps(wrapped, assigned=('__module__', '__name__', '__qualname__', '__doc__', '__annotations__', '__type_params__'), updated=('__dict__',))
Decorator factory to apply update_wrapper() to a wrapper function
Returns a decorator that invokes update_wrapper() with the decorated function as the wrapper argument and the arguments to wraps() as the remaining arguments. Default arguments are as for update_wrapper(). This is a convenience function to simplify applying partial() to update_wrapper().
imfusion.machinelearning
Submodules containing routines for pre- and post-processing data to feed to a ML training framework.
- class imfusion.machinelearning.AbstractOperation(self: imfusion.machinelearning.Operation, name: str, processing_policy: imfusion.machinelearning.Operation.ProcessingPolicy = <ProcessingPolicy.EVERYTHING_BUT_LABELS: 0>, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None)
Bases:
Operation
- class imfusion.machinelearning.AddCenterBoxOperation(*args, **kwargs)
Bases:
Operation
Add an additional channel to the input image with a binary box at its center. The purpose of this operation is to give location information to the model.
- Parameters:
box_half_width – Half-width of the box in pixels.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.AddCenterBoxOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.AddCenterBoxOperation, box_half_width: int, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.AddDegradedLabelAsChannelOperation(*args, **kwargs)
Bases:
Operation
Append a channel to the image that contains a degraded version of the label. Given provided blob coordinates, the channel is zero except for at blobs at specified locations. The nonzero values are positive/negative based on whether the values are inside/outside of a label that has been eroded/dilated based on the
label_dilation
parameter.
- Parameters:
blob_radius – Radius of each blob, in pixel coordinates. Default: 5.0
invert – Extra channel is positive/negative based on the label values except for at the blobs, where it is zero. Default: False
blob_coordinates – Centers of the blobs in pixel coordinates. Default: []
only_positive – If true, output channel is clamped to zero from below. Default: False
label_dilation – The dilation (if positive) or erosion (if negative), none if zero. Default: 0.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.AddDegradedLabelAsChannelOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.AddDegradedLabelAsChannelOperation, blob_radius: float = 5.0, invert: bool = False, blob_coordinates: list[numpy.ndarray[numpy.float64[3, 1]]] = [], only_positive: bool = False, label_dilation: float = 0.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.AddPixelwisePredictionChannelOperation(*args, **kwargs)
Bases:
Operation
Run an existing pixelwise model and add its result to the input image as additional channels. The prediction is automatically resampled to the input image resolution.
- Parameters:
config_path – path to the YAML configuration file of the pixelwise model
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.AddPixelwisePredictionChannelOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.AddPixelwisePredictionChannelOperation, config_path: str, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.AddPositionChannelOperation(*args, **kwargs)
Bases:
Operation
Add additional channels containing the positions of the pixels. Executes the AddPositionAsChannelAlgorithm internally and uses the same configuration (parameter names and values).
- Parameters:
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.AddPositionChannelOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.AddPositionChannelOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.AddRandomNoiseOperation(*args, **kwargs)
Bases:
Operation
Apply a pixelwise random noise to the image intensities.
For type == "uniform": noise is drawn in \([-\textnormal{intensity}, \textnormal{intensity}]\).
For type == "gaussian": noise is drawn from a Gaussian with zero mean and standard deviation equal to \(\textnormal{intensity}\).
For type == "gamma": noise is drawn from a Gamma distribution with \(k = \theta = \textnormal{intensity}\) (note that this noise has a mean of 1.0 so it is biased).
For type == "shot": noise is drawn from a Gaussian with zero mean and standard deviation equal to \(\textnormal{intensity} * \sqrt{\textnormal{pixel_value}}\).
- Parameters:
type – Distribution of the noise (‘uniform’, ‘gaussian’, ‘gamma’, ‘shot’). Default: ‘uniform’
intensity – Value related to the standard deviation of the generated noise. Default: 0.2
probability – Value in [0.0, 1.0] indicating the probability of this operation to be performed. Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.AddRandomNoiseOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.AddRandomNoiseOperation, type: str = ‘uniform’, intensity: float = 0.2, probability: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
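Example (a sketch of constructing the operation with explicit parameters; the keyword-only arguments device and seed are shared by all operations):
>>> import imfusion.machinelearning as ml
>>> noise = ml.AddRandomNoiseOperation(type="gaussian", intensity=0.1, probability=0.5,
...                                    seed=42, device=ml.ComputingDevice.FORCE_CPU)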
- class imfusion.machinelearning.AdjustShiftScaleOperation(*args, **kwargs)
Bases:
Operation
Apply a shift and scale to each channel of the input image. If shift and scale are vectors with multiple values, then for each channel c, \(\textnormal{output}_c = (\textnormal{input}_c + \textnormal{shift}_c) / \textnormal{scale}_c\). If shift and scale have a single value, then for each channel c, \(\textnormal{output}_c = (\textnormal{input}_c + \textnormal{shift}) / \textnormal{scale}\).
- Parameters:
shift – Shift parameters as double (one value per channel, or one single value for all channels). Default: [0.0]
scale – Scaling parameter as double (one value per channel, or one single value for all channels). Default: [1.0]
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.AdjustShiftScaleOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.AdjustShiftScaleOperation, shift: list[float] = [0.0], scale: list[float] = [1.0], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.ApplyTopDownFlagOperation(*args, **kwargs)
Bases:
Operation
Flip the input image if it has a
topDown
flag set to false.
Note
The topDown flag is not accessible from Python
- Parameters:
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
- Other parameters accepted by configure():
axes: [‘y’]
Overloaded function.
__init__(self: imfusion.machinelearning.ApplyTopDownFlagOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ApplyTopDownFlagOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.ApproximateToHigherResolutionOperation(*args, **kwargs)
Bases:
Operation
Replicate the input image of the operation from the original reference image (in
ReferenceImageDataComponent
). This operation is mainly intended as post-processing, when a model produces a filtered image at a lower resolution: it replicates the output from the original image so that no resolution is lost. It works by estimating a multiplicative scalar field between the input and the downsampled original image, upsampling it, and re-applying it to the original image.
- Parameters:
epsilon – Used to avoid division by zero in case the original image has zero values. Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ApproximateToHigherResolutionOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ApproximateToHigherResolutionOperation, epsilon: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.ArgMaxOperation(*args, **kwargs)
Bases:
Operation
Create a label map with the index of the input channel that has the highest value. The output of this operation is zero-indexed, i.e. no matter which channels were selected, the output is always in the range [0; n - 1] where n is the number of selected channels (+ 1 if a background threshold is set).
- Parameters:
selected_channels – List of channels to be selected for the argmax. If empty, use all channels (default). Indices are zero indexed, e.g. [0, 1, 2, 3] selects the first 4 channels.
background_threshold – If set, the arg-max operation assumes the background is not explicitly encoded, and is only set when all activations are below background_threshold. The output then encodes 0 as the background. E.g. if the first 4 channels were selected, the possible output values would be [0, 1, 2, 3, 4] with 0 for the background and the rest for the selected channels.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ArgMaxOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ArgMaxOperation, selected_channels: list[int] = [], background_threshold: Optional[float] = None, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
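Example (a sketch: collapse a multi-channel prediction to a label map, treating activations below 0.5 as background):
>>> import imfusion.machinelearning as ml
>>> argmax = ml.ArgMaxOperation(selected_channels=[0, 1, 2, 3], background_threshold=0.5)
>>> # output values are then in [0, 4]: 0 for background, 1-4 for the selected channels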
- class imfusion.machinelearning.AxisFlipOperation(*args, **kwargs)
Bases:
Operation
Flip image content along specified set of axes.
- Parameters:
axes – List of strings from {‘x’,’y’,’z’} specifying the axes to flip. For 2D images, only ‘x’ and ‘y’ are valid.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.AxisFlipOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.AxisFlipOperation, axes: list[str], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.AxisRotationOperation(*args, **kwargs)
Bases:
Operation
Rotate image around image axis with axis-specific rotation angles that are signed multiples of 90 degrees.
- Parameters:
axes – List of strings from {‘x’,’y’,’z’} specifying the axes to rotate around. For 2D images, only [‘z’] is valid.
angles – List of integers (with same lengths as axis) specifying the rotation angles in degrees. Only +- 0/90/180/270 are valid.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.AxisRotationOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.AxisRotationOperation, axes: list[str], angles: list[int], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
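Example (a sketch of simple geometric transforms using the two operations above):
>>> import imfusion.machinelearning as ml
>>> flip = ml.AxisFlipOperation(axes=["x", "y"])                 # mirror along x and y
>>> rotate = ml.AxisRotationOperation(axes=["z"], angles=[90])   # rotate 90 degrees around z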
- class imfusion.machinelearning.BakeDeformationOperation(*args, **kwargs)
Bases:
Operation
Deform an image with its attached Deformation and store the result into the returned output image. This operation will return a clone of the input image if it does not have any deformation attached. The output image will not have an attached Deformation.
- Parameters:
adjust_size – whether the size of the output image should be automatically adjusted to fit the deformed content. Default: True
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.BakeDeformationOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.BakeDeformationOperation, adjust_size: bool = True, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.BakePhotometricInterpretationOperation(*args, **kwargs)
Bases:
Operation
Bake the Photometric Interpretation into the intensities of the image. If the image has a Photometric Interpretation of MONOCHROME1, the intensities will be inverted using: \(\textnormal{output} = \textnormal{max} - (\textnormal{input} - \textnormal{min})\)
- Parameters:
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.BakePhotometricInterpretationOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.BakePhotometricInterpretationOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.BakeTransformationOperation(*args, **kwargs)
Bases:
Operation
Apply the rotation contained in the matrix of the input volume. The internal memory buffer will be re-organized, but the image location in world coordinates will not change. The output matrix is guaranteed to have no rotation (but may still have a translation component). Pixels outside the original image extent will be padded according to the padding_mode parameter. Note: If a mask is present, this operation assumes that it is an ExplicitMask and will process it as well.
- Parameters:
padding_mode – defines which type of padding is used in [“zero”, “clamp”, “mirror”]. Default:
ZERO
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.BakeTransformationOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.BakeTransformationOperation, padding_mode: imfusion.PaddingMode = <PaddingMode.ZERO: 0>, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.BlobsFromKeypointsOperation(*args, **kwargs)
Bases:
Operation
Transforms keypoints into an actual image (blob map with the same size of the image). Requires an input image called “data” (can be overwritten with the parameter
image_field_name
) and some keypoints called “keypoints” (can be overwritten with the parameter apply_to).
- Parameters:
blob_radius – Size of the generated blobs in mm. Default: 5.0
image_field_name – Field name of the reference image. Default: “data”
blobs_field_name – Field name of the output blob map. Default: “label”
label_map_mode – Generate ubyte label map instead of multi-channel gaussian blobs. Default: False
sharp_blobs – Specifies whether to sharpen the profiles of the blob function, making its support more compact. Default: False
blob_radius_units – The units to use when interpreting the blob_radius parameter. Default: “mm”
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.BlobsFromKeypointsOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.BlobsFromKeypointsOperation, blob_radius: float = 5.0, image_field_name: str = ‘data’, blobs_field_name: str = ‘label’, label_map_mode: bool = False, sharp_blobs: bool = False, blob_radius_units: imfusion.machinelearning.ParamUnit = MM, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.BoundingBoxElement(self: BoundingBoxElement, boundingbox_set: BoundingBoxSet)
Bases:
DataElement
Initialize a BoundingBoxElement
- Parameters:
boundingbox_set – In case the argument is a numpy array, the array shape is expected to be [N, C, B, 2, 3], where N is the batch size, C the number of different keypoint types (channel), B the number of instances of the same box type. Each Box is expected to have dimensions [2, 3]. If the argument is a nested list, the same concept applies also to the size of each level of nesting.
- property boxes
Access to the underlying BoundingBoxSet.
- class imfusion.machinelearning.BoundingBoxSet(*args, **kwargs)
Bases:
Data
Class for managing sets of bounding boxes
The class is meant to be used in parallel with SharedImageSet. For each frame in the set, and for each type of bounding box (e.g. car, airplane, lung, cat), there is a list of boxes that encompass an instance of that type in the reference image. In terms of tensor dimensions, this would be represented as [N, C, B], where N is the batch size, C is the number of channels (i.e. types of boxes), and B is the number of boxes of the same instance type. Each Box has a dimension of [2, 3], consisting of a pair of vec3 describing center and extent. See the Box class for more information.
Note
The API for this class is experimental and may change soon.
Overloaded function.
__init__(self: imfusion.machinelearning.BoundingBoxSet, boxes: list[list[list[imfusion.machinelearning.Box]]]) -> None
__init__(self: imfusion.machinelearning.BoundingBoxSet, boxes: list[list[list[tuple[numpy.ndarray[numpy.float64[3, 1]], numpy.ndarray[numpy.float64[3, 1]]]]]]) -> None
__init__(self: imfusion.machinelearning.BoundingBoxSet, boxes: list[list[list[tuple[list[float], list[float]]]]]) -> None
__init__(self: imfusion.machinelearning.BoundingBoxSet, array: numpy.ndarray[numpy.float64]) -> None
- static load(location: str | PathLike) BoundingBoxSet | None
Load a BoundingBoxSet from an ImFusion file.
- Parameters:
location – input path.
- save(self: BoundingBoxSet, location: str | PathLike) None
Save a BoundingBoxSet as an ImFusion file.
- Parameters:
location – output path.
- property data
- class imfusion.machinelearning.Box(*args, **kwargs)
Bases:
pybind11_object
Bounding Box class for ML tasks. Since bounding boxes are axis aligned by definition, a Box is represented by its center and its extent. This representation allows for easy rotation, augmentation etc.
Overloaded function.
__init__(self: imfusion.machinelearning.Box, center: numpy.ndarray[numpy.float64[3, 1]], extent: numpy.ndarray[numpy.float64[3, 1]]) -> None
__init__(self: imfusion.machinelearning.Box, center_and_extent: tuple[numpy.ndarray[numpy.float64[3, 1]], numpy.ndarray[numpy.float64[3, 1]]]) -> None
- property center
- property extent
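Example (a sketch of building a BoundingBoxSet for a single frame with one box type and one instance; the coordinates are arbitrary):
>>> import numpy as np
>>> import imfusion.machinelearning as ml
>>> box = ml.Box(center=np.array([32.0, 32.0, 16.0]), extent=np.array([8.0, 8.0, 4.0]))
>>> boxes = ml.BoundingBoxSet([[[box]]])   # nesting follows [N][C][B]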
- class imfusion.machinelearning.CenterROISampler(*args, **kwargs)
Bases:
ImageROISampler
Sampler which samples one ROI from the input image and label map with a target size. The ROI is centered on the image center. The arrays will be padded if the target size is larger than the input image.
- Parameters:
roi_size – Target size of the ROIs to be extracted as [Width, Height, Slices]
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
- Other parameters accepted by configure():
padding_mode: Properties.EnumStringParam(value=”clamp”, admitted_values={“clamp”, “mirror”, “zero”})
label_padding_mode: Properties.EnumStringParam(value=”clamp”, admitted_values={“clamp”, “mirror”, “zero”})
Overloaded function.
__init__(self: imfusion.machinelearning.CenterROISampler, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.CenterROISampler, roi_size: numpy.ndarray[numpy.int32[3, 1]], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
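Example (a sketch: extract a centered 128x128x64 ROI; if the image is smaller, it is padded according to the padding_mode parameters):
>>> import numpy as np
>>> import imfusion.machinelearning as ml
>>> sampler = ml.CenterROISampler(roi_size=np.array([128, 128, 64], dtype=np.int32))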
- class imfusion.machinelearning.CheckDataOperation(*args, **kwargs)
Bases:
Operation
Checks if all input data match a set of expected conditions. If parameters are zero or empty, they are not checked.
- Parameters:
num_dimensions – Expected number of dimensions in the input. Set to 0 to skip this check. Default: 0
num_images – Expected number of images in the input. Set to 0 to skip this check. Default: 0
num_channels – Expected number of channels in the input. Set to 0 to skip this check. Default: 0
data_type – Expected datatype of input. Must be one of: [“”, “float”, “uint8”, “int8”, “uint16”, “int16”, “uint32”, “int32”, “double”]. Empty string skips this check. Default: “”
dimensions – Expected spatial dimensions [width, height, depth] of input image. Set all dimensions to 0 to skip checking it. Default: [0,0,0]
spacing – Expected spacing [x, y, z] of input image in mm. Set all components to 0 to skip checking it. Default: [0,0,0]
label_match_input – Whether label dimensions and channel count must match the input image. Default: False
label_type – Expected datatype of labels. Must be one of: [“”, “float”, “uint8”, “int8”, “uint16”, “int16”, “uint32”, “int32”, “double”]. Empty string skips this check. Default: “”
label_values – List of required label values (excluding 0). No other values are allowed. When check_label_values_are_subset is false, all must be present. Empty list skips this check. Default: []
label_dimensions – Expected spatial dimensions [width, height, depth] of label image. Set all dimensions to 0 to skip checking it. Default: [0,0,0]
label_channels – Expected number of channels in the label image. Set to 0 to skip this check. Default: 0
check_rotation_matrix – Whether to verify the input image has no rotation matrix. Default: False
check_deformation – Whether to verify the input image has no deformation. Default: False
check_shift_scale – Whether to verify the input image has identity intensity transformation. Default: False
fail_on_error – Whether to raise an exception on validation failure (True) or just log an error (False). Default: True
save_path_on_error – Path where to save the failing input as an ImFusion file (.imf) when validation fails. Empty string disables saving. Default: “”
check_label_values_are_subset – Whether to check if the label values are a subset of the label values provided with label_values, otherwise check if all values are present in the label image. Default: False
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.CheckDataOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.CheckDataOperation, num_dimensions: int = 0, num_images: int = 0, num_channels: int = 0, data_type: str = ‘’, dimensions: numpy.ndarray[numpy.int32[3, 1]] = array([0, 0, 0], dtype=int32), spacing: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), label_match_input: bool = False, label_type: str = ‘’, label_values: list[int] = [], label_dimensions: numpy.ndarray[numpy.int32[3, 1]] = array([0, 0, 0], dtype=int32), label_channels: int = 0, check_rotation_matrix: bool = False, check_deformation: bool = False, check_shift_scale: bool = False, fail_on_error: bool = True, save_path_on_error: str = ‘’, check_label_values_are_subset: bool = False, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
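Example (a sketch of a validation step for a 3D single-channel float input with labels restricted to the values 1 and 2):
>>> import imfusion.machinelearning as ml
>>> check = ml.CheckDataOperation(num_dimensions=3, num_channels=1, data_type="float",
...                               label_values=[1, 2], fail_on_error=True)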
- class imfusion.machinelearning.ClipOperation(*args, **kwargs)
Bases:
Operation
Clip the intensities to a minimum and maximum value: all intensities outside this range will be clipped to the range border.
- Parameters:
min – Minimum intensity of the output image. Default: 0.0
max – Maximum intensity of the output image. Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ClipOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ClipOperation, min: float = 0.0, max: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
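Example (a sketch of a typical intensity normalization: clip to a window, then map it to [0, 1] with AdjustShiftScaleOperation; the window values are arbitrary):
>>> import imfusion.machinelearning as ml
>>> clip = ml.ClipOperation(min=-100.0, max=400.0)
>>> normalize = ml.AdjustShiftScaleOperation(shift=[100.0], scale=[500.0])  # (x + 100) / 500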
- class imfusion.machinelearning.ComputingDevice(*args, **kwargs)
Bases:
pybind11_object
Members:
FORCE_CPU
GPU_IF_GL_IMAGE
GPU_IF_OPENGL
FORCE_GPU
Overloaded function.
__init__(self: imfusion.machinelearning.ComputingDevice, value: int) -> None
__init__(self: imfusion.machinelearning.ComputingDevice, arg0: str) -> None
- FORCE_CPU = <ComputingDevice.FORCE_CPU: 0>
- FORCE_GPU = <ComputingDevice.FORCE_GPU: 3>
- GPU_IF_GL_IMAGE = <ComputingDevice.GPU_IF_GL_IMAGE: 1>
- GPU_IF_OPENGL = <ComputingDevice.GPU_IF_OPENGL: 2>
- property name
- property value
- class imfusion.machinelearning.ConcatenateNeighboringFramesToChannelsOperation(*args, **kwargs)
Bases:
Operation
This operation iterates over each frame, augmenting the channel dimension by appending information from neighboring frames on both sides. For instance, with radius=1, an image with dimensions (10, 1, 256, 256, 1) becomes a (10, 1, 256, 256, 3) image, meaning each frame will now include its predecessor (channel 0), itself (channel 1), and its successor (channel 2). For multi-channel inputs, only the first channel is used for concatenation; other channels are appended after these in the output. With reduction_mode, central and augmented frames can be reduced to a single frame to preserve the original number of channels.
- Parameters:
radius – Defines the number of neighboring frames added to each side within the channel dimension. Default: 0
reduction_mode – Determines if and how to reduce neighboring frames. Options: “none” (default, concatenates), “average”, “maximum”.
same_padding – Use frame replication (not zero-padding) at sequence edges. Default: True
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ConcatenateNeighboringFramesToChannelsOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ConcatenateNeighboringFramesToChannelsOperation, radius: int = 0, reduction_mode: str = ‘none’, same_padding: bool = True, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
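Example (a sketch: give each frame temporal context from its direct neighbors):
>>> import imfusion.machinelearning as ml
>>> concat = ml.ConcatenateNeighboringFramesToChannelsOperation(radius=1, same_padding=True)
>>> # a (10, 1, 256, 256, 1) sweep becomes (10, 1, 256, 256, 3): predecessor, frame, successor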
- class imfusion.machinelearning.ConvertSlicesToVolumeOperation(*args, **kwargs)
Bases:
Operation
Stacks a set of 2D images extracted along a specified axis into an actual 3D volume.
- Parameters:
axis – Axis along which to extract slices (must be either ‘x’, ‘y’ or ‘z’)
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ConvertSlicesToVolumeOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ConvertSlicesToVolumeOperation, axis: str = ‘z’, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.ConvertToGrayOperation(*args, **kwargs)
Bases:
Operation
Convert the input image to a single channel image by averaging all channels.
- Parameters:
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ConvertToGrayOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ConvertToGrayOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.ConvertVolumeToSlicesOperation(*args, **kwargs)
Bases:
Operation
Unstacks a 3D volume into a set of 2D images extracted along one of the axes.
- Parameters:
axis – Axis along which to extract slices (must be either ‘x’, ‘y’ or ‘z’)
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ConvertVolumeToSlicesOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ConvertVolumeToSlicesOperation, axis: str = ‘z’, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
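Example (a sketch: the two conversion operations are typically used as a pair, e.g. slicing a volume for a 2D model and re-stacking the per-slice predictions):
>>> import imfusion.machinelearning as ml
>>> to_slices = ml.ConvertVolumeToSlicesOperation(axis="z")
>>> to_volume = ml.ConvertSlicesToVolumeOperation(axis="z")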
- class imfusion.machinelearning.ConvolutionalCRFOperation(*args, **kwargs)
Bases:
Operation
Adapt a segmentation map or the raw output of a model to the image content.
- Parameters:
adaptiveness – Indicates how much the segmentation should be adapted to the image content. Range [0, 1]. Default: 0.5
smooth_weight – Weight of the smoothness kernel. Higher values create a greater penalty for nearby pixels having different labels. Default: 0.1
radius – Radius of the message passing window in pixels. Default: 5
downsampling – Amount of downsampling used in message passing, makes the effective radius of the message passing window larger. Default: 2
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
- Other parameters accepted by configure():
appearance_sigma: 0.25
smoothness_sigma: 1.0
max_num_iter: 50
convergence_threshold: 0.00100000004749745
label_compatibilites:
negative_label_score: -1.0
positive_label_score: 1.0
Overloaded function.
__init__(self: imfusion.machinelearning.ConvolutionalCRFOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ConvolutionalCRFOperation, adaptiveness: float = 0.5, smooth_weight: float = 0.1, radius: int = 5, downsampling: int = 2, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.CopyOperation(*args, **kwargs)
Bases:
Operation
Copies a set of fields of a data item.
- Parameters:
source – list of the elements to be copied
target – list of names of the new elements (must match the size of source)
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.CopyOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.CopyOperation, source: list[str], target: list[str], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.CropAroundLabelMapOperation(*args, **kwargs)
Bases:
Operation
Crops the input image and label to the bounds of the specified label value, and sets the label value to 1 and all other values to zero in the resulting label.
- Parameters:
label_values – Label values to select. Default: [1]
margin – Margin, in pixels. Default: 1
reorder – Whether label values in result should be mapped to 1,2,3… based on input in label_values. Default: False
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.CropAroundLabelMapOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.CropAroundLabelMapOperation, label_values: list[int] = [1], margin: int = 1, reorder: bool = False, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.CropOperation(*args, **kwargs)
Bases:
Operation
Crop input images and label maps with a given size and offset.
- Parameters:
size – List of integers representing the target dimensions of the image to be cropped. If -1 is specified, the whole dimension will be kept, starting from the corresponding offset.
offset – List of integers representing the position of the lower corner of the cropped image
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.CropOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.CropOperation, size: numpy.ndarray[numpy.int32[3, 1]], offset: numpy.ndarray[numpy.int32[3, 1]], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
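Example (a sketch: crop a 256x256 in-plane region starting at pixel (10, 10) while keeping all slices along z, since -1 keeps the whole dimension):
>>> import numpy as np
>>> import imfusion.machinelearning as ml
>>> crop = ml.CropOperation(size=np.array([256, 256, -1], dtype=np.int32),
...                         offset=np.array([10, 10, 0], dtype=np.int32))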
- class imfusion.machinelearning.CutOutOperation(*args, **kwargs)
Bases:
Operation
Cut out input images and label maps with a given size, offset and fill values.
- Parameters:
size – List of 3-dim vectors representing the target dimensions of the image to be cut out. Default: [1, 1, 1]
offset – List of 3-dim vectors representing the position of the lower corner of the cut out area. Default: [0, 0, 0]
fill_value – List of intensity value (floats) for filling out cutout region. Default: [0.0]
size_units – Units of the size parameter (ParamUnit.MM or “mm”, ParamUnit.FRACTION or “fraction”, ParamUnit.VOXEL or “voxel”). Default:
MM
offset_units – Units of the offset parameter (ParamUnit.MM or “mm”, ParamUnit.FRACTION or “fraction”, ParamUnit.VOXEL or “voxel”). Default:
VOXEL
- Note:
ParamUnit can be automatically converted from a string. This means you can directly pass a string like “mm”, “fraction”, or “voxel” to the param_units parameters instead of using the enum values.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
device: Properties.EnumStringParam(value=”GPUIfOpenGl”, admitted_values={“ForceGPU”, “GPUIfOpenGl”, “GPUIfGlImage”, “ForceCPU”})
error_on_unexpected_behaviour: False
record_identifier:
Overloaded function.
__init__(self: imfusion.machinelearning.CutOutOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.CutOutOperation, size: list[numpy.ndarray[numpy.float64[3, 1]]] = [array([1., 1., 1.])], offset: list[numpy.ndarray[numpy.float64[3, 1]]] = [array([0., 0., 0.])], fill_value: list[float] = [0.0], size_units: imfusion.machinelearning.ParamUnit = MM, offset_units: imfusion.machinelearning.ParamUnit = VOXEL, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.DataElement
Bases:
pybind11_object
- clone(self: DataElement, with_data: bool = True) DataElement
Create a copy of the element.
- numpy(copy=False)
- Parameters:
self (DataElement) –
- split(self: DataElement) list[DataElement]
Split an element into several ones along the batch dimension.
- static stack(elements: list[DataElement]) DataElement
Stack several elements along the batch dimension.
- tag_as_target(self: DataElement) None
Mark this element as being a target.
- torch(device: device = None, dtype: dtype = None, same_as: Tensor = None) Tensor
Convert SharedImageSet or a SharedImage to a torch.Tensor.
- Parameters:
self (DataElement | SharedImageSet | SharedImage) – Instance of SharedImageSet or SharedImage (this function bound as a method to SharedImageSet and SharedImage)
device (device) – Target device for the new torch.Tensor
dtype (dtype) – Type of the new torch.Tensor
same_as (Tensor) – Template tensor whose device and dtype configuration should be matched. device and dtype are still applied afterwards.
- Returns:
New torch.Tensor
- Return type:
Tensor
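A short usage sketch (assuming PyTorch is installed and sis is a pre-existing SharedImageSet):
import torch

# Convert the image set to a float32 tensor on the GPU
tensor = sis.torch(device=torch.device("cuda"), dtype=torch.float32)
# Or match the device and dtype of an existing template tensor
template = torch.zeros(1, dtype=torch.float16)
tensor_like_template = sis.torch(same_as=template)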
- untag_as_target(self: DataElement) None
Remove target status from this element.
- property batch_size
Returns the batch size of the element.
- property components
Returns the list of DataComponents for this element.
- property content
Access to the underlying Data.
- property dimension
Returns the dimensionality of the underlying data
- property is_target
Returns true if this element is marked as a target.
- property ndim
Returns the dimensionality of the underlying data
- property type
Returns the type of the underlying data
- class imfusion.machinelearning.DataItem(self: DataItem, elements: dict[str, DataElement] = {})
Bases:
Data
Class managing a dictionary of DataElements. This class is used as the container for applying Operations to a collection of heterogeneous data in a consistent way. This class implements the concept of batch size for the contained elements. As such, a DataItem can be split or stacked along the batch axis like the contained DataElements, but because of that it enforces that all stored DataElements have a consistent batch size.
Construct a DataItem with existing Elements if provided.
- Parameters:
elements (Dict[str, imfusion.DataElement]) – elements to be inserted into the DataItem, default: {}
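A minimal sketch of assembling a DataItem (image and label are assumed to be pre-existing SharedImageSet objects):
import imfusion.machinelearning as ml

# Build an item with an image field and a label field
item = ml.DataItem({"image": ml.ImageElement(image), "label": ml.ImageElement(label)})
item["label"].tag_as_target()  # mark the label element as a training target
print(item.fields, item.batch_size)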
- __getitem__(self: DataItem, arg0: str) DataElement
- __iter__(self: DataItem) Iterator[tuple[str, DataElement]]
- __setitem__(*args, **kwargs)
Overloaded function.
__setitem__(self: imfusion.machinelearning.DataItem, field: str, element: imfusion.machinelearning.DataElement) -> None
Set a DataElement to the DataItem.
- Parameters:
field (str) – field name
element (DataElement) – element to be inserted into the DataItem, if the field exists it’s overwritten.
__setitem__(self: imfusion.machinelearning.DataItem, field: str, element: imfusion.machinelearning.ImageElement) -> None
Set an ImageElement to the DataItem.
- Parameters:
field (str) – field name
element (ImageElement) – element to be inserted into the DataItem, if the field exists it’s overwritten.
__setitem__(self: imfusion.machinelearning.DataItem, field: str, element: imfusion.machinelearning.KeypointsElement) -> None
Set a KeypointsElement to the DataItem.
- Parameters:
field (str) – field name
element (KeypointsElement) – element to be inserted into the DataItem, if the field exists it’s overwritten.
__setitem__(self: imfusion.machinelearning.DataItem, field: str, element: imfusion.machinelearning.BoundingBoxElement) -> None
Set a BoundingBoxElement to the DataItem.
- Parameters:
field (str) – field name
element (BoundingBoxElement) – element to be inserted into the DataItem, if the field exists it’s overwritten.
__setitem__(self: imfusion.machinelearning.DataItem, field: str, element: imfusion.machinelearning.VectorElement) -> None
Set a VectorElement to the DataItem.
- Parameters:
field (str) – field name
element (VectorElement) – element to be inserted into the DataItem, if the field exists it’s overwritten.
__setitem__(self: imfusion.machinelearning.DataItem, field: str, element: imfusion.machinelearning.TensorSetElement) -> None
Set a TensorSetElement to the DataItem.
- Parameters:
field (str) – field name
element (TensorSetElement) – element to be inserted into the DataItem, if the field exists it’s overwritten.
__setitem__(self: imfusion.machinelearning.DataItem, field: str, shared_image_set: imfusion.SharedImageSet) -> None
Set a SharedImageSet to the DataItem.
- Parameters:
field (str) – field name
element (SharedImageSet) – element to be inserted into the DataItem, if the field exists it’s overwritten.
__setitem__(self: imfusion.machinelearning.DataItem, field: str, keypoint_set: imfusion.machinelearning.KeypointSet) -> None
Set a KeypointSet to the DataItem.
- Parameters:
field (str) – field name
element (imfusion.KeypointSet) – element to be inserted into the DataItem, if the field exists it’s overwritten.
__setitem__(self: imfusion.machinelearning.DataItem, field: str, bboxes: imfusion.machinelearning.BoundingBoxSet) -> None
Set a BoundingBoxSet to the DataItem.
- Parameters:
field (str) – field name
element (BoundingBoxSet) – element to be inserted into the DataItem, if the field exists it’s overwritten.
__setitem__(self: imfusion.machinelearning.DataItem, field: str, tensorset: imfusion.machinelearning.TensorSet) -> None
Set a TensorSet to the DataItem.
- contains(self: DataItem, arg0: str) bool
Checks if the data item contains a field with the given name.
- get(*args, **kwargs)
Overloaded function.
get(self: imfusion.machinelearning.DataItem, field: str) -> imfusion.machinelearning.DataElement
Returns a reference to an element (raises a KeyError if field is not in item)
- Parameters:
field (str) – Name of the field to retrieve.
get(self: imfusion.machinelearning.DataItem, field: str, default: imfusion.machinelearning.DataElement) -> imfusion.machinelearning.DataElement
Returns a reference to an element (or the default value if field is not in item)
- Parameters:
field (str) – Name of the field to retrieve.
default (DataElement) – default value to return if field is not in DataItem.
- get_all(self: DataItem, arg0: ElementType) set[DataElement]
Returns all elements of the specified type.
- items(self: DataItem) Iterator[tuple[str, DataElement]]
- static load(location: str | PathLike) DataItem
Load data item from ImFusion file.
- Parameters:
location – input path.
- static merge(items: list[DataItem]) DataItem
Merge several data items by setting all their fields to the output item
- Parameters:
items (List[DataItem]) – List of input items to merge.
Note
Raises an exception if the same field is contained in more than one item.
- pop(self: DataItem, field: str) DataElement
Removes the DataElement associated with the given field and returns it.
- Parameters:
field (str) – Name of the field to remove.
- save(self: DataItem, location: str | PathLike) None
Save data item as ImFusion file.
- Parameters:
location – output path.
- set(*args, **kwargs)
Overloaded function.
set(self: imfusion.machinelearning.DataItem, field: str, element: imfusion.machinelearning.DataElement) -> None
Set a DataElement to the DataItem.
- Parameters:
field (str) – field name
element (DataElement) – element to be inserted into the DataItem, if the field exists it’s overwritten.
set(self: imfusion.machinelearning.DataItem, field: str, element: imfusion.machinelearning.ImageElement) -> None
Set an ImageElement to the DataItem.
- Parameters:
field (str) – field name
element (ImageElement) – element to be inserted into the DataItem, if the field exists it’s overwritten.
set(self: imfusion.machinelearning.DataItem, field: str, element: imfusion.machinelearning.KeypointsElement) -> None
Set a KeypointsElement to the DataItem.
- Parameters:
field (str) – field name
element (KeypointsElement) – element to be inserted into the DataItem, if the field exists it’s overwritten.
set(self: imfusion.machinelearning.DataItem, field: str, element: imfusion.machinelearning.BoundingBoxElement) -> None
Set a BoundingBoxElement to the DataItem.
- Parameters:
field (str) – field name
element (BoundingBoxElement) – element to be inserted into the DataItem, if the field exists it’s overwritten.
set(self: imfusion.machinelearning.DataItem, field: str, element: imfusion.machinelearning.VectorElement) -> None
Set a VectorElement to the DataItem.
- Parameters:
field (str) – field name
element (VectorElement) – element to be inserted into the DataItem, if the field exists it’s overwritten.
set(self: imfusion.machinelearning.DataItem, field: str, element: imfusion.machinelearning.TensorSetElement) -> None
Set a TensorSetElement to the DataItem.
- Parameters:
field (str) – field name
element (TensorSetElement) – element to be inserted into the DataItem, if the field exists it’s overwritten.
set(self: imfusion.machinelearning.DataItem, field: str, shared_image_set: imfusion.SharedImageSet) -> None
Set a SharedImageSet to the DataItem.
- Parameters:
field (str) – field name
element (SharedImageSet) – element to be inserted into the DataItem, if the field exists it’s overwritten.
set(self: imfusion.machinelearning.DataItem, field: str, keypoint_set: imfusion.machinelearning.KeypointSet) -> None
Set a KeypointSet to the DataItem.
- Parameters:
field (str) – field name
element (KeypointSet) – element to be inserted into the DataItem, if the field exists it’s overwritten.
set(self: imfusion.machinelearning.DataItem, field: str, bounding_box_set: imfusion.machinelearning.BoundingBoxSet) -> None
Set a BoundingBoxSet to the DataItem.
- Parameters:
field (str) – field name
element (BoundingBoxSet) – element to be inserted into the DataItem, if the field exists it’s overwritten.
set(self: imfusion.machinelearning.DataItem, field: str, tensorset: imfusion.machinelearning.TensorSet) -> None
Set a TensorSet to the DataItem.
- static split(item: DataItem) list[DataItem]
Split a data item along the batch channels into items, each with batch size 1
- Parameters:
item (DataItem) – Item to split.
- static stack(items: list[DataItem]) DataItem
Stack several data items along the batch dimension.
- Parameters:
items (List[DataItem]) – List of input items to stack.
- update(self: DataItem, other: DataItem, clone: bool = True) None
Update the contents of self with elements from other
- Parameters:
Note
Raises an exception if the batch_size of other does not match self.
- values(self: DataItem) Iterator[DataElement]
- property batch_size
Returns the batch size of the fields, zero if no elements are present, or None if there are inconsistencies within them.
- property dimension
Returns the dimensionality of the elements, or zero if no elements are present or if there are inconsistencies within them.
- property fields
Returns the set of fields contained in the data item.
- property ndim
Returns the dimensionality of the elements, or zero if no elements are present or if there are inconsistencies within them.
- class imfusion.machinelearning.DataLoaderSpecs(self: DataLoaderSpecs, arg0: str, arg1: Properties, arg2: Phase, arg3: list[str], arg4: str)
Bases:
pybind11_object
- property configuration
- property inputs
- property name
- property output
- property phase
- class imfusion.machinelearning.Dataset(*args, **kwargs)
Bases:
pybind11_object
Class for creating an iterable dataset by chaining data loading and transforming operations executed in a lazy fashion. The Dataset implements an iterable interface, which allows the use of the iter() and next() built-ins as well as range-based loops.
Overloaded function.
__init__(self: imfusion.machinelearning.Dataset, data_lists: list[tuple[dict[int, str], list[str]]], shuffle: bool = False, verbose: bool = False) -> None
Constructs a dataset from lists of filenames.
- Parameters:
__init__(self: imfusion.machinelearning.Dataset, read_from: str, reader_properties: imfusion.Properties, verbose: bool = False) -> None
Constructs a dataset by specifying a reader type as a string.
- Parameters:
read_from (string) – specifies the type of reader that is created implicitly. Options: “filesystem”.
reader_properties (Properties) – properties used to configure the reader.
verbose (bool) – print debug information when running the data loader. Default: false
__init__(self: imfusion.machinelearning.Dataset, verbose: bool = False) -> None
Constructs an empty dataset.
- Parameters:
verbose (bool) – print debug information when running the data loader. Default: false
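As an illustrative sketch of the iterable interface and lazy method chaining (ds is assumed to have been constructed with one of the overloads above; the concrete chaining steps are only examples):
import imfusion.machinelearning as ml

# Chain lazy transformations; nothing is executed until items are requested
pipeline = ds.shuffle(seed=42).batch(batch_size=4).prefetch(prefetch_size=2)

# Range-based iteration
for item in pipeline:
    print(item.fields)

# Or use the iterator protocol explicitly
it = iter(pipeline)
first_item = next(it)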
- static available_filter_functions() list[str]
Returns filter function keys to be used in Dataset.filter decorator function
- static available_map_functions() list[str]
Returns map function keys to be used in Dataset.map decorator function
- batch(self: Dataset, batch_size: int = 1, pad: bool = False, overlap: int = 0) Dataset
Batches the next batch_size items into a single one before returning it.
- build_pipeline(self: imfusion.machinelearning.Dataset, property_list: list[imfusion.machinelearning.DataLoaderSpecs], config_phase: imfusion.machinelearning.Phase = <Phase.ALWAYS: 7>) None
Configures the Dataset decorators to be used based on a list of Properties.
- cache(self: Dataset, make_exclusive_cpu: bool = True, lazy: bool = True, compression_level: int = 0, shuffle: bool = False) Dataset
Deprecated - please use ‘memory_cache’ now.
- disk_cache(self: Dataset, location: str = '', lazy: bool = True, reload_from_disk: bool = True, compression: bool = False, shuffle: bool = False) Dataset
Caches the already loaded dataset in a persistent manner (on a disk location). Raises a DataLoaderError if the dataset is not countable.
- Parameters:
location (string) – path to the folder where all the data will be cached.
lazy (bool) – if false, the cache is filled upon construction (otherwise as items are requested).
reload_from_disk (bool) – try to reload the cache from a previous session (reload is the deprecated name of this parameter).
compression (bool) – use ZStandard compression.
shuffle (bool) – re-shuffle the cache order every epoch.
- filter(*args, **kwargs)
Overloaded function.
filter(self: imfusion.machinelearning.Dataset, func: Callable[[imfusion.machinelearning.DataItem], bool]) -> imfusion.machinelearning.Dataset
Filters the dataset according to a user defined function. Note: Filtering makes the dataset uncountable, since the func output is conditional.
- Parameters:
func (def func(dict) -> bool) – filtering criterion to be applied to each input item. The input must be of the form dict[str, SharedImageSet]
filter(self: imfusion.machinelearning.Dataset, func_name: str) -> imfusion.machinelearning.Dataset
Filters the dataset according to a user defined function. Note: Filtering makes the dataset uncountable, since the func output is conditional.
- Parameters:
func_name (str) – name of a registered filter function specifying a criterion to be applied to each input item. The input must be of the form dict[str, SharedImageSet]
- map(*args, **kwargs)
Overloaded function.
map(self: imfusion.machinelearning.Dataset, func: Callable[[imfusion.machinelearning.DataItem], None], num_parallel_calls: int = 1) -> imfusion.machinelearning.Dataset
Applies a mapping to each item of the dataset. Optionally specify the number num_parallel_calls of asynchronous threads which are used for the mapping.
- Parameters:
map(self: imfusion.machinelearning.Dataset, func_name: str, num_parallel_calls: int = 1) -> imfusion.machinelearning.Dataset
Applies a mapping to each item of the dataset. Optionally specify the number num_parallel_calls of asynchronous threads which are used for the mapping.
- memory_cache(self: Dataset, make_exclusive_cpu: bool = True, lazy: bool = True, compression_level: int = 0, shuffle: bool = False, num_threads: int = 1) Dataset
Caches the already loaded dataset in memory. Raises a DataLoaderError if the dataset is not countable. Raises a MemoryError if the system runs out of memory.
- Parameters:
make_exclusive_cpu (bool) – keep the data exclusively on CPU.
lazy (bool) – if false, the cache is filled upon construction (otherwise as items are requested).
compression_level (int) – controls compression, valid values are between 0 and 20. Higher means more compression, but slower. 0 disables compression.
shuffle (bool) – re-shuffle the cache order every epoch.
num_threads (int) – number of threads to use for copying from the cache
- prefetch(self: Dataset, prefetch_size: int, sync_to_gl: bool = True) Dataset
Prefetches items from the underlying loader in a background thread.
- preprocess(*args, **kwargs)
Overloaded function.
preprocess(self: imfusion.machinelearning.Dataset, preprocessing_pipeline: list[tuple[str, imfusion.Properties, imfusion.machinelearning.Phase]], exec_phase: imfusion.machinelearning.Phase = <Phase.ALWAYS: 7>, num_parallel_calls: int = 1) -> imfusion.machinelearning.Dataset
Adds a generic preprocessing step to the data pipeline. The processing is performed by the underlying sequence of Operation.
- Parameters:
preprocessing_pipeline – List of specifications to construct the underlying OperationsSequence. Each specification must be a tuple consisting of the name of the operation, its Phase, and Properties for configuring it.
exec_phase – Execution phase for the entire preprocessing pipeline. The execution will run only those operations whose phase (specified in the specs) corresponds to the current exec_phase, with the following exceptions:
Operations marked with phase == Phase.Always are always run regardless of the exec_phase.
If exec_phase == Phase.Always, all operations in the preprocessing pipeline are run regardless of their individual phase.
num_parallel_calls – specifies the number of asynchronous threads which are used for the preprocessing. Defaults to 1.
preprocess(self: imfusion.machinelearning.Dataset, operations: list[imfusion.machinelearning.Operation], num_parallel_calls: int = 1) -> imfusion.machinelearning.Dataset
Adds a generic preprocessing step to the data pipeline. The processing is performed by the underlying sequence of Operation.
- Parameters:
operations – List of operations that will do the actual processing.
num_parallel_calls – specifies the number of asynchronous threads which are used for the preprocessing. Defaults to 1.
- read(self: Dataset, reader_type: str, reader_properties: Properties, verbose: bool = False) Dataset
Constructs a dataset by specifying a reader type as a string.
- Parameters:
reader_type – specifies the type of reader that is created implicitly. Options: “filesystem” (MemoryReader needs to be fixed to work with properties)
reader_properties – properties used to configure the reader.
verbose – print debug information when running the data loader. Default: false
- reinit(self: Dataset) None
Reinitializes the dataset, clearing state that survives reset() (i.e. data caches).
- repeat(self: Dataset, num_epoch_repetitions: int, num_item_repetitions: int = 1) Dataset
Repeats the dataset num_epoch_repetitions times and each individual item num_item_repetitions times.
- sample(*args, **kwargs)
Overloaded function.
sample(self: imfusion.machinelearning.Dataset, sampling_pipeline: list[tuple[str, imfusion.Properties]], *, num_parallel_calls: int = 1, sampler_selection_seed: int = 1) -> imfusion.machinelearning.Dataset
Adds a ROI sampling step to the data pipeline. During this step the loaded image is reduced to a region of interest (ROI). The strategy for sampling this region’s location is determined by an ImageROISampler, which is randomly chosen from the underlying sampler set each time this step executes.
- Parameters:
sampling_pipeline – List of tuples of sampler name and corresponding Properties for configuring it.
num_parallel_calls – Number of asynchronous threads which are used for the preprocessing. Defaults to 1.
sampler_selection_seed – Seed for the random generator of the samplers selection
sample(self: imfusion.machinelearning.Dataset, samplers: list[imfusion.machinelearning.ImageROISampler], weights: Optional[list[float]] = None, *, sampler_selection_seed: int = 1, num_parallel_calls: int = 1) -> imfusion.machinelearning.Dataset
Adds a ROI sampling step to the data pipeline. During this step the loaded image is reduced to a region of interest (ROI). The strategy for sampling this region’s location is determined by an ImageROISampler, which is randomly chosen from the underlying sampler set each time this step executes.
- Parameters:
samplers – List of samplers to choose from when sampling.
weights – Probability weights for the samplers specifying the relative probability of choosing each sampler.
num_parallel_calls – Number of asynchronous threads which are used for the preprocessing. Defaults to 1.
sampler_selection_seed (unsigned int) – Seed for the random generator of the samplers selection
sample(self: imfusion.machinelearning.Dataset, sampler: imfusion.machinelearning.ImageROISampler, *, num_parallel_calls: int = 1) -> imfusion.machinelearning.Dataset
Adds a ROI sampling step to the data pipeline. During this step the loaded image is reduced to a region of interest (ROI). The strategy for sampling this region’s location is determined by the given ImageROISampler.
- Parameters:
sampler – Sampler to use when sampling.
num_parallel_calls – Number of asynchronous threads which are used for the preprocessing. Defaults to 1.
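A short sketch of the single-sampler overload, using the DefaultROISampler documented below (ds is an assumed Dataset):
import imfusion.machinelearning as ml

sampler = ml.DefaultROISampler(dimension_divisor=16)
ds = ds.sample(sampler, num_parallel_calls=1)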
- shuffle(self: Dataset, shuffle_buffer: int = -1, seed: int = -1) Dataset
Shuffles the next shuffle_buffer items of the dataset. Defaults to -1, i.e. shuffles the entire dataset. If shuffle_buffer is not specified and the dataset is not countable, it throws a DataLoaderError.
- split(self: Dataset, num_items: int = -1) Dataset
Splits the content of the SharedImageSets into SIS containing a single image.
- Parameters:
num_items – Keep only the first num_items frames. Default is -1, which keeps all frames.
Note
Calling this method will make the dataset uncountable
- property size
Returns the length of the dataset or None if the set is uncountable.
- property verbose
Flag indicating whether extra information is logged when fetching data items.
- class imfusion.machinelearning.DefaultROISampler(*args, **kwargs)
Bases:
ImageROISampler
Sampler which simply returns the image and the label map, after padding to a specified dimension divisor: each spatial dimension of the output arrays will be divisible by dimension_divisor.
- Parameters:
dimension_divisor – Divisor of dimensions of the output images
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
- Other parameters accepted by configure():
padding_mode: Properties.EnumStringParam(value=”clamp”, admitted_values={“clamp”, “mirror”, “zero”})
label_padding_mode: Properties.EnumStringParam(value=”clamp”, admitted_values={“clamp”, “mirror”, “zero”})
Overloaded function.
__init__(self: imfusion.machinelearning.DefaultROISampler, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.DefaultROISampler, dimension_divisor: int, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.DeformationOperation(*args, **kwargs)
Bases:
Operation
Apply a deformation to the image using a specified control point grid and specified displacements.
- Parameters:
num_subdivisions – list specifying the number of subdivisions for each dimension (the number of control points is subdivisions+1). For 2D images, there must be 0 subdivision in the last component. Default: [1, 1, 1]
displacements – list of 3-dim vectors specifying the displacement (mm) for each control point. Should have length equal to the number of control points. Default: []
padding_mode – defines which type of padding is used in [“zero”, “clamp”, “mirror”]. Default:
ZERO
adjust_size – configures whether the resulting image should adjust its size to encompass the deformation. Default: False
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Note:
PaddingMode can be automatically converted from a string. This means you can directly pass a string like “zero”, “clamp”, or “mirror” to the padding_mode parameter instead of using the enum values.
Overloaded function.
__init__(self: imfusion.machinelearning.DeformationOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.DeformationOperation, num_subdivisions: numpy.ndarray[numpy.int32[3, 1]] = array([1, 1, 1], dtype=int32), displacements: list[numpy.ndarray[numpy.float32[3, 1]]] = [], padding_mode: imfusion.PaddingMode = <PaddingMode.ZERO: 0>, adjust_size: bool = False, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.DiceMetric(self: DiceMetric, ignore_background: bool = True)
Bases:
Metric
- compute_dice(self: DiceMetric, arg0: SharedImageSet, arg1: SharedImageSet) list[dict[int, float]]
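A usage sketch (prediction and target are assumed to be label maps stored as SharedImageSet objects):
import imfusion.machinelearning as ml

metric = ml.DiceMetric(ignore_background=True)
# One dict per image, mapping label value to its Dice score
per_image_scores = metric.compute_dice(prediction, target)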
- class imfusion.machinelearning.ElementType(*args, **kwargs)
Bases:
pybind11_object
Members:
IMAGE
KEYPOINT
BOUNDING_BOX
VECTOR
TENSOR
Overloaded function.
__init__(self: imfusion.machinelearning.ElementType, value: int) -> None
__init__(self: imfusion.machinelearning.ElementType, arg0: str) -> None
- BOUNDING_BOX = <ElementType.BOUNDING_BOX: 1>
- IMAGE = <ElementType.IMAGE: 0>
- KEYPOINT = <ElementType.KEYPOINT: 2>
- TENSOR = <ElementType.TENSOR: 4>
- VECTOR = <ElementType.VECTOR: 3>
- property name
- property value
- class imfusion.machinelearning.Engine(self: Engine, name: str)
Bases:
pybind11_object
Generic interface for machine learning models serialized by specific frameworks (e.g. PyTorch, ONNX, etc.).
This class is used by the MachineLearningModel to forward the prediction request to the framework that was used to serialize the model.
See imfusion.machinelearning.engines for examples of Python engine implementations.
- available_providers(self: Engine) list[ExecutionProvider]
Returns the execution providers available to the Engine
- check_input_fields(self: Engine, input: DataItem) None
Checks that input fields specified in the model yaml config are present in the input item.
- check_output_fields(self: Engine, input: DataItem) None
Checks the output fields specified in the model yaml config are present in the item returned by predict.
- configure(self: Engine, properties: Properties) None
Configures the Engine.
- connect_signals(self: Engine) None
Connects signals like on_model_file_changed, on_force_cpu_changed.
- init(self: Engine, properties: Properties) None
Initializes the Engine.
- provider(self: Engine) ExecutionProvider | None
Returns the execution provider currently used by the Engine.
- property force_cpu
If set, forces the model to run on CPU.
- property input_fields
Names of the model input heads.
- property model_file
Path to the yaml model configuration.
- property name
- property output_fields
Names of the model output heads.
- property output_fields_to_ignore
Model output heads to discard.
- property version
Version of the model configuration.
- class imfusion.machinelearning.EngineConfiguration
Bases:
pybind11_object
- configure(self: EngineConfiguration, properties: Properties) None
Configures the EngineConfiguration.
- to_properties(self: EngineConfiguration) Properties
Converts the EngineConfiguration to a Properties object.
- default_input_name = 'Input'
- default_output_name = 'Prediction'
- property engine_specific_parameters
Parameters that are specific to the type of Engine.
- property force_cpu
If set, forces the model to run on CPU.
- property input_fields
Names of the model input heads.
- property model_file
Path to the yaml model configuration.
- property output_fields
Names of the model output heads.
- property output_fields_to_ignore
Model output heads to discard.
- property type
Type of Engine, i.e. torch, onnx, openvino…
- property version
Version of the model configuration.
- class imfusion.machinelearning.EnsureExplicitMaskOperation(*args, **kwargs)
Bases:
Operation
Converts the existing mask of all input images into explicit masks. If an image does not have a mask, no mask will be created. Warning: This operation might be computationally expensive since it processes every frame of the SharedImageSet independently.
- Parameters:
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.EnsureExplicitMaskOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.EnsureExplicitMaskOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.EnsureOneToOneMatrixMappingOperation(*args, **kwargs)
Bases:
Operation
Ensures that it is possible to get/set the matrix of each frame of the input image set independently. This operation is targeted at TrackedSharedImageSets, which might define their matrices via a tracking sequence with timestamps (there is then no one-to-one correspondence between matrices and images, but matrices are looked-up and interpolated via their timestamps). In such cases, the operation creates a new tracking sequence with as many samples as images and turns off the timestamp usage.
- Parameters:
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.EnsureOneToOneMatrixMappingOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.EnsureOneToOneMatrixMappingOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.ExecutionProvider(self: ExecutionProvider, value: int)
Bases:
pybind11_object
Members:
CPU
CUDA
CUSTOM
DIRECTML
MPS
OPENVINO
- CPU = <ExecutionProvider.CPU: 0>
- CUDA = <ExecutionProvider.CUDA: 2>
- CUSTOM = <ExecutionProvider.CUSTOM: 1>
- DIRECTML = <ExecutionProvider.DIRECTML: 3>
- MPS = <ExecutionProvider.MPS: 5>
- OPENVINO = <ExecutionProvider.OPENVINO: 4>
- property name
- property value
- class imfusion.machinelearning.ExtractRandomSubsetOperation(*args, **kwargs)
Bases:
Operation
Extracts a random subset from a SharedImageSet.
- Parameters:
subset_size – Size of the extracted subset of images. Default: 1
keep_order – If true the extracted subset will have the same ordering as the input. Default: False
probability – Probability of applying this Operation. Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ExtractRandomSubsetOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ExtractRandomSubsetOperation, subset_size: int = 1, keep_order: bool = False, probability: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.ExtractSubsetOperation(*args, **kwargs)
Bases:
Operation
Extracts a subset from a SharedImageSet.
- Parameters:
subset – Indices of the selected images.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ExtractSubsetOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ExtractSubsetOperation, subset: list[int], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.ForegroundGuidedLabelUpsamplingOperation(*args, **kwargs)
Bases:
Operation
Generates a high-resolution label map by upsampling a multi-class softmax prediction guided by a high-resolution binary segmentation. This operation combines a high-resolution binary segmentation (e.g., from a sigmoid prediction) with a lower-resolution multi-class one-hot encoded segmentation (e.g., from a softmax prediction) to produce a refined high-resolution multi-class label map. The approach is inspired by pan-sharpening techniques used in remote sensing (https://arxiv.org/abs/1504.04531). The multi-class one hot image should contain the background class as the first channel.
- Parameters:
apply_to – List of field names for input images, expected order: [“highResSigmoid”, “lowResSoftmax”]
output_field – Name for the output field. If not specified, overwrites first input field
remove_fields – Remove input fields after processing. Default: True
apply_sigmoid – Use sigmoid intensities to guide foreground/background decision. If False, outputs most likely non-background class (if any, otherwise background) from softmax. Default: True
guidance_weight – Weight of sigmoid vs softmax for foreground decision [0-1]. Lower values can reduce false positives. Ignored if apply_sigmoid=False. Default: 1.0
boundary_refinement_max_iter – Maximum iterations for boundary refinement at output resolution. Higher values may be needed for larger resolution differences. Ideal values depend on the data and boundary_refinement_smooth. Default: 3
boundary_refinement_smooth – Smoothing factor for boundary refinement. Larger values remove smaller label patches. Default: 1.0
boundary_refinement_add_only – Optional list of label values to restrict the refinement to additions only. Default: []
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ForegroundGuidedLabelUpsamplingOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ForegroundGuidedLabelUpsamplingOperation, apply_to: list[str] = ['highResSigmoid', 'lowResSoftmax'], output_field: Optional[str] = None, remove_fields: bool = True, apply_sigmoid: bool = True, guidance_weight: float = 1.0, boundary_refinement_max_iter: int = 3, boundary_refinement_smooth: float = 1.0, boundary_refinement_add_only: list[int] = [], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.GammaCorrectionOperation(*args, **kwargs)
Bases:
Operation
Apply a gamma correction which changes the overall contrast (see https://en.wikipedia.org/wiki/Gamma_correction)
- Parameters:
gamma – Power applied to the normalized intensities. Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.GammaCorrectionOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.GammaCorrectionOperation, gamma: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
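A short usage sketch (image is an assumed SharedImageSet; on normalized intensities, gamma < 1 brightens and gamma > 1 darkens):
import imfusion.machinelearning as ml

op = ml.GammaCorrectionOperation(gamma=0.5)
brightened = op.process(image, in_place=False)  # image: pre-existing imfusion.SharedImageSet (assumed)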
- class imfusion.machinelearning.GenerateRandomKeypointsOperation(*args, **kwargs)
Bases:
Operation
Generate uniformly distributed random keypoints in the image. Optionally, the distribution is restricted to nonzero label values; otherwise (or if there are no nonzero label values), the keypoints are sampled from the entire image extent. There is a fixed (but configurable) number of keypoints per channel, and a fixed (but configurable) number of output channels in the output keypoint element.
- Parameters:
num_points – Number of points to generate per channel. Default: 1.
num_channels – Number of channels in the output keypoint set. Default: 1.
sample_from_label – Whether or not points should be drawn from the label if possible. Default: False.
output_field_name – Name of the output keypoints field. Default: “keypoints”.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.GenerateRandomKeypointsOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.GenerateRandomKeypointsOperation, num_points: int = 1, num_channels: int = 1, sample_from_label: bool = False, output_field_name: str = 'keypoints', *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.HighPassOperation(*args, **kwargs)
Bases:
Operation
Smooths the input image with a Gaussian kernel with half_kernel_size, then subtracts the smoothed image from the input, resulting in a reduction of low-frequency components.
- Parameters:
half_kernel_size – half kernel size in pixels. Corresponding standard deviation is half_kernel_size / 3.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.HighPassOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.HighPassOperation, half_kernel_size: int, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.ImageElement(self: ImageElement, image: SharedImageSet)
Bases:
SISBasedElement
Initialize an ImageElement from a SharedImageSet.
- Parameters:
image (SharedImageSet) – image to be converted to a ImageElement
- from_torch()
- class imfusion.machinelearning.ImageMathOperation(*args, **kwargs)
Bases:
Operation
Computes a specified formula involving images from the input dataitem. Supported operations between images of the same shape or between an image and a scalar:
Addition/Subtraction (+, -)
Multiplication/Division (*, /)
Parentheses can be used to specify operation priorities. For instance, if the dataitem contains 3 elements: image, additive_noise, multiplicative_noise, one can compute: noisy_image = image * multiplicative_noise + additive_noise and store the output in the dataitem under the “noisy_image” field.
The different images are expected to have the same shape.
The resulting image is of type float and does not have a matrix or spacing. These will be copied from the “metaDataFrom” element.
- Parameters:
formula (string) – Formula to be computed. Variables from the dataitem must be referred to by their dataitem field (see example above).
meta_data_from (string) – Optional ImageElement to get the matrix and spacing information from. If not specified or empty, the output won’t have a matrix or spacing information. Default: ‘’
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ImageMathOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ImageMathOperation, formula: str, meta_data_from: str = '', *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
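A sketch of the noisy-image example from the description (whether the output field name is written as part of the formula string is an assumption based on the wording above; item is an assumed DataItem containing the fields image, additive_noise and multiplicative_noise):
import imfusion.machinelearning as ml

op = ml.ImageMathOperation(formula="noisy_image = image * multiplicative_noise + additive_noise",
                           meta_data_from="image")
op.process(item)  # result stored in item under the "noisy_image" field (assumed)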
- class imfusion.machinelearning.ImageMattingOperation(*args, **kwargs)
Bases:
Operation
Refine edges of label-map based on the intensities of the input image. This can make coarse predictions smoother or may correct wrong predictions on the boundaries. It applies the method from the paper “Guided Image Filtering” by Kaiming He et al.
- Parameters:
img_size – target image dimension. No downsampling if 0.
kernel_size – guided filter kernel size.
epsilon – guided filter epsilon.
num_iters – guided filter number of iterations.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ImageMattingOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ImageMattingOperation, img_size: int, kernel_size: int, epsilon: float, num_iters: int, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.ImageROISampler
Bases:
Operation
Base class for ROI samplers
- compute_roi(self: ImageROISampler, image: SharedImageSet) RegionOfInterest | None
Compute ROI on the given image.
- extract_roi(self: ImageROISampler, image: SharedImageSet, roi: RegionOfInterest | None) SharedImageSet
Extract ROIs from an image.
- property label_padding_mode
The label padding mode property.
- property padding_mode
The image padding mode property.
- property requires_label
Bool indicating whether ROI must be computed on the label map.
- class imfusion.machinelearning.ImagewiseClassificationMetrics(self: ImagewiseClassificationMetrics, num_classes: int = 2)
Bases:
Metric
- class Result
Bases:
pybind11_object
- property confusion_matrix
- property prediction
- property target
- compute_results(self: ImagewiseClassificationMetrics, prediction: SharedImageSet, target: SharedImageSet) list[Result]
- class imfusion.machinelearning.InterleaveMode(*args, **kwargs)
Bases:
pybind11_object
Members:
Alternate
Proportional
Overloaded function.
__init__(self: imfusion.machinelearning.InterleaveMode, value: int) -> None
__init__(self: imfusion.machinelearning.InterleaveMode, arg0: str) -> None
- Alternate = <InterleaveMode.Alternate: 0>
- Proportional = <InterleaveMode.Proportional: 1>
- property name
- property value
- class imfusion.machinelearning.InverseOperation(*args, **kwargs)
Bases:
Operation
Operation that inverts a specific operation by using the InversionComponent.
This operation provides a way to invert a specific operation by its record identifier. It retrieves the inverse operation specifications from the InversionComponent of the processed elements, creates an appropriate inverse operation, and then after successful processing, removes the inversion information. The process works as follows:
The InverseOperation searches for elements with the InversionComponent matching the target identifier
It creates and configures an operation based on these specifications
It applies this inverse operation to the input
After successful processing, it removes the inversion information from all processed elements. This guarantees LIFO order when operations with the same identifier are applied multiple times.
Note: This inverts only operations that explicitly support inversion and that have recorded themselves with the specified record identifier. Inversions may not be able to fully recover the input image, e.g. inverting a cropping operation yields a padded image, not the original image.
Usage example:
# Apply a padding operation with a specific record identifier
pad_op = PadOperation((10, 10), (10, 10), (0, 0))
props = Properties({"record_identifier": "my padding"})
pad_op.configure(props)
padded_image = pad_op.process(some_input_image)
# Create an inverse operation to undo the padding, using the record_identifier as target identifier for inversion.
inv_op = InverseOperation("my padding")
unpadded_image = inv_op.process(padded_image)
Note: The InverseOperation reuses the created inverse operation when possible, only creating a new one when the type changes, and only reconfiguring when the properties change. If element-specific properties are needed, they should be set by the Operation that is to be inverted in process() via data components and used in process() of the InverseOperation.
- Args:
target_identifier: The identifier of the operation to invert. Default: “”
device: Specifies whether this Operation should run on CPU or GPU.
seed: Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour: Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to: Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier: Unused for this operation as it is not invertible
- Other parameters accepted by configure():
device: Properties.EnumStringParam(value=”GPUIfOpenGl”, admitted_values={“ForceGPU”, “GPUIfOpenGl”, “GPUIfGlImage”, “ForceCPU”})
error_on_unexpected_behaviour: False
record_identifier:
target_identifier:
Overloaded function.
__init__(self: imfusion.machinelearning.InverseOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.InverseOperation, target_identifier: str = '', *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.InvertOperation(*args, **kwargs)
Bases:
Operation
Invert the intensities of the image: \(\textnormal{output} = -\textnormal{input}\).
- Parameters:
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.InvertOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.InvertOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.InvertibleOperation(self: InvertibleOperation, name: str, processing_policy: ProcessingPolicy, *, device: ComputingDevice | None = None, apply_to: list[str] | None = None, seed: int | None = None, error_on_unexpected_behaviour: bool | None = None)
Bases:
Operation
Base class for operations that support inversion.
Basic Usage:
Inherit from InvertibleOperation
Implement the inverse_specs() method
Implement the DataElement-specific process methods (process_images(), process_points(), process_boxes()) (or override process(DataItem) and call super().process(item))
Use InverseOperation with the same identifier to create and apply the inverse
Implementation Patterns
Pattern 1: Override process_images() (Recommended)
class PyImageIntensityScalingOperation(ml.InvertibleOperation):
    def __init__(self, scale_factor: float = 2.0):
        ml.InvertibleOperation.__init__(self, "PyImageIntensityScalingOperation", ml.Operation.ProcessingPolicy.EVERYTHING)
        self.scale_factor: float = scale_factor

    def process_images(self, images):
        """Scale the images by the scale_factor"""
        return ml.LinearIntensityMappingOperation(factor=self.scale_factor, bias=0.0).process(images)

    def configure(self, properties: Properties) -> bool:
        """Configure this Operation.

        Since the inverse of this operation is itself with inverted parameter,
        this method is used to set up the inverse operation parameters.
        """
        params = properties.asdict()
        if "scale_factor" in params:
            self.scale_factor = params["scale_factor"]
            properties.remove_param("scale_factor")
        return super().configure(properties)

    def inverse_specs(self):
        """Return specs for this operation with inverse parameters"""
        props = Properties({"scale_factor": (1.0 / self.scale_factor) if self.scale_factor != 0 else np.nan})
        return ml.Operation.Specs("PyImageIntensityScalingOperation", props, ml.Phase.ALWAYS)
Pattern 2: Override process(DataItem) (Advanced)
class PyDataItemIntensityScalingOperation(ml.InvertibleOperation):
    def __init__(self, scale_factor: float = 2.0):
        ml.InvertibleOperation.__init__(self, "PyDataItemIntensityScalingOperation", ml.Operation.ProcessingPolicy.EVERYTHING)
        self.scale_factor: float = scale_factor

    def process(self, input: Union[ml.DataItem, imf.SharedImageSet]) -> Optional[imf.SharedImageSet]:
        """Record this operation for inversion (in this case, before running the actual
        transformation), and apply the actual transformation.

        Due to the overloaded process() method, the input can be either a DataItem or a
        SharedImageSet. In the case of a DataItem, the operation is applied in-place.
        """
        # record the operation for inversion:
        ret = super().process(input)
        # apply the actual transformation:
        if isinstance(input, ml.DataItem):
            assert ret is None, f"DataItem is not expected to be returned, but got {ret}"
            ml.LinearIntensityMappingOperation(factor=self.scale_factor).process(input)
        elif isinstance(input, imf.SharedImageSet):
            return ml.LinearIntensityMappingOperation(factor=self.scale_factor).process(ret)
        else:
            raise ValueError(f"Invalid input type: {type(input)}")

    def process_images(self, sis: imf.SharedImageSet) -> imf.SharedImageSet:
        """Pass-through method to define the compatible data element.

        The recording and inversion happen in the InvertibleOperation.process() call.
        """
        return sis

    def inverse_specs(self) -> ml.Operation.Specs:
        """Return specs for an operation that can perform the inverse.

        Note: The inverse operation can be any registered operation, not necessarily this class.
        """
        props = Properties({"factor": (1.0 / self.scale_factor) if self.scale_factor != 0 else np.nan, "bias": 0.})
        return ml.Operation.Specs("LinearIntensityMapping", props, ml.Phase.ALWAYS)
Complete Workflow Example
# Create forward operation
forward_op = PyImageIntensityScalingOperation(scale_factor=2.0)
forward_op.record_identifier = "scale_2x"  # this is required for inversion information to be stored

# Apply forward transformation
forward_op.process(data_item)

# Create and apply inverse operation
inverse_op = ml.InverseOperation("scale_2x")  # inversion information is retrieved from the target record_identifier
inverse_op.process(data_item)  # Undoes the scaling
How It Works Internally
When process() is called, InvertibleOperation records operation details in an InversionComponent
The InversionComponent stores the operation name and configuration needed for inversion
InverseOperation uses this recorded information plus your inverse_specs() to create the inverse
The system supports both Python operations inverting themselves and delegating to other operations
If needed, data-specific inversion information may be attached to the DataElement in the forward operation so it can be used by the specified inverse operation
- configuration(self: InvertibleOperation) Properties
- configure(self: InvertibleOperation, properties: Properties) bool
- process(*args, **kwargs)
Overloaded function.
process(self: imfusion.machinelearning.InvertibleOperation, item: imfusion.machinelearning.DataItem) -> None
Execute the operation on the input DataItem in-place, i.e. the input item will be modified.
process(self: imfusion.machinelearning.InvertibleOperation, images: imfusion.SharedImageSet, in_place: bool = False) -> imfusion.SharedImageSet
- Execute the operation on the input images and returns its output.
- Args:
images (SharedImageSet): the input images. in_place (bool): If False, the input is guaranteed to be unchanged and the function will return a new object. If True, the input will be changed and the function will return it. (Default: False).
process(self: imfusion.machinelearning.InvertibleOperation, points: imfusion.machinelearning.KeypointSet, in_place: bool = False) -> imfusion.machinelearning.KeypointSet
- Execute the operation on the input keypoints. The output will always be a different set of keypoints, i.e. this function never works in-place.
- Args:
points (KeypointSet): the input points. in_place (bool): if True, the input will be changed and the function will return it. If False, the input is guaranteed to be unchanged and the function will return a new object (Default: False).
process(self: imfusion.machinelearning.InvertibleOperation, boxes: imfusion.machinelearning.BoundingBoxSet, in_place: bool = False) -> imfusion.machinelearning.BoundingBoxSet
- Execute the operation on the input bounding boxes. The output will always be a different set of bounding boxes, i.e. this function never works in-place.
- Args:
boxes (BoundingBoxSet): the input boxes. in_place (bool): If False, the input is guaranteed to be unchanged and the function will return a new object. If True, the input will be changed and the function will return it. (Default: False).
- seed_random_engine(self: InvertibleOperation, seed: int) None
- property active_fields
Fields in the data item that this operation will process.
- property computing_device
The computing device property.
- property does_not_modify_input
- property error_on_unexpected_behaviour
Treat unexpected behaviour warnings as errors.
- property name
- property processing_policy
The processing_policy property. Resetting it overrides the default operation behaviour on label.
- property record_identifier
Identifier used to record this operation for inversion. Setting this enables inversion if the operation supports it.
- property seed
- property supports_inversion
Returns whether this operation supports inversion.
- class imfusion.machinelearning.KeepLargestComponentOperation(*args, **kwargs)
Bases:
Operation
Create a label map with the largest components above the specified threshold. The output label map encodes each component with a different label value (1 for the largest, 2 for the second largest, etc.). Input images may be float or integer, outputs are unsigned 8-bit integer images (i.e. max 255 components). The operation will automatically set the default processing policy based on its input (if the input contains more than one image, then only the label maps will be processed).
- Parameters:
max_number_components – the maximum number of components to keep. Default: 1
min_component_size – the minimum size of a component to keep. Default: -1, i.e. no minimum
max_component_size – the maximum size of a component to keep Default: -1, i.e. no maximum
threshold – the threshold to use for the binarization. Default: 0.5
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.KeepLargestComponentOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.KeepLargestComponentOperation, max_number_components: int = 1, min_component_size: int = -1, max_component_size: int = -1, threshold: float = 0.5, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
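A minimal usage sketch (assuming ml is imfusion.machinelearning and prediction is an existing SharedImageSet holding a probability or label image; names are illustrative):

import imfusion.machinelearning as ml

# Keep the two largest components above a 0.5 threshold; the output label map
# encodes the largest component as 1 and the second largest as 2.
op = ml.KeepLargestComponentOperation(max_number_components=2, threshold=0.5)
components = op.process(prediction)  # returns a new SharedImageSet (in_place defaults to False)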
- class imfusion.machinelearning.KeypointSet(*args, **kwargs)
Bases:
Data
Class for managing sets of keypoints
The class is meant to be used in parallel with SharedImageSet. For each frame in the set, and for each type of keypoint (e.g. body, pedicles, etc.), there is a list of points indicating an instance of that type in the reference image. In terms of tensor dimensions, this would be represented as [N, C, K], where N is the batch size, C is the number of channels (i.e. types of keypoints), and K is the number of keypoints for the same instance type. Each keypoint is a vec3, so the full tensor shape is [N, C, K, 3].
Note
This class API is experimental and might change soon.
Overloaded function.
__init__(self: imfusion.machinelearning.KeypointSet, points: list[list[list[numpy.ndarray[numpy.float64[3, 1]]]]]) -> None
__init__(self: imfusion.machinelearning.KeypointSet, points: list[list[list[list[float]]]]) -> None
__init__(self: imfusion.machinelearning.KeypointSet, array: numpy.ndarray[numpy.float64]) -> None
- static load(location: str | PathLike) KeypointSet | None
Load a KeypointSet from an ImFusion file.
- Parameters:
location – input path.
- save(self: KeypointSet, location: str | PathLike) None
Save a KeypointSet as an ImFusion file.
- Parameters:
location – output path.
- property data
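As an illustration, a KeypointSet can be built from a numpy array following the [N, C, K, 3] convention described above and round-tripped through an ImFusion file (a sketch; the file path and extension are placeholders):

import numpy as np
import imfusion.machinelearning as ml

# 1 frame, 2 keypoint types, 3 instances per type, each keypoint a vec3
points = np.zeros((1, 2, 3, 3), dtype=np.float64)
keypoints = ml.KeypointSet(points)

keypoints.save("keypoints.imf")                   # write to an ImFusion file
restored = ml.KeypointSet.load("keypoints.imf")   # returns None if loading fails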
- class imfusion.machinelearning.KeypointsElement(self: KeypointsElement, keypoint_set: KeypointSet)
Bases:
DataElement
Initialize a KeypointsElement.
- Parameters:
keypoint_set – In case the argument is a numpy array, the array shape is expected to be [N, C, K, 3], where N is the batch size, C the number of different keypoint types (channel), K the number of instances of the same point type, which are expected to have dimension 3. If the argument is a nested list, the same concept applies also to the size of each level of nesting.
- property keypoints
Access to the underlying KeypointSet.
- class imfusion.machinelearning.KeypointsFromBlobsOperation(*args, **kwargs)
Bases:
Operation
Extracts keypoints from a blob image. Takes the ImageElement specified in apply_to as input. If apply_to is not specified and there is only one image in the data item, this image will automatically be selected.
- Parameters:
keypoints_field_name – Field name of the output keypoints. Default: “keypoints”
keypoint_extraction_mode – Extraction mode: 0: Max, 1: Mean, 2: Local Max. Default: 0
blob_intensity_cutoff – Minimum blob intensity to be considered in analysis. Default: 0.02
min_cluster_distance – In case of local aggregation methods, minimum distance allowed among clusters. Default: 10.0
min_cluster_weight – In case of local aggregation methods, minimum intensity for a cluster to be considered independent. Default: 0.1
max_internal_clusters – In case of local aggregation methods, maximum number of internal clusters to consider, to avoid excessive numbers that stall the algorithm. If there are more, the lowest-weighted ones are removed first. Default: 1000
run_smoothing – Runs a Gaussian smoothing with 1 pixel standard deviation to improve stability of local maxima. Default: False
smoothing_half_kernel – Half kernel size (in pixels) of the Gaussian smoothing applied when run_smoothing is enabled. Default: 2
run_intensity_based_refinement – Runs blob intensity based refinement of clustered keypoints. Default: False
apply_to – Field containing the blob image. If not specified and if there is only one image in the data item, this image will automatically be selected. Default: []
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.KeypointsFromBlobsOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.KeypointsFromBlobsOperation, keypoints_field_name: str = 'keypoints', keypoint_extraction_mode: int = 0, blob_intensity_cutoff: float = 0.02, min_cluster_distance: float = 10.0, min_cluster_weight: float = 0.1, max_internal_clusters: int = 1000, run_smoothing: bool = False, smoothing_half_kernel: int = 2, run_intensity_based_refinement: bool = False, apply_to: list[str] = [], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.LabelROISampler(*args, **kwargs)
Bases:
ImageROISampler
Sampler which samples ROIs from the input image and label map, such that one particular label appears. For each ROI, one of the labels_values will be selected and the sampler will make sure that the ROI includes this label. If the sample_boundaries_only flag is set to true, regions will contain at least two different label values. If the constraints are not feasible, the sampler will either extract a random ROI with the target size or return an empty image, based on the flag fallback_to_random. (The purpose of returning an empty image is to allow chaining this sampler with a FilterDataLoader, so that images without a valid label are skipped entirely.)
- Parameters:
roi_size – Target size of the ROIs to be extracted as [Width, Height, Slices]
labels_values – List of integers representing the target labels
sample_boundaries_only – Make sure that the ROI contains a boundary (i.e. at least two different label values)
fallback_to_random – Whether to sample a random ROI or return an empty one when the target label values are not found. Default: True
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
- Other parameters accepted by configure():
padding_mode: Properties.EnumStringParam(value="clamp", admitted_values={"clamp", "mirror", "zero"})
label_padding_mode: Properties.EnumStringParam(value="clamp", admitted_values={"clamp", "mirror", "zero"})
Overloaded function.
__init__(self: imfusion.machinelearning.LabelROISampler, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.LabelROISampler, roi_size: numpy.ndarray[numpy.int32[3, 1]], labels_values: list[int], sample_boundaries_only: bool, fallback_to_random: bool = True, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
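A sketch of drawing label-constrained ROIs (assuming item is a DataItem containing an image and a label map; the parameter values are illustrative):

import imfusion.machinelearning as ml

sampler = ml.LabelROISampler(
    roi_size=[64, 64, 32],        # [Width, Height, Slices]
    labels_values=[1, 2],         # each ROI must contain one of these labels
    sample_boundaries_only=False,
    fallback_to_random=True,      # otherwise an empty image is returned when no valid ROI exists
)
sampler.process(item)  # modifies the DataItem in place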
- class imfusion.machinelearning.LazyModule(name: str)
Bases:
object
Wrapper that delays importing a package until its attributes are accessed. We need this to keep the import time of the imfusion package reasonable.
Note
This wrapper is fairly basic and does not support assignments to the modules, i.e. no monkey-patching.
- Parameters:
name (str) –
- class imfusion.machinelearning.LinearIntensityMappingOperation(*args, **kwargs)
Bases:
Operation
Apply a linear shift and scale to the image intensities. \(\textnormal{output} = \textnormal{factor} * \textnormal{input} + \textnormal{bias}\)
- Parameters:
factor – Multiplying factor (see formula). Default: 1.0
bias – Additive bias (see formula). Default: 0.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.LinearIntensityMappingOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.LinearIntensityMappingOperation, factor: float = 1.0, bias: float = 0.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
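For example, applying output = factor * input + bias to an existing SharedImageSet images (a sketch):

import imfusion.machinelearning as ml

mapping = ml.LinearIntensityMappingOperation(factor=2.0, bias=-100.0)
mapped = mapping.process(images)         # returns a new SharedImageSet
mapping.process(images, in_place=True)   # or modify the input directly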
- class imfusion.machinelearning.MRIBiasFieldCorrectionOperation(*args, **kwargs)
Bases:
Operation
Perform bias field correction using an implicitly trained neural network (see MRIBiasFieldCorrectionAlgorithm for more details and the parameters description).
- Parameters:
iterations – For values > 1, the field is iteratively refined. Default: 1
config_path – Path of the machine learning model (use “GENERIC3D” or “GENERIC2D” for the default models). Default: “GENERIC3D”
field_smoothing_half_kernel – For values > 0, additional smoothing with a Gaussian kernel. Default: -1
preserve_mean_intensity – Preserve the mean image intensity in the output. Default: True
output_is_field – Produce the field, not the corrected image. Default: False
field_dimensions – Internal field dimensions (zeroes represent the model default dimensions). Default: [0, 0, 0]
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.MRIBiasFieldCorrectionOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.MRIBiasFieldCorrectionOperation, iterations: int = 1, config_path: str = 'GENERIC3D', field_smoothing_half_kernel: int = -1, preserve_mean_intensity: bool = True, output_is_field: bool = False, field_dimensions: numpy.ndarray[numpy.int32[3, 1]] = array([0, 0, 0], dtype=int32), *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.MRIBiasFieldGenerationOperation(*args, **kwargs)
Bases:
Operation
Apply or generate a multiplicative intensity modulation field. If the output is a field, it is shifted as close to mean 1 as possible while remaining positive everywhere. If the output is not a field, the image intensity is shifted so that the mean intensity of the input image is preserved.
- Parameters:
length_scale_mm – Length scale (in mm) of the Gaussian radial basis function. Default: 100.0
field_amplitude – Total field amplitude (centered around one). I.e. 0.4 for a 40% field. Default: 0.4
center – Relative center of the Gaussian with respect to the image axes. Values from [0..1] for locations inside the image. Default: [0.25, 0.25, 0.25]
distance_scaling – Relative scaling of the x, y, z world coordinates for field anisotropy. Default: [1, 1, 1]
invert_field – Invert the final field: field <- 2 - field. Default: False
output_is_field – Produce the field, not the corrupted image. Note, the additive normalization method depends on this. Default: False
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.MRIBiasFieldGenerationOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.MRIBiasFieldGenerationOperation, length_scale_mm: float = 100.0, field_amplitude: float = 0.4, center: numpy.ndarray[numpy.float64[3, 1]] = array([0.25, 0.25, 0.25]), distance_scaling: numpy.ndarray[numpy.float64[3, 1]] = array([1., 1., 1.]), invert_field: bool = False, output_is_field: bool = False, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.MachineLearningModel(self: imfusion.machinelearning.MachineLearningModel, config_path: Union[str, os.PathLike], default_prediction_output: imfusion.machinelearning.PredictionOutput = <PredictionOutput.UNKNOWN: -1>)
Bases:
pybind11_object
Class for creating a MachineLearningModel.
Create a MachineLearningModel. If the resource required by the MachineLearningModel could not be acquired, raises a RuntimeError.
- Parameters:
config_path – Path to the configuration file used to create ModelConfiguration object owned by the model.
default_prediction_output – Parameter used to specify the prediction output of a model if this is missing from the config file. The prediction output type must be specified either here or in the configuration file under the key PredictionOutput. If it is specified in both places, the one from the config file is used.
- engine(self: MachineLearningModel) Engine
Returns the underlying engine used by the model. This can be useful for setting CPU/GPU mode, querying whether CUDA is available, etc.
- predict(*args, **kwargs)
Overloaded function.
predict(self: imfusion.machinelearning.MachineLearningModel, input: imfusion.machinelearning.DataItem) -> imfusion.machinelearning.DataItem
Method to execute a generic multiple-input/multiple-output model. The input and output type of a machine learning model is the DataItem, which allows giving and retrieving a heterogeneous map-type container of the data needed and returned by the model.
- Parameters:
input (DataItem) – Input data item containing all data to be used for inference
predict(self: imfusion.machinelearning.MachineLearningModel, images: imfusion.SharedImageSet) -> imfusion.SharedImageSet
Convenience method to execute a single-input/single-output image-based model.
- Parameters:
images (SharedImageSet) – Input image set to be used for inference
- property label_names
Dict of the list of label names for each output. Keys are the engine output names if specified, else “Prediction”.
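A minimal inference sketch (the configuration path is a placeholder; images stands for a SharedImageSet compatible with the model):

import imfusion.machinelearning as ml

model = ml.MachineLearningModel("my_model_config.yaml")  # raises RuntimeError if resources cannot be acquired
prediction = model.predict(images)                       # single-input/single-output convenience overload
print(model.label_names)

# Multi-input/multi-output models take and return a DataItem instead:
# result_item = model.predict(data_item)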
- class imfusion.machinelearning.MakeFloatOperation(*args, **kwargs)
Bases:
Operation
Convert the input image to float with original values (internal shifts and scales are baked in).
- Parameters:
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.MakeFloatOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.MakeFloatOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.MarkAsTargetOperation(*args, **kwargs)
Bases:
Operation
Mark elements from the input data item as learning "target", which might affect the behaviour of the subsequent operations that rely on ProcessingPolicy or use other custom target-specific logic.
- Parameters:
apply_to – fields to mark as targets (will initialize the underlying apply_to parameter)
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.MarkAsTargetOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.MarkAsTargetOperation, apply_to: list[str], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.MergeAsChannelsOperation(*args, **kwargs)
Bases:
Operation
Merge multiple DataElements into a single one along the channel dimension. Only applicable for ImageElements and VectorElements.
- Parameters:
apply_to – fields which should be merged.
output_field – name of the resulting field.
remove_fields – remove fields used for merging from the data item. Default: True
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.MergeAsChannelsOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.MergeAsChannelsOperation, apply_to: list[str] = [], output_field: str = '', remove_fields: bool = True, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.Metric
Bases:
pybind11_object
- __call__(self: Metric, item: DataItem) list[dict[str, ndarray[numpy.float64[m, n]]]]
Delegates to:
compute()
- configuration(self: Metric) Properties
- configure(self: Metric, properties: Properties) None
- property data_scheme
- class imfusion.machinelearning.ModelConfiguration(self: imfusion.machinelearning.ModelConfiguration, config_path: str, default_prediction_output: imfusion.machinelearning.PredictionOutput = <PredictionOutput.UNKNOWN: -1>)
Bases:
pybind11_object
Configuration class for
MachineLearningModel
parameters.This class parses YAML configuration files and validates their consistency. It supports versioned configurations to maintain API compatibility.
Version Management:
- The p_version parameter tracks the configuration format version at the time the model was created.
- When changes to the ModelConfiguration class API are made, VERSION_COUNT is incremented.
- Older configurations are automatically upgraded to the latest version.
- Use the save() function to convert configurations to the latest version.
Create a ModelConfiguration. If the resource required by the ModelConfiguration could not be acquired, raises a RuntimeError.
- Parameters:
config_path – Path to the YAML configuration file used to create the ModelConfiguration object.
default_prediction_output – type of prediction output, can be [Image, Vector, Keypoints, BoundingBoxes, Tensor]. For legacy configurations (Version < 3) this parameter has to be given programmatically.
- compare_with(self: ModelConfiguration, other: ModelConfiguration, ignore_version: bool = False) bool
Compare this configuration with another ModelConfiguration.
This method performs a deep comparison of all configuration parameters between this instance and the provided configuration.
- Parameters:
other (ModelConfiguration) – The configuration to compare against.
ignore_version (bool, optional) – If True, version differences are ignored during comparison. Defaults to False.
- Returns:
True if the configurations are identical, False otherwise.
- Return type:
- save(self: ModelConfiguration, config_path: str) bool
Save the ModelConfiguration to a file. Note: This can be useful for converting an old configuration to the latest version.
- Parameters:
config_path – Path to the configuration file used to save the ModelConfiguration.
- VERSION_COUNT = 8
- property version
Version of the model configuration.
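A sketch of upgrading an older configuration file to the latest format version (file paths are placeholders; legacy configurations may additionally require default_prediction_output):

import imfusion.machinelearning as ml

config = ml.ModelConfiguration("old_model_config.yaml")
print(config.version, ml.ModelConfiguration.VERSION_COUNT)

# Saving writes the configuration in the latest version
config.save("upgraded_model_config.yaml")

upgraded = ml.ModelConfiguration("upgraded_model_config.yaml")
print(config.compare_with(upgraded, ignore_version=True))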
- class imfusion.machinelearning.ModelType(*args, **kwargs)
Bases:
pybind11_object
Members:
RANDOM_FOREST
NEURAL_NETWORK
Overloaded function.
__init__(self: imfusion.machinelearning.ModelType, value: int) -> None
__init__(self: imfusion.machinelearning.ModelType, arg0: str) -> None
- NEURAL_NETWORK = <ModelType.NEURAL_NETWORK: 1>
- RANDOM_FOREST = <ModelType.RANDOM_FOREST: 0>
- property name
- property value
- class imfusion.machinelearning.MorphologicalFilterOperation(*args, **kwargs)
Bases:
Operation
Runs a morphological operation on the input.
- Parameters:
mode – name of the operation in ['dilation', 'erosion', 'opening', 'closing']
op_size – size of the structuring element
use_l1_distance – flag to use L1 (absolute) or L2 (squared) distance in the local computations
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.MorphologicalFilterOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.MorphologicalFilterOperation, mode: str, op_size: int, use_l1_distance: bool, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
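For instance, a morphological closing with a structuring element of size 3 (a sketch; label_map stands for an existing SharedImageSet):

import imfusion.machinelearning as ml

closing = ml.MorphologicalFilterOperation(mode="closing", op_size=3, use_l1_distance=True)
closed = closing.process(label_map)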
- class imfusion.machinelearning.NormalizeMADOperation(*args, **kwargs)
Bases:
Operation
Normalize the input image based on robust statistics. The image is shifted so that the median corresponds to 0 and normalized with the median absolute deviation (see https://en.wikipedia.org/wiki/Median_absolute_deviation). The operation is performed channel-wise.
- Parameters:
selected_channels – channels selected for MAD normalization. If empty, all channels are normalized (default).
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
- Other parameters accepted by configure():
fix_median: False
Overloaded function.
__init__(self: imfusion.machinelearning.NormalizeMADOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.NormalizeMADOperation, selected_channels: list[int] = [], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.NormalizeNormalOperation(*args, **kwargs)
Bases:
Operation
Normalize the input image so that it has zero mean and unit standard deviation. A particular intensity value can be set to be ignored during the computations.
- Parameters:
keep_background – Whether to ignore all intensities equal to background_value. Default: False
background_value – Intensity value to be potentially ignored. Default: 0.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.NormalizeNormalOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.NormalizeNormalOperation, keep_background: bool = False, background_value: float = 0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.NormalizePercentileOperation(*args, **kwargs)
Bases:
Operation
Normalize the input image based on its intensity distribution, in particular on a lower and upper percentile. The output image is not guaranteed to be in [0;1] but the lower percentile will be mapped to 0 and the upper one to 1.
- Parameters:
min_percentile – Lower percentile in [0;1]. Default: 0.0
max_percentile – Upper percentile in [0;1]. Default: 1.0
clamp_values – Intensities are clipped to the new range. Default: False
ignore_zeros – Whether to ignore zeros when computing the percentiles. Default: False
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.NormalizePercentileOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.NormalizePercentileOperation, min_percentile: float = 0.0, max_percentile: float = 1.0, clamp_values: bool = False, ignore_zeros: bool = False, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
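For instance, a robust normalization that maps the 1st percentile to 0 and the 99th percentile to 1, clipping values outside that range (a sketch; images stands for an existing SharedImageSet):

import imfusion.machinelearning as ml

normalize = ml.NormalizePercentileOperation(
    min_percentile=0.01,   # mapped to 0
    max_percentile=0.99,   # mapped to 1
    clamp_values=True,     # clip intensities outside the new range
)
normalized = normalize.process(images)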
- class imfusion.machinelearning.NormalizeUniformOperation(*args, **kwargs)
Bases:
Operation
Normalize the input image based on its minimum/maximum intensity so that the output image has a [min; max] range. The operation is performed channel-wise.
- Parameters:
min – New minimum value of the image after normalization. Default: 0.0
max – New maximum value of the image after normalization. Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.NormalizeUniformOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.NormalizeUniformOperation, min: float = 0.0, max: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.OneHotOperation(*args, **kwargs)
Bases:
Operation
Encode a single-channel label image into a one-hot representation with num_channels channels. If encode_background is off, label '0' denotes the background and does not encode to anything; label '1' sets the value '1' in the first channel, label '2' sets the value '1' in the second channel, etc. If encode_background is on, label '0' is the background and sets the value '1' in the first channel, label '1' sets the value '1' in the second channel, etc. The number of channels must be large enough to contain this encoding.
- Parameters:
num_channels – Number of channels in the output. Must be equal to or larger than the highest possible label value. Default: 0
encode_background – whether to encode the background in the first channel. Default: True
to_ubyte – return the label as ubyte (=uint8) instead of float. Default: False
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.OneHotOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.OneHotOperation, num_channels: int = 0, encode_background: bool = True, to_ubyte: bool = False, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
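As an illustration, one-hot encoding a label map containing labels 0 to 3 (a sketch; label_map stands for an existing SharedImageSet of modality LABEL):

import imfusion.machinelearning as ml

# With encode_background=True, label 0 maps to channel 0, label 1 to channel 1, etc.,
# so four distinct labels require num_channels=4.
one_hot = ml.OneHotOperation(num_channels=4, encode_background=True)
encoded = one_hot.process(label_map)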
- class imfusion.machinelearning.Operation(self: imfusion.machinelearning.Operation, name: str, processing_policy: imfusion.machinelearning.Operation.ProcessingPolicy = <ProcessingPolicy.EVERYTHING_BUT_LABELS: 0>, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None)
Bases:
pybind11_object
- class ProcessingPolicy(*args, **kwargs)
Bases:
pybind11_object
Members:
EVERYTHING_BUT_LABELS
EVERYTHING
ONLY_LABELS
Overloaded function.
__init__(self: imfusion.machinelearning.Operation.ProcessingPolicy, value: int) -> None
__init__(self: imfusion.machinelearning.Operation.ProcessingPolicy, arg0: str) -> None
- EVERYTHING = <ProcessingPolicy.EVERYTHING: 1>
- EVERYTHING_BUT_LABELS = <ProcessingPolicy.EVERYTHING_BUT_LABELS: 0>
- ONLY_LABELS = <ProcessingPolicy.ONLY_LABELS: 2>
- property name
- property value
- class Specs(*args, **kwargs)
Bases:
pybind11_object
Overloaded function.
__init__(self: imfusion.machinelearning.Operation.Specs) -> None
__init__(self: imfusion.machinelearning.Operation.Specs, name: str, configuration: imfusion.Properties, when_to_apply: imfusion.machinelearning.Phase) -> None
- property name
- property prop
- property when_to_apply
- configuration(self: Operation) Properties
- configure(self: Operation, properties: Properties) bool
- process(*args, **kwargs)
Overloaded function.
process(self: imfusion.machinelearning.Operation, item: imfusion.machinelearning.DataItem) -> None
Execute the operation on the input DataItem in-place, i.e. the input item will be modified.
process(self: imfusion.machinelearning.Operation, images: imfusion.SharedImageSet, in_place: bool = False) -> imfusion.SharedImageSet
- Execute the operation on the input images and returns its output.
- Args:
images (SharedImageSet): the input images. in_place (bool): If False, the input is guaranteed to be unchanged and the function will return a new object. If True, the input will be changed and the function will return it. (Default: False).
process(self: imfusion.machinelearning.Operation, points: imfusion.machinelearning.KeypointSet, in_place: bool = False) -> imfusion.machinelearning.KeypointSet
- Execute the operation on the input keypoints. The output will always be a different set of keypoints, i.e. this function never works in-place.
- Args:
points (KeypointSet): the input points. in_place (bool): if True, the input will be changed and the function will return it. If False, the input is guaranteed to be unchanged and the function will return a new object (Default: False).
process(self: imfusion.machinelearning.Operation, boxes: imfusion.machinelearning.BoundingBoxSet, in_place: bool = False) -> imfusion.machinelearning.BoundingBoxSet
- Execute the operation on the input bounding boxes. The output will always be a different set of bounding boxes, i.e. this function never works in-place.
- Args:
boxes (BoundingBoxSet): the input boxes. in_place (bool): If False, the input is guaranteed to be unchanged and the function will return a new object. If True, the input will be changed and the function will return it. (Default: False).
- EVERYTHING = <ProcessingPolicy.EVERYTHING: 1>
- EVERYTHING_BUT_LABELS = <ProcessingPolicy.EVERYTHING_BUT_LABELS: 0>
- ONLY_LABELS = <ProcessingPolicy.ONLY_LABELS: 2>
- property active_fields
Fields in the data item that this operation will process.
- property computing_device
The computing device property.
- property does_not_modify_input
- property error_on_unexpected_behaviour
Treat unexpected behaviour warnings as errors.
- property name
- property processing_policy
The processing_policy property. Resetting it overrides the default operation behaviour on label.
- property record_identifier
Identifier used to record this operation for inversion. Setting this enables inversion if the operation supports it.
- property seed
- property supports_inversion
Returns whether this operation supports inversion.
- class imfusion.machinelearning.OperationsSequence(*args, **kwargs)
Bases:
pybind11_object
Helper class that executes a list of operations sequentially. This class tries to minimize the number of intermediate copies and should be used for performance reasons.
Overloaded function.
__init__(self: imfusion.machinelearning.OperationsSequence) -> None
Default constructor that initializes the class with an empty list of operations.
__init__(self: imfusion.machinelearning.OperationsSequence, pipeline_config: list[tuple[str, imfusion.Properties, imfusion.machinelearning.Phase]]) -> None
Initialize the sequential processing with a pipeline of Operations and their respective specs. The operations are executed according to their pipeline order.
- Parameters:
pipeline_config – List of specs for the operations to add to the sequence.
- add_operation(*args, **kwargs)
Overloaded function.
add_operation(self: imfusion.machinelearning.OperationsSequence, operation: imfusion.machinelearning.Operation, phase: imfusion.machinelearning.Phase = <Phase.ALWAYS: 7>) -> bool
Add an operation to the sequential processing. The operations are executed according to the addition order.
- Parameters:
operation – operation instance to add to the sequence.
phase – when to execute the added operation. Default: Phase.Always
add_operation(self: imfusion.machinelearning.OperationsSequence, name: str, properties: imfusion.Properties, phase: imfusion.machinelearning.Phase = <Phase.ALWAYS: 7>) -> None
Add an operation to the sequential processing. The operations are executed according to the addition order.
- Parameters:
name – name of the operation to add to the sequence. You must use the name used for registering the op in the operation factory. A list of the available ops can be retrieved by available_operations().
properties – properties to configure the operation.
phase – specifies at which execution phase should the operation be run.
- static available_cpp_operations() list[str]
Returns the list of registered C++ operations available for usage in OperationsSequence.
- static available_operations() list[str]
Returns the list of all registered operations available for usage in OperationsSequence.
- static available_py_operations() list[str]
Returns the list of registered Python operations available for usage in OperationsSequence.
- ok(self: OperationsSequence) bool
Returns whether operation setup was successful.
- operation_names(self: OperationsSequence) list[str]
Returns the operation names added to the sequence.
- process(*args, **kwargs)
Overloaded function.
process(self: imfusion.machinelearning.OperationsSequence, input: imfusion.SharedImageSet, exec_phase: imfusion.machinelearning.Phase = <Phase.ALWAYS: 7>, in_place: bool = True) -> imfusion.SharedImageSet
Execute the preprocessing pipeline on the given input images. This function never works in-place.
- Parameters:
input – input image
exec_phase –
specifies the execution phase of the preprocessing pipeline. The execution will run only those operations whose phase (specified in the specs) corresponds to the current exec_phase, with the following exceptions:
Operations marked with phase == Phase.Always are always run regardless of the exec_phase.
If exec_phase == Phase.Always, all operations in the preprocessing pipeline are run regardless of their individual phase.
in_place (bool): If False, the input is guaranteed to be unchanged and the function will return a new object. If True, the input will be changed and the function will return it. (Default: False).
process(self: imfusion.machinelearning.OperationsSequence, input: imfusion.machinelearning.DataItem, exec_phase: imfusion.machinelearning.Phase = <Phase.ALWAYS: 7>) -> bool
Execute the preprocessing pipeline on the given input. This function always works in-place, i.e. the input DataItem will be modified.
- Parameters:
input – DataItem to be processed
exec_phase –
specifies the execution phase of the preprocessing pipeline. The execution will run only those operations whose phase (specified in the specs) corresponds to the current exec_phase, with the following exceptions:
Operations marked with phase == Phase.Always are always run regardless of the exec_phase.
If exec_phase == Phase.Always, all operations in the preprocessing pipeline are run regardless of their individual phase.
- set_error_on_unexpected_behaviour(self: OperationsSequence, arg0: bool) None
Set a flag on all operations controlling whether to throw an error when an operation warns about unexpected behaviour.
- property operations
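A sketch of chaining several of the operations documented above (images stands for an existing SharedImageSet; the registered name "LinearIntensityMapping" matches the one used in the inverse_specs() example earlier):

import imfusion as imf
import imfusion.machinelearning as ml

seq = ml.OperationsSequence()
seq.add_operation(ml.MakeFloatOperation())
seq.add_operation(ml.NormalizePercentileOperation(min_percentile=0.01, max_percentile=0.99))
# Operations can also be added by registered name plus a Properties configuration:
seq.add_operation("LinearIntensityMapping", imf.Properties({"factor": 2.0, "bias": 0.0}), ml.Phase.ALWAYS)

assert seq.ok()
processed = seq.process(images)  # runs all operations whose phase matches the execution phase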
- class imfusion.machinelearning.OrientedROISampler(*args, **kwargs)
Bases:
ImageROISampler
The OrientedROISampler draws num_samples ROIs randomly, of size roi_size and spacing roi_spacing, per dataset. The sampler takes n_guided = floor(sample_from_labels_proportion * num_samples) label-guided samples, and uniformly random samples for the rest. Labelmaps and Keypoints are supported for label-guided sampling; for labelmap sampling, the labelmap is interpreted as a probabilistic output and sampled accordingly (thus negative values break the sampling, and labelmaps need to be one-hot encoded in case of multiple label values). Random augmentations can be applied, including rotation, flipping, shearing, scaling and jitter. These augmentations directly change the matrix of the sample, thus the samples are not guaranteed to be affine or even in a right-handed coordinate system. The samples retain their matrices, so they can be viewed in their original position. May throw an ImageSamplerError.
- Parameters:
roi_size – Target size of the ROIs to be extracted as [Width, Height, Slices]
roi_spacing – Target spacing of the ROIs to be extracted in mm
num_samples – Number of samples to draw from one image
random_rotation_range – Vector defining deviation in quaternion rotation over the corresponding axis. Default: [0, 0, 0]
random_flipping_probability – Vector defining the change that corresponding dimension gets flipped. Default: [0, 0, 0]
random_shearing_range – Vector defining the range of proportional shearing in each dimension. Default: [0, 0, 0]
random_scaling_range – Vector defining the range of scaling in each dimension. Default: [0, 0, 0]
random_jitter_range – Vector defining the range of jitter applied on top of the crop location, defined as the standard deviation in mm in each dimension. Default: [0, 0, 0]
sample_from_labels_proportion – If num_samples is greater than one, guaranteed per-batch proportion of num_samples that should contain non-zero labels in the ROI if possible. If num_samples is one, then this parameter is treated as a probability that the sample is forced to contain non-zero labels. Default: 0.0.
avoid_borders – When taking random samples, the samples avoid to see the border if this is turned on. Default: false
align_crop – Align crop to image grid system, before applying augmentations. Default: false
centers – Optional list of centers to sample from. Default: []
random_scaling_logarithmic – Sample the scaling from a distribution that yields uniform scaling factors in 1/x … x with \(x = 1 + \textnormal{random_scaling_range}\) instead of using scalings from \(\max(|1.0 + \mathcal{N}(0, \text{randomScalingRange})|, 0.001)\). Default: False
random_rotation_probability – Probability to apply a random rotation (parametrized by random_rotation_range). Default: 1.0
random_shearing_probability – Probability to apply a random shearing (parametrized by random_shearing_range). Default: 1.0
random_scaling_probability – Probability to apply a random scaling (parametrized by random_scaling_range). Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
- Other parameters accepted by configure():
padding_mode: Properties.EnumStringParam(value="zero", admitted_values={"clamp", "mirror", "zero"})
label_padding_mode: Properties.EnumStringParam(value="zero", admitted_values={"clamp", "mirror", "zero"})
y_axis_down: False
squeeze: False
Overloaded function.
__init__(self: imfusion.machinelearning.OrientedROISampler, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.OrientedROISampler, roi_size: numpy.ndarray[numpy.int32[3, 1]], roi_spacing: numpy.ndarray[numpy.float64[3, 1]], num_samples: int, random_rotation_range: numpy.ndarray[numpy.float64[3, 1]], random_flipping_probability: numpy.ndarray[numpy.float64[3, 1]], random_shearing_range: numpy.ndarray[numpy.float64[3, 1]], random_scaling_range: numpy.ndarray[numpy.float64[3, 1]], random_jitter_range: numpy.ndarray[numpy.float64[3, 1]], sample_from_labels_proportion: float = 0.0, avoid_borders: bool = False, align_crop: bool = False, centers: Optional[list[numpy.ndarray[numpy.float64[3, 1]]]] = None, random_scaling_logarithmic: bool = False, random_rotation_probability: float = 1.0, random_shearing_probability: float = 1.0, random_scaling_probability: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.PadDimsOperation(*args, **kwargs)
Bases:
InvertibleOperation
This operation expands an image by adding padding pixels to any or all sides. The value of the border can be specified by the padding mode:
Clamp: The border pixels are the same as the closest image pixel.
Mirror: The border pixels mirror the image content (reflected at the image border).
Zero: Constant padding with zeros or, if provided, with paddingValue.
For label maps (i.e. modality == Modality.LABEL), a separate padding mode and value can be specified:
If both label padding mode and label padding value are specified, those values are used to pad the label map.
If only the label padding mode is specified, the paddingValue is used to fill the label map (only for zero padding).
If only the label padding value is specified, the paddingMode is used as the label padding mode.
If neither label padding mode nor label padding value are specified, paddingMode and paddingValue are used for label maps as well.
- Note: the padding widths are evenly distributed to the left and right of the input image.
If the difference delta between the target dimensions and the input dimensions is odd, the padding is distributed as delta / 2 to the left and delta / 2 + 1 to the right.
Note: PaddingMode can be automatically converted from a string. This means you can directly pass a string like "zero", "clamp", or "mirror" to the padding_mode parameters instead of using the enum values.
- Parameters:
target_dims – Target dimensions [width, height, depth] for the padded image. Default: [1, 1, 1]
padding_mode – Mode for padding in ["zero", "clamp", "mirror"]. Default: MIRROR
padding_value – Value to use for padding when using Zero mode (optional). Default: None
label_padding_mode – Mode for padding label maps in ["zero", "clamp", "mirror"], optional. Default: None
label_padding_value – Value to use for padding label maps when using Zero mode (optional). Default: None
allow_dimension_change – Allow padding dimensions equal to 1, which can change image dimension (e.g. 2D to 3D). Default: True
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Identifier of this operation to retrieve inversion parameters from the record
Overloaded function.
__init__(self: imfusion.machinelearning.PadDimsOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.PadDimsOperation, target_dims: numpy.ndarray[numpy.int32[3, 1]] = array([1, 1, 1], dtype=int32), padding_mode: imfusion.PaddingMode = <PaddingMode.MIRROR: 1>, padding_value: Optional[float] = None, label_padding_mode: Optional[imfusion.PaddingMode] = None, label_padding_value: Optional[int] = None, allow_dimension_change: bool = True, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
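A sketch combining padding with the inversion mechanism described earlier in this section (item stands for a DataItem containing the image to pad; the identifier is arbitrary):

import imfusion.machinelearning as ml

pad = ml.PadDimsOperation(target_dims=[128, 128, 64], padding_mode="mirror")
pad.record_identifier = "pad_to_128"  # enables recording so the padding can be undone later
pad.process(item)

# ... e.g. run a model on the padded item ...

unpad = ml.InverseOperation("pad_to_128")  # retrieves the recorded inversion information
unpad.process(item)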
- class imfusion.machinelearning.PadDimsToNextMultipleOperation(*args, **kwargs)
Bases:
InvertibleOperation
Pads each dimension of the input image to the next multiple of the specified divisor. For example, if an image has dimensions (100, 150, 200) and dimension_divisor is (32, 16, 64), the output will have dimensions (128, 160, 256). This operation is useful for ensuring that the input dimensions are compatible with a ML model (e.g. a CNN or UNet) that expects specific dimensions. The value of the border can be specified by the padding mode:
Clamp: The border pixels are the same as the closest image pixel.
Mirror: The border pixels mirror the image content (reflected at the image border).
Zero: Constant padding with zeros or, if provided, with paddingValue.
For label maps (i.e. modality == Modality.LABEL), a separate padding mode and value can be specified:
If both label padding mode and label padding value are specified, those values are used to pad the label map.
If only the label padding mode is specified, the paddingValue is used to fill the label map (only for zero padding).
If only the label padding value is specified, the paddingMode is used as the label padding mode.
If neither label padding mode nor label padding value are specified, paddingMode and paddingValue are used for label maps as well.
- Note: the padding widths are evenly distributed to the left and right of the input image.
If the difference delta between the target dimensions and the input dimensions is odd, the padding is distributed as delta / 2 to the left and delta / 2 + 1 to the right.
Note: PaddingMode can be automatically converted from a string. This means you can directly pass a string like "zero", "clamp", or "mirror" to the padding_mode parameters instead of using the enum values.
- Parameters:
dimension_divisor – The divisor for each dimension of the input image. Default: [1, 1, 1]
padding_mode – Mode for padding in ["zero", "clamp", "mirror"]. Default: MIRROR
padding_value – Value to use for padding when using Zero mode (optional). Default: None
label_padding_mode – Mode for padding label maps in ["zero", "clamp", "mirror"], optional. Default: None
label_padding_value – Value to use for padding label maps when using "zero" mode (optional). Default: None
allow_dimension_change – Allow padding dimensions equal to 1, which can change image dimension (e.g. 2D to 3D). Default: True
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Identifier of this operation to retrieve inversion parameters from the record
Overloaded function.
__init__(self: imfusion.machinelearning.PadDimsToNextMultipleOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.PadDimsToNextMultipleOperation, dimension_divisor: numpy.ndarray[numpy.int32[3, 1]] = array([1, 1, 1], dtype=int32), padding_mode: imfusion.PaddingMode = <PaddingMode.MIRROR: 1>, padding_value: Optional[float] = None, label_padding_mode: Optional[imfusion.PaddingMode] = None, label_padding_value: Optional[int] = None, allow_dimension_change: bool = True, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
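As a hedged sketch, a typical use is preparing inputs for a network whose encoder downsamples by 32 in each dimension (string padding modes are converted to the enum as per the note above; list-to-vector conversion is assumed):
>>> from imfusion import machinelearning as ml
>>> # e.g. an image of (100, 150, 200) voxels would be padded to (128, 160, 224)
>>> pad32 = ml.PadDimsToNextMultipleOperation(dimension_divisor=[32, 32, 32],
...                                           padding_mode="zero", padding_value=0.0)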
- class imfusion.machinelearning.PadOperation(*args, **kwargs)
Bases:
InvertibleOperation
Pad an image to a specific padding size in each dimension. This operation expands an image by adding padding pixels to any or all sides. The value of the border can be specified by the padding mode:
Clamp: The border pixels are the same as the closest image pixel.
Mirror: The border pixels are mirrored from the image content at the boundary.
Zero: Constant padding with zeros or, if provided, with paddingValue.
For label maps (i.e. modality == Modality.LABEL), a separate padding mode and value can be specified:
If both label padding mode and label padding value are specified, those values are used to pad the label map.
If only the label padding mode is specified, the paddingValue is used to fill the label map (only for zero padding).
If only the label padding value is specified, the paddingMode is used as the label padding mode.
If neither label padding mode nor label padding value are specified, paddingMode and paddingValue are used for label maps as well.
Note: Padding sizes are specified in pixels, and can be positive, negative or mixed. Negative padding means cropping.
Note: Both GPU and CPU implementations are provided.
Note: PaddingMode can be automatically converted from a string. This means you can directly pass a string like “zero”, “clamp”, or “mirror” to the padding_mode parameters instead of using the enum values.
- Parameters:
pad_size_x – Padding width in pixels for X dimension [left, right]. Default: [0, 0]
pad_size_y – Padding width in pixels for Y dimension [top, bottom]. Default: [0, 0]
pad_size_z – Padding width in pixels for Z dimension [front, back]. Default: [0, 0]
padding_mode – Mode for padding in [“zero”, “clamp”, “mirror”]. Default:
MIRROR
padding_value – Optional value to use for padding when using Zero mode. Default: None
label_padding_mode – Optional mode for padding label maps in [“zero”, “clamp”, “mirror”]. Default: None
label_padding_value – Optional value to use for padding label maps when using “zero” mode. Default: None
allow_dimension_change – Allow padding dimensions equal to 1, which can change image dimension (e.g. 2D to 3D). Default: True
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Identifier of this operation to retrieve inversion parameters from the record
Overloaded function.
__init__(self: imfusion.machinelearning.PadOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.PadOperation, pad_size_x: numpy.ndarray[numpy.int32[2, 1]] = array([0, 0], dtype=int32), pad_size_y: numpy.ndarray[numpy.int32[2, 1]] = array([0, 0], dtype=int32), pad_size_z: numpy.ndarray[numpy.int32[2, 1]] = array([0, 0], dtype=int32), padding_mode: imfusion.PaddingMode = <PaddingMode.MIRROR: 1>, padding_value: Optional[float] = None, label_padding_mode: Optional[imfusion.PaddingMode] = None, label_padding_value: Optional[int] = None, allow_dimension_change: bool = True, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
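An illustrative construction (same assumptions as the sketches above); negative sizes crop instead of pad:
>>> from imfusion import machinelearning as ml
>>> # add 10 px on the left and 20 px on the right in X, crop 5 px from both sides in Y
>>> pad = ml.PadOperation(pad_size_x=[10, 20], pad_size_y=[-5, -5], pad_size_z=[0, 0],
...                       padding_mode="clamp")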
- class imfusion.machinelearning.ParamUnit(*args, **kwargs)
Bases:
pybind11_object
Members:
MM
FRACTION
VOXEL
Overloaded function.
__init__(self: imfusion.machinelearning.ParamUnit, value: int) -> None
__init__(self: imfusion.machinelearning.ParamUnit, arg0: str) -> None
- FRACTION = FRACTION
- MM = MM
- VOXEL = VOXEL
- property name
- property value
- class imfusion.machinelearning.Phase(*args, **kwargs)
Bases:
pybind11_object
Members:
TRAINING
VALIDATION
INFERENCE
ALWAYS
Overloaded function.
__init__(self: imfusion.machinelearning.Phase, value: int) -> None
__init__(self: imfusion.machinelearning.Phase, arg0: str) -> None
__init__(self: imfusion.machinelearning.Phase, arg0: list[str]) -> None
- ALWAYS = <Phase.ALWAYS: 7>
- INFERENCE = <Phase.INFERENCE: 4>
- TRAINING = <Phase.TRAINING: 1>
- VALIDATION = <Phase.VALIDATION: 2>
- property name
- property value
- class imfusion.machinelearning.PixelwiseClassificationMetrics(self: PixelwiseClassificationMetrics)
Bases:
Metric
- compute_per_label(self: PixelwiseClassificationMetrics, arg0: SharedImageSet, arg1: SharedImageSet) list[dict[str, dict[int, float]]]
- class imfusion.machinelearning.PolyCropOperation(*args, **kwargs)
Bases:
Operation
Masks the image with a convex polygon as described in Markova et al. 2022 (https://arxiv.org/abs/2205.03439).
- Parameters:
points – Each point (texture coordinates) in points defines a plane (perpendicular to the direction from the center to the point); this plane splits the volume in two parts, and the part of the image that doesn’t contain the image center is discarded.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.PolyCropOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.PolyCropOperation, points: list[numpy.ndarray[numpy.float64[3, 1]]] = [], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.PredictionOutput(*args, **kwargs)
Bases:
pybind11_object
Members:
UNKNOWN
VECTOR
IMAGE
KEYPOINTS
BOUNDING_BOXES
Overloaded function.
__init__(self: imfusion.machinelearning.PredictionOutput, value: int) -> None
__init__(self: imfusion.machinelearning.PredictionOutput, arg0: str) -> None
- BOUNDING_BOXES = <PredictionOutput.BOUNDING_BOXES: 3>
- IMAGE = <PredictionOutput.IMAGE: 1>
- KEYPOINTS = <PredictionOutput.KEYPOINTS: 2>
- UNKNOWN = <PredictionOutput.UNKNOWN: -1>
- VECTOR = <PredictionOutput.VECTOR: 0>
- property name
- property value
- class imfusion.machinelearning.PredictionType(*args, **kwargs)
Bases:
pybind11_object
Members:
UNKNOWN
CLASSIFICATION
REGRESSION
OBJECT_DETECTION
Overloaded function.
__init__(self: imfusion.machinelearning.PredictionType, value: int) -> None
__init__(self: imfusion.machinelearning.PredictionType, arg0: str) -> None
- CLASSIFICATION = <PredictionType.CLASSIFICATION: 0>
- OBJECT_DETECTION = <PredictionType.OBJECT_DETECTION: 2>
- REGRESSION = <PredictionType.REGRESSION: 1>
- UNKNOWN = <PredictionType.UNKNOWN: -1>
- property name
- property value
- class imfusion.machinelearning.ProcessingRecordComponent(self: ProcessingRecordComponent)
Bases:
DataComponentBase
- class imfusion.machinelearning.RandomAddDegradedLabelAsChannelOperation(*args, **kwargs)
Bases:
Operation
Append a channel to the image that contains a randomly degraded version of the label.
- Parameters:
blob_radius – Radius of each blob, in pixel coordinates. Default: 5.0.
probability_no_blobs – Probability that zero blobs are chosen. Default: 0.1
probability_invert – Probability of inverting the blobs; in this case the extra channel is positive/negative based on the label except at blobs, where it is zero. Default: 0.0
mean_num_blobs – Mean of (Poisson-distributed) number of blobs to draw, conditional on probability_no_blobs. Default: 100.0
only_positive – If true, output channel is clamped to zero from below. Default: False
label_dilation_range – The label_dilation parameter of the underlying AddDegradedLabelAsChannelOperation is uniformly drawn from this range. Default: [0.0, 0.0]
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
- Other parameters accepted by configure():
probability: 1.0
dilation_range: 0 0
Overloaded function.
__init__(self: imfusion.machinelearning.RandomAddDegradedLabelAsChannelOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomAddDegradedLabelAsChannelOperation, blob_radius: float = 5.0, probability_no_blobs: float = 0.1, probability_invert: float = 0.0, mean_num_blobs: float = 100.0, only_positive: bool = False, dilation_range: numpy.ndarray[numpy.float64[2, 1]] = array([0., 0.]), *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RandomAddRandomNoiseOperation(*args, **kwargs)
Bases:
Operation
Apply AddRandomNoiseOperation to images with a randomized intensity parameter.
- Parameters:
type – Distribution of the noise (‘uniform’, ‘gaussian’, ‘gamma’, ‘shot’). Default: ‘uniform’. See AddRandomNoiseOperation.
intensity_range – Range of the interval used to draw the intensity parameter. Default: [0.0, 0.0]. Absolute values of drawn values are taken.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
probability – Float in [0, 1] defining the probability for the operation to be executed. Default: 1.0
- Other parameters accepted by configure():
intensity_range: 0 0
Overloaded function.
__init__(self: imfusion.machinelearning.RandomAddRandomNoiseOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomAddRandomNoiseOperation, type: str = 'uniform', intensity_range: numpy.ndarray[numpy.float64[2, 1]] = array([0., 0.]), probability: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RandomAxisFlipOperation(*args, **kwargs)
Bases:
Operation
Flip image content along specified set of axes, with independent sampling for each axis.
- Parameters:
axes – List of strings from {‘x’,’y’,’z’} specifying the axes to flip. For 2D images, only ‘x’ and ‘y’ are valid.
probability – Float in [0;1] defining the probability for the operation to be executed. Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RandomAxisFlipOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomAxisFlipOperation, axes: list[str] = [], probability: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
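A minimal augmentation sketch (illustrative only); the keyword-only seed parameter makes the sampling reproducible:
>>> from imfusion import machinelearning as ml
>>> # each listed axis is flipped independently; the operation itself runs 50% of the time
>>> flip = ml.RandomAxisFlipOperation(axes=["x", "y"], probability=0.5, seed=42)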
- class imfusion.machinelearning.RandomAxisRotationOperation(*args, **kwargs)
Bases:
Operation
Rotate image around image axis with independently drawn axis-specific random rotation angle of +-{90, 180, 270} degrees.
- Parameters:
axes – List of strings from {‘x’,’y’,’z’} specifying the axes to rotate around. For 2D images, only [‘z’] is valid.
probability – Float in [0;1] defining the probability for the operation to be executed. Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RandomAxisRotationOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomAxisRotationOperation, axes: list[str] = [], probability: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RandomChoiceOperation(*args, **kwargs)
Bases:
Operation
Meta-operation that picks one operation from its configuration randomly and executes it. This is particularly useful for image samplers, where we might want to alternate between different ways of sampling the input images.
- Parameters:
operation_specs – List of operation Specs to configure the operations to be added.
operation_weights – Weights associated with each operation during the sampling process. A higher relative weight given to an operation means that this operation will be sampled more often.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RandomChoiceOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomChoiceOperation, operation_specs: list[imfusion.machinelearning.Operation.Specs] = [], operation_weights: list[float] = [], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomChoiceOperation, operation_specs: list[tuple[str, imfusion.Properties, imfusion.machinelearning.Phase]], operation_weights: list[float] = []) -> None
Meta-operation that picks one operation from its configuration randomly and executes it. This is particularly useful for image samplers, where we might want to alternate between different ways of sampling the input images.
- Parameters:
operation_specs – List of operation (name, Properties, Phase) tuples cast into Specs to configure the operations to be added.
operation_weights – Weights associated with each operation during the sampling process. A higher relative weight given to an operation means that this operation will be sampled more often.
__init__(self: imfusion.machinelearning.RandomChoiceOperation, operations: list[imfusion.machinelearning.Operation], operation_weights: list[float]) -> None
Meta-operation that picks one operation from its configuration randomly and executes it. This is particularly useful for image samplers, where we might want to alternate between different ways of sampling the input images.
- Parameters:
operations – List of operations to be added.
operation_weights – Weights associated with each operation during the sampling process. A higher relative weight given to an operation means that this operation will be sampled more often.
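As a hedged sketch of the overload above that takes constructed operations, one of two augmentations is drawn per item, with gamma correction weighted three times higher than inversion:
>>> from imfusion import machinelearning as ml
>>> choice = ml.RandomChoiceOperation(
...     operations=[ml.RandomGammaCorrectionOperation(random_range=0.3),
...                 ml.RandomInvertOperation(probability=1.0)],
...     operation_weights=[3.0, 1.0])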
- class imfusion.machinelearning.RandomCropAroundLabelMapOperation(*args, **kwargs)
Bases:
Operation
Crops the input image and label to the bounds of a random label value, and sets the label value to 1 and all other values to zero in the resulting label.
- Parameters:
margin – Margin, in pixels. Default: 1
reorder – Whether label value in result should be mapped to 1. Default: False
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
- Other parameters accepted by configure():
probability: 1.0
Overloaded function.
__init__(self: imfusion.machinelearning.RandomCropAroundLabelMapOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomCropAroundLabelMapOperation, margin: int = 1, reorder: bool = False, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RandomCropOperation(*args, **kwargs)
Bases:
Operation
Crop input images and label maps with a matching random size and offset.
- Parameters:
crop_range – List of floats from [0;1] specifying the maximum percentage of the dimension to crop. Default: [0.0, 0.0, 0.0]
probability – Float in [0;1] defining the probability for the operation to be executed. Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RandomCropOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomCropOperation, crop_range: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), probability: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RandomCutOutOperation(*args, **kwargs)
Bases:
Operation
Apply a random cutout to the image.
- Parameters:
cutout_size_lower – List of doubles specifying the lower bound of the cutout region size for each dimension in mm. Default: [0, 0, 0]
cutout_size_upper – List of doubles specifying the upper bound of the cutout region size for each dimension in mm. Default: [0, 0, 0]
cutout_value_range – List of floats specifying the minimum and maximum fill value for cutout regions. Default: [0, 0]
cutout_number_range – List of integers specifying the minimum and maximum number of cutout regions. Default: [0, 0]
cutout_size_units – Units of the cutout size. Default:
MM
probability – Float in [0;1] defining the probability for the operation to be executed.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RandomCutOutOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomCutOutOperation, cutout_size_lower: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), cutout_size_upper: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), cutout_value_range: numpy.ndarray[numpy.float32[2, 1]] = array([0., 0.], dtype=float32), cutout_number_range: numpy.ndarray[numpy.int32[2, 1]] = array([0, 0], dtype=int32), cutout_size_units: imfusion.machinelearning.ParamUnit = MM, probability: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
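An illustrative construction: between 1 and 3 cut-out boxes of 5-20 mm per side, filled with the default value range of [0, 0] (i.e. zeros); ParamUnit is the unit enum documented earlier in this module:
>>> from imfusion import machinelearning as ml
>>> cutout = ml.RandomCutOutOperation(
...     cutout_size_lower=[5.0, 5.0, 5.0], cutout_size_upper=[20.0, 20.0, 20.0],
...     cutout_number_range=[1, 3], cutout_size_units=ml.ParamUnit.MM, probability=0.8)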
- class imfusion.machinelearning.RandomDeformationOperation(*args, **kwargs)
Bases:
Operation
Apply a deformation to the image using a specified control point grid and random displacements
- Parameters:
num_subdivisions – list specifying the number of subdivisions for each dimension (the number of control points is subdivisions+1). For 2D images, the last component will be ignored. Default: [1, 1, 1]
max_abs_displacement – absolute value of the maximum possible displacement (mm). Default: 1
padding_mode – defines which type of padding is used in [“zero”, “clamp”, “mirror”]. Default:
ZERO
probability – probability of applying this Operation. Default: 1.0
adjust_size – configures whether the resulting image should adjust its size to encompass the deformation. Default: False
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RandomDeformationOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomDeformationOperation, num_subdivisions: numpy.ndarray[numpy.int32[3, 1]] = array([1, 1, 1], dtype=int32), max_abs_displacement: float = 1, padding_mode: imfusion.PaddingMode = <PaddingMode.ZERO: 0>, probability: float = 1.0, adjust_size: bool = False, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RandomGammaCorrectionOperation(*args, **kwargs)
Bases:
Operation
Apply a random gamma correction to the image intensities. Output = Unnormalize(pow(Normalize(Input), gamma)) where gamma is drawn uniformly in [1-random_range; 1+random_range].
- Parameters:
random_range – Range of the interval used to draw the gamma correction, typically in [0; 0.5]. Default: 0.2
probability – Float in [0;1] defining the probability for the operation to be executed. Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RandomGammaCorrectionOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomGammaCorrectionOperation, random_range: float = 0.2, probability: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
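For example (illustrative), with random_range=0.3 the exponent gamma is drawn uniformly from [0.7, 1.3] and applied to the normalized intensities as in the formula above:
>>> from imfusion import machinelearning as ml
>>> gamma_aug = ml.RandomGammaCorrectionOperation(random_range=0.3, probability=0.8, seed=123)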
- class imfusion.machinelearning.RandomImageFromLabelOperation(*args, **kwargs)
Bases:
Operation
Creates a random image from a label map; each label is sampled from a Gaussian distribution. The parameters of each Gaussian distribution (mean and standard deviation) are uniformly sampled within the provided intervals (mean_range and standard_dev_range, respectively).
- Parameters:
mean_range – Range of means for the intensities’ Gaussian distributions.
standard_dev_range – Range of standard deviations for the intensities’ Gaussian distributions.
output_field – Output field for the generated image.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RandomImageFromLabelOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomImageFromLabelOperation, mean_range: numpy.ndarray[numpy.float64[2, 1]], standard_dev_range: numpy.ndarray[numpy.float64[2, 1]], output_field: str = 'ImageFromLabel', *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RandomInvertOperation(*args, **kwargs)
Bases:
Operation
Invert the intensities of the image: \(\textnormal{output} = -\textnormal{input}\).
- Parameters:
probability – Float in [0;1] defining the probability for the operation to be executed. Default: 0.5
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RandomInvertOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomInvertOperation, probability: float = 0.5, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RandomKeypointJitterOperation(*args, **kwargs)
Bases:
Operation
Adds an individually and randomly sampled offset to each keypoint of each KeypointElement.
- Parameters:
offset_std_dev – standard deviation of the normal distribution used to sample the jitter in mm
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RandomKeypointJitterOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomKeypointJitterOperation, offset_std_dev: float, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RandomLinearIntensityMappingOperation(*args, **kwargs)
Bases:
Operation
Apply a random linear shift and scale to the image intensities: \(\textnormal{output} = \textnormal{factor}_\textnormal{random} * \textnormal{input} + \textnormal{bias}_\textnormal{random}\), where \(\textnormal{factor}_\textnormal{random}\) is drawn uniformly in \([1-\textnormal{random_range}, 1+\textnormal{random_range}]\) and \(\textnormal{bias}_\textnormal{random}\) is drawn uniformly in \([-\textnormal{random_range}*(\max(\textnormal{input})-\min(\textnormal{input})), \textnormal{random_range}*(\max(\textnormal{input})-\min(\textnormal{input}))]\).
- Parameters:
random_range – Perturbation amplitude, typically in [0.0, 1.0]. Default: 0.2
probability – Float in [0, 1] defining the probability for the operation to be executed. Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RandomLinearIntensityMappingOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomLinearIntensityMappingOperation, random_range: float = 0.2, probability: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RandomMRIBiasFieldGenerationOperation(*args, **kwargs)
Bases:
Operation
Apply or generate a random multiplicative intensity modulation field. If the output is a field, it is shifted as close to mean 1 as possible while remaining positive everywhere. If the output is not a field, the image intensity is shifted so that the mean intensity of the input image is preserved.
- Parameters:
center_beta_dist_params – Beta distribution parameters for sampling the relative center coordinate locations. Default: [0.0, 1.0]
field_amplitude_random_range – Amplitude of the field. Default: [0.2, 0.5]
length_scale_mm_random_range – Range of length scale of the distance kernel in mm. Default: [50.0, 400.0]
distance_scaling_random_range – Range of relative scaling of scanner space coordinates for anisotropic fields. Default: [0.5, 1.0]
invert_probability – Probability to invert the field (before normalization): field <- 2.0 - field. Default: 0.0
output_is_field – Produce field instead of corrupted image. Default: False
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
- Other parameters accepted by configure():
probability: 1.0
Overloaded function.
__init__(self: imfusion.machinelearning.RandomMRIBiasFieldGenerationOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomMRIBiasFieldGenerationOperation, center_beta_dist_params: numpy.ndarray[numpy.float64[2, 1]] = array([0., 1.]), field_amplitude_random_range: numpy.ndarray[numpy.float64[2, 1]] = array([0.2, 0.5]), length_scale_mm_random_range: numpy.ndarray[numpy.float64[2, 1]] = array([ 50., 400.]), distance_scaling_random_range: numpy.ndarray[numpy.float64[2, 1]] = array([0.5, 1. ]), invert_probability: float = 0.0, output_is_field: bool = False, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RandomPolyCropOperation(*args, **kwargs)
Bases:
Operation
Masks the image with a random convex polygon as described in Markova et al. 2022 (https://arxiv.org/abs/2205.03439). The convex polygon mask is constructed by sampling random planes; each plane splits the volume in two parts, and the part of the image that doesn’t contain the image center is discarded.
- Parameters:
number_range – Range of integers specifying the minimum and maximum number of cutting planes. Default: [5, 10]
min_radius – The minimum distance a cutting plane must have from the center (image coordinates are normalized to [-1, 1]). Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
- Other parameters accepted by configure():
probability: 1.0
Overloaded function.
__init__(self: imfusion.machinelearning.RandomPolyCropOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomPolyCropOperation, number_range: numpy.ndarray[numpy.int32[2, 1]] = array([ 5, 10], dtype=int32), min_radius: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RandomROISampler(*args, **kwargs)
Bases:
ImageROISampler
Sampler which randomly samples ROIs with a target size from the input image and label map. The images will be padded if the target size is larger than the input image.
- Parameters:
roi_size – Target size of the ROIs to be extracted as [Width, Height, Slices]
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
- Other parameters accepted by configure():
padding_mode: Properties.EnumStringParam(value=”clamp”, admitted_values={“clamp”, “mirror”, “zero”})
label_padding_mode: Properties.EnumStringParam(value=”clamp”, admitted_values={“clamp”, “mirror”, “zero”})
Overloaded function.
__init__(self: imfusion.machinelearning.RandomROISampler, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomROISampler, roi_size: numpy.ndarray[numpy.int32[3, 1]], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
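A short sketch (illustrative): sample reproducible 96x96x96 ROIs; inputs smaller than the ROI are padded according to the configure() defaults listed above:
>>> from imfusion import machinelearning as ml
>>> sampler = ml.RandomROISampler(roi_size=[96, 96, 96], seed=7)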
- class imfusion.machinelearning.RandomResolutionReductionOperation(*args, **kwargs)
Bases:
Operation
Downsamples the image to a target_spacing and upsamples again to the original spacing to reduce image information. The target_spacing is sampled uniformly and independently in each dimension between the corresponding image spacing and max_spacing.
- Parameters:
max_spacing – maximum spacing per dimension which the target spacing is randomly sampled from. Minimum sampling spacing is the maximum (over all frames of the image set) spacing per dimension of the input SharedImageSet. Default: [0.0, 0.0, 0.0]
probability – probability of applying this Operation. Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RandomResolutionReductionOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomResolutionReductionOperation, max_spacing: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), probability: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RandomRotationOperation(*args, **kwargs)
Bases:
Operation
Rotate input images and label maps with random angles.
- Parameters:
angles_range – List of floats specifying the upper bound (in degrees) of the range from which the rotation angles will be drawn uniformly. Only the third component should be non-zero for 2D images. Default: [0, 0, 0]
adjust_size – Increase image size to include the whole rotated image or keep current dimensions. Default: False
apply_now – Bake the transformation right away (otherwise, only the matrix is changed). Default: False
probability – Float in [0;1] defining the probability for the operation to be executed. Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RandomRotationOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomRotationOperation, angles_range: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), adjust_size: bool = False, apply_now: bool = False, probability: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RandomScalingOperation(*args, **kwargs)
Bases:
Operation
Scale input images and label maps with random factors.
- Parameters:
scales_range (vec3) – List of floats specifying the upper bound of the range from which the scaling offset will be sampled. The scaling factor will be drawn uniformly within [1-scale, 1+scale]. Scale should be between 0 and 1. Default: [0.5, 0.5, 0.5]
log_scales_range (vec3) – List of floats specifying the upper bound of the range from which the scaling factor will be drawn uniformly in log scale. The scaling will then be distributed within [1/log_scale, log_scale]. Default: [2., 2., 2.]
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
log_parameterization (bool) – If true, uses the log scales range parameterization, otherwise uses the scales range parameterization. Default: False
apply_now (bool) – Bake the transformation right away (otherwise, only the matrix is changed). Default: False
probability (float) – Float in [0;1] defining the probability for the operation to be executed. Default: 1.0
- Other parameters accepted by configure():
log_parametrization: False
Overloaded function.
__init__(self: imfusion.machinelearning.RandomScalingOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomScalingOperation, scales_range: numpy.ndarray[numpy.float64[3, 1]] = array([0.5, 0.5, 0.5]), log_scales_range: numpy.ndarray[numpy.float64[3, 1]] = array([2., 2., 2.]), log_parameterization: bool = False, apply_now: bool = False, probability: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
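For example (illustrative), enabling the log parameterization draws each per-axis scale factor log-uniformly from [1/1.5, 1.5]:
>>> from imfusion import machinelearning as ml
>>> scale = ml.RandomScalingOperation(log_scales_range=[1.5, 1.5, 1.5],
...                                   log_parameterization=True, probability=0.7)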
- class imfusion.machinelearning.RandomSmoothOperation(*args, **kwargs)
Bases:
Operation
Apply a random smoothing on the image (Gaussian kernel). The kernel can be parameterized either in pixel or in mm, and can be anisotropic. The half kernel size is distributed uniformly between half_kernel_bounds[0] and half_kernel_bounds[1]. \(\textnormal{image_output} = \textnormal{image} * \textnormal{gaussian_kernel}(\sigma)\) , with \(\sigma \sim U(\textnormal{half_kernel_bounds}[0], \textnormal{half_kernel_bounds}[1])\)
- Parameters:
half_kernel_bounds – Bounds for the half kernel size. The final kernel size is 2 times the sampled half kernel size plus one. Default: [1, 1]
kernel_size_in_mm – Interpret kernel size as mm. Otherwise uses pixels. Default: False
isotropic – Forces the randomly drawn kernel size to be isotropic. Default: True
probability – Value in [0.0; 1.0] indicating the probability of this operation to be performed. Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RandomSmoothOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomSmoothOperation, half_kernel_bounds: numpy.ndarray[numpy.float64[2, 1]] = array([1, 1], dtype=int32), kernel_size_in_mm: bool = False, isotropic: bool = True, probability: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RandomTemplateInpaintingOperation(*args, **kwargs)
Bases:
Operation
Inpaints a template into an image with randomly selected spatial and intensity transformation in a given range.
- Parameters:
template_paths – paths from which a template .imf file is randomly loaded.
rotation_range – rotation of template in degrees per axis randomly sampled from [-rotation_range, rotation_range]. Default: [0, 0, 0]
translation_range – translation of template in mm per axis randomly sampled from [-translation_range, translation_range]. Default: [0, 0, 0]
template_mult_factor_range – Multiply template intensities with a factor randomly sampled from this range. Default: [0.0, 0.0]
add_values_to_existing – Adding values to input image rather than replacing them. Default: False
probability – Float in [0;1] defining the probability for the operation to be executed. Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RandomTemplateInpaintingOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RandomTemplateInpaintingOperation, template_paths: list[str] = [], rotation_range: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), translation_range: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), template_mult_factor_range: numpy.ndarray[numpy.float64[2, 1]] = array([0., 0.]), add_values_to_existing: bool = False, probability: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RecombineMode(*args, **kwargs)
Bases:
pybind11_object
Members:
DEFAULT
WEIGHTED
Overloaded function.
__init__(self: imfusion.machinelearning.RecombineMode, value: int) -> None
__init__(self: imfusion.machinelearning.RecombineMode, arg0: str) -> None
- DEFAULT = <RecombineMode.DEFAULT: 0>
- WEIGHTED = <RecombineMode.WEIGHTED: 1>
- property name
- property value
- class imfusion.machinelearning.RecombinePatchesOperation(*args, **kwargs)
Bases:
Operation
Operation to recombine image patches back into a full image.
This operation is typically used in conjunction with SplitIntoPatchesOperation to reconstruct a full image from its patches after processing (e.g., after neural network inference).
The operation handles overlapping patches by averaging the overlapping regions. For each output pixel, the final value is computed as the weighted average of all patches that contain that pixel. The weighting mode is specified by the RecombineMode parameter.
It requires input images to have a PatchesFromImageDataComponent that stores the location of each patch in the original image. This component is automatically added by the SplitIntoPatchesOperation or by the SplitROISampler.
Two recombination modes are supported:
DEFAULT: Simple averaging of overlapping regions
WEIGHTED: Weighted averaging of overlapping regions (currently identical to DEFAULT)
Note: Both GPU and CPU computing devices are supported, via the ComputingDevice parameter in Operation.
Note: RecombineMode can be automatically converted from a string. This means you can directly pass a string like “default” or “weighted” to the mode parameter instead of using the enum values.
- Args:
mode: The mode for recombining the patches. Default: DEFAULT
device: Specifies whether this Operation should run on CPU or GPU.
seed: Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour: Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to: Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier: Unused for this operation as it is not invertible
- Other parameters accepted by configure():
device: Properties.EnumStringParam(value=”GPUIfOpenGl”, admitted_values={“ForceGPU”, “GPUIfOpenGl”, “GPUIfGlImage”, “ForceCPU”})
error_on_unexpected_behaviour: False
record_identifier:
mode: Properties.EnumStringParam(value=”Default”, admitted_values={“Weighted”, “Default”})
Overloaded function.
__init__(self: imfusion.machinelearning.RecombinePatchesOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RecombinePatchesOperation, mode: imfusion.machinelearning.RecombineMode = <RecombineMode.DEFAULT: 0>, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
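A hedged pipeline sketch: patches produced by SplitIntoPatchesOperation keep their PatchesFromImageDataComponent, so after per-patch inference they can be merged back with this operation (the string "default" is converted to RecombineMode.DEFAULT as per the note above):
>>> from imfusion import machinelearning as ml
>>> recombine = ml.RecombinePatchesOperation(mode="default")
>>> # the patches passed to this operation must still carry their PatchesFromImageDataComponent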
- class imfusion.machinelearning.RectifyRotationOperation(*args, **kwargs)
Bases:
Operation
Sets the image matrix to the closest xyz-axis aligned rotation, effectively making every rotation angle a multiple of 90 degrees. This is useful when the values of the rotation are unimportant but the axis flips need to be preserved. If used before BakeTransformationOperation, this operation will avoid oblique angles and a lot of zero padding.
- Parameters:
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RectifyRotationOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RectifyRotationOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RemoveMaskOperation(*args, **kwargs)
Bases:
Operation
Removes the mask of all input images.
- Parameters:
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RemoveMaskOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RemoveMaskOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RemoveOperation(*args, **kwargs)
Bases:
Operation
Removes a set of fields from a data item.
- Parameters:
apply_to – fields to remove from the data item (will initialize the underlying apply_to parameter)
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RemoveOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RemoveOperation, source: set[str], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RenameOperation(*args, **kwargs)
Bases:
Operation
Renames a set of fields of a data item.
- Parameters:
source – list of the elements to be replaced
target – list of names of the new elements (must match the size of source)
throw_error_on_missing_source – if source field is missing, then throw an error (otherwise warn about unexpected behavior and do nothing). Default: True
throw_error_on_existing_target – if target field already exists, then throw an error (otherwise warn about unexpected behavior and overwrite it). Default: True
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RenameOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RenameOperation, source: list[str], target: list[str], throw_error_on_missing_source: bool = True, throw_error_on_existing_target: bool = True, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.ReplaceLabelsValuesOperation(*args, **kwargs)
Bases:
Operation
Replace some label values with other values (only works for integer-typed labels).
For convenience purposes, a default value can be set, in which case all not explicitly defined non-zero input values will be assigned this value.
- Parameters:
old_values – List of integer values to be replaced. All values that are not in this list will remain unchanged.
new_values – List of integer values to replace old_values. It must have the same size as old_values, since there should be a one-to-one mapping.
update_labelsdatacomponent – Replaces the old values in the LabelsDataComponent with the mapped ones. Default: True
default_value – If set, this value will be assigned to all non-zero labels that have not been explicitly assigned. Default: None
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ReplaceLabelsValuesOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ReplaceLabelsValuesOperation, old_values: list[int], new_values: list[int], update_labelsdatacomponent: bool = True, default_value: Optional[int] = None, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
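For illustration, a minimal sketch of the documented mapping semantics: the construction call uses the overload above, while the NumPy part only mimics the behavior on an integer label map and does not invoke the operation itself (array values, mapping and default_value are illustrative):
>>> import numpy as np
>>> import imfusion
>>> op = imfusion.machinelearning.ReplaceLabelsValuesOperation(
...     old_values=[1, 3], new_values=[2, 1], default_value=0)
>>> labels = np.array([0, 1, 2, 3, 4], dtype=np.uint8)
>>> mapping = {1: 2, 3: 1}  # old_values -> new_values
>>> # non-zero values without an explicit mapping receive default_value (here 0)
>>> np.array([mapping.get(int(v), 0) if v else 0 for v in labels], dtype=np.uint8)
array([0, 2, 0, 1, 0], dtype=uint8)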
- class imfusion.machinelearning.ResampleDimsOperation(*args, **kwargs)
Bases:
Operation
Resample the input to fixed target dimensions.
- Parameters:
target_dims – Target dimensions in pixels as [Width, Height, Slices].
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ResampleDimsOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ResampleDimsOperation, target_dims: numpy.ndarray[numpy.int32[3, 1]], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
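A minimal construction sketch; the overload above expects a 3-vector of int32, and the dimensions below are arbitrary example values:
>>> import numpy as np
>>> import imfusion
>>> op = imfusion.machinelearning.ResampleDimsOperation(
...     target_dims=np.array([128, 128, 64], dtype=np.int32))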
- class imfusion.machinelearning.ResampleKeepingAspectRatioOperation(*args, **kwargs)
Bases:
Operation
Resample input to target dimensions while keeping aspect ratio of original images. The target dimensions are specified by either:
one target dimension, e.g. target_dim_x: 128. In this case the resampling keeps the aspect ratio of dimensions y and z with respect to x.
two target dimensions, e.g. target_dim_x: 128 and target_dim_y: 128, together with the dimension to consider for preserving the aspect ratio of the remaining dimension, e.g. keep_aspect_ratio_wrt: x.
- Parameters:
keep_aspect_ratio_wrt – specifies the dimension to which the aspect ratio is locked; must be one of "", "x", "y" or "z". If only one target_dim is specified, this can be empty (or must match the given target_dim). If all target_dim arguments are specified, this argument must be empty; in that case, ResampleDimsOperation should be preferred.
target_dim_x – either the target width or None if this dimension will be computed automatically by preserving the aspect ratio.
target_dim_y – either the target height or None if this dimension will be computed automatically by preserving the aspect ratio.
target_dim_z – for 3D images, either the target slices or None if this dimension will be computed automatically by preserving the aspect ratio.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ResampleKeepingAspectRatioOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ResampleKeepingAspectRatioOperation, keep_aspect_ratio_wrt: str = '', target_dim_x: Optional[int] = 1, target_dim_y: Optional[int] = 1, target_dim_z: Optional[int] = 1, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
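As a worked example of the behavior described above (values are illustrative): a 512x256 2D image resampled with target_dim_x=128 keeps the y-to-x ratio, so the output height becomes 256 * 128 / 512 = 64. A corresponding construction sketch using the documented overload:
>>> import imfusion
>>> op = imfusion.machinelearning.ResampleKeepingAspectRatioOperation(
...     keep_aspect_ratio_wrt='x', target_dim_x=128,
...     target_dim_y=None, target_dim_z=None)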
- class imfusion.machinelearning.ResampleOperation(*args, **kwargs)
Bases:
Operation
Resample the input to a fixed target resolution.
- Parameters:
resolution – Target spacing in mm.
preserve_extent – Preserve the exact spatial extent of the image, adjusting the output spacing resolution accordingly (since the extent is not always a multiple of resolution). Default: True
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ResampleOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ResampleOperation, resolution: numpy.ndarray[numpy.float64[3, 1]], preserve_extent: bool = True, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
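A minimal construction sketch for a 1.5 mm isotropic target resolution, using the documented overload (values are illustrative):
>>> import numpy as np
>>> import imfusion
>>> op = imfusion.machinelearning.ResampleOperation(
...     resolution=np.array([1.5, 1.5, 1.5]), preserve_extent=True)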
- class imfusion.machinelearning.ResampleToInputOperation(*args, **kwargs)
Bases:
Operation
Resample the input image with respect to the image in ReferenceImageDataComponent.
- Parameters:
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ResampleToInputOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ResampleToInputOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.ResetCriterion(*args, **kwargs)
Bases:
pybind11_object
Members:
Fixed
SmallestLoader
LargestLoader
Overloaded function.
__init__(self: imfusion.machinelearning.ResetCriterion, value: int) -> None
__init__(self: imfusion.machinelearning.ResetCriterion, arg0: str) -> None
- Fixed = <ResetCriterion.Fixed: 0>
- LargestLoader = <ResetCriterion.LargestLoader: 2>
- SmallestLoader = <ResetCriterion.SmallestLoader: 1>
- property name
- property value
- class imfusion.machinelearning.ResolutionReductionOperation(*args, **kwargs)
Bases:
Operation
Downsamples the image to the target_spacing and upsamples again to the original spacing to reduce image information.
- Parameters:
target_spacing – spacing per dimension to which the image is resampled before it is resampled back
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ResolutionReductionOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ResolutionReductionOperation, target_spacing: numpy.ndarray[numpy.float64[3, 1]], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.RotationOperation(*args, **kwargs)
Bases:
Operation
Rotate input images and label maps with fixed angles.
- Parameters:
angles – Rotation angles in degrees. Only the third component should be non-zero for 2D images. Default: [0, 0, 0]
adjust_size – Increase image size to include the whole rotated image or keep current dimensions. Default: False
apply_now – Bake transformation right away (otherwise, just changes the matrix). Default: False
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RotationOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RotationOperation, angles: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), adjust_size: bool = False, apply_now: bool = False, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
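A minimal construction sketch for an in-plane rotation of a 2D image by 90 degrees (only the third angle component is non-zero, as noted above; values are illustrative):
>>> import numpy as np
>>> import imfusion
>>> op = imfusion.machinelearning.RotationOperation(
...     angles=np.array([0.0, 0.0, 90.0]), adjust_size=True, apply_now=True)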
- class imfusion.machinelearning.RunModelOperation(*args, **kwargs)
Bases:
Operation
Run a machine learning model on the input item and merge the prediction into the input item. The input field names specified in the model config YAML are used to determine on which fields of the input data item the model is run. If the model doesn't specify any input field (i.e. it is a single-input model), the user can either provide an input data item with a single image element, or use apply_to to specify on which field the model should be run. The input item will be populated with the model prediction. The field names are those specified in the model configuration. If no output name is specified (i.e. the single-output case), the prediction will be associated with the field "Prediction".
- Parameters:
config_path – path to the YAML configuration file of the pixelwise model
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.RunModelOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.RunModelOperation, config_path: str, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
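A minimal construction sketch; the YAML path and the field name passed to apply_to are hypothetical, and apply_to is only needed when the model configuration does not name its input fields:
>>> import imfusion
>>> op = imfusion.machinelearning.RunModelOperation(
...     config_path='my_model/config.yaml',  # hypothetical path
...     apply_to=['Image'])                  # hypothetical field name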
- class imfusion.machinelearning.SISBasedElement
Bases:
DataElement
- to_sis(self: SISBasedElement) SharedImageSet
- property sis
Access to the underlying SharedImageSet.
- class imfusion.machinelearning.ScalingOperation(*args, **kwargs)
Bases:
Operation
Scale input images and label maps with fixed factors.
- Parameters:
scales – Scaling factor applied to each dimension. Default: [1, 1, 1]
apply_now – Bake transformation right away (otherwise, just changes the matrix). Default: True
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ScalingOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ScalingOperation, scales: numpy.ndarray[numpy.float64[3, 1]] = array([1., 1., 1.]), apply_now: bool = True, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.SelectChannelsOperation(*args, **kwargs)
Bases:
Operation
Keeps a subset of the input channels specified by the selected channel indices (0-based indexing).
- Parameters:
selected_channels – List of channels to be selected in input. If empty, use all channels.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.SelectChannelsOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.SelectChannelsOperation, selected_channels: list[int], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
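A minimal construction sketch keeping only the first and third channels (0-based indices; the selection is illustrative):
>>> import imfusion
>>> op = imfusion.machinelearning.SelectChannelsOperation(selected_channels=[0, 2])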
- class imfusion.machinelearning.SetLabelModalityOperation(*args, **kwargs)
Bases:
Operation
Sets the input modality. If the target modality is LABEL, warns and skips fields that are not unsigned 8-bit integer. The default processing policy is to apply to targets only.
- Parameters:
label_names – List of non-background label names. The label with index zero is assigned the name ‘Background’.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
- Other parameters accepted by configure():
modality: 8
Overloaded function.
__init__(self: imfusion.machinelearning.SetLabelModalityOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.SetLabelModalityOperation, label_names: list[str], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.SetMatrixToIdentityOperation(*args, **kwargs)
Bases:
Operation
Set the matrices of all images to identity (associated landmarks and boxes will be moved accordingly).
- Parameters:
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.SetMatrixToIdentityOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.SetMatrixToIdentityOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.SetModalityOperation(*args, **kwargs)
Bases:
Operation
Sets the input modality. If the target modality is LABEL, warns and skips fields that are not unsigned 8-bit integer. The default processing policy is to apply to all fields.
- Parameters:
modality – Modality to set the input to.
label_names – List of non-background label names. The label with index zero is assigned the name ‘Background’. Default: [].
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.SetModalityOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.SetModalityOperation, modality: imfusion.Data.Modality, label_names: list[str] = [], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
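A minimal construction sketch that tags an input as a label map with two foreground classes; it assumes the Modality enum exposes a LABEL member as referenced above, and the label names are illustrative:
>>> import imfusion
>>> op = imfusion.machinelearning.SetModalityOperation(
...     modality=imfusion.Data.Modality.LABEL,  # assumed enum member
...     label_names=['Liver', 'Kidney'])        # illustrative names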
- class imfusion.machinelearning.SetSpacingOperation(*args, **kwargs)
Bases:
Operation
Modify images so that image elements have specified spacing (associated landmarks and boxes will be moved accordingly).
- Parameters:
spacing – Target spacing.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.SetSpacingOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.SetSpacingOperation, spacing: numpy.ndarray[numpy.float64[3, 1]] = array([1., 1., 1.]), *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.SigmoidOperation(*args, **kwargs)
Bases:
Operation
Apply a sigmoid function on the input image. \(\textnormal{output} = 1.0/(1.0 + \exp(- \textnormal{scale} * \textnormal{input}))\)
- Parameters:
scale – Scale parameter within the sigmoid function. Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.SigmoidOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.SigmoidOperation, scale: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
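The formula above is easy to verify with plain NumPy; this sketch reproduces the mapping for scale = 2.0 and does not invoke the operation itself:
>>> import numpy as np
>>> scale = 2.0
>>> x = np.array([-1.0, 0.0, 1.0])
>>> 1.0 / (1.0 + np.exp(-scale * x))
array([0.11920292, 0.5       , 0.88079708])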
- class imfusion.machinelearning.SmoothOperation(*args, **kwargs)
Bases:
Operation
Run a convolution with a Gaussian kernel on the input image. The kernel can be parameterized either in pixel or in mm, and can be anisotropic.
- Parameters:
half_kernel_size – Half size of the convolution kernel in pixels or mm.
kernel_size_in_mm – Interpret kernel size as mm. Otherwise uses pixels. Default: False
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.SmoothOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.SmoothOperation, half_kernel_size: numpy.ndarray[numpy.float64[3, 1]], kernel_size_in_mm: bool = False, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.SoftmaxOperation(*args, **kwargs)
Bases:
Operation
Computes channel-wise softmax on input.
- Parameters:
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.SoftmaxOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.SoftmaxOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.SplitIntoPatchesOperation(*args, **kwargs)
Bases:
Operation
Operation which splits the input image into overlapping patches for sliding window inference.
The step size is used to compute valid patch positions that cover the full input image. The sampling behavior is controlled by the patch_step_size parameter, which is used to compute the patch offsets in the input image as a fraction of the specified patch size.
- Parameters:
patch_size – Target size of the patches to be extracted as [Width, Height, Slices].
patch_step_size –
Controls the step size between patches as a fraction of the patch size. Range [0, 1]. In cases where the input image is a multiple of the patch size, a step size of 1.0 means no overlapping patches, while a lower step size means a higher number of overlapping patches.
Example: If the input image is 100x100 and the patch size is 50x50, a patch_step_size of 1.0 will result in a 2x2 grid of non-overlapping patches, while a patch_step_size of 0.5 will result in a 3x3 grid of patches, with a 25 pixel overlap between adjacent patches.
In cases where the input image is not a multiple of the ROI size, a step size of 1.0 indicates the optimal way of splitting the image in the least possible number of patches. Example: If the input image is 100x100 and the ROI size is 40x40, a patch_step_size of 1.0 will result in a 3x3 grid of patches, with a 10 pixel overlap between adjacent patches. Conversely, a patch_step_size of 0.5 will result in a 4x4 grid of patches, with a 20 pixel overlap between adjacent patches.
padding_mode – Specifies the padding mode used when the input image is smaller than the specified patch size. In this case, the image is padded to the patch size with the specified padding mode.
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Note: This operation uses the SplitROISampler internally.
Overloaded function.
__init__(self: imfusion.machinelearning.SplitIntoPatchesOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.SplitIntoPatchesOperation, patch_size: numpy.ndarray[numpy.int32[3, 1]] = array([1, 1, 1], dtype=int32), patch_step_size: float = 0.8, padding_mode: imfusion.PaddingMode = <PaddingMode.MIRROR: 1>, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
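A minimal sketch combining the documented construction overload with the grid arithmetic from the example above (the patch size is illustrative, and the count formula is one plausible way to reproduce the documented patch counts):
>>> import numpy as np
>>> import imfusion
>>> op = imfusion.machinelearning.SplitIntoPatchesOperation(
...     patch_size=np.array([50, 50, 1], dtype=np.int32),
...     patch_step_size=0.5,
...     padding_mode=imfusion.PaddingMode.MIRROR)
>>> # 100x100 image, 50x50 patches, step 0.5 -> 3x3 grid (as in the example above)
>>> image, patch, step = 100, 50, 0.5
>>> int(np.ceil((image - patch) / (patch * step))) + 1
3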
- class imfusion.machinelearning.SplitROISampler(*args, **kwargs)
Bases:
ImageROISampler
Sampler which splits the input image into overlapping ROIs for sliding window inference. This sampler mimics the situation at test-time, when one image needs to be processed in regularly spaced patches.
The step size is used to compute valid ROI positions that cover the full input image. The sampling behavior is controlled by the patch_step_size parameter, which is used to compute the ROIs offsets in the input image as a fraction of the specified ROI size.
- Parameters:
roi_size – Target size of the ROIs to be extracted as [Width, Height, Slices].
patch_step_size – Controls the step size between ROIs as a fraction of the ROI size. Range [0, 1]. In cases where the input image is a multiple of the ROI size, a step size of 1.0 means no overlapping patches, while a lower step size means a higher number of overlapping patches. Example: If the input image is 100x100 and the ROI size is 50x50, a patch_step_size of 1.0 will result in a 2x2 grid of non-overlapping patches, while a patch_step_size of 0.5 will result in a 3x3 grid of patches, with a 25 pixel overlap between adjacent patches. In cases where the input image is not a multiple of the ROI size, a step size of 1.0 indicates the optimal way of splitting the image in the least possible number of patches. Example: If the input image is 100x100 and the ROI size is 40x40, a patch_step_size of 1.0 will result in a 3x3 grid of patches, with a 10 pixel overlap between adjacent patches. Conversely, a patch_step_size of 0.5 will result in a 4x4 grid of patches, with a 20 pixel overlap between adjacent patches.
extract_all_patches (bool) – When true, returns all overlapping patches according to the step size. When false, returns a single randomly selected patch from all possible positions.
allow_dimension_change (bool) – If True, allow padding dimensions equal to 1. This results in changing image dimension e.g. from 2D to 3D. Default: True
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
- Other parameters accepted by configure():
padding_mode: Properties.EnumStringParam(value="clamp", admitted_values={"clamp", "mirror", "zero"})
label_padding_mode: Properties.EnumStringParam(value="clamp", admitted_values={"clamp", "mirror", "zero"})
patch_step_size: 0.8
Overloaded function.
__init__(self: imfusion.machinelearning.SplitROISampler, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.SplitROISampler, roi_size: numpy.ndarray[numpy.int32[3, 1]], patch_step_size: float = 0.8, extract_all_patches: bool = False, allow_dimension_change: bool = True, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.StandardizeImageAxesOperation(*args, **kwargs)
Bases:
Operation
Reorganize the memory buffer of a medical image to ensure anatomical consistency. This operation rearranges the axes and orientation of the input image to align with right-handed anatomical coordinate systems.
The coordinate system is specified as a 3-character string where:
- 1st character: L (Left, +x) or R (Right, -x)
- 2nd character: P (Posterior, +y) or A (Anterior, -y)
- 3rd character: S (Superior, +z) or I (Inferior, -z)
Supported right-handed coordinate systems:
- LPS: Left-Posterior-Superior (DICOM standard) - {+1, +1, +1}
- RAS: Right-Anterior-Superior (neuroimaging) - {-1, -1, +1}
- LAI: Left-Anterior-Inferior - {+1, -1, -1}
- RPI: Right-Posterior-Inferior - {-1, +1, -1}
The operation uses the rotation matrix of the image and modifies it so that only a non-axis aligned rotation remains.
Note that this operation only re-arranges internal representations but does not modify the actual spatial position and orientation of the image (as opposed to RectifyRotationOperation). This operation differs from BakeTransformationOperation because it only applies axis-based rotations or flips and therefore does not do any kind of interpolation. Unlike BakeTransformationOperation, a residual rotation might remain in the matrix of the output image.
- Parameters:
coordinate_system – Target coordinate system (3-character string). Default: “LPS”
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.StandardizeImageAxesOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.StandardizeImageAxesOperation, coordinate_system: str = 'LPS', *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.SurfaceDistancesMetric(self: SurfaceDistancesMetric, symmetric: bool = True, crop_margin: int = -1)
Bases:
Metric
- class Results(self: Results)
Bases:
pybind11_object
- property all_distances
- property max_absolute_distance
- property mean_absolute_distance
- property mean_signed_distance
- compute_distances(self: SurfaceDistancesMetric, prediction: SharedImageSet, target: SharedImageSet) list[dict[int, Results]]
- class imfusion.machinelearning.SwapImageAndLabelsOperation(*args, **kwargs)
Bases:
Operation
Swaps image and label map.
- Parameters:
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.SwapImageAndLabelsOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.SwapImageAndLabelsOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.SyncOperation(*args, **kwargs)
Bases:
Operation
Synchronizes shared memory (CPU <-> OpenGL) of images.
- Parameters:
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.SyncOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.SyncOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.TanhOperation(*args, **kwargs)
Bases:
Operation
Apply a tanh function on the input image. \(\textnormal{output} = \tanh(\textnormal{scale} * \textnormal{input})\)
- Parameters:
scale – Scale parameter within the tanh function. Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.TanhOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.TanhOperation, scale: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.TargetTag(self: TargetTag)
Bases:
DataComponentBase
- class imfusion.machinelearning.TemplateInpaintingOperation(*args, **kwargs)
Bases:
Operation
Inpaints a template into an image with specified spatial and intensity transformation.
- Parameters:
template_path – path to load template .imf file.
template_rotation – rotation of template in degrees per axis. Default: [0, 0, 0]
template_translation – translation of template per axis. Default: [0, 0, 0]
add_values_to_existing – Adding values to input image rather than replacing them. Default: False
template_mult_factor – Multiply template intensities with this factor. Default: 1.0
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.TemplateInpaintingOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.TemplateInpaintingOperation, template_path: str = '', template_rotation: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), template_translation: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), add_values_to_existing: bool = False, template_mult_factor: float = 1.0, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.Tensor(self: Tensor, tensor: Buffer)
Bases:
pybind11_object
Class for managing raw Tensors
This class is meant to have direct control over tensors either passed to, or received from a MachineLearningModel. Unlike the SISBasedElements, there is no inherent stacking/permuting of tensors, and there are no constraints on the order of the Tensor.
Note
The API for this class is experimental and may change soon.
Create an ml.Tensor from a numpy array.
- property shape
Return shape of tensor.
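Since the constructor takes any object supporting the buffer protocol, a NumPy array can be passed directly; a minimal sketch (shape values are illustrative):
>>> import numpy as np
>>> import imfusion
>>> t = imfusion.machinelearning.Tensor(np.zeros((1, 3, 64, 64), dtype=np.float32))
>>> t.shape  # shape of the wrapped buffer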
- class imfusion.machinelearning.TensorSet(self: TensorSet, tensors: list[Tensor] = [])
Bases:
Data
Class for managing TensorSets
This class is meant to have direct control over tensors either passed to, or received from a MachineLearningModel. Unlike the SISBasedElements, there is no inherent stacking/permuting of tensors, and there are no constraints on the order of the Tensor.
Note
The API for this class is experimental and may change soon.
Initialize a TensorSet
- Parameters:
tensors – Set of tensors to initialize the TensorSet with.
- class imfusion.machinelearning.TensorSetElement(*args, **kwargs)
Bases:
DataElement
Class for managing raw Tensorsets
This class is meant to have direct control over tensors either passed to, or received from a MachineLearningModel. Unlike the SISBasedElements, there is no inherent stacking/permuting of tensors, and there are no constraints on the order of the Tensor.
Note
The API for this class is experimental and may change soon.
Overloaded function.
__init__(self: imfusion.machinelearning.TensorSetElement, tensorset: imfusion.machinelearning.TensorSet) -> None
Initialize a TensorSetElement from a TensorSet.
- Parameters:
tensorset (imfusion.TensorSet) –
__init__(self: imfusion.machinelearning.TensorSetElement, tensor: imfusion.machinelearning.Tensor) -> None
Initialize a TensorSetElement from a Tensor.
- Parameters:
tensor (imfusion.Tensor) –
- tensor(self: TensorSetElement, index: int = 0) Tensor
Access the tensor at a certain index.
- Parameters:
index (int) –
- property tensorset
Access to the underlying TensorSet.
- class imfusion.machinelearning.ThresholdOperation(*args, **kwargs)
Bases:
Operation
Threshold the input image to a binary map with only 0 or 1 values.
- Parameters:
value – Threshold value (strictly) above which the pixel will be set to 1. Default: 0.0
to_ubyte – Output image must be unsigned byte instead of float. Default: False
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.ThresholdOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.ThresholdOperation, value: float = 0.0, to_ubyte: bool = False, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
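The strict-threshold semantics described above, illustrated with plain NumPy (this does not invoke the operation itself; values are illustrative):
>>> import numpy as np
>>> value = 0.5
>>> x = np.array([0.2, 0.5, 0.7])
>>> (x > value).astype(np.float32)  # only values strictly above the threshold map to 1
array([0., 0., 1.], dtype=float32)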
- class imfusion.machinelearning.UndoPaddingOperation(*args, **kwargs)
Bases:
Operation
Apply the inverse of a previously applied padding operation. This operation requires the input to have an InversionComponent containing padding information. The padding information must have been previously stored with a matching record identifier during the padding operation.
Note: Both GPU and CPU implementations are provided.
Note: If no InversionComponent is present, or no matching record identifier is found, the operation will return the input unchanged and warn about unexpected behavior. The operation throws an error if the number of images has changed since the padding was applied.
- Parameters:
target_identifier – The identifier of the operation to undo. Default: “”
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.UndoPaddingOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.UndoPaddingOperation, target_identifier: str = '', *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.UnmarkAsTargetOperation(*args, **kwargs)
Bases:
Operation
Unmark elements from the input data item as learning "target". This operation is the opposite of MarkAsTargetOperation.
- Parameters:
apply_to – fields to unmark as targets (will initialize the underlying apply_to parameter)
device – Specifies whether this Operation should run on CPU or GPU.
seed – Specifies seeding for any randomness that might be contained in this operation.
error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behavior.
record_identifier – Unused for this operation as it is not invertible
Overloaded function.
__init__(self: imfusion.machinelearning.UnmarkAsTargetOperation, *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
__init__(self: imfusion.machinelearning.UnmarkAsTargetOperation, apply_to: list[str], *, device: Optional[imfusion.machinelearning.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
- class imfusion.machinelearning.VectorElement(self: VectorElement, vectors: SharedImageSet)
Bases:
SISBasedElement
Initialize a VectorElement from a SharedImageSet.
- Parameters:
image (SharedImageSet) – image to be converted to a VectorElement
- from_torch()
- imfusion.machinelearning.available_cpp_engines() list[str]
Returns the list of registered C++ engines available for usage in MachineLearningModel.
- imfusion.machinelearning.available_engines() list[str]
Returns the list of all registered engines available for usage in MachineLearningModel.
- imfusion.machinelearning.available_py_engines() list[str]
Returns the list of registered Python engines available for usage in MachineLearningModel.
- imfusion.machinelearning.is_semantic_segmentation_map(sis: SharedImageSet) bool
- imfusion.machinelearning.is_target(sis: SharedImageSet) bool
- imfusion.machinelearning.propertylist_to_data_loader_specs(properties: list[Properties]) list[DataLoaderSpecs]
Parse a properties object into a vector of DataLoaderSpecs.
- imfusion.machinelearning.register_filter_func(arg0: str, arg1: Callable[[DataItem], bool]) None
Register user-defined function to be used in Dataset.filter decorator function
- imfusion.machinelearning.register_map_func(arg0: str, arg1: Callable[[DataItem], None]) None
Register user-defined function to be used in Dataset.map decorator function
- imfusion.machinelearning.tag_as_target(sis: SharedImageSet) None
- imfusion.machinelearning.to_torch(self: DataElement | SharedImageSet | SharedImage, device: device = None, dtype: dtype = None, same_as: Tensor = None) Tensor
Convert SharedImageSet or a SharedImage to a torch.Tensor.
- Parameters:
self (DataElement | SharedImageSet | SharedImage) – Instance of SharedImageSet or SharedImage (this function is bound as a method to SharedImageSet and SharedImage)
device (device) – Target device for the new torch.Tensor
dtype (dtype) – Type of the new torch.Tensor
same_as (Tensor) – Template tensor whose device and dtype configuration should be matched. device and dtype are still applied afterwards.
- Returns:
New torch.Tensor
- Return type:
torch.Tensor
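A minimal usage sketch, assuming an existing SharedImageSet named sis and an available PyTorch installation (to_torch is bound as a method as described above):
>>> import torch
>>> tensor = sis.to_torch(device=torch.device('cpu'), dtype=torch.float32)  # sis: existing SharedImageSet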
- imfusion.machinelearning.untag_as_target(sis: SharedImageSet) None
- imfusion.machinelearning.update_model_configuration(input_path: str | pathlib.Path, output_path: str | pathlib.Path | None = None, verbose: bool = False, default_prediction_output: imfusion.machinelearning.PredictionOutput = <PredictionOutput.UNKNOWN: -1>) None
Update an ImFusion ML model configuration file to the latest version.
This function loads a configuration file, upgrades it to the latest version format, and saves it to the specified output path. If no output path is provided, the input file will be overwritten.
- Parameters:
input_path (str | Path) – Path to the input YAML configuration file
output_path (str | Path | None) – Path for the output YAML configuration file. If None, the input file will be overwritten
verbose (bool) – If True, print detailed information about the upgrade process
default_prediction_output (PredictionOutput) – Default prediction output type to use when not specified in the configuration file. This can happen in legacy configurations.
- Return type:
None
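A minimal usage sketch with hypothetical file names; omitting output_path would overwrite the input file in place:
>>> import imfusion
>>> imfusion.machinelearning.update_model_configuration(
...     'old_config.yaml', output_path='upgraded_config.yaml', verbose=True)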
imfusion.mesh
Submodules containing routines for pre- and post-processing meshes.
- class imfusion.mesh.PointDistanceResult(self: PointDistanceResult, mean_distance: float, median_distance: float, standard_deviation: float, min_distance: float, max_distance: float, distances: ndarray[numpy.float64[m, 1]])
Bases:
pybind11_object
- property distances
- property max_distance
- property mean_distance
- property median_distance
- property min_distance
- property standard_deviation
- class imfusion.mesh.Primitive(self: Primitive, value: int)
Bases:
pybind11_object
Enumeration of supported mesh primitives.
Members:
SPHERE
CYLINDER
PYRAMID
CUBE
ICOSAHEDRON_SPHERE
CONE
GRID
- CONE = <Primitive.CONE: 5>
- CUBE = <Primitive.CUBE: 3>
- CYLINDER = <Primitive.CYLINDER: 1>
- GRID = <Primitive.GRID: 6>
- ICOSAHEDRON_SPHERE = <Primitive.ICOSAHEDRON_SPHERE: 4>
- PYRAMID = <Primitive.PYRAMID: 2>
- SPHERE = <Primitive.SPHERE: 0>
- property name
- property value
- imfusion.mesh.create(shape: Primitive) Mesh
Create a mesh primitive.
- Args:
shape: The shape of the primitive to create.
- imfusion.mesh.point_distance(*args, **kwargs)
Overloaded function.
point_distance(target: imfusion.Mesh, source: imfusion.Mesh, signed_distance: bool = False, range_of_interest: Optional[tuple[int, int]] = None) -> imfusion.mesh.PointDistanceResult
Compute point-wise distances between:
1. source mesh vertices and the target mesh surface,
2. a source point cloud and the target mesh surface,
3. source mesh vertices and target point cloud vertices,
4. a source point cloud and a target point cloud.
- Args:
target: Target data, defining the locations to estimate the distance to.
source: Source data, defining the locations to estimate the distance from.
signed_distance: Whether to compute signed distances (applicable to meshes only). Defaults to False.
range_of_interest: Optional range of distances to consider (min, max) in percentage (integer-valued). Distances outside of this range will be set to NaN. Statistics are computed only over non-NaN distances. Defaults to None.
- Returns:
A PointDistanceResult object containing the computed statistics and distances.
point_distance(target: imfusion.PointCloud, source: imfusion.Mesh, signed_distance: bool = False, range_of_interest: Optional[tuple[int, int]] = None) -> imfusion.mesh.PointDistanceResult
point_distance(target: imfusion.PointCloud, source: imfusion.PointCloud, signed_distance: bool = False, range_of_interest: Optional[tuple[int, int]] = None) -> imfusion.mesh.PointDistanceResult
point_distance(target: imfusion.Mesh, source: imfusion.PointCloud, signed_distance: bool = False, range_of_interest: Optional[tuple[int, int]] = None) -> imfusion.mesh.PointDistanceResult
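A hedged example computing mesh-to-mesh distances (assumes target_mesh and source_mesh are previously loaded imfusion.Mesh objects):
>>> result = imfusion.mesh.point_distance(target_mesh, source_mesh, signed_distance=True)
>>> print(result.mean_distance, result.max_distance)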
imfusion.registration
This module contains functionality for all kinds of registration tasks. You can find a demonstration of how to perform image registration on our GitHub.
- class imfusion.registration.AbstractImageRegistration
Bases:
BaseAlgorithm
- class imfusion.registration.DescriptorsRegistrationAlgorithm(self: DescriptorsRegistrationAlgorithm, arg0: SharedImageSet, arg1: SharedImageSet)
Bases:
pybind11_object
Class for performing image registration using local feature descriptors.
This algorithm performs the following steps:
1) Preprocess the fixed and moving images to prepare them for feature extraction. This consists of resampling to spacing and baking-in the rotation.
2) Extract feature descriptors using either DISAFeaturesAlgorithm or MINDDescriptorAlgorithm depending on descriptor_type.
3) Compute the weight for the moving image features.
4) Instantiate and use FeatureMapsRegistrationAlgorithm to register the feature descriptor images. The computed registration is then applied to the moving image.
- class DescriptorType(self: DescriptorType, value: int)
Bases:
pybind11_object
Members:
DISA : Use the DISA descriptors defined in the paper “DISA: DIfferentiable Similarity Approximation for Universal Multimodal Registration”, Ronchetti et al. 2023
MIND
- DISA = <DescriptorType.DISA: 0>
- MIND = <DescriptorType.MIND: 1>
- property name
- property value
- globalRegistration(self: DescriptorsRegistrationAlgorithm) None
- heatmap(self: DescriptorsRegistrationAlgorithm, point: ndarray[numpy.float64[3, 1]]) SharedImageSet
- initialize_pose(self: DescriptorsRegistrationAlgorithm) None
- localRegistration(self: DescriptorsRegistrationAlgorithm) None
- processed_fixed(self: DescriptorsRegistrationAlgorithm) SharedImageSet
- processed_moving(self: DescriptorsRegistrationAlgorithm) SharedImageSet
- DISA = <DescriptorType.DISA: 0>
- MIND = <DescriptorType.MIND: 1>
- property registration_algorithm
- property spacing
- property type
- property weight
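A possible workflow sketch (assumes fixed and moving are SharedImageSet volumes; the call order is an assumption based on the step description above):
>>> alg = imfusion.registration.DescriptorsRegistrationAlgorithm(fixed, moving)
>>> alg.type = alg.DescriptorType.DISA
>>> alg.initialize_pose()
>>> alg.globalRegistration()
>>> alg.localRegistration()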
- class imfusion.registration.FeatureMapsRegistrationAlgorithm(self: FeatureMapsRegistrationAlgorithm, fixed: SharedImageSet, moving: SharedImageSet, weight: SharedImageSet = None)
Bases:
pybind11_object
Algorithm for registering feature maps volumes
- class Motion(self: Motion, value: int)
Bases:
pybind11_object
Members:
RIGID
AFFINE
- AFFINE = <Motion.AFFINE: 1>
- RIGID = <Motion.RIGID: 0>
- property name
- property value
- apply_registration(self: FeatureMapsRegistrationAlgorithm, params: ndarray[numpy.float64[m, 1]]) None
- batch_eval(self: FeatureMapsRegistrationAlgorithm, params: list[ndarray[numpy.float64[m, 1]]]) list[ndarray[numpy.float64[m, 1]]]
- bench_eval(self: FeatureMapsRegistrationAlgorithm, params: list[ndarray[numpy.float64[m, 1]]], num: int) None
- compute(self: FeatureMapsRegistrationAlgorithm) None
- eval(self: FeatureMapsRegistrationAlgorithm, params: ndarray[numpy.float64[m, 1]]) ndarray[numpy.float64[m, 1]]
- global_search(*args, **kwargs)
Overloaded function.
global_search(self: imfusion.registration.FeatureMapsRegistrationAlgorithm, lower_bound: numpy.ndarray[numpy.float64[m, 1]], upper_bound: numpy.ndarray[numpy.float64[m, 1]], population_size: int) -> list[tuple[numpy.ndarray[numpy.float64[m, 1]], float]]
global_search(self: imfusion.registration.FeatureMapsRegistrationAlgorithm) -> list[tuple[numpy.ndarray[numpy.float64[m, 1]], float]]
- num_evals(self: FeatureMapsRegistrationAlgorithm) int
- reset_pose(self: FeatureMapsRegistrationAlgorithm) None
- AFFINE = <Motion.AFFINE: 1>
- RIGID = <Motion.RIGID: 0>
- property motion
- property quantize
- class imfusion.registration.ImageRegistrationAlgorithm(self: imfusion.registration.ImageRegistrationAlgorithm, fixed: imfusion.SharedImageSet, moving: imfusion.SharedImageSet, model: imfusion.registration.ImageRegistrationAlgorithm.TransformationModel = <TransformationModel.LINEAR: 0>)
Bases:
BaseAlgorithm
High-level interface for image registration. The image registration algorithm wraps several concrete image registration algorithms (e.g. linear and deformable) and extends them with pre-processing techniques. Available pre-processing options include downsampling and gradient-magnitude used for LC2. On creation, the algorithm tries to find the best settings for the registration problem depending on the modality, size and other properties of the input images. The image registration comes with a default set of different transformation models.
- Parameters:
fixed – Input image that stays fixed during the registration.
moving – Input image that will be moved during the registration.
model – Defines the registration approach to use. Defaults to rigid / affine registration.
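A minimal sketch of running a registration (assumes fixed and moving are already loaded SharedImageSet volumes; the choice of FFD is only illustrative, and compute() follows the generic algorithm interface):
>>> reg = imfusion.registration.ImageRegistrationAlgorithm(fixed, moving)
>>> reg.transformation_model = reg.TransformationModel.FFD
>>> reg.compute()
>>> print(reg.best_similarity)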
- class PreprocessingOptions(self: PreprocessingOptions, value: int)
Bases:
pybind11_object
Flags to enable/disable certain preprocessing options.
Members:
NO_PREPROCESSING : Disable preprocessing completely (this cannot be ORed with other options)
RESTRICT_MEMORY : Downsamples the images so that the registration will not use more than a given maximum of (video) memory
ADJUST_SPACING : If the spacing difference of both images is large, the spacing is adjusted to match the smaller one
IGNORE_FILTERING : Ignore any PreProcessingFilter required by the AbstractImageRegistration object
CACHE_RESULTS : Store PreProcessing results and only re-compute if necessary
NORMALIZE : Normalize images to float range [0.0, 1.0]
- ADJUST_SPACING = <PreprocessingOptions.ADJUST_SPACING: 2>
- CACHE_RESULTS = <PreprocessingOptions.CACHE_RESULTS: 16>
- IGNORE_FILTERING = <PreprocessingOptions.IGNORE_FILTERING: 4>
- NORMALIZE = <PreprocessingOptions.NORMALIZE: 32>
- NO_PREPROCESSING = <PreprocessingOptions.NO_PREPROCESSING: 0>
- RESTRICT_MEMORY = <PreprocessingOptions.RESTRICT_MEMORY: 1>
- property name
- property value
- class TransformationModel(self: TransformationModel, value: int)
Bases:
pybind11_object
Available transformation models. Each one represents a specific registration approach.
Members:
LINEAR : Rigid or affine DOF registration
FFD : Registration with non-linear Free-Form deformations
TPS : Registration with non-linear Thin-Plate-Splines deformations
DEMONS : Registration with non-linear dense (per-pixel) deformations
GREEDY_DEMONS : Registration with non-linear dense (per-pixel) deformations using patch-based SimilarityMeasures
POLY_RIGID : Registration with poly-rigid (i.e. partially piecewise rigid) deformations.
USER_DEFINED
- DEMONS = <TransformationModel.DEMONS: 3>
- FFD = <TransformationModel.FFD: 1>
- GREEDY_DEMONS = <TransformationModel.GREEDY_DEMONS: 4>
- LINEAR = <TransformationModel.LINEAR: 0>
- POLY_RIGID = <TransformationModel.POLY_RIGID: 5>
- TPS = <TransformationModel.TPS: 2>
- USER_DEFINED = <TransformationModel.USER_DEFINED: 100>
- property name
- property value
- compute_preprocessing(self: ImageRegistrationAlgorithm) bool
Applies the pre-processing options on the input images. Results are cached so this is a no-op if the preprocessing options have not changed. This function is automatically called by the compute method, and therefore does not have to be explicitly called in most cases.
- reset(self: ImageRegistrationAlgorithm) None
Resets the transformation of moving to its initial transformation.
- swap_fixed_and_moving(self: ImageRegistrationAlgorithm) None
Swaps which image is considered fixed and moving.
- ADJUST_SPACING = <PreprocessingOptions.ADJUST_SPACING: 2>
- CACHE_RESULTS = <PreprocessingOptions.CACHE_RESULTS: 16>
- DEMONS = <TransformationModel.DEMONS: 3>
- FFD = <TransformationModel.FFD: 1>
- GREEDY_DEMONS = <TransformationModel.GREEDY_DEMONS: 4>
- IGNORE_FILTERING = <PreprocessingOptions.IGNORE_FILTERING: 4>
- LINEAR = <TransformationModel.LINEAR: 0>
- NORMALIZE = <PreprocessingOptions.NORMALIZE: 32>
- NO_PREPROCESSING = <PreprocessingOptions.NO_PREPROCESSING: 0>
- POLY_RIGID = <TransformationModel.POLY_RIGID: 5>
- RESTRICT_MEMORY = <PreprocessingOptions.RESTRICT_MEMORY: 1>
- TPS = <TransformationModel.TPS: 2>
- USER_DEFINED = <TransformationModel.USER_DEFINED: 100>
- property best_similarity
Returns the best value of the similarity measure after optimization.
- property fixed
Returns input image that is currently considered to be fixed.
- property is_deformable
Indicates whether the current configuration uses a deformable registration
- property max_memory
Restrict the memory used by the registration to the given amount in mebibyte. The value can be set in any case but will only have an effect if the RestrictMemory option is enabled. This will restrict video memory as well. The minimum size is 64 MB (the value will be clamped).
- property moving
Returns input image that is currently considered to be moving.
- property optimizer
Reference to the underlying optimizer.
- property param_registration
Reference to the underlying parametric registration object that actually performs the computation (e.g. parametric registration, deformable registration, etc.). Will return None if the transformation model is not parametric.
- property preprocessing_options
Which options should be enabled for preprocessing. The options are a bitwise OR combination of PreprocessingOptions values.
- property registration
Reference to the underlying registration object that actually performs the computation (e.g. parametric registration, deformable registration, etc.)
- property transformation_model
Transformation model to be used for the registration. If the transformation model changes, internal objects will be deleted and recreated. The configuration of the current model will be saved and the new model will be configured with any previously saved configuration for that model. Any attached identity deformations are removed from both images.
- property verbose
Indicates whether the algorithm is going to print additional and detailed info messages.
- class imfusion.registration.ParametricImageRegistration
Bases:
BaseAlgorithm
- class imfusion.registration.RegistrationInitAlgorithm(self: RegistrationInitAlgorithm, image1: SharedImageSet, image2: SharedImageSet)
Bases:
BaseAlgorithm
Initialize the registration of two volumes by moving the second one.
- class Mode(self: Mode, value: int)
Bases:
pybind11_object
Specifies how the distance between images should be computed.
Members:
BOUNDING_BOX
CENTER_OF_MASS
- BOUNDING_BOX = <Mode.BOUNDING_BOX: 0>
- CENTER_OF_MASS = <Mode.CENTER_OF_MASS: 1>
- property name
- property value
- BOUNDING_BOX = <Mode.BOUNDING_BOX: 0>
- CENTER_OF_MASS = <Mode.CENTER_OF_MASS: 1>
- property mode
Initialization mode (align bounding box centers, or center of mass).
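A hedged sketch of initializing two volumes by aligning their centers of mass (assumes img1 and img2 are loaded SharedImageSets; compute() follows the generic algorithm interface):
>>> init = imfusion.registration.RegistrationInitAlgorithm(img1, img2)
>>> init.mode = init.Mode.CENTER_OF_MASS
>>> init.compute()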
- class imfusion.registration.RegistrationResults(self: RegistrationResults)
Bases:
pybind11_object
Class responsible for handling and storing results of data registration. Provides functionality to add, remove, apply and manage registration results and their related data. RegistrationResults can be saved and loaded into ImFusion Registration Results (irr) files, potentially including data source information. When data source information is available this class can load the missing data to be able to apply the results. Each result contains a registration matrix and (when applicable) a deformation.
- add(self: RegistrationResults, datalist: list[Data], name: str = '', ground_truth: bool = False) None
- clear(self: RegistrationResults) None
Clears all results.
- load_missing_data(self: RegistrationResults) list[Data]
- remove(self: RegistrationResults, index: int) bool
Removes the result at the given index.
- resolve_data(self: RegistrationResults, datalist: list[Data]) None
- save(self: RegistrationResults, path: str | PathLike) None
- property has_ground_truth
- property some_data_missing
- property source_path
Path of the source from which the results were loaded (if any).
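A minimal sketch of collecting and saving results (assumes fixed and moving are registered Data objects; the case name and file name are placeholders):
>>> results = imfusion.registration.RegistrationResults()
>>> results.add([fixed, moving], name='case_01')
>>> results.save('results.irr')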
- class imfusion.registration.RegistrationResultsAlgorithm
Bases:
BaseAlgorithm
- property results
- class imfusion.registration.VolumeBasedMeshRegistrationAlgorithm(self: VolumeBasedMeshRegistrationAlgorithm, fixed: Mesh, moving: Mesh, pointcloud: PointCloud = None)
Bases:
BaseAlgorithm
Calculates a deformable registration between two meshes by calculating a deformable registration between distance volumes. Internally, an instance of the DemonsImageRegistration algorithm is used to register the “fixed” distance volume to the “moving” distance volume. As this registration computes the inverse of the mapping from the fixed to the moving volume, this directly yields a registration of the “moving” Mesh to the “fixed” Mesh.
- imfusion.registration.apply_deformation(image: SharedImageSet, adjust_size: bool = True, nearest_interpolation: bool = False) SharedImageSet
Creates a deformed image from the input image and its deformation.
- Parameters:
image (SharedImageSet) – Input image assumed to have a deformation.
adjust_size (bool) – Whether the resulting image should adjust its size to encompass the deformation.
nearest_interpolation (bool) – Whether nearest or linear interpolation is used.
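Example sketch (assumes image is a SharedImageSet that carries a deformation, e.g. after a deformable registration):
>>> deformed = imfusion.registration.apply_deformation(image, adjust_size=True)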
- imfusion.registration.load_registration_results(path: str) RegistrationResults
- imfusion.registration.scan_for_registration_results(directory: str) list[RegistrationResults]
imfusion.anatomy
ImFusion Anatomy Plugin Python Bindings
Core Functionality Areas
Anatomical Data Structures:
AnatomicalStructure: Individual anatomical structure with keypoints, planes, meshes, and images
AnatomicalStructureCollection: Container for multiple anatomical structures
GenericASC: Generic anatomical structure collection implementation
Registration and Processing:
ASCRegistration: Registration between anatomical structure collections
GenerateLinearShapeModel: Generate linear shape models from anatomical structure collections
Example Usage
Basic anatomical structure access:
>>> import imfusion.anatomy as anatomy
>>> import imfusion
>>> # Load anatomical structure collection
>>> asc = imfusion.open("anatomical_structures.imf")
>>> # Access individual anatomical structures
>>> num_structures = asc.num_anatomical_structures()
>>> print(f"Found {num_structures} anatomical structures")
>>> # Get structure by identifier
>>> liver = asc.anatomical_structure("liver")
>>> print(f"Liver identifier: {liver.identifier}")
>>> # Access keypoints using new interface
>>> keypoints = liver.keypoints2
>>> print(f"Available keypoints: {keypoints.keys()}")
>>> tip_point = keypoints["tip"]
>>> # Access meshes
>>> meshes = liver.meshes
>>> if "surface" in meshes:
... surface_mesh = meshes["surface"]
Working with transformations:
>>> # Get transformation matrices
>>> world_to_local = liver.matrix_from_world
>>> local_to_world = liver.matrix_to_world
>>> # Transform keypoints to world coordinates
>>> world_tip = local_to_world @ tip_point
Registration example:
>>> # Load two anatomical structure collections
>>> fixed_asc = imfusion.open("template.imf")
>>> moving_asc = imfusion.open("patient.imf")
>>> # Create registration algorithm
>>> registration = anatomy.ASCRegistration(fixed_asc, moving_asc)
>>> registration.registration_method = anatomy.ASCRegistration.RegistrationMethod.PointsAndPlanes
>>> # Compute registration
>>> registration.compute()
Creating anatomical structures from label maps:
>>> # Load label image
>>> label_image = imfusion.open("segmentation.nii")
>>> # Define label mappings
>>> label_mapping = {1: "liver", 2: "kidney", 3: "spleen"}
>>> # Create generic ASC from label map
>>> asc = anatomy.generic_asc_from_label_map(label_image, label_mapping)
>>> # Access created structures
>>> liver = asc.anatomical_structure("liver")
Shape model generation:
>>> # Load mean shape template
>>> mean_shape = imfusion.open("mean_template.imf")
>>> # Create shape model generator
>>> shape_model_gen = anatomy.GenerateLinearShapeModel(mean_shape)
>>> # Configure input directory with training data
>>> shape_model_gen.p_inputDirectory = "/path/to/training/data"
>>> # Generate shape model
>>> results = shape_model_gen()
>>> shape_model = results["shape_model"]
>>> updated_mean = results.get("mean")
For detailed documentation of specific classes and functions, use Python’s built-in help() function or access the docstrings directly.
Note: This module requires the ImFusion Anatomy plugin to be properly installed.
- class imfusion.anatomy.ASCRegistration(self: ASCRegistration, fixed: AnatomicalStructureCollection, moving: AnatomicalStructureCollection)
Bases:
BaseAlgorithm
Registration between AnatomicalStructureCollectionObjects
- class RegistrationMethod(self: RegistrationMethod, value: int)
Bases:
pybind11_object
Members:
DeformableMeshRegistration
RigidImages
PointsAndPlanes
PointsRigidScaling
- DeformableMeshRegistration = <RegistrationMethod.DeformableMeshRegistration: 2>
- PointsAndPlanes = <RegistrationMethod.PointsAndPlanes: 1>
- PointsRigidScaling = <RegistrationMethod.PointsRigidScaling: 4>
- RigidImages = <RegistrationMethod.RigidImages: 3>
- property name
- property value
- property registration_method
Registration method
- class imfusion.anatomy.AnatomicalStructure
Bases:
pybind11_object
- get_keypoint(self: AnatomicalStructure, arg0: str) ndarray[numpy.float64[3, 1]]
- remove_keypoint(self: AnatomicalStructure, arg0: str) None
Remove a keypoint, raises KeyError if it does not exist
- set_keypoint(self: AnatomicalStructure, arg0: str, arg1: ndarray[numpy.float64[3, 1]]) None
Set or overwrite an existing keypoint
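For illustration (the keypoint name 'apex' is a placeholder; assumes structure is an existing AnatomicalStructure):
>>> import numpy as np
>>> structure.set_keypoint('apex', np.array([10.0, 20.0, 30.0]))
>>> structure.get_keypoint('apex')
array([10., 20., 30.])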
- property graphs
Key value access to graphs. Assignable from dict.
- property identifier
Returns the identifier of the anatomical structure.
- property images
Key value access to images. Assignable from dict.
- property is_2d
Returns true if the anatomical structure is 2D, false if it is 3D.
- property keypoints
Dictionary getter (of a copy of) and setter access for all keypoints. Use get_keypoint and set_keypoint for access to individual keypoints.
- property keypoints2
Key value access to keypoints. Assignable from dict.
- property matrix_from_world
Access to 4x4 matrix representing transformation from world coordinate space to the local coordinate space of this structure.
- property matrix_to_world
Access to 4x4 matrix representing transformation from local coordinate space of this structure to the world coordinate space.
- property meshes
Key value access to meshes. Assignable from dict.
- property planes
Key value access to planes. Assignable from dict.
- property pointclouds
Key value access to pointclouds. Assignable from dict.
- property valid
Indicate whether the object is still valid, if invalid, member access raises an AnatomicalStructureInvalidException
- class imfusion.anatomy.AnatomicalStructureCollection
Bases:
Data
AnatomicalStructureCollection provides an interface for managing collections of AnatomicalStructure objects.
- anatomical_structure(*args, **kwargs)
Overloaded function.
anatomical_structure(self: imfusion.anatomy.AnatomicalStructureCollection, index: int) -> imfusion.anatomy.AnatomicalStructure
Returns the anatomical structure at the given index
anatomical_structure(self: imfusion.anatomy.AnatomicalStructureCollection, identifier: str) -> imfusion.anatomy.AnatomicalStructure
Returns the anatomical structure with the given identifier
- anatomical_structure_identifiers(self: AnatomicalStructureCollection) list[str]
Returns a list of the names of all anatomical structures in the collection
- clone(self: AnatomicalStructureCollection) AnatomicalStructureCollection
- num_anatomical_structures(self: AnatomicalStructureCollection) int
Returns the number of anatomical structures in the collection
- class imfusion.anatomy.GenerateLinearShapeModel(self: GenerateLinearShapeModel, mean_shape: AnatomicalStructureCollection)
Bases:
BaseAlgorithm
Generate a linear shape model from a set of AnatomicalStructureCollections. Input data are the mean shape and a set of .imf files with AnatomicalStructureCollections located in the input directory. The mean shape defines the anatomical structures of interest and can optionally be updated iteratively in batches before the linear shape model is computed.
- Parameters:
mean_shape – AnatomicalStructureCollection that defines the registration target and structures of interest.
- class imfusion.anatomy.GenericASC(self: GenericASC)
Bases:
AnatomicalStructureCollection
,Data
GenericASC holds the data associated with a generic anatomical structure collection.
- clone(self: GenericASC) GenericASC
- class imfusion.anatomy.KeyValueDataWrapperGraph
Bases:
pybind11_object
KeyValueDataWrapper encapsulates a key-value store KeyValueStore holding data of type T with specific type handling. It provides a Python-friendly interface to access and manipulate data stored in a KeyValueStore using string-based keys. The KeyValueDataWrapper ensures that the data is still valid when accessed and raises an AnatomicalStructureInvalidException if the data is no longer valid. Data are copied or cloned to ensure that the data is still valid when the python object is used, except for shared_ptr value types indicated by the mutable_return_values attribute.
- Parameters:
data_store (KeyValueStore) – A reference to the KeyValueStore that holds the actual data.
anatomical_structure (AnatomicalStructureWrapper) – A reference to an anatomical structure, providing context.
identifier (str) – A unique identifier for this data wrapper instance, used for logging or tracking.
- from_dict(dict_in: dict[str, T], clear: bool = True): Set the key-value store from a dictionary.
- update(dict_in: dict[str, T]): Update the key-value store from a dictionary.
- __getitem__(self: KeyValueDataWrapperGraph, arg0: str) imfusion.graph.Graph
- __iter__(self: KeyValueDataWrapperGraph) Iterator
- __setitem__(self: KeyValueDataWrapperGraph, arg0: str, arg1: imfusion.graph.Graph) None
- asdict(self: KeyValueDataWrapperGraph) dict[str, imfusion.graph.Graph]
Convert the key value store into a dict.
- from_dict(self: KeyValueDataWrapperGraph, dict_in: dict, clear: bool = True) None
Set the key value store from a dictionary.
- keys(self: KeyValueDataWrapperGraph) list[str]
- update(self: KeyValueDataWrapperGraph, dict_in: dict) None
Update the key value store from a dictionary.
- values(self: KeyValueDataWrapperGraph) list[imfusion.graph.Graph]
- property mutable_return_values
Indicate whether the KeyValueDataWrapper returns mutable values
- property valid
Indicate whether the anatomical structure object is still valid.
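This and the sibling KeyValueDataWrapper classes share a dict-like interface; a hedged sketch using the meshes wrapper of an anatomical structure (assumes structure and some_mesh already exist; 'surface' is a placeholder key):
>>> meshes = structure.meshes
>>> list(meshes.keys())
>>> meshes['surface'] = some_mesh
>>> as_plain_dict = meshes.asdict()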
- class imfusion.anatomy.KeyValueDataWrapperMesh
Bases:
pybind11_object
KeyValueDataWrapper encapsulates a key-value store KeyValueStore holding data of type T with specific type handling. It provides a Python-friendly interface to access and manipulate data stored in a KeyValueStore using string-based keys. The KeyValueDataWrapper ensures that the data is still valid when accessed and raises an AnatomicalStructureInvalidException if the data is no longer valid. Data are copied or cloned to ensure that the data is still valid when the python object is used, except for shared_ptr value types indicated by the mutable_return_values attribute.
- Parameters:
data_store (KeyValueStore) – A reference to the KeyValueStore that holds the actual data.
anatomical_structure (AnatomicalStructureWrapper) – A reference to an anatomical structure, providing context.
identifier (str) – A unique identifier for this data wrapper instance, used for logging or tracking.
- from_dict(dict_in: dict[str, T], clear: bool = True): Set the key-value store from a dictionary.
- update(dict_in: dict[str, T]): Update the key-value store from a dictionary.
- __getitem__(self: KeyValueDataWrapperMesh, arg0: str) Mesh
- __iter__(self: KeyValueDataWrapperMesh) Iterator
- __setitem__(self: KeyValueDataWrapperMesh, arg0: str, arg1: Mesh) None
- asdict(self: KeyValueDataWrapperMesh) dict[str, Mesh]
Convert the key value store into a dict.
- from_dict(self: KeyValueDataWrapperMesh, dict_in: dict, clear: bool = True) None
Set the key value store from a dictionary.
- keys(self: KeyValueDataWrapperMesh) list[str]
- update(self: KeyValueDataWrapperMesh, dict_in: dict) None
Update the key value store from a dictionary.
- values(self: KeyValueDataWrapperMesh) list[Mesh]
- property mutable_return_values
Indicate whether the KeyValueDataWrapper returns mutable values
- property valid
Indicate whether the anatomical structure object is still valid.
- class imfusion.anatomy.KeyValueDataWrapperPointCloud
Bases:
pybind11_object
KeyValueDataWrapper encapsulates a key-value store KeyValueStore holding data of type T with specific type handling. It provides a Python-friendly interface to access and manipulate data stored in a KeyValueStore using string-based keys. The KeyValueDataWrapper ensures that the data is still valid when accessed and raises an AnatomicalStructureInvalidException if the data is no longer valid. Data are copied or cloned to ensure that the data is still valid when the python object is used, except for shared_ptr value types indicated by the mutable_return_values attribute.
- Parameters:
data_store (KeyValueStore) – A reference to the KeyValueStore that holds the actual data.
anatomical_structure (AnatomicalStructureWrapper) – A reference to an anatomical structure, providing context.
identifier (str) – A unique identifier for this data wrapper instance, used for logging or tracking.
- from_dict(dict_in: dict[str, T], clear: bool = True): Set the key-value store from a dictionary.
- update(dict_in: dict[str, T]): Update the key-value store from a dictionary.
- __getitem__(self: KeyValueDataWrapperPointCloud, arg0: str) PointCloud
- __iter__(self: KeyValueDataWrapperPointCloud) Iterator
- __setitem__(self: KeyValueDataWrapperPointCloud, arg0: str, arg1: PointCloud) None
- asdict(self: KeyValueDataWrapperPointCloud) dict[str, PointCloud]
Convert the key value store into a dict.
- from_dict(self: KeyValueDataWrapperPointCloud, dict_in: dict, clear: bool = True) None
Set the key value store from a dictionary.
- keys(self: KeyValueDataWrapperPointCloud) list[str]
- update(self: KeyValueDataWrapperPointCloud, dict_in: dict) None
Update the key value store from a dictionary.
- values(self: KeyValueDataWrapperPointCloud) list[PointCloud]
- property mutable_return_values
Indicate whether the KeyValueDataWrapper returns mutable values
- property valid
Indicate whether the anatomical structure object is still valid.
- class imfusion.anatomy.KeyValueDataWrapperSharedImageSet
Bases:
pybind11_object
KeyValueDataWrapper encapsulates a key-value store KeyValueStore holding data of type T with specific type handling. It provides a Python-friendly interface to access and manipulate data stored in a KeyValueStore using string-based keys. The KeyValueDataWrapper ensures that the data is still valid when accessed and raises an AnatomicalStructureInvalidException if the data is no longer valid. Data are copied or cloned to ensure that the data is still valid when the python object is used, except for shared_ptr value types indicated by the mutable_return_values attribute.
- Parameters:
data_store (KeyValueStore) – A reference to the KeyValueStore that holds the actual data.
anatomical_structure (AnatomicalStructureWrapper) – A reference to an anatomical structure, providing context.
identifier (str) – A unique identifier for this data wrapper instance, used for logging or tracking.
- from_dict(dict_in: dict[str, T], clear: bool = True): Set the key-value store from a dictionary.
- update(dict_in: dict[str, T]): Update the key-value store from a dictionary.
- __getitem__(self: KeyValueDataWrapperSharedImageSet, arg0: str) SharedImageSet
- __iter__(self: KeyValueDataWrapperSharedImageSet) Iterator
- __setitem__(self: KeyValueDataWrapperSharedImageSet, arg0: str, arg1: SharedImageSet) None
- asdict(self: KeyValueDataWrapperSharedImageSet) dict[str, SharedImageSet]
Convert the key value store into a dict.
- from_dict(self: KeyValueDataWrapperSharedImageSet, dict_in: dict, clear: bool = True) None
Set the key value store from a dictionary.
- keys(self: KeyValueDataWrapperSharedImageSet) list[str]
Get a list of all keys.
- update(self: KeyValueDataWrapperSharedImageSet, dict_in: dict) None
Update the key value store from a dictionary.
- values(self: KeyValueDataWrapperSharedImageSet) list[SharedImageSet]
Get a list of all values.
- property mutable_return_values
Indicate whether the KeyValueDataWrapper returns mutable values
- property valid
Indicate whether the anatomical structure object is still valid.
- class imfusion.anatomy.KeyValueDataWrapperVec3
Bases:
pybind11_object
KeyValueDataWrapper encapsulates a key-value store KeyValueStore holding data of type T with specific type handling. It provides a Python-friendly interface to access and manipulate data stored in a KeyValueStore using string-based keys. The KeyValueDataWrapper ensures that the data is still valid when accessed and raises an AnatomicalStructureInvalidException if the data is no longer valid. Data are copied or cloned to ensure that the data is still valid when the python object is used, except for shared_ptr value types indicated by the mutable_return_values attribute.
- Parameters:
data_store (KeyValueStore) – A reference to the KeyValueStore that holds the actual data.
anatomical_structure (AnatomicalStructureWrapper) – A reference to an anatomical structure, providing context.
identifier (str) – A unique identifier for this data wrapper instance, used for logging or tracking.
- from_dict(dict_in: dict[str, T], clear: bool = True): Set the key-value store from a dictionary.
- update(dict_in: dict[str, T]): Update the key-value store from a dictionary.
- __getitem__(self: KeyValueDataWrapperVec3, arg0: str) ndarray[numpy.float64[3, 1]]
- __iter__(self: KeyValueDataWrapperVec3) Iterator
- __setitem__(self: KeyValueDataWrapperVec3, arg0: str, arg1: ndarray[numpy.float64[3, 1]]) None
- asdict(self: KeyValueDataWrapperVec3) dict[str, ndarray[numpy.float64[3, 1]]]
Convert the key value store into a dict.
- from_dict(self: KeyValueDataWrapperVec3, dict_in: dict, clear: bool = True) None
Set the key value store from a dictionary.
- keys(self: KeyValueDataWrapperVec3) list[str]
- update(self: KeyValueDataWrapperVec3, dict_in: dict) None
Update the key value store from a dictionary.
- values(self: KeyValueDataWrapperVec3) list[ndarray[numpy.float64[3, 1]]]
- property mutable_return_values
Indicate whether the KeyValueDataWrapper returns mutable values
- property valid
Indicate whether the anatomical structure object is still valid.
- class imfusion.anatomy.KeyValueDataWrapperVec4
Bases:
pybind11_object
KeyValueDataWrapper encapsulates a key-value store KeyValueStore holding data of type T with specific type handling. It provides a Python-friendly interface to access and manipulate data stored in a KeyValueStore using string-based keys. The KeyValueDataWrapper ensures that the data is still valid when accessed and raises an AnatomicalStructureInvalidException if the data is no longer valid. Data are copied or cloned to ensure that the data is still valid when the python object is used, except for shared_ptr value types indicated by the mutable_return_values attribute.
- Parameters:
data_store (KeyValueStore) – A reference to the KeyValueStore that holds the actual data.
anatomical_structure (AnatomicalStructureWrapper) – A reference to an anatomical structure, providing context.
identifier (str) – A unique identifier for this data wrapper instance, used for logging or tracking.
- from_dict(dict_in: dict[str, T], clear: bool = True): Set the key-value store from a dictionary.
- update(dict_in: dict[str, T]): Update the key-value store from a dictionary.
- __getitem__(self: KeyValueDataWrapperVec4, arg0: str) ndarray[numpy.float64[4, 1]]
- __iter__(self: KeyValueDataWrapperVec4) Iterator
- __setitem__(self: KeyValueDataWrapperVec4, arg0: str, arg1: ndarray[numpy.float64[4, 1]]) None
- asdict(self: KeyValueDataWrapperVec4) dict[str, ndarray[numpy.float64[4, 1]]]
Convert the key value store into a dict.
- from_dict(self: KeyValueDataWrapperVec4, dict_in: dict, clear: bool = True) None
Set the key value store from a dictionary.
- keys(self: KeyValueDataWrapperVec4) list[str]
- update(self: KeyValueDataWrapperVec4, dict_in: dict) None
Update the key value store from a dictionary.
- values(self: KeyValueDataWrapperVec4) list[ndarray[numpy.float64[4, 1]]]
- property mutable_return_values
Indicate whether the KeyValueDataWrapper returns mutable values
- property valid
Indicate whether the anatomical structure object is still valid.
- imfusion.anatomy.generic_asc_from_label_map(arg0: SharedImageSet, arg1: dict[int, str]) GenericASC
imfusion.spine
ImFusion Spine Plugin Python Bindings
Core Functionality Areas
Spine Data Structures:
SpineData: Container for a complete spine with multiple vertebrae
OrientedVertebra: Individual vertebra representation with keypoints, planes, and splines
Spine Algorithms:
SpineBaseAlgorithm: Main algorithm for CT spine localization, classification, and segmentation
SpineLocalization2DAlgorithm: 2D X-ray vertebra detection and localization
SpinePolyRigidDeformation: Poly-rigid registration and deformation calculations given an existing SpineData
Example Usage
Basic spine analysis workflow:
>>> import imfusion.spine as spine
>>> import imfusion
>>> # Load CT image
>>> ct_image = imfusion.open("spine_ct.nii")
>>> # Create spine analysis algorithm
>>> alg = spine.SpineBaseAlgorithm(ct_image)
>>> # Set spine bounds automatically
>>> alg.set_bounds()
>>> # Localize and classify vertebrae
>>> status = alg.localize()
>>> # Get spine data with all vertebrae
>>> spine_data = alg.take_spine_data()
>>> print(f"Found {spine_data.num_vertebrae()} vertebrae")
>>> # Access individual vertebrae
>>> l1_vertebra = spine_data.vertebra("L1")
>>> position = l1_vertebra.calculate_position()
>>> # Segment specific vertebra
>>> l1_segmentation = alg.segment("L1")
2D X-ray analysis:
>>> # Load X-ray image
>>> xray = imfusion.open("spine_xray.dcm")
>>> # Create 2D localization algorithm
>>> alg_2d = spine.SpineLocalization2DAlgorithm(xray)
>>> # Run detection
>>> alg_2d.compute()
Poly-rigid deformation example:
>>> # Load CT volume and spine data
>>> ct_volume = imfusion.open("spine_ct.nii")
>>> spine_data = imfusion.open("spine_data.imf") # :class:`~imfusion.anatomy.AnatomicalStructureCollection`
>>> # Create poly-rigid deformation algorithm
>>> deform_alg = spine.SpinePolyRigidDeformation(ct_volume, spine_data)
>>> # Configure deformation parameters
>>> deform_alg.chamfer_distance = True
>>> deform_alg.mode = spine.PolyRigidDeformationMode.BACKWARD
>>> deform_alg.inversion_steps = 50
>>> # Compute the deformation
>>> deform_alg.compute()
For detailed documentation of specific classes and functions, use Python’s built-in help() function or access the docstrings directly.
Note: This module requires the ImFusion Spine plugin to be properly installed.
- class imfusion.spine.OrientedVertebra
Bases:
AnatomicalStructure
Individual vertebra with spatial orientation and anatomical features.
Represents a single vertebra in 3D space with complete anatomical information including keypoints, orientation planes, splines, and associated imaging data. Each vertebra has a unique name (e.g., “L1”, “T12”) and can be classified by type (cervical, thoracic, lumbar).
The vertebra maintains keypoints for anatomical landmarks (body center, pedicles, etc.), orientation information derived from these landmarks, and can store associated segmentation masks and other imaging data.
Example
>>> vertebra = spine_data.vertebra("L1")
>>> position = vertebra.calculate_position()
>>> orientation = vertebra.orientation
>>> print(f"L1 at position: {position}")
- calculate_position(self: OrientedVertebra) ndarray[numpy.float64[3, 1]]
Sets and returns the position of the vertebra using the body center and the left and right pedicle centers if available; otherwise the position is set to the body center alone, or NaN is returned if none of these landmarks are available.
- clone(self: OrientedVertebra) OrientedVertebra
Create a deep copy of the vertebra.
- Returns:
Independent copy of this vertebra with all properties
- Return type:
- name(self: OrientedVertebra) str
Get the name of the vertebra.
- Returns:
Vertebra name (e.g., “L1”, “T12”, “C7”)
- Return type:
- property orientation
Returns the 3x3 rotation matrix of the orientation of the vertebra using the body center and the left and right pedicle centers if available, otherwise returns an identity transformation.
- property pinned_type_id
The pinned type id of the vertebra
- property type_id
The type id of the vertebra
- property type_probability
The type class probabilities of the vertebra
- class imfusion.spine.PolyRigidDeformationMode(self: PolyRigidDeformationMode, value: int)
Bases:
pybind11_object
Mode for poly-rigid deformation computation.
Defines how the deformation is computed and applied:
- BACKWARD: Compute deformation based on forward model, then invert it
- FORWARD: Compute deformation based on backwards model directly
- ONLYRIGID: Only deform rigid sections, implicitly masks nonrigid sections
Members:
BACKWARD : Compute forward model then invert (default)
FORWARD : Compute backwards model directly
ONLYRIGID : Only deform rigid sections
- BACKWARD = <PolyRigidDeformationMode.BACKWARD: 0>
- FORWARD = <PolyRigidDeformationMode.FORWARD: 1>
- ONLYRIGID = <PolyRigidDeformationMode.ONLYRIGID: 2>
- property name
- property value
- class imfusion.spine.SpineBaseAlgorithm(self: SpineBaseAlgorithm, source: SharedImageSet, label: SharedImageSet = None)
Bases:
BaseAlgorithm
Comprehensive spine analysis algorithm for CT images.
Main algorithm for spine localization, classification, and segmentation in CT volumes. Provides a complete pipeline for detecting vertebrae, classifying their types (cervical, thoracic, lumbar), and segmenting individual anatomical structures including vertebrae, sacrum, and ilium.
The algorithm works with calibrated CT images and uses machine learning models for accurate spine analysis. It maintains a collection of detected vertebrae that can be accessed, modified, and segmented individually.
- Typical workflow:
Initialize with CT image
Set spine bounds (automatic or manual)
Localize vertebrae
Segment individual structures
Extract spine data for further analysis
Example
>>> ct = imfusion.open("spine_ct.nii")[0]
>>> alg = spine.SpineBaseAlgorithm(ct)
>>> alg.set_bounds()
>>> alg.localize()
>>> l1_seg = alg.segment("L1")
>>> spine_data = alg.take_spine_data()
Initialize the spine analysis algorithm.
- Parameters:
source – Input CT image set for spine analysis
label – Optional label image set for guided analysis (default: None)
- available_model_names(self: SpineBaseAlgorithm) list[str]
Get list of all available model names.
- Returns:
Names of all registered models
- Return type:
List[str]
- current_model_name(self: SpineBaseAlgorithm) str
Get the name of the currently active model.
- Returns:
Name of the current model
- Return type:
- localize(self: SpineBaseAlgorithm) Status
Perform complete vertebra localization and classification.
Clears any existing vertebrae, then localizes all vertebrae in the CT image and classifies them by type (cervical, thoracic, lumbar). This is the main processing method that should be called after setting bounds.
- Returns:
- Success if localization and classification succeeded,
otherwise an error status indicating what failed
- Return type:
Algorithm.Status
Note
Call set_bounds() before this method for best results.
- reset(self: SpineBaseAlgorithm, arg0: SharedImageSet, arg1: SharedImageSet, arg2: list[ndarray[numpy.float64[3, 1]]], arg3: list[ndarray[numpy.float64[3, 1]]], arg4: list[ndarray[numpy.float64[3, 1]]], arg5: bool) None
Reset the algorithm with new data and parameters.
- Parameters:
source – New source image set
label – New label image set (can be None)
bounds_min – Minimum bounds for spine region
bounds_max – Maximum bounds for spine region
bounds_center – Center point for spine region
clear_vertebrae – Whether to clear existing vertebrae (default: True)
- segment(*args, **kwargs)
Overloaded function.
segment(self: imfusion.spine.SpineBaseAlgorithm, index: int) -> imfusion.SharedImageSet
Segment a specific vertebra by index.
- Args:
index: Zero-based index of the vertebra to segment
- Returns:
SharedImageSet: Segmentation mask for the specified vertebra
- Raises:
IndexError: If index is out of range
segment(self: imfusion.spine.SpineBaseAlgorithm, name: str) -> imfusion.SharedImageSet
Segment a specific vertebra by name.
- Args:
name: Name of the vertebra to segment (e.g., “L1”, “T12”)
- Returns:
SharedImageSet: Segmentation mask for the specified vertebra
- Raises:
KeyError: If no vertebra with the given name is found
- segment_all_vertebrae(self: SpineBaseAlgorithm) Status
Segment all detected vertebrae.
- Returns:
Combined segmentation mask containing all vertebrae
- Return type:
- segment_discs(self: SpineBaseAlgorithm) Status
Segment intervertebral discs.
- Returns:
Segmentation mask for all intervertebral discs
- Return type:
- segment_ilium(self: SpineBaseAlgorithm) bool
Segment the ilium bones.
- Returns:
Segmentation mask for both left and right ilium
- Return type:
- segment_pelvis(self: SpineBaseAlgorithm, join_left_and_right_pelvis: bool = False) bool
Segment the pelvis structures.
- Parameters:
join_left_and_right_pelvis – Whether to combine left and right pelvis into single mask (default: False)
- Returns:
Segmentation mask for pelvis structures
- Return type:
- segment_sacrum(self: SpineBaseAlgorithm) bool
Segment the sacrum.
- Returns:
Segmentation mask for the sacrum
- Return type:
- set_bounds(self: SpineBaseAlgorithm) None
Predicts and sets vertebra column bounds in the input image.
- set_model_by_name(self: SpineBaseAlgorithm, arg0: str) bool
Set the active model by name.
- Parameters:
model_name – Name of the model to use for spine analysis
- take_spine_data(self: SpineBaseAlgorithm) SpineData
Extract and take ownership of the spine data.
Transfers the complete spine data structure containing all detected vertebrae and their properties to the caller. After calling this method, the algorithm no longer owns the spine data.
- Returns:
Complete spine data structure with all detected vertebrae
- Return type:
- class imfusion.spine.SpineData
Bases:
AnatomicalStructureCollection
SpineData holds the data associated with a single spine.
- add_vertebra(self: SpineData, oriented_vertebra: OrientedVertebra) None
Add a copy of a vertebra to the spine.
- Parameters:
oriented_vertebra – OrientedVertebra object to add (will be cloned)
Note
A deep copy of the vertebra is added to preserve the original.
- clone(self: SpineData) SpineData
Create a deep copy of the spine data.
- Returns:
Independent copy with all vertebrae and properties
- Return type:
- get_keypoint(self: SpineData, arg0: str) ndarray[numpy.float64[3, 1]]
Get a specific keypoint by name.
- Parameters:
key – Name of the keypoint to retrieve
- Returns:
3D coordinates of the keypoint
- Return type:
vec3
- Raises:
KeyError – If the keypoint does not exist
- has_ilium(self: SpineData) bool
Check if ilium data is present.
- Returns:
True if ilium structures are detected, False otherwise
- Return type:
- has_sacrum(self: SpineData) bool
Check if sacrum data is present.
- Returns:
True if sacrum structures are detected, False otherwise
- Return type:
- num_vertebrae(self: SpineData) int
Get the number of vertebrae in the spine.
- Returns:
Total number of detected vertebrae
- Return type:
- remove_keypoint(self: SpineData, key: str) None
Remove a keypoint from the spine data.
- Parameters:
key – Name of the keypoint to remove
- Raises:
KeyError – If the keypoint does not exist
- remove_vertebra(*args, **kwargs)
Overloaded function.
remove_vertebra(self: imfusion.spine.SpineData, name: str, check_unique: bool = True) -> None
Remove a vertebra by name.
- Args:
name: Name of the vertebra to remove (e.g., “L1”)
check_unique: Whether to verify the name is unique before removal (default: True)
- Raises:
KeyError: If no vertebra with the given name exists
KeyError: If check_unique is True and multiple vertebrae have the same name
remove_vertebra(self: imfusion.spine.SpineData, index: int) -> None
Remove a vertebra by index.
- Args:
index: Zero-based index of the vertebra to remove
- Raises:
IndexError: If the index is out of range
- set_keypoint(self: SpineData, key: str, value: ndarray[numpy.float64[3, 1]]) None
Set or overwrite a keypoint.
- Parameters:
key – Name of the keypoint to set
value – 3D coordinates for the keypoint
- vertebra(*args, **kwargs)
Overloaded function.
vertebra(self: imfusion.spine.SpineData, index: int) -> imfusion.spine.OrientedVertebra
Direct read-write access to an OrientedVertebra by name or index. Raises IndexError for out-of-range indices or KeyError if the name does not exist. If the vertebra object becomes invalid (i.e. it was removed from the spine), member access to the object raises an exception.
vertebra(self: imfusion.spine.SpineData, name: str, check_unique: bool = True) -> imfusion.spine.OrientedVertebra
Access a vertebra by name.
- Args:
name: Name of the vertebra to access (e.g., “L1”)
check_unique: Whether to verify the name is unique (default: True)
- Returns:
OrientedVertebra: Reference to the vertebra object
- Raises:
KeyError: If no vertebra with the given name exists
KeyError: If check_unique is True and multiple vertebrae have the same name
- Note:
The returned object becomes invalid if the vertebra is removed from the spine.
- property keypoints
Dictionary access to all keypoints in the spine data.
Getter returns a copy of all keypoints as a dictionary mapping keypoint names to their 3D coordinates. Setter allows bulk assignment of keypoints from a dictionary. For individual keypoint access, use get_keypoint() and set_keypoint() methods.
- Returns:
Dictionary of keypoint names to 3D coordinates
- Return type:
Dict[str, vec3]
- property vertebrae_names
Returns a list of vertebra names. The order is the same as the order of the vertebrae in the spine when accessed via index.
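A short sketch iterating all vertebrae by name (assumes spine_data was obtained, e.g., via SpineBaseAlgorithm.take_spine_data()):
>>> for name in spine_data.vertebrae_names:
...     v = spine_data.vertebra(name)
...     print(name, v.calculate_position())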
- class imfusion.spine.SpineLocalization2DAlgorithm(self: SpineLocalization2DAlgorithm, image: SharedImageSet)
Bases:
BaseAlgorithm
2D spine localization algorithm for X-ray images.
Detects and localizes vertebrae, femurs, and clavicles in 2D X-ray images using machine learning models. The algorithm can work with multiple model sets and provides configurable keypoint sensitivity.
The resulting detections are stored in the algorithm’s output as SpineData objects containing OrientedVertebra structures with detected keypoints and anatomical features.
Example
>>> xray = imfusion.open("spine_xray.dcm")[0]
>>> alg = spine.SpineLocalization2DAlgorithm(xray)
>>> alg.add_model("model_v1", "body.pt", "femur.pt", "clavicle.pt", 0.5)
>>> alg.compute()
>>> results = alg.output()
Initialize the 2D spine localization algorithm.
- Parameters:
image – Input X-ray image set for spine localization
- add_model(self: SpineLocalization2DAlgorithm, arg0: str, arg1: str, arg2: str, arg3: str, arg4: float) None
Add a machine learning model for spine structure detection.
Registers a new model set with the algorithm and configures it for use in subsequent compute() calls. The model can detect vertebrae, femurs, and clavicles depending on which model paths are provided.
- Parameters:
model_name – Unique identifier for the model set
body_detection_path – Path to PyTorch model file for vertebra detection (can be empty)
femur_detection_path – Path to PyTorch model file for femur detection (can be empty)
clavicle_detection_path – Path to PyTorch model file for clavicle detection (can be empty)
keypoint_sensitivity – Sensitivity threshold for keypoint detection (0.0-1.0)
Note
At least one detection path should be provided. Empty paths will skip detection for that anatomical structure.
- class imfusion.spine.SpinePolyRigidDeformation(*args, **kwargs)
Bases:
BaseAlgorithm
Set up a poly-rigid deformation on a volume and one or two AnatomicalStructureCollection objects.
The distance volumes are computed from the vertebrae stored in the source AnatomicalStructureCollection, which are used to define the rigid regions. This algorithm initializes a PolyRigidDeformation on the input CT volume based on the computed distance volumes with as many control points as the number of vertebrae.
If two AnatomicalStructureCollection objects are provided, the first AnatomicalStructureCollection object is registered to the second AnatomicalStructureCollection object and this is used to set the initial parameters of the poly-rigid deformation.
Overloaded function.
__init__(self: imfusion.spine.SpinePolyRigidDeformation, image: imfusion.SharedImageSet, spine_source: imfusion.anatomy.AnatomicalStructureCollection) -> None
Constructor with a single volume and AnatomicalStructureCollection.
- Args:
image: Input CT volume to set deformation on
spine_source: AnatomicalStructureCollection containing vertebrae for rigid regions
__init__(self: imfusion.spine.SpinePolyRigidDeformation, image: imfusion.SharedImageSet, spine_source: imfusion.anatomy.AnatomicalStructureCollection, spine_destination: imfusion.anatomy.AnatomicalStructureCollection) -> None
Constructor with volume and source/destination AnatomicalStructureCollections.
- Args:
image: Input CT volume to set deformation on
spine_source: Source AnatomicalStructureCollection containing vertebrae for rigid regions
spine_destination: Optional destination AnatomicalStructureCollection for initial transformations
- class imfusion.spine.VertebraType(self: VertebraType, value: int)
Bases:
pybind11_object
Enumeration of vertebra types in the spine.
Used to classify vertebrae into anatomical regions:
- Cervical: Neck vertebrae (C1-C7)
- Thoracic: Chest vertebrae (T1-T12)
- Lumbar: Lower back vertebrae (L1-L5)
Members:
None : No specific vertebra type
Lumbar : Lumbar vertebra (L1-L5)
Thoracic : Thoracic vertebra (T1-T12)
Cervical : Cervical vertebra (C1-C7)
- Cervical = <VertebraType.Cervical: 1>
- Lumbar = <VertebraType.Lumbar: 3>
- None = <VertebraType.None: 0>
- Thoracic = <VertebraType.Thoracic: 2>
- property name
- property value
imfusion.stream
- class imfusion.stream.AlgorithmExecutorStream
Bases:
ImageStream
- class imfusion.stream.BasicProcessingStream
Bases:
ImageStream
- class imfusion.stream.FakeImageStream
Bases:
ImageStream
- class imfusion.stream.FakePolyDataStream
Bases:
PolyDataStream
- class imfusion.stream.FakeTrackingStream
Bases:
TrackingStream
- class imfusion.stream.ImageOutStream(self: ImageOutStream, output_connection: OutputConnection, name: str)
Bases:
OutStream
- class imfusion.stream.OfflineStream
Bases:
ImageStream
- class imfusion.stream.OutputConnection
Bases:
pybind11_object
- close_connection(self: OutputConnection) None
- is_compatible(self: OutputConnection, kind: Kind) bool
- open_connection(self: OutputConnection) None
- send_data(self: OutputConnection, data: Data) None
- property is_connected
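A hedged usage sketch (assumes conn is a concrete OutputConnection instance and data is an imfusion Data object to transmit):
>>> conn.open_connection()
>>> if conn.is_connected:
...     conn.send_data(data)
>>> conn.close_connection()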
- class imfusion.stream.PlaybackStream
Bases:
ImageStream
- class imfusion.stream.PlaybackTrackingStream
Bases:
TrackingStream
- class imfusion.stream.SpacingAttachedImageStream
Bases:
ImageStream
- class imfusion.stream.Stream
Bases:
Data
- class State(self: State, value: int)
Bases:
pybind11_object
Members:
CLOSED
OPENING
OPEN
STARTING
RUNNING
PAUSING
PAUSED
RESUMING
STOPPING
CLOSING
- CLOSED = <State.CLOSED: 0>
- CLOSING = <State.CLOSING: 9>
- OPEN = <State.OPEN: 2>
- OPENING = <State.OPENING: 1>
- PAUSED = <State.PAUSED: 6>
- PAUSING = <State.PAUSING: 5>
- RESUMING = <State.RESUMING: 7>
- RUNNING = <State.RUNNING: 4>
- STARTING = <State.STARTING: 3>
- STOPPING = <State.STOPPING: 8>
- property name
- property value
- configuration(self: Stream) Properties
Retrieves the configuration of the object.
- configure(self: Stream, arg0: Properties) None
Configures the object.
- CLOSED = <State.CLOSED: 0>
- CLOSING = <State.CLOSING: 9>
- OPEN = <State.OPEN: 2>
- OPENING = <State.OPENING: 1>
- PAUSED = <State.PAUSED: 6>
- PAUSING = <State.PAUSING: 5>
- RESUMING = <State.RESUMING: 7>
- RUNNING = <State.RUNNING: 4>
- STARTING = <State.STARTING: 3>
- STOPPING = <State.STOPPING: 8>
- property current_state
- property supports_pausing
- property uuid
- class imfusion.stream.StreamRecorderAlgorithm(self: StreamRecorderAlgorithm, arg0: list[Stream])
Bases:
BaseAlgorithm
- class CaptureMode(self: CaptureMode, value: int)
Bases:
pybind11_object
Members:
CAPTURE_ALL
ON_REQUEST
- CAPTURE_ALL = <CaptureMode.CAPTURE_ALL: 0>
- ON_REQUEST = <CaptureMode.ON_REQUEST: 1>
- property name
- property value
- class DataCombinationMode(self: DataCombinationMode, value: int)
Bases:
pybind11_object
Members:
INDIVIDUAL
ALL
FIRST_TRACKING
ONE_ON_ONE
- ALL = <DataCombinationMode.ALL: 1>
- FIRST_TRACKING = <DataCombinationMode.FIRST_TRACKING: 2>
- INDIVIDUAL = <DataCombinationMode.INDIVIDUAL: 0>
- ONE_ON_ONE = <DataCombinationMode.ONE_ON_ONE: 3>
- property name
- property value
- set_capture_next_sample(self: StreamRecorderAlgorithm) None
- start(self: StreamRecorderAlgorithm) None
- stop(self: StreamRecorderAlgorithm) None
- ALL = <DataCombinationMode.ALL: 1>
- CAPTURE_ALL = <CaptureMode.CAPTURE_ALL: 0>
- FIRST_TRACKING = <DataCombinationMode.FIRST_TRACKING: 2>
- INDIVIDUAL = <DataCombinationMode.INDIVIDUAL: 0>
- ONE_ON_ONE = <DataCombinationMode.ONE_ON_ONE: 3>
- ON_REQUEST = <CaptureMode.ON_REQUEST: 1>
- property capture_mode
- property compress_save
- property data_combination_mode
- property image_samples_limit
- property image_stream
- property is_recording
- property limit_reached
- property num_recorded_bytes
- property num_recorded_frames
- property num_recorded_tracking_data
- property num_recorders
- property number_of_data_to_keep
- property passed_time
- property patient_name
- property record_both_timestamps
- property recorded_bytes_limit
- property save_as_dicom
- property save_path
- property save_to_file
- property stop_on_image_size_changed
- property system_mem_limit
- property time_limit
- property tracking_quality_threshold
- property tracking_samples_limit
- property tracking_stream
- property use_device_time_for_image_stream
- property use_device_time_for_tracking_stream
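A minimal recording sketch (assumes stream is an already running imfusion.stream.Stream; the save path is a placeholder):
>>> recorder = imfusion.stream.StreamRecorderAlgorithm([stream])
>>> recorder.save_to_file = True
>>> recorder.save_path = '/tmp/recording'
>>> recorder.start()
>>> # ... acquire for a while ...
>>> recorder.stop()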
- class imfusion.stream.TrackingOutStream(self: TrackingOutStream, output_connection: OutputConnection, name: str)
Bases:
OutStream
- class imfusion.stream.VideoCameraStream
Bases:
ImageStream
imfusion.ultrasound
- class imfusion.ultrasound.CalibrationMultisweepMode(self: CalibrationMultisweepMode, value: int)
Bases:
pybind11_object
Mode for handling multiple sweeps in ultrasound calibration.
Members:
CONCATENATE : The first half of sweeps are used to reconstruct frames from the second half, and vice versa. Useful for expanding the lateral field of view (e.g., with two shifted acquisitions for each orientation).
SUCCESSIVE_PAIRS : Each pair of successive sweeps is calibrated together and included in the same cost function. Useful for imaging different calibration objects with pairs of sweeps, improving stability by joint optimization.
- CONCATENATE = <CalibrationMultisweepMode.CONCATENATE: 0>
- SUCCESSIVE_PAIRS = <CalibrationMultisweepMode.SUCCESSIVE_PAIRS: 1>
- property name
- property value
- class imfusion.ultrasound.CalibrationSimilarityMeasureConfig(self: CalibrationSimilarityMeasureConfig, mode: int, patch_size: int = 9)
Bases:
pybind11_object
Configuration for similarity measure used in ultrasound calibration.
- property mode
Mode of similarity measure used for ultrasound calibration (int):
SAD (0): Sum of Absolute Differences. Measures similarity by summing the absolute differences between corresponding pixel values.
SSD (1): Sum of Squared Differences. Measures similarity by summing the squared differences between corresponding pixel values.
NCC (2): Normalized Cross-Correlation. Measures similarity by computing the normalized correlation between image patches.
LNCC (3): Local Normalized Cross-Correlation. Measures similarity using normalized cross-correlation computed over local regions.
- property patch_size
Patch size.
- class imfusion.ultrasound.CompoundingBoundingBoxMode(self: CompoundingBoundingBoxMode, value: int)
Bases:
pybind11_object
Bounding box calculation mode for the compound_sweep function.
Members:
GLOBAL_COORDINATES : Bounding box in global coordinates
FRAME_NORMAL : Mean frame normal vector
HEURISTIC_ALIGNMENT : Combination of frame normal and PCA of center points
FIT_BOUNDING_BOX : Fitted minimal bounding box
- FIT_BOUNDING_BOX = <CompoundingBoundingBoxMode.FIT_BOUNDING_BOX: 3>
- FRAME_NORMAL = <CompoundingBoundingBoxMode.FRAME_NORMAL: 1>
- GLOBAL_COORDINATES = <CompoundingBoundingBoxMode.GLOBAL_COORDINATES: 0>
- HEURISTIC_ALIGNMENT = <CompoundingBoundingBoxMode.HEURISTIC_ALIGNMENT: 2>
- property name
- property value
- class imfusion.ultrasound.CompoundingMode(self: CompoundingMode, value: int)
Bases:
pybind11_object
Compounding method for the compound_sweep function.
Members:
GPU : GPU-based direct compounding with linear interpolation.
GPU_NEAREST : GPU-based direct compounding with nearest neighbor interpolation.
GPU_BACKWARD : GPU-based backward compounding with linear interpolation.
- GPU = <CompoundingMode.GPU: 0>
- GPU_BACKWARD = <CompoundingMode.GPU_BACKWARD: 8>
- GPU_NEAREST = <CompoundingMode.GPU_NEAREST: 1>
- property name
- property value
- class imfusion.ultrasound.CoordinateSystem(self: CoordinateSystem, value: int)
Bases:
pybind11_object
Coordinate system the geometry is defined in. See Coordinate Systems.
Members:
PIXELS
IMAGE
- IMAGE = <CoordinateSystem.IMAGE: 1>
- PIXELS = <CoordinateSystem.PIXELS: 0>
- property name
- property value
- class imfusion.ultrasound.FrameGeometry
Bases:
pybind11_object
Represents the (fan) geometry of an ultrasound frame.
See the C++ documentation for details on geometry types and coordinate systems.
- class OrientationIndicatorPosition(self: OrientationIndicatorPosition, value: int)
Bases:
pybind11_object
Position of the external orientation indicator (e.g. colored knob) on the probe.
Members:
NEARSIDE : Indicator is at the near side of the US frame (close to the first beam)
FARSIDE : Indicator is at the far side of the US frame (close to the last beam)
- FARSIDE = <OrientationIndicatorPosition.FARSIDE: 1>
- NEARSIDE = <OrientationIndicatorPosition.NEARSIDE: 0>
- property name
- property value
- class TransformationMode(self: TransformationMode, value: int)
Bases:
pybind11_object
Used in transform_point() to specify which geometric transformation to apply.
Members:
NORM_PRESCAN_TO_SCAN_CONVERTED : Transformation from normalized pre-scanconverted coordinates to scan-converted coordinates.
SCAN_CONVERTED_TO_NORM_PRESCAN : Transformation from scan-converted coordinates to normalized pre-scanconverted coordinates.
- NORM_PRESCAN_TO_SCAN_CONVERTED = <TransformationMode.NORM_PRESCAN_TO_SCAN_CONVERTED: 0>
- SCAN_CONVERTED_TO_NORM_PRESCAN = <TransformationMode.SCAN_CONVERTED_TO_NORM_PRESCAN: 1>
- property name
- property value
- clone(self: FrameGeometry) FrameGeometry
Clones the current frame geometry, including the image descriptor.
- contains(self: FrameGeometry, coordinate: ndarray[numpy.float64[2, 1]]) bool
Returns true if the given point is within the fan.
- convert_to(self: FrameGeometry, coordinate_system: CoordinateSystem) FrameGeometry
Returns a copy where internal values were converted to new units.
- is_similar(self: FrameGeometry, other: FrameGeometry, ignore_offset: bool = False, eps: float = 0.1) bool
True if the given frame geometry is similar to this one, within a given tolerance.
- transform_point(self: FrameGeometry, p: ndarray[numpy.float64[2, 1]], mode: TransformationMode) ndarray[numpy.float64[2, 1]]
Applies the specified geometric transformation to a point.
- FARSIDE = <OrientationIndicatorPosition.FARSIDE: 1>
- NEARSIDE = <OrientationIndicatorPosition.NEARSIDE: 0>
- NORM_PRESCAN_TO_SCAN_CONVERTED = <TransformationMode.NORM_PRESCAN_TO_SCAN_CONVERTED: 0>
- SCAN_CONVERTED_TO_NORM_PRESCAN = <TransformationMode.SCAN_CONVERTED_TO_NORM_PRESCAN: 1>
- property coordinate_system
Coordinate system used in the frame geometry.
- property depth
Depth of the frame geometry, in mm or pixels, depending on the coordinate system.
- property frame_center
Returns the center of the frame, in mm or pixels, depending on the coordinate system.
- property img_desc
ImageDescriptor
for the frame geometry.
- property img_desc_prescan
ImageDescriptor
for the image before scan conversion (scanlines).
- property indicator_pos
Position of the external orientation indicator (e.g. colored knob).
- property is_circular
True if geometry is circular type.
- property is_convex
True if geometry is convex type.
- property is_linear
True if geometry is linear type.
- property is_sector
True if geometry is sector type.
- property offset
Offset of the geometry within the image.
- property top_down
Orientation of the geometry: top-down or bottom-up.
- class imfusion.ultrasound.FrameGeometryCircular(self: FrameGeometryCircular, coord_sys: CoordinateSystem)
Bases:
FrameGeometry
FrameGeometry specialization for circular frame geometries.
- property depth
Depth of the frame annulus, in mm or pixels, depending on the coordinate system.
- property long_radius
Long radius of the frame annulus, in mm or pixels, depending on the coordinate system.
- property short_radius
Inner radius of the frame annulus, in mm or pixels, depending on the coordinate system.
- class imfusion.ultrasound.FrameGeometryConvex(self: FrameGeometryConvex, coord_sys: CoordinateSystem)
Bases:
FrameGeometry
FrameGeometry specialization for convex frame geometries.
- apex(self: FrameGeometryConvex) ndarray[numpy.float64[2, 1]]
The virtual point beyond the probe surface where all the rays would intersect.
- property depth
Depth of the frame sector, in mm or pixels, depending on the coordinate system. Adapts the long radius.
- property long_radius
Outer radius of the frame sector, in mm or pixels, depending on the coordinate system.
- property opening_angle
Opening angle of the frame sector [deg]. Measured from the vertical line.
- property short_radius
Inner radius of the frame sector, in mm or pixels, depending on the coordinate system.
- class imfusion.ultrasound.FrameGeometryLinear(self: FrameGeometryLinear, coord_sys: CoordinateSystem)
Bases:
FrameGeometry
FrameGeometry specialization for linear frame geometries.
- property depth
Depth of the frame sector, in mm or pixels, depending on the coordinate system.
- property steering_angle
Steering angle of the frame sector [deg]. Positive when tilted to the right.
- property width
Width of the frame sector, in mm or pixels, depending on the coordinate system.
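As an illustrative sketch (not part of the original reference), a linear geometry could be constructed and queried with the inherited FrameGeometry helpers roughly as follows; the numeric values are placeholders, and it is assumed that the geometry properties are writable:
>>> from imfusion import ultrasound
>>> geom = ultrasound.FrameGeometryLinear(ultrasound.CoordinateSystem.IMAGE)
>>> geom.width = 40.0            # placeholder lateral extent
>>> geom.depth = 60.0            # placeholder imaging depth
>>> geom.steering_angle = 0.0
>>> inside = geom.contains([20.0, 30.0])        # point-in-fan test inherited from FrameGeometry
>>> same = geom.is_similar(geom.clone())        # compare against a copy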
- class imfusion.ultrasound.FrameGeometryMetadata(self: FrameGeometryMetadata)
Bases:
DataComponentBase
Holds metadata for a frame geometry, including configuration and reference to the geometry object.
- property frame_geometry
Returns the associated FrameGeometry object.
- class imfusion.ultrasound.FrameGeometrySector(self: FrameGeometrySector, coord_sys: CoordinateSystem)
Bases:
FrameGeometry
FrameGeometry specialization for sector frame geometries.
- apex(self: FrameGeometrySector) ndarray[numpy.float64[2, 1]]
The virtual point beyond the probe surface where all the rays would intersect.
- property bottom_curvature
Curvature of the bottom line (0.0 is flat, 1.0 full circle).
- property depth
Depth of the frame sector, in mm or pixels, depending on the coordinate system. Adapts the long radius.
- property long_radius
Outer radius of the frame sector, in mm or pixels, depending on the coordinate system.
- property opening_angle
Opening angle of the frame sector [deg]. Measured from the vertical line.
- property short_radius
Inner radius of the frame sector, in mm or pixels, depending on the coordinate system.
- class imfusion.ultrasound.GlProbeDeformation(self: GlProbeDeformation, dist_img: SharedImageSet = None)
Bases:
Deformation
Deformation model for a radial compression emanating from an ultrasound probe.
- class imfusion.ultrasound.ProcessUltrasound(self: ProcessUltrasound)
Bases:
Configurable
Processes ultrasound data, applying various corrections and enhancements.
- setRemoveDuplicates(self: ProcessUltrasound, arg0: bool) None
If true, removes duplicate frames during processing.
- updateGeometry(self: ProcessUltrasound, geom: FrameGeometry, depth: float) None
Updates the geometry with a new FrameGeometry and depth.
- property parameters
Processing parameters.
- class imfusion.ultrasound.ProcessUltrasoundParameters(self: ProcessUltrasoundParameters)
Bases:
pybind11_object
Parameters for processing ultrasound data, such as cropping, masking, and depth adjustment.
- property applyCrop
If true, cropping is applied.
- property applyDepth
If true, depth adjustment is applied.
- property applyMask
If true, masking is applied.
- property depth
Depth value for processing.
- property extraCrop
Extra cropping parameters.
- property extraCropAbsolute
Absolute extra cropping parameters.
- property inpaint
If true, inpainting is applied.
- property removeColorThreshold
If > 0, color pixels are set to zero with given threshold.
- class imfusion.ultrasound.ProcessedUltrasoundStream(self: ProcessedUltrasoundStream, image_stream: ImageStream)
Bases:
ImageStream
Processes live ultrasound streams, providing real-time enhancements and geometry detection.
- apply_preset(self: ProcessedUltrasoundStream, name: str) bool
- detect_geometry(self: ProcessedUltrasoundStream) None
Detects the FrameGeometry and sets it as the current geometry; requires 10 frames.
- init_from_cache(self: ProcessedUltrasoundStream) None
Tries to initialize the ProcessUltrasound instance from the stream cache if the stream is stopped.
- init_geometry(self: ProcessedUltrasoundStream) None
Initializes the geometry to average values if an image has already been streamed in.
- remove_preset(self: ProcessedUltrasoundStream, name: str) bool
- save_preset(self: ProcessedUltrasoundStream, name: str) bool
- set_geometry_to_whole_image(self: ProcessedUltrasoundStream) None
Initializes the geometry to cover the whole image if an image has already been streamed in.
- property automatic_preset_change
- property detecting_geometry
Indicates whether the UltrasoundStream is sampling frames to perform the automatic geometry detection.
- property preset_names
- property processing_parameters
- class imfusion.ultrasound.SweepCalibrator(self: SweepCalibrator)
Bases:
Configurable
Performs calibration of tracked ultrasound sweeps.
- add_tip_of_probe_calibration(*args, **kwargs)
Overloaded function.
add_tip_of_probe_calibration(self: imfusion.ultrasound.SweepCalibrator, matrix: numpy.ndarray[numpy.float64[4, 4]], probe_name: str = '') -> None
Adds a tip-of-probe calibration matrix for a given probe name.
add_tip_of_probe_calibration(self: imfusion.ultrasound.SweepCalibrator, sweep: imfusion.ultrasound.UltrasoundSweep) -> None
Adds a tip-of-probe calibration from a sweep.
- calibrate(self: SweepCalibrator, sweep: UltrasoundSweep) bool
Performs calibration on the given sweep.
- calibration_data_count(self: SweepCalibrator, probe_name: str = '') int
Returns the number of calibration data entries for a given probe name.
- static find_depth(sweep: UltrasoundSweep) float
Finds the imaging depth for a given sweep.
- static find_probe_name(sweep: UltrasoundSweep) str
Finds the probe name for a given sweep.
- remove_calibration_data(self: SweepCalibrator, probe_name: str) None
Removes calibration data for a given probe name.
- rename_calibration_data(self: SweepCalibrator, old_name: str, new_name: str) None
Renames calibration data from old_name to new_name.
- tip_of_probe_calibration(self: SweepCalibrator, probe_name: str = '') ndarray[numpy.float64[4, 4]] | None
Returns the tip-of-probe calibration matrix for a given probe name.
- property forces_no_probe_names
If true, disables probe name checks during calibration.
- property known_probes
List of known probe names with calibration data.
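A minimal, hypothetical sketch of how these methods could be combined on an already loaded, tracked sweep (the load path is elided; the workflow is only illustrative):
>>> import imfusion
>>> from imfusion import ultrasound
>>> sweep, *_ = imfusion.load(...)                      # a tracked UltrasoundSweep
>>> calibrator = ultrasound.SweepCalibrator()
>>> probe = ultrasound.SweepCalibrator.find_probe_name(sweep)
>>> depth = ultrasound.SweepCalibrator.find_depth(sweep)
>>> calibrator.add_tip_of_probe_calibration(sweep)      # register calibration data from the sweep
>>> success = calibrator.calibrate(sweep)               # apply the calibration to the sweep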
- class imfusion.ultrasound.SweepRecorderAlgorithm(self: SweepRecorderAlgorithm, streams: list[Stream])
Bases:
StreamRecorderAlgorithm
Algorithm for recording an ImageStream with modality ULTRASOUND and zero, one, or multiple TrackingStreams into an UltrasoundSweep.
The algorithm calibrates the sweep, sets the FrameGeometry, and selects the TrackingSequence (optionally relative to another one). Data can be stored either during or after the recording process. The algorithm returns the recorded UltrasoundSweep.
- compute_current_bounding_box(self: SweepRecorderAlgorithm) ndarray[numpy.float64[3, 1]]
- reconnect(self: SweepRecorderAlgorithm) bool
- property device_name
- property initial_ref_pos
- property live_stream_recording
- property probe_mesh_tracking_device_index
- property probe_model_path
- property probe_model_reg
- property reference_coordinate_system
- property registration
- property relative_tracking
- property relative_tracking_name
- property sweep_calibrator
- property temporal_offset
- property tracking_ref_pos
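A rough, hypothetical usage sketch; it assumes image_stream and tracking_stream were created elsewhere and that the recorded sweep is exposed through the generic algorithm output() interface:
>>> from imfusion import ultrasound
>>> recorder = ultrasound.SweepRecorderAlgorithm([image_stream, tracking_stream])
>>> recorder.start()                  # begin recording (inherited from StreamRecorderAlgorithm)
>>> # ... frames are acquired while the streams are running ...
>>> recorder.stop()
>>> recorded = recorder.output()      # assumption: the recorded UltrasoundSweep is in the algorithm output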
- class imfusion.ultrasound.UltrasoundDISARegistrationAlgorithm(self: UltrasoundDISARegistrationAlgorithm, arg0: UltrasoundSweep, arg1: SharedImageSet)
Bases:
BaseAlgorithm
Performs deep learning-based registration of ultrasound sweeps.
- class Mode(self: Mode, value: int)
Bases:
pybind11_object
Members:
LOCAL_REG
GLOBAL_REG
- GLOBAL_REG = <Mode.GLOBAL_REG: 1>
- LOCAL_REG = <Mode.LOCAL_REG: 0>
- property name
- property value
- initialize_pose(self: UltrasoundDISARegistrationAlgorithm) None
- prepare(self: UltrasoundDISARegistrationAlgorithm) bool
- GLOBAL_REG = <Mode.GLOBAL_REG: 1>
- LOCAL_REG = <Mode.LOCAL_REG: 0>
- property mode
- property probe_deformation
- property spacing
- class imfusion.ultrasound.UltrasoundMetadata(self: UltrasoundMetadata)
Bases:
DataComponentBase
Holds metadata for an ultrasound frame, such as scan mode, device, probe, and imaging parameters.
- class ScanMode(self: ScanMode, value: int)
Bases:
pybind11_object
Members:
BMODE
PDI
PWD
CFM
THI
MMODE
OTHER
- BMODE = <ScanMode.BMODE: 0>
- CFM = <ScanMode.CFM: 3>
- MMODE = <ScanMode.MMODE: 5>
- OTHER = <ScanMode.OTHER: 6>
- PDI = <ScanMode.PDI: 1>
- PWD = <ScanMode.PWD: 2>
- THI = <ScanMode.THI: 4>
- property name
- property value
- BMODE = <ScanMode.BMODE: 0>
- CFM = <ScanMode.CFM: 3>
- MMODE = <ScanMode.MMODE: 5>
- OTHER = <ScanMode.OTHER: 6>
- PDI = <ScanMode.PDI: 1>
- PWD = <ScanMode.PWD: 2>
- THI = <ScanMode.THI: 4>
- property brightness
Brightness setting.
- property depth
Returns the imaging depth in [mm].
- property device
Device name or identifier.
- property dynamic_range
Dynamic range setting.
- property end_depth
End depth of the imaging region in [mm].
- property focal_depth
Focal depth of the imaging region in [mm].
- property frequency
Imaging frequency.
- property image_enhanced
True if the image is enhanced.
- property number_of_beams
Number of beams in the frame.
- property preset
Imaging preset used for acquisition.
- property probe
Probe name or identifier.
- property samples_per_beam
Number of samples per beam.
- property scan_converted
True if the image is scan-converted.
- property scan_mode
Scan mode of the ultrasound frame.
- property start_depth
Start depth of the imaging region in [mm].
- class imfusion.ultrasound.UltrasoundRegistrationAlgorithm(self: UltrasoundRegistrationAlgorithm, us_volume_or_sweep: SharedImageSet, tomographic_volume: SharedImageSet, distance_volume: SharedImageSet = None)
Bases:
BaseAlgorithm
Registration of an ultrasound sweep or volume to a tomographic scan (CT or MRI).
- class InitializationMode(self: InitializationMode, value: int)
Bases:
pybind11_object
Members:
None
PredictionMaps
DISAGlobal
- DISAGlobal = <InitializationMode.DISAGlobal: 2>
- None = <InitializationMode.None: 0>
- PredictionMaps = <InitializationMode.PredictionMaps: 1>
- property name
- property value
- num_evals(self: UltrasoundRegistrationAlgorithm) int
- output_pre_processed(self: UltrasoundRegistrationAlgorithm) DataList
- prepare_data(self: UltrasoundRegistrationAlgorithm) bool
- property deformation
- property initialization_mode
- property probe_compression
- property relative_sweep_spacing
- property slice_based
- property target_volume_spacing
- property ultrasound_is_moving
- class imfusion.ultrasound.UltrasoundSweep(self: UltrasoundSweep)
Bases:
TrackedSharedImageSet
Set of 2D ultrasound images constituting a 3D (freehand) ultrasound sweep, effectively a clip of 2D ultrasound images with arbitrarily sampled tracking data and additional ultrasound-specific metadata.
- calibrate_tracking(self: UltrasoundSweep, calibration: Properties) None
Performs tracking calibration using the provided calibration data.
- fill_with_slices(self: UltrasoundSweep, image: SharedImage, axis: int) None
Fill sweep with slices from a volume along given axis direction. Axis 0 is unsupported and does nothing. Axis 1 slices along the height axis producing frames of width × slices. Axis 2 slices along the slice axis producing frames of width × height.
- frame_geometry(self: UltrasoundSweep) FrameGeometry
Returns the FrameGeometry associated with this sweep.
- global_bounding_box(self: UltrasoundSweep, use_selection: bool, use_frame_geometry: bool) object
Compute global bounding box in world coordinates. Either only the selected frames or all frames are included (use_selection) and each frame is sized using either the full image dimensions or the frame geometry extent (use_frame_geometry). If the sweep's Selection is empty or use_selection is False, all frames will be considered for the bounding box.
- local_bounding_box(self: UltrasoundSweep, use_selection: bool, use_frame_geometry: bool) object
Compute local bounding box in sweep-aligned coordinates considering the image orientation and sweep direction. Either only the selected frames or all frames are included (use_selection) and each frame is sized using either the full image dimensions or the frame geometry extent (use_frame_geometry). If the sweep's Selection is empty or use_selection is False, all frames will be considered for the bounding box.
- metadata(self: UltrasoundSweep, num: int = -1) UltrasoundMetadata
Returns the metadata for the specified frame (num >= 0) or the focused frame (num == -1).
- property bounds
Returns the axis-aligned bounding box tuple (center, extent, volume, corners, is_valid) of all frames in world space considering the frame geometry, where is_valid is True if none of the LLF/URB components is NaN.
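A small, illustrative sketch (load path elided) of inspecting a loaded sweep with the accessors listed above:
>>> import imfusion
>>> sweep, *_ = imfusion.load(...)                          # a recorded UltrasoundSweep
>>> geom = sweep.frame_geometry()
>>> meta = sweep.metadata(0)                                # UltrasoundMetadata of the first frame
>>> depth_mm, probe_name = meta.depth, meta.probe
>>> bbox = sweep.global_bounding_box(False, True)           # all frames, sized by the frame geometry
>>> center, extent, volume, corners, is_valid = sweep.bounds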
- imfusion.ultrasound.calibrate_ultrasound(sweeps: list[imfusion.ultrasound.UltrasoundSweep], *, optimizer: Optional[imfusion.Optimizer] = None, similarity_measure_config: Optional[imfusion.ultrasound.CalibrationSimilarityMeasureConfig] = <CalibrationSimilarityMeasureConfig mode=2, patch_size=9>, multisweep_mode: int = <CalibrationMultisweepMode.CONCATENATE: 0>, sweep_selection: int = -1, use_zero_mask: bool = True, max_frames: int = 20, min_overlap: int = 0, use_backward_compounding: bool = False) None
Performs freehand ultrasound calibration using overlapping sweeps.
Pairs of ultrasound sweeps are reconstructed into each other’s frames to compute 2D image similarities. The calibration matrix and, optionally, the temporal calibration offset are optimized.
- Reference:
Wolfgang Wein and Ali Khamene, “Image-based method for in-vivo freehand ultrasound calibration.” Medical Imaging 2008: Ultrasonic Imaging and Signal Processing, Vol. 6920, SPIE, 2008. DOI: https://doi.org/10.1117/12.769948 PDF: https://campar.in.tum.de/pub/wein2008uscal/wein2008uscal.pdf
Note
The function updates the sweeps’ calibrations directly (in-place).
The algorithm relies on image similarities, so its capture range is limited similarly to intensity-based image registration. Use with pre-aligned sweeps.
- Parameters:
sweeps – A list of ultrasound sweeps.
optimizer – Optimization algorithm used for non-linear optimization. Default is BBQA.
similarity_measure_config – Similarity measure configuration.
multisweep_mode – Mode of handling multiple sweeps.
sweep_selection – Specifies which sweeps are iterated over during computation:
- 0..n-1: Selects a specific sweep by index. Only this sweep is iterated over, and the other sweep(s) are reconstructed into the location of its frames to compute the 2D image similarity.
- -1: All sweeps are used. For two sweeps, both of their frames are reconstructed into each other’s locations. This is the default and helps to average out the anisotropic point spread function in ultrasound.
- -2: Uses a heuristic initialization based on the assumed geometry of the two sweeps.
use_zero_mask – If true, an internal mask is used for the compounded slices to ignore zero intensities.
max_frames – Maximum number of frames per sweep. (0 is all frames.)
min_overlap – Minimum overlap in percent for penalty.
use_backward_compounding – If true, GPU backward compounding is used (slower, but more precise). Otherwise, orthogonal compounding is used.
- Returns:
None. (Calibration is changed in-place and can be accessed with sweeps[i].tracking().calibration.)
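A hedged example of a typical call, assuming two roughly pre-aligned sweeps have already been loaded (paths elided); the keyword values simply restate the defaults:
>>> import imfusion
>>> from imfusion import ultrasound
>>> sweep_a, *_ = imfusion.load(...)
>>> sweep_b, *_ = imfusion.load(...)
>>> ultrasound.calibrate_ultrasound([sweep_a, sweep_b], max_frames=20, use_zero_mask=True)
>>> calibration = sweep_a.tracking().calibration    # updated in-place, as noted above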
- imfusion.ultrasound.compound_sweep(*args, **kwargs)
Overloaded function.
compound_sweep(sweep: imfusion.ultrasound.UltrasoundSweep, *, mode: imfusion.ultrasound.CompoundingMode = <CompoundingMode.GPU: 0>, bounding_box_mode: imfusion.ultrasound.CompoundingBoundingBoxMode = <CompoundingBoundingBoxMode.HEURISTIC_ALIGNMENT: 2>, vram_limit: Optional[int] = None, spacing: Optional[float] = None, background_intensity: float = 0.0, verbose: bool = False, blur_sweep_border: Optional[float] = None, container_image: imfusion.SharedImageSet = None, **kwargs) -> imfusion.SharedImageSet
Reconstruct a voxel-based 3D volume from an ultrasound sweep.
- Parameters:
sweep – The ultrasound sweep to compound.
mode – The method used for compounding.
bounding_box_mode – The method used for calculating the bounding box.
vram_limit – OpenGL video memory limit in MB for the compounded volume. (Default: 512MB.)
spacing – Isotropic spacing. (If not provided, it will be automatically set.)
background_intensity – Intensity value for the background in the compounded volume.
verbose – If true, enables verbose output during compounding.
blur_sweep_border – Sets the distance from the border over which the compounded sweep is blurred.
container_image – Optional container image to store the compounded volume.
frame_thickness – Frame thickness (mm) in GPU backward compounding mode. (If not provided, it will be automatically set.)
- Returns:
The compounded ultrasound sweep as a SharedImageSet.
compound_sweep(sweeps: list[imfusion.ultrasound.UltrasoundSweep], *, mode: imfusion.ultrasound.CompoundingMode = <CompoundingMode.GPU: 0>, bounding_box_mode: imfusion.ultrasound.CompoundingBoundingBoxMode = <CompoundingBoundingBoxMode.HEURISTIC_ALIGNMENT: 2>, vram_limit: Optional[int] = None, spacing: Optional[float] = None, background_intensity: float = 0.0, verbose: bool = False, blur_sweep_border: Optional[float] = None, container_image: imfusion.SharedImageSet = None, individual_compoundings: bool = False, **kwargs) -> list[imfusion.SharedImageSet]
Reconstructs a joint or individual voxel-based 3D volume(s) from a list of ultrasound sweeps.
- Parameters:
sweeps – The ultrasound sweeps to compound.
mode – The method used for compounding.
bounding_box_mode – The method used for calculating the bounding box.
vram_limit – OpenGL video memory limit in MB for the compounded volume.
spacing – Isotropic spacing, automatically set if zero.
background_intensity – Intensity value for the background in the compounded volume.
verbose – If true, enables verbose output during compounding.
blur_sweep_border – Sets the distance from the border over which the compounded sweep is blurred.
container_image – Optional container image to store the compounded volume.
individual_compoundings – If true, compounds each sweep individually; otherwise, compounds all sweeps into a single volume.
frame_thickness – Frame thickness (mm) in GPU backward compounding mode.
- Returns:
A list of compounded ultrasound sweeps.
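A minimal sketch of both overloads (not from the original reference); the spacing value is a placeholder and the load path is elided:
>>> import imfusion
>>> from imfusion import ultrasound
>>> sweep, *_ = imfusion.load(...)
>>> volume = ultrasound.compound_sweep(sweep, mode=ultrasound.CompoundingMode.GPU, spacing=0.5)
>>> volumes = ultrasound.compound_sweep([sweep], individual_compoundings=True)   # list overload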
- imfusion.ultrasound.process_ultrasound_clip(clip: SharedImageSet, tracking_sequence: TrackingSequence = None, *, frame_geometry: FrameGeometry | None = None, convert_to_sweep: bool = True, use_frame_geometry_calib_correction: bool = False, set_ultrasound_processing_parameters: ProcessUltrasoundParameters | None = None) SharedImageSet
Process an ultrasound clip according to specified parameters.
- Parameters:
clip – Ultrasound image set (clip) to process.
tracking_sequence – Tracking data used when converting to a sweep, optional.
frame_geometry – New frame geometry to apply, optional.
convert_to_sweep – If True, convert the processed clip to an UltrasoundSweep.
use_frame_geometry_calib_correction – If True, apply calibration correction to both frame geometry and tracking.
set_ultrasound_processing_parameters – Optional processing parameters.
- Returns:
The processed ultrasound clip or sweep.
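An illustrative sketch (not part of the original reference) of converting a raw clip into a sweep with custom processing parameters; the clip is assumed to be loaded elsewhere and the tracking sequence is omitted since it is optional:
>>> import imfusion
>>> from imfusion import ultrasound
>>> clip, *_ = imfusion.load(...)                       # raw 2D ultrasound clip
>>> params = ultrasound.ProcessUltrasoundParameters()
>>> params.applyMask = True
>>> params.applyCrop = True
>>> sweep = ultrasound.process_ultrasound_clip(
...     clip, convert_to_sweep=True,
...     set_ultrasound_processing_parameters=params)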
imfusion.vision
The vision module provides methods used for acquiring, processing, analyzing and understanding digital images. This includes (but is not limited to) camera calibration, image and point cloud filtering, feature detection, and mesh reconstruction.
It is strongly recommended to refer to the data structures used in the vision module, namely Mesh, PointCloud and SharedImageSet.
Camera Calibration
The algorithm performs camera calibration based on a set of 2D images, estimating the camera intrinsic matrix, distortion coefficients and transformation vectors by solving the equations that map 3D object points to their corresponding 2D image points in the image plane.
>>> from imfusion import vision
>>> image_set, *_ = imfusion.load(...)
>>> marker_config = vision.marker_configuration.ChessboardInfo()
>>> marker_config.grid_size = (9, 7)
>>> marker_config.cell_size = (30, 30)
>>> vision.calibrate_camera(image_set, marker_config)
If the function executes successfully, it will assign a valid CameraCalibrationDataComponent to the image_set. In our example, it can be extracted by running the following line:
>>> calibration_data = image_set.components.camera_calibration
More information about the marker configuration can be found in the documentation of the ImFusion Suite.
Mesh Alignment (ICP)
The Mesh/PointCloud algorithm minimizes the difference between the source and the target by iteratively refining the transformation (rotation and translation) applied to the source points.
>>> mesh_source, *_ = imfusion.load(...)
>>> mesh_target, *_ = imfusion.load(...)
>>> meshes = [mesh_source, mesh_target]
>>> vision.align(meshes, use_gpu=False, icp_algorithm=vision.AlignmentIcpAlgorithm.FAST_GLOBAL_REGISTRATION_OPEN3D)
[(-1.0, 2.7128763582966438e-17)]
The function align() updates the transformation matrix of the source mesh if ICP is run successfully:
>>> registration_matrix = mesh_source.matrix_to_world()
Ball Pivoting Surface Reconstruction
The function runs the ball-pivoting surface reconstruction [1] to create a mesh from a point cloud.
[1] Bernardini, Fausto, Joshua Mittleman, Holly Rushmeier, Cláudio Silva, and Gabriel Taubin. “The ball-pivoting algorithm for surface reconstruction.” IEEE Transactions on Visualization and Computer Graphics 5, no. 4 (1999): 349-359.
>>> pc, *_ = imfusion.load(...)
>>> mesh = vision.ball_pivoting_surface_reconstruction(pc)
>>> imfusion.save(mesh, tmp_path / "mesh.ply")
- class imfusion.vision.AlignmentIcpAlgorithm(self: AlignmentIcpAlgorithm, value: int)
Bases:
pybind11_object
Members:
PROJECTIVE_POINT_TO_PLANE_ICP
POINT_TO_POINT_ICP_PCL
POINT_TO_PLANE_ICP_PCL
GENERALIZED_ICP
RANSAC_GLOBAL_REGISTRATION_OPEN3D
FAST_GLOBAL_REGISTRATION_OPEN3D
MANUAL_CORRESPONDENCES
- FAST_GLOBAL_REGISTRATION_OPEN3D = <AlignmentIcpAlgorithm.FAST_GLOBAL_REGISTRATION_OPEN3D: 11>
- GENERALIZED_ICP = <AlignmentIcpAlgorithm.GENERALIZED_ICP: 4>
- MANUAL_CORRESPONDENCES = <AlignmentIcpAlgorithm.MANUAL_CORRESPONDENCES: 12>
- POINT_TO_PLANE_ICP_PCL = <AlignmentIcpAlgorithm.POINT_TO_PLANE_ICP_PCL: 3>
- POINT_TO_POINT_ICP_PCL = <AlignmentIcpAlgorithm.POINT_TO_POINT_ICP_PCL: 2>
- PROJECTIVE_POINT_TO_PLANE_ICP = <AlignmentIcpAlgorithm.PROJECTIVE_POINT_TO_PLANE_ICP: 1>
- RANSAC_GLOBAL_REGISTRATION_OPEN3D = <AlignmentIcpAlgorithm.RANSAC_GLOBAL_REGISTRATION_OPEN3D: 10>
- property name
- property value
- class imfusion.vision.CameraCalibrationDataComponent(self: CameraCalibrationDataComponent)
Bases:
DataComponentBase
A data component storing the intrinsic calibration of a pinhole camera.
- property distortion
Distortion coefficients [k1, k2, p1, p2, k3] for the OpenCV distortion model (https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html).
- property image_size
Image size [Width, Height].
- property intrinsic_matrix
Camera intrinsic calibration matrix.
- property mre
Mean reprojection errors.
- property registration
Transformation matrix from the camera to the world coordinate system.
- property stdDistortion
Standard deviations estimated for camera distortion k1, k2, p1, p2, k3.
- property stdK
Standard deviations estimated for intrinsic parameters fx, fy, cx and cy.
- class imfusion.vision.CameraCalibrationLensModel(self: CameraCalibrationLensModel, value: int)
Bases:
pybind11_object
Members:
STANDARD
FISHEYE
- FISHEYE = <CameraCalibrationLensModel.FISHEYE: 1>
- STANDARD = <CameraCalibrationLensModel.STANDARD: 0>
- property name
- property value
- class imfusion.vision.CameraCalibrationResult
Bases:
pybind11_object
Holds camera calibration results. Cannot be modified.
- property camera_poses
Camera poses (from world to camera) for each frame used in the calibration.
- property distortion_vector
The camera distortion parameters [k1, k2, p1, p2, k3]. Can also be accessed via CameraCalibrationDataComponent.
- property image_points
Detected marker corner points on the image.
- property intrinsic_matrix
The camera intrinsic matrix. Can also be accessed via CameraCalibrationDataComponent.
- property marker_points
List of the board corner points in 3D.
- property mre
Mean reprojection errors for calibration.
- property reprojected_image_points
Reprojected marker corner points on the image.
- property reprojection_errors
Reprojection errors for all given images.
- property selected_boards
Contains the indices of the selected images if use_auto_selection is True. Otherwise, it contains all indices.
- property std_distortion
Standard deviation for camera distortion.
- property std_intrinsic
Standard deviation for camera intrinsics parameters.
- class imfusion.vision.CameraCalibrationSettings(self: imfusion.vision.CameraCalibrationSettings, *, lens_model: imfusion.vision.CameraCalibrationLensModel = <CameraCalibrationLensModel.STANDARD: 0>, fix_principal_point: bool = False, fix_aspect_ratio: bool = False, zero_radial_distortion: bool = False, fix_radial_distortion: Annotated[list[bool], FixedSize(3)] = [False, False, False], zero_tangential_distortion: bool = False, recalibrate_with_inliers: bool = True, stereo_same_focal_length: bool = False, use_auto_selection: bool = True, reprojection_error_threshold: float = 2.0, min_detections: int = 8, max_selection: int = 50)
Bases:
Configurable
Specifies parameters for camera calibration.
- Parameters:
lens_model – The lens model to use for calibration.
fix_principal_point – Specifies whether the principal point should remain unchanged during the calibration (stays in the image center if no initial intrinsics provided).
fix_aspect_ratio – The ratio fx/fy stays the same as in the initial intrinsics.
zero_radial_distortion – Radial distortion coefficients [k1, k2, k3] are set to zero and stay zero if this is enabled.
fix_radial_distortion – Specifies which radial distortion coefficients [k1, k2, k3] should remain unchanged during calibration.
zero_tangential_distortion – If True, tangential distortion coefficients [p1, p2] are set to zero and remain unchanged.
recalibrate_with_inliers – If True, recalibration is performed using images with a mean reprojection error below the threshold specified by reprojection_error_threshold.
stereo_same_focal_length – Enforces that both cameras have the same focal length in x and y directions during stereo calibration.
use_auto_selection – If True, a subset of images will be automatically selected, instead of using all images.
reprojection_error_threshold – Maximum reprojection error for inliers used in recalibration when recalibrate_with_inliers is enabled.
min_detections – The minimum number of detected points required per image for calibration.
max_selection – Maximum number of images to select when automatic selection (use_auto_selection) is enabled.
- property fix_aspect_ratio
The ratio fx/fy stays the same as in the initial intrinsics.
- property fix_principal_point
Specifies whether the principal point should remain unchanged during the calibration (stays in the image center if no initial intrinsics provided).
- property fix_radial_distortion
Specifies which radial distortion coefficients [k1, k2, k3] should remain unchanged during calibration.
- property lens_model
The lens model to use for calibration.
- property max_selection
Maximum number of images to select when automatic selection (use_auto_selection) is enabled.
- property min_detections
The minimum number of detected points required per image for calibration.
- property recalibrate_with_inliers
If True, recalibration is performed using images with a mean reprojection error below the threshold specified by reprojection_error_threshold.
- property reprojection_error_threshold
Maximum reprojection error for inliers used in recalibration when recalibrate_with_inliers is enabled.
- property stereo_same_focal_length
Enforces that both cameras have the same focal length in x and y directions during stereo calibration.
- property use_auto_selection
If True, a subset of images will be automatically selected, instead of using all images.
- property zero_radial_distortion
Radial distortion coefficients [k1, k2, k3] are set to zero and stay zero if this is enabled.
- property zero_tangential_distortion
If True, tangential distortion coefficients [p1, p2] are set to zero and remain unchanged.
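A hedged example of passing a settings object to calibrate_camera(); it reuses image_set and marker_config from the Camera Calibration example at the top of this module, and the parameter values are placeholders:
>>> from imfusion import vision
>>> settings = vision.CameraCalibrationSettings(
...     lens_model=vision.CameraCalibrationLensModel.STANDARD,
...     use_auto_selection=True,
...     max_selection=30,
...     reprojection_error_threshold=1.5)
>>> result = vision.calibrate_camera(image_set, marker_config, settings)
>>> K, mre = result.intrinsic_matrix, result.mre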
- class imfusion.vision.HandEyeCalibrationMethod(self: HandEyeCalibrationMethod, value: int)
Bases:
pybind11_object
Members:
TSAI_LENZ
GLOBAL_MLSL
- GLOBAL_MLSL = <HandEyeCalibrationMethod.GLOBAL_MLSL: 1>
- TSAI_LENZ = <HandEyeCalibrationMethod.TSAI_LENZ: 0>
- property name
- property value
- class imfusion.vision.HandEyeCalibrationStreamInfo(self: HandEyeCalibrationStreamInfo, value: int)
Bases:
pybind11_object
Members:
CALIB_TO_CAM
CAM_TO_CALIB
HAND_TO_BASE
BASE_TO_HAND
- BASE_TO_HAND = <HandEyeCalibrationStreamInfo.BASE_TO_HAND: 3>
- CALIB_TO_CAM = <HandEyeCalibrationStreamInfo.CALIB_TO_CAM: 0>
- CAM_TO_CALIB = <HandEyeCalibrationStreamInfo.CAM_TO_CALIB: 1>
- HAND_TO_BASE = <HandEyeCalibrationStreamInfo.HAND_TO_BASE: 2>
- property name
- property value
- class imfusion.vision.HandEyeCalibrationType(self: HandEyeCalibrationType, value: int)
Bases:
pybind11_object
Members:
EYE_IN_HAND
EYE_ON_BASE
- EYE_IN_HAND = <HandEyeCalibrationType.EYE_IN_HAND: 0>
- EYE_ON_BASE = <HandEyeCalibrationType.EYE_ON_BASE: 1>
- property name
- property value
- class imfusion.vision.MLModelType(self: MLModelType, value: int)
Bases:
pybind11_object
The type of the runtime Machine Learning model, used in the vision module.
Members:
TORCH_FP32
ONNX_FP32
TENSORRT_FP32
TORCH_FP16
ONNX_FP16
TENSORRT_FP16
- ONNX_FP16 = <MLModelType.ONNX_FP16: 4>
- ONNX_FP32 = <MLModelType.ONNX_FP32: 1>
- TENSORRT_FP16 = <MLModelType.TENSORRT_FP16: 5>
- TENSORRT_FP32 = <MLModelType.TENSORRT_FP32: 2>
- TORCH_FP16 = <MLModelType.TORCH_FP16: 3>
- TORCH_FP32 = <MLModelType.TORCH_FP32: 0>
- property name
- property value
- class imfusion.vision.MarkerDetectionResult
Bases:
pybind11_object
Holds results of marker detection. Cannot be modified.
- property image_points
Detected corner points on each image.
- property object_points
Object points for each image.
- property poses
World to camera transformations for each camera.
- property reprojected_image_points
Reprojected detected points.
- class imfusion.vision.MeshGroupCollisionDetection(self: MeshGroupCollisionDetection)
Bases:
pybind11_object
Check for collisions between two groups of meshes. This class allows defining groups of Mesh objects and verifying whether any Mesh from one group collides with any Mesh from another group. It is also possible to specify a minimum safety distance that must be kept between the two models to avoid a collision. When the meshes are passed to the algorithm, their format is converted (this is an expensive operation); subsequent calls to the algorithm reuse this optimized format. For optimal performance, it is advised to specify a new position for each mesh if the respective object has moved, without resetting the mesh itself. It is also possible to check for collisions between a single mesh and a whole group, or between two single meshes. Each particular case is optimized. To avoid false positives, the mesh should not belong to the other group, or to the group of the other mesh.
- add_mesh_to_group(self: MeshGroupCollisionDetection, mesh: Mesh, id: int) int
Adds a Mesh to a group.
- Parameters:
mesh – The mesh to be added (its lifetime must be longer than that of this class).
id – An integer ID that identifies the group.
- do_groups_collide(self: MeshGroupCollisionDetection, group_id1: int, group_id2: int, safety_distance: float = 0) bool
Check for compenetration between the two mesh groups.
- do_meshes_collide(self: MeshGroupCollisionDetection, mesh_id1: int, mesh_id2: int, safety_distance: float = 0) bool
Check for compenetration between two single meshes.
- mesh_min_distance(self: MeshGroupCollisionDetection, mesh_id: int, group_id: int) float
Returns the minimum distance between a mesh and a group.
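A hypothetical sketch; it assumes the integer returned by add_mesh_to_group() is the mesh id expected by do_meshes_collide(), and that the meshes are loaded from elided paths:
>>> import imfusion
>>> from imfusion import vision
>>> mesh_a, *_ = imfusion.load(...)
>>> mesh_b, *_ = imfusion.load(...)
>>> detector = vision.MeshGroupCollisionDetection()
>>> id_a = detector.add_mesh_to_group(mesh_a, 0)              # group 0
>>> id_b = detector.add_mesh_to_group(mesh_b, 1)              # group 1
>>> groups_collide = detector.do_groups_collide(0, 1, safety_distance=2.0)
>>> meshes_collide = detector.do_meshes_collide(id_a, id_b)   # assumption: ids from add_mesh_to_group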
- class imfusion.vision.PointCloudsOverlap(self: PointCloudsOverlap, clouds: list[PointCloud], compute_bidirectional_overlap: bool = True, compute_rms: bool = False, matching_distance_threshold: float = 5.0, matching_angle_threshold: float = 180.0)
Bases:
BaseAlgorithm
Computes the pairwise overlap of a set of dense point clouds.
- Parameters:
clouds – A list of point clouds to compute the overlap for.
compute_bidirectional_overlap – If set to False, the overlap from point cloud ‘i’ to ‘j’ will be the same as from ‘j’ to ‘i’.
compute_rms – If set to True, the root mean square error is also calculated when building the overlap map.
matching_distance_threshold – Maximum distance in mm for points in different point clouds to be considered the same one.
matching_angle_threshold – Maximum normal angle in degrees for points in different point clouds to be considered the same one.
- compute_overlap(self: PointCloudsOverlap, export_overlap_point_clouds: bool = False) list[PointCloud]
Builds the internal overlap map and optionally returns the overlapping points as point clouds.
- Parameters:
export_overlap_point_clouds – if True, return a list of overlap point_clouds.
- Returns:
overlap point_clouds.
- pair_overlap(self: PointCloudsOverlap, idx_source: int, idx_target: int) tuple[float, list[bool], float]
Returns the overlap between the source and target point clouds. compute_overlap must be called before this function.
- Parameters:
idx_source – index of source point cloud.
idx_target – index of target point cloud.
- Returns:
overlap factor in range [0;1].
a boolean list that is True for each source point that overlaps with the target point cloud.
RMS error; -1 if compute_rms was set to False.
- property compute_bidirectional_overlap
If set to False, the overlap from point cloud ‘i’ to ‘j’ will be the same as from ‘j’ to ‘i’.
- property compute_rms
If set to True, the root mean square error is also calculated when building the overlap map.
- property matching_angle_threshold
Maximum normal angle in degrees for points in different point clouds to be considered the same one.
- property matching_distance_threshold
Maximum distance in mm for points in different point clouds to be considered the same one.
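A short, illustrative sketch of computing the pairwise overlap of two loaded point clouds (paths elided):
>>> import imfusion
>>> from imfusion import vision
>>> pc_a, *_ = imfusion.load(...)
>>> pc_b, *_ = imfusion.load(...)
>>> overlap = vision.PointCloudsOverlap([pc_a, pc_b], compute_rms=True)
>>> overlap_clouds = overlap.compute_overlap()       # build the internal overlap map first
>>> factor, mask, rms = overlap.pair_overlap(0, 1)   # overlap of cloud 0 with cloud 1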
- class imfusion.vision.PoissonReconstructionColorSource(self: PoissonReconstructionColorSource, value: int)
Bases:
pybind11_object
Members:
COLOR : Take the color information from point-cloud.
NO_COLOR : Do not use any color information.
DENSITY : Colormap the density information.
- COLOR = <PoissonReconstructionColorSource.COLOR: 0>
- DENSITY = <PoissonReconstructionColorSource.DENSITY: 2>
- NO_COLOR = <PoissonReconstructionColorSource.NO_COLOR: 1>
- property name
- property value
- class imfusion.vision.PoissonReconstructionDensityThresholdMode(self: PoissonReconstructionDensityThresholdMode, value: int)
Bases:
pybind11_object
Members:
NONE : No thresholding is applied on the vertices, all contribute to the final mesh.
QUANTILE_DENSITY : Applies alpha trimming on the vertices that are below a certain quantile in the density histogram. The quantile value is controlled by PoissonParams.densityThreshold.
MEDIAN_DENSITY : Applies binary thresholding on the vertices that have a lower density than a certain percentage of the median value of the density histogram. The percentage value is controlled by PoissonParams.medianDensityPercentageThreshold.
ABSOLUTE_DENSITY : Uses absolute density threshold. The value is controlled by PoissonParams.absoluteDensityThreshold.
- ABSOLUTE_DENSITY = <PoissonReconstructionDensityThresholdMode.ABSOLUTE_DENSITY: 3>
- MEDIAN_DENSITY = <PoissonReconstructionDensityThresholdMode.MEDIAN_DENSITY: 2>
- NONE = <PoissonReconstructionDensityThresholdMode.NONE: 0>
- QUANTILE_DENSITY = <PoissonReconstructionDensityThresholdMode.QUANTILE_DENSITY: 1>
- property name
- property value
- class imfusion.vision.PoseGraphOptimization
Bases:
pybind11_object
- class Constraint
Bases:
pybind11_object
Measurements to constrain estimation targets.
- property from
- property information
- property to
- property transform
- class LeastSquaresSolution(self: imfusion.vision.PoseGraphOptimization.LeastSquaresSolution, *, robust_kernel: imfusion.vision.PoseGraphOptimization.LeastSquaresSolution.RobustKernel = <RobustKernel.HUBER: 5>, robust_kernel_delta: float = 2.7955321496988725, apply_spanning_tree_initialization: bool = True)
Bases:
PoseGraphOptimization
Pose graph optimization with Least Squares solution based on G2O’s implementation.
- Parameters:
robust_kernel – Robust kernel for the least squares method.
robust_kernel_delta – Sets the window size of the error. A squared error above delta^2 is considered an outlier in the data.
apply_spanning_tree_initialization – If set to True, a spanning tree is used for initialization.
- class RobustKernel(self: RobustKernel, value: int)
Bases:
pybind11_object
Members:
NONE
CAUCHY
DCS
GEMAN_MC_CLURE
FAIR
HUBER
PSEUDO_HUBER
SATURATED
SCALE_DELTA
TUKEY
WELSCH
- CAUCHY = <RobustKernel.CAUCHY: 1>
- DCS = <RobustKernel.DCS: 2>
- FAIR = <RobustKernel.FAIR: 4>
- GEMAN_MC_CLURE = <RobustKernel.GEMAN_MC_CLURE: 3>
- HUBER = <RobustKernel.HUBER: 5>
- NONE = <RobustKernel.NONE: 0>
- PSEUDO_HUBER = <RobustKernel.PSEUDO_HUBER: 6>
- SATURATED = <RobustKernel.SATURATED: 7>
- SCALE_DELTA = <RobustKernel.SCALE_DELTA: 8>
- TUKEY = <RobustKernel.TUKEY: 9>
- WELSCH = <RobustKernel.WELSCH: 10>
- property name
- property value
- CAUCHY = <RobustKernel.CAUCHY: 1>
- DCS = <RobustKernel.DCS: 2>
- FAIR = <RobustKernel.FAIR: 4>
- GEMAN_MC_CLURE = <RobustKernel.GEMAN_MC_CLURE: 3>
- HUBER = <RobustKernel.HUBER: 5>
- NONE = <RobustKernel.NONE: 0>
- PSEUDO_HUBER = <RobustKernel.PSEUDO_HUBER: 6>
- SATURATED = <RobustKernel.SATURATED: 7>
- SCALE_DELTA = <RobustKernel.SCALE_DELTA: 8>
- TUKEY = <RobustKernel.TUKEY: 9>
- WELSCH = <RobustKernel.WELSCH: 10>
- property apply_spanning_tree_initialization
- property robust_kernel
- property robust_kernel_delta
- add_constraint(*args, **kwargs)
Overloaded function.
add_constraint(self: imfusion.vision.PoseGraphOptimization, arg0: imfusion.vision.PoseGraphOptimization.Constraint) -> None
Add constraint between keyframe nodes. Nodes must exist. Raises an exception if constraint was not added.
add_constraint(self: imfusion.vision.PoseGraphOptimization, from: int, to: int, transform: numpy.ndarray[numpy.float64[4, 4]], information: numpy.ndarray[numpy.float64[6, 6]]) -> None
Add constraint between keyframe nodes. Nodes must exist. Raises an exception if constraint was not added.
- Parameters:
from – source node.
to – target node.
transform – Transformation between “from” and “to” nodes.
information – Inverse covariance matrix of the pose represented in the format (tx,ty,tz,qx,qy,qz) with (qx,qy,qz) being a unit quaternion.
- add_pose(self: PoseGraphOptimization, id: int, transform: ndarray[numpy.float64[4, 4]], fixed: bool = False) None
Add keyframe node with given pose. Raises an exception if node already existed.
- Parameters:
id – id of the node.
transform – transformation matrix.
fixed – if set to True, then the added pose is fixed during optimization.
- compute_connected_components(self: PoseGraphOptimization) list[int]
Gives an index for each pose/node indicating to which component the pose/node belongs (Disjoint Set Union).
- is_complete(self: PoseGraphOptimization) bool
Tests whether the pose graph is complete, i.e. there is only one connected component.
- num_connected_components(self: PoseGraphOptimization) int
Gives the number of independent components of the added graph by applying Disjoint Set Union to search for the number of connected components.
- optimize(self: PoseGraphOptimization) dict[int, ndarray[numpy.float64[4, 4]]]
Runs pose graph optimization and returns a dict of updated poses and their IDs.
- property constraints
Get the list of constraints.
- property gain_threshold
Error improvement across iterations below which to abort the optimization.
- property max_iter
Maximum number of iterations to run the optimizer for.
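A minimal sketch of building and optimizing a tiny two-node graph with the least squares solver; the relative transform and the (identity) information matrix are placeholders:
>>> import numpy as np
>>> from imfusion import vision
>>> pgo = vision.PoseGraphOptimization.LeastSquaresSolution()
>>> pgo.add_pose(0, np.eye(4), fixed=True)      # anchor the first keyframe
>>> pgo.add_pose(1, np.eye(4))
>>> rel = np.eye(4)
>>> rel[0, 3] = 10.0                            # placeholder measured relative transform
>>> pgo.add_constraint(0, 1, rel, np.eye(6))    # placeholder information matrix
>>> updated = pgo.optimize()                    # dict mapping node id to optimized 4x4 pose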
- class imfusion.vision.PoseGraphOptimizationMode(self: PoseGraphOptimizationMode, value: int)
Bases:
pybind11_object
Members:
MOTION_AVERAGING
LEAST_SQUARES
- LEAST_SQUARES = <PoseGraphOptimizationMode.LEAST_SQUARES: 1>
- MOTION_AVERAGING = <PoseGraphOptimizationMode.MOTION_AVERAGING: 0>
- property name
- property value
- class imfusion.vision.TrackingSequenceStatisticsArray2(self: TrackingSequenceStatisticsArray2, *, mean: ndarray[numpy.float64[2, 1]] = array([0., 0.]), rmse: ndarray[numpy.float64[2, 1]] = array([0., 0.]), median: ndarray[numpy.float64[2, 1]] = array([0., 0.]), std: ndarray[numpy.float64[2, 1]] = array([0., 0.]), min: ndarray[numpy.float64[2, 1]] = array([0., 0.]), max: ndarray[numpy.float64[2, 1]] = array([0., 0.]))
Bases:
pybind11_object
Rotation and translation tracking sequence comparison statistic measures.
- abs_values(self: TrackingSequenceStatisticsArray2) TrackingSequenceStatisticsArray2
- to_string(self: TrackingSequenceStatisticsArray2) str
- property max
- property mean
- property median
- property min
- property rmse
- property std
- class imfusion.vision.TrackingSequenceStatisticsFloat(self: TrackingSequenceStatisticsFloat, *, mean: float = 0.0, rmse: float = 0.0, median: float = 0.0, std: float = 0.0, min: float = 0.0, max: float = 0.0)
Bases:
pybind11_object
Basic tracking sequence comparison statistic measures.
- abs_values(self: TrackingSequenceStatisticsFloat) TrackingSequenceStatisticsFloat
- to_string(self: TrackingSequenceStatisticsFloat) str
- property max
- property mean
- property median
- property min
- property rmse
- property std
- imfusion.vision.align(point_clouds_or_meshes: list[imfusion.Data], *, use_gpu: bool = True, max_icp_iterations: int = 30, icp_algorithm: imfusion.vision.AlignmentIcpAlgorithm = <AlignmentIcpAlgorithm.POINT_TO_PLANE_ICP_PCL: 3>, use_reciprocal_correspondences: bool = False, max_correspondence_distance: float = 50.0, max_correspondence_angle: float = 70.0, abort_parameter_tolerance: float = 1e-06, overlap_ratio: float = 1.0, voxel_size: float = 30.0, correspondences: list[list[numpy.ndarray[numpy.float64[3, 1]]]] = []) list[tuple[float, float]]
This algorithm performs point_cloud to point_cloud, mesh to point_cloud, point_cloud to mesh and mesh to mesh 3D rigid registration. The algorithm internally updates the data matrices such that the input point clouds/meshes align better in an error metric sense. The last mesh/point cloud is taken as the reference, and all the others are sequentially aligned to it (e.g., with 3 objects: 2nd to 3rd, then 1st to 2nd). For more info, refer to this page.
- Parameters:
point_clouds_or_meshes – list of input point_clouds or meshes.
use_gpu – option attempts to improve performance by moving some of the computation onto the GPU.
max_icp_iterations – Defines a maximum number of ICP iterations.
icp_algorithm – type of ICP algorithm to use.
use_reciprocal_correspondences – If true, the point pair p1i and p2j will be considered a match only if the best correspondence for p1i is p2j and the best correspondence for p2j is p1i.
max_correspondence_distance – Defines a distance threshold for the points to be matched.
max_correspondence_angle – Defines a threshold for the angle between the point normals for the points to be matched.
abort_parameter_tolerance – Defines a threshold for the matrix update. If the update is too small, the iterative procedure stops.
overlap_ratio – If the overlap ratio r is >0 and <1, the maximum correspondence distance will be re-computed by taking the distance of the r-th match from the list of sorted initial matches.
voxel_size – Defines a voxel size for downsampling the input data.
correspondences – Set corresponding points for each point cloud. In the MANUAL_CORRESPONDENCES mode, point clouds will be aligned based on the alignment of these corresponding points.
- Returns:
RMS errors before and after alignment for each point cloud and mesh.
- imfusion.vision.ball_pivoting_surface_reconstruction(point_cloud: PointCloud) Mesh
Algorithm for computing a mesh from a point cloud with normals, based on the Open3D ball-pivoting implementation.
- Parameters:
point_cloud – input point cloud with normals.
- Returns:
result mesh.
- imfusion.vision.calibrate_camera(*args, **kwargs)
Overloaded function.
calibrate_camera(images: imfusion.SharedImageSet, marker_configuration: imfusion.vision.marker_configuration.ChessboardInfo, calibration_settings: imfusion.vision.CameraCalibrationSettings = None) -> imfusion.vision.CameraCalibrationResult
calibrate_camera(images: imfusion.SharedImageSet, marker_configuration: imfusion.vision.marker_configuration.CharucoBoardInfo, calibration_settings: imfusion.vision.CameraCalibrationSettings = None) -> imfusion.vision.CameraCalibrationResult
calibrate_camera(images: imfusion.SharedImageSet, marker_configuration: imfusion.vision.marker_configuration.ArucoBoardInfo, calibration_settings: imfusion.vision.CameraCalibrationSettings = None) -> imfusion.vision.CameraCalibrationResult
calibrate_camera(images: imfusion.SharedImageSet, marker_configuration: imfusion.vision.marker_configuration.CircleBoardInfo, calibration_settings: imfusion.vision.CameraCalibrationSettings = None) -> imfusion.vision.CameraCalibrationResult
calibrate_camera(images: imfusion.SharedImageSet, marker_configuration: list[imfusion.vision.marker_configuration.AprilTagInfo], calibration_settings: imfusion.vision.CameraCalibrationSettings = None) -> imfusion.vision.CameraCalibrationResult
calibrate_camera(images: imfusion.SharedImageSet, marker_configuration: imfusion.vision.marker_configuration.AprilTagBoardInfo, calibration_settings: imfusion.vision.CameraCalibrationSettings = None) -> imfusion.vision.CameraCalibrationResult
calibrate_camera(images: imfusion.SharedImageSet, marker_configuration: list[imfusion.vision.marker_configuration.STagInfo], calibration_settings: imfusion.vision.CameraCalibrationSettings = None) -> imfusion.vision.CameraCalibrationResult
calibrate_camera(images: imfusion.SharedImageSet, marker_configuration: str, calibration_settings: imfusion.vision.CameraCalibrationSettings = None) -> imfusion.vision.CameraCalibrationResult
Performs camera calibration and generates the intrinsic matrix, distortion coefficients and other useful information. Supports multiple types of boards. For an overview, please refer to the documentation of the marker_configuration module. Upon successful calibration, this function assigns a camera_calibration data component to the input shared_image_set.
- Parameters:
images – Image set showing a camera calibration target as input. Only 8-bit grayscale and RGB images are supported.
marker_configuration – specifies parameters for different types of single markers or marker boards. Or a path to a valid xml configuration file.
calibration_settings – parameters used to configure the settings required for calibrating a camera.
- Returns:
camera calibration result
- imfusion.vision.compare_tracking_sequences(reference: TrackingSequence, input: TrackingSequence, *, iter_delta_rpe: int = 1, use_auto_alignment: bool = True, use_interpolation: bool = True, use_timestamp_filtering_ate: bool = True, use_timestamp_filtering_rpe: bool = True, timestamp_filtering_ate_max_distance: float = 20, allow_duplicates: bool = False) tuple[TrackingSequenceStatisticsFloat, TrackingSequenceStatisticsArray2]
Algorithm to compare two TrackingSequences in terms of relative pose error (RPE) and absolute trajectory error (ATE). The ATE is well-suited for measuring the performance of visual SLAM systems. In contrast, the RPE is well-suited for measuring the drift of a visual odometry system, for example the drift per second. References: TUM-Benchmark-Tools. TUM-Benchmark-Article.
- Parameters:
reference – reference tracking sequence.
input – target tracking sequence, which is being compared to the reference.
iter_delta_rpe – only consider pose pairs that have a distance of delta (in terms of number of frames).
use_auto_alignment – align the tracking sequences before comparing.
use_interpolation – compare with interpolated poses instead of the closest match.
use_timestamp_filtering_ate – when searching for ATE correspondences between two tracking sequences, filter out the ones with large timestamp differences.
use_timestamp_filtering_rpe – when searching for RPE correspondences between two tracking sequences, filter out the ones with timestamp differences greater than the max_diff_two_consecutive_timestamps of the source tracking sequence.
timestamp_filtering_ate_max_distance – threshold used for filtering correspondences when use_timestamp_filtering_ate is set to True.
allow_duplicates – allow different poses to have the same timestamps; only the closest would be used.
- Returns:
absolute trajectory error statistics.
relative pose error statistics array 2 (angle error, translational error).
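An illustrative call, assuming reference_seq and estimated_seq are TrackingSequence objects obtained elsewhere (both names are placeholders):
>>> from imfusion import vision
>>> ate, rpe = vision.compare_tracking_sequences(reference_seq, estimated_seq, iter_delta_rpe=1)
>>> ate_rmse = ate.rmse            # absolute trajectory error statistics
>>> rpe_mean = rpe.mean            # 2-vector: (angular error, translational error)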
- imfusion.vision.crop_meshes(meshes: list[Mesh], discard_inside: bool = False, close_resulting_holes: bool = False, box_center: ndarray[numpy.float64[3, 1]] | None = array([0., 0., 0.]), box_extent: ndarray[numpy.float64[3, 1]] | None = array([0., 0., 0.]), box_transform: ndarray[numpy.float64[4, 4]] | None = array([[1., 0., 0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]])) None
Removes vertices and polygons in a box-shaped region of a mesh. Operates in-place.
- Parameters:
meshes – list of mesh objects.
discard_inside – if True, discards the points inside the cropping box instead.
close_resulting_holes – if True, closes the mesh on the cutting planes, only affects meshes.
box_center – center of the cropping box.
box_extent – extent of the cropping box.
box_transform – transformation of the cropping box.
- imfusion.vision.crop_point_clouds(point_clouds: list[PointCloud], discard_inside: bool = False, box_center: ndarray[numpy.float64[3, 1]] | None = None, box_extent: ndarray[numpy.float64[3, 1]] | None = None, box_transform: ndarray[numpy.float64[4, 4]] | None = None) None
Removes vertices in a box-shaped region of a point_cloud. Operates in-place.
- Parameters:
point_clouds – list of point_cloud objects.
discard_inside – if True, discards the points inside the cropping box instead.
box_center – center of the cropping box.
box_extent – extent of the cropping box.
box_transform – transformation of the cropping box.
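A hedged sketch of cropping a loaded point cloud in-place to a box around the origin; the box parameters are placeholders and the load path is elided:
>>> import numpy as np
>>> import imfusion
>>> from imfusion import vision
>>> pc, *_ = imfusion.load(...)
>>> vision.crop_point_clouds([pc], discard_inside=False,
...                          box_center=np.array([0.0, 0.0, 0.0]),
...                          box_extent=np.array([100.0, 100.0, 100.0]))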
- imfusion.vision.detect_markers(*args, **kwargs)
Overloaded function.
detect_markers(images: imfusion.SharedImageSet, marker_configuration: imfusion.vision.marker_configuration.ChessboardInfo) -> imfusion.vision.MarkerDetectionResult
detect_markers(images: imfusion.SharedImageSet, marker_configuration: list[imfusion.vision.marker_configuration.ArucoMarkerInfo]) -> imfusion.vision.MarkerDetectionResult
detect_markers(images: imfusion.SharedImageSet, marker_configuration: imfusion.vision.marker_configuration.CharucoBoardInfo) -> imfusion.vision.MarkerDetectionResult
detect_markers(images: imfusion.SharedImageSet, marker_configuration: imfusion.vision.marker_configuration.ArucoBoardInfo) -> imfusion.vision.MarkerDetectionResult
detect_markers(images: imfusion.SharedImageSet, marker_configuration: imfusion.vision.marker_configuration.CircleBoardInfo) -> imfusion.vision.MarkerDetectionResult
detect_markers(images: imfusion.SharedImageSet, marker_configuration: list[imfusion.vision.marker_configuration.AprilTagInfo]) -> imfusion.vision.MarkerDetectionResult
detect_markers(images: imfusion.SharedImageSet, marker_configuration: imfusion.vision.marker_configuration.AprilTagBoardInfo) -> imfusion.vision.MarkerDetectionResult
detect_markers(images: imfusion.SharedImageSet, marker_configuration: list[imfusion.vision.marker_configuration.STagInfo]) -> imfusion.vision.MarkerDetectionResult
detect_markers(images: imfusion.SharedImageSet, marker_configuration: str) -> imfusion.vision.MarkerDetectionResult
The algorithm performs detection of calibration markers. Takes a single 2D image set showing a camera calibration target as input. Only 8-bit grayscale and RGB images are supported. To compute the camera poses, the algorithm requires camera intrinsics of the SharedImageSet, in the form of a CameraCalibrationDataComponent.
- Parameters:
images – input image set.
marker_configuration – specifies parameters for different types of single markers or marker boards, or a path to a valid xml configuration file.
- Returns:
marker detection result class
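Example
A hedged sketch, assuming images is a 2D imfusion.SharedImageSet (with a CameraCalibrationDataComponent if camera poses are needed) showing a chessboard target; the board values are illustrative and the exact property types of ChessboardInfo may differ.
>>> board = imfusion.vision.marker_configuration.ChessboardInfo()
>>> board.grid_size = [7, 5]   # inner corners per row/column (illustrative)
>>> board.cell_size = 10.0     # square size (illustrative)
>>> result = imfusion.vision.detect_markers(images, board)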
- imfusion.vision.detect_mesh_collision(mesh1: Mesh, mesh2: Mesh, safety_distance: float = 0) bool
Checks for collision between two meshes.
- Parameters:
mesh1 – input first mesh.
mesh2 – input second mesh.
safety_distance – minimum distance between each point of the two Mesh objects to avoid a collision state.
- Returns:
returns True if collision occurred.
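Example
A minimal sketch, assuming mesh_a and mesh_b are existing imfusion.Mesh objects.
>>> collides = imfusion.vision.detect_mesh_collision(mesh_a, mesh_b, safety_distance=1.0)
>>> if collides:
...     print("meshes are closer than the safety distance")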
- imfusion.vision.estimate_image_sharpness(images: SharedImageSet, blur_kernel_half_size: int = 4) list[float]
Performs image sharpness estimation on a set of 2D images, by calculating blur annoyance coefficient, ranging between 0 and 1 for each input image, where 0 indicates the sharpest quality (least blur annoyance) and 1 indicates the worst quality (highest blur annoyance).
- Parameters:
images – input images.
blur_kernel_half_size – Defines half the width of the blur filter kernel, which determines how many neighboring pixels are considered when applying the blur.
- Returns:
Blur annoyance coefficients for each input image.
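Example
A minimal sketch, assuming images is an existing 2D imfusion.SharedImageSet.
>>> coeffs = imfusion.vision.estimate_image_sharpness(images, blur_kernel_half_size=4)
>>> sharpest = min(range(len(coeffs)), key=lambda i: coeffs[i])  # 0 = least blur annoyance
>>> print(f"sharpest frame: {sharpest} ({coeffs[sharpest]:.3f})")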
- imfusion.vision.extract_thresholding_fiducials(images: SharedImageSet, *, use_blob_detector: bool = False, use_auto_threshold: bool = False, threshold: float = 0.7, min_blob_size: int = 4, export_cc_images: bool = False) tuple[list[list[ndarray[numpy.float64[2, 1]]]], SharedImageSet]
Extract fiducials from 2D IR images by thresholding and weighted averaging.
- Parameters:
images – input images.
use_blob_detector – sets whether to use blob detector. The blob detector is based on the OpenCV blob detector and does a multiple level thresholding.
use_auto_threshold – set whether to use auto thresholding. In this case the threshold parameter is multiplied with the maximum image intensity to obtain the threshold, otherwise it is multiplied with the image data type max value. This value is not used with the blob detector.
threshold – set threshold in range [0;1] as percentage of maximum intensity value. See use_auto_threshold for how the maximum is selected.
min_blob_size – set minimum blob size in pixels. Blobs smaller than this size are discarded.
export_cc_images – set whether to export connected components images.
- Returns:
detected fiducials.
cc_images if export_cc_images is set to True, otherwise None.
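Example
A hedged sketch, assuming ir_images is an existing 2D IR imfusion.SharedImageSet.
>>> fiducials, cc_images = imfusion.vision.extract_thresholding_fiducials(
...     ir_images, use_auto_threshold=True, threshold=0.7, min_blob_size=4)
>>> print(f"{len(fiducials[0])} fiducials detected in the first image")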
- imfusion.vision.generate_markers(*args, **kwargs)
Overloaded function.
generate_markers(config: imfusion.vision.marker_configuration.ChessboardInfo, *, padding: int = 0, dpi: float = 96.0, output_svg_path: str = ‘’) -> imfusion.SharedImageSet
generate_markers(config: list[imfusion.vision.marker_configuration.ArucoMarkerInfo], *, padding: int = 0, dpi: float = 96.0, output_svg_path: str = ‘’) -> imfusion.SharedImageSet
generate_markers(config: imfusion.vision.marker_configuration.CharucoBoardInfo, *, padding: int = 0, dpi: float = 96.0, output_svg_path: str = ‘’) -> imfusion.SharedImageSet
generate_markers(config: imfusion.vision.marker_configuration.ArucoBoardInfo, *, padding: int = 0, dpi: float = 96.0, output_svg_path: str = ‘’) -> imfusion.SharedImageSet
generate_markers(config: list[imfusion.vision.marker_configuration.AprilTagInfo], *, padding: int = 0, dpi: float = 96.0, output_svg_path: str = ‘’) -> imfusion.SharedImageSet
generate_markers(config: imfusion.vision.marker_configuration.AprilTagBoardInfo, *, padding: int = 0, dpi: float = 96.0, output_svg_path: str = ‘’) -> imfusion.SharedImageSet
generate_markers(config: list[imfusion.vision.marker_configuration.STagInfo], *, padding: int = 0, dpi: float = 96.0, output_svg_path: str = ‘’) -> imfusion.SharedImageSet
The algorithm generates images and SVGs of calibration markers/boards.
- Parameters:
config – configuration of the marker used
padding – adds a white border around the marker/board on each side (in millimeters).
dpi – specifies dots per inch for the generated marker/board image. Not taken into consideration when generating an SVG file.
output_svg_path – if specified, save the generated marker/board as an SVG.
- Returns:
output image board
generate_markers(config: imfusion.vision.marker_configuration.CircleBoardInfo, *, padding: int = 0, dpi: float = 96.0, circle_diameter: float = 0.0, output_svg_path: str = ‘’) -> imfusion.SharedImageSet
generate_markers(config: str, *, padding: int = 0, dpi: float = 96.0, circle_diameter: float = 0.0, output_svg_path: str = ‘’) -> imfusion.SharedImageSet
The algorithm generates images and SVGs of calibration markers/boards.
- Parameters:
config – configuration of the marker used
padding – adds a white border around the marker/board on each side (in millimeters).
dpi – specifies dots per inch for the generated marker/board image. Not taken into consideration when generating an SVG file.
circle_diameter – circle diameter in centimeters (only relevant for circle boards).
output_svg_path – if specified, save the generated marker/board as an SVG.
- Returns:
output image board
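Example
A hedged sketch; a real configuration would also set the board properties (grid size, cell size, dictionary, ...), and the output path is illustrative.
>>> config = imfusion.vision.marker_configuration.CharucoBoardInfo()
>>> board_image = imfusion.vision.generate_markers(
...     config, padding=5, dpi=300.0, output_svg_path="charuco_board.svg")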
- imfusion.vision.hand_eye_calibration(tracking_sequence1: imfusion.TrackingSequence, tracking_sequence2: imfusion.TrackingSequence, *, calibration_type: imfusion.vision.HandEyeCalibrationType = <HandEyeCalibrationType.EYE_IN_HAND: 0>, sequence_direction_1: imfusion.vision.HandEyeCalibrationStreamInfo = <HandEyeCalibrationStreamInfo.CAM_TO_CALIB: 1>, sequence_direction_2: imfusion.vision.HandEyeCalibrationStreamInfo = <HandEyeCalibrationStreamInfo.HAND_TO_BASE: 2>, method: imfusion.vision.HandEyeCalibrationMethod = <HandEyeCalibrationMethod.TSAI_LENZ: 0>, reset_matrices_before_computing: bool = False, min_relative_angle_sample_selection: float = 0.0, min_relative_translation_sample_selection: float = 0.0, min_relative_angle: float = 20.0, pose_inlier_threshold: float = 5.0, use_ransac: bool = False, ransac_iterations: int = 1000, refine_non_linear: bool = False) tuple[TrackingSequence, TrackingSequence, ndarray[numpy.float64[4, 4]], ndarray[numpy.float64[4, 4]]]
The hand-eye calibration is used to find the relation between the coordinate system of the moving object (for example camera, or eye) and a coordinate system of a hand (for example a robot arm) which are moving rigidly together. The two input tracking streams must have the same number of samples. It’s expected that the corresponding samples are matching.
- Parameters:
tracking_sequence1 – input tracking sequence 1.
tracking_sequence2 – input tracking sequence 2.
calibration_type – Calibration type.
sequence_direction_1 – Direction of the transformations of the input sequence 1.
sequence_direction_2 – Direction of the transformations of the input sequence 2.
method – Calibration method.
reset_matrices_before_computing – Reset the matrices of the input streams before starting the computation.
min_relative_angle_sample_selection – Minimum angle between current sample and a previously selected sample for the current sample to be selected.
min_relative_translation_sample_selection – Minimum translation between current sample and a previously selected sample for the current sample to be selected.
min_relative_angle – Minimum angle in degrees between pose pairs selected for optimization.
pose_inlier_threshold – Inlier threshold for pose pair errors.
use_ransac – Use RANSAC scheme: select 3 random pairs of poses and compute the calibration, repeat number of iterations and select the best result.
ransac_iterations – Number of RANSAC iterations.
refine_non_linear – Refine Lenz-Tsai output using the non-linear optimizer.
- Returns:
MovingToFixed, First tracking sequence as it was passed to alignment with non-corresponding samples removed.
HandToBase, Second tracking sequence as it was passed to alignment with non-corresponding samples removed with applied calibration.
moving_to_hand matrix, the transformation between the moving and the hand coordinate systems, where the moving object is the one rigidly attached to the (robot) hand, being it the camera (in the case of EyeInHand calibration) or the calibration object (in the case of EyeOnBase calibration).
fixed_to_base matrix, the transformation between the fixed coordinate system and the base coordinate system, where the fixed coordinate system is the one of the object positioned in a fixed location in the scene, being it the calibration object (in the case of EyeInHand calibration) or the camera (in the case of EyeOnBase calibration).
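Example
A hedged sketch, assuming cam_to_calib and hand_to_base are imfusion.TrackingSequence objects with the same number of corresponding samples.
>>> seq1, seq2, moving_to_hand, fixed_to_base = imfusion.vision.hand_eye_calibration(
...     cam_to_calib, hand_to_base,
...     calibration_type=imfusion.vision.HandEyeCalibrationType.EYE_IN_HAND,
...     use_ransac=True, ransac_iterations=1000, refine_non_linear=True)
>>> print(moving_to_hand)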
- imfusion.vision.optimize_poses(*, point_clouds: list[imfusion.PointCloud] = [], poses: dict[int, numpy.ndarray[numpy.float64[4, 4]]] = {}, constraints: list[imfusion.vision.PoseGraphOptimization.Constraint] = [], mode: imfusion.vision.PoseGraphOptimizationMode = <PoseGraphOptimizationMode.MOTION_AVERAGING: 0>, iterations: int = 4, optimizer_iterations: int = 400, export_pose_update_map: bool = False, recompute_overlap: bool = True, compute_bidirectional_relative_motion: bool = True, max_match_dist: float = 50.0, max_match_angle: float = 45.0, min_overlap: float = 0.35, robust_kernel: imfusion.vision.PoseGraphOptimization.LeastSquaresSolution.RobustKernel = <RobustKernel.HUBER: 5>, robust_kernel_delta: float = 2.7955321496988725, apply_spanning_tree_initialization: bool = True, gain_threshold: float = 0.01) list[SharedImageSet]
The algorithm optimizes pose graphs using motion averaging or a graph-based least-squares solver. The input can be a set of point clouds or nothing. In the first case the algorithm will try to construct a pose graph using ICP. In the second case, it expects the user to load a pose graph by providing poses and constraints.
- Parameters:
point_clouds – input point clouds.
poses – Set poses (nodes) explicitly.
constraints – Set constraints (edges) explicitly.
mode – Set pose graph optimization solution.
iterations – Number of iterations to perform for ICP.
optimizer_iterations – Maximum number of iterations to run the optimization for.
export_pose_update_map – Set whether to export pose-update map.
recompute_overlap – Set whether to recompute overlap every iteration, otherwise it is only computed once.
compute_bidirectional_relative_motion – Set whether to compute constraints (relative pose) bi-directionally for constraint (relative pose) calculation in ICP.
max_match_dist – max matching distance for relative constraint (pose calculation) in ICP.
max_match_angle – max matching angle for constraint (relative pose) calculation in ICP.
min_overlap – Set min overlap between pointClouds for constraint (relative pose) calculation in ICP.
robust_kernel – robustness kernel for the graph-based solution.
robust_kernel_delta – Set window size of the error, a squared error above delta^2 is considered as outlier in the data.
apply_spanning_tree_initialization – Set whether to apply spanning tree initialization for the least squares pose-graph optimization solution.
gain_threshold – set multiplicative delta for aborting optimization for pose-graph optimization.
- Returns:
Returns the pose update map if the corresponding flag is set. Returns an empty list if export_pose_update_map is set to False.
Example
>>> imfusion.vision.optimize_poses(point_clouds=pcs, mode=imfusion.vision.PoseGraphOptimizationMode.MOTION_AVERAGING, robust_kernel_delta=7.815)
>>> for idx, pc in enumerate(pcs):
...     print(f"point cloud {idx} matrix:")
...     print(pc.matrix)
- imfusion.vision.point_cloud_calibration(point_cloud: PointCloud, use_all_points: bool = False) ndarray[numpy.float64[4, 4]]
Algorithm for computing the projection matrix from a dense point cloud. Does not work on sparse point clouds.
- Parameters:
point_cloud – input dense point cloud
use_all_points – Specify whether to use all points in the point cloud for computing the projection matrix.
- Returns:
transformation directly applied to points during conversion.
- imfusion.vision.poisson_reconstruction(point_cloud: imfusion.PointCloud, *, level: int = 7, spacing: float = 0.0, surface_dist_r1: float = 2.0, surface_dist_r2: float = 4.0, max_angle_degrees: float = 180.0, density_threshold_mode: imfusion.vision.PoissonReconstructionDensityThresholdMode = <PoissonReconstructionDensityThresholdMode.NONE: 0>, density_threshold: float = 0.0, samples_per_node: float = 1.5, color_source: imfusion.vision.PoissonReconstructionColorSource = <PoissonReconstructionColorSource.COLOR: 0>) Mesh
This algorithm implements surface reconstruction from point clouds using the (Screened) Poisson Surface Reconstruction by Kazhdan, Bolitho, and Hoppe. The point cloud can have colors, which will be included in the reconstruction, but those are optional. By design Poisson reconstruction always creates a closed, hole-free surface. If this is not desired, parts of the surface that are too far from the input point cloud can be removed again by setting surface_dist_r1, surface_dist_r2 and max_angle_degrees to appropriate values. Triangles whose distance to the original point cloud is in the range [0, r1] are all kept. Triangles in the range [r1, r2] are kept if their normals are similar to the normals of the closest input point. Triangles farther than r2 are all removed.
- Parameters:
point_cloud – input point cloud with normals, colors are optional.
level – Maximum depth of the underlying octree.
spacing – Minimum side length of an octree leaf node, i.e. scale for the minimum side length of generated triangles.
surface_dist_r1 – Filtering distance r1.
surface_dist_r2 – Filtering distance r2.
max_angle_degrees – maximum allowed angle between a generated triangle's normal and the normal of the closest input point; it only affects triangles whose distance falls in the range [r1, r2].
density_threshold_mode – Allows excluding certain vertices from the computation based on their density.
density_threshold – The actual threshold used for determining whether a vertex is skipped from computation.
samples_per_node – minimum number of sample points that should fall within an octree node as the octree construction is adapted to sampling density.
color_source – Source of the color of the resulting mesh.
boundary_type – Boundary type for the finite elements.
- Returns:
Output a single Mesh approximating the surface that was sampled by the point cloud.
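Example
A minimal sketch, assuming pc is an existing imfusion.PointCloud with point normals; the parameter values are illustrative.
>>> mesh = imfusion.vision.poisson_reconstruction(
...     pc, level=8, surface_dist_r1=2.0, surface_dist_r2=4.0, max_angle_degrees=60.0)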
- imfusion.vision.triangulate_point_cloud(point_cloud: PointCloud, *, search_radius: float = 10.0, max_nn: int = 20, max_surface_angle: float = 45.0, normal_consistency: bool = False) Mesh
Computes a mesh from a given point cloud by greedily triangulating in the local tangent plane.
- Parameters:
point_cloud – input point cloud
search_radius – The 3D radius to search for possible neighbors of a point.
max_nn – A number of neighbors after which the search can be aborted.
max_surface_angle – The maximum allowed angle between two triangles.
normal_consistency – Controls whether to enforce consistent normal orientation.
- Returns:
A single mesh that interpolates the given point cloud.
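Example
A minimal sketch, assuming pc is an existing imfusion.PointCloud (point normals are typically required for greedy triangulation); values are illustrative.
>>> mesh = imfusion.vision.triangulate_point_cloud(
...     pc, search_radius=10.0, max_nn=20, max_surface_angle=45.0)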
- imfusion.vision.undistort_images(images: imfusion.SharedImageSet, interpolation_mode: imfusion.InterpolationMode = <InterpolationMode.LINEAR: 1>, use_gpu: bool = False, in_place: bool = False) SharedImageSet
Algorithm for undistorting images. Intrinsics and distortion parameters are retrieved from CameraCalibrationDataComponent.
- Parameters:
images – input image set with CameraCalibrationDataComponent.
interpolation_mode – undistortion interpolation mode (different from image.interpolation_mode).
use_gpu – use GPU for faster processing.
in_place – if set to True input images are overwritten.
- Returns:
undistorted image set. Returns None if in_place is set to True.
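Example
A minimal sketch, assuming images carries a CameraCalibrationDataComponent with intrinsics and distortion coefficients.
>>> undistorted = imfusion.vision.undistort_images(
...     images, interpolation_mode=imfusion.InterpolationMode.LINEAR, use_gpu=True)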
imfusion.vision.marker_configuration
Specifies parameters for different types of single markers or marker boards.
- class imfusion.vision.marker_configuration.AprilTagBoardInfo(self: AprilTagBoardInfo)
Bases:
pybind11_object
- property family
- property grid_size
- property marker_separation
- property marker_size
- class imfusion.vision.marker_configuration.AprilTagFamily(self: AprilTagFamily, value: int)
Bases:
pybind11_object
Members:
Tag16h5
Tag25h9
Tag36h10
Tag36h11
TagCircle21h7
TagCircle49h12
TagCustom48h12
TagStandard41h12
TagStandard52h13
- Tag16h5 = <AprilTagFamily.Tag16h5: 0>
- Tag25h9 = <AprilTagFamily.Tag25h9: 1>
- Tag36h10 = <AprilTagFamily.Tag36h10: 2>
- Tag36h11 = <AprilTagFamily.Tag36h11: 3>
- TagCircle21h7 = <AprilTagFamily.TagCircle21h7: 4>
- TagCircle49h12 = <AprilTagFamily.TagCircle49h12: 5>
- TagCustom48h12 = <AprilTagFamily.TagCustom48h12: 6>
- TagStandard41h12 = <AprilTagFamily.TagStandard41h12: 7>
- TagStandard52h13 = <AprilTagFamily.TagStandard52h13: 8>
- property name
- property value
- class imfusion.vision.marker_configuration.AprilTagInfo(self: AprilTagInfo)
Bases:
pybind11_object
- property family
- property id
- property marker_size
- property transformation
- class imfusion.vision.marker_configuration.ArucoBoardInfo(self: ArucoBoardInfo)
Bases:
pybind11_object
- requested_markers(self: ArucoBoardInfo) int
- property cell_size
- property detector_params
- property dictionary
- property grid_size
- property marker_separation
- property starting_marker
- class imfusion.vision.marker_configuration.ArucoDictionary(self: ArucoDictionary, value: int)
Bases:
pybind11_object
Members:
DICT_4X4_50
DICT_4X4_100
DICT_4X4_250
DICT_4X4_1000
DICT_5X5_50
DICT_5X5_100
DICT_5X5_250
DICT_5X5_1000
DICT_6X6_50
DICT_6X6_100
DICT_6X6_250
DICT_6X6_1000
DICT_7X7_50
DICT_7X7_100
DICT_7X7_250
DICT_7X7_1000
DICT_ARUCO_ORIGINAL
- DICT_4X4_100 = <ArucoDictionary.DICT_4X4_100: 1>
- DICT_4X4_1000 = <ArucoDictionary.DICT_4X4_1000: 3>
- DICT_4X4_250 = <ArucoDictionary.DICT_4X4_250: 2>
- DICT_4X4_50 = <ArucoDictionary.DICT_4X4_50: 0>
- DICT_5X5_100 = <ArucoDictionary.DICT_5X5_100: 5>
- DICT_5X5_1000 = <ArucoDictionary.DICT_5X5_1000: 7>
- DICT_5X5_250 = <ArucoDictionary.DICT_5X5_250: 6>
- DICT_5X5_50 = <ArucoDictionary.DICT_5X5_50: 4>
- DICT_6X6_100 = <ArucoDictionary.DICT_6X6_100: 9>
- DICT_6X6_1000 = <ArucoDictionary.DICT_6X6_1000: 11>
- DICT_6X6_250 = <ArucoDictionary.DICT_6X6_250: 10>
- DICT_6X6_50 = <ArucoDictionary.DICT_6X6_50: 8>
- DICT_7X7_100 = <ArucoDictionary.DICT_7X7_100: 13>
- DICT_7X7_1000 = <ArucoDictionary.DICT_7X7_1000: 15>
- DICT_7X7_250 = <ArucoDictionary.DICT_7X7_250: 14>
- DICT_7X7_50 = <ArucoDictionary.DICT_7X7_50: 12>
- DICT_ARUCO_ORIGINAL = <ArucoDictionary.DICT_ARUCO_ORIGINAL: 16>
- property name
- property value
- class imfusion.vision.marker_configuration.ArucoMarkerInfo(self: ArucoMarkerInfo)
Bases:
pybind11_object
- property detectorParams
- property dictionary
- property id
- property size
- property transform
- class imfusion.vision.marker_configuration.CharucoBoardInfo(self: CharucoBoardInfo)
Bases:
pybind11_object
- requested_markers(self: CharucoBoardInfo) int
- property cell_size
- property detector_params
- property dictionary
- property grid_size
- property marker_size
- property min_adjacent_markers
- property starting_marker
- class imfusion.vision.marker_configuration.ChessboardInfo(self: ChessboardInfo)
Bases:
pybind11_object
- property cell_size
- property corner_refinement_win_size
- property grid_size
- class imfusion.vision.marker_configuration.CircleBoardDetectorParameters(self: CircleBoardDetectorParameters)
Bases:
pybind11_object
- property blob_color
- property filterByArea
- property filter_by_circularity
- property filter_by_color
- property filter_by_convexity
- property filter_by_inertia
- property max_area
- property max_circularity
- property max_convexity
- property max_inertia_ratio
- property max_threshold
- property min_area
- property min_circularity
- property min_convexity
- property min_dist_between_blobs
- property min_inertia_ratio
- property min_repeatability
- property min_threshold
- property threshold_step
- class imfusion.vision.marker_configuration.CircleBoardInfo(self: CircleBoardInfo)
Bases:
pybind11_object
- property circle_spacing
- property detector_params
- property diameter
- property grid_size
- class imfusion.vision.marker_configuration.DetectorParameters(self: DetectorParameters)
Bases:
pybind11_object
- class CornerRefineMethod(self: CornerRefineMethod, value: int)
Bases:
pybind11_object
Members:
APRIL_TAG
CONTOUR
NONE
SUBPIXEL
- APRIL_TAG = <CornerRefineMethod.APRIL_TAG: 3>
- CONTOUR = <CornerRefineMethod.CONTOUR: 2>
- NONE = <CornerRefineMethod.NONE: 0>
- SUBPIXEL = <CornerRefineMethod.SUBPIXEL: 1>
- property name
- property value
- property adaptive_thresh_constant
- property adaptive_thresh_win_size_max
- property adaptive_thresh_win_size_min
- property adaptive_thresh_win_size_step
- property corner_refinement_max_iterations
- property corner_refinement_method
- property corner_refinement_min_accuracy
- property corner_refinement_win_size
- property error_correction_rate
- property marker_border_bits
- property max_erroneous_bits_in_border_rate
- property max_marker_perimeter_rate
- property min_corner_distance_rate
- property min_distance_to_border
- property min_marker_distance_rate
- property min_marker_perimeter_rate
- property min_otsu_std_dev
- property perspective_remove_ignored_margin_per_cell
- property perspective_remove_pixel_per_cell
- property polygonal_approx_accuracy_rate
- class imfusion.vision.marker_configuration.STagInfo(self: STagInfo)
Bases:
pybind11_object
- class HD(self: HD, value: int)
Bases:
pybind11_object
Members:
HD11
HD13
HD15
HD17
HD19
HD21
HD23
- HD11 = <HD.HD11: 11>
- HD13 = <HD.HD13: 13>
- HD15 = <HD.HD15: 15>
- HD17 = <HD.HD17: 17>
- HD19 = <HD.HD19: 19>
- HD21 = <HD.HD21: 21>
- HD23 = <HD.HD23: 23>
- property name
- property value
- property diameter
- property id
- property library_hd
- property transformation
- imfusion.vision.marker_configuration.load_marker(file_path: str) ChessboardInfo | list[ArucoMarkerInfo] | CharucoBoardInfo | ArucoBoardInfo | CircleBoardInfo | list[AprilTagInfo] | AprilTagBoardInfo | list[STagInfo]
Loads a marker configuration from the specified xml file.
- Raises:
IOError – if the file cannot be opened
ValueError – if the loaded configuration does not represent any supported marker type.
- Parameters:
file_path – Path to input file. File path must end with .xml.
- imfusion.vision.marker_configuration.save_marker(*args, **kwargs)
Overloaded function.
save_marker(marker_configuration: imfusion.vision.marker_configuration.ChessboardInfo, file_path: str) -> None
save_marker(marker_configuration: list[imfusion.vision.marker_configuration.ArucoMarkerInfo], file_path: str) -> None
save_marker(marker_configuration: imfusion.vision.marker_configuration.CharucoBoardInfo, file_path: str) -> None
save_marker(marker_configuration: imfusion.vision.marker_configuration.ArucoBoardInfo, file_path: str) -> None
save_marker(marker_configuration: imfusion.vision.marker_configuration.CircleBoardInfo, file_path: str) -> None
save_marker(marker_configuration: list[imfusion.vision.marker_configuration.AprilTagInfo], file_path: str) -> None
save_marker(marker_configuration: imfusion.vision.marker_configuration.AprilTagBoardInfo, file_path: str) -> None
save_marker(marker_configuration: list[imfusion.vision.marker_configuration.STagInfo], file_path: str) -> None
Save a marker configuration to the specified file path as an xml file.
- Parameters:
marker_configuration – specifies parameters for different types of single markers or marker boards.
file_path – Path to output file. File path must end with .xml.
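Example
A hedged round trip; a real configuration would also set the board properties before saving, and the file name is illustrative.
>>> board = imfusion.vision.marker_configuration.ChessboardInfo()
>>> imfusion.vision.marker_configuration.save_marker(board, "chessboard.xml")
>>> loaded = imfusion.vision.marker_configuration.load_marker("chessboard.xml")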
imfusion.vision.point_cloud_filtering
Performs various operations on point clouds.
- imfusion.vision.point_cloud_filtering.additive_noise_filter(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True, noise_standard_deviation_mm: float = 2.0, add_noise_in_normal_direction: bool = True) list[PointCloud]
Adds Gaussian noise to the position of all points.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
noise_standard_deviation_mm – The standard deviation of the used Gaussian probability distribution, in Millimeters.
add_noise_in_normal_direction – If this is set and the point cloud has point normals, each point is displaced along its normals by a length sampled from the distribution.
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
- imfusion.vision.point_cloud_filtering.clustered_hierarchical_subsampling(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True, max_points_per_cluster: int = 30, max_variation: float = 0.1) list[PointCloud]
Downsamples the point cloud by grouping nearby points into clusters, then replacing them with the average over each cluster. Clusters are created by first putting the entire point cloud into a cluster, then successively splitting all clusters that do not meet the conditions imposed by the settings below along the axis of their most significant principal component.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
max_points_per_cluster – If a cluster has more than this number of points, it will be split.
max_variation – If the variation (the variance along the least significant principal component, divided by the sum of variances across all 3 principal components) of a cluster is above this value, it will be split.
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
- imfusion.vision.point_cloud_filtering.clustered_iterative_subsampling(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True, max_points_per_cluster: int = 30, max_variation: float = 0.1, cluster_radius: float = 5.0) list[PointCloud]
Downsamples the point cloud by grouping nearby points into clusters, then replacing them with the average over each cluster. Clusters are created by iteratively adding nearby points that meet the conditions specified by the 3 configuration parameters.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
max_points_per_cluster – When a cluster already includes this many points, a new one is created instead of adding further points to it.
max_variation – If adding a point to a cluster would push that clusters variation (the variance along the least significant principal component, divided by the sum of variances across all 3 principal components) above this value, the point is not added.
cluster_radius – The maximum distance in Millimeters for points to be considered for the same cluster.
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
- imfusion.vision.point_cloud_filtering.compute_normals(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True, window_size: int = 5, radius: float = 5.0, distance_threshold: float = 30.0, remove_points: bool = True, use_nn: bool = False) list[PointCloud]
Computes a normal for each point from the relative positions of itself and other nearby points. If the point cloud is dense and has both non-zero camera intrinsics and a non-identity transform matrix, the resulting normals are chosen to point towards the camera.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
window_size – if the point cloud is dense and use_nn is disabled, this parameter determines the range around each point included in the computation.
radius – if the point cloud is either not dense or use_nn is enabled, this parameter determines the radius in which neighboring points are considered for the computation.
distance_threshold – if the point cloud is dense and use_nn is disabled, this parameter determines whether a pair of points is too far apart to be allowed for the computation.
remove_points – if enabled, points where the normal computation fails are removed. Otherwise their normal is set to NaN.
use_nn – if enabled, dense point clouds will use the same computational approach as sparse ones (see Radius). Does not have any effect on sparse point clouds.
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
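Example
A minimal sketch, assuming pc is an existing imfusion.PointCloud; with in_place=False the filtered clouds are returned instead of overwriting the input.
>>> filtered = imfusion.vision.point_cloud_filtering.compute_normals(
...     [pc], in_place=False, radius=5.0, remove_points=True)
>>> pc_with_normals = filtered[0]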
- imfusion.vision.point_cloud_filtering.connected_component_removal(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True, min_size: int = 10, max_number: int = 2147483647, dilation_radius: int = 0) list[PointCloud]
If the point cloud is dense, this filter groups the point cloud's index map into connected components and discards all points with indices whose component does not reach the specified minimum size. Otherwise this filter does nothing.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
min_size – Minimum size of the connected component, in pixels of the dense point cloud.
max_number – Maximum number of connected components to remain.
dilation_radius – if greater than 0, the point cloud’s index map is first being dilated with the specified radius before running a connected component analysis on it.
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
- imfusion.vision.point_cloud_filtering.denoise(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True, window_size: int = 9, radius: int = 5.0) list[PointCloud]
Fits a plane at the position of each point, using randomly chosen points from the vicinity of that target point. The target point is then displaced onto that plane. Any points where only 5 or fewer neighbors can be found within this range will not be processed.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
window_size – if use_gpu is enabled and the point cloud is dense, this value determines the search range for finding neighbor voxels.
radius – if use_gpu is disabled or the point cloud is not dense, this value is the search range in millimeters for finding the nearest neighbor voxels when estimating the local plane.
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
- imfusion.vision.point_cloud_filtering.distance_subsampling(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True, radius_mm: float = 1.0) list[PointCloud]
Removes all points where one of their nearest neighbors is closer than a specified distance.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
radius_mm – The minimum distance in Millimeters, all points with neighbors below this distance are removed.
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
- imfusion.vision.point_cloud_filtering.estimate_curvature(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True, num_neighbors: int = 100) list[PointCloud]
Computes the direction of maximum curvature at each point, then creates 3 output point clouds that encode the 3 components of this direction vector as heat map (red > blue) stored in the point colors.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
num_neighbors – Amount of other nearby points to consider when computing the curvature at each point.
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
- imfusion.vision.point_cloud_filtering.flying_pixel_filter(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True, window_size: int = 3, max_allowed_depth_var: float = 5.0) list[PointCloud]
If the point cloud is dense this filter converts it to a depth image, then for each pixel of that depth image computes the mean of the absolute difference between the pixel and all other pixels in a given search window centered on it. If this value exceeds a given threshold, the associated point on the dense cloud is discarded. For non-dense point clouds this filter does nothing.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
window_size – Diameter of the search window around each pixel on the depth image. This value must be odd; if it is not, the algorithm will subtract 1.
max_allowed_depth_var – Threshold for the mean absolute difference between a depth image pixel and its neighbors.
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
- imfusion.vision.point_cloud_filtering.invert_normals(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True) list[PointCloud]
Flips all normals.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
- imfusion.vision.point_cloud_filtering.normalize(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True, scale: float = 1.0, same_scale: bool = False, use_sphere: bool = True) list[PointCloud]
Moves the point clouds such that the center is at the coordinate origin. Afterwards the point positions are multiplied with a scaling factor chosen to meet a criterion depending on whether Sphere or Bounding Box mode is active. In Sphere mode, the point most distant from the center ends up at a distance of half the Scale value. In Bounding Box mode, the largest side of the axis-aligned bounding box ends up with a length of Scale.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
scale – Target size in Millimeters.
same_scale – If this is enabled, the same multiplier is applied to all point clouds in the input, which typically means that only the largest one will meet the conditions outlined above afterwards. Otherwise each point cloud is normalized separately.
use_sphere – use sphere mode instead of bounding box mode.
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
- imfusion.vision.point_cloud_filtering.quadric_based_subsampling(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True, voxel_grid_size: float = 5.0, consolidating_radius: float = 30.0, std: float = 0.1, max_angle: float = 90.0, min_color_dot: float = 0.99, use_colors: bool = True) list[PointCloud]
This algorithm creates a probabilistic plane quadric at each point, converts the point cloud to a sparse voxel grid, then averages the quadrics and normals of voxels within a certain radius. Afterwards the averaged quadrics and normals on each voxel are used to construct a point cloud again.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
voxel_grid_size – Cell size of the voxel grid, in Millimeters.
consolidating_radius – Radius for averaging, in Millimeters.
std – Variance for constructing the quadric.
max_angle – Neighboring voxels are only considered if the angle between their normal and the normal of the target voxels exceeds this value.
min_color_dot – If use_colors is enabled, neighboring voxels are only considered if the dot product of their and the target voxel's normalized(!) RGB color exceeds this value.
use_colors – whether to use color weighting (see min_color_dot).
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
- imfusion.vision.point_cloud_filtering.radius_outlier_removal(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True, radius: float = 5.0, min_num_neighbors: int = 3) list[PointCloud]
Removes all points that have less than the required number of neighbors inside the given radius.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
radius – Search radius around each point, in Millimeters.
min_num_neighbors – Amount of other points required inside the search volume to keep the target point.
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
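Example
A minimal sketch of in-place filtering, assuming pc is an existing imfusion.PointCloud.
>>> imfusion.vision.point_cloud_filtering.radius_outlier_removal(
...     [pc], in_place=True, radius=5.0, min_num_neighbors=3)  # pc is modified in place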
- imfusion.vision.point_cloud_filtering.random_subsampling(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True, discard_probability: bool = 0.5, final_points_num: bool = 65536) list[PointCloud]
Discards points from the point cloud at random.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
discard_probability – When this mode is active, each point is independently discarded with the given probability (between 0 and 1).
final_points_num – Target number of points when this mode is active. If the point cloud is originally larger, the surplus is discarded by randomly selecting points. If it is smaller instead, the missing number of points is created by duplicating random existing points. In cases where the number of missing points exceeds the size of the original point cloud, the entire point cloud is duplicated (potentially multiple times).
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
- imfusion.vision.point_cloud_filtering.remove_borders(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True, border_pixels: int = 2, border_crop: ndarray[numpy.int32[4, 1]] = array([0, 0, 0, 0], dtype=int32)) list[PointCloud]
If the point cloud is dense, this removes the boundary by an erosion operation. Otherwise it does nothing.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
border_pixels – Size of the erosion, in pixels of the dense point cloud.
border_crop – Vector of 4 indices for additional cropping after the erosion operation. Specifies the thickness of the boundary area to be removed in indices of the dense point cloud, in the format low x, high x, low y, high y.
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
- imfusion.vision.point_cloud_filtering.sor_filter(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True, neighbors: int = 7, std_dev_multiplier: float = 2.5) list[PointCloud]
This mode removes all points where the average distance to the neighbors nearest neighbors exceeds a threshold. The value of this threshold is the average of the distance to the neighbors nearest neighbors across all points of the point cloud, plus the standard deviation of this measurement multiplied by std_dev_multiplier.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
neighbors – equivalent to k+1 in the threshold equation above, so how many neighbor points to consider when computing the distance statistic on each target point.
std_dev_multiplier – Standard Deviation Multiplier, the parameter M in the threshold equation above, lower values cause a stricter filtering.
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
- imfusion.vision.point_cloud_filtering.uniform_subsampling(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True, leaf_size: float = 2.0) list[PointCloud]
Converts the point cloud to a voxel grid, then creates a new point cloud by keeping the points closest to each voxel center and discarding the rest.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
leaf_size – The size of a voxel in Millimeters.
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
- imfusion.vision.point_cloud_filtering.viewing_angle_filter(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True, max_viewing_angle: float = 70) list[PointCloud]
Removes all points where the dot product between a point's normal vector and the normalized vector from the origin to that point falls below a given threshold. Does nothing on point clouds without point normals.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
max_viewing_angle – The maximum allowed angle between the point position and point normal, in degrees.
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
- imfusion.vision.point_cloud_filtering.voxel_grid_subsampling(point_clouds: list[PointCloud], *, use_gpu: bool = True, in_place: bool = True, leaf_size: float = 2.0) list[PointCloud]
Converts the point cloud to a voxel grid, then creates a new point cloud by averaging the points inside each voxel.
- Parameters:
point_clouds – input point cloud.
use_gpu – attempts to improve performance by moving some of the computation onto the GPU.
in_place – if set to True, input point clouds are overwritten instead of generating new output.
leaf_size – The size of a voxel in Millimeters.
- Returns:
list of filtered point clouds; returns an empty list if in_place is True.
- Return type:
list [ pointCloud ]
imfusion.vision.stereo
Provides foundational tools for processing and analyzing stereo images, including stereo rectification, camera registration, stereo reconstruction, and more.
- class imfusion.vision.stereo.BlockMatchingParameters(self: BlockMatchingParameters, block_size: int = 21, *, disp12_max_diff: int = -1, min_disparity: int = 0, num_disparities: int = 64, speckle_range: int = 0, speckle_window_size: int = 0, pre_filter_cap: int = 31, pre_filter_size: int = 9, pre_filter_type: int = 1, smaller_bock_size: int = 0, texture_threshold: int = 10, uniqueness_ratio: int = 15)
Bases:
StereoReconstructionParameters
Settings for using stereo block-matching algorithm for computing disparity maps from stereo image pairs. It estimates depth by comparing small block regions between the left and right images to find pixel correspondences.
- Parameters:
block_size – the linear size of the blocks compared by the algorithm. The size should be odd. Larger block size implies smoother, though less accurate disparity map.
disp12_max_diff – maximum allowed difference in computed disparity between left-to-right and right-to-left checks. Higher values allow more mismatches (less strict), while lower values result in more consistency but possibly missing disparities.
min_disparity – minimum possible disparity value. Normally, it is zero but sometimes rectification algorithms can shift images, so this parameter needs to be adjusted accordingly.
num_disparities – the disparity search range. For each pixel, the algorithm will find the best disparity from 0 (default minimum disparity) to num_disparities.
speckle_range – the maximum difference between neighbor disparities to consider them as part of the same region.
speckle_window_size – the window size used to filter out speckles (small disparity regions).
pre_filter_cap – The pixel intensity values are clipped before computing the SAD (Sum of Absolute Differences) for matching. Helps normalize bright/dark regions for better matching, but also affects noise and texture handling.
pre_filter_size – Size of the pre-filter window used before computing disparity.
pre_filter_type – Determines how pixels are pre-filtered before computing disparities. If set to 0, normalizes pixel intensities. If set to 1, then use a Sobel filter for edge enhancement.
smaller_bock_size – An alternative, smaller block size for some optimizations.
texture_threshold – Minimum sum of absolute differences (SAD) between pixels to be considered valid.
uniqueness_ratio – Ensures the best disparity match is significantly better than the second-best.
- property block_size
the linear size of the blocks compared by the algorithm. The size should be odd. Larger block size implies smoother, though less accurate disparity map.
- property disp12_max_diff
maximum allowed difference in computed disparity between left-to-right and right-to-left checks. Higher values allow more mismatches (less strict), while lower values result in more consistency but possibly missing disparities.
- property min_disparity
minimum possible disparity value. Normally, it is zero but sometimes rectification algorithms can shift images, so this parameter needs to be adjusted accordingly.
- property name
Name of the stereo reconstruction method.
- property num_disparities
the disparity search range. For each pixel, the algorithm will find the best disparity from 0 (default minimum disparity) to num_disparities.
- property pre_filter_cap
The pixel intensity values are clipped before computing the SAD (Sum of Absolute Differences) for matching. Helps normalize bright/dark regions for better matching, but also affects noise and texture handling.
- property pre_filter_size
Size of the pre-filter window used before computing disparity.
- property pre_filter_type
Determines how pixels are pre-filtered before computing disparities. If set to 0, normalizes pixel intensities. If set to 1, then use a Sobel filter for edge enhancement.
- property smaller_bock_size
An alternative, smaller block size for some optimizations.
- property speckle_range
the maximum difference between neighbor disparities to consider them as part of the same region.
- property speckle_window_size
the window size used to filter out speckles (small disparity regions).
- property texture_threshold
Minimum sum of absolute differences (SAD) between pixels to be considered valid.
- property uniqueness_ratio
Ensures the best disparity match is significantly better than the second-best.
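Example
A hedged sketch constructing block-matching parameters; the values are illustrative and the stereo reconstruction call that consumes them is documented elsewhere.
>>> params = imfusion.vision.stereo.BlockMatchingParameters(
...     block_size=21, num_disparities=64, uniqueness_ratio=15)
>>> params.speckle_window_size = 100
>>> params.speckle_range = 2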
- class imfusion.vision.stereo.SemiGlobalBlockMatchingParameters(self: SemiGlobalBlockMatchingParameters, block_size: int = 3, *, disp12_max_diff: int = 0, min_disparity: int = 0, num_disparities: int = 16, speckle_range: int = 0, speckle_window_size: int = 0, mode: int = 0, p1: int = 0, p2: int = 0, pre_filter_cap: int = 0, uniqueness_ratio: int = 0)
Bases:
StereoReconstructionParameters
Settings for using stereo semi-global block-matching algorithm that computes disparity maps. It balances local block matching with global smoothness constraints, reducing noise and handling textureless areas more effectively.
- Parameters:
block_size – the linear size of the blocks compared by the algorithm. The size should be odd. Larger block size implies smoother, though less accurate disparity map.
disp12_max_diff – maximum allowed difference in computed disparity between left-to-right and right-to-left checks. Higher values allow more mismatches (less strict), while lower values result in more consistency but possibly missing disparities.
min_disparity – minimum possible disparity value. Normally, it is zero but sometimes rectification algorithms can shift images, so this parameter needs to be adjusted accordingly.
num_disparities – the disparity search range. For each pixel, the algorithm will find the best disparity from 0 (default minimum disparity) to num_disparities.
speckle_range – the maximum difference between neighbor disparities to consider them as part of the same region.
speckle_window_size – the window size used to filter out speckles (small disparity regions).
mode – Defines the mode of the matching algorithm: 0: Standard Semi-Global Block Matching, 1: Uses full dynamic programming for optimization (more accurate, but slower), 2: Three-way matching for better accuracy (faster than mode 1 but more memory-intensive), 4: Variant of mode 1 using 4-pass dynamic programming.
p1 – The first penalty parameter for controlling smoothness.
p2 – The second penalty parameter for controlling smoothness (stronger than p1).
pre_filter_cap – The pixel intensity values are clipped before computing the SAD (Sum of Absolute Differences) for matching. Helps normalize bright/dark regions for better matching, but also affects noise and texture handling.
uniqueness_ratio – Ensures the best disparity match is significantly better than the second-best.
- property block_size
the linear size of the blocks compared by the algorithm. The size should be odd. Larger block size implies smoother, though less accurate disparity map.
- property disp12_max_diff
maximum allowed difference in computed disparity between left-to-right and right-to-left checks. Higher values allow more mismatches (less strict), while lower values result in more consistency but possibly missing disparities.
- property min_disparity
minimum possible disparity value. Normally, it is zero but sometimes rectification algorithms can shift images, so this parameter needs to be adjusted accordingly.
- property mode
Defines the mode of the matching algorithm: 0: Standard Semi-Global Block Matching, 1: Uses full dynamic programming for optimization (more accurate, but slower), 2: Three-way matching for better accuracy (faster than mode 1 but more memory-intensive), 4: Variant of mode 1 using 4-pass dynamic programming.
- property name
Name of the stereo reconstruction method.
- property num_disparities
the disparity search range. For each pixel, the algorithm will find the best disparity from 0 (default minimum disparity) to num_disparities.
- property p1
The first penalty parameter for controlling smoothness.
- property p2
The second penalty parameter for controlling smoothness (stronger than p1).
- property pre_filter_cap
The pixel intensity values are clipped before computing the SAD (Sum of Absolute Differences) for matching. Helps normalize bright/dark regions for better matching, but also affects noise and texture handling.
- property speckle_range
the maximum difference between neighbor disparities to consider them as part of the same region.
- property speckle_window_size
the window size used to filter out speckles (small disparity regions).
- property uniqueness_ratio
Ensures the best disparity match is significantly better than the second-best.
- class imfusion.vision.stereo.StereoCGIParameters(self: StereoCGIParameters)
Bases:
StereoReconstructionParameters
CGI-Stereo is a real-time stereo matching network with high accuracy and strong generalization, powered by the Context and Geometry Fusion (CGF) block for better cost aggregation and feature learning.
- property name
Name of the stereo reconstruction method.
- class imfusion.vision.stereo.StereoCalibrationDataComponent(self: StereoCalibrationDataComponent)
Bases:
DataComponentBase
A data component storing the calibrated transformation between a pair of stereo pinhole cameras
- property left_to_right_registration
Transformation matrix from the left to the right camera.
- class imfusion.vision.stereo.StereoImage(self: StereoImage, left: SharedImage, right: SharedImage)
Bases:
pybind11_object
This class stores a pair of images, taken by a stereo camera at the same time point
- property left
Left image.
- property right
Right image.
- class imfusion.vision.stereo.StereoRAFTParameters(self: StereoRAFTParameters, initialize_from_previous_computation: bool = False, num_refinements: int = 0, resize_to_default_resolution: bool = True, use_smaller_model: bool = False)
Bases:
StereoReconstructionParameters
Learning-based approach to compute optical flow, based on the RAFT-Stereo paper. The underlying ML architecture is very similar to RAFT.
- Parameters:
initialize_from_previous_computation – Setting to true will store the computed disparity internally, and subsequent calls will initialize the computation from the previously stored one. Setting to true can help produce better quality results for temporally smooth videos. Note that the internally stored disparity is in a different resolution than the one returned.
num_refinements – Setting this to a value n > 0 will perform n disparity refinement steps, each refining the disparity computed at the previous (n-1’th) step
resize_to_default_resolution – Resize images to match the dimensions used during training (default values are recommended).
use_smaller_model – Use smaller (and faster) ML model with lower runtime and possibly worse quality.
- property initialize_from_previous_computation
Setting to true will store the computed disparity internally, and subsequent calls will initialize the computation from the previously stored one. Setting to true can help produce better quality results for temporally smooth videos. Note that the internally stored disparity is in a different resolution than the one returned.
- property name
Name of the stereo reconstruction method.
- property num_refinements
Setting this to a value n > 0 will perform n disparity refinement steps, each refining the disparity computed at the previous (n-1’th) step
- property resize_to_default_resolution
Resize images to match the dimensions used during training (default values are recommended).
- property use_smaller_model
Use smaller (and faster) ML model with lower runtime and possibly worse quality.
- class imfusion.vision.stereo.StereoReconstructionParameters
Bases:
pybind11_object
Base class for stereo reconstruction parameters.
- class imfusion.vision.stereo.StereoReconstructionResult
Bases:
pybind11_object
- property depth
Estimated depth map.
- property disparity
Estimated disparity map.
- property left_rectified
Left image rectified, set to ‘None’ if ‘export_rectified_images’ is set to False.
- property mask_rectified
Mask rectified, set to ‘None’ if ‘export_rectified_images’ is set to False.
- property point_clouds
Estimated point cloud.
- property right_rectified
Right image rectified, set to ‘None’ if ‘export_rectified_images’ is set to False.
- class imfusion.vision.stereo.StereoSharedImageSet
Bases:
Data
This class is the main high-level container for a stereo image data set. It contains a pair of SharedImageSet. A stereo image set can be generated from a pair of image sets of the same size, or recorded from a stereo camera. It is mainly used in stereo-image-based algorithms and visualization classes. It should have a StereoCalibrationDataComponent.
Overloaded function.
__init__(self: imfusion.vision.stereo.StereoSharedImageSet) -> None
__init__(self: imfusion.vision.stereo.StereoSharedImageSet, arg0: imfusion.SharedImageSet, arg1: imfusion.SharedImageSet) -> None
__init__(self: imfusion.vision.stereo.StereoSharedImageSet, arg0: list[imfusion.vision.stereo.StereoImage]) -> None
Overloaded function.
add(self: imfusion.vision.stereo.StereoSharedImageSet, arg0: imfusion.vision.stereo.StereoImage) -> None
Adds a
StereoImage
to existing left and right image sets. This does not copy data.
add(self: imfusion.vision.stereo.StereoSharedImageSet, arg0: list[imfusion.vision.stereo.StereoImage]) -> None
Adds a list of
StereoImage
to existing left and right image sets. This does not copy data.
Returns a reference to the
StereoImage
of the index id.
Returns a reference to the left image set.
Returns a reference to the right image set.
Size of the left (right) image sets.
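A minimal sketch of constructing a StereoSharedImageSet from two previously loaded SharedImageSet instances of equal size; the file paths are placeholders:
>>> import imfusion
>>> left_set = imfusion.load('left_camera.imf')[0]   # hypothetical paths
>>> right_set = imfusion.load('right_camera.imf')[0]
>>> stereo_set = imfusion.vision.stereo.StereoSharedImageSet(left_set, right_set)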
- imfusion.vision.stereo.compute_camera_registration_transform(object_points: list[list[ndarray[numpy.float64[3, 1]]]], image_points1: list[list[ndarray[numpy.float64[2, 1]]]], image_points2: list[list[ndarray[numpy.float64[2, 1]]]], camera_matrix: Annotated[list[numpy.ndarray[numpy.float64[3, 3]]], FixedSize(2)], distortion_coeffs: Annotated[list[numpy.ndarray[numpy.float64[5, 1]]], FixedSize(2)], image_size: Annotated[list[numpy.ndarray[numpy.int32[2, 1]]], FixedSize(2)], optimize_intrinsics: bool = False, calibration_settings: CameraCalibrationSettings = None) ndarray[numpy.float64[4, 4]]
Runs registration on given points with given calibration. The passed arrays need to contain the same number of points for each image.
- Parameters:
object_points – list of list of np.array 3x1, Coordinates of mutually detected points for each image pair in world coordinates.
image_points1 – list of list of np.array 2x1, Coordinates of mutually detected points for each image pair in the first image.
image_points2 – list of list of np.array 2x1, Coordinates of mutually detected points for each image pair in the second image.
camera_matrix – tuple of two np.array 3x3, camera intrinsic matrices for the respective image_points.
distortion_coeffs – tuple of two np.array 5x1, camera distortion coefficients for the respective image_points.
image_size – Camera image sizes.
optimize_intrinsics – Specifies whether camera intrinsics are optimized together with stereo registration.
calibration_settings – Calibration settings used when optimize_intrinsics is set to True.
- Returns:
the transformation from the first camera coordinate system to the second camera coordinate system.
- Return type:
ndarray(4, 4)
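A hedged sketch of the expected argument shapes; the point coordinates, intrinsics and image sizes below are synthetic placeholders chosen only to illustrate the call, and a real registration needs many well-distributed detections per image pair:
>>> import numpy as np
>>> import imfusion
>>> K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])  # placeholder intrinsics
>>> dist = np.zeros((5, 1))
>>> size = np.array([640, 480], dtype=np.int32)
>>> object_points = [[np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), np.array([1.0, 1.0, 0.0])]]
>>> image_points1 = [[np.array([300.0, 200.0]), np.array([400.0, 200.0]), np.array([300.0, 300.0]), np.array([400.0, 300.0])]]
>>> image_points2 = [[p + np.array([15.0, 0.0]) for p in image_points1[0]]]
>>> T = imfusion.vision.stereo.compute_camera_registration_transform(
...     object_points, image_points1, image_points2,
...     camera_matrix=[K, K], distortion_coeffs=[dist, dist], image_size=[size, size])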
- imfusion.vision.stereo.convert_interlaced_to_stereo(*args, **kwargs)
Overloaded function.
convert_interlaced_to_stereo(images: imfusion.SharedImageSet, resample_to_original_size: bool = False, first_line_is_left: bool = False, create_stereo_image_set: bool = True) -> Union[imfusion.vision.stereo.StereoSharedImageSet, tuple[imfusion.SharedImageSet, imfusion.SharedImageSet]]
Converts line-interlaced images into stereo image pairs.
- Parameters:
images – Line-interlaced 2D image set.
resample_to_original_size – Interpolates the output images to the original interlaced image size by bilinear interpolation.
first_line_is_left – If True, the left output image starts at the first line of the input image, otherwise the first line is assigned to the right output image.
create_stereo_image_set – If True, return a
StereoSharedImageSet
. Otherwise, return twoSharedImageSet
, where the first one corresponds to the left image set and the second to the right.
- Returns:
If
create_stereo_image_set
is True, returns aStereoSharedImageSet
. Otherwise, returns a tuple containing two
SharedImageSet
objects (left image set, right image set).
- Return type:
StereoSharedImageSet or tuple[SharedImageSet, SharedImageSet]
convert_interlaced_to_stereo(images: imfusion.SharedImageSet, resample_to_original_size: bool = False, first_line_is_left: bool = False, create_stereo_image_set: bool = True) -> Union[imfusion.vision.stereo.StereoSharedImageSet, tuple[imfusion.SharedImageSet, imfusion.SharedImageSet]]
Converts interlaced images into stereo image pairs.
- Parameters:
images – Interlaced 2D image set.
resample_to_original_size – If True, the output images are upsampled to the original size.
first_line_is_left – If True, the left and right output images are swapped.
create_stereo_image_set – If True, return a
StereoSharedImageSet
. Otherwise, return twoSharedImageSet
, where the first one corresponds to the left image set and the second to the right.
- Returns:
If
create_stereo_image_set
is True, returns aStereoSharedImageSet
. Otherwise, returns a tuple containing two
SharedImageSet
objects (left image set, right image set).
- Return type:
StereoSharedImageSet or tuple[SharedImageSet, SharedImageSet]
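A minimal usage sketch; the file path is a placeholder for a line-interlaced 2D image set:
>>> import imfusion
>>> interlaced = imfusion.load('interlaced_video.imf')[0]  # hypothetical path
>>> stereo_set = imfusion.vision.stereo.convert_interlaced_to_stereo(interlaced, resample_to_original_size=True)
>>> left_set, right_set = imfusion.vision.stereo.convert_interlaced_to_stereo(interlaced, create_stereo_image_set=False)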
- imfusion.vision.stereo.convert_side_by_side_to_stereo(images: SharedImageSet, swap_left_and_right: bool = False, create_stereo_image_set: bool = True) StereoSharedImageSet | tuple[SharedImageSet, SharedImageSet]
Converts side-by-side images into stereo image pairs.
- Parameters:
images – Side-by-side 2D image set.
swap_left_and_right – If True, the left output image comes from the right camera.
create_stereo_image_set – If True, return a
StereoSharedImageSet
. Otherwise, return twoSharedImageSet
, where the first one corresponds to the left image set and the second to the right.
- Returns:
If
create_stereo_image_set
is True, returns aStereoSharedImageSet
. Otherwise, returns a tuple containing two
SharedImageSet
objects (left image set, right image set).
- Return type:
StereoSharedImageSet or tuple[SharedImageSet, SharedImageSet]
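A minimal usage sketch; the file path is a placeholder for a side-by-side 2D image set:
>>> import imfusion
>>> sbs = imfusion.load('side_by_side.imf')[0]  # hypothetical path
>>> stereo_set = imfusion.vision.stereo.convert_side_by_side_to_stereo(sbs)
>>> left_set, right_set = imfusion.vision.stereo.convert_side_by_side_to_stereo(sbs, create_stereo_image_set=False)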
- imfusion.vision.stereo.rectify(*args, **kwargs)
Overloaded function.
rectify(images_left: imfusion.SharedImageSet, images_right: imfusion.SharedImageSet, alpha: float = 1, zero_disparity_depth: float = 0) -> tuple[imfusion.SharedImageSet, imfusion.SharedImageSet]
Warps both images so that they appear as if they had been taken with only a horizontal displacement. This simplifies calculating the disparities of each pixel. In the resulting images, all epipolar lines are parallel to the horizontal axis.
- Parameters:
images_left – input left image set.
images_right – input right image set.
alpha – specifies whether the output image should only contain all valid pixels (alpha = 0) or whether all pixels from the input image shall be mapped (alpha = 1). Intermediate values between 0 and 1 provide a compromise between these two cases. -1 applies an automated process.
zero_disparity_depth –
Defines the depth from the camera at which corresponding points will have zero disparity (horizontal shift).
If set to 0, ensures that disparity is zero at infinity.
Used to compute the difference between the left and right principal points (\(\beta\)) along the x-axis, using the formula:
\[\beta = \frac{|t_x \cdot f_{new}|}{\gamma}\]
where \(t_x\) is the horizontal translation between the cameras, \(f_{new}\) is the new focal length computed from the left and right focal lengths, and \(\gamma\) is the zero_disparity_depth.
- Returns:
rectified left image set.
rectified right image set.
rectify(stereo_image: imfusion.vision.stereo.StereoSharedImageSet, alpha: float = 1, zero_disparity_depth: float = 0) -> imfusion.vision.stereo.StereoSharedImageSet
Warps both images so that they appear as if they had been taken with only a horizontal displacement. This simplifies calculating the disparities of each pixel. In the resulting images, all epipolar lines are parallel to the horizontal axis.
- Parameters:
stereo_image – input stereo image set.
alpha – specifies whether the output image should only contain all valid pixels (alpha = 0) or whether all pixels from the input image shall be mapped (alpha = 1). Intermediate values between 0 and 1 provide a compromise between these two cases. -1 applies an automated process.
zero_disparity_depth –
Defines the depth from the camera at which corresponding points will have zero disparity (horizontal shift).
If set to 0, ensures that disparity is zero at infinity.
Used to compute the difference between the left and right principal points (\(\beta\)) along the x-axis, using the formula:
\[\beta = \frac{|t_x \cdot f_{new}|}{\gamma}\]
where \(t_x\) is the horizontal translation between the cameras, \(f_{new}\) is the new focal length computed from the left and right focal lengths, and \(\gamma\) is the zero_disparity_depth.
- Returns:
rectified stereo image set.
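A minimal sketch of the StereoSharedImageSet overload, assuming the set carries the required calibration; the file path is a placeholder:
>>> import imfusion
>>> stereo_set = imfusion.load('stereo_images.imf')[0]  # hypothetical calibrated StereoSharedImageSet
>>> rectified = imfusion.vision.stereo.rectify(stereo_set, alpha=0)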
- imfusion.vision.stereo.register_camera(*args, **kwargs)
Overloaded function.
register_camera(img_left: imfusion.SharedImageSet, img_right: imfusion.SharedImageSet, marker_config: imfusion.vision.marker_configuration.ChessboardInfo, calibration_config: imfusion.vision.CameraCalibrationSettings = None) -> float
register_camera(stereo_img_set: imfusion.vision.stereo.StereoSharedImageSet, marker_config: imfusion.vision.marker_configuration.ChessboardInfo, calibration_config: imfusion.vision.CameraCalibrationSettings = None) -> float
register_camera(img_left: imfusion.SharedImageSet, img_right: imfusion.SharedImageSet, marker_config: imfusion.vision.marker_configuration.CharucoBoardInfo, calibration_config: imfusion.vision.CameraCalibrationSettings = None) -> float
register_camera(stereo_img_set: imfusion.vision.stereo.StereoSharedImageSet, marker_config: imfusion.vision.marker_configuration.CharucoBoardInfo, calibration_config: imfusion.vision.CameraCalibrationSettings = None) -> float
register_camera(img_left: imfusion.SharedImageSet, img_right: imfusion.SharedImageSet, marker_config: imfusion.vision.marker_configuration.ArucoBoardInfo, calibration_config: imfusion.vision.CameraCalibrationSettings = None) -> float
register_camera(stereo_img_set: imfusion.vision.stereo.StereoSharedImageSet, marker_config: imfusion.vision.marker_configuration.ArucoBoardInfo, calibration_config: imfusion.vision.CameraCalibrationSettings = None) -> float
register_camera(img_left: imfusion.SharedImageSet, img_right: imfusion.SharedImageSet, marker_config: imfusion.vision.marker_configuration.CircleBoardInfo, calibration_config: imfusion.vision.CameraCalibrationSettings = None) -> float
register_camera(stereo_img_set: imfusion.vision.stereo.StereoSharedImageSet, marker_config: imfusion.vision.marker_configuration.CircleBoardInfo, calibration_config: imfusion.vision.CameraCalibrationSettings = None) -> float
register_camera(img_left: imfusion.SharedImageSet, img_right: imfusion.SharedImageSet, marker_config: list[imfusion.vision.marker_configuration.AprilTagInfo], calibration_config: imfusion.vision.CameraCalibrationSettings = None) -> float
register_camera(stereo_img_set: imfusion.vision.stereo.StereoSharedImageSet, marker_config: list[imfusion.vision.marker_configuration.AprilTagInfo], calibration_config: imfusion.vision.CameraCalibrationSettings = None) -> float
register_camera(img_left: imfusion.SharedImageSet, img_right: imfusion.SharedImageSet, marker_config: imfusion.vision.marker_configuration.AprilTagBoardInfo, calibration_config: imfusion.vision.CameraCalibrationSettings = None) -> float
register_camera(stereo_img_set: imfusion.vision.stereo.StereoSharedImageSet, marker_config: imfusion.vision.marker_configuration.AprilTagBoardInfo, calibration_config: imfusion.vision.CameraCalibrationSettings = None) -> float
register_camera(img_left: imfusion.SharedImageSet, img_right: imfusion.SharedImageSet, marker_config: list[imfusion.vision.marker_configuration.STagInfo], calibration_config: imfusion.vision.CameraCalibrationSettings = None) -> float
register_camera(stereo_img_set: imfusion.vision.stereo.StereoSharedImageSet, marker_config: list[imfusion.vision.marker_configuration.STagInfo], calibration_config: imfusion.vision.CameraCalibrationSettings = None) -> float
register_camera(img_left: imfusion.SharedImageSet, img_right: imfusion.SharedImageSet, marker_config: str, calibration_config: imfusion.vision.CameraCalibrationSettings = None) -> float
The algorithm performs registration between two cameras based on two sets of 2D images of marker boards taken by these cameras. The two sets of input 2D images must have valid camera calibration and must show a camera calibration target. If the algorithm runs successfully, sets the registration matrix in the
CameraCalibrationDataComponent
to the transformation from the first to the second camera coordinate system: camera 1 gets the identity matrix, camera 2 gets the inverse matrix of registration().
- Parameters:
img_left – input left image SharedImageSet. Only 8-bit grayscale and RGB images are supported.
img_right – input right image SharedImageSet. Only 8-bit grayscale and RGB images are supported.
marker_config – specifies parameters for different types of single markers or marker boards, or a path to a valid XML configuration file.
calibration_config – parameters used to configure the settings required for calibrating a camera.
- Returns:
the root mean square error of the registration.
- Return type:
float
Example
>>> images_sets = imfusion.load(images_path)
>>> imfusion.vision.stereo.register_camera(images_sets[0], images_sets[1], marker_config=path_to_config_file)
>>> cam_to_world_transform = images_sets[1].components.camera_calibration.registration
register_camera(stereo_img_set: imfusion.vision.stereo.StereoSharedImageSet, marker_config: str, calibration_config: imfusion.vision.CameraCalibrationSettings = None) -> float
The algorithm performs registration between two cameras based on two sets of 2D images of marker boards taken by these cameras. The two sets of input 2D images must have valid camera calibration and must show a camera calibration target. If the algorithm runs successfully, sets the registration matrix in the
StereoCalibrationDataComponent
so that both cameras map to the coordinate system of the left camera.
- Parameters:
stereo_img_set – input StereoSharedImageSet. Only 8-bit grayscale and RGB images are supported.
marker_config – specifies parameters for different types of single markers or marker boards, or a path to a valid XML configuration file.
calibration_config – parameters used to configure the settings required for calibrating a camera.
- Returns:
the root mean square error of the registration.
- Return type:
float
Example
>>> imfusion.vision.stereo.register_camera(stereo_img_set, marker_config=path_to_config_file)
>>> cam_to_world_transform = stereo_img_set.components.stereo_calibration.left_to_right_registration
- imfusion.vision.stereo.save(stereo_image_set: StereoSharedImageSet, file_path: str) None
Save a
StereoSharedImageSet
to the specified file path. The path extension is used to determine which file format to save to. Currently supported file formats are: ImFusion File, extension imf.
- Parameters:
stereo_image_set – Instance of StereoSharedImageSet.
file_path – Path to output file. The path extension is used to determine the file format.
- Raises:
RuntimeError – if file_path does not end with the .imf extension, or if saving fails.
Example
>>> stereo_image_set = StereoSharedImageSet(...)
>>> imfusion.vision.stereo.save(stereo_image_set, tmp_path / 'stereo_file.imf')  # saves an ImFusion File
- imfusion.vision.stereo.stereo_reconstruction(*args, **kwargs)
Overloaded function.
stereo_reconstruction(left_image: imfusion.SharedImageSet, right_image: imfusion.SharedImageSet, mask_image: imfusion.SharedImageSet = None, reconstruction_parameters: imfusion.vision.stereo.StereoReconstructionParameters = StereoBlockMatching: numDisparities [64], block size[21], preFilterCap[31], *, export_rectified_images: bool = False, selected_label: int = -1, alpha: float = 1, zero_disparity_depth: float = 0) -> imfusion.vision.stereo.StereoReconstructionResult
Stereo reconstruction algorithm for producing a disparity map. Images must be rectified. Takes two 2D image sets representing the left and the right camera views; the images must contain a camera calibration data component. Optionally, a third image set representing a mask can be passed.
- Parameters:
left_image – input left image set.
right_image – input right image set.
mask_image – input mask image set.
reconstruction_parameters – parameters of the method to perform reconstruction.
export_rectified_images – If set to True, the rectified left and right images (and masks) are also included in the result.
selected_label – if not -1, then masks are applied to disparity maps.
alpha – Free scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Any intermediate value yields an intermediate result between those two extreme cases.
zero_disparity_depth –
Defines the depth from the camera at which corresponding points will have zero disparity (horizontal shift).
If set to 0, ensures that disparity is zero at infinity.
Used to compute the difference between the left and right principal points (\(\beta\)) along the x-axis, using the formula:
\[\beta = \frac{|t_x \cdot f_{new}|}{\gamma}\]
where \(t_x\) is the horizontal translation between the cameras, \(f_{new}\) is the new focal length computed from the left and right focal lengths, and \(\gamma\) is the zero_disparity_depth.
- Returns:
A StereoReconstructionResult encapsulating the reconstruction data, including disparity and depth maps. If 'export_rectified_images' is set to True, it also includes the rectified left and right images and the rectified masks.
stereo_reconstruction(stereo_image: imfusion.vision.stereo.StereoSharedImageSet, mask_image: imfusion.SharedImageSet = None, reconstruction_parameters: imfusion.vision.stereo.StereoReconstructionParameters = StereoBlockMatching: numDisparities [64], block size[21], preFilterCap[31], *, compute_rectified_images: bool = False, selected_label: int = -1, alpha: float = 1, zero_disparity_depth: float = 0) -> imfusion.vision.stereo.StereoReconstructionResult
Stereo reconstruction algorithm for producing a disparity map. Images must be rectified. Takes a stereo image set representing the left and the right camera views; the images must contain a camera calibration data component. Optionally, a second image set representing a mask can be passed.
- Parameters:
stereo_image – input stereo image set.
mask_image – input mask image set.
reconstruction_parameters – parameters of the method to perform reconstruction.
export_rectified_images – If set to True, the rectified left and right images (and masks) are also included in the result.
selected_label – if not -1, then masks are applied to disparity maps.
alpha – Free scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Any intermediate value yields an intermediate result between those two extreme cases.
zero_disparity_depth –
Defines the depth from the camera at which corresponding points will have zero disparity (horizontal shift).
If set to 0, ensures that disparity is zero at infinity.
Used to compute the difference between the left and right principal points (\(\beta\)) along the x-axis, using the formula:
\[\beta = \frac{|t_x \cdot f_{new}|}{\gamma}\]
where \(t_x\) is the horizontal translation between the cameras, \(f_{new}\) is the new focal length computed from the left and right focal lengths, and \(\gamma\) is the zero_disparity_depth.
- Returns:
A StereoReconstructionResult encapsulating the reconstruction data, including disparity and depth maps. If 'export_rectified_images' is set to True, it also includes the rectified left and right images and the rectified masks.
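A minimal sketch of the two-image-set overload, assuming both sets carry a camera calibration data component; the file paths are placeholders:
>>> import imfusion
>>> left_set = imfusion.load('left_camera.imf')[0]   # hypothetical calibrated image sets
>>> right_set = imfusion.load('right_camera.imf')[0]
>>> result = imfusion.vision.stereo.stereo_reconstruction(left_set, right_set, export_rectified_images=True)
>>> disparity, depth = result.disparity, result.depth
>>> rectified_left = result.left_rectified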
imfusion.vision.features
- class imfusion.vision.features.AutomaticMatchPruner(self: AutomaticMatchPruner, inlier_threshold: float = 3.0)
Bases:
MatchPruner
Automatically selects between Homography and Fundamental Matrix depending on the score computed for each model, and prunes with a corresponding selected pruner.
- Parameters:
inlier_threshold – specifies how far, in pixels, a correspondence is allowed to be from the position predicted by the model in order to be counted as an inlier.
- property inlier_threshold
specifies how far, in pixels, a correspondence is allowed to be from the position predicted by the model in order to be counted as an inlier.
- class imfusion.vision.features.BruteForceMatcher(self: imfusion.vision.features.BruteForceMatcher, norm: imfusion.vision.features.MatcherNorm = <MatcherNorm.HAMMING: 2>, cross_check: bool = False, ratio_threshold: float = 0.8)
Bases:
Matcher
Uses a nearest-neighbour search algorithm.
- Parameters:
norm – selects the norm used for distance computation.
cross_check – if True, instead of searching for the k nearest neighbours, it searches for the nearest neighbour of each keypoint bidirectionally. The match is retained only if the nearest neighbours agree.
ratio_threshold – Lowe’s Ratio heuristic. Used to filter out ambiguous feature matches. It ensures that the best match is significantly better than the second-best match, increasing the reliability of feature correspondences. Used only for the
DOT_PRODUCT
.
- property cross_check
if True, instead of searching for the k nearest neighbours, it searches for the nearest neighbour of each keypoint bidirectionally. The match is retained only if the nearest neighbours agree.
- property norm
selects the norm used for distance computation.
- property ratio_threshold
Lowe’s Ratio heuristic. Used to filter out ambiguous feature matches. It ensures that the best match is significantly better than the second-best match, increasing the reliability of feature correspondences. Used only for the
DOT_PRODUCT
.
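A minimal sketch of plugging a BruteForceMatcher into Detection (documented below); the file path is a placeholder, and the L2 norm is chosen here because SIFT produces floating-point descriptors:
>>> import imfusion
>>> from imfusion import vision
>>> images = imfusion.load('frames.imf')[0]  # hypothetical 2D image set
>>> matcher = vision.features.BruteForceMatcher(norm=vision.features.MatcherNorm.L2, cross_check=True)
>>> detection = vision.features.Detection(images, detector_type=vision.features.SIFT(num_features=2000), matcher=matcher)
>>> matches = detection.detect_matches(0, 1)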
- class imfusion.vision.features.Detection(self: Detection, images: SharedImageSet, detector_type: DetectorType = ORB(max_features=500, scale_factor=1.2, num_levels=8), sampler: NMSSampler | None = None, matcher: Matcher = BruteForceMatcher(cross_check=false, ratio_threshold=0.8), pruner: MatchPruner | None = None)
Bases:
pybind11_object
Runs feature detection and matching on the input images.
- Parameters:
detector_type – detects feature keypoints on the 2D images. Refer to DetectorType.
sampler – type of the sampler to filter the features by some criterion. Refer to Sampler.
matcher – matcher to be used for finding correspondences between keypoints across two images. Refer to Matcher.
pruner – removes correspondences which do not fulfill the pruning constraints. Refer to MatchPruner.
Example
The following example shows how to run feature detection, extract matches and plot the result using OpenCV:
images, *_ = imfusion.load(...)
out_path = pathlib.Path(...)
out_path.mkdir(parents=True, exist_ok=True)
detector = vision.features.SIFT(num_features=3000)
sampler = vision.features.NMSSampler(radius=5)
pruner = vision.features.AutomaticMatchPruner(9.99)
detection = vision.features.Detection(images, detector_type=detector, sampler=sampler, pruner=pruner)
key_points = detection.extract_keypoints()
match_indices = [(1, 5), (5, 11), (11, 16)]
for match_index in match_indices:
    src, trg = match_index[0], match_index[1]
    matches = detection.detect_matches(src, trg)
    keypoints1 = [cv2.KeyPoint(key_point.location[0], key_point.location[1], size=1) for key_point in key_points[src]]
    keypoints2 = [cv2.KeyPoint(key_point.location[0], key_point.location[1], size=1) for key_point in key_points[trg]]
    cv_matches = [cv2.DMatch(match.sourceIndex, match.targetIndex, 0) for match in matches]
    img_src = cv2.cvtColor(np.array(images[src]), cv2.COLOR_RGB2GRAY)
    img_trg = cv2.cvtColor(np.array(images[trg]), cv2.COLOR_RGB2GRAY)
    matched_img = cv2.drawMatches(img_src, keypoints1, img_trg, keypoints2, cv_matches, None)
    cv2.imwrite(str(out_path / f"match_{src:04d}_{trg:04d}.png"), matched_img)
- detect_matches(self: Detection, src: int, dst: int) list[FeatureMatch]
Matches detected keypoints between two images, and then prunes the result, using the provided matcher and pruner.
- Parameters:
src – index of the source image.
dst – index of the destination image.
- Returns:
A list of good matches.
- class imfusion.vision.features.DetectorType
Bases:
pybind11_object
Base class for feature detection on 2D images.
- property name
Name of the detector.
- class imfusion.vision.features.DetectorWithAdaptiveThreshold
Bases:
DetectorType
- Feature detection with the ability to iteratively lower the detection threshold in order to fill an image grid with at least one feature per cell. Works as follows:
The detector runs the detection function of the underlying feature detector with a very large feature budget (1,000,000 features) and sorts the results according to some criterion.
The detector checks which grid cells contain at least one feature; if not all cells are filled, the algorithm decreases the threshold.
Steps 1-2 are repeated until either all cells are filled or the threshold reaches its minimum.
The final keypoints contain the best keypoints up to the configured maximum number of features, plus the best keypoint per cell.
- property grid_cell_size
the cell size of the image grid. The algorithm will attempt to fill each cell with at least one feature.
- property iteratively_adapt_threshold
indicates whether the adaptive thresholding should be applied.
- property minimum_threshold
the lowest value of the threshold to be tested.
- property threshold_step
the threshold step used to decrease the threshold from iteration to iteration.
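Adaptive thresholding is configured through the constructor arguments of the concrete detectors documented below (e.g. SIFT or ORB); a minimal sketch, assuming images is a previously loaded 2D image set:
>>> from imfusion import vision
>>> detector = vision.features.SIFT(num_features=2000, adapt_threshold=True, cell_size=100, min_threshold=0.01, threshold_step=0.01)
>>> detection = vision.features.Detection(images, detector_type=detector)
>>> key_points = detection.extract_keypoints()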
- class imfusion.vision.features.FeatureMatch
Bases:
pybind11_object
Correspondence between two images.
- property score
- property sourceIndex
- property targetIndex
- class imfusion.vision.features.FundamentalMatrixMatchPruner(self: FundamentalMatrixMatchPruner, inlier_threshold: float = 3.0)
Bases:
MatchPruner
The underlying model is a general two-view constraint. The scene must not consist of only a single plane; in that case, use the Homography model instead.
- Parameters:
inlier_threshold – specifies how far, in pixels, a correspondence is allowed to be from the position predicted by the model in order to be counted as an inlier.
- property inlier_threshold
specifies how far, in pixels, a correspondence is allowed to be from the position predicted by the model in order to be counted as an inlier.
- class imfusion.vision.features.GMSPruner(self: GMSPruner, considerRotation: bool = True, considerScale: bool = True, thresholdFactor: int = 6)
Bases:
MatchPruner
Applies motion smoothness as a pruning criterion. Recommended for use with a large number of matches (~10k).
- Reference:
Bian, JiaWang, Wen-Yan Lin, Yasuyuki Matsushita, Sai-Kit Yeung, Tan-Dat Nguyen, and Ming-Ming Cheng. “Gms: Grid-based motion statistics for fast, ultra-robust feature correspondence.” In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4181-4190. 2017.
- Parameters:
consider_rotation – if True, the algorithm will consider rotation as a possible motion. Otherwise, only translation (or also scaling) is considered.
consider_scale – if True, the algorithm will consider scale as a possible motion. Otherwise, only translation (or also rotation) is considered.
threshold_factor – the threshold factor is multiplied by the median of the motion statistics to obtain the threshold. The higher the value, the more matches are pruned.
- property consider_rotation
if True, the algorithm will consider rotation as a possible motion. Otherwise, only translation (or also scaling) is considered.
- property consider_scale
if True, the algorithm will consider scale as a possible motion. Otherwise, only translation (or also rotation) is considered.
- property threshold_factor
the threshold factor is multiplied by the median of the motion statistics to obtain the threshold. The higher the value, the more matches are pruned.
- class imfusion.vision.features.GridBasedMatcher(self: imfusion.vision.features.GridBasedMatcher, norm: imfusion.vision.features.MatcherNorm = <MatcherNorm.HAMMING: 2>, window_size: int = 100, check_orientation: bool = True)
Bases:
Matcher
Searches for the best match within the radius.
- Parameters:
norm – selects the norm used for distance computation. The DOT_PRODUCT norm is not supported for this matcher.
window_size – radius of the search area.
check_orientation – if True, keypoint orientation is taken into account during matching.
- property check_orientation
if True, keypoint orientation is taken into account during matching.
- property norm
selects the norm used for distance computation. The
DOT_PRODUCT
norm is not supported for this matcher.
- property window_size
radius of the search area.
- class imfusion.vision.features.HomographyMatchPruner(self: HomographyMatchPruner, inlier_threshold: float = 3.0)
Bases:
MatchPruner
The underlying assumption of this model is that all points lie on the same scene plane.
- Parameters:
inlier_threshold – specifies how far, in pixels, a correspondence is allowed to be from the position predicted by the model in order to be counted as an inlier.
- property inlier_threshold
specifies how far, in pixels, a correspondence is allowed to be from the position predicted by the model in order to be counted as an inlier.
- class imfusion.vision.features.Keypoint
Bases:
pybind11_object
Keypoint used for feature detection.
- property angle
Computed orientation of the keypoint (-1 if not applicable).
- property location
Location in pixel coordinates.
- property response
The response by which the strongest keypoints have been selected. Can be used for further sorting or subsampling.
- property scale
Detection scale.
- class imfusion.vision.features.MatchPruner
Bases:
pybind11_object
The pruning model removes correspondences which do not fulfill the pruning constraints.
- property name
Name of the Pruner.
- class imfusion.vision.features.MatchScorePruner(self: MatchScorePruner, use_max_number_of_matches: bool = True, max_number_of_matches: int = 250, use_match_score_threshold: bool = True, match_score_threshold: float = 50.0)
Bases:
MatchPruner
Prunes matches either by selecting the N best matches, by removing matches whose score (distance) is higher than a threshold, or both. use_max_number_of_matches enables the first option, while use_match_score_threshold enables the second. max_number_of_matches specifies the number of best matches to keep for the first option; match_score_threshold specifies the maximum match score for the second.
- Parameters:
use_max_number_of_matches – enables pruning based on max number of matches.
max_number_of_matches – max number of matches specifies the number of best matches to be kept when enabling max_number_of_matches.
use_match_score_threshold – enables pruning based on score threshold.
match_score_threshold – specifies the maximum match score when enabling use_match_score_threshold.
- property match_score_threshold
specifies the maximum match score when enabling use_match_score_threshold.
- property max_number_of_matches
max number of matches specifies the number of best matches to be kept when enabling max_number_of_matches.
- property use_match_score_threshold
enables pruning based on score threshold.
- property use_max_number_of_matches
enables pruning based on max number of matches.
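A minimal sketch of keeping only the 100 best matches, assuming images is a previously loaded 2D image set:
>>> from imfusion import vision
>>> pruner = vision.features.MatchScorePruner(use_max_number_of_matches=True, max_number_of_matches=100, use_match_score_threshold=False)
>>> detection = vision.features.Detection(images, pruner=pruner)
>>> matches = detection.detect_matches(0, 1)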
- class imfusion.vision.features.Matcher
Bases:
pybind11_object
Base class for feature matching on 2D images.
- property name
Name of the matcher.
- class imfusion.vision.features.MatcherNorm(self: MatcherNorm, value: int)
Bases:
pybind11_object
Members:
L1
L2
HAMMING
HAMMING2
DOT_PRODUCT
- DOT_PRODUCT = <MatcherNorm.DOT_PRODUCT: 4>
- HAMMING = <MatcherNorm.HAMMING: 2>
- HAMMING2 = <MatcherNorm.HAMMING2: 3>
- L1 = <MatcherNorm.L1: 0>
- L2 = <MatcherNorm.L2: 1>
- property name
- property value
- class imfusion.vision.features.NMSSampler(self: NMSSampler, radius: float = 5.0)
Bases:
pybind11_object
Features are first sorted by their response. Then, for every feature point, it is checked whether other features lie within the specified radius; if so, those features are removed.
- Parameters:
radius – defines the minimum allowable distance between two features.
- property radius
defines the minimum allowable distance between two features.
- class imfusion.vision.features.ORB(self: ORB, fast_threshold: int = 20, edge_threshold: int = 31, first_level: int = 0, max_features: int = 500, num_levels: int = 8, patch_size: int = 31, scale_factor: float = 1.2000000476837158, score_type: int = 0, wta_k: int = 2, adapt_threshold: bool = False, cell_size: int = 100, min_threshold: float = 5.0, threshold_step: float = 2.0)
Bases:
DetectorWithAdaptiveThreshold
Implementing the ORB (oriented BRIEF) keypoint detector and descriptor extractor. The algorithm uses FAST in pyramids to detect stable keypoints, selects the strongest features using FAST or Harris response, finds their orientation using first-order moments and computes the descriptors using BRIEF (where the coordinates of random point pairs (or k-tuples) are rotated according to the measured orientation).
This method is well-suited for real-time applications and offers good performance for matching and tracking features.
- Parameters:
fast_threshold – threshold value of the FAST descriptor detection.
edge_threshold – the border size after which the features are not detected.
first_level – the pyramid level of the source image.
max_features – maximum number of features to detect in image.
num_levels – number of levels in the image pyramid.
patch_size – size of the patch used for the BRIEF descriptors.
scale_factor – pyramid decimation ratio. If the value is 2, each next pyramid level has half the resolution of the previous one.
score_type – the algorithm used for ranking the descriptiveness. ORB uses the Harris score by default.
wta_k – the number of points used to produce each element of the oriented BRIEF descriptor.
adapt_threshold – indicates whether the adaptive thresholding should be applied.
cell_size – the cell size of the image grid. The algorithm will attempt to fill each cell with at least one feature.
min_threshold – the lowest value of the threshold to be tested.
threshold_step – the threshold step used to decrease the threshold from iteration to iteration.
- property edge_threshold
the border size after which the features are not detected.
- property fast_threshold
threshold value of the FAST descriptor detection.
- property first_level
the pyramid level of the source image.
- property max_features
maximum number of features to detect in image.
- property num_levels
number of levels in the image pyramid.
- property patch_size
size of the patch used for the BRIEF descriptors.
- property scale_factor
pyramid decimation ratio. If the value is 2, each next pyramid level has half the resolution of the previous one.
- property score_type
the algorithm used for ranking the descriptiveness. ORB uses Harris score by default.
- property wta_k
the number of points used to produce each element of the oriented BRIEF descriptor.
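A minimal sketch pairing ORB with the default Hamming-norm BruteForceMatcher (suited to binary BRIEF descriptors), assuming images is a previously loaded 2D image set:
>>> from imfusion import vision
>>> detector = vision.features.ORB(max_features=1000, scale_factor=1.2, num_levels=8)
>>> detection = vision.features.Detection(images, detector_type=detector, matcher=vision.features.BruteForceMatcher())
>>> key_points = detection.extract_keypoints()
>>> matches = detection.detect_matches(0, 1)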
- class imfusion.vision.features.RIDE(self: RIDE, max_features: int = 5000, rotation_invariantize: bool = True, apply_dense_nms: bool = True, nms_radius: int = 4)
Bases:
DetectorType
Robust Independent Descriptor Extraction (RIDE) is a feature detection method designed for handling challenging scenarios, such as variations in illumination and perspective. It provides reliable feature extraction for complex scenes.
- Parameters:
max_features – maximum number of features to detect in image.
rotation_invariantize – Whether to invariantize to rotations. If checked, the algorithm will compute the orientation of each feature and rotate the feature descriptor accordingly.
apply_dense_nms – Whether to use dense non maximum suppression (NMS). This is different to the standard NMS, as it applies NMS on the dense grid of features, rather than on the sparse set of detected features.
nms_radius – The radius of the dense NMS.
- property apply_dense_nms
Whether to use dense non maximum suppression (NMS). This is different to the standard NMS, as it applies NMS on the dense grid of features, rather than on the sparse set of detected features.
- property max_features
maximum number of features to detect in image.
- property nms_radius
The radius of the dense NMS.
- property rotation_invariantize
Whether to invariantize to rotations. If checked, the algorithm will compute the orientation of each feature and rotate the feature descriptor accordingly.
- class imfusion.vision.features.SIFT(self: SIFT, contrast_threshold: float = 0.04, num_features: int = 500, num_octave_layers: int = 3, edge_threshold: float = 10.0, sigma: float = 1.6, adapt_threshold: bool = False, cell_size: int = 100, min_threshold: float = 0.01, threshold_step: float = 0.01)
Bases:
DetectorWithAdaptiveThreshold
Implementation of the Scale Invariant Feature Transform (SIFT) algorithm for extracting keypoints and computing descriptors. This method is invariant to scaling, rotation, and partial affine transformations, making it suitable for various computer vision tasks.
- Parameters:
contrast_threshold – the threshold value to filter out features coming from low-contrast regions. The higher the value, the fewer features are produced.
num_features – maximum number of features to detect in image.
num_octave_layers – the number of layers in each octave.
edge_threshold – the threshold value to filter out edge-like features. The higher the value, the more features are produced.
sigma – sigma of the Gaussian applied to the first image octave.
adapt_threshold – indicates whether the adaptive thresholding should be applied.
cell_size – the cell size of the image grid. The algorithm will attempt to fill each cell with at least one feature.
min_threshold – the lowest value of the threshold to be tested.
threshold_step – the threshold step used to decrease the threshold from iteration to iteration.
- property contrast_threshold
the threshold value to filter out features coming from low-contrast regions. The higher the value, the fewer features are produced.
- property edge_threshold
the threshold value to filter out edge-like features. The higher the value, the more features are produced.
- property num_features
maximum number of features to detect in image.
- property num_octave_layers
the number of layers in each octave.
- property sigma
sigma of the Gaussian applied to the first image octave.
- class imfusion.vision.features.ShiTomasi(self: ShiTomasi, num_features: int = 1024, block_size: int = 3, gradient_size: int = 3, quality_level: float = 0.01, min_dist: float = 10.0, free_param: float = 0.04, use_harris_detector: bool = False)
Bases:
DetectorType
Finds the most prominent corners in the image or in the specified image region. This method is based on the minimum eigenvalue of the gradient matrix, making it effective for detecting strong corners in an image.
- Parameters:
num_features – maximum number of features to detect in image.
block_size – size of an average block for computing a derivative covariation matrix over each pixel neighborhood.
gradient_size – size of an average block for computing gradient over each pixel neighborhood.
quality_level – parameter characterizing the minimal accepted quality of image corners. The parameter value is multiplied by the best corner quality measure, which is the minimal eigenvalue or the Harris function response. The corners with the quality measure less than the product are rejected.
min_dist – Minimum possible Euclidean distance between the returned corners.
free_param – Free parameter of the Harris detector.
use_harris_detector – If True, the Harris corner measure is used instead of the Shi-Tomasi measure.
- property block_size
size of an average block for computing a derivative covariation matrix over each pixel neighborhood.
- property gradient_size
size of an average block for computing gradient over each pixel neighborhood.
- property k
Free parameter of the Harris detector.
- property min_dist
Minimum possible Euclidean distance between the returned corners.
- property num_features
maximum number of features to detect in image.
- property quality_level
parameter characterizing the minimal accepted quality of image corners. The parameter value is multiplied by the best corner quality measure, which is the minimal eigenvalue or the Harris function response. The corners with the quality measure less than the product are rejected.
- property use_harris_detector
If True, the Harris corner measure is used instead of the Shi-Tomasi measure.