Reference

imfusion

imfusion - ImFusion SDK for Medical Imaging

This module provides Python bindings for the C++ ImFusion libraries.

exception imfusion.AlgorithmExecutionError

Bases: RuntimeError

exception imfusion.FileNotFoundError

Bases: FileNotFoundError

exception imfusion.IOError

Bases: OSError

exception imfusion.IncompatibleError

Bases: ValueError

exception imfusion.MissingLicenseError

Bases: RuntimeError

class imfusion.Algorithm(self: BaseAlgorithm, actions: list[object])

Bases: BaseAlgorithm

Base class for Algorithms.

An Algorithm accepts certain Data as input and performs some computation on it.

Example for an algorithm that takes exactly one image and prints its name:

>>> class MyAlgorithm(Algorithm):
...     def __init__(self, image):
...         super().__init__()
...         self.image = image
...
...     @classmethod
...     def convert_input(cls, data):
...         images = data.images()
...         if len(images) == 1 and len(data) == 1:
...             return [images[0]]
...         raise IncompatibleError('Requires exactly one image')
...
...     def compute(self):
...         print(self.image.name)

In order to make an Algorithm available to the ImFusion Suite (i.e. the context menu when right-clicking on selected data), it has to be registered to the ApplicationController:

>>> imfusion.register_algorithm('Python.MyAlgorithm', 'My Algorithm', MyAlgorithm)  # doctest: +SKIP

If the Algorithm is created through the ImFusion Suite, the convert_input() method is called to determine if the Algorithm is compatible with the desired input data. If this method does not raise an exception, the Algorithm is initialized with the data returned by convert_input(). The implementation is similar to this:

try:
    input = MyAlgorithm.convert_input(some_data)
    return MyAlgorithm(*input)
except IncompatibleError:
    return None

The Algorithm class also provides default implementations for the configuration() and configure() methods that automatically serialize attributes created with add_param().

class action(display_name: str)

Bases: object

Decorator to demarcate a method as an “action”. Actions are displayed as additional buttons when creating an AlgorithmController in the Suite and can be run generically, using their id, through run_action().

Parameters:

display_name (str) – Text that should be shown on the Controller button.
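For illustration, a minimal sketch of declaring an action on an Algorithm subclass (the method and display names are hypothetical):

>>> class MyAlgorithm(Algorithm):
...     @Algorithm.action('Say Hello')
...     def say_hello(self):
...         print('Hello')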

static action_wrapper(func: Callable[[BaseAlgorithm], Status | None]) Callable[[BaseAlgorithm], Status]

Helper that returns UNKNOWN automatically if the wrapped method did not return a status.

Parameters:

func (Callable[[BaseAlgorithm], Status | None]) –

Return type:

Callable[[BaseAlgorithm], Status]

add_param(name, value, attributes='')

Add a new parameter to the object.

The parameter is available as a new attribute with the given name and value. The attribute will be configured automatically.

>>> class MyAlgorithm(Algorithm):
...     def __init__(self):
...         super().__init__()
...         self.add_param('x', 5)
>>> a = MyAlgorithm()
>>> a.x
5
configuration()

Returns a copy of the current algorithm configuration.

configure(p)

Sets the current algorithm configuration with the given Properties.

classmethod convert_input(data: List[Data]) List[Data]

Convert the given DataList to a valid input for the algorithm.

Must be overridden in derived classes. Raise an IncompatibleError if the given data does not exactly match the required input of the algorithm. Should return a list, a dict or a generator.

Parameters:

data (List[Data]) –

Return type:

List[Data]

output()

Return the output generated by the previous call to compute(). The returned type must be a list of Data objects! The default implementation returns an empty list.

class imfusion.Annotation

Bases: pybind11_object

class AnnotationType(self: AnnotationType, value: int)

Bases: pybind11_object

Members:

CIRCLE

LINE

POINT

POLY_LINE

RECTANGLE

CIRCLE = <AnnotationType.CIRCLE: 0>
LINE = <AnnotationType.LINE: 1>
POINT = <AnnotationType.POINT: 2>
POLY_LINE = <AnnotationType.POLY_LINE: 3>
RECTANGLE = <AnnotationType.RECTANGLE: 4>
property name
property value
on_editing_finished(self: Annotation, callback: object) SignalConnection

Register a callback which is called when the annotation is fully defined by the user.

The callback must not require any arguments.

>>> a = imfusion.app.annotation_model.create_annotation(imfusion.Annotation.LINE)
>>> def callback():
...     print("All points are defined")
>>> a.on_editing_finished(callback)
>>> a.start_editing()
on_points_changed(self: Annotation, callback: object) SignalConnection

Register a callback which is called when any of the points change position.

The callback must not require any arguments.

>>> a = imfusion.app.annotation_model.create_annotation(imfusion.Annotation.LINE)
>>> def callback():
...     print("Points changed")
>>> a.on_points_changed(callback)
>>> a.start_editing()
start_editing(self: Annotation) None

Start interactive placement of the annotation.

This can currently only be called once.

CIRCLE = <AnnotationType.CIRCLE: 0>
LINE = <AnnotationType.LINE: 1>
POINT = <AnnotationType.POINT: 2>
POLY_LINE = <AnnotationType.POLY_LINE: 3>
RECTANGLE = <AnnotationType.RECTANGLE: 4>
property color

Color of the annotation as a normalized RGB tuple.

property editable

Whether the annotation can be manipulated by the user.

property label_text
property label_visible
property line_width
property max_points

The maximum amount of points this annotation supports.

A -1 indicates that this annotation supports any number of points.

property name
property points

The points which define the annotation in world coordinates.

property type

Return the type of this annotation.

Returns None if this type is only partially supported in Python.

property visible
class imfusion.AnnotationModel

Bases: pybind11_object

create_annotation(self: AnnotationModel, arg0: AnnotationType) Annotation
property annotations
class imfusion.ApplicationController

Bases: pybind11_object

An ApplicationController instance serves as the center of the ImFusion SDK.

It provides an OpenGL context and a DataModel, executes algorithms, and more. While multiple instances are possible, there is generally only one instance.

add_algorithm(self: ApplicationController, id: str, data: list = [], properties: Properties = None) object

Add the algorithm with the given name to the application.

The algorithm will only be created if it is compatible with the given data. The optional Properties object will be used to configure the algorithm. Returns the created algorithm or None if no compatible algorithm could be found.

>>> app.add_algorithm("Create Synthetic Data", [])  
<imfusion._bindings.BaseAlgorithm object at ...>
close_all(self: ApplicationController) None

Delete all algorithms and datasets. Make sure to not reference any deleted objects after calling this!

execute_algorithm(self: ApplicationController, id: str, data: list = [], properties: Properties = None) list

Execute the algorithm with the given name and return its output.

The algorithm will only be executed if it is compatible with the given data. The optional Properties object will be used to configure the algorithm before executing it. Any data created by the algorithm is added to the DataModel before being returned.
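For example, a sketch reusing the algorithm name from the add_algorithm() example above (any produced data is also added to the DataModel):

>>> output = app.execute_algorithm("Create Synthetic Data", [])  # doctest: +SKIP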

load_workspace(self: ApplicationController, path: str, **kwargs) bool

Loads a workspace file and returns True if the loading was successful. Placeholders can be specified as keyword arguments, for example:

>>> app.load_workspace("path/to/workspace.iws", sweep=sweep, case=case)

open(self: ApplicationController, path: str) list

Tries to open the given file path as data. If successful, the data is added to the DataModel and returned; otherwise a FileNotFoundError is raised.
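For example (the path is hypothetical):

>>> data = app.open("/path/to/image.dcm")  # doctest: +SKIP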

remove_algorithm(self: ApplicationController, algorithm: BaseAlgorithm) None

Removes and deletes the given algorithm from the application. Don't reference the given algorithm afterwards!

save_workspace(self: ApplicationController, path: str) bool

Saves the current workspace to an .iws file.

select_data(*args, **kwargs)

Overloaded function.

  1. select_data(self: imfusion._bindings.ApplicationController, arg0: imfusion._bindings.Data) -> None

  2. select_data(self: imfusion._bindings.ApplicationController, arg0: imfusion._bindings.DataList) -> None

  3. select_data(self: imfusion._bindings.ApplicationController, arg0: list) -> None

update_display(self: ApplicationController) None
property algorithms

Return a list of all open algorithms.

property annotation_model
property data_model
property display
property selected_data
class imfusion.BaseAlgorithm(self: BaseAlgorithm, actions: list[object])

Bases: Configurable

Low-level base class for all algorithms.

This interface mirrors the C++ interface very closely. Instances of this class are returned when you create an Algorithm that exists in the C++ SDK, either through add_algorithm() or create_algorithm().

If you want to implement your own Algorithms in Python, see Algorithm instead.

class Action

Bases: pybind11_object

property display_name
property id
property is_hidden
class Status(self: Status, value: int)

Bases: pybind11_object

Members:

UNKNOWN

SUCCESS

ERROR

INVALID_INPUT

INCOMPLETE_INPUT

OUT_OF_MEMORY_HOST

OUT_OF_MEMORY_GPU

UNSUPPORTED_GPU

UNKNOWN_ACTION

USER

ERROR = <Status.ERROR: 1>
INCOMPLETE_INPUT = <Status.INCOMPLETE_INPUT: 3>
INVALID_INPUT = <Status.INVALID_INPUT: 2>
OUT_OF_MEMORY_GPU = <Status.OUT_OF_MEMORY_GPU: 5>
OUT_OF_MEMORY_HOST = <Status.OUT_OF_MEMORY_HOST: 4>
SUCCESS = <Status.SUCCESS: 0>
UNKNOWN = <Status.UNKNOWN: -1>
UNKNOWN_ACTION = <Status.UNKNOWN_ACTION: 7>
UNSUPPORTED_GPU = <Status.UNSUPPORTED_GPU: 6>
USER = <Status.USER: 1000>
property name
property value
compute(self: BaseAlgorithm) None
output(self: BaseAlgorithm) list
output_annotations(self: BaseAlgorithm) list[Annotation]
run_action(self: BaseAlgorithm, id: str) Status

Run one of the registered actions.

Parameters:

id (str) – Identifier of the action to run.

ERROR = <Status.ERROR: 1>
INCOMPLETE_INPUT = <Status.INCOMPLETE_INPUT: 3>
INVALID_INPUT = <Status.INVALID_INPUT: 2>
OUT_OF_MEMORY_GPU = <Status.OUT_OF_MEMORY_GPU: 5>
OUT_OF_MEMORY_HOST = <Status.OUT_OF_MEMORY_HOST: 4>
SUCCESS = <Status.SUCCESS: 0>
UNKNOWN = <Status.UNKNOWN: -1>
UNKNOWN_ACTION = <Status.UNKNOWN_ACTION: 7>
UNSUPPORTED_GPU = <Status.UNSUPPORTED_GPU: 6>
USER = <Status.USER: 1000>
property actions

List of registered actions.

property id
property input
property name
property status
class imfusion.Configurable

Bases: pybind11_object

configuration(self: Configurable) Properties
configure(self: Configurable, properties: Properties) None
configure_defaults(self: Configurable) None
class imfusion.ConsoleController(self: ConsoleController, name: str = 'ImFusion Python Module')

Bases: ApplicationController

ApplicationController without a UI interface.

This class is not available in the embedded Python interpreter in the ImFusion Suite.
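A minimal headless setup might look like this (a sketch; the file path is hypothetical):

>>> import imfusion
>>> app = imfusion.ConsoleController()  # doctest: +SKIP
>>> data = app.open("/path/to/volume.nii")  # doctest: +SKIP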

class imfusion.CroppingMask(self: CroppingMask, dimensions: ndarray[numpy.int32[3, 1]])

Bases: Mask

Simple axis-aligned cropping mask with optional roundness.
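A minimal construction sketch (dimensions and property values are illustrative):

>>> import numpy as np
>>> mask = imfusion.CroppingMask(np.array([64, 64, 1], dtype=np.int32))
>>> mask.roundness = 100  # 100 percent roundness, i.e. an ellipse
>>> mask.inverted = False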

class RoundDims(self: RoundDims, value: int)

Bases: pybind11_object

Members:

XY

YZ

XZ

XYZ

XY = <RoundDims.XY: 0>
XYZ = <RoundDims.XYZ: 3>
XZ = <RoundDims.XZ: 2>
YZ = <RoundDims.YZ: 1>
property name
property value
XY = <RoundDims.XY: 0>
XYZ = <RoundDims.XYZ: 3>
XZ = <RoundDims.XZ: 2>
YZ = <RoundDims.YZ: 1>
property border

Number of pixels cropped away

property inverted

Whether the mask is inverted

property roundness

Roundness in percent (100 means an ellipse, 0 a rectangle)

property roundness_dims

The dimensions to which the roundness parameter should be applied

class imfusion.Data

Bases: pybind11_object

class Kind(self: Kind, value: int)

Bases: pybind11_object

Members:

UNKNOWN

IMAGE

VOLUME

IMAGE_SET

VOLUME_SET

IMAGE_STREAM

VOLUME_STREAM

POINT_SET

SURFACE

TRACKING_STREAM

TRACKING_DATA

IMAGE = <Kind.IMAGE: 1>
IMAGE_SET = <Kind.IMAGE_SET: 3>
IMAGE_STREAM = <Kind.IMAGE_STREAM: 5>
POINT_SET = <Kind.POINT_SET: 7>
SURFACE = <Kind.SURFACE: 8>
TRACKING_DATA = <Kind.TRACKING_DATA: 10>
TRACKING_STREAM = <Kind.TRACKING_STREAM: 9>
UNKNOWN = <Kind.UNKNOWN: 0>
VOLUME = <Kind.VOLUME: 2>
VOLUME_SET = <Kind.VOLUME_SET: 4>
VOLUME_STREAM = <Kind.VOLUME_STREAM: 6>
property name
property value
class Modality(self: Modality, value: int)

Bases: pybind11_object

Members:

NA

XRAY

CT

MRI

ULTRASOUND

VIDEO

NM

OCT

LABEL

CT = <Modality.CT: 2>
LABEL = <Modality.LABEL: 8>
MRI = <Modality.MRI: 3>
NA = <Modality.NA: 0>
NM = <Modality.NM: 6>
OCT = <Modality.OCT: 7>
ULTRASOUND = <Modality.ULTRASOUND: 4>
VIDEO = <Modality.VIDEO: 5>
XRAY = <Modality.XRAY: 1>
property name
property value
matrix_from_world(self: Data) ndarray[numpy.float64[4, 4]]
matrix_to_world(self: Data) ndarray[numpy.float64[4, 4]]
set_matrix_from_world(self: Data, arg0: ndarray[numpy.float64[4, 4]]) None
set_matrix_to_world(self: Data, arg0: ndarray[numpy.float64[4, 4]]) None
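Since matrix_from_world() is the inverse of matrix_to_world(), setting the identity gives (a sketch, assuming `data` is any Data instance):

>>> import numpy as np
>>> data.set_matrix_to_world(np.eye(4))
>>> np.allclose(data.matrix_from_world(), np.eye(4))
True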
property components
property kind
property name
class imfusion.DataComponent(self: DataComponent)

Bases: pybind11_object

Data components provide a way to generically attach custom information to Data.

Data and StreamData are the two main classes that hold a list of data components, allowing custom information (for example optional data or configuration settings) to be attached to instances of these classes. Data components are meant to be used for information that is bound to a specific Data instance and that cannot be represented by the usual ImFusion data types.

Data components should implement the Configurable methods, in order to support generic (de)serialization.

Note

Data components are supposed to act as generic storage for custom information. When subclassing DataComponent, you should not implement any heavy evaluation logic since this is the domain of Algorithms or other classes accessing the DataComponents.

Example

class MyComponent(imfusion.DataComponent, accessor_name="my_component"):
    def __init__(self, a=""):
        imfusion.DataComponent.__init__(self)
        self.a = a

    @property
    def a(self):
        return self._a

    @a.setter
    def a(self, value):
        if value and not isinstance(value, str):
            raise TypeError("`a` must be of type `str`")
        self._a = value

    def configure(self, properties: imfusion.Properties) -> None:
        self.a = str(properties["a"])

    def configuration(self) -> imfusion.Properties:
        return imfusion.Properties({"a": self.a})

    def __eq__(self, other: "MyComponent") -> bool:
        return self.a == other.a
configuration(self: DataComponent) Properties
configure(self: DataComponent, properties: Properties) None
property id

Returns a unique string identifier for this type of data component

class imfusion.DataComponentBase

Bases: Configurable

property id

Returns the unique string identifier of this component class.

class imfusion.DataComponentList

Bases: pybind11_object

A list of DataComponent. The list contains properties for specific DataComponent types. Each DataComponent type can only occur once.
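A sketch of the different lookup styles (assuming `data` is a Data instance; the id string is hypothetical):

>>> components = data.components
>>> first = components[0]                       # by index
>>> by_id = components["DataSourceComponent"]   # by id string
>>> opts_2d = components.display_options_2d     # typed convenience property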

__getitem__(*args, **kwargs)

Overloaded function.

  1. __getitem__(self: imfusion._bindings.DataComponentList, index: int) -> object

  2. __getitem__(self: imfusion._bindings.DataComponentList, indices: list[int]) -> list[object]

  3. __getitem__(self: imfusion._bindings.DataComponentList, slice: slice) -> list[object]

  4. __getitem__(self: imfusion._bindings.DataComponentList, id: str) -> object

add(*args, **kwargs)

Overloaded function.

  1. add(self: imfusion._bindings.DataComponentList, component: imfusion._bindings.DataComponent) -> object

Adds the component to the component list and returns a reference to the copy.

  2. add(self: imfusion._bindings.DataComponentList, arg0: imfusion._bindings.ImageInfoDataComponent) -> imfusion._bindings.DataComponentBase

Adds a copy of the component to the component list and returns a reference to the copy.

  3. add(self: imfusion._bindings.DataComponentList, arg0: imfusion._bindings.DisplayOptions2d) -> imfusion._bindings.DataComponentBase

Adds a copy of the component to the component list and returns a reference to the copy.

  4. add(self: imfusion._bindings.DataComponentList, arg0: imfusion._bindings.DisplayOptions3d) -> imfusion._bindings.DataComponentBase

Adds a copy of the component to the component list and returns a reference to the copy.

  5. add(self: imfusion._bindings.DataComponentList, arg0: imfusion._bindings.TransformationStashDataComponent) -> imfusion._bindings.DataComponentBase

Adds a copy of the component to the component list and returns a reference to the copy.

  6. add(self: imfusion._bindings.DataComponentList, arg0: imfusion._bindings.DataSourceComponent) -> imfusion._bindings.DataComponentBase

Adds a copy of the component to the component list and returns a reference to the copy.

  7. add(self: imfusion._bindings.DataComponentList, arg0: imfusion._bindings.LabelDataComponent) -> imfusion._bindings.DataComponentBase

Adds a copy of the component to the component list and returns a reference to the copy.

  8. add(self: imfusion._bindings.DataComponentList, arg0: imfusion._bindings.DatasetLicenseComponent) -> imfusion._bindings.DataComponentBase

Adds a copy of the component to the component list and returns a reference to the copy.

  9. add(self: imfusion._bindings.DataComponentList, arg0: imfusion._bindings.RealWorldMappingDataComponent) -> imfusion._bindings.DataComponentBase

Adds a copy of the component to the component list and returns a reference to the copy.

  10. add(self: imfusion._bindings.DataComponentList, arg0: imfusion.dicom.GeneralEquipmentModuleDataComponent) -> imfusion._bindings.DataComponentBase

Adds a copy of the component to the component list and returns a reference to the copy.

  11. add(self: imfusion._bindings.DataComponentList, arg0: imfusion.dicom.SourceInfoComponent) -> imfusion._bindings.DataComponentBase

Adds a copy of the component to the component list and returns a reference to the copy.

  12. add(self: imfusion._bindings.DataComponentList, arg0: imfusion.dicom.ReferencedInstancesComponent) -> imfusion._bindings.DataComponentBase

Adds a copy of the component to the component list and returns a reference to the copy.

  13. add(self: imfusion._bindings.DataComponentList, arg0: imfusion.dicom.RTStructureDataComponent) -> imfusion._bindings.DataComponentBase

Adds a copy of the component to the component list and returns a reference to the copy.

  14. add(self: imfusion._bindings.DataComponentList, arg0: imfusion.machinelearning._bindings.TargetTag) -> imfusion._bindings.DataComponentBase

Adds a copy of the component to the component list and returns a reference to the copy.

  15. add(self: imfusion._bindings.DataComponentList, arg0: imfusion.machinelearning._bindings.ProcessingRecordComponent) -> imfusion._bindings.DataComponentBase

Adds a copy of the component to the component list and returns a reference to the copy.

  16. add(self: imfusion._bindings.DataComponentList, arg0: imfusion.ReferenceImageDataComponent) -> imfusion._bindings.DataComponentBase

Adds a copy of the component to the component list and returns a reference to the copy.

property data_source
property dataset_license
property display_options_2d
property display_options_3d
property general_equipment_module
property image_info
property label
property processing_record
property real_world_mapping
property reference_image
property referenced_instances
property rt_structure
property source_info
property target_tag
property transformation_stash
class imfusion.DataGroup

Bases: Data

children_recursive(self: DataGroup) DataList
property __iter__
property children
property proxy_child
class imfusion.DataList(*args, **kwargs)

Bases: pybind11_object

List of Data. Is implicitly converted from and to regular Python lists.

Deprecated since version 2.15: Use a regular list instead.

Overloaded function.

  1. __init__(self: imfusion._bindings.DataList) -> None

  2. __init__(self: imfusion._bindings.DataList, list: list) -> None

__getitem__(*args, **kwargs)

Overloaded function.

  1. __getitem__(self: imfusion._bindings.DataList, index: int) -> imfusion._bindings.Data

  2. __getitem__(self: imfusion._bindings.DataList, indices: list[int]) -> list[imfusion._bindings.Data]

  3. __getitem__(self: imfusion._bindings.DataList, slice: slice) -> list[imfusion._bindings.Data]

__iter__(self: DataList) Iterator[Data]
add(self: DataList, arg0: Data) None
append(self: DataList, arg0: Data) None
get_images(self: imfusion._bindings.DataList, kind: imfusion._bindings.Data.Kind = <Kind.UNKNOWN: 0>, modality: imfusion._bindings.Data.Modality = <Modality.NA: 0>) list
class imfusion.DataModel

Bases: pybind11_object

The DataModel instance holds all datasets of an ApplicationController.
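Typical access patterns, as a sketch (assuming an ApplicationController `app` whose model contains a dataset named 'Volume'):

>>> model = app.data_model
>>> data = model.get("Volume")
>>> model.contains(data)
True
>>> model.remove(data)  # `data` must not be referenced afterwards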

__getitem__(*args, **kwargs)

Overloaded function.

  1. __getitem__(self: imfusion._bindings.DataModel, index: int) -> imfusion._bindings.Data

  2. __getitem__(self: imfusion._bindings.DataModel, indices: list[int]) -> list[imfusion._bindings.Data]

  3. __getitem__(self: imfusion._bindings.DataModel, slice: slice) -> list[imfusion._bindings.Data]

add(*args, **kwargs)

Overloaded function.

  1. add(self: imfusion._bindings.DataModel, data: imfusion._bindings.Data, name: str = '') -> imfusion._bindings.Data

Add data to the model. The data will be copied and a reference to the copy is returned. If the data cannot be added, a ValueError is raised.

  2. add(self: imfusion._bindings.DataModel, data_list: list[imfusion._bindings.Data]) -> list

Add multiple pieces of data to the model. The data will be copied and references to the copies are returned. If the data cannot be added, a ValueError is raised.

clear(self: DataModel) None

Remove all data from the model

contains(self: DataModel, data: Data) bool
create_group(self: DataModel, arg0: DataList) DataGroup

Groups a list of Data in the model. Only Data that is already part of the model can be grouped.

get(self: DataModel, name: str) Data
get_common_parent(self: DataModel, data_list: DataList) DataGroup

Return the closest common parent of all given Data

get_parent(self: DataModel, data: Data) DataGroup

Return the parent DataGroup of the given Data or None if it is not part of the model. For top-level data this function will return get_root_node().

index(self: DataModel, data: Data) int

Return index of data. The index is depth-first for all groups.

remove(self: DataModel, data: Data) None

Removes and deletes data from the model. Afterwards, data must not be referenced anymore!

property root_node

Return the root DataGroup of the model

property size

Return the total number of datasets in the model

class imfusion.DataSourceComponent

Bases: DataComponentBase

class DataSourceInfo(self: DataSourceInfo, arg0: str, arg1: str, arg2: Properties, arg3: int, arg4: list[DataSourceInfo])

Bases: Configurable

update(self: DataSourceInfo, arg0: DataSourceInfo) None
property filename
property history
property index_in_file
property io_algorithm_config
property io_algorithm_name
property filenames
property sources
class imfusion.DatasetLicenseComponent(*args, **kwargs)

Bases: DataComponentBase

Overloaded function.

  1. __init__(self: imfusion._bindings.DatasetLicenseComponent) -> None

  2. __init__(self: imfusion._bindings.DatasetLicenseComponent, infos: list[imfusion._bindings.DatasetLicenseComponent.DatasetInfo]) -> None

class DatasetInfo(*args, **kwargs)

Bases: pybind11_object

Overloaded function.

  1. __init__(self: imfusion._bindings.DatasetLicenseComponent.DatasetInfo) -> None

  2. __init__(self: imfusion._bindings.DatasetLicenseComponent.DatasetInfo, name: str, authors: str, website: str, license: str, attribution_required: bool, commercial_use_allowed: bool) -> None

property attribution_required
property authors
property commercial_use_allowed
property license
property name
property website
infos(self: DatasetLicenseComponent) list[DatasetInfo]
class imfusion.Deformation

Bases: pybind11_object

configuration(self: Deformation) Properties
configure(self: Deformation, properties: Properties) None
displace_point(self: Deformation, at: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]
displace_points(self: Deformation, at: list[ndarray[numpy.float64[3, 1]]]) list[ndarray[numpy.float64[3, 1]]]
displacement(*args, **kwargs)

Overloaded function.

  1. displacement(self: imfusion._bindings.Deformation, at: numpy.ndarray[numpy.float64[3, 1]]) -> numpy.ndarray[numpy.float64[3, 1]]

  2. displacement(self: imfusion._bindings.Deformation, at: numpy.ndarray[numpy.float64[2, 1]]) -> numpy.ndarray[numpy.float64[3, 1]]

  3. displacement(self: imfusion._bindings.Deformation, at: list[numpy.ndarray[numpy.float64[3, 1]]]) -> list[numpy.ndarray[numpy.float64[3, 1]]]

class imfusion.Display

Bases: pybind11_object

maximize_view(self: Display, view: View) None
unmaximize_view(self: Display) None
views(self: Display) list
views2d(self: Display) list
views3d(self: Display) list
views_slice(self: Display) list
property focus_view
property layout_mode
class imfusion.DisplayOptions2d(self: DisplayOptions2d, arg0: Data)

Bases: DataComponentBase

property gamma
property invert
property level
property window
class imfusion.DisplayOptions3d(self: DisplayOptions3d, arg0: Data)

Bases: DataComponentBase

property alpha
property invert
property level
property window
class imfusion.ExplicitIntensityMask(self: ExplicitIntensityMask, ref_image: SharedImage, mask_image: SharedImage)

Bases: Mask

Combination of an ExplicitMask and an IntensityMask.

property border_clamp

If true, set sampler wrapping mode to CLAMP_TO_BORDER (default). If false, set to CLAMP_TO_EDGE.

property border_color

Border color (normalized for integer images)

property intensity_range

Range of allowed pixel values

class imfusion.ExplicitMask(*args, **kwargs)

Bases: Mask

Mask holding an individual mask value for every pixel.

Overloaded function.

  1. __init__(self: imfusion._bindings.ExplicitMask, width: int, height: int, slices: int, initial: int = 0) -> None

  2. __init__(self: imfusion._bindings.ExplicitMask, dimensions: numpy.ndarray[numpy.int32[3, 1]], initial: int = 0) -> None

  3. __init__(self: imfusion._bindings.ExplicitMask, mask_image: imfusion._bindings.MemImage) -> None

mask_image(self: ExplicitMask) SharedImage

Returns a copy of the mask image held by the mask.

class imfusion.FrameworkInfo

Bases: pybind11_object

Provides general information about the framework.

property license
property opengl
property plugins
class imfusion.FreeFormDeformation

Bases: Deformation

configuration(self: FreeFormDeformation) Properties
configure(self: FreeFormDeformation, arg0: Properties) None
control_points(self: FreeFormDeformation) list[ndarray[numpy.float64[3, 1]]]

Get current control point locations (including displacement)

property displacements

Displacement in mm of all control points

property grid_spacing

Spacing of the control point grid

property grid_transformation

Transformation matrix of the control point grid

property subdivisions

Subdivisions of the control point grid

class imfusion.GlPlatformInfo

Bases: pybind11_object

Provides information about the underlying OpenGL driver.

property extensions
property renderer
property vendor
property version
class imfusion.ImageDescriptor(*args, **kwargs)

Bases: pybind11_object

Struct describing the essential properties of an image.

The ImFusion framework distinguishes two main image pixel value domains, which are indicated by the shift and scale parameters of this image descriptor:

  • Original pixel value domain: Pixel values are the same as in their original source (e.g. when loaded from a file). Same as the storage pixel value domain if the image’s scale is 1 and the shift is 0

  • Storage pixel value domain: Pixel values as they are stored in a MemImage. The user may decide to apply such a rescaling in order to better use the available limits of the underlying type.

The following conversion rules apply:

  • OV = (SV / scale) - shift

  • SV = (OV + shift) * scale
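For example, with shift = 10 and scale = 2 the two rules are inverses of each other (plain Python arithmetic for illustration):

>>> shift, scale = 10.0, 2.0
>>> original = 50.0
>>> storage = (original + shift) * scale   # SV = (OV + shift) * scale
>>> storage
120.0
>>> (storage / scale) - shift              # OV = (SV / scale) - shift
50.0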

Overloaded function.

  1. __init__(self: imfusion._bindings.ImageDescriptor) -> None

  2. __init__(self: imfusion._bindings.ImageDescriptor, type: imfusion._bindings.PixelType, dimensions: numpy.ndarray[numpy.int32[3, 1]], channels: int = 1) -> None

  3. __init__(self: imfusion._bindings.ImageDescriptor, type: imfusion._bindings.PixelType, width: int, height: int, slices: int = 1, channels: int = 1) -> None

configure(self: ImageDescriptor, properties: Properties) None

Deserialize an image descriptor from Properties

coord(self: ImageDescriptor, index: int) ndarray[numpy.int32[4, 1]]

Return the pixel/voxel coordinate (x,y,z,c) for a given index

has_index(self: ImageDescriptor, x: int, y: int, z: int = 0, c: int = 0) int

Return true if the pixel at (x,y,z) exists, false otherwise

image_to_pixel(self: ImageDescriptor, world: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]

Convert 3D image coordinates to pixel/voxel position

index(self: ImageDescriptor, x: int, y: int, z: int = 0, c: int = 0) int

Return a linear memory index for a pixel or voxel

is_compatible(self: ImageDescriptor, other: ImageDescriptor, ignore_type: bool = False, ignore_3D: bool = False, ignore_channels: bool = False, ignore_spacing: bool = True) bool

Convenience function to perform partial comparison of two image descriptors. Two descriptors are compatible if their width and height, and optionally number of slices, number of channels and type are the same

is_valid(self: ImageDescriptor) bool

Return if the descriptor is valid (a size of one is allowed)

original_to_storage(self: ImageDescriptor, value: float) float

Apply the image’s shift and scale in order to convert a value from original pixel value domain to storage pixel value domain

pixel_to_image(self: ImageDescriptor, pixel: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]

Convert a 3D pixel/voxel position to image coordinates

set_dimensions(self: ImageDescriptor, dimensions: ndarray[numpy.int32[3, 1]], channels: int = 0) None

Convenience function for specifying the image dimensions and channels at once. If channels is 0, the number of channels will remain unchanged

set_spacing(self: ImageDescriptor, spacing: ndarray[numpy.float64[3, 1]], is_metric: bool) None

Convenience function for specifying spacing and metric flag at the same time

storage_to_original(self: ImageDescriptor, value: float) float

Apply the image’s shift and scale in order to convert a value from storage pixel value domain to original pixel value domain

property byte_size

Return the size of the image in bytes

property channels
property configuration

Serialize an image descriptor to Properties

property dimension
property dimensions
property extent
property height
property image_to_pixel_matrix

Return a 4x4 matrix to transform from image space to pixel space

property image_to_texture_matrix

Return a 4x4 matrix to transform from image space to texture space

property is_metric
property pixel_to_image_matrix

Return a 4x4 matrix to transform from pixel space to image space

property pixel_type
property scale
property shift
property size

Return the size (number of elements) of the image

property slices
property spacing

Access the image descriptor spacing. When setting the spacing, it is always assumed that the given spacing is metric. If you want to specify a non-metric spacing, use desc.set_spacing(new_spacing, is_metric=False)

property texture_to_image_matrix

Return a 4x4 matrix to transform from texture space to image space

property type_size

Return the nominal size in bytes of the current component type, zero if unknown

property width
class imfusion.ImageDescriptorWorld(self: ImageDescriptorWorld, descriptor: ImageDescriptor, matrix_to_world: ndarray[numpy.float64[4, 4]])

Bases: pybind11_object

Convenience struct extending an ImageDescriptor to also include a matrix describing the image orientation in world coordinates.

This struct can be useful for describing the geometrical properties of an image without need to hold the (heavy) image content. As such it can be used for representing reference geometries (see ImageResamplingAlgorithm), or for one-line creation of a new SharedImage.
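A construction sketch (dimensions and matrix are illustrative):

>>> import numpy as np
>>> desc = imfusion.ImageDescriptor()
>>> desc.set_dimensions(np.array([128, 128, 64], dtype=np.int32))
>>> world_desc = imfusion.ImageDescriptorWorld(desc, np.eye(4))
>>> origin = world_desc.pixel_to_world(np.array([0.0, 0.0, 0.0]))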

image_to_pixel(self: ImageDescriptorWorld, world: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]

Convert 3D image coordinates to pixel/voxel position

is_spatially_compatible(self: ImageDescriptorWorld, other: ImageDescriptorWorld) bool

Convenience function to compare two image world descriptors (for instance to know whether a resampling is necessary). Two descriptors are compatible if their dimensions, matrix and spacing are identical.

pixel_to_image(self: ImageDescriptorWorld, pixel: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]

Convert a 3D pixel/voxel position to image coordinates

pixel_to_world(self: ImageDescriptorWorld, pixel: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]

Convert a 3D pixel/voxel position to world coordinates

world_to_pixel(self: ImageDescriptorWorld, world: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]

Convert 3D world coordinates to pixel/voxel position

property descriptor
property image_to_pixel_matrix

Return a 4x4 matrix to transform from image space to pixel space

property matrix_from_world
property matrix_to_world
property pixel_to_image_matrix

Return a 4x4 matrix to transform from pixel space to image space

property pixel_to_texture_matrix

Return a 4x4 matrix to transform from image space to texture space

property pixel_to_world_matrix

Return a 4x4 matrix to transform from pixel space to world space

property texture_to_pixel_matrix

Return a 4x4 matrix to transform from texture space to image space

property texture_to_world_matrix

Return a 4x4 matrix to transform from texture space to world space

property world_to_pixel_matrix

Return a 4x4 matrix to transform from world space to pixel space

property world_to_texture_matrix

Return a 4x4 matrix to transform from world space to texture space

class imfusion.ImageInfoDataComponent(self: ImageInfoDataComponent)

Bases: DataComponentBase

DataComponent storing general information on the image origin.

Modeled after the DICOM patient-study-series hierarchy, it stores information on the patient, study and series the data set belongs to.

class AnatomicalOrientationType(self: AnatomicalOrientationType, value: int)

Bases: pybind11_object

The anatomical orientation type used in Instances generated by this equipment.

Members:

UNKNOWN

BIPED

QUADRUPED

BIPED = <AnatomicalOrientationType.BIPED: 1>
QUADRUPED = <AnatomicalOrientationType.QUADRUPED: 2>
UNKNOWN = <AnatomicalOrientationType.UNKNOWN: 0>
property name
property value
class Laterality(self: Laterality, value: int)

Bases: pybind11_object

Laterality of (paired) body part examined

Members:

UNKNOWN

LEFT

RIGHT

LEFT = <Laterality.LEFT: 1>
RIGHT = <Laterality.RIGHT: 2>
UNKNOWN = <Laterality.UNKNOWN: 0>
property name
property value
class PatientSex(self: PatientSex, value: int)

Bases: pybind11_object

Sex of the patient

Members:

UNKNOWN

MALE

FEMALE

OTHER

FEMALE = <PatientSex.FEMALE: 2>
MALE = <PatientSex.MALE: 1>
OTHER = <PatientSex.OTHER: 3>
UNKNOWN = <PatientSex.UNKNOWN: 0>
property name
property value
property frame_of_reference_uid

Uniquely identifies the Frame of Reference for a Series. Multiple Series within a Study may share a Frame of Reference UID.

property laterality

Laterality of (paired) body part examined

property modality

DICOM modality string specifying the method used to create this series

property orientation_type

DICOM Anatomical Orientation Type

property patient_birth_date

Patient date of birth in yyyyMMdd format

property patient_comment

Additional information about the Patient

property patient_id

DICOM Patient ID

property patient_name

Patient name

property patient_position

Specifies position of the Patient relative to the imaging equipment.

property patient_sex

Patient sex

property photometric_interpretation

Specifies the intended interpretation of the pixel data (e.g. RGB, HSV, …).

property responsible_person

Name of person with medical or welfare decision making authority for the Patient.

property series_date

Series date in yyyyMMdd format

property series_description

Series description

property series_instance_uid

Unique identifier of the Series

property series_number

DICOM Series number. The value of this attribute should be unique for all Series in a Study created on the same equipment.

property series_time

Series time in HHmmss format

property series_time_exact

Series time in microseconds. 0 if the original series time was empty.

property study_date

Study date in yyyyMMdd format

property study_description

Study description

property study_id

DICOM Study ID

property study_instance_uid

Unique identifier for the Study

property study_time

Study time in HHmmss format, optionally with time zone offset &ZZXX

property study_time_exact

Study time in microseconds. 0 if the original study time was empty.

property study_timezone

Study time zone abbreviation

class imfusion.ImageResamplingAlgorithm(*args, **kwargs)

Bases: BaseAlgorithm

Algorithm for resampling an image to a target dimension or resolution, optionally with respect to another image.

If a reference image is not provided, the size of the output can be either explicitly specified, or implicitly determined by setting a target spacing, binning or relative size w.r.t. the input (in percentage). Only one of these strategies can be active at a time, as specified by the resamplingMode field. The values of the other target fields will be ignored. The algorithm offers convenience methods to jointly update the value of a target field and change the resampling mode accordingly.

If you provide a reference image, its pixel grid (dimensions, spacing, pose matrix) will be used for the output. However, the pixel type as well as shift/scale will remain the same as in the input image.

The algorithm supports Linear and Nearest interpolation modes. In the Linear case (default), when accessing the input image at a fractional coordinate, the obtained value will be computed by linearly interpolating between the closest pixels/voxels. In the Nearest case, the value of the closest pixel/voxel will be used instead.

Furthermore, multiple reduction modes are also supported. In contrast to the interpolation mode, which affects how the value of the input image at a given (potentially fractional) coordinate is extracted, this determines what happens when multiple input pixels/voxels contribute to the value of a single output pixel/voxel. In Nearest mode, the value of the closest input pixel/voxel is used as-is. Alternatively, the Minimum, Maximum or Average value of the neighboring pixel/voxels can be used.

By default, the image will be modified in-place; a new one can be created instead by changing the value of the createNewImage parameter.

By default, the resulting image will have an altered physical extent, since the original extent may not be divisible by the target spacing. The algorithm can modify the target spacing to exactly maintain the physical extent, by toggling the preserveExtent parameter.

If the keepZeroValues parameter is set to true, the input pixels/voxels having zero value will not be modified by the resampling process.
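A usage sketch (assuming a SharedImageSet `sis`; target values are illustrative, and it is assumed that lists convert automatically to the expected vector types):

>>> alg = imfusion.ImageResamplingAlgorithm(sis)
>>> alg.resampling_mode = imfusion.ImageResamplingAlgorithm.ResamplingMode.TARGET_SPACING
>>> alg.target_spacing = [1.0, 1.0, 1.0]
>>> alg.create_new_image = True
>>> alg.compute()
>>> resampled = alg.output()[0]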

Overloaded function.

  1. __init__(self: imfusion._bindings.ImageResamplingAlgorithm, input_images: imfusion._bindings.SharedImageSet, reference_images: imfusion._bindings.SharedImageSet = None) -> None

  2. __init__(self: imfusion._bindings.ImageResamplingAlgorithm, input_images: imfusion._bindings.SharedImageSet, reference_world_descriptors: list[imfusion._bindings.ImageDescriptorWorld]) -> None

class ResamplingMode(self: ResamplingMode, value: int)

Bases: pybind11_object

Members:

TARGET_DIM

TARGET_PERCENT

TARGET_SPACING

TARGET_BINNING

TARGET_BINNING = <ResamplingMode.TARGET_BINNING: 3>
TARGET_DIM = <ResamplingMode.TARGET_DIM: 0>
TARGET_PERCENT = <ResamplingMode.TARGET_PERCENT: 1>
TARGET_SPACING = <ResamplingMode.TARGET_SPACING: 2>
property name
property value
resampling_needed(self: ImageResamplingAlgorithm, frame: int = -1) bool

Return whether resampling is needed, i.e. whether the specified settings would result in a different image size or spacing

set_input(*args, **kwargs)

Overloaded function.

  1. set_input(self: imfusion._bindings.ImageResamplingAlgorithm, new_input_images: imfusion._bindings.SharedImageSet, new_reference_images: imfusion._bindings.SharedImageSet, reconfigure_from_new_data: bool) -> None

Replaces the input of the algorithm. If reconfigure_from_new_data is true, the algorithm reconfigures itself based on the metadata of the new input.

  2. set_input(self: imfusion._bindings.ImageResamplingAlgorithm, new_input_images: imfusion._bindings.SharedImageSet, new_reference_world_descriptors: list[imfusion._bindings.ImageDescriptorWorld], reconfigure_from_new_data: bool) -> None

Replaces the input of the algorithm. If reconfigure_from_new_data is true, the algorithm reconfigures itself based on the metadata of the new input.

set_target_min_spacing(*args, **kwargs)

Overloaded function.

  1. set_target_min_spacing(self: imfusion._bindings.ImageResamplingAlgorithm, min_spacing: float) -> bool

Set the target spacing from the spacing of the input image, replacing the value in each dimension with the maximum of the original and the provided value.

Parameters:

min_spacing – the minimum value that the target spacing should have in each direction

Returns:

True if the final target spacing is different than the input image spacing

  2. set_target_min_spacing(self: imfusion._bindings.ImageResamplingAlgorithm, min_spacing: numpy.ndarray[numpy.float64[3, 1]]) -> bool

Set the target spacing from the spacing of the input image, replacing the value in each dimension with the maximum of the original and the provided value.

Parameters:

min_spacing – the minimum value that the target spacing should have in each direction

Returns:

True if the final target spacing is different than the input image spacing

TARGET_BINNING = <ResamplingMode.TARGET_BINNING: 3>
TARGET_DIM = <ResamplingMode.TARGET_DIM: 0>
TARGET_PERCENT = <ResamplingMode.TARGET_PERCENT: 1>
TARGET_SPACING = <ResamplingMode.TARGET_SPACING: 2>
property clone_deformation

Whether to clone deformation from original image before attaching to result

property create_new_image

Whether to compute the result in-place or in a newly allocated image

property force_cpu

Whether to force the computation on the CPU

property interpolation_mode

Mode for image interpolation

property keep_zero_values

Whether input pixels/voxels with zero value should be left unmodified by the resampling

property preserve_extent

Whether to update the target spacing to keep exactly the physical dimensions of the input image

property reduction_mode

Mode for image reduction (e.g. downsampling, resampling, binning)

property resampling_mode

How the output image size should be obtained (explicit dimensions, percentage relative to the input image, …)

property target_binning

How many pixels from the input image should be combined into an output pixel

property target_dimensions

Target dimensions for the new image

property target_percent

Target dimensions for the new image, relatively to the input one

property target_spacing

Target spacing for the new image

property verbose

Whether to enable advanced logging

class imfusion.ImageView2D

Bases: View

class imfusion.ImageView3D

Bases: View

class imfusion.IntensityMask(*args, **kwargs)

Bases: Mask

Masks pixels with a specific value or values outside a specific range.

Overloaded function.

  1. __init__(self: imfusion._bindings.IntensityMask, type: imfusion._bindings.PixelType, value: float = 0.0) -> None

  2. __init__(self: imfusion._bindings.IntensityMask, image: imfusion._bindings.MemImage, value: float = 0.0) -> None

property masked_value

Specific value that should be masked

property masked_value_range

Half-open range [min, max) of allowed pixel values

property type
property use_range

Whether the mask should operate in range mode (true) or single-value mode (false)

class imfusion.InterpolationMode(self: InterpolationMode, value: int)

Bases: pybind11_object

Members:

NEAREST

LINEAR

LINEAR = <InterpolationMode.LINEAR: 1>
NEAREST = <InterpolationMode.NEAREST: 0>
property name
property value
class imfusion.LabelDataComponent(self: LabelDataComponent, label_map: SharedImageSet = None)

Bases: pybind11_object

Stores metadata for a label map, supporting up to 255 labels.

Creates a LabelDataComponent. If a label map of type uint8 is provided, detects labels in the label map.
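A sketch of configuring label metadata (pixel value and color are illustrative):

>>> import numpy as np
>>> comp = imfusion.LabelDataComponent()
>>> cfg = imfusion.LabelDataComponent.LabelConfig(name="liver", color=np.array([0.8, 0.2, 0.2, 1.0]))
>>> comp.set_label_config(1, cfg)
>>> comp.has_label(1)
True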

class LabelConfig(self: LabelConfig, name: str = '', color: ndarray[numpy.float64[4, 1]] = array([0., 0., 0., 0.]), is_visible2d: bool = True, is_visible3d: bool = True)

Bases: pybind11_object

Encapsulates metadata for a label value in a label map.

Constructor for LabelConfig.

Parameters:
  • name – Name of the label.

  • color – RGBA color used for rendering the label.

  • is_visible2d – Visibility flag for 2D/MPR views.

  • is_visible3d – Visibility flag for 3D views.

property color

RGBA color used for rendering the label. Values should be in the range [0, 1].

property is_visible2d

Visibility flag for 2D/MPR views.

property is_visible3d

Visibility flag for 3D views.

property name

Name of the label.

property segmentation_algorithm_name

Name of the algorithm used to generate the segmentation.

property segmentation_algorithm_type

Type of algorithm used to generate the segmentation.

property snomed_category_code_meaning

Human-readable meaning of the category code.

property snomed_category_code_value

SNOMED CT code for the category this label represents.

property snomed_type_code_meaning

Human-readable meaning of the type code.

property snomed_type_code_value

SNOMED CT code for the type this label represents.

class SegmentationAlgorithmType(self: SegmentationAlgorithmType, value: int)

Bases: pybind11_object

Members:

UNKNOWN

AUTOMATIC

SEMI_AUTOMATIC

MANUAL

AUTOMATIC = <SegmentationAlgorithmType.AUTOMATIC: 1>
MANUAL = <SegmentationAlgorithmType.MANUAL: 3>
SEMI_AUTOMATIC = <SegmentationAlgorithmType.SEMI_AUTOMATIC: 2>
UNKNOWN = <SegmentationAlgorithmType.UNKNOWN: 0>
property name
property value
detect_labels(self: LabelDataComponent, image: SharedImageSet) None

Detects labels present in an image of type uint8 and creates configurations for non-existing labels using default configurations.

has_label(self: LabelDataComponent, pixel_value: int) bool

Checks if a label configuration exists for a pixel value.

label_config(self: LabelDataComponent, pixel_value: int) LabelConfig | None

Gets label configuration for a pixel value.

label_configs(self: LabelDataComponent) dict[int, LabelConfig]

Returns known label configurations.

remove_label(self: LabelDataComponent, pixel_value: int) None

Removes label configuration for a pixel value.

remove_unused_labels(self: LabelDataComponent, image: SharedImageSet) None

Removes configurations for non-existing labels in an image.

set_default_label_config(self: LabelDataComponent, pixel_value: int) None

Sets default label configuration for a pixel value.

set_label_config(self: LabelDataComponent, pixel_value: int, config: LabelConfig) None

Sets label configuration for a pixel value.

set_label_configs(self: LabelDataComponent, configs: dict[int, LabelConfig]) None

Sets known label configurations from a dictionary mapping pixel values to LabelConfig objects.

AUTOMATIC = <SegmentationAlgorithmType.AUTOMATIC: 1>
MANUAL = <SegmentationAlgorithmType.MANUAL: 3>
SEMI_AUTOMATIC = <SegmentationAlgorithmType.SEMI_AUTOMATIC: 2>
UNKNOWN = <SegmentationAlgorithmType.UNKNOWN: 0>
class imfusion.LayoutMode(self: LayoutMode, value: int)

Bases: pybind11_object

Members:

LAYOUT_ROWS

LAYOUT_FOCUS_PLUS_STACK

LAYOUT_FOCUS_PLUS_ROWS

LAYOUT_SIDE_BY_SIDE

LAYOUT_CUSTOM

LAYOUT_CUSTOM = <LayoutMode.LAYOUT_CUSTOM: 100>
LAYOUT_FOCUS_PLUS_ROWS = <LayoutMode.LAYOUT_FOCUS_PLUS_ROWS: 2>
LAYOUT_FOCUS_PLUS_STACK = <LayoutMode.LAYOUT_FOCUS_PLUS_STACK: 1>
LAYOUT_ROWS = <LayoutMode.LAYOUT_ROWS: 0>
LAYOUT_SIDE_BY_SIDE = <LayoutMode.LAYOUT_SIDE_BY_SIDE: 3>
property name
property value
class imfusion.LicenseInfo

Bases: pybind11_object

Provides information about the currently used license.

property expiration_date

Date until which the license is valid, in ISO format, or None if the license won't expire.

property key
class imfusion.Mask

Bases: pybind11_object

Base interface for implementing polymorphic image masks.

class CreateOption(self: CreateOption, value: int)

Bases: pybind11_object

Enumeration of available behavior for Mask::create_explicit_mask().

Members:

DEEP_COPY

SHALLOW_COPY_IF_POSSIBLE

DEEP_COPY = <CreateOption.DEEP_COPY: 0>
SHALLOW_COPY_IF_POSSIBLE = <CreateOption.SHALLOW_COPY_IF_POSSIBLE: 1>
property name
property value
create_explicit_mask(self: imfusion._bindings.Mask, image: imfusion._bindings.SharedImage, create_option: imfusion._bindings.Mask.CreateOption = <CreateOption.DEEP_COPY: 0>) MemImage

Creates an explicit mask representation of this mask for a given image.

is_compatible(self: Mask, arg0: SharedImage) bool

Returns True if the mask can be used with the given image or False otherwise.

mask_value(*args, **kwargs)

Overloaded function.

  1. mask_value(self: imfusion._bindings.Mask, coord: numpy.ndarray[numpy.int32[3, 1]], color: numpy.ndarray[numpy.float32[4, 1]]) -> int

Returns 0 if the given pixel is outside the mask (i.e. invisible/to be ignored) or a non-zero value if it is inside the mask (i.e. visible/to be considered).

  2. mask_value(self: imfusion._bindings.Mask, coord: numpy.ndarray[numpy.int32[3, 1]], value: float) -> int

Returns 0 if the given pixel is outside the mask (i.e. invisible/to be ignored) or a non-zero value if it is inside the mask (i.e. visible/to be considered).

DEEP_COPY = <CreateOption.DEEP_COPY: 0>
SHALLOW_COPY_IF_POSSIBLE = <CreateOption.SHALLOW_COPY_IF_POSSIBLE: 1>
property requires_pixel_value

Returns True if mask_value() relies on the pixel value. If this property is False, mask_value() can be safely used with only the coordinate.

class imfusion.MemImage(*args, **kwargs)

Bases: pybind11_object

A MemImage instance represents an image which resides in main memory.

The MemImage class supports the Buffer Protocol. This means that the underlying buffer can be wrapped in e.g. numpy without a copy:

>>> import numpy
>>> mem = MemImage(Image.BYTE, 10, 10)
>>> arr = numpy.array(mem, copy=False)
>>> arr.fill(0)
>>> numpy.sum(arr)
0

Be aware that most numpy operations create a copy of the data and don't affect the original data:

>>> numpy.sum(numpy.add(arr, 1))
100
>>> numpy.sum(arr)
0

To update the buffer of a MemImage, use numpy.copyto:

>>> numpy.copyto(arr, numpy.add(arr, 1))
>>> numpy.sum(arr)
100

Alternatively use the out argument of certain numpy functions:

>>> numpy.add(arr, 1, out=arr)
array(...)
>>> numpy.sum(arr)
200

Overloaded function.

  1. __init__(self: imfusion._bindings.MemImage, type: imfusion._bindings.PixelType, width: int, height: int, slices: int = 1, channels: int = 1) -> None

  2. __init__(self: imfusion._bindings.MemImage, desc: imfusion._bindings.ImageDescriptor) -> None

Factory method to instantiate a MemImage from an ImageDescriptor. Note: this method does not initialize the underlying buffer.

  3. __init__(self: imfusion._bindings.MemImage, array: numpy.ndarray[numpy.int8], greyscale: bool = False) -> None

Create a MemImage from a numpy.array.

The array must be contiguous and must have between 2 and 4 dimensions. The dimensions are interpreted as (slices, height, width, channels). Missing dimensions are set to one. The color dimension must always be present, even for greyscale images, in which case it is 1.

Use the optional greyscale argument to specify that the color dimension is missing and the buffer should be interpreted as greyscale.

The actual array data is copied into the MemImage (see the sketch after this list).

  4. __init__(self: imfusion._bindings.MemImage, array: numpy.ndarray[numpy.uint8], greyscale: bool = False) -> None

  5. __init__(self: imfusion._bindings.MemImage, array: numpy.ndarray[numpy.int16], greyscale: bool = False) -> None

  6. __init__(self: imfusion._bindings.MemImage, array: numpy.ndarray[numpy.uint16], greyscale: bool = False) -> None

  7. __init__(self: imfusion._bindings.MemImage, array: numpy.ndarray[numpy.int32], greyscale: bool = False) -> None

  8. __init__(self: imfusion._bindings.MemImage, array: numpy.ndarray[numpy.uint32], greyscale: bool = False) -> None

  9. __init__(self: imfusion._bindings.MemImage, array: numpy.ndarray[numpy.float32], greyscale: bool = False) -> None

  10. __init__(self: imfusion._bindings.MemImage, array: numpy.ndarray[numpy.float64], greyscale: bool = False) -> None
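A sketch of the array-based construction (dimensions are illustrative):

>>> import numpy as np
>>> arr = np.zeros((1, 32, 32, 1), dtype=np.uint8)  # (slices, height, width, channels)
>>> mem = imfusion.MemImage(arr)
>>> mem.shape
(1, 32, 32, 1)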

apply_shift_and_scale(arr)

Return a copy of the array with storage values converted to original values. The dtype of the returned array is always DOUBLE.

astype(self: MemImage, image_type: object) MemImage

Create a copy of the current MemImage instance with the requested Image format.

This function accepts either:

  • an Image type (e.g. imfusion.Image.UINT);

  • most of numpy's dtypes (e.g. np.uint);

  • Python's float or int types.

If the requested Image format already matches the Image format of the current instance, then a clone of the current instance is returned.

clone(self: MemImage) MemImage
convert_to_gray(self: MemImage) MemImage
create_float(self: object, normalize: bool = True, calc_min_max: bool = True, apply_scale_shift: bool = False) object
crop(self: MemImage, width: int, height: int, slices: int = -1, ox: int = -1, oy: int = -1, oz: int = -1) MemImage
downsample(self: imfusion._bindings.MemImage, dx: int, dy: int, dz: int = 1, zero_mask: bool = False, reduction_mode: imfusion._bindings.ReductionMode = <ReductionMode.AVERAGE: 1>) MemImage
flip(self: MemImage, dim: int) MemImage
image_to_pixel(self: MemImage, world: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]

Convert a 3D image coordinate to a pixel position.

invert(self: MemImage, use_image_range: bool) MemImage
numpy()

Convenience method for converting a MemImage or a SharedImage into a newly created numpy array with scale and shift already applied.

Shift and scale may require a change of the pixel type prior to the conversion into a numpy array:

  • as a first rule, even if the type of shift and scale is float, they will still be considered as integers if they are representing integers (e.g. a shift of 2.000 will be treated as 2);

  • if shift and scale are such that the pixel values range (determined by the pixel_type) would not be fitting into the pixel_type, e.g. a negative pixel value but the type is unsigned, then the pixel_type will be promoted into a signed type if possible, otherwise into a single precision floating point type;

  • if shift and scale are such that the pixel values range (determined by the pixel_type) would be fitting into a demoted pixel_type, e.g. the type is signed but the range of pixel values is unsigned, then the pixel_type will be demoted;

  • if shift and scale do not certainly determine that all the possible pixel values (in the range determined by the pixel_type) would become integers, then the pixel_type will be promoted into a single precision floating point type.

  • in any case, the returned numpy array will be returned with type up to 32-bit integers. If the integer type would require more bits, then the resulting pixel_type will be DOUBLE.

Parameters:

self – instance of a MemImage or of a SharedImage

Returns:

numpy.ndarray

pad(*args, **kwargs)

Overloaded function.

  1. pad(self: imfusion._bindings.MemImage, pad_lower_left_front: numpy.ndarray[numpy.int32[3, 1]], pad_upper_right_back: numpy.ndarray[numpy.int32[3, 1]], padding_mode: imfusion._bindings.PaddingMode, legacy_mirror_padding: bool = True) -> imfusion._bindings.MemImage

  2. pad(self: imfusion._bindings.MemImage, pad_size_x: tuple[int, int], pad_size_y: tuple[int, int], pad_size_z: tuple[int, int], padding_mode: imfusion._bindings.PaddingMode, legacy_mirror_padding: bool = True) -> imfusion._bindings.MemImage

pixel_to_image(self: MemImage, pixel: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]

Convert a 3D pixel position to an image coordinate.

range_threshold(self: MemImage, inside_range: bool, lower_value: float, upper_value: float, use_original: bool = True, replace_with: float = 0) MemImage
resample(*args, **kwargs)

Overloaded function.

  1. resample(self: imfusion._bindings.MemImage, spacing_adjustment: imfusion._bindings.SpacingMode, spacing: numpy.ndarray[numpy.float64[3, 1]], zero_mask: bool = False, reduction_mode: imfusion._bindings.ReductionMode = <ReductionMode.AVERAGE: 1>, interpolation_mode: imfusion._bindings.InterpolationMode = <InterpolationMode.LINEAR: 1>, allowed_dimension_change: bool = False) -> imfusion._bindings.MemImage

  2. resample(self: imfusion._bindings.MemImage, dimensions: numpy.ndarray[numpy.int32[3, 1]], zero_mask: bool = False, reduction_mode: imfusion._bindings.ReductionMode = <ReductionMode.AVERAGE: 1>, interpolation_mode: imfusion._bindings.InterpolationMode = <InterpolationMode.LINEAR: 1>) -> imfusion._bindings.MemImage

resize(self: MemImage, width: int, height: int, slices: int, channels: int = 1) MemImage
rotate(*args, **kwargs)

Overloaded function.

  1. rotate(self: imfusion._bindings.MemImage, angle: int = 90, flip_dim: int = -1, axis: int = 2) -> imfusion._bindings.MemImage

  2. rotate(self: imfusion._bindings.MemImage, rot: numpy.ndarray[numpy.float64[3, 3]], tolerance: float = 0.0) -> imfusion._bindings.MemImage

threshold(self: MemImage, value: float, below: bool, apply_shift_scale: bool = True, merge_channels: bool = False, replace_with: float = 0) MemImage
static zeros(desc: ImageDescriptor) MemImage

Factory method to create a zero-initialized image.

property channels
property dimension
property dimensions
property extent
property height
property image_to_pixel_matrix
property metric
property ndim
property pixel_to_image_matrix
property scale
property shape

Return a numpy compatible shape describing the dimensions of this image.

The returned tuple has 4 entries: slices, height, width, channels

property shift
property slices
property spacing
property type
property width
class imfusion.Mesh(*args, **kwargs)

Bases: Data

Overloaded function.

  1. __init__(self: imfusion._bindings.Mesh, mesh: imfusion._bindings.Mesh) -> None

  2. __init__(self: imfusion._bindings.Mesh, name: str = '') -> None

add_face(self: Mesh, index: ndarray[numpy.int32[3, 1]], force: bool) int
add_vertex(self: Mesh, position: ndarray[numpy.float64[3, 1]]) int
face_normal(self: Mesh, index: int) ndarray[numpy.float64[3, 1]]
face_vertices(self: Mesh, index: int) ndarray[numpy.int32[3, 1]]
halfedge_color(self: Mesh, vertex_index: int, face_index: int) ndarray[numpy.float32[4, 1]]
halfedge_normal(self: Mesh, vertex_index: int, face_index: int) ndarray[numpy.float64[3, 1]]
halfedge_vertices(self: Mesh, vertex_index: int, face_index: int) ndarray[numpy.int32[2, 1]]
is_closed(self: Mesh) bool
is_manifold(self: Mesh) bool
is_self_intersecting(self: Mesh) bool
is_vertex_manifold(self: Mesh, index: int) bool
is_watertight(self: Mesh, check_self_intersection: bool) bool
remove_face(self: Mesh, index: int, remove_isolated_vertices: bool) bool
remove_faces(self: Mesh, indices: list[int], remove_isolated_vertices: bool) None
remove_halfedge_colors(self: Mesh) None
remove_halfedge_normals(self: Mesh) None
remove_vertex_colors(self: Mesh) None
remove_vertex_normals(self: Mesh) None
remove_vertices(self: Mesh, indices: list[int], remove_isolated_vertices: bool) None
set_halfedge_color(*args, **kwargs)

Overloaded function.

  1. set_halfedge_color(self: imfusion._bindings.Mesh, vertex_index: int, face_index: int, color: numpy.ndarray[numpy.float32[4, 1]]) -> None

  2. set_halfedge_color(self: imfusion._bindings.Mesh, vertex_index: int, face_index: int, color: numpy.ndarray[numpy.float32[3, 1]], alpha: float) -> None

set_halfedge_normal(self: Mesh, vertex_index: int, face_index: int, normal: ndarray[numpy.float64[3, 1]]) None
set_vertex(self: Mesh, index: int, vertex: ndarray[numpy.float64[3, 1]]) None
set_vertex_color(*args, **kwargs)

Overloaded function.

  1. set_vertex_color(self: imfusion._bindings.Mesh, index: int, color: numpy.ndarray[numpy.float32[4, 1]]) -> None

  2. set_vertex_color(self: imfusion._bindings.Mesh, index: int, color: numpy.ndarray[numpy.float32[3, 1]], alpha: float) -> None

set_vertex_normal(self: Mesh, index: int, normal: ndarray[numpy.float64[3, 1]]) None
vertex(self: Mesh, index: int) ndarray[numpy.float64[3, 1]]
vertex_color(self: Mesh, index: int) ndarray[numpy.float32[4, 1]]
vertex_normal(self: Mesh, index: int) ndarray[numpy.float64[3, 1]]
property center
property extent
property filename
property has_halfedge_colors
property has_halfedge_normals
property has_vertex_colors
property has_vertex_normals
property number_of_faces
property number_of_vertices
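A minimal sketch of building a single triangle with the methods above (coordinates are arbitrary):

>>> mesh = Mesh('triangle')
>>> v0 = mesh.add_vertex([0.0, 0.0, 0.0])
>>> v1 = mesh.add_vertex([1.0, 0.0, 0.0])
>>> v2 = mesh.add_vertex([0.0, 1.0, 0.0])
>>> face = mesh.add_face([v0, v1, v2], force=False)  # returns the index of the new face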
class imfusion.Optimizer

Bases: pybind11_object

Object for non-linear optimization.

The current bindings are work in progress and therefore limited. They are so far mostly meant to be used for changing an existing optimizer rather than creating one from scratch.

class Mode(self: Mode, value: int)

Bases: pybind11_object

Mode of operation when execute is called.

Members:

OPT : Standard optimization.

STUDY : Randomized study.

PLOT : Evaluate for 1D or 2D plot generation.

EVALUATE : Single evaluation.

EVALUATE = <Mode.EVALUATE: 3>
OPT = <Mode.OPT: 0>
PLOT = <Mode.PLOT: 2>
STUDY = <Mode.STUDY: 1>
property name
property value
abort(self: Optimizer) None

Request to abort the optimization.

configuration(self: Optimizer) Properties

Retrieves the configuration of the object.

configure(self: Optimizer, arg0: Properties) None

Configures the object.

execute(self: Optimizer, x: list[float]) list[float]

Execute the optimization given a vector of initial parameters of full dimensionality.

set_bounds(*args, **kwargs)

Overloaded function.

  1. set_bounds(self: imfusion._bindings.Optimizer, bounds: float) -> None

Set the same symmetric bounds in all parameters. A value of zero disables the bounds.

  2. set_bounds(self: imfusion._bindings.Optimizer, lower_bounds: float, upper_bounds: float) -> None

Set the same lower and upper bounds for all parameters.

  3. set_bounds(self: imfusion._bindings.Optimizer, bounds: list[float]) -> None

Set individual symmetric bounds.

  4. set_bounds(self: imfusion._bindings.Optimizer, lower_bounds: list[float], upper_bounds: list[float]) -> None

Set individual lower and upper bounds.

  5. set_bounds(self: imfusion._bindings.Optimizer, bounds: list[tuple[float, float]]) -> None

Set individual lower and upper bounds as a list of pairs.
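For example, a minimal sketch (assuming opt is an existing Optimizer, e.g. obtained from an algorithm):

>>> opt.set_bounds(1.0)  # symmetric bounds of +/-1 for all parameters
>>> opt.set_bounds([0.5, 0.5, 2.0])  # individual symmetric bounds
>>> opt.set_bounds([(-1.0, 1.0), (-0.5, 0.5)])  # individual (lower, upper) pairs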

set_logging_level(self: Optimizer, file_level: int | None = None, console_level: int | None = None) None

Set level of detail for logging to text file and the console. 0 = none (default), 1 = init/result, 2 = every evaluation, 3 = only final result after study.

property abort_eval

Abort after a certain number of cost function evaluations.

property abort_fun_tol

Abort if the change in cost function value becomes too small.

property abort_fun_val

Abort if this function value is reached.

property abort_par_tol

Abort if change in parameter values becomes too small.

property abort_time

Abort after a certain elapsed number of seconds.

property aborted

Whether the optimizer was aborted.

property best_val

Return best cost function value.

property dimension

Total number of parameters. The selection gets cleared when the dimension value is modified.

property first_val

Return cost function value of first evaluation.

property minimizing

Whether the optimizer has a loss function (that it should minimize) or an objective function (that it should maximize).

property mode

Mode of operation when execute is called.

property num_eval

Return number of cost function evaluations computed so far.

property param_names

Names of the parameters.

property selection

Selected parameters.

property type

Type of optimizer (see doc/header).

class imfusion.PaddingMode(*args, **kwargs)

Bases: pybind11_object

Members:

CLAMP

MIRROR

ZERO

Overloaded function.

  1. __init__(self: imfusion._bindings.PaddingMode, value: int) -> None

  2. __init__(self: imfusion._bindings.PaddingMode, arg0: str) -> None

CLAMP = <PaddingMode.CLAMP: 2>
MIRROR = <PaddingMode.MIRROR: 1>
ZERO = <PaddingMode.ZERO: 0>
property name
property value
class imfusion.ParametricDeformation

Bases: Deformation

set_parameters(self: ParametricDeformation, parameters: list[float]) None
class imfusion.PixelType(self: PixelType, value: int)

Bases: pybind11_object

Members:

BYTE

UBYTE

SHORT

USHORT

INT

UINT

FLOAT

DOUBLE

HFLOAT

BYTE = <PixelType.BYTE: 5120>
DOUBLE = <PixelType.DOUBLE: 5130>
FLOAT = <PixelType.FLOAT: 5126>
HFLOAT = <PixelType.HFLOAT: 5131>
INT = <PixelType.INT: 5124>
SHORT = <PixelType.SHORT: 5122>
UBYTE = <PixelType.UBYTE: 5121>
UINT = <PixelType.UINT: 5125>
USHORT = <PixelType.USHORT: 5123>
property name
property value
class imfusion.PluginInfo

Bases: pybind11_object

Provides information about a framework plugin.

property name
property path
class imfusion.PointCloud(self: PointCloud, points: list[ndarray[numpy.float64[3, 1]]] = [], *, normals: list[ndarray[numpy.float64[3, 1]]] = [], colors: list[ndarray[numpy.float64[3, 1]]] = [])

Bases: Data

Data structure representing a point cloud in 3d space. Each point can have an associated color and normal vector.

Constructs a point cloud with the specified points, normals and colors. If the number of colors / normals does not match the number of points, they will be ignored with a warning.

Parameters:
  • points – Vertices of the point cloud.

  • normals – Normals of the point cloud. If the length does not match points, normals will be dropped with a warning.

  • colors – Colors (RGB) of the point cloud. If the length does not match points, colors will be dropped with a warning.
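For example (assuming numpy is imported as np):

>>> points = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
>>> colors = [np.array([1.0, 0.0, 0.0])] * 2
>>> pc = PointCloud(points, colors=colors)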

clone(self: PointCloud) PointCloud

Create a new point cloud by deep copying all data from this instance.

transform_point_cloud(self: PointCloud, transformation: ndarray[numpy.float64[4, 4]]) None
property colors
property has_normals
property is_dense
property normals
property points
property weights
class imfusion.Properties(*args, **kwargs)

Bases: pybind11_object

Container for arbitrary properties, internally stored as strings.

The bindings provide two interfaces: a C++-like one based on param() and set_param(), and a more Pythonic interface using the [] operator. Both interfaces are equivalent and interchangeable.

Parameters can be set with the set_param method, e.g.:

>>> p = Properties()
>>> p.set_param('Spam', 5)

The ParamType will be set depending on the type of the value similar to C++. To retrieve a parameter, a value of the desired return type must be passed:

>>> spam = 0
>>> p.param('Spam', spam)
5

If the parameter doesn’t exist, the value of the second argument is returned:

>>> foo = 8
>>> p.param('Foo', foo)
8

The Properties object also exposes all its parameters as items, e.g. to add a new parameter just add a new key:

>>> p = Properties()
>>> p['spam'] = 5

When using the dictionary-like syntax with the basic types (bool, int, float, str and list), the returned values are correctly typed:

>>> type(p['spam'])
<class 'int'>

However, for matrix and vector types, the param() method has to be used; it receives an extra value of the same type as the one to be returned:

>>> import numpy as np
>>> np_array = np.ones(3)
>>> p['foo'] = np_array
>>> p.param('foo', np_array)
array([1., 1., 1.])

In fact, the dictionary-like syntax would just return it as a string instead:

>>> p['foo']
'1 1 1 '

Additionally, the attributes of parameters are available through the attributes() method:

>>> p.set_param_attributes('spam', 'max: 10')
>>> p.param_attributes('spam')
[('max', '10')]

A Properties object can be obtained from a dictionary:

>>> p = Properties({'spam': 5, 'eggs': True, 'sub': { 'eggs': False }})
>>> p['eggs']
True
>>> p['sub']['eggs']
False

There are two possible but slightly different ways to convert a Properties instance into a dictionary. The first is casting with dict(), which returns a dictionary whose nested values remain Properties instances; the second is calling the asdict() method, which returns a dictionary that also expands the nested Properties instances:

>>> dict(p)
{'spam': 5, 'eggs': True, 'sub': <imfusion._bindings.Properties object at 0x7fb0ac062b70>}
>>> p.asdict()
{'spam': 5, 'eggs': True, 'sub': {'eggs': False}}

Overloaded function.

  1. __init__(self: imfusion._bindings.Properties, name: str = '') -> None

  2. __init__(self: imfusion._bindings.Properties, dictionary: dict) -> None

class EnumStringParam(self: EnumStringParam, *, value: str, admitted_values: set[str])

Bases: pybind11_object

Param that can assume a certain value among a set of str possibilities.

Parameters:
  • value – a choice among the set of admitted_values.

  • admitted_values – set of str defining the available values.

classmethod from_enum(enum_member: object, take_enum_values: bool = False) EnumStringParam

Construct an EnumStringParam automatically out of the provided instance of an enumeration class.

Parameters:
  • enum_member – a member of an enumeration class. The current value will be assigned to this argument, while the admitted_values will be automatically constructed from the members of the enumeration class.

  • take_enum_values – if False, the enumeration members are taken as values. If True, the enumeration values are taken as values; note that in this case all the enumeration values must be unique and of str type.

to_enum(self: EnumStringParam, enum_type: object) object

Casts the current value into the corresponding member of enum_type. Raises if this is not possible.

Parameters:

enum_type – the enumeration class into which to cast the current value. Please note that this enumeration class must be compatible, which means it must correspond to the set of admitted_values.

property admitted_values

The current set of admitted values.

property value

The current value that is assumed among the current set of admitted values.
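A minimal sketch of the round trip between a Python enumeration and an EnumStringParam (the Color enum is hypothetical):

>>> import enum
>>> class Color(enum.Enum):
...     RED = 'red'
...     GREEN = 'green'
>>> param = Properties.EnumStringParam.from_enum(Color.RED)
>>> color = param.to_enum(Color)  # back to Color.RED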

__getitem__(self: Properties, arg0: str) object
__iter__(self: object) Iterator
__setitem__(*args, **kwargs)

Overloaded function.

  1. __setitem__(self: imfusion._bindings.Properties, name: str, value: bool) -> None

  2. __setitem__(self: imfusion._bindings.Properties, name: str, value: int) -> None

  3. __setitem__(self: imfusion._bindings.Properties, name: str, value: float) -> None

  4. __setitem__(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 3]]) -> None

  5. __setitem__(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[4, 4]]) -> None

  6. __setitem__(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 4]]) -> None

  7. __setitem__(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[3, 3]]) -> None

  8. __setitem__(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[4, 4]]) -> None

  9. __setitem__(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[2, 1]]) -> None

  10. __setitem__(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 1]]) -> None

  11. __setitem__(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[4, 1]]) -> None

  12. __setitem__(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[5, 1]]) -> None

  13. __setitem__(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[2, 1]]) -> None

  14. __setitem__(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[3, 1]]) -> None

  15. __setitem__(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[4, 1]]) -> None

  16. __setitem__(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.int32[2, 1]]) -> None

  17. __setitem__(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.int32[3, 1]]) -> None

  18. __setitem__(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.int32[4, 1]]) -> None

  19. __setitem__(self: imfusion._bindings.Properties, name: str, value: str) -> None

  20. __setitem__(self: imfusion._bindings.Properties, name: str, value: os.PathLike) -> None

  21. __setitem__(self: imfusion._bindings.Properties, name: str, value: list[str]) -> None

  22. __setitem__(self: imfusion._bindings.Properties, name: str, value: list[os.PathLike]) -> None

  23. __setitem__(self: imfusion._bindings.Properties, name: str, value: list[bool]) -> None

  24. __setitem__(self: imfusion._bindings.Properties, name: str, value: list[int]) -> None

  25. __setitem__(self: imfusion._bindings.Properties, name: str, value: list[float]) -> None

  26. __setitem__(self: imfusion._bindings.Properties, name: str, value: list[numpy.ndarray[numpy.float64[2, 1]]]) -> None

  27. __setitem__(self: imfusion._bindings.Properties, name: str, value: list[numpy.ndarray[numpy.float64[3, 1]]]) -> None

  28. __setitem__(self: imfusion._bindings.Properties, name: str, value: list[numpy.ndarray[numpy.float64[4, 1]]]) -> None

  29. __setitem__(self: imfusion._bindings.Properties, name: str, value: imfusion._bindings.Properties.EnumStringParam) -> None

  30. __setitem__(self: imfusion._bindings.Properties, name: str, value: object) -> None

add_sub_properties(self: Properties, name: str) Properties
asdict(self: Properties) dict

Return the Properties as a dict.

The dictionary values have the correct type when they are basic (bool, int, float, str and list); all other param types are returned as str. Subproperties are turned into nested dicts.

clear(self: Properties) None
copy_from(self: Properties, arg0: Properties) None
get(self: Properties, key: str, default_value: object = None) object
get_name(self: Properties) str
items(self: Properties) list
keys(self: Properties) list
static load_from_json(path: str) Properties
static load_from_xml(path: str) Properties
param(*args, **kwargs)

Overloaded function.

  1. param(self: imfusion._bindings.Properties, name: str, value: bool) -> bool

  2. param(self: imfusion._bindings.Properties, name: str, value: int) -> int

  3. param(self: imfusion._bindings.Properties, name: str, value: float) -> float

  4. param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 3]]) -> numpy.ndarray[numpy.float64[3, 3]]

  5. param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[4, 4]]) -> numpy.ndarray[numpy.float64[4, 4]]

  6. param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 4]]) -> numpy.ndarray[numpy.float64[3, 4]]

  7. param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[3, 3]]) -> numpy.ndarray[numpy.float32[3, 3]]

  8. param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[4, 4]]) -> numpy.ndarray[numpy.float32[4, 4]]

  9. param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[2, 1]]) -> numpy.ndarray[numpy.float64[2, 1]]

  10. param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 1]]) -> numpy.ndarray[numpy.float64[3, 1]]

  11. param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[4, 1]]) -> numpy.ndarray[numpy.float64[4, 1]]

  12. param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[5, 1]]) -> numpy.ndarray[numpy.float64[5, 1]]

  13. param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[2, 1]]) -> numpy.ndarray[numpy.float32[2, 1]]

  14. param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[3, 1]]) -> numpy.ndarray[numpy.float32[3, 1]]

  15. param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[4, 1]]) -> numpy.ndarray[numpy.float32[4, 1]]

  16. param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.int32[2, 1]]) -> numpy.ndarray[numpy.int32[2, 1]]

  17. param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.int32[3, 1]]) -> numpy.ndarray[numpy.int32[3, 1]]

  18. param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.int32[4, 1]]) -> numpy.ndarray[numpy.int32[4, 1]]

  19. param(self: imfusion._bindings.Properties, name: str, value: str) -> str

  20. param(self: imfusion._bindings.Properties, name: str, value: os.PathLike) -> os.PathLike

  21. param(self: imfusion._bindings.Properties, name: str, value: list[str]) -> list[str]

  22. param(self: imfusion._bindings.Properties, name: str, value: list[os.PathLike]) -> list[os.PathLike]

  23. param(self: imfusion._bindings.Properties, name: str, value: list[bool]) -> list[bool]

  24. param(self: imfusion._bindings.Properties, name: str, value: list[int]) -> list[int]

  25. param(self: imfusion._bindings.Properties, name: str, value: list[float]) -> list[float]

  26. param(self: imfusion._bindings.Properties, name: str, value: list[numpy.ndarray[numpy.float64[2, 1]]]) -> list[numpy.ndarray[numpy.float64[2, 1]]]

  27. param(self: imfusion._bindings.Properties, name: str, value: list[numpy.ndarray[numpy.float64[3, 1]]]) -> list[numpy.ndarray[numpy.float64[3, 1]]]

  28. param(self: imfusion._bindings.Properties, name: str, value: list[numpy.ndarray[numpy.float64[4, 1]]]) -> list[numpy.ndarray[numpy.float64[4, 1]]]

  29. param(self: imfusion._bindings.Properties, name: str, value: imfusion._bindings.Properties.EnumStringParam) -> imfusion._bindings.Properties.EnumStringParam

param_attributes(self: Properties, name: str) list[tuple[str, str]]
params(self: Properties) list[str]

Return a list of all param names.

Params inside sub-properties will be prefixed with the name of the sub-properties (e.g. 'sub/var'). If with_sub_params is false, only the top-level params are returned.
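For example:

>>> p = Properties({'spam': 5, 'sub': {'eggs': True}})
>>> names = p.params()  # e.g. ['spam', 'sub/eggs']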

remove_param(self: Properties, name: str) None
save_to_json(self: Properties, path: str) None
save_to_xml(self: Properties, path: str) None
set_name(self: Properties, name: str) None
set_param(*args, **kwargs)

Overloaded function.

  1. set_param(self: imfusion._bindings.Properties, name: str, value: bool) -> None

  2. set_param(self: imfusion._bindings.Properties, name: str, value: bool, default: bool) -> None

  3. set_param(self: imfusion._bindings.Properties, name: str, value: int) -> None

  4. set_param(self: imfusion._bindings.Properties, name: str, value: int, default: int) -> None

  5. set_param(self: imfusion._bindings.Properties, name: str, value: float) -> None

  6. set_param(self: imfusion._bindings.Properties, name: str, value: float, default: float) -> None

  7. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 3]]) -> None

  8. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 3]], default: numpy.ndarray[numpy.float64[3, 3]]) -> None

  9. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[4, 4]]) -> None

  10. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[4, 4]], default: numpy.ndarray[numpy.float64[4, 4]]) -> None

  11. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 4]]) -> None

  12. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 4]], default: numpy.ndarray[numpy.float64[3, 4]]) -> None

  13. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[3, 3]]) -> None

  14. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[3, 3]], default: numpy.ndarray[numpy.float32[3, 3]]) -> None

  15. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[4, 4]]) -> None

  16. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[4, 4]], default: numpy.ndarray[numpy.float32[4, 4]]) -> None

  17. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[2, 1]]) -> None

  18. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[2, 1]], default: numpy.ndarray[numpy.float64[2, 1]]) -> None

  19. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 1]]) -> None

  20. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[3, 1]], default: numpy.ndarray[numpy.float64[3, 1]]) -> None

  21. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[4, 1]]) -> None

  22. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[4, 1]], default: numpy.ndarray[numpy.float64[4, 1]]) -> None

  23. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[5, 1]]) -> None

  24. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float64[5, 1]], default: numpy.ndarray[numpy.float64[5, 1]]) -> None

  25. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[2, 1]]) -> None

  26. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[2, 1]], default: numpy.ndarray[numpy.float32[2, 1]]) -> None

  27. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[3, 1]]) -> None

  28. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[3, 1]], default: numpy.ndarray[numpy.float32[3, 1]]) -> None

  29. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[4, 1]]) -> None

  30. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.float32[4, 1]], default: numpy.ndarray[numpy.float32[4, 1]]) -> None

  31. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.int32[2, 1]]) -> None

  32. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.int32[2, 1]], default: numpy.ndarray[numpy.int32[2, 1]]) -> None

  33. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.int32[3, 1]]) -> None

  34. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.int32[3, 1]], default: numpy.ndarray[numpy.int32[3, 1]]) -> None

  35. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.int32[4, 1]]) -> None

  36. set_param(self: imfusion._bindings.Properties, name: str, value: numpy.ndarray[numpy.int32[4, 1]], default: numpy.ndarray[numpy.int32[4, 1]]) -> None

  37. set_param(self: imfusion._bindings.Properties, name: str, value: str) -> None

  38. set_param(self: imfusion._bindings.Properties, name: str, value: str, default: str) -> None

  39. set_param(self: imfusion._bindings.Properties, name: str, value: os.PathLike) -> None

  40. set_param(self: imfusion._bindings.Properties, name: str, value: os.PathLike, default: os.PathLike) -> None

  41. set_param(self: imfusion._bindings.Properties, name: str, value: list[str]) -> None

  42. set_param(self: imfusion._bindings.Properties, name: str, value: list[str], default: list[str]) -> None

  43. set_param(self: imfusion._bindings.Properties, name: str, value: list[os.PathLike]) -> None

  44. set_param(self: imfusion._bindings.Properties, name: str, value: list[os.PathLike], default: list[os.PathLike]) -> None

  45. set_param(self: imfusion._bindings.Properties, name: str, value: list[bool]) -> None

  46. set_param(self: imfusion._bindings.Properties, name: str, value: list[bool], default: list[bool]) -> None

  47. set_param(self: imfusion._bindings.Properties, name: str, value: list[int]) -> None

  48. set_param(self: imfusion._bindings.Properties, name: str, value: list[int], default: list[int]) -> None

  49. set_param(self: imfusion._bindings.Properties, name: str, value: list[float]) -> None

  50. set_param(self: imfusion._bindings.Properties, name: str, value: list[float], default: list[float]) -> None

  51. set_param(self: imfusion._bindings.Properties, name: str, value: list[numpy.ndarray[numpy.float64[2, 1]]]) -> None

  52. set_param(self: imfusion._bindings.Properties, name: str, value: list[numpy.ndarray[numpy.float64[2, 1]]], default: list[numpy.ndarray[numpy.float64[2, 1]]]) -> None

  53. set_param(self: imfusion._bindings.Properties, name: str, value: list[numpy.ndarray[numpy.float64[3, 1]]]) -> None

  54. set_param(self: imfusion._bindings.Properties, name: str, value: list[numpy.ndarray[numpy.float64[3, 1]]], default: list[numpy.ndarray[numpy.float64[3, 1]]]) -> None

  55. set_param(self: imfusion._bindings.Properties, name: str, value: list[numpy.ndarray[numpy.float64[4, 1]]]) -> None

  56. set_param(self: imfusion._bindings.Properties, name: str, value: list[numpy.ndarray[numpy.float64[4, 1]]], default: list[numpy.ndarray[numpy.float64[4, 1]]]) -> None

  57. set_param(self: imfusion._bindings.Properties, name: str, value: imfusion._bindings.Properties.EnumStringParam) -> None

  58. set_param(self: imfusion._bindings.Properties, name: str, value: imfusion._bindings.Properties.EnumStringParam, default: imfusion._bindings.Properties.EnumStringParam) -> None

set_param_attributes(self: Properties, name: str, attributes: str) None
sub_properties(*args, **kwargs)

Overloaded function.

  1. sub_properties(self: imfusion._bindings.Properties, name: str, create_if_doesnt_exist: bool = False) -> imfusion._bindings.Properties

  2. sub_properties(self: imfusion._bindings.Properties) -> list[imfusion._bindings.Properties]
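A minimal sketch of working with sub-properties (assuming add_sub_properties returns a live reference to the newly created sub-properties, as its signature suggests):

>>> p = Properties()
>>> sub = p.add_sub_properties('sub')
>>> sub['eggs'] = False
>>> nested = p.sub_properties('sub')  # retrieves the same sub-properties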

sub_properties_all(self: Properties, name: str) list
values(self: Properties) list
class imfusion.RealWorldMappingDataComponent(self: RealWorldMappingDataComponent)

Bases: DataComponentBase

class Mapping(self: Mapping)

Bases: pybind11_object

original_to_real_world(self: Mapping, value: float) float
storage_to_real_world(self: Mapping, image_descriptor: ImageDescriptor, value: float) float
property intercept
property slope
property type
property unit
class MappingType(self: MappingType, value: int)

Bases: pybind11_object

Members:

REAL_WORLD_VALUES

STANDARDIZED_UPTAKE_VALUES

REAL_WORLD_VALUES = <MappingType.REAL_WORLD_VALUES: 0>
STANDARDIZED_UPTAKE_VALUES = <MappingType.STANDARDIZED_UPTAKE_VALUES: 1>
property name
property value
REAL_WORLD_VALUES = <MappingType.REAL_WORLD_VALUES: 0>
STANDARDIZED_UPTAKE_VALUES = <MappingType.STANDARDIZED_UPTAKE_VALUES: 1>
property mappings
property units
class imfusion.ReductionMode(self: ReductionMode, value: int)

Bases: pybind11_object

Members:

LOOKUP

AVERAGE

MINIMUM

MAXIMUM

AVERAGE = <ReductionMode.AVERAGE: 1>
LOOKUP = <ReductionMode.LOOKUP: 0>
MAXIMUM = <ReductionMode.MAXIMUM: 3>
MINIMUM = <ReductionMode.MINIMUM: 2>
property name
property value
class imfusion.ReferenceImageDataComponent

Bases: DataComponentBase

property reference
class imfusion.RegionOfInterest(self: RegionOfInterest, arg0: ndarray[numpy.int32[3, 1]], arg1: ndarray[numpy.int32[3, 1]])

Bases: pybind11_object

property offset
property size
class imfusion.Selection(*args, **kwargs)

Bases: Configurable

Utility class for describing a selection of elements out of a set. Conceptually, a Selection pairs a list of bools describing selected items with the index of a “focus” item and provides syntactic sugar on top. For instance, the set of selected items could define which ones to show in general while the focus item is additionally highlighted.

The class is fully separate from the item set of which it describes the selection. This means for instance that it cannot know the actual number of items in the set and the user/parent class must manually make sure that they match. Also, a Selection only manages indices and offers no way of accessing the underlying elements. In order to iterate over all selected indices, you can do for instance the following:

for index in range(selection.start, selection.stop):
    if selection[index]:
        ...

The same effect can also be achieved in a much more terse fashion:

for selected_index in selection.selected_indices:
    ...

For convenience, the selection can also be converted to a slice object (if the selection has a regular spacing, see below):

selected_subset = container[selection.slice()]

Sometimes it can be more convenient to “thin out” a selection by only selecting every N-th element. To this end, the Selection constructor takes the arguments start, stop and step. Setting step to N will only select every N-th element, mimicking the signature of range, slice, etc.

Overloaded function.

  1. __init__(self: imfusion._bindings.Selection) -> None

  2. __init__(self: imfusion._bindings.Selection, stop: int) -> None

  3. __init__(self: imfusion._bindings.Selection, start: int, stop: int, step: int = 1) -> None

  4. __init__(self: imfusion._bindings.Selection, indices: list[int]) -> None
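For example, selecting every 2nd element of a 10-element set:

>>> sel = Selection(0, 10, 2)
>>> indices = sel.selected_indices  # 0, 2, 4, 6, 8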

class NonePolicy(self: NonePolicy, value: int)

Bases: pybind11_object

Members:

EMPTY

FOCUS

ALL

ALL = <NonePolicy.ALL: 2>
EMPTY = <NonePolicy.EMPTY: 0>
FOCUS = <NonePolicy.FOCUS: 1>
property name
property value
__getitem__(self: Selection, index: int) bool
__iter__(self: Selection) object
__setitem__(self: Selection, index: int, selected: bool) None
clamp(self: Selection, last_selected_index: int) None
is_selected(self: Selection, index: int, none_policy: NonePolicy) bool
select_only(self: Selection, index: int) None
select_up_to(self: Selection, stop: int) None
set_first_last_skip(self: Selection, arg0: int, arg1: int, arg2: int) None
ALL = <NonePolicy.ALL: 2>
EMPTY = <NonePolicy.EMPTY: 0>
FOCUS = <NonePolicy.FOCUS: 1>
property first_selected
property focus
property has_regular_skip
property is_none
property last_selected
property range
property selected_indices
property size
property skip
property start
property step
property stop
class imfusion.SharedImage(*args, **kwargs)

Bases: pybind11_object

A SharedImage instance represents an image that can reside in different memory locations, e.g. in CPU memory or GPU memory.

A SharedImage can be directly converted from and to a numpy array:

>>> import numpy
>>> img = SharedImage(numpy.ones([10, 10, 1], dtype='uint8'))
>>> arr = numpy.array(img)

See MemImage for details.

Overloaded function.

  1. __init__(self: imfusion._bindings.SharedImage, mem_image: imfusion._bindings.MemImage) -> None

  2. __init__(self: imfusion._bindings.SharedImage, desc: imfusion._bindings.ImageDescriptor) -> None

  3. __init__(self: imfusion._bindings.SharedImage, desc: imfusion._bindings.ImageDescriptorWorld) -> None

  4. __init__(self: imfusion._bindings.SharedImage, type: imfusion._bindings.PixelType, width: int, height: int, slices: int = 1, channels: int = 1) -> None

  5. __init__(self: imfusion._bindings.SharedImage, array: numpy.ndarray[numpy.int8], greyscale: bool = False) -> None

  6. __init__(self: imfusion._bindings.SharedImage, array: numpy.ndarray[numpy.uint8], greyscale: bool = False) -> None

  7. __init__(self: imfusion._bindings.SharedImage, array: numpy.ndarray[numpy.int16], greyscale: bool = False) -> None

  8. __init__(self: imfusion._bindings.SharedImage, array: numpy.ndarray[numpy.uint16], greyscale: bool = False) -> None

  9. __init__(self: imfusion._bindings.SharedImage, array: numpy.ndarray[numpy.int32], greyscale: bool = False) -> None

  10. __init__(self: imfusion._bindings.SharedImage, array: numpy.ndarray[numpy.uint32], greyscale: bool = False) -> None

  11. __init__(self: imfusion._bindings.SharedImage, array: numpy.ndarray[numpy.float32], greyscale: bool = False) -> None

  12. __init__(self: imfusion._bindings.SharedImage, array: numpy.ndarray[numpy.float64], greyscale: bool = False) -> None

apply_shift_and_scale(arr)

Return a copy of the array with storage values converted to original values. The dtype of the returned array is always DOUBLE (numpy.float64).

argmax(self: SharedImage) list[ndarray[numpy.int32[4, 1]]]

Return a list of the indices of maximum values, channel-wise. The indices are represented as (x, y, z, image index).

argmin(self: SharedImage) list[ndarray[numpy.int32[4, 1]]]

Return a list of the indices of minimum values, channel-wise. The indices are represented as (x, y, z, image index).

assign_array(arr, casting='same_kind')

Copies the contents of arr to the SharedImage. Automatically calls setDirtyMem.

The casting parameter behaves like the one of numpy.copyto.
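For example:

>>> import numpy
>>> img = SharedImage(numpy.zeros([10, 10, 1], dtype='uint8'))
>>> img.assign_array(numpy.ones([10, 10, 1], dtype='uint8'))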

astype(self: SharedImage, pixelType: object) SharedImage

Create a copy of the current SharedImage instance with the requested Image format.

This function accepts either:

  • a PixelType (e.g. imfusion.PixelType.UINT);

  • most numpy dtypes (e.g. np.uint);

  • Python's float or int types.

If the requested PixelType already matches the PixelType of the provided SharedImage, then a clone of the current instance is returned.

channel_swizzle(self: SharedImage, indices: list[int]) SharedImage

Reorders the channels of an image based on the input indices, e.g. indices[0] will correspond to the first channel of the output image.

Parameters:

indices (List[int]) – List of channel indices used to swizzle the channels of the SharedImage.

clone(self: SharedImage) SharedImage
dimension(self: SharedImage) int
exclusive_mem(self: SharedImage) None

Clear representations that are not CPU memory.

image_to_world(self: SharedImage, image_coordinates: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]
max(self: SharedImage) ndarray[numpy.float64[m, 1]]

Return the list of the maximum elements of images, channel-wise.

mean(self: SharedImage) ndarray[numpy.float64[m, 1]]

Return a list of the channel-wise averages of the image elements.

mem(self: SharedImage) MemImage
min(self: SharedImage) ndarray[numpy.float64[m, 1]]

Return the list of the minimum elements of images, channel-wise.

norm(self: SharedImage, order: object = 2) ndarray[numpy.float64[m, 1]]

Returns the norm of an image instance, channel-wise.

Parameters:

order (int, float, 'inf') – Order of the norm. Default is L2 norm.

numpy()

Convenience method for converting a MemImage or a SharedImage into a newly created numpy array with scale and shift already applied.

Shift and scale may determine a complex change of pixel type prior to the conversion into a numpy array:

  • as a first rule, even if shift and scale are stored as floats, they are treated as integers whenever they represent integral values (e.g. a shift of 2.000 is treated as 2);

  • if shift and scale are such that the range of pixel values (determined by the pixel_type) would not fit into the pixel_type, e.g. a negative pixel value with an unsigned type, then the pixel_type is promoted to a signed type if possible, otherwise to a single-precision floating point type;

  • if shift and scale are such that the range of pixel values (determined by the pixel_type) would fit into a demoted pixel_type, e.g. the type is signed but the range of pixel values is unsigned, then the pixel_type is demoted;

  • if shift and scale do not guarantee that all possible pixel values (in the range determined by the pixel_type) become integers, then the pixel_type is promoted to a single-precision floating point type;

  • in any case, the returned numpy array uses integer types of at most 32 bits; if the integer type would require more bits, the resulting pixel_type is DOUBLE.

Parameters:

self – instance of a MemImage or of a SharedImage

Returns:

numpy.ndarray

pixel_to_world(self: SharedImage, pixel_coordinates: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]
prepare(self: SharedImage, shift_only: bool = False) None
Prepare the image:

Integral types are converted to an unsigned representation if applicable, and double precision is converted to single-precision float. Furthermore, if shift_only is False, the present intensity range is rescaled to [0..1] for floating point types or to the entire available value range for integral types.

prod(self: SharedImage) ndarray[numpy.float64[m, 1]]

Return a list of the channel-wise products of the image elements.

set_dirty_mem(self: SharedImage) None
sum(self: SharedImage) ndarray[numpy.float64[m, 1]]

Return a list of the channel-wise sums of the image elements.

sync(self: SharedImage) None
torch(device: device = None, dtype: dtype = None, same_as: Tensor = None) Tensor

Convert SharedImageSet or a SharedImage to a torch.Tensor.

Parameters:
  • self (DataElement | SharedImageSet | SharedImage) – Instance of SharedImageSet or SharedImage (this function is bound as a method to SharedImageSet and SharedImage)

  • device (device) – Target device for the new torch.Tensor

  • dtype (dtype) – Type of the new torch.Tensor

  • same_as (Tensor) – Template tensor whose device and dtype configuration should be matched. device and dtype are still applied afterwards.

Returns:

New torch.Tensor

Return type:

Tensor
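For example, a minimal sketch (requires PyTorch; img is an existing SharedImage):

>>> import torch
>>> t = img.torch(dtype=torch.float32)  # copy of the image data as a float32 tensor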

world_to_image(self: SharedImage, world_coordinates: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]
world_to_pixel(self: SharedImage, world_coordinates: ndarray[numpy.float64[3, 1]]) ndarray[numpy.float64[3, 1]]
property channels
property deformation
property descriptor
property descriptor_world
property extent
property height
property image_to_world_matrix
property kind
property mask
property metric
property modality
property ndim
property pixel_to_world_matrix
property scale
property shape

Return a numpy compatible shape describing the dimensions of this image.

The returned tuple has 4 entries: slices, height, width, channels

property shift
property slices
property spacing
property width
property world_to_image_matrix
property world_to_pixel_matrix
class imfusion.SharedImageSet(*args, **kwargs)

Bases: Data

Set of images independent of their storage location.

This class is the main high-level container for image data consisting of one or multiple images or volumes, and should be used both in algorithms and visualization classes. It features both a single focus and a multiple selection, and provides transformation matrices for each image.

The focus image of a SharedImageSet can be directly converted from and to a numpy array:

>>> import numpy
>>> img = SharedImageSet(numpy.ones([1, 10, 10, 10, 1], dtype='uint8'))
>>> arr = numpy.array(img)

See MemImage for details.

Overloaded function.

  1. __init__(self: imfusion._bindings.SharedImageSet) -> None

Creates an empty SharedImageSet.

  2. __init__(self: imfusion._bindings.SharedImageSet, mem_image: imfusion._bindings.MemImage) -> None

  3. __init__(self: imfusion._bindings.SharedImageSet, shared_image: imfusion._bindings.SharedImage) -> None

  4. __init__(self: imfusion._bindings.SharedImageSet, array: numpy.ndarray[numpy.int8], greyscale: bool = False) -> None

  5. __init__(self: imfusion._bindings.SharedImageSet, array: numpy.ndarray[numpy.uint8], greyscale: bool = False) -> None

  6. __init__(self: imfusion._bindings.SharedImageSet, array: numpy.ndarray[numpy.int16], greyscale: bool = False) -> None

  7. __init__(self: imfusion._bindings.SharedImageSet, array: numpy.ndarray[numpy.uint16], greyscale: bool = False) -> None

  8. __init__(self: imfusion._bindings.SharedImageSet, array: numpy.ndarray[numpy.int32], greyscale: bool = False) -> None

  9. __init__(self: imfusion._bindings.SharedImageSet, array: numpy.ndarray[numpy.uint32], greyscale: bool = False) -> None

  10. __init__(self: imfusion._bindings.SharedImageSet, array: numpy.ndarray[numpy.float32], greyscale: bool = False) -> None

  11. __init__(self: imfusion._bindings.SharedImageSet, array: numpy.ndarray[numpy.float64], greyscale: bool = False) -> None

__getitem__(self: SharedImageSet, index: int) SharedImage
__iter__(self: SharedImageSet) Iterator[SharedImage]
add(*args, **kwargs)

Overloaded function.

  1. add(self: imfusion._bindings.SharedImageSet, shared_image: imfusion._bindings.SharedImage) -> None

  2. add(self: imfusion._bindings.SharedImageSet, mem_image: imfusion._bindings.MemImage) -> None

apply_shift_and_scale(arr)

Return a copy of the array with storage values converted to original values.

Parameters:
  • self – instance of a SharedImageSet which provides shift and scale

  • arr – array to be converted from storage values into original values

Returns:

numpy.ndarray

argmax(self: SharedImageSet) list[ndarray[numpy.int32[4, 1]]]

Return a list of the indices of maximum values, channel-wise. The indices are represented as (x, y, z, image index).

argmin(self: SharedImageSet) list[ndarray[numpy.int32[4, 1]]]

Return a list of the indices of minimum values, channel-wise. The indices are represented as (x, y, z, image index).

assign_array(arr)

Copies the contents of arr to the SharedImageSet. Automatically calls setDirtyMem.

astype(self: SharedImageSet, pixel_type: object) SharedImageSet

Returns a new SharedImageSet formed by new SharedImage instances obtained by converting the original ones into the requested PixelType.

This function accepts either:

  • a PixelType (e.g. imfusion.PixelType.UINT);

  • most numpy dtypes (e.g. np.uint);

  • Python's float or int types.

If the requested type already matches the input type, the returned SharedImageSet will contain clones of the original images.

channel_swizzle(self: SharedImageSet, indices: list[int]) SharedImageSet

Reorders the channels of an image based on the input indices, e.g. indices[0] will correspond to the first channel of the output image.

Parameters:

indices (List[int]) – List of channel indices used to swizzle the channels of the SharedImageSet.

clear(self: SharedImageSet) None
clone(self: SharedImageSet, with_data: bool = True) SharedImageSet
deformation(self: SharedImageSet, which: int = -1) Deformation
descriptor(self: SharedImageSet, which: int = -1) ImageDescriptor
elementwise_components(self: SharedImageSet, which: int = -1) DataComponentList
classmethod from_torch(tensor: Tensor, get_metadata_from: SharedImageSet | None = None) SharedImageSet

Create a SharedImageSet from a torch Tensor. If you want to copy metadata from an existing SharedImageSet you can pass it as the get_metadata_from argument. If you are using this, make sure that the size of the tensor’s batch dimension and the number of images in the SIS are equal. If get_metadata_from is provided, properties will be copied from the SIS and world_to_image_matrix, spacing and modality from the contained SharedImages.

Parameters:
  • cls – Instance of type i.e. SharedImageSet (this function is bound as a classmethod to SharedImageSet)

  • tensor (Tensor) – Instance of torch.Tensor

  • get_metadata_from (SharedImageSet | None) – Instance of SharedImageSet from which metadata should be copied.

Returns:

New instance of SharedImageSet

Return type:

SharedImageSet
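For example, a round trip through torch (assuming numpy and torch are available):

>>> sis = SharedImageSet(numpy.ones([1, 10, 10, 10, 1], dtype='uint8'))
>>> t = sis.torch()
>>> sis2 = SharedImageSet.from_torch(t, get_metadata_from=sis)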

get(self: SharedImageSet, which: int = -1) SharedImage
mask(self: SharedImageSet, which: int = -1) Mask
matrix(self: SharedImageSet, which: int = -1) ndarray[numpy.float64[4, 4]]
matrix_from_world(self: SharedImageSet, which: int) ndarray[numpy.float64[4, 4]]
matrix_to_world(self: SharedImageSet, which: int) ndarray[numpy.float64[4, 4]]
max(self: SharedImageSet) ndarray[numpy.float64[m, 1]]

Return the list of the maximum elements of images, channel-wise.

mean(self: SharedImageSet) ndarray[numpy.float64[m, 1]]

Return a list of the channel-wise averages of the image elements.

mem(self: SharedImageSet, which: int = -1) MemImage
min(self: SharedImageSet) ndarray[numpy.float64[m, 1]]

Return the list of the minimum elements of images, channel-wise.

norm(self: SharedImageSet, order: object = 2) ndarray[numpy.float64[m, 1]]

Returns the norm of an image instance, channel-wise.

Parameters:

order (int, float, 'inf') – Order of the norm. Default is L2 norm.

numpy()

Convenience method for reading a SharedImageSet as original values, with shift and scale already applied.

Parameters:

self – instance of a SharedImageSet

Returns:

numpy.ndarray

prod(self: SharedImageSet) ndarray[numpy.float64[m, 1]]

Return a list of the channel-wise products of the image elements.

remove(self: SharedImageSet, shared_image: SharedImage) None

Removes and deletes the SharedImage from the set.

selected_images(self: SharedImageSet, arg0: NonePolicy) list[SharedImage]
set_deformation(self: SharedImageSet, deformation: Deformation, which: int = -1) None
set_dirty_mem(self: SharedImageSet) None
set_mask(self: SharedImageSet, mask: Mask, which: int = -1) None
set_matrix(self: SharedImageSet, matrix: ndarray[numpy.float64[4, 4]], which: int = -1, update_all: bool = False) None
set_matrix_from_world(self: SharedImageSet, matrix: ndarray[numpy.float64[4, 4]], which: int, update_all: bool = False) None
set_matrix_to_world(self: SharedImageSet, matrix: ndarray[numpy.float64[4, 4]], which: int, update_all: bool = False) None
set_timestamp(self: SharedImageSet, time: float, which: int = -1) None
sum(self: SharedImageSet) ndarray[numpy.float64[m, 1]]

Return a list of the channel-wise sums of the image elements.

timestamp(self: SharedImageSet, which: int = -1) float
torch(device: device = None, dtype: dtype = None, same_as: Tensor = None) Tensor

Convert SharedImageSet or a SharedImage to a torch.Tensor.

Parameters:
  • self (DataElement | SharedImageSet | SharedImage) – Instance of SharedImageSet or SharedImage (this function is bound as a method to SharedImageSet and SharedImage)

  • device (device) – Target device for the new torch.Tensor

  • dtype (dtype) – Type of the new torch.Tensor

  • same_as (Tensor) – Template tensor whose device and dtype configuration should be matched. device and dtype are still applied afterwards.

Returns:

New torch.Tensor

Return type:

Tensor

property all_same_descriptor
property all_timestamped
property focus
property modality
property properties
property selection
property shape

Return a numpy compatible shape describing the dimensions of this image.

The returned tuple has 5 entries: #frames, slices, height, width, channels

property size
class imfusion.SignalConnection

Bases: pybind11_object

disconnect(self: SignalConnection) bool
property is_active
property is_blocked
property is_connected
class imfusion.SkippingMask(self: SkippingMask, shape: ndarray[numpy.int32[3, 1]], skip: ndarray[numpy.int32[3, 1]])

Bases: Mask

Basic mask where only every N-th pixel is considered inside.

property skip

Step size in pixels for the mask.
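For example, a mask over a 64x64 2D image that keeps every 2nd pixel in x and y:

>>> mask = SkippingMask([64, 64, 1], [2, 2, 1])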

class imfusion.SpacingMode(self: SpacingMode, value: int)

Bases: pybind11_object

Members:

EXACT

ADJUST

ADJUST = <SpacingMode.ADJUST: 1>
EXACT = <SpacingMode.EXACT: 0>
property name
property value
class imfusion.TrackedSharedImageSet(self: TrackedSharedImageSet)

Bases: SharedImageSet

add_tracking(self: TrackedSharedImageSet, tracking_sequence: TrackingSequence) None
clear_trackings(self: TrackedSharedImageSet) None
remove_tracking(self: TrackedSharedImageSet, num: int = -1) TrackingSequence
tracking(self: TrackedSharedImageSet, num: int = -1) TrackingSequence
property height
property num_tracking
property tracking_used
property trackings
property use_timestamps
property width
class imfusion.TrackerID(*args, **kwargs)

Bases: pybind11_object

Overloaded function.

  1. __init__(self: imfusion._bindings.TrackerID) -> None

  2. __init__(self: imfusion._bindings.TrackerID, id: str = '', model_number: str = '', name: str = '') -> None

empty(self: TrackerID) bool
from_string(self: str) TrackerID
to_model_name_string(self: TrackerID, arg0: bool) str
to_string(self: TrackerID, arg0: bool) str
property id
property model_number
property name
class imfusion.TrackingSequence(self: TrackingSequence, name: str = '')

Bases: Data

add(*args, **kwargs)

Overloaded function.

  1. add(self: imfusion._bindings.TrackingSequence, mat: numpy.ndarray[numpy.float64[4, 4]]) -> None

  2. add(self: imfusion._bindings.TrackingSequence, mat: numpy.ndarray[numpy.float64[4, 4]], timestamp: float) -> None

  3. add(self: imfusion._bindings.TrackingSequence, mat: numpy.ndarray[numpy.float64[4, 4]], timestamp: float, quality: float) -> None

  4. add(self: imfusion._bindings.TrackingSequence, mat: numpy.ndarray[numpy.float64[4, 4]], timestamp: float, quality: float, flags: int) -> None
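For example (assuming numpy is imported as np):

>>> seq = TrackingSequence('probe tracking')
>>> seq.add(np.eye(4))  # identity pose
>>> seq.add(np.eye(4), 0.04, 1.0)  # with timestamp and quality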

clear(self: TrackingSequence) None
flags(self: TrackingSequence, num: int = -1) int
matrix(*args, **kwargs)

Overloaded function.

  1. matrix(self: imfusion._bindings.TrackingSequence, num: int) -> numpy.ndarray[numpy.float64[4, 4]]

  2. matrix(self: imfusion._bindings.TrackingSequence, time: float) -> numpy.ndarray[numpy.float64[4, 4]]

quality(*args, **kwargs)

Overloaded function.

  1. quality(self: imfusion._bindings.TrackingSequence, num: int) -> float

  2. quality(self: imfusion._bindings.TrackingSequence, time: float, check_distance: bool = True, ignore_relative: bool = False) -> float

raw_matrix(self: TrackingSequence, num: int) ndarray[numpy.float64[4, 4]]
remove(self: TrackingSequence, pos: int, count: int = 1) None
set_raw_matrix(self: TrackingSequence, idx: int, value: ndarray[numpy.float64[4, 4]]) None
set_timestamp(self: TrackingSequence, idx: int, value: float) None
shift_timestamps(self: TrackingSequence, shift: float) None
timestamp(self: TrackingSequence, num: int = -1) float
property calibration
property center
property filename
property filter_mode
property filter_size
property has_timestamps
property instrument_id
property instrument_model
property instrument_name
property invert
property median_time_step
property registration
property relative_to_first
property relative_tracking
property size
property temporal_offset
property tracker_id
class imfusion.TransformationStashDataComponent(self: TransformationStashDataComponent)

Bases: DataComponentBase

property original
property transformations
class imfusion.View

Bases: pybind11_object

reset(self: View) None
property visible
class imfusion.VisualizerHandle

Bases: pybind11_object

The handle to a visualizer. It allows closing a specific visualizer when needed. Example:

>>> import imfusion
>>> data_path = "some/path"
>>> data = imfusion.load(data_path)
>>> visualizer_handle = imfusion.show(data, title="Title")
>>> print(visualizer_handle.title())
>>> import time
>>> time.sleep(2)
>>> visualizer_handle.close()
close(self: VisualizerHandle) None

Close the visualizer associated to this handle.

title(self: VisualizerHandle) str

Get the title of the visualizer associated to this handle.

class imfusion.VitalsDataComponent

Bases: DataComponentBase

DataComponent for storing a collection of time dependent vital signs like ECG, heart rate or pulse oximeter measurements.

class VitalsKind(self: VitalsKind, value: int)

Bases: pybind11_object

Members:

ECG

PULSE_OXIMETER

HEARTH_RATE

OTHER

ECG = <VitalsKind.ECG: 0>
HEARTH_RATE = <VitalsKind.HEARTH_RATE: 2>
OTHER = <VitalsKind.OTHER: 3>
PULSE_OXIMETER = <VitalsKind.PULSE_OXIMETER: 1>
property name
property value
class VitalsTimeSeries

Bases: pybind11_object

property signal
property timestamps
__getitem__(self: VitalsDataComponent, kind: VitalsKind) list[VitalsTimeSeries]
ECG = <VitalsKind.ECG: 0>
HEARTH_RATE = <VitalsKind.HEARTH_RATE: 2>
OTHER = <VitalsKind.OTHER: 3>
PULSE_OXIMETER = <VitalsKind.PULSE_OXIMETER: 1>
property kinds
imfusion.algorithmName(id: str) str

Returns the name of the algorithm with the given id.

imfusion.algorithm_properties(id: str, data: list) Properties

Returns the default properties of the given algorithm. This is useful to figure out what properties are supported by an algorithm.

imfusion.auto_window(image: SharedImageSet, change2d: bool = True, change3d: bool = True, lower_limit: float = 0.0, upper_limit: float = 0.0) None

Update window/level of input image to show the entire intensity range of the image.

Parameters:
  • image (SharedImageSet) – Image to change the windowing for.

  • change2d (bool) – Flag whether to update the DisplayOptions2d attached to the image.

  • change3d (bool) – Flag whether to update the DisplayOptions3d attached to the image.

  • lower_limit (double) – Ratio of lower values removed by the auto windowing.

  • upper_limit (double) – Ratio of upper values removed by the auto windowing.
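For example, clipping 1% of the lowest and highest intensities (image is a SharedImageSet):

>>> imfusion.auto_window(image, lower_limit=0.01, upper_limit=0.01)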

imfusion.available_algorithms(sub_string: str = '', case_sensitive: bool = False) list[str]

Return a list of all available algorithm ids.

Optionally, a substring can be given to filter the list (case-insensitive by default).
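For example:

>>> ids = imfusion.available_algorithms('synthetic')  # e.g. ['Create Synthetic Data', ...]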

imfusion.available_data_components() list[str]

Returns the Unique IDs of all DataComponents registered in DataComponentFactory.

imfusion.close_viewers() None

Close all the visualizers that were opened with show().

imfusion.create_algorithm(id: str, data: list = [], properties: Properties = None) object

Create the algorithm with the given id, but do not execute it.

The algorithm will only be created if it is compatible with the given data. The optional Properties object will be used to configure the algorithm.

Parameters:
  • id – String identifier of the Algorithm to create.

  • data – List of input data that the Algorithm expects.

  • properties – Configuration for the Algorithm in the form of a Properties instance.

Example

>>> create_algorithm("Create Synthetic Data", [])  
<imfusion._bindings.BaseAlgorithm object at ...>
imfusion.create_data_component(id: str, properties: Properties = None) object

Instantiates a DataComponent specified by the given ID.

Parameters:
  • id – Unique ID of the DataComponent to create.

  • properties – Optional Properties object. If not None, it will be used to configure the newly created DataComponent.
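
Example

A minimal sketch; which ids are available depends on the loaded plugins, so the first registered id is used purely for illustration:

>>> component_id = imfusion.available_data_components()[0]
>>> component = imfusion.create_data_component(component_id)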

imfusion.deinit() None

De-initializes the framework.

Deletes the main OpenGL context and unloads all plugins.

This should only be called at the end of the application. Automatically called when the module is unloaded.

Does nothing if the framework was not initialized yet.

imfusion.execute_algorithm(id: str, data: list = [], properties: Properties = None) list

Execute the algorithm with the given id and return its output.

The algorithm will only be executed if it is compatible with the given data. The optional Properties object will be used to configure the algorithm before executing it.
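
Example

Runs the same algorithm id as in the create_algorithm() example above and returns its output data:

>>> output = imfusion.execute_algorithm("Create Synthetic Data", [])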

imfusion.gpu_info() object

Return a string with information about the GPU to check whether hardware support for OpenGL is available.

imfusion.has_gl_context() bool
imfusion.info() FrameworkInfo

Provides general information about the framework.

imfusion.init(pluginFolders: list = [], initOpenGL: bool = True) None
imfusion.keep_data_alive(cls)
imfusion.list_viewers() list[VisualizerHandle]

Return a list of visualizer handles that were created with show(). Note that this list may include viewers that have already been closed by other means than VisualizerHandle.close() or close_viewers().

imfusion.load(path: str) list

Load the content of a file or folder as a list of Data.

The list can contain instances of any class deriving from Data, e.g. SharedImageSet, Mesh, PointCloud, etc.

Parameters:

path – Path to a file in a supported file format, or to a folder containing DICOM data (if the imfusion package was built with DICOM support).

Note

An IOError is raised if the file cannot be opened or a ValueError if the filetype is not supported. Some filetypes (like workspaces) cannot be opened by this function, but must be opened with imfusion.ApplicationController.open().

Example

>>> imfusion.load('ct_image.png')  
[imfusion.SharedImageSet(size: 1, [imfusion.SharedImage(USHORT width: 512 height: 512 spacing: 0.661813x0.661813x1 mm)])]  
>>> imfusion.load('multi_label_segmentation.nii.gz')  
[imfusion.SharedImageSet(size: 1, [imfusion.SharedImage(UBYTE width: 128 height: 128 slices: 128 channels: 3 spacing: 1x1x1 mm)])]  
>>> imfusion.load('ultrasound_sweep.imf')  
[imfusion.SharedImageSet(size: 159, [
        imfusion.SharedImage(UBYTE width: 457 height: 320 spacing: 0.4x0.4x1 mm),
        imfusion.SharedImage(UBYTE width: 457 height: 320 spacing: 0.4x0.4x1 mm),
        ...
        imfusion.SharedImage(UBYTE width: 457 height: 320 spacing: 0.4x0.4x1 mm)])]
>>> imfusion.load('path_to_folder_containing_multiple_dcm_datasets')  
[imfusion.SharedImageSet(size: 1, [imfusion.SharedImage(FLOAT width: 400 height: 400 slices: 300 spacing: 2.03642x2.03642x3 mm)])]  
imfusion.load_plugin(path: str) None

Load a single ImFusionLib plugin from the given file. WARNING: This might execute arbitrary code. Only use with trusted files!

imfusion.load_plugins(folder: str) None

Loads all ImFusionLib plugins from the given folder. WARNING: This might execute arbitrary code. Only use with trusted folders!

imfusion.log_debug(message: str) None
imfusion.log_error(message: str) None
imfusion.log_fatal(message: str) None
imfusion.log_info(message: str) None
imfusion.log_level() int

Returns the log level of the ImFusion SDK (Trace = 0, Debug = 1, Info = 2, Warning = 3, Error = 4, Fatal = 5, Quiet = 6).

imfusion.log_trace(message: str) None
imfusion.log_warn(message: str) None
imfusion.open(file: str) list

Open a file and load it as data.

Return a list of loaded datasets. An IOError is raised if the file cannot be opened or a ValueError if the filetype is not supported.

Some filetypes (like workspaces) cannot be opened by this function, but must be opened with imfusion.ApplicationController.open().

imfusion.open_in_suite(data: list[Data]) None

Starts the ImFusion Suite with the input data list. The ImFusionSuite executable must be in your PATH.

imfusion.py_doc_url() str
imfusion.register_algorithm(id, name, cls)

Register an Algorithm to the framework.

The Algorithm will be accessible through the given id. If the id is already used, the registration will fail.

cls must derive from Algorithm otherwise a TypeError is raised.

imfusion.save(*args, **kwargs)

Overloaded function.

  1. save(shared_image_set: imfusion._bindings.SharedImageSet, file_path: str, **kwargs) -> None

Save a SharedImageSet to the specified file path. The path extension is used to determine which file format to save to. Currently supported file formats are:

  • ImFusion File, extension imf

  • NIfTI File, extensions [nii, nii.gz]

Parameters:
  • shared_image_set – Instance of SharedImageSet.

  • file_path – Path to output file. The path extension is used to determine the file format.

  • **kwargs

Raises:

RuntimeError if file_path extension is not supported. Currently supported extensions are ['imf', 'nii', 'nii.gz'].

Example

>>> image_set = SharedImageSet(...)
>>> imfusion.save(image_set, 'path/to/imf/file.imf')  # saves an ImFusionFile
>>> imfusion.save(image_set, 'path/to/nifti/file.nii.gz', keep_ras_coordinates=True)  # saves a NIfTI file
  2. save(data: imfusion._bindings.Data, file_path: str) -> None

Save a Data instance to the specified file path as an ImFusion file.

Parameters:
  • data – Any instance of a class deriving from Data can be saved with this method; examples are SharedImageSet, Mesh and PointCloud.

  • file_path – Path to ImFusion file. The data is saved in a single file. File path must end with .imf.

Note

Raises a RuntimeError on failure or if file_path doesn’t end with .imf extension.

Example

>>> mesh = Mesh(...)
>>> imfusion.save(mesh, 'path/to/imf/file.imf')
  3. save(data_list: list[imfusion._bindings.Data], file_path: str) -> None

Save a list of data to the specified file path as an ImFusion file.

Parameters:
  • data_list – List of Data. Any class deriving from Data can be saved with this method. Examples of Data are SharedImageSet, Mesh, PointCloud, etc.

  • file_path – Path to ImFusion file. The entire list of Data is saved in a single file. File path must end with .imf.

Note

Raises a RuntimeError on failure or if file_path doesn’t end with .imf extension.

Example

>>> image_set = SharedImageSet(...)
>>> mesh = Mesh(...)
>>> point_cloud = PointCloud(...)
>>> another_image_set = SharedImageSet(...)
>>> imfusion.save([image_set, mesh, point_cloud, another_image_set], 'path/to/imf/file.imf')
  4. save(point_cloud: imfusion._bindings.PointCloud, file_path: str) -> None

Save an imfusion.PointCloud to the specified file path. The path extension is used to determine which file format to save to. Currently supported file formats are:

  • ImFusion File, extension imf

  • Point Cloud Data used inside Point Cloud Library (PCL), extension pcd

  • OBJ file format developed by Wavefront, extension obj

  • Polygon File Format or the Stanford Triangle Format, extension ply

Parameters:
  • point_cloud – Instance of imfusion.PointCloud.

  • file_path – Path to output file. The path extension is used to determine the file format.

Raises:

RuntimeError if file_path extension is not supported. Currently supported extensions are ['imf', 'pcd', 'obj', 'ply', 'txt', 'xyz'].

Example

>>> pc = PointCloud(...)
>>> imfusion.save(pc, 'path/to/imf/file.pcd')  # saves as a pcd file
  5. save(mesh: imfusion._bindings.Mesh, file_path: str) -> None

Save a imfusion.Mesh to the specified file path. The path extension is used to determine which file format to save to. Currently supported file formats are:

  • ImFusion File, extension imf

  • Polygon File Format or the Stanford Triangle Format, extension ply

  • STL file format used for 3D printing and computer-aided design (CAD), extension stl

  • Object File Format, extension off

  • OBJ file format developed by Wavefront, extension obj

  • Virtual Reality Modeling Language file format, extension wrl

  • Standard Starlink NDF (SUN/33) file format, extension surf

  • Raster GIS file format developed by Esri, extension grid

  • 3D Manufacturing Format, extension 3mf

Parameters:
  • mesh – Instance of imfusion.Mesh.

  • file_path – Path to output file. The path extension is used to determine the file format.

Raises:

RuntimeError if file_path extension is not supported. Currently supported extensions are ['imf', 'ply', 'stl', 'off', 'obj', 'wrl', 'surf', 'grid', '3mf'].

Example

>>> mesh = imfusion.Mesh(...)
>>> imfusion.save(mesh, 'path/to/imf/file.imf')  # saves an ImFusionFile
imfusion.set_log_level(level: int) None

Sets the log level of the ImFusion SDK (Trace = 0, Debug = 1, Info = 2, Warning = 3, Error = 4, Fatal = 5, Quiet = 6).

The initial log level is 3 (Warning), but can be set explicitly with the IMFUSION_LOG_LEVEL environment variable.

Note

After calling transfer_logging_to_python() this function has no effect.
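
Example

A minimal sketch that raises the threshold so only errors and fatal messages are logged:

>>> imfusion.set_log_level(4)  # Error
>>> imfusion.log_level()
4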

imfusion.show(*args, **kwargs)

Overloaded function.

  1. show(data: imfusion._bindings.Data, *, title: Optional[str] = None) -> imfusion._bindings.VisualizerHandle

Launch a visualizer displaying the input data (e.g. a SharedImageSet). A title can also optionally be assigned.

  2. show(data_list: list[imfusion._bindings.Data], *, title: Optional[str] = None) -> imfusion._bindings.VisualizerHandle

Launch a visualizer displaying the input list of data. A title can also optionally be assigned.

imfusion.transfer_logging_to_python() None

Transfers the control of logging from ImFusionLib to the “ImFusion” logger, which can be obtained through Python’s logging module with logging.getLogger("ImFusion").

After calling transfer_logging_to_python(), the logger can be configured exclusively through Python’s logging module interface, e.g. using logging.getLogger("ImFusion").setLevel. Note that any imfusion log messages emitted after calling this function but before the logging module is imported will not be captured.

Note

Please note that this redirection cannot be cancelled and that any subsequent calls to this function will have no effect.

Warning

Due to the GIL, log messages from internal threads won’t be forwarded to the logger.
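
Example

A minimal sketch that routes ImFusion log messages through Python's logging module and lowers the threshold to Info:

>>> import logging
>>> imfusion.transfer_logging_to_python()
>>> logging.getLogger("ImFusion").setLevel(logging.INFO)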

imfusion.try_import_imfusion_plugin(plugin: str) None
Parameters:

plugin (str) –

Return type:

None

imfusion.unregister_algorithm(name: str) None

Unregister a previously registered algorithm.

This only works for algorithms that were registered through the Python interface, not for built-in algorithms.

imfusion.wraps(wrapped, assigned=('__module__', '__name__', '__qualname__', '__doc__', '__annotations__'), updated=('__dict__',))

Decorator factory to apply update_wrapper() to a wrapper function

Returns a decorator that invokes update_wrapper() with the decorated function as the wrapper argument and the arguments to wraps() as the remaining arguments. Default arguments are as for update_wrapper(). This is a convenience function to simplify applying partial() to update_wrapper().

imfusion.dicom

Submodule containing DICOM-related functionality.

class imfusion.dicom.GeneralEquipmentModuleDataComponent(self: GeneralEquipmentModuleDataComponent)

Bases: DataComponentBase

property anatomical_orientation_type
property device_serial_number
property gantry_id
property institution_address
property institution_name
property institutional_departmentname
property manufacturer
property manufacturers_model_name
property software_versions
property spatial_resolution
property station_name
class imfusion.dicom.RTStructureDataComponent(self: RTStructureDataComponent)

Bases: DataComponentBase

DataComponent for PointClouds loaded from a DICOM RTStructureSet.

Provides information about the original structure/grouping of the points. See RTStructureIoAlgorithm for details about how RTStructureSets are loaded.

Warning

Since this component uses fixed indices into the PointCloud’s points structure, it can only be used if the PointCloud remains unchanged!

class Contour

Bases: pybind11_object

Represents a single item in the original ‘Contour Sequence’ (3006,0040).

property length
property start_index
property type
class GeometryType(self: GeometryType, value: int)

Bases: pybind11_object

Defines how the points of a contour should be interpreted.

Members:

POINT

OPEN_PLANAR

CLOSED_PLANAR

OPEN_NONPLANAR

CLOSED_PLANAR = <GeometryType.CLOSED_PLANAR: 2>
OPEN_NONPLANAR = <GeometryType.OPEN_NONPLANAR: 3>
OPEN_PLANAR = <GeometryType.OPEN_PLANAR: 1>
POINT = <GeometryType.POINT: 0>
property name
property value
class ROIGenerationAlgorithm(self: ROIGenerationAlgorithm, value: int)

Bases: pybind11_object

Defines how the RT structure was generated

Members:

UNKNOWN

AUTOMATIC

SEMI_AUTOMATIC

MANUAL

AUTOMATIC = <ROIGenerationAlgorithm.AUTOMATIC: 1>
MANUAL = <ROIGenerationAlgorithm.MANUAL: 3>
SEMI_AUTOMATIC = <ROIGenerationAlgorithm.SEMI_AUTOMATIC: 2>
UNKNOWN = <ROIGenerationAlgorithm.UNKNOWN: 0>
property name
property value
property color
property contours
property generation_algorithm
property referenced_frame_of_reference_UID
class imfusion.dicom.ReferencedInstancesComponent(self: ReferencedInstancesComponent)

Bases: DataComponentBase

DataComponent to store DICOM instances that are referenced by the dataset.

A DICOM dataset can reference a number of other DICOM datasets that are somehow related. The references in this component are determined by the ReferencedSeriesSequence.

is_referencing(*args, **kwargs)

Overloaded function.

  1. is_referencing(self: imfusion.dicom.ReferencedInstancesComponent, arg0: imfusion.dicom.SourceInfoComponent) -> bool

    Returns true if the instances of the given SourceInfoComponent are referenced by this component.

    The instances and references only need to intersect for this to return true. This way, e.g. a segmentation would be considered to reference a CT even if it only overlaps in a few slices.

  2. is_referencing(self: imfusion.dicom.ReferencedInstancesComponent, arg0: imfusion._bindings.SharedImageSet) -> bool

    Convenience method that calls the above overload with the SourceInfoComponent of the given SharedImageSet.

    Only returns true if all elementwise SourceInfoComponents are referenced.

class imfusion.dicom.SourceInfoComponent(self: SourceInfoComponent)

Bases: DataComponentBase

property sop_class_uids
property sop_instance_uids
property source_uris
imfusion.dicom.load_file(file_path: str) list

Load a single file as DICOM.

Depending on the SOPClassUID of the DICOM file, this can result in:

  • For regular images, usually only one result is generated. If not, this usually indicates that the file could not be entirely reconstructed as a volume (e.g. the spacing between slices is not uniform).

  • For segmentations, multiple labelmaps will be returned if labels overlap (i.e. one pixel has at least 2 labels).

  • For RT Structure Sets, one PointCloud is returned per structure.

imfusion.dicom.load_folder(folder_path: str, recursive: bool = True, ignore_non_dicom: bool = True) list

Load all DICOM files from a folder.

Generally, this produces one dataset per DICOM series; however, this might not always be the case. Check ImageInfoDataComponent for the actual series UID.

See imfusion.dicom.load_file() for a list of datasets that can be generated.

Either a path to a local folder or a URL is accepted. URLs support the file:// and pacs:// schemes. To load a series from PACS, use a URL with the following format: pacs://<hostname>:<port>/<PACS AE title>?series=<series instance uid>&study=<study instance uid>. To receive DICOMs from the PACS, a temporary server will be started on the port defined by imfusion.dicom.set_pacs_client_config().

Parameters:
  • folder_path (str) – A path to a folder or a URL.

  • recursive (bool) – Whether subfolders should be scanned recursively for all DICOM files.

  • ignore_non_dicom (bool) – Whether files without a valid DICOM header should be ignored. This is usually faster and produces fewer warnings/errors. Technically the DICOM header is optional and might be missing, but this is very rare.
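
Example

A minimal sketch; the folder path and the PACS URL below are purely hypothetical placeholders:

>>> data = imfusion.dicom.load_folder('/path/to/dicom/folder')
>>> series = imfusion.dicom.load_folder('pacs://pacs.example.org:11112/ARCHIVE?series=1.2.3.4&study=1.2.3')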

imfusion.dicom.rtstruct_to_labelmap(rtstruct_set: list[PointCloud], referenced_image: SharedImageSet, combine_label_maps: bool = False) list[SharedImageSet]

Algorithm to convert a PointCloud with an RTStructureDataComponent to a labelmap.

This is currently only supported for CLOSED_PLANAR contours in RTStructureDataComponent. The algorithm requires a reference volume that determines the size of the labelmap. Each contour is expected to be planar on a slice in the reference volume. This algorithm works best when using the volume that is referenced by the original DICOM RTStructureSet (see imfusion.dicom.RTStructureDataComponent.referenced_frame_of_reference_UID).

Returns one labelmap per input RT Structure.
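
Example

A minimal sketch, assuming rtstruct.dcm references the CT volume loaded from ct_folder (hypothetical paths):

>>> structures = imfusion.dicom.load_file('rtstruct.dcm')
>>> ct = imfusion.dicom.load_folder('ct_folder')[0]
>>> labelmaps = imfusion.dicom.rtstruct_to_labelmap(structures, ct)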

imfusion.dicom.save_file(image: SharedImageSet, file_path: str, referenced_image: SharedImageSet = None) None

Save an image as a single DICOM file.

The SOP Class that is used for the export is determined based on the modality of the image. For example, CT images will be exported as ‘Enhanced CT Image Storage’ and LABEL images as ‘Segmentation Storage’.

When exporting volumes, note that older software might not be able to load them. Use imfusion.dicom.save_folder() instead.

Optionally, the generated DICOMs can also reference another DICOM image, which is passed with the referenced_image argument. This referenced_image must have been loaded from DICOM and/or contain an elementwise SourceInfoComponent and an ImageInfoDataComponent containing a valid series instance UID. With such a reference, other software can determine whether different DICOMs are related. This is especially important when exporting segmentations with modality LABEL. The exported segmentations must reference the data that was used to generate the segmentation. If this reference is missing, the exported segmentations cannot be loaded by some software.

When exporting segmentations, only the slices containing non-zero labels will be exported. After re-importing the file, it therefore might have a different number of slices.

For saving RT Structures, see imfusion.dicom.save_rtstruct().

Parameters:
  • image (SharedImageSet) – The image to export

  • file_path (str) – File to write the resulting DICOM to. Existing files will be overwritten!

  • referenced_image (SharedImageSet) – An optional image that the exported image should reference.

Warning

At the moment, only exporting single frame CT and MR volumes is well supported. Since DICOM is an extensive standard, any other kind of image might lead to a non-standard or invalid DICOM.
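
Example

A minimal sketch, assuming ct is a CT volume and labelmap a segmentation with modality LABEL derived from it (hypothetical variables and paths):

>>> imfusion.dicom.save_file(ct, 'ct.dcm')
>>> imfusion.dicom.save_file(labelmap, 'seg.dcm', referenced_image=ct)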

imfusion.dicom.save_folder(image: SharedImageSet, folder_path: str, referenced_image: SharedImageSet = None) None

Save an image as a DICOM folder containing potentially multiple files.

The SOP Class that is used for the export is determined based on the modality of the image. For example, CT images will be exported as ‘CT Image Storage’.

Works like imfusion.dicom.save_file() except for using different SOP Class UIDs.

imfusion.dicom.save_rtstruct(*args, **kwargs)

Overloaded function.

  1. save_rtstruct(labelmap: imfusion._bindings.SharedImageSet, referenced_image: imfusion._bindings.SharedImageSet, file_path: str) -> None

    Save a labelmap as an RT Structure Set.

    The contours of a label inside the labelmap will be used as a contour in the RT Structure. Each slice of the labelmap generates separate contours (RT Structure does not support 3D contours).

  2. save_rtstruct(rtstruct_set: list[imfusion._bindings.PointCloud], referenced_image: imfusion._bindings.SharedImageSet, file_path: str) -> None

    Save a list of PointCloud as an RT Structure Set.

    Each PointCloud must provide a RTStructureDataComponent.
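
Example

A minimal sketch using the labelmap overload (hypothetical variables and path):

>>> imfusion.dicom.save_rtstruct(labelmap, ct, 'rtstruct.dcm')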

imfusion.dicom.set_pacs_client_config(ae_title: str, port: int) None

Set the client configuration when connecting to a PACS.

To receive DICOMs from a PACS server, the AE title and port need to be registered with the PACS as well (vendor-specific and not done by this function!).

Warning

The values will be persisted on the system and will be restored when the application is restarted.

imfusion.imagemath

imfusion.imagemath - Bindings for ImageMath Operations

This module provides element-wise arithmetic operations for SharedImage and SharedImageSet. These operations can be applied directly to SharedImage and SharedImageSet objects with eager evaluation. Alternatively, the module offers lazy evaluation through the submodule lazy, where wrapper expressions are created using the Expression class.

See Expression for details.

Example for eager evaluation:

>>> from imfusion import imagemath

Add si1 and si2, which are SharedImage instances:

>>> res = si1 + si2

res is a SharedImage instance.

>>> print(res)
imfusion.SharedImage(FLOAT width: 512 height: 512)

Example for lazy evaluation:

>>> from imfusion import imagemath

Create expressions from SharedImage instances:

>>> expr1 = imagemath.lazy.Expression(si1)
>>> expr2 = imagemath.lazy.Expression(si2)

Add expr1 and expr2:

>>> expr3 = expr1 + expr2

Alternatively, you could add expr1 and si2, or si1 and expr2. Any expression containing an instance of Expression will be converted to a lazy evaluation expression.

>>> expr3 = expr1 + si2

Evaluate the expression to obtain the result:

>>> res = expr3.evaluate()

res is a SharedImage instance, just like in the eager evaluation case.

>>> print(res)
imfusion.SharedImage(FLOAT width: 512 height: 512)
imfusion.imagemath.absolute(*args, **kwargs)

Overloaded function.

  1. absolute(x: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Absolute value, element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  1. absolute(x: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Absolute value, element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

imfusion.imagemath.add(*args, **kwargs)

Overloaded function.

  1. add(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Addition, element-wise.

Parameters:
  1. add(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Addition, element-wise.

Parameters:
  1. add(x1: imfusion._bindings.SharedImage, x2: float) -> imfusion._bindings.SharedImage

Addition, element-wise.

Parameters:
  1. add(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImageSet

Addition, element-wise.

Parameters:
  1. add(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Addition, element-wise.

Parameters:
  1. add(x1: imfusion._bindings.SharedImageSet, x2: float) -> imfusion._bindings.SharedImageSet

Addition, element-wise.

Parameters:
  1. add(x1: float, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Addition, element-wise.

Parameters:
  1. add(x1: float, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Addition, element-wise.

Parameters:
imfusion.imagemath.arctan2(*args, **kwargs)

Overloaded function.

  1. arctan2(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Trigonometric inverse tangent, element-wise.

Parameters:
  1. arctan2(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Trigonometric inverse tangent, element-wise.

Parameters:
  1. arctan2(x1: imfusion._bindings.SharedImage, x2: float) -> imfusion._bindings.SharedImage

Trigonometric inverse tangent, element-wise.

Parameters:
  1. arctan2(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImageSet

Trigonometric inverse tangent, element-wise.

Parameters:
  1. arctan2(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Trigonometric inverse tangent, element-wise.

Parameters:
  1. arctan2(x1: imfusion._bindings.SharedImageSet, x2: float) -> imfusion._bindings.SharedImageSet

Trigonometric inverse tangent, element-wise.

Parameters:
  1. arctan2(x1: float, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Trigonometric inverse tangent, element-wise.

Parameters:
  1. arctan2(x1: float, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Trigonometric inverse tangent, element-wise.

Parameters:
imfusion.imagemath.argmax(*args, **kwargs)

Overloaded function.

  1. argmax(x: imfusion._bindings.SharedImage) -> list[numpy.ndarray[numpy.int32[4, 1]]]

Return a list of the indices of maximum values, channel-wise. The indices are represented as (x, y, z, image index).

Parameters:

x (SharedImage) – SharedImage instance.

  1. argmax(x: imfusion._bindings.SharedImageSet) -> list[numpy.ndarray[numpy.int32[4, 1]]]

Return a list of the indices of maximum values, channel-wise. The indices are represented as (x, y, z, image index).

Parameters:

x (SharedImageSet) – SharedImageSet instance.
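
Example

A minimal sketch, assuming sis is a SharedImageSet (hypothetical variable); the result contains one index vector per channel:

>>> from imfusion import imagemath
>>> indices = imagemath.argmax(sis)
>>> indices[0]  # (x, y, z, image index) of the maximum in channel 0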

imfusion.imagemath.argmin(*args, **kwargs)

Overloaded function.

  1. argmin(x: imfusion._bindings.SharedImage) -> list[numpy.ndarray[numpy.int32[4, 1]]]

Return a list of the indices of minimum values, channel-wise. The indices are represented as (x, y, z, image index).

Parameters:

x (SharedImage) – SharedImage instance.

  1. argmin(x: imfusion._bindings.SharedImageSet) -> list[numpy.ndarray[numpy.int32[4, 1]]]

Return a list of the indices of minimum values, channel-wise. The indices are represented as (x, y, z, image index).

Parameters:

x (SharedImageSet) – SharedImageSet instance.

imfusion.imagemath.channel_swizzle(*args, **kwargs)

Overloaded function.

  1. channel_swizzle(x: imfusion._bindings.SharedImage, indices: list[int]) -> imfusion._bindings.SharedImage

Reorders the channels of an image based on the input indices, e.g. indices[0] will correspond to the first channel of the output image.

Parameters:
  1. channel_swizzle(x: imfusion._bindings.SharedImageSet, indices: list[int]) -> imfusion._bindings.SharedImageSet

Reorders the channels of an image based on the input indices, e.g. indices[0] will correspond to the first channel of the output image.

Parameters:
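
Example

A minimal sketch, assuming rgb is a 3-channel SharedImage (hypothetical variable); reversing the indices turns RGB into BGR:

>>> from imfusion import imagemath
>>> bgr = imagemath.channel_swizzle(rgb, [2, 1, 0])
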
imfusion.imagemath.cos(*args, **kwargs)

Overloaded function.

  1. cos(x: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Cosine, element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  1. cos(x: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Cosine, element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

imfusion.imagemath.divide(*args, **kwargs)

Overloaded function.

  1. divide(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Division, element-wise.

Parameters:
  1. divide(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Division, element-wise.

Parameters:
  1. divide(x1: imfusion._bindings.SharedImage, x2: float) -> imfusion._bindings.SharedImage

Division, element-wise.

Parameters:
  1. divide(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImageSet

Division, element-wise.

Parameters:
  1. divide(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Division, element-wise.

Parameters:
  1. divide(x1: imfusion._bindings.SharedImageSet, x2: float) -> imfusion._bindings.SharedImageSet

Division, element-wise.

Parameters:
  1. divide(x1: float, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Division, element-wise.

Parameters:
  1. divide(x1: float, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Division, element-wise.

Parameters:
imfusion.imagemath.equal(*args, **kwargs)

Overloaded function.

  1. equal(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Return the truth value of (x1 == x2), element-wise.

Parameters:
  1. equal(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 == x2), element-wise.

Parameters:
  1. equal(x1: imfusion._bindings.SharedImage, x2: float) -> imfusion._bindings.SharedImage

Return the truth value of (x1 == x2), element-wise.

Parameters:
  1. equal(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 == x2), element-wise.

Parameters:
  1. equal(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 == x2), element-wise.

Parameters:
  1. equal(x1: imfusion._bindings.SharedImageSet, x2: float) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 == x2), element-wise.

Parameters:
  1. equal(x1: float, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Return the truth value of (x1 == x2), element-wise.

Parameters:
  1. equal(x1: float, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 == x2), element-wise.

Parameters:
imfusion.imagemath.exp(*args, **kwargs)

Overloaded function.

  1. exp(x: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Exponential operation, element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  1. exp(x: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Exponential operation, element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

imfusion.imagemath.greater(*args, **kwargs)

Overloaded function.

  1. greater(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Return the truth value of (x1 > x2), element-wise.

Parameters:
  1. greater(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 > x2), element-wise.

Parameters:
  1. greater(x1: imfusion._bindings.SharedImage, x2: float) -> imfusion._bindings.SharedImage

Return the truth value of (x1 > x2), element-wise.

Parameters:
  1. greater(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 > x2), element-wise.

Parameters:
  1. greater(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 > x2), element-wise.

Parameters:
  1. greater(x1: imfusion._bindings.SharedImageSet, x2: float) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 > x2), element-wise.

Parameters:
  1. greater(x1: float, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Return the truth value of (x1 > x2), element-wise.

Parameters:
  1. greater(x1: float, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 > x2), element-wise.

Parameters:
imfusion.imagemath.greater_equal(*args, **kwargs)

Overloaded function.

  1. greater_equal(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Return the truth value of (x1 >= x2), element-wise.

Parameters:
  1. greater_equal(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 >= x2), element-wise.

Parameters:
  1. greater_equal(x1: imfusion._bindings.SharedImage, x2: float) -> imfusion._bindings.SharedImage

Return the truth value of (x1 >= x2), element-wise.

Parameters:
  1. greater_equal(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 >= x2), element-wise.

Parameters:
  1. greater_equal(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 >= x2), element-wise.

Parameters:
  1. greater_equal(x1: imfusion._bindings.SharedImageSet, x2: float) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 >= x2), element-wise.

Parameters:
  1. greater_equal(x1: float, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Return the truth value of (x1 >= x2), element-wise.

Parameters:
  1. greater_equal(x1: float, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 >= x2), element-wise.

Parameters:
imfusion.imagemath.less(*args, **kwargs)

Overloaded function.

  1. less(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Return the truth value of (x1 < x2), element-wise.

Parameters:
  1. less(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 < x2), element-wise.

Parameters:
  1. less(x1: imfusion._bindings.SharedImage, x2: float) -> imfusion._bindings.SharedImage

Return the truth value of (x1 < x2), element-wise.

Parameters:
  1. less(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 < x2), element-wise.

Parameters:
  1. less(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 < x2), element-wise.

Parameters:
  1. less(x1: imfusion._bindings.SharedImageSet, x2: float) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 < x2), element-wise.

Parameters:
  1. less(x1: float, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Return the truth value of (x1 < x2), element-wise.

Parameters:
  1. less(x1: float, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 < x2), element-wise.

Parameters:
imfusion.imagemath.less_equal(*args, **kwargs)

Overloaded function.

  1. less_equal(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Return the truth value of (x1 <= x2), element-wise.

Parameters:
  1. less_equal(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 <= x2), element-wise.

Parameters:
  1. less_equal(x1: imfusion._bindings.SharedImage, x2: float) -> imfusion._bindings.SharedImage

Return the truth value of (x1 <= x2), element-wise.

Parameters:
  1. less_equal(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 <= x2), element-wise.

Parameters:
  1. less_equal(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 <= x2), element-wise.

Parameters:
  1. less_equal(x1: imfusion._bindings.SharedImageSet, x2: float) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 <= x2), element-wise.

Parameters:
  1. less_equal(x1: float, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Return the truth value of (x1 <= x2), element-wise.

Parameters:
  1. less_equal(x1: float, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 <= x2), element-wise.

Parameters:
imfusion.imagemath.log(*args, **kwargs)

Overloaded function.

  1. log(x: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Natural logarithm, element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  1. log(x: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Natural logarithm, element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

imfusion.imagemath.max(*args, **kwargs)

Overloaded function.

  1. max(x: imfusion._bindings.SharedImage) -> numpy.ndarray[numpy.float64[m, 1]]

Return the list of the maximum elements of images, channel-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  1. max(x: imfusion._bindings.SharedImageSet) -> numpy.ndarray[numpy.float64[m, 1]]

Return the list of the maximum elements of images, channel-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

imfusion.imagemath.maximum(*args, **kwargs)

Overloaded function.

  1. maximum(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Return element-wise maximum of arguments.

Parameters:
  1. maximum(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return element-wise maximum of arguments.

Parameters:
  1. maximum(x1: imfusion._bindings.SharedImage, x2: float) -> imfusion._bindings.SharedImage

Return element-wise maximum of arguments.

Parameters:
  1. maximum(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImageSet

Return element-wise maximum of arguments.

Parameters:
  1. maximum(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return element-wise maximum of arguments.

Parameters:
  1. maximum(x1: imfusion._bindings.SharedImageSet, x2: float) -> imfusion._bindings.SharedImageSet

Return element-wise maximum of arguments.

Parameters:
  1. maximum(x1: float, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Return element-wise maximum of arguments.

Parameters:
  1. maximum(x1: float, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return element-wise maximum of arguments.

Parameters:
imfusion.imagemath.mean(*args, **kwargs)

Overloaded function.

  1. mean(x: imfusion._bindings.SharedImage) -> numpy.ndarray[numpy.float64[m, 1]]

Return a list of the channel-wise averages of image elements.

Parameters:

x (SharedImage) – SharedImage instance.

  1. mean(x: imfusion._bindings.SharedImageSet) -> numpy.ndarray[numpy.float64[m, 1]]

Return a list of the channel-wise averages of image elements.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

imfusion.imagemath.min(*args, **kwargs)

Overloaded function.

  1. min(x: imfusion._bindings.SharedImage) -> numpy.ndarray[numpy.float64[m, 1]]

Return the list of the minimum elements of images, channel-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  1. min(x: imfusion._bindings.SharedImageSet) -> numpy.ndarray[numpy.float64[m, 1]]

Return the list of the minimum elements of images, channel-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

imfusion.imagemath.minimum(*args, **kwargs)

Overloaded function.

  1. minimum(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Return element-wise minimum of arguments.

Parameters:
  1. minimum(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return element-wise minimum of arguments.

Parameters:
  1. minimum(x1: imfusion._bindings.SharedImage, x2: float) -> imfusion._bindings.SharedImage

Return element-wise minimum of arguments.

Parameters:
  1. minimum(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImageSet

Return element-wise minimum of arguments.

Parameters:
  1. minimum(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return element-wise minimum of arguments.

Parameters:
  1. minimum(x1: imfusion._bindings.SharedImageSet, x2: float) -> imfusion._bindings.SharedImageSet

Return element-wise minimum of arguments.

Parameters:
  1. minimum(x1: float, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Return element-wise minimum of arguments.

Parameters:
  1. minimum(x1: float, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return element-wise minimum of arguments.

Parameters:
imfusion.imagemath.multiply(*args, **kwargs)

Overloaded function.

  1. multiply(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Multiplication, element-wise.

Parameters:
  1. multiply(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Multiplication, element-wise.

Parameters:
  1. multiply(x1: imfusion._bindings.SharedImage, x2: float) -> imfusion._bindings.SharedImage

Multiplication, element-wise.

Parameters:
  1. multiply(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImageSet

Multiplication, element-wise.

Parameters:
  1. multiply(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Multiplication, element-wise.

Parameters:
  1. multiply(x1: imfusion._bindings.SharedImageSet, x2: float) -> imfusion._bindings.SharedImageSet

Multiplication, element-wise.

Parameters:
  1. multiply(x1: float, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Multiplication, element-wise.

Parameters:
  1. multiply(x1: float, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Multiplication, element-wise.

Parameters:
imfusion.imagemath.negative(*args, **kwargs)

Overloaded function.

  1. negative(x: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Numerical negative, element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  1. negative(x: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Numerical negative, element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

imfusion.imagemath.norm(*args, **kwargs)

Overloaded function.

  1. norm(x: imfusion._bindings.SharedImage, order: object = 2) -> numpy.ndarray[numpy.float64[m, 1]]

Returns the norm of an image instance, channel-wise.

Parameters:
  1. norm(x: imfusion._bindings.SharedImageSet, order: object = 2) -> numpy.ndarray[numpy.float64[m, 1]]

Returns the norm of an image instance, channel-wise.

Parameters:
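
Example

A minimal sketch, assuming si is a SharedImage (hypothetical variable); the order argument selects which norm is computed per channel:

>>> from imfusion import imagemath
>>> l2 = imagemath.norm(si)  # Euclidean norm (order=2)
>>> l1 = imagemath.norm(si, order=1)
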
imfusion.imagemath.not_equal(*args, **kwargs)

Overloaded function.

  1. not_equal(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Return the truth value of (x1 != x2), element-wise.

Parameters:
  1. not_equal(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 != x2), element-wise.

Parameters:
  1. not_equal(x1: imfusion._bindings.SharedImage, x2: float) -> imfusion._bindings.SharedImage

Return the truth value of (x1 != x2), element-wise.

Parameters:
  1. not_equal(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 != x2), element-wise.

Parameters:
  1. not_equal(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 != x2), element-wise.

Parameters:
  1. not_equal(x1: imfusion._bindings.SharedImageSet, x2: float) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 != x2), element-wise.

Parameters:
  1. not_equal(x1: float, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Return the truth value of (x1 != x2), element-wise.

Parameters:
  1. not_equal(x1: float, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Return the truth value of (x1 != x2), element-wise.

Parameters:
imfusion.imagemath.power(*args, **kwargs)

Overloaded function.

  1. power(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

The first argument is raised to powers of the second argument, element-wise.

Parameters:
  1. power(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

The first argument is raised to powers of the second argument, element-wise.

Parameters:
  1. power(x1: imfusion._bindings.SharedImage, x2: float) -> imfusion._bindings.SharedImage

The first argument is raised to powers of the second argument, element-wise.

Parameters:
  1. power(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImageSet

The first argument is raised to powers of the second argument, element-wise.

Parameters:
  1. power(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

The first argument is raised to powers of the second argument, element-wise.

Parameters:
  1. power(x1: imfusion._bindings.SharedImageSet, x2: float) -> imfusion._bindings.SharedImageSet

The first argument is raised to powers of the second argument, element-wise.

Parameters:
  1. power(x1: float, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

The first argument is raised to powers of the second argument, element-wise.

Parameters:
  1. power(x1: float, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

The first argument is raised to powers of the second argument, element-wise.

Parameters:
imfusion.imagemath.prod(*args, **kwargs)

Overloaded function.

  1. prod(x: imfusion._bindings.SharedImage) -> numpy.ndarray[numpy.float64[m, 1]]

Return a list of the channel-wise products of image elements.

Parameters:

x (SharedImage) – SharedImage instance.

  1. prod(x: imfusion._bindings.SharedImageSet) -> numpy.ndarray[numpy.float64[m, 1]]

Return a list of the channel-wise products of image elements.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

imfusion.imagemath.sign(*args, **kwargs)

Overloaded function.

  1. sign(x: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Element-wise indication of the sign of image elements.

Parameters:

x (SharedImage) – SharedImage instance.

  1. sign(x: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Element-wise indication of the sign of image elements.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

imfusion.imagemath.sin(*args, **kwargs)

Overloaded function.

  1. sin(x: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Sine, element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  1. sin(x: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Sine, element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

imfusion.imagemath.sqrt(*args, **kwargs)

Overloaded function.

  1. sqrt(x: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Square-root operation, element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  1. sqrt(x: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Square-root operation, element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

imfusion.imagemath.square(*args, **kwargs)

Overloaded function.

  1. square(x: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Square operation, element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  1. square(x: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Square operation, element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

imfusion.imagemath.subtract(*args, **kwargs)

Overloaded function.

  1. subtract(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Subtraction, element-wise.

Parameters:
  1. subtract(x1: imfusion._bindings.SharedImage, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Subtraction, element-wise.

Parameters:
  1. subtract(x1: imfusion._bindings.SharedImage, x2: float) -> imfusion._bindings.SharedImage

Subtraction, element-wise.

Parameters:
  1. subtract(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImageSet

Subtraction, element-wise.

Parameters:
  1. subtract(x1: imfusion._bindings.SharedImageSet, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Subtraction, element-wise.

Parameters:
  1. subtract(x1: imfusion._bindings.SharedImageSet, x2: float) -> imfusion._bindings.SharedImageSet

Subtraction, element-wise.

Parameters:
  1. subtract(x1: float, x2: imfusion._bindings.SharedImage) -> imfusion._bindings.SharedImage

Subtraction, element-wise.

Parameters:
  1. subtract(x1: float, x2: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Subtraction, element-wise.

Parameters:
imfusion.imagemath.sum(*args, **kwargs)

Overloaded function.

  1. sum(x: imfusion._bindings.SharedImage) -> numpy.ndarray[numpy.float64[m, 1]]

Return a list of the channel-wise sums of image elements.

Parameters:

x (SharedImage) – SharedImage instance.

  1. sum(x: imfusion._bindings.SharedImageSet) -> numpy.ndarray[numpy.float64[m, 1]]

Return a list of the channel-wise sums of image elements.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

imfusion.imagemath.lazy

Lazy evaluation (imagemath.lazy)

class imfusion.imagemath.lazy.Expression(*args, **kwargs)

Bases: pybind11_object

Expressions to be used for lazy evaluation.

This class serves as a wrapper for SharedImage, SharedImageSet, and scalar values to be used for lazy evaluation. The lazy evaluation approach delays the actual evaluation until the result is needed. If you prefer the eager evaluation approach, you can directly invoke operations on SharedImage and SharedImageSet objects.

Here is an example of how to use the lazy evaluation approach:

>>> from imfusion import imagemath

Create expressions from SharedImage instances:

>>> expr1 = imagemath.lazy.Expression(si1)
>>> expr2 = imagemath.lazy.Expression(si2)

Any operation on expressions returns another expression. The operations are recorded in an expression tree and are not evaluated yet.

>>> expr3 = expr1 + expr2

Expressions must be explicitly evaluated to get results. Use the evaluate() method for this purpose:

>>> res = expr3.evaluate()

Here, res is a SharedImage instance:

>>> print(res)
imfusion.SharedImage(FLOAT width: 512 height: 512)

Overloaded function.

  1. __init__(self: imfusion.imagemath.lazy.Expression, shared_image_set: imfusion._bindings.SharedImageSet) -> None

Creates an expression wrapping SharedImageSet instance.

Parameters:

shared_image_set (SharedImageSet) – SharedImageSet instance to be wrapped by Expression.

  1. __init__(self: imfusion.imagemath.lazy.Expression, shared_image: imfusion._bindings.SharedImage) -> None

Creates an expression wrapping SharedImage instance.

Parameters:

shared_image (SharedImage) – SharedImage instance to be wrapped by Expression.

  1. __init__(self: imfusion.imagemath.lazy.Expression, value: float) -> None

Creates an expression wrapping a scalar value.

Parameters:

value (float) – Scalar value to be wrapped by Expression.

  1. __init__(self: imfusion.imagemath.lazy.Expression, channel: int) -> None

Creates an expression wrapping a variable, e.g. the result of another computation that is not yet available when the expression is created. Currently, only one variable per expression is allowed.

Parameters:

channel (int) – The channel of the variable wrapped by Expression.

__abs__(self: Expression) Expression

Expression for absolute value, element-wise.

__add__(*args, **kwargs)

Overloaded function.

  1. __add__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Addition, element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

  1. __add__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Addition, element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  1. __add__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Addition, element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

  1. __add__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression

Addition, element-wise.

Parameters:

x (float) – scalar value.

__eq__(*args, **kwargs)

Overloaded function.

  1. __eq__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 == x2), element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

  1. __eq__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 == x2), element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  1. __eq__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 == x2), element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

  1. __eq__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 == x2), element-wise.

Parameters:

x (float) – scalar value.

__ge__(*args, **kwargs)

Overloaded function.

  1. __ge__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 >= x2), element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

  1. __ge__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 >= x2), element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  1. __ge__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 >= x2), element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

  1. __ge__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 >= x2), element-wise.

Parameters:

x (float) – scalar value.

__getitem__(self: Expression, index: int) Expression

This method only works with SharedImageSet Expression instances. Returns a SharedImage Expression from a SharedImageSet Expression.

Parameters:

index (int) – The index of SharedImage Expression.

__gt__(*args, **kwargs)

Overloaded function.

  1. __gt__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 > x2), element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

  1. __gt__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 > x2), element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  3. __gt__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 > x2), element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

  4. __gt__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 > x2), element-wise.

Parameters:

x (float) – scalar value.

__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: imfusion.imagemath.lazy.Expression, shared_image_set: imfusion._bindings.SharedImageSet) -> None

Creates an expression wrapping SharedImageSet instance.

Parameters:

shared_image_set (SharedImageSet) – SharedImageSet instance to be wrapped by Expression.

  2. __init__(self: imfusion.imagemath.lazy.Expression, shared_image: imfusion._bindings.SharedImage) -> None

Creates an expression wrapping SharedImage instance.

Parameters:

shared_image (SharedImage) – SharedImage instance to be wrapped by Expression.

  3. __init__(self: imfusion.imagemath.lazy.Expression, value: float) -> None

Creates an expression wrapping a scalar value.

Parameters:

value (float) – Scalar value to be wrapped by Expression.

  4. __init__(self: imfusion.imagemath.lazy.Expression, channel: int) -> None

Creates an expression wrapping a variable, e.g. a result of another computation which is not yet available during creation of the expression. Currently, only one variable per expression is allowed.

Parameters:

channel (int) – The channel of the variable wrapped by Expression.
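
A short sketch of the different constructors (si and sis stand for a SharedImage and a SharedImageSet obtained elsewhere):

from imfusion.imagemath import lazy

e_set = lazy.Expression(sis)     # wraps a SharedImageSet
e_img = lazy.Expression(si)      # wraps a single SharedImage
e_scalar = lazy.Expression(0.5)  # wraps a scalar value
# Expression(channel) wraps a placeholder variable; as noted above,
# only one such variable is allowed per expression.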

__le__(*args, **kwargs)

Overloaded function.

  1. __le__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 <= x2), element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

  2. __le__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 <= x2), element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  3. __le__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 <= x2), element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

  4. __le__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 <= x2), element-wise.

Parameters:

x (float) – scalar value.

__lt__(*args, **kwargs)

Overloaded function.

  1. __lt__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 < x2), element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

  2. __lt__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 < x2), element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  3. __lt__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 < x2), element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

  4. __lt__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 < x2), element-wise.

Parameters:

x (float) – scalar value.

__mul__(*args, **kwargs)

Overloaded function.

  1. __mul__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Multiplication, element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

  2. __mul__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Multiplication, element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  3. __mul__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Multiplication, element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

  4. __mul__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression

Multiplication, element-wise.

Parameters:

x (float) – scalar value.

__ne__(*args, **kwargs)

Overloaded function.

  1. __ne__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 != x2), element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

  2. __ne__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 != x2), element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  3. __ne__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 != x2), element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

  4. __ne__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 != x2), element-wise.

Parameters:

x (float) – scalar value.

__neg__(self: Expression) Expression

Expression for numerical negative, element-wise.

__pow__(*args, **kwargs)

Overloaded function.

  1. __pow__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

The first argument is raised to powers of the second argument, element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

  2. __pow__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

The first argument is raised to powers of the second argument, element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  3. __pow__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

The first argument is raised to powers of the second argument, element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

  4. __pow__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression

The first argument is raised to powers of the second argument, element-wise.

Parameters:

x (float) – scalar value.

__radd__(*args, **kwargs)

Overloaded function.

  1. __radd__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

  2. __radd__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

  3. __radd__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression

__req__(*args, **kwargs)

Overloaded function.

  1. __req__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

  2. __req__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

  3. __req__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression

__rge__(*args, **kwargs)

Overloaded function.

  1. __rge__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

  2. __rge__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

  3. __rge__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression

__rgt__(*args, **kwargs)

Overloaded function.

  1. __rgt__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

  2. __rgt__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

  3. __rgt__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression

__rle__(*args, **kwargs)

Overloaded function.

  1. __rle__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

  2. __rle__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

  3. __rle__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression

__rlt__(*args, **kwargs)

Overloaded function.

  1. __rlt__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

  2. __rlt__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

  3. __rlt__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression

__rmul__(*args, **kwargs)

Overloaded function.

  1. __rmul__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

  2. __rmul__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

  3. __rmul__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression

__rne__(*args, **kwargs)

Overloaded function.

  1. __rne__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

  2. __rne__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

  3. __rne__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression

__rpow__(*args, **kwargs)

Overloaded function.

  1. __rpow__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

  2. __rpow__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

  3. __rpow__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression

__rsub__(*args, **kwargs)

Overloaded function.

  1. __rsub__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

  2. __rsub__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

  3. __rsub__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression

__rtruediv__(*args, **kwargs)

Overloaded function.

  1. __rtruediv__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

  2. __rtruediv__(self: imfusion.imagemath.lazy.Expression, arg0: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

  3. __rtruediv__(self: imfusion.imagemath.lazy.Expression, arg0: float) -> imfusion.imagemath.lazy.Expression

__sub__(*args, **kwargs)

Overloaded function.

  1. __sub__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Subtraction, element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

  2. __sub__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Subtraction, element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  3. __sub__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Subtraction, element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

  4. __sub__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression

Subtraction, element-wise.

Parameters:

x (float) – scalar value.

__truediv__(*args, **kwargs)

Overloaded function.

  1. __truediv__(self: imfusion.imagemath.lazy.Expression, x: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Division, element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

  2. __truediv__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Division, element-wise.

Parameters:

x (SharedImage) – SharedImage instance.

  3. __truediv__(self: imfusion.imagemath.lazy.Expression, x: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Division, element-wise.

Parameters:

x (SharedImageSet) – SharedImageSet instance.

  4. __truediv__(self: imfusion.imagemath.lazy.Expression, x: float) -> imfusion.imagemath.lazy.Expression

Division, element-wise.

Parameters:

x (float) – scalar value.

argmax(self: Expression) list[ndarray[numpy.int32[4, 1]]]

Return the expression for computing a list of the indices of maximum values, channel-wise. The indices are represented as (x, y, z, image index).

argmin(self: Expression) list[ndarray[numpy.int32[4, 1]]]

Return the expression for computing a list of the indices of minimum values, channel-wise. The indices are represented as (x, y, z, image index).

channel_swizzle(self: Expression, indices: list[int]) Expression

Returns the expression which reorders the channels of an image based on the input indices, e.g. indices[0] will correspond to the first channel of the output image.

Parameters:

indices (List[int]) – List of channel indices to swizzle the channels of the SharedImage or SharedImageSet expression.
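
For example, to swap the first two channels of a three-channel image (sketch; sis is a 3-channel SharedImageSet loaded beforehand):

from imfusion.imagemath import lazy

expr = lazy.Expression(sis)
swapped = expr.channel_swizzle([1, 0, 2]).evaluate()  # output channel 0 takes input channel 1, etc.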

evaluate(self: Expression) object

Evaluate the expression into an image object, i.e. a SharedImage or SharedImageSet instance. Scalar expressions return None when evaluated. Until this method is called, the operands and operations are only stored in an expression tree and not yet evaluated.

Returns: SharedImage or SharedImageSet instance depending on the end result of the expression tree.
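
A minimal sketch of the lazy-evaluation workflow (assuming sis is a SharedImageSet, e.g. from imfusion.io.open):

from imfusion.imagemath import lazy

expr = (lazy.Expression(sis) - 100.0) / 50.0  # only builds the expression tree
normalized = expr.evaluate()                  # runs the computation and returns a SharedImageSet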

max(self: Expression) ndarray[numpy.float64[m, 1]]

Return the expression for computing the list of the maximum elements of images, channel-wise.

mean(self: Expression) ndarray[numpy.float64[m, 1]]

Return the expression for computing a list of channel-wise average of image elements.

min(self: Expression) ndarray[numpy.float64[m, 1]]

Return the expression for computing the list of the minimum elements of images, channel-wise.

norm(self: Expression, order: object = 2) ndarray[numpy.float64[m, 1]]

Returns the expression for computing the norm of an image, channel-wise.

Parameters:

order (int, float, 'inf') – Order of the norm. Default is L2 norm.

prod(self: Expression) ndarray[numpy.float64[m, 1]]

Return the expression for computing a list of the channel-wise product of image elements.

sum(self: Expression) ndarray[numpy.float64[m, 1]]

Return the expression for computing a list of channel-wise sum of image elements.
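
Per the signatures above, these reductions return one value per channel as a NumPy vector; a short sketch (sis as before):

from imfusion.imagemath import lazy

expr = lazy.Expression(sis)
per_channel_mean = expr.mean()  # ndarray with one entry per channel
per_channel_max = expr.max()
l2 = expr.norm()                # L2 norm by default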

__annotations__ = {}
__hash__ = None
__module__ = 'imfusion.imagemath.lazy'
imfusion.imagemath.lazy.absolute(x: Expression) Expression

Expression for absolute value, element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.
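
The module-level functions mirror the Expression operators in a NumPy-like functional form; a small sketch (sis as before):

from imfusion.imagemath import lazy

expr = lazy.Expression(sis)
magnitude = lazy.absolute(expr).evaluate()  # element-wise absolute value
shifted = lazy.add(expr, 10.0).evaluate()   # equivalent to (expr + 10.0).evaluate()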

imfusion.imagemath.lazy.add(*args, **kwargs)

Overloaded function.

  1. add(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Addition, element-wise.

  2. add(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Addition, element-wise.

  3. add(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Addition, element-wise.

  4. add(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression

Addition, element-wise.

  5. add(x1: imfusion._bindings.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Addition, element-wise.

  6. add(x1: imfusion._bindings.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Addition, element-wise.

  7. add(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Addition, element-wise.

imfusion.imagemath.lazy.arctan2(*args, **kwargs)

Overloaded function.

  1. arctan2(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Trigonometric inverse tangent of x1/x2, element-wise.

  2. arctan2(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Trigonometric inverse tangent of x1/x2, element-wise.

  3. arctan2(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Trigonometric inverse tangent of x1/x2, element-wise.

  4. arctan2(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression

Trigonometric inverse tangent of x1/x2, element-wise.

  5. arctan2(x1: imfusion._bindings.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Trigonometric inverse tangent of x1/x2, element-wise.

  6. arctan2(x1: imfusion._bindings.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Trigonometric inverse tangent of x1/x2, element-wise.

  7. arctan2(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Trigonometric inverse tangent of x1/x2, element-wise.

imfusion.imagemath.lazy.argmax(x: Expression) list[ndarray[numpy.int32[4, 1]]]

Return the expression for computing a list of the indices of maximum values, channel-wise. The indices are represented as (x, y, z, image index).

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

imfusion.imagemath.lazy.argmin(x: Expression) list[ndarray[numpy.int32[4, 1]]]

Return the expression for computing a list of the indices of minimum values, channel-wise. The indices are represented as (x, y, z, image index).

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

imfusion.imagemath.lazy.channel_swizzle(x: Expression, indices: list[int]) Expression

Returns the expression which reorders the channels of an image based on the input indices, e.g. indices[0] will correspond to the first channel of the output image.

Parameters:
  • x (Expression) – Expression instance wrapping SharedImage instance or SharedImageSet instance.

  • indices (List[int]) – List of channel indices to swizzle the channels of the SharedImage or SharedImageSet expression.

imfusion.imagemath.lazy.cos(x: Expression) Expression

Expression for cosine, element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

imfusion.imagemath.lazy.divide(*args, **kwargs)

Overloaded function.

  1. divide(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Division, element-wise.

  2. divide(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Division, element-wise.

  3. divide(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Division, element-wise.

  4. divide(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression

Division, element-wise.

  5. divide(x1: imfusion._bindings.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Division, element-wise.

  6. divide(x1: imfusion._bindings.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Division, element-wise.

  7. divide(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Division, element-wise.

imfusion.imagemath.lazy.equal(*args, **kwargs)

Overloaded function.

  1. equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 == x2), element-wise.

  2. equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 == x2), element-wise.

  3. equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 == x2), element-wise.

  4. equal(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 == x2), element-wise.

  5. equal(x1: imfusion._bindings.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 == x2), element-wise.

  6. equal(x1: imfusion._bindings.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 == x2), element-wise.

  7. equal(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 == x2), element-wise.

imfusion.imagemath.lazy.exp(x: Expression) Expression

Expression for exponential operation, element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

imfusion.imagemath.lazy.greater(*args, **kwargs)

Overloaded function.

  1. greater(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 > x2), element-wise.

  2. greater(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 > x2), element-wise.

  3. greater(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 > x2), element-wise.

  4. greater(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 > x2), element-wise.

  5. greater(x1: imfusion._bindings.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 > x2), element-wise.

  6. greater(x1: imfusion._bindings.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 > x2), element-wise.

  7. greater(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 > x2), element-wise.

imfusion.imagemath.lazy.greater_equal(*args, **kwargs)

Overloaded function.

  1. greater_equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 >= x2), element-wise.

  2. greater_equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 >= x2), element-wise.

  3. greater_equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 >= x2), element-wise.

  4. greater_equal(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 >= x2), element-wise.

  5. greater_equal(x1: imfusion._bindings.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 >= x2), element-wise.

  6. greater_equal(x1: imfusion._bindings.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 >= x2), element-wise.

  7. greater_equal(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 >= x2), element-wise.

imfusion.imagemath.lazy.less(*args, **kwargs)

Overloaded function.

  1. less(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 < x2), element-wise.

  2. less(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 < x2), element-wise.

  3. less(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 < x2), element-wise.

  4. less(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 < x2), element-wise.

  5. less(x1: imfusion._bindings.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 < x2), element-wise.

  6. less(x1: imfusion._bindings.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 < x2), element-wise.

  7. less(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 < x2), element-wise.

imfusion.imagemath.lazy.less_equal(*args, **kwargs)

Overloaded function.

  1. less_equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 <= x2), element-wise.

  2. less_equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 <= x2), element-wise.

  3. less_equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 <= x2), element-wise.

  4. less_equal(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 <= x2), element-wise.

  5. less_equal(x1: imfusion._bindings.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 <= x2), element-wise.

  6. less_equal(x1: imfusion._bindings.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 <= x2), element-wise.

  7. less_equal(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 <= x2), element-wise.

imfusion.imagemath.lazy.log(x: Expression) Expression

Expression for natural logarithm, element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

imfusion.imagemath.lazy.max(x: Expression) ndarray[numpy.float64[m, 1]]

Return the expression for computing the list of the maximum elements of images, channel-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

imfusion.imagemath.lazy.maximum(*args, **kwargs)

Overloaded function.

  1. maximum(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return element-wise maximum of arguments.

  2. maximum(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Return element-wise maximum of arguments.

  3. maximum(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Return element-wise maximum of arguments.

  4. maximum(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression

Return element-wise maximum of arguments.

  5. maximum(x1: imfusion._bindings.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return element-wise maximum of arguments.

  6. maximum(x1: imfusion._bindings.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return element-wise maximum of arguments.

  7. maximum(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return element-wise maximum of arguments.

imfusion.imagemath.lazy.mean(x: Expression) ndarray[numpy.float64[m, 1]]

Return the expression for computing a list of channel-wise average of image elements.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

imfusion.imagemath.lazy.min(x: Expression) ndarray[numpy.float64[m, 1]]

Return the expression for computing the list of the minimum elements of images, channel-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

imfusion.imagemath.lazy.minimum(*args, **kwargs)

Overloaded function.

  1. minimum(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return element-wise minimum of arguments.

  2. minimum(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Return element-wise minimum of arguments.

  3. minimum(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Return element-wise minimum of arguments.

  4. minimum(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression

Return element-wise minimum of arguments.

  5. minimum(x1: imfusion._bindings.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return element-wise minimum of arguments.

  6. minimum(x1: imfusion._bindings.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return element-wise minimum of arguments.

  7. minimum(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return element-wise minimum of arguments.

imfusion.imagemath.lazy.multiply(*args, **kwargs)

Overloaded function.

  1. multiply(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Multiplication, element-wise.

  2. multiply(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Multiplication, element-wise.

  3. multiply(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Multiplication, element-wise.

  4. multiply(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression

Multiplication, element-wise.

  5. multiply(x1: imfusion._bindings.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Multiplication, element-wise.

  6. multiply(x1: imfusion._bindings.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Multiplication, element-wise.

  7. multiply(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Multiplication, element-wise.

imfusion.imagemath.lazy.negative(x: Expression) Expression

Expression for numerical negative, element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

imfusion.imagemath.lazy.norm(x: Expression, order: object = 2) ndarray[numpy.float64[m, 1]]

Returns the expression for computing the norm of an image, channel-wise.

Parameters:
  • x (Expression) – Expression instance wrapping SharedImage instance or SharedImageSet instance.

  • order (int, float, 'inf') – Order of the norm. Default is L2 norm.

imfusion.imagemath.lazy.not_equal(*args, **kwargs)

Overloaded function.

  1. not_equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 != x2), element-wise.

  2. not_equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 != x2), element-wise.

  3. not_equal(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 != x2), element-wise.

  4. not_equal(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 != x2), element-wise.

  5. not_equal(x1: imfusion._bindings.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 != x2), element-wise.

  6. not_equal(x1: imfusion._bindings.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 != x2), element-wise.

  7. not_equal(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Return the truth value of (x1 != x2), element-wise.

imfusion.imagemath.lazy.power(*args, **kwargs)

Overloaded function.

  1. power(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

The first argument is raised to powers of the second argument, element-wise.

  2. power(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

The first argument is raised to powers of the second argument, element-wise.

  3. power(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

The first argument is raised to powers of the second argument, element-wise.

  4. power(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression

The first argument is raised to powers of the second argument, element-wise.

  5. power(x1: imfusion._bindings.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

The first argument is raised to powers of the second argument, element-wise.

  6. power(x1: imfusion._bindings.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

The first argument is raised to powers of the second argument, element-wise.

  7. power(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

The first argument is raised to powers of the second argument, element-wise.

imfusion.imagemath.lazy.prod(x: Expression) ndarray[numpy.float64[m, 1]]

Return the expression for computing a list of the channel-wise product of image elements.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

imfusion.imagemath.lazy.sign(x: Expression) Expression

Expression for element-wise indication of the sign of image elements.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

imfusion.imagemath.lazy.sin(x: Expression) Expression

Expression for sine, element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

imfusion.imagemath.lazy.sqrt(x: Expression) Expression

Expression for the square-root operation, element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

imfusion.imagemath.lazy.square(x: Expression) Expression

Expression for the square operation, element-wise.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

imfusion.imagemath.lazy.subtract(*args, **kwargs)

Overloaded function.

  1. subtract(x1: imfusion.imagemath.lazy.Expression, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Subtraction, element-wise.

  2. subtract(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImage) -> imfusion.imagemath.lazy.Expression

Subtraction, element-wise.

  3. subtract(x1: imfusion.imagemath.lazy.Expression, x2: imfusion._bindings.SharedImageSet) -> imfusion.imagemath.lazy.Expression

Subtraction, element-wise.

  4. subtract(x1: imfusion.imagemath.lazy.Expression, x2: float) -> imfusion.imagemath.lazy.Expression

Subtraction, element-wise.

  5. subtract(x1: imfusion._bindings.SharedImage, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Subtraction, element-wise.

  6. subtract(x1: imfusion._bindings.SharedImageSet, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Subtraction, element-wise.

  7. subtract(x1: float, x2: imfusion.imagemath.lazy.Expression) -> imfusion.imagemath.lazy.Expression

Subtraction, element-wise.

imfusion.imagemath.lazy.sum(x: Expression) ndarray[numpy.float64[m, 1]]

Return the expression for computing a list of channel-wise sum of image elements.

Parameters:

x (Expression) – Expression instance wrapping SharedImage instance, SharedImageSet instance, or scalar value.

imfusion.labels

This module offers a way of interacting with Labels projects from Python. The central class in this module is the Project class. It allows you to create a new local project, or to load an existing local or remote project:

import imfusion
from imfusion import labels
new_project = labels.Project('New Project', 'path/to/new/project/folder')
existing_project = labels.Project.load('path/to/existing/project')
remote_project = labels.Project.load('http://example.com', '1', 'username', 'password123')

From there you can add new tag definitions, annotation definitions and data to the project:

project.add_tag('NewTag', labels.TagKind.Bool)
project.add_labelmap_layer('NewLabelmap')
project.add_descriptor(imfusion.io.open('/path/to/image')[0])

The Project instance is also the central way to access this kind of data:

new_tag = project.tags['NewTag']  # can also be indexed with an integer, i.e. tags[0]
new_labelmap = project.labelmap_layers['NewLabelmap']  # can also be indexed with an integer, i.e. labelmap_layers[0]
new_descriptor = project.descriptors[0]

The Descriptor class represents an entry in the project’s database and can be used to access the entry’s metadata, tags and annotations. The interface for accessing tags and annotations is the same as in Project but also offers the additional value attribute to get the value of the tag / annotation:

name = descriptor.name
shape = (descriptor.n_images, descriptor.n_channels, descriptor.n_slices, descriptor.height, descriptor.width)
new_tag = descriptor.tags['NewTag']
tag_value = descriptor.tags['NewTag'].value
labelmap = descriptor.labelmap_layers['NewLabelmap'].load()
roi = descriptor.roi
image = descriptor.load_image(crop_to_roi=True)

Note

Keep in mind that all modifications made to a local project are stored in memory and will only be saved to disk if you call Project.save(). Modifications to remote projects are applied immediately. Alternatively, you can also use the Project as a context manager:

with Project('SomeName', '/some/path') as project:
        ...  # will automatically save the project when exiting the context if there was no exception

Warning

Changing annotation data is the only exception to this rule. It is written immediately to disk (see LabelMapLayer.save_new_data(), LandmarkLayer.save_new_data(), BoundingBoxLayer.save_new_data()).

class imfusion.labels.BoundingBox

Bases: pybind11_object

property color
property descriptor
property index
property name
property project
class imfusion.labels.BoundingBoxAccessor

Bases: pybind11_object

__getitem__(*args, **kwargs)

Overloaded function.

  1. __getitem__(self: imfusion.labels._bindings.BoundingBoxAccessor, index: int) -> imfusion.labels._bindings.BoundingBox

Retrieve an entry from this BoundingBoxAccessor by its index.

Parameters:

index – Integer index of the entry to be retrieved.

  2. __getitem__(self: imfusion.labels._bindings.BoundingBoxAccessor, name: str) -> imfusion.labels._bindings.BoundingBox

Retrieve an entry from this BoundingBoxAccessor by its name.

Parameters:

name – Name of the entry to be retrieved.

  3. __getitem__(self: imfusion.labels._bindings.BoundingBoxAccessor, slice: slice) -> imfusion.labels._bindings.BoundingBoxAccessor

Retrieve multiple entries from this BoundingBoxAccessor using Python’s slice notation ([start:stop:step]).

Parameters:

slice – slice instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.

  4. __getitem__(self: imfusion.labels._bindings.BoundingBoxAccessor, selection: list[int]) -> imfusion.labels._bindings.BoundingBoxAccessor

Retrieve multiple entries from this BoundingBoxAccessor by using a list of indices.

Parameters:

selection – List of integer indices of the entries to be retrieved.
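
A short sketch of the indexing variants (hedged; descriptor is a Descriptor and 'Lesion' a hypothetical annotation name):

boxes = descriptor.boundingbox_layers[0].boundingboxes  # a BoundingBoxAccessor
first = boxes[0]          # by integer index
lesion = boxes['Lesion']  # by name
subset = boxes[0:2]       # by slice, returns another BoundingBoxAccessor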

__setitem__(*args, **kwargs)

Overloaded function.

  1. __setitem__(self: imfusion.labels._bindings.BoundingBoxAccessor, index: int, value: object) -> None

Change an existing entry by index.

Parameters:
  • index – Index of the entry to be changed.

  • value – Value to set at index.

  2. __setitem__(self: imfusion.labels._bindings.BoundingBoxAccessor, name: str, value: object) -> None

Change an existing entry by name.

Parameters:
  • name – Name of the entry to be changed.

  • value – Value to set at name.

  3. __setitem__(self: imfusion.labels._bindings.BoundingBoxAccessor, index: slice, value: list) -> None

Change multiple entries denoted using Python’s slice notation ([start:stop:step]).

Parameters:
  • slice – slice instance that specifies the indices of entries to be changed. Can be implicitly constructed from Python’s slice notation or created explicitly with slice.

  • value – Value to set at indices specified by ``slice``.

size(self: BoundingBoxAccessor) int
property names

List of the names of the BoundingBox entries available through this BoundingBoxAccessor

class imfusion.labels.BoundingBoxLayer

Bases: pybind11_object

add_annotation(self: BoundingBoxLayer, name: str, color: tuple[int, int, int] = (255, 255, 255)) BoundingBox

Define a new entry in this boundingbox layer. The definition consists only of the name; the actual coordinates are stored in the BoxSet.

Parameters:
  • name (str) – Name of the new boundingbox.

  • color (tuple[int, int, int]) – Color for displaying this boundingbox in the UI.

add_boundingbox()

add_annotation(self: imfusion.labels._bindings.BoundingBoxLayer, name: str, color: tuple[int, int, int] = (255, 255, 255)) -> imfusion.labels._bindings.BoundingBox

Define a new entry in this boundingbox layer. The definition consists only of the name; the actual coordinates are stored in the BoxSet.

Parameters:
  • name (str) – Name of the new boundingbox.

  • color (tuple[int, int, int]) – Color for displaying this boundingbox in the UI.

load(self: BoundingBoxLayer) object
save_new_data(self: BoundingBoxLayer, value: object, lock_token: LockToken = LockToken(token='')) None

Change the data of this layer.

Warning

Beware that, unlike other modifications, new layer data is immediately written to disk, regardless of calls to Project.save().

property annotations
property boundingboxes
property descriptor
property folder
property id
property index
property name
property project
class imfusion.labels.BoundingBoxLayersAccessor

Bases: pybind11_object

__getitem__(*args, **kwargs)

Overloaded function.

  1. __getitem__(self: imfusion.labels._bindings.BoundingBoxLayersAccessor, index: int) -> imfusion.labels._bindings.BoundingBoxLayer

Retrieve an entry from this BoundingBoxLayersAccessor by its index.

Parameters:

index – Integer index of the entry to be retrieved.

  2. __getitem__(self: imfusion.labels._bindings.BoundingBoxLayersAccessor, name: str) -> imfusion.labels._bindings.BoundingBoxLayer

Retrieve an entry from this BoundingBoxLayersAccessor by its name.

Parameters:

name – Name of the entry to be retrieved.

  3. __getitem__(self: imfusion.labels._bindings.BoundingBoxLayersAccessor, slice: slice) -> imfusion.labels._bindings.BoundingBoxLayersAccessor

Retrieve multiple entries from this BoundingBoxLayersAccessor using Python’s slice notation ([start:stop:step]).

Parameters:

slice – slice instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.

  4. __getitem__(self: imfusion.labels._bindings.BoundingBoxLayersAccessor, selection: list[int]) -> imfusion.labels._bindings.BoundingBoxLayersAccessor

Retrieve multiple entries from this BoundingBoxLayersAccessor by using a list of indices.

Parameters:

selection – List of integer indices of the entries to be retrieved.

size(self: BoundingBoxLayersAccessor) int
property active

Return the currently active layer or None if no layer is active.

The active layer is usually only relevant when using Python inside the application. It can be set by the user to define the layer that can be modified with e.g. the brush tool.

It’s currently not possible to change the active layer through the Python API; this can only be done in the UI.

property names

List of the names of BoundingBoxLayers available through this BoundingBoxLayersAccessor

class imfusion.labels.BoxSet(self: BoxSet, names: list[str], n_frames: int)

Bases: pybind11_object

add(self: BoxSet, type: str, frame: int, top_left: ndarray[numpy.float64[3, 1]], lower_right: ndarray[numpy.float64[3, 1]]) None

Add a box to the set.

Parameters:
  • type (str) – Type of box that should be added.

  • frame (int) – Frame for which this box should be added.

  • top_left (tuple[int, int, int]) – Coordinates of the top left corner of the box.

  • lower_right (tuple[int, int, int]) – Coordinates of the lower right corner of the box.
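
A sketch of building a BoxSet for a layer (hedged; the layer name 'NewBoxes' and box type 'Lesion' are illustrative, and the tuple-to-vector conversion is an assumption):

boxset = labels.BoxSet.from_descriptor(descriptor, 'NewBoxes')
boxset.add('Lesion', 0, (10, 10, 0), (50, 60, 0))  # type, frame, top_left, lower_right
descriptor.boundingbox_layers['NewBoxes'].save_new_data(boxset)  # written to disk immediately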

asdict(self: BoxSet) dict

Convert this BoxSet into a dict. Modifying the dict does not reflect on the BoxSet.

frame(self: BoxSet, which: int) BoxSet

Select only the boxes that belong to the specified frame.

static from_descriptor(descriptor: Descriptor, layer_name: str) BoxSet

Create a BoxSet tailored to a specific annotation layer in a descriptor.

type(*args, **kwargs)

Overloaded function.

  1. type(self: imfusion.labels._bindings.BoxSet, type: str) -> imfusion.labels._bindings.BoxSet

Select only the boxes that belong to the specified type.

  2. type(self: imfusion.labels._bindings.BoxSet, type: int) -> imfusion.labels._bindings.BoxSet

Select only the boxes that belong to the specified type.

class imfusion.labels.DataType(self: DataType, value: int)

Bases: pybind11_object

Enum for specifying what is considered valid data in the project.

Members:

SingleChannelImages : Consider 2D greyscale images as valid data.

MultiChannelImages : Consider 2D color images as valid data.

SingleChannelVolumes : Consider 3D greyscale images as valid data.

MultiChannelVolumes : Consider 3D color images as valid data.

AnyDataType : Consider any kind of image data as valid data.

AnyDataType = DataType.AnyDataType
MultiChannelImages = DataType.MultiChannelImages
MultiChannelVolumes = DataType.MultiChannelVolumes
SingleChannelImages = DataType.SingleChannelImages
SingleChannelVolumes = DataType.SingleChannelVolumes
property name
property value
class imfusion.labels.Descriptor

Bases: pybind11_object

Class representing an entry in the project’s database. It holds, amongst other things, metadata about the image, its annotations, and the location of the image.

consider_frame_annotated(self: Descriptor, frame: int, annotated: bool) None
is_considered_annotated(self: Descriptor, frame: object = None) bool
load_image(self: Descriptor, crop_to_roi: bool) SharedImageSet
load_thumbnail(self: Descriptor, generate: bool = True) SharedImageSet

Return the image thumbnail as a SharedImageSet.

Parameters:

generate (bool) – Whether to generate the thumbnail if it’s missing. If this is False, the method will return None for missing thumbnails.
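
For instance (sketch; descriptor obtained from project.descriptors):

thumb = descriptor.load_thumbnail()             # generates the thumbnail if it is missing
maybe_thumb = descriptor.load_thumbnail(False)  # returns None if the thumbnail is missing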

lock(self: Descriptor) LockToken
property boundingbox_layers
property byte_size
property comments
property grouping
property has_data
property height
property identifier
property import_time
property is_locked
property labelmap_layers
property landmark_layers
property latest_edit_time
property load_path
property modality
property n_channels
property n_images
property n_slices
property name
property original_data_path
property own_copy
property patient_name
property project
property region_of_interest
property roi
property scale
property series_instance_uid
property shift
property spacing
property sub_file_id
property tags
property thumbnail_path
property top_down
property type
property width
class imfusion.labels.Label(self: Label, name: str, kind: LayerKind, color: tuple[int, int, int] | None = None, value: int | None = None)

Bases: pybind11_object

A single Label of a Layer that defines, among other things, its name and color.

property color
property id
property kind
property name
property value
class imfusion.labels.LabelLegacy

Bases: pybind11_object

property color
property descriptor
property index
property name
property project
property value
class imfusion.labels.LabelMapLayer

Bases: pybind11_object

add_annotation(self: LabelMapLayer, name: str, value: int, color: tuple[int, int, int] | None = None) LabelLegacy

Define a new entry in this labelmap layer. A label is represented by a name and a corresponding integer value for designating voxels in the labelmap.

Parameters:
  • name (str) – Name of the new label.

  • value (int) – Value for encoding this label in the labelmap.

  • color (tuple[int, int, int]) – Color for displaying this label in the UI. Values need to be in the range [0, 255]. Default colors are picked if not provided.

add_label()

add_annotation(self: imfusion.labels._bindings.LabelMapLayer, name: str, value: int, color: Optional[tuple[int, int, int]] = None) -> imfusion.labels._bindings.LabelLegacy

Define a new entry in this labelmap layer. A label is represented by a name and a corresponding integer value for designating voxels in the labelmap.

Parameters:
  • name (str) – Name of the new label.

  • value (int) – Value for encoding this label in the labelmap.

  • color (tuple[int, int, int]) – Color for displaying this label in the UI. Values need to be in the range [0, 255]. Default colors are picked if not provided.

create_empty_labelmap(self: LabelMapLayer) object

Create an empty labelmap that is compatible with this layer. The labelmap will have the same size and meta data as the image. The labelmap is completely independent of the layer and does not replace the existing labelmap of the layer! To use this labelmap for the layer, call LabelMapLayer.save_new_data().
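
Putting it together, a minimal sketch (hedged; 'NewLabelmap' as defined in the module introduction):

layer = descriptor.labelmap_layers['NewLabelmap']
labelmap = layer.create_empty_labelmap()  # empty labelmap, independent of the layer
# ... fill the labelmap here ...
layer.save_new_data(labelmap)             # written to disk immediately, see the Warning below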

has_data(self: LabelMapLayer) bool

Return whether the labelmap exists and is not empty.

load(self: LabelMapLayer) object

Load the labelmap as a SharedImageSet. If the labelmap is completely empty, None is returned. To create a new labelmap use LabelMapLayer.create_empty_labelmap().

path(self: LabelMapLayer) str

Returns the path where the labelmap is stored on disk. Empty for remote projects.

save_new_data(self: LabelMapLayer, value: object, lock_token: LockToken = LockToken(token='')) None

Change the data of this layer.

Warning

Beware that, unlike other modifications, new layer data is immediately written to disk, regardless of calls to Project.save().

thumbnail_path(self: LabelMapLayer) str

Returns the path where the labelmap thumbnail is stored on disk. Empty for remote projects.

property annotations
property descriptor
property folder
property id
property index
property labels
property name
property project
class imfusion.labels.LabelMapsAccessor

Bases: pybind11_object

__getitem__(*args, **kwargs)

Overloaded function.

  1. __getitem__(self: imfusion.labels._bindings.LabelMapsAccessor, index: int) -> imfusion.labels._bindings.LabelMapLayer

Retrieve an entry from this LabelMapsAccessor by its index.

Parameters:

index – Integer index of the entry to be retrieved.

  2. __getitem__(self: imfusion.labels._bindings.LabelMapsAccessor, name: str) -> imfusion.labels._bindings.LabelMapLayer

Retrieve an entry from this LabelMapsAccessor by its name.

Parameters:

name – Name of the entry to be retrieved.

  3. __getitem__(self: imfusion.labels._bindings.LabelMapsAccessor, slice: slice) -> imfusion.labels._bindings.LabelMapsAccessor

Retrieve multiple entries from this LabelMapsAccessor using Python’s slice notation ([start:stop:step]).

Parameters:

slice – slice instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.

  4. __getitem__(self: imfusion.labels._bindings.LabelMapsAccessor, selection: list[int]) -> imfusion.labels._bindings.LabelMapsAccessor

Retrieve multiple entries from this LabelMapsAccessor by using a list of indices.

Parameters:

selection – List of integer indices of the entries to be retrieved.

size(self: LabelMapsAccessor) int
property active

Return the currently active layer or None if no layer is active.

The active layer is usually only relevant when using Python inside the application. It can be set by the user to define the layer that can be modified with e.g. the brush tool.

It’s currently not possible to change the active layer through the Python API; this can only be done in the UI.

property names

List of the names of LabelMaps available through this LabelMapsAccessor

class imfusion.labels.LabelsAccessor

Bases: pybind11_object

Like a list of Label, but allows indexing by index or name.

__getitem__(*args, **kwargs)

Overloaded function.

  1. __getitem__(self: imfusion.labels._bindings.LabelsAccessor, index: int) -> imfusion.labels._bindings.Label

    Retrieve an entry from this LabelsAccessor by its index.

    Args:

    index: Integer index of the entry to be retrieved.

  2. __getitem__(self: imfusion.labels._bindings.LabelsAccessor, name: str) -> imfusion.labels._bindings.Label

    Retrieve an entry from this LabelsAccessor by its name.

    Args:

    name: Name of the entry to be retrieved.

  3. __getitem__(self: imfusion.labels._bindings.LabelsAccessor, slice: slice) -> imfusion.labels._bindings.LabelsAccessor

    Retrieve multiple entries from this LabelsAccessor using Python’s slice notation ([start:stop:step]).

    Args:

    slice: slice instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.

property names

List of the names of Labels available through this LabelsAccessor

class imfusion.labels.LabelsAccessorLegacy

Bases: pybind11_object

__getitem__(*args, **kwargs)

Overloaded function.

  1. __getitem__(self: imfusion.labels._bindings.LabelsAccessorLegacy, index: int) -> imfusion.labels._bindings.LabelLegacy

Retrieve an entry from this LabelsAccessorLegacy by its index.

Parameters:

index – Integer index of the entry to be retrieved.

  2. __getitem__(self: imfusion.labels._bindings.LabelsAccessorLegacy, name: str) -> imfusion.labels._bindings.LabelLegacy

Retrieve an entry from this LabelsAccessorLegacy by its name.

Parameters:

name – Name of the entry to be retrieved.

  1. __getitem__(self: imfusion.labels._bindings.LabelsAccessorLegacy, slice: slice) -> imfusion.labels._bindings.LabelsAccessorLegacy

Retrieve multiple entries from this LabelsAccessorLegacy using Python’s slice notation ([start:stop:step]).

Parameters:

sliceslice instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.

  1. __getitem__(self: imfusion.labels._bindings.LabelsAccessorLegacy, selection: list[int]) -> imfusion.labels._bindings.LabelsAccessorLegacy

Retrieve multiple entries from this LabelsAccessorLegacy by using a list of indices.

Parameters:

selection – List of integer indices of the entries to be retrieved.

__setitem__(*args, **kwargs)

Overloaded function.

  1. __setitem__(self: imfusion.labels._bindings.LabelsAccessorLegacy, index: int, value: object) -> None

Change an existing entry by index.

Parameters:
  • index – Index of the entry to be changed.

  • value – Value to set at index.

  2. __setitem__(self: imfusion.labels._bindings.LabelsAccessorLegacy, name: str, value: object) -> None

Change an existing entry by name.

Parameters:
  • name – Name of the entry to be changed.

  • value – Value to set at name.

  3. __setitem__(self: imfusion.labels._bindings.LabelsAccessorLegacy, index: slice, value: list) -> None

Change multiple entries denoted using Python’s slice notation ([start:stop:step]).

Parameters:
  • slice – slice instance that specifies the indices of entries to be changed. Can be implicitly constructed from Python’s slice notation or created explicitly with slice.

  • value – Value to set at the indices specified by slice.

size(self: LabelsAccessorLegacy) int
property names

List of the names of the LabelLegacy entries available through this LabelsAccessorLegacy

class imfusion.labels.Landmark

Bases: pybind11_object

property color
property descriptor
property index
property name
property project
class imfusion.labels.LandmarkLayer

Bases: pybind11_object

add_annotation(self: LandmarkLayer, name: str, color: tuple[int, int, int] = (255, 255, 255)) Landmark

Define a new entry in this landmark layer. The definition consists of only the name; the actual coordinates are stored in the LandmarkSet.

Parameters:
  • name (str) – Name of the new landmark.

  • color (tuple[int, int, int]) – Color for displaying this annotation in the UI.

add_landmark()

add_annotation(self: imfusion.labels._bindings.LandmarkLayer, name: str, color: tuple[int, int, int] = (255, 255, 255)) -> imfusion.labels._bindings.Landmark

Define a new entry in this landmark layer. The definition consists of only the name; the actual coordinates are stored in the LandmarkSet.

Parameters:
  • name (str) – Name of the new landmark.

  • color (tuple[int, int, int]) – Color for displaying this annotation in the UI.

load(self: LandmarkLayer) object
save_new_data(self: LandmarkLayer, value: object, lock_token: LockToken = LockToken(token='')) None

Change the data of this layer.

Warning

Beware that, unlike other modifications, new layer data is immediately written to disk, regardless of calls to Project.save().

property annotations
property descriptor
property folder
property id
property index
property landmarks
property name
property project
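
A minimal sketch of defining a new landmark annotation, assuming layer is a LandmarkLayer obtained from a project (the annotation name and color are illustrative):

apex = layer.add_annotation("apex", color=(255, 0, 0))
print(apex.name, apex.color)
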
class imfusion.labels.LandmarkLayersAccessor

Bases: pybind11_object

__getitem__(*args, **kwargs)

Overloaded function.

  1. __getitem__(self: imfusion.labels._bindings.LandmarkLayersAccessor, index: int) -> imfusion.labels._bindings.LandmarkLayer

Retrieve an entry from this LandmarkLayersAccessor by its index.

Parameters:

index – Integer index of the entry to be retrieved.

  2. __getitem__(self: imfusion.labels._bindings.LandmarkLayersAccessor, name: str) -> imfusion.labels._bindings.LandmarkLayer

Retrieve an entry from this LandmarkLayersAccessor by its name.

Parameters:

name – Name of the entry to be retrieved.

  3. __getitem__(self: imfusion.labels._bindings.LandmarkLayersAccessor, slice: slice) -> imfusion.labels._bindings.LandmarkLayersAccessor

Retrieve multiple entries from this LandmarkLayersAccessor using Python’s slice notation ([start:stop:step]).

Parameters:

slice – slice instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.

  4. __getitem__(self: imfusion.labels._bindings.LandmarkLayersAccessor, selection: list[int]) -> imfusion.labels._bindings.LandmarkLayersAccessor

Retrieve multiple entries from this LandmarkLayersAccessor by using a list of indices.

Parameters:

selection – List of integer indices of the entries to be retrieved.

size(self: LandmarkLayersAccessor) int
property active

Return the currently active layer or None if no layer is active.

The active layer is usually only relevant when using Python inside the application. It can be set by the user to define the layer that can be modified with e.g. the brush tool.

The active layer can currently only be changed in the UI, not through the Python API.

property names

List of the names of LandmarkLayers available through this LandmarkLayersAccessor

class imfusion.labels.LandmarkSet(self: LandmarkSet, names: list[str], n_frames: int)

Bases: pybind11_object

add(self: LandmarkSet, type: str, frame: int, world: ndarray[numpy.float64[3, 1]]) None

Add a keypoint to the set.

Parameters:
  • type (str) – Type of keypoint that should be added.

  • frame (int) – Frame for which this keypoint should be added.

  • world (numpy.ndarray) – Coordinates of the point in world space.

asdict(self: LandmarkSet) dict

Convert this LandmarkSet into a dict. Modifying the dict does not affect the LandmarkSet.

frame(self: LandmarkSet, which: int) LandmarkSet

Select only the points that belong to the specified frame.

static from_descriptor(descriptor: Descriptor, layer_name: str) LandmarkSet

Create a LandmarkSet tailored to a specific annotation layer in a descriptor.

type(*args, **kwargs)

Overloaded function.

  1. type(self: imfusion.labels._bindings.LandmarkSet, type: str) -> imfusion.labels._bindings.LandmarkSet

Select only the points that belong to the specified type.

  2. type(self: imfusion.labels._bindings.LandmarkSet, type: int) -> imfusion.labels._bindings.LandmarkSet

Select only the points that belong to the specified type.
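
A minimal usage sketch combining the calls above (landmark names and coordinates are illustrative):

import numpy as np
import imfusion.labels as labels

# two landmark types over 10 frames
points = labels.LandmarkSet(["apex", "valve"], n_frames=10)
points.add("apex", frame=0, world=np.array([10.0, 20.0, 30.0]))

apex_in_first_frame = points.type("apex").frame(0)
print(points.asdict())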

class imfusion.labels.LandmarksAccessor

Bases: pybind11_object

__getitem__(*args, **kwargs)

Overloaded function.

  1. __getitem__(self: imfusion.labels._bindings.LandmarksAccessor, index: int) -> imfusion.labels._bindings.Landmark

Retrieve an entry from this LandmarksAccessor by its index.

Parameters:

index – Integer index of the entry to be retrieved.

  2. __getitem__(self: imfusion.labels._bindings.LandmarksAccessor, name: str) -> imfusion.labels._bindings.Landmark

Retrieve an entry from this LandmarksAccessor by its name.

Parameters:

name – Name of the entry to be retrieved.

  3. __getitem__(self: imfusion.labels._bindings.LandmarksAccessor, slice: slice) -> imfusion.labels._bindings.LandmarksAccessor

Retrieve multiple entries from this LandmarksAccessor using Python’s slice notation ([start:stop:step]).

Parameters:

slice – slice instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.

  4. __getitem__(self: imfusion.labels._bindings.LandmarksAccessor, selection: list[int]) -> imfusion.labels._bindings.LandmarksAccessor

Retrieve multiple entries from this LandmarksAccessor by using a list of indices.

Parameters:

selection – List of integer indices of the entries to be retrieved.

__setitem__(*args, **kwargs)

Overloaded function.

  1. __setitem__(self: imfusion.labels._bindings.LandmarksAccessor, index: int, value: object) -> None

Change an existing entry by index.

Parameters:
  • index – Index of the entry to be changed.

  • value – Value to set at index.

  2. __setitem__(self: imfusion.labels._bindings.LandmarksAccessor, name: str, value: object) -> None

Change an existing entry by name.

Parameters:
  • name – Name of the entry to be changed.

  • value – Value to set at name.

  3. __setitem__(self: imfusion.labels._bindings.LandmarksAccessor, index: slice, value: list) -> None

Change multiple entries denoted using Python’s slice notation ([start:stop:step]).

Parameters:
  • slice – slice instance that specifies the indices of entries to be changed. Can be implicitly constructed from Python’s slice notation or created explicitly with slice.

  • value – Value to set at the indices specified by slice.

size(self: LandmarksAccessor) int
property names

List of the names of Landmarks available through this LandmarksAccessor

class imfusion.labels.Layer(self: Layer, name: str, kind: LayerKind, labels: list[Label] = [])

Bases: pybind11_object

A single layer that defines which labels can be annotated for each Descriptor.

add_label(self: Layer, arg0: Label) None
property id
property kind
property labels
property name
class imfusion.labels.LayerKind(self: LayerKind, value: int)

Bases: pybind11_object

The kind of a layer defines what can be labelled in that layer.

Members:

PIXELWISE

BOUNDINGBOX

LANDMARK

BOUNDINGBOX = <LayerKind.BOUNDINGBOX: 1>
LANDMARK = <LayerKind.LANDMARK: 2>
PIXELWISE = <LayerKind.PIXELWISE: 0>
property name
property value
class imfusion.labels.LayersAccessor

Bases: pybind11_object

Like a list of Layer, but allows indexing by index or name.

__getitem__(*args, **kwargs)

Overloaded function.

  1. __getitem__(self: imfusion.labels._bindings.LayersAccessor, index: int) -> imfusion.labels._bindings.Layer

    Retrieve an entry from this LayersAccessor by its index.

    Args:

    index: Integer index of the entry to be retrieved.

  2. __getitem__(self: imfusion.labels._bindings.LayersAccessor, name: str) -> imfusion.labels._bindings.Layer

    Retrieve an entry from this LayersAccessor by its name.

    Args:

    name: Name of the entry to be retrieved.

  3. __getitem__(self: imfusion.labels._bindings.LayersAccessor, slice: slice) -> imfusion.labels._bindings.LayersAccessor

    Retrieve multiple entries from this LayersAccessor using Python’s slice notation ([start:stop:step]).

    Args:

    slice: slice instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.

property names

List of the names of Layers available through this LayersAccessor

class imfusion.labels.LockToken

Bases: pybind11_object

A token representing a lock of a DataDescriptor.

Only the holder of the token can modify the layers of a locked Descriptor. Locking is only supported in remote projects; local projects ignore the locking mechanism. A LockToken can be acquired through lock(). It can be used as a context manager so that it is unlocked automatically when exiting the context. Tokens expire automatically after a certain time depending on the server (default: after 5 minutes).

descriptor = project.descriptors[0]
with descriptor.lock() as lock:
    ...
unlock(self: LockToken) None

Releases the lock. The token will become invalid afterwards and should not be used anymore.

class imfusion.labels.Project(self: imfusion.labels._bindings.Project, name: str, project_path: str, data_type: imfusion.labels._bindings.DataType = <DataType.AnyDataType: 15>)

Bases: pybind11_object

Class that represents a Labels project. A project holds all information regarding defined annotations and data samples.

Create a new local project. Doing so will also create a new project folder on disk.

Parameters:
  • name (str) – Name of the project.

  • project_path (str) – Folder that should contain all the project’s files.

  • data_type (DataType) – Type of data, which is allowed to be added to this project. By default, there are no restrictions on the type of data.
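
A minimal sketch of creating a local project (the name and path are illustrative):

import imfusion.labels as labels

project = labels.Project("MyProject", "/tmp/my_labels_project")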

add_boundingbox_layer(self: Project, name: str) BoundingBoxLayer

Define a new boundingbox layer for this project.

Parameters:

name (str) – Name of the new boundingbox layer.

add_descriptor(*args, **kwargs)

Overloaded function.

  1. add_descriptor(self: imfusion.labels._bindings.Project, shared_image_set: imfusion._bindings.SharedImageSet, name: str = '', own_copy: bool = False) -> object

Create a new entry in the project’s database from a given image. For local projects, the descriptor of the dataset is returned immediately. For remote projects, only the identifier of the descriptor is returned; the actual dataset will only become available after a call to sync().

Parameters:
  • name (str) – Name of the new database entry.

  • shared_image_set (SharedImageSet) – Image for which the new entry will be created.

  • own_copy (bool) – If True, Labels will save a copy of the image in the project folder. Automatically set to True if the image does not have a DataSourceComponent, as this implies that it was created rather than loaded.

  2. add_descriptor(self: imfusion.labels._bindings.Project, name: str, shared_image_set: imfusion._bindings.SharedImageSet, own_copy: bool = False) -> object

Create a new entry in the project’s database from a given image. For local projects, the descriptor of the dataset is returned immediately. For remote projects, only the identifier of the descriptor is returned; the actual dataset will only become available after a call to sync().

Parameters:
  • name (str) – Name of the new database entry.

  • shared_image_set (SharedImageSet) – Image for which the new entry will be created.

  • own_copy (bool) – If True, Labels will save a copy of the image in the project folder. Automatically set to True if the image does not have a DataSourceComponent, as this implies that it was created rather than loaded.
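
A minimal sketch of adding an image to a project, assuming sis is a SharedImageSet that was loaded or created beforehand (the entry name is illustrative):

descriptor = project.add_descriptor(sis, name="case-001", own_copy=True)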

add_labelmap_layer(self: Project, name: str) LabelMapLayer

Define a new labelmap layer for this project.

Parameters:

name (str) – Name of the new labelmap layer.

add_landmark_layer(self: Project, name: str) LandmarkLayer

Define a new landmark layer for this project.

Parameters:

name (str) – Name of the new landmark layer.

add_tag(self: Project, name: str, kind: TagKind, color: tuple[int, int, int] = (255, 255, 255), options: list[str] = []) TagLegacy

Define a new tag for this project.

Parameters:
  • name (str) – Name of the new tag.

  • kind (TagKind) – Type of the new tag (Bool, Float or Enum).

  • color (tuple[int, int, int]) – Color of the tag in the UI.

  • options (list[str]) – Options that the user can choose from. Only applies to Enum tags.
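
A minimal sketch of defining tags (names, colors, and options are illustrative):

import imfusion.labels as labels

reviewed = project.add_tag("reviewed", labels.TagKind.Bool)
quality = project.add_tag("quality", labels.TagKind.Enum,
                          color=(0, 255, 0), options=["good", "bad"])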

static create(settings: ProjectSettings, path: str = '', username: str = '', password: str = '') Project

Create a new project with the given settings.

path can be either a path or URL.

Passing a folder will create a local project. The folder must be empty, otherwise an exception is raised.

When passing a http(s) URL, it must point to the base URL of a Labels server (e.g. https://example.com and not https://example.com/api/v1/projects). Additionally, a valid username and password must be specified. The server might reject a project, e.g. because a project with the same name already exists. In this case, an exception is raised.

delete_descriptors(self: Project, descriptors: list[Descriptor]) None

Remove the given descriptors from the project.

Parameters:

descriptors (list[Descriptor]) – List of descriptors that should be deleted from the project.

edit(self: Project, arg0: ProjectSettings) None

Edit the project settings by applying the given settings.

Editing a project is a potentially destructive action that cannot be reverted.

When adding new tags, layers, or labels, their “id” field should be empty (an id will be assigned automatically).

Warning

Remote projects are not edited in-place at the moment. After calling this method, you need to reload the project from the server. Otherwise, the project settings will be out of sync with the server.

static load(path: str, project_id: str | None = None, username: str | None = None, password: str | None = None) Project

Load an existing project from disk or from a remote server.

Parameters:
  • path (str) – Either a folder containing a local project or a URL to a remote project.

  • project_id (str) – the ID of the project to load

  • username (str) – the username with which to authenticate

  • password (str) – the password with which to authenticate
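
A minimal sketch of loading projects (paths, URL, and credentials are illustrative):

import imfusion.labels as labels

# local project from a folder
project = labels.Project.load("/tmp/my_labels_project")

# remote project from a Labels server
remote = labels.Project.load("https://example.com", project_id="123",
                             username="alice", password="secret")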

refresh_access_token(self: Project) None

Refresh the access token of a remote project. Access tokens expire after a predefined period of time and need to be refreshed in order to make further requests.

save(self: Project) None

Save the modifications performed in memory to disk.

settings(self: Project) ProjectSettings

Return the current settings of a project.

The settings are not connected to the project, so changing the settings object does not change the project. Use edit() to apply new settings.
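
A minimal sketch of the settings/edit round trip (the layer name is illustrative):

import imfusion.labels as labels

settings = project.settings()
settings.add_layer(labels.Layer("Organs", labels.LayerKind.PIXELWISE))
project.edit(settings)  # destructive: cannot be reverted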

sync(self: Project) int

Synchronize the local state of a remote project. Any “event” that occurred between the last sync() call and this one is replayed locally, such that the local Project reflects the last known state of the project on the server. An “event” refers to any change made to the project data by any client (including this one), such as a dataset being added or deleted, a new label map being uploaded, a tag value being changed, etc.

Returns the number of events applied to the project.

property boundingbox_layers

Returns a BoundingBoxLayersAccessor to the boundingbox layers defined in the project.

property configuration
property data_type
property descriptors
property grouping_hierachy
property id

Return the unique id of a remote project.

property is_local

Returns whether the project is local

property is_remote

Returns whether the project is remote

property labelmap_layers

Returns an Accessor to the labelmap layers defined in the project.

property landmark_layers

Returns a LandmarkLayersAccessor to the landmark layers defined in the project.

property path
property tags

Returns an Accessor to the tags defined in the project.

class imfusion.labels.ProjectSettings(self: ProjectSettings, name: str, tags: list[Tag] = [], layers: list[Layer] = [])

Bases: pybind11_object

Contains the individual elements that make up a project definition.

add_layer(self: ProjectSettings, arg0: Layer) None

Add a new layer to the settings.

add_tag(self: ProjectSettings, arg0: Tag) None

Add a new tag to the settings.

remove_layer(self: ProjectSettings, arg0: Layer) None
remove_tag(self: ProjectSettings, arg0: Tag) None
property layers
property name
property tags
class imfusion.labels.Tag(self: Tag, name: str, kind: TagKind, color: tuple[int, int, int] | None = None, options: list = [])

Bases: pybind11_object

A Tag definition. Tag values can be set on a Descriptor according to this definition.

property color
property id
property kind
property name
property options
class imfusion.labels.TagKind(self: TagKind, value: int)

Bases: pybind11_object

Enum for differentiating different kinds of tags.

Members:

Bool : Tag that stores a single boolean value.

Enum : Tag that stores a list of string options.

Float : Tag that stores a single float value.

Bool = <TagKind.Bool: 0>
Enum = <TagKind.Enum: 1>
Float = <TagKind.Float: 2>
property name
property value
class imfusion.labels.TagLegacy

Bases: pybind11_object

add_option(self: TagLegacy, option: str) None

Add a new value option for this tag (only works with enum tags).

Parameters:

option – New option to be added to this tag.

property color
property descriptor
property id
property index
property kind
property locked
property name
property options
property project
property value
class imfusion.labels.TagsAccessor

Bases: pybind11_object

Like a list of Tag, but allows indexing by index or name.

__getitem__(*args, **kwargs)

Overloaded function.

  1. __getitem__(self: imfusion.labels._bindings.TagsAccessor, index: int) -> imfusion.labels._bindings.Tag

    Retrieve an entry from this TagsAccessor by its index.

    Args:

    index: Integer index of the entry to be retrieved.

  2. __getitem__(self: imfusion.labels._bindings.TagsAccessor, name: str) -> imfusion.labels._bindings.Tag

    Retrieve an entry from this TagsAccessor by its name.

    Args:

    name: Name of the entry to be retrieved.

  3. __getitem__(self: imfusion.labels._bindings.TagsAccessor, slice: slice) -> imfusion.labels._bindings.TagsAccessor

    Retrieve multiple entries from this TagsAccessor using Python’s slice notation ([start:stop:step]).

    Args:

    slice: slice instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.

property names

List of the names of Tags available through this TagsAccessor

class imfusion.labels.TagsAccessorLegacy

Bases: pybind11_object

__getitem__(*args, **kwargs)

Overloaded function.

  1. __getitem__(self: imfusion.labels._bindings.TagsAccessorLegacy, index: int) -> imfusion.labels._bindings.TagLegacy

Retrieve an entry from this TagsAccessorLegacy by its index.

Parameters:

index – Integer index of the entry to be retrieved.

  2. __getitem__(self: imfusion.labels._bindings.TagsAccessorLegacy, name: str) -> imfusion.labels._bindings.TagLegacy

Retrieve an entry from this TagsAccessorLegacy by its name.

Parameters:

name – Name of the entry to be retrieved.

  3. __getitem__(self: imfusion.labels._bindings.TagsAccessorLegacy, slice: slice) -> imfusion.labels._bindings.TagsAccessorLegacy

Retrieve multiple entries from this TagsAccessorLegacy using Python’s slice notation ([start:stop:step]).

Parameters:

slice – slice instance that specifies the indices of entries to be retrieved. Can be implicitly constructed using Python’s slice notation.

  4. __getitem__(self: imfusion.labels._bindings.TagsAccessorLegacy, selection: list[int]) -> imfusion.labels._bindings.TagsAccessorLegacy

Retrieve multiple entries from this TagsAccessorLegacy by using a list of indices.

Parameters:

selection – List of integer indices of the entries to be retrieved.

__setitem__(*args, **kwargs)

Overloaded function.

  1. __setitem__(self: imfusion.labels._bindings.TagsAccessorLegacy, index: int, value: object) -> None

Change an existing entry by index.

Parameters:
  • index – Index of the entry to be changed.

  • value – Value to set at index.

  2. __setitem__(self: imfusion.labels._bindings.TagsAccessorLegacy, name: str, value: object) -> None

Change an existing entry by name.

Parameters:
  • name – Name of the entry to be changed.

  • value – Value to set at name.

  3. __setitem__(self: imfusion.labels._bindings.TagsAccessorLegacy, index: slice, value: list) -> None

Change multiple entries denoted using Python’s slice notation ([start:stop:step]).

Parameters:
  • slice – slice instance that specifies the indices of entries to be changed. Can be implicitly constructed from Python’s slice notation or created explicitly with slice.

  • value – Value to set at the indices specified by slice.

size(self: TagsAccessorLegacy) int
property names

List of the names of the TagLegacy entries available through this TagsAccessorLegacy

imfusion.labels.deprecate(old: str, new: str, owner: object, is_property=False)
imfusion.labels.wraps(wrapped, assigned=('__module__', '__name__', '__qualname__', '__doc__', '__annotations__'), updated=('__dict__',))

Decorator factory to apply update_wrapper() to a wrapper function

Returns a decorator that invokes update_wrapper() with the decorated function as the wrapper argument and the arguments to wraps() as the remaining arguments. Default arguments are as for update_wrapper(). This is a convenience function to simplify applying partial() to update_wrapper().

imfusion.machinelearning

imfusion.machinelearning - Bindings for Machine Learning

This submodule provides Python bindings for the C++ ImFusion classes that can be used during the training of machine learning models.

exception imfusion.machinelearning.DataElementException

Bases: Exception

exception imfusion.machinelearning.DataItemException

Bases: Exception

exception imfusion.machinelearning.DataLoaderError

Bases: Exception

exception imfusion.machinelearning.ImageSamplerError

Bases: Exception

exception imfusion.machinelearning.MetricException

Bases: Exception

exception imfusion.machinelearning.OperationError

Bases: Exception

class imfusion.machinelearning.AbstractOperation(self: imfusion.machinelearning._bindings.Operation, name: str, processing_policy: imfusion.machinelearning._bindings.Operation.ProcessingPolicy = <ProcessingPolicy.EVERYTHING_BUT_LABELS: 0>, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None)

Bases: Operation

class imfusion.machinelearning.AddCenterBoxOperation(*args, **kwargs)

Bases: Operation

Add an additional channel to the input image with a binary box at its center. The purpose of this operation is to give location information to the model.

Parameters:
  • box_half_width – Half-width of the box in pixels.

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.AddCenterBoxOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.AddCenterBoxOperation, box_half_width: int, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.AddDegradedLabelAsChannelOperation(*args, **kwargs)

Bases: Operation

Append a channel to the image that contains a degraded version of the label. Given the provided blob coordinates, the channel is zero except at blobs at the specified locations. The nonzero values are positive or negative depending on whether they lie inside or outside a label that has been eroded or dilated according to the label_dilation parameter.

Parameters:
  • blob_radius – Radius of each blob, in pixel coordinates. Default: 5.0

  • invert – Extra channel is positive/negative based on the label values except at the blobs, where it is zero. Default: False

  • blob_coordinates – Centers of the blobs in pixel coordinates. Default: []

  • only_positive – If true, output channel is clamped to zero from below. Default: False

  • label_dilation – The dilation (if positive) or erosion (if negative), none if zero. Default: 0.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.AddDegradedLabelAsChannelOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.AddDegradedLabelAsChannelOperation, blob_radius: float = 5.0, invert: bool = False, blob_coordinates: list[numpy.ndarray[numpy.float64[3, 1]]] = [], only_positive: bool = False, label_dilation: float = 0.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.AddPixelwisePredictionChannelOperation(*args, **kwargs)

Bases: Operation

Run an existing pixelwise model and add the result to the input image as additional channels. The prediction is automatically resampled to the input image resolution.

Parameters:
  • config_path – path to the YAML configuration file of the pixelwise model

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.AddPixelwisePredictionChannelOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.AddPixelwisePredictionChannelOperation, config_path: str, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.AddPositionChannelOperation(*args, **kwargs)

Bases: Operation

Add additional channels encoding the position of the pixels. Executes the AddPositionAsChannelAlgorithm internally and uses the same configuration (parameter names and values).

Parameters:
  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.AddPositionChannelOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.AddPositionChannelOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.AddRandomNoiseOperation(*args, **kwargs)

Bases: Operation

Apply a pixelwise random noise to the image intensities.

For type == "uniform": noise is drawn in \([-\textnormal{intensity}, \textnormal{intensity}]\).
For type == "gaussian": noise is drawn from a Gaussian with zero mean and standard deviation equal to \(\textnormal{intensity}\).
For type == "gamma": noise is drawn from a Gamma distribution with \(k = \theta = \textnormal{intensity}\) (note that this noise has a mean of 1.0, so it is biased).
For type == "shot": noise is drawn from a Gaussian with zero mean and standard deviation equal to \(\textnormal{intensity} \cdot \sqrt{\textnormal{pixel\_value}}\).
Parameters:
  • type – Distribution of the noise (‘uniform’, ‘gaussian’, ‘gamma’, ‘shot’). Default: ‘uniform’

  • intensity – Value related to the standard deviation of the generated noise. Default: 0.2

  • probability – Value in [0.0, 1.0] indicating the probability of this operation to be performed. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.AddRandomNoiseOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.AddRandomNoiseOperation, type: str = 'uniform', intensity: float = 0.2, probability: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
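
A minimal construction sketch (how the operation is subsequently applied to data is out of scope here):

import imfusion.machinelearning as ml

# Gaussian noise with standard deviation 0.1, applied with 50% probability
op = ml.AddRandomNoiseOperation(type="gaussian", intensity=0.1,
                                probability=0.5, seed=42)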

class imfusion.machinelearning.AdjustShiftScaleOperation(*args, **kwargs)

Bases: Operation

Apply a shift and scale to each channel of the input image. If shift and scale are vectors with multiple values, then for each channel c, \(\textnormal{output}_c = (\textnormal{input}_c + \textnormal{shift}_c) / \textnormal{scale}_c\). If shift and scale have a single value, then for each channel c, \(\textnormal{output}_c = (\textnormal{input}_c + \textnormal{shift}) / \textnormal{scale}\).

Parameters:
  • shift – Shift parameters as double (one value per channel, or one single value for all channels). Default: [0.0]

  • scale – Scaling parameter as double (one value per channel, or one single value for all channels). Default: [1.0]

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.AdjustShiftScaleOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.AdjustShiftScaleOperation, shift: list[float] = [0.0], scale: list[float] = [1.0], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
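
For example, mapping intensities from [0, 255] to [0, 1] corresponds to a shift of 0 and a scale of 255 (a minimal construction sketch):

import imfusion.machinelearning as ml

op = ml.AdjustShiftScaleOperation(shift=[0.0], scale=[255.0])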

class imfusion.machinelearning.ApplyTopDownFlagOperation(*args, **kwargs)

Bases: Operation

Flip the input image if it has a topDown flag set to false.

Note

The topDown flag is not accessible from Python

Parameters:
  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Other parameters accepted by configure():
  • axes: [‘y’]

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ApplyTopDownFlagOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ApplyTopDownFlagOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.ApproximateToHigherResolutionOperation(*args, **kwargs)

Bases: Operation

Replicate the input image of the operation from the original reference image (in ReferenceImageDataComponent). This operation is mainly meant for post-processing, when a model produces a filtered image at a sub-resolution: it then tries to replicate the output from the original image so that no resolution is lost. It consists of estimating a multiplicative scalar field between the input and the downsampled original image, upsampling it, and re-applying it to the original image.

Parameters:
  • epsilon – Used to avoid division by zero in case the original image has zero values. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ApproximateToHigherResolutionOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ApproximateToHigherResolutionOperation, epsilon: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.ArgMaxOperation(*args, **kwargs)

Bases: Operation

Create a label map with, for each pixel, the index of the input channel with the highest value. The output of this operation is zero-indexed, i.e. no matter which channels were selected, the output is always in the range [0; n - 1], where n is the number of selected channels (+ 1 if a background threshold is set).

Parameters:
  • selected_channels – List of channels to be selected for the argmax. If empty, use all channels (default). Indices are zero indexed, e.g. [0, 1, 2, 3] selects the first 4 channels.

  • background_threshold – If set, the arg-max operation assumes the background is not explicitly encoded, and is only set when all activations are below background_threshold. The output then encodes 0 as the background. E.g. if the first 4 channels were selected, the possible output values would be [0, 1, 2, 3, 4] with 0 for the background and the rest for the selected channels.

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ArgMaxOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ArgMaxOperation, selected_channels: list[int] = [], background_threshold: Optional[float] = None, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
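
For example, taking the argmax over the first four channels with an implicitly encoded background (a minimal construction sketch; the output values are then in [0, 4]):

import imfusion.machinelearning as ml

op = ml.ArgMaxOperation(selected_channels=[0, 1, 2, 3],
                        background_threshold=0.5)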

class imfusion.machinelearning.AxisFlipOperation(*args, **kwargs)

Bases: Operation

Flip image content along specified set of axes.

Parameters:
  • axes – List of strings from {‘x’,’y’,’z’} specifying the axes to flip. For 2D images, only ‘x’ and ‘y’ are valid.

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.AxisFlipOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.AxisFlipOperation, axes: list[str], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.AxisRotationOperation(*args, **kwargs)

Bases: Operation

Rotate image around image axis with axis-specific rotation angles that are signed multiples of 90 degrees.

Parameters:
  • axes – List of strings from {‘x’,’y’,’z’} specifying the axes to rotate around. For 2D images, only [‘z’] is valid.

  • angles – List of integers (with the same length as axes) specifying the rotation angles in degrees. Only +- 0/90/180/270 are valid.

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.AxisRotationOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.AxisRotationOperation, axes: list[str], angles: list[int], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
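
A minimal construction sketch for the two axis-based operations above:

import imfusion.machinelearning as ml

flip = ml.AxisFlipOperation(axes=["x", "y"])
# rotate 90 degrees around the z axis (only signed multiples of 90 are valid)
rot = ml.AxisRotationOperation(axes=["z"], angles=[90])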

class imfusion.machinelearning.BakeDeformationOperation(*args, **kwargs)

Bases: Operation

Deform an image with its attached Deformation and store the result into the returned output image. This operation will return a clone of the input image if it does not have any deformation attached. The output image will not have an attached Deformation.

Parameters:
  • adjust_size – Whether the size of the output image should be automatically adjusted to fit the deformed content. Default: True

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.BakeDeformationOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.BakeDeformationOperation, adjust_size: bool = True, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.BakePhotometricInterpretationOperation(*args, **kwargs)

Bases: Operation

Bake the Photometric Interpretation into the intensities of the image. If the image has a Photometric Interpretation of MONOCHROME1, the intensities will be inverted using: \(\textnormal{output} = \textnormal{max} - (\textnormal{input} - \textnormal{min})\)

Parameters:
  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.BakePhotometricInterpretationOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.BakePhotometricInterpretationOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.BakeTransformationOperation(*args, **kwargs)

Bases: Operation

Apply the rotation contained in the input image matrix.

Parameters:
  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.BakeTransformationOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.BakeTransformationOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.BlobsFromKeypointsOperation(*args, **kwargs)

Bases: Operation

Transforms keypoints into an actual image (a blob map with the same size as the image). Requires an input image field called “data” (can be overridden with the image_field_name parameter) and keypoints called “keypoints” (can be overridden with the apply_to parameter).

Parameters:
  • blob_radius – Size of the generated blobs in mm. Default: 5.0

  • image_field_name – Field name of the reference image. Default: “data”

  • blobs_field_name – Field name of the output blob map. Default: “label”

  • label_map_mode – Generate ubyte label map instead of multi-channel gaussian blobs. Default: False

  • sharp_blobs – Specifies whether to sharpen the profiles of the blob function, making its support more compact. Default: False

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.BlobsFromKeypointsOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.BlobsFromKeypointsOperation, blob_radius: float = 5.0, image_field_name: str = 'data', blobs_field_name: str = 'label', label_map_mode: bool = False, sharp_blobs: bool = False, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.BoundingBoxElement(self: BoundingBoxElement, boundingbox_set: BoundingBoxSet)

Bases: DataElement

Initialize a BoundingBoxElement

Parameters:

boundingbox_set – In case the argument is a numpy array, the array shape is expected to be [N, C, B, 2, 3], where N is the batch size, C the number of different box types (channels), and B the number of instances of the same box type. Each box is expected to have dimensions [2, 3]. If the argument is a nested list, the same concept applies to each level of nesting.

property boxes

Access to the underlying BoundingBoxSet.

class imfusion.machinelearning.BoundingBoxSet(*args, **kwargs)

Bases: Data

Class for managing sets of bounding boxes

The class is meant to be used in parallel with SharedImageSet. For each frame in the set, and for each type of bounding box (e.g. car, airplane, lung, cat), there is a list of boxes that encompass an instance of that type in the reference image. In terms of tensor dimensions, this would be represented as [N, C, B], where N is the batch size, C is the number of channels (i.e. the types of boxes), and B is the number of boxes for the same instance type. Each Box has a dimension of [2, 3], consisting of a pair of vec3 describing center and extent. See the Box class for more information.

Note

The API for this class is experimental and may change soon.

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.BoundingBoxSet, boxes: list[list[list[imfusion.machinelearning._bindings.Box]]]) -> None

  2. __init__(self: imfusion.machinelearning._bindings.BoundingBoxSet, boxes: list[list[list[tuple[numpy.ndarray[numpy.float64[3, 1]], numpy.ndarray[numpy.float64[3, 1]]]]]]) -> None

  3. __init__(self: imfusion.machinelearning._bindings.BoundingBoxSet, boxes: list[list[list[tuple[list[float], list[float]]]]]) -> None

  4. __init__(self: imfusion.machinelearning._bindings.BoundingBoxSet, array: numpy.ndarray[numpy.float64]) -> None

static load(location: str) BoundingBoxSet | None

Load a BoundingBoxSet from an ImFusion file.

Parameters:

location (str) – input path.

save(self: BoundingBoxSet, location: str) None

Save a BoundingBoxSet as an ImFusion file.

Parameters:

location (str) – output path.

property data
class imfusion.machinelearning.Box(*args, **kwargs)

Bases: pybind11_object

Bounding Box class for ML tasks. Since bounding boxes are axis aligned by definition, a Box is represented by its center and its extent. This representation allows for easy rotation, augmentation etc.

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.Box, center: numpy.ndarray[numpy.float64[3, 1]], extent: numpy.ndarray[numpy.float64[3, 1]]) -> None

  2. __init__(self: imfusion.machinelearning._bindings.Box, center_and_extent: tuple[numpy.ndarray[numpy.float64[3, 1]], numpy.ndarray[numpy.float64[3, 1]]]) -> None

property center
property extent
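
A minimal sketch of building a BoundingBoxSet from Box instances (coordinates are illustrative):

import numpy as np
import imfusion.machinelearning as ml

box = ml.Box(center=np.array([50.0, 60.0, 0.0]),
             extent=np.array([20.0, 30.0, 1.0]))
# nesting is [N, C, B]: one frame, one box type, one instance
boxes = ml.BoundingBoxSet([[[box]]])
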
class imfusion.machinelearning.CenterROISampler(*args, **kwargs)

Bases: ImageROISampler

Sampler which samples one ROI from the input image and label map with a target size. The ROI is centered on the image center. The arrays will be padded if the target size is larger than the input image.

Parameters:
  • roi_size – Target size of the ROIs to be extracted as [Width, Height, Slices]

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Other parameters accepted by configure():
  • label_padding_mode: EnumStringParam assigned to clamp in {clamp; mirror; zero}

  • padding_mode: EnumStringParam assigned to clamp in {clamp; mirror; zero}

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.CenterROISampler, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.CenterROISampler, roi_size: numpy.ndarray[numpy.int32[3, 1]], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.CheckDataOperation(*args, **kwargs)

Bases: Operation

Checks if all input data match a set of expected conditions. If parameters are zero or empty, they are not checked.

Parameters:
  • num_dimensions – Expected number of dimensions in the input. Set to 0 to skip this check. Default: 0

  • num_images – Expected number of images in the input. Set to 0 to skip this check. Default: 0

  • num_channels – Expected number of channels in the input. Set to 0 to skip this check. Default: 0

  • data_type – Expected datatype of input. Must be one of: [“”, “float”, “uint8”, “int8”, “uint16”, “int16”, “uint32”, “int32”, “double”]. Empty string skips this check. Default: “”

  • dimensions – Expected spatial dimensions [width, height, depth] of input image. Set all dimensions to 0 to skip checking it. Default: [0,0,0]

  • spacing – Expected spacing [x, y, z] of input image in mm. Set all components to 0 to skip checking it. Default: [0,0,0]

  • label_match_input – Whether label dimensions and channel count must match the input image. Default: False

  • label_type – Expected datatype of labels. Must be one of: [“”, “float”, “uint8”, “int8”, “uint16”, “int16”, “uint32”, “int32”, “double”]. Empty string skips this check. Default: “”

  • label_values – List of required label values (excluding 0). No other values are allowed. When check_label_values_are_subset is false, all must be present. Empty list skips this check. Default: []

  • label_dimensions – Expected spatial dimensions [width, height, depth] of label image. Set all dimensions to 0 to skip checking it. Default: [0,0,0]

  • label_channels – Expected number of channels in the label image. Set to 0 to skip this check. Default: 0

  • check_rotation_matrix – Whether to verify the input image has no rotation matrix. Default: False

  • check_deformation – Whether to verify the input image has no deformation. Default: False

  • check_shift_scale – Whether to verify the input image has identity intensity transformation. Default: False

  • fail_on_error – Whether to raise an exception on validation failure (True) or just log an error (False). Default: True

  • save_path_on_error – Path where to save the failing input as an ImFusion file (.imf) when validation fails. Empty string disables saving. Default: “”

  • check_label_values_are_subset – Whether to allow label values not listed in label_values. When True, unlisted values are permitted. Default: False

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.CheckDataOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.CheckDataOperation, num_dimensions: int = 0, num_images: int = 0, num_channels: int = 0, data_type: str = '', dimensions: numpy.ndarray[numpy.int32[3, 1]] = array([0, 0, 0], dtype=int32), spacing: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), label_match_input: bool = False, label_type: str = '', label_values: list[int] = [], label_dimensions: numpy.ndarray[numpy.int32[3, 1]] = array([0, 0, 0], dtype=int32), label_channels: int = 0, check_rotation_matrix: bool = False, check_deformation: bool = False, check_shift_scale: bool = False, fail_on_error: bool = True, save_path_on_error: str = '', check_label_values_are_subset: bool = False, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
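A minimal construction sketch (the values are hypothetical; the keyword names follow the parameter list above). The resulting operation can then be added to a Dataset pipeline, e.g. via Dataset.preprocess():

>>> import imfusion.machinelearning as ml
>>> check = ml.CheckDataOperation(
...     num_dimensions=3,     # expect 3D volumes
...     num_channels=1,       # expect single-channel data
...     data_type='float',
...     fail_on_error=True,   # raise instead of just logging
... )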

class imfusion.machinelearning.ClipOperation(*args, **kwargs)

Bases: Operation

Clip the intensities to a minimum and maximum value: all intensities outside this range will be clipped to the range border.

Parameters:
  • min – Minimum intensity of the output image. Default: 0.0

  • max – Maximum intensity of the output image. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ClipOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ClipOperation, min: float = 0.0, max: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
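For instance, a sketch that clips intensities into [0, 1] (the documented defaults), so values below 0 become 0 and values above 1 become 1:

>>> import imfusion.machinelearning as ml
>>> clip = ml.ClipOperation(min=0.0, max=1.0)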

class imfusion.machinelearning.ComputingDevice(*args, **kwargs)

Bases: pybind11_object

Members:

FORCE_CPU

GPU_IF_GL_IMAGE

GPU_IF_OPENGL

FORCE_GPU

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ComputingDevice, value: int) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ComputingDevice, arg0: str) -> None

FORCE_CPU = <ComputingDevice.FORCE_CPU: 0>
FORCE_GPU = <ComputingDevice.FORCE_GPU: 3>
GPU_IF_GL_IMAGE = <ComputingDevice.GPU_IF_GL_IMAGE: 1>
GPU_IF_OPENGL = <ComputingDevice.GPU_IF_OPENGL: 2>
property name
property value
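Since every Operation constructor accepts the keyword-only device argument, these members can be used to pin an operation to a particular device; a sketch:

>>> import imfusion.machinelearning as ml
>>> clip_cpu = ml.ClipOperation(min=0.0, max=1.0,
...                             device=ml.ComputingDevice.FORCE_CPU)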
class imfusion.machinelearning.ConcatenateNeighboringFramesToChannelsOperation(*args, **kwargs)

Bases: Operation

This operation iterates over each frame, augmenting the channel dimension by appending information from neighboring frames on both sides. For instance, with radius=1, an image with dimensions (10, 1, 256, 256, 1) becomes a (10, 1, 256, 256, 3) image: each frame then includes its predecessor (channel 0), itself (channel 1), and its successor (channel 2). For multi-channel inputs, only the first channel is used for concatenation; the other channels are appended after these in the output. With reduction_mode, the central and augmented frames can be reduced to a single frame to preserve the original number of channels.

Parameters:
  • radius – Defines the number of neighboring frames added to each side within the channel dimension. Default: 0

  • reduction_mode – Determines if and how to reduce neighboring frames. Options: “none” (default, concatenates), “average”, “maximum”.

  • same_padding – Use frame replication (not zero-padding) at sequence edges. Default: True

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ConcatenateNeighboringFramesToChannelsOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ConcatenateNeighboringFramesToChannelsOperation, radius: int = 0, reduction_mode: str = 'none', same_padding: bool = True, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
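A construction sketch matching the example from the description above: with radius=1, each frame gains its predecessor and successor as extra channels, so a (10, 1, 256, 256, 1) set becomes (10, 1, 256, 256, 3):

>>> import imfusion.machinelearning as ml
>>> concat = ml.ConcatenateNeighboringFramesToChannelsOperation(
...     radius=1,
...     reduction_mode='none',  # concatenate instead of reducing
...     same_padding=True,      # replicate edge frames instead of zero-padding
... )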

class imfusion.machinelearning.ConvertSlicesToVolumeOperation(*args, **kwargs)

Bases: Operation

Stacks a set of 2D images extracted along a specified axis into an actual 3D volume.

Parameters:
  • axis – Axis along which to extract slices (must be either ‘x’, ‘y’ or ‘z’)

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ConvertSlicesToVolumeOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ConvertSlicesToVolumeOperation, axis: str = 'z', *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.ConvertToGrayOperation(*args, **kwargs)

Bases: Operation

Convert the input image to a single channel image by averaging all channels.

Parameters:
  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ConvertToGrayOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ConvertToGrayOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.ConvertVolumeToSlicesOperation(*args, **kwargs)

Bases: Operation

Unstacks a 3D volume to a set of 2D images extracted along one of the axes.

Parameters:
  • axis – Axis along which to extract slices (must be either ‘x’, ‘y’ or ‘z’)

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ConvertVolumeToSlicesOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ConvertVolumeToSlicesOperation, axis: str = 'z', *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
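Since this operation mirrors ConvertSlicesToVolumeOperation, the two can be paired to unstack a volume and re-stack it along the same axis; a sketch:

>>> import imfusion.machinelearning as ml
>>> to_slices = ml.ConvertVolumeToSlicesOperation(axis='z')
>>> to_volume = ml.ConvertSlicesToVolumeOperation(axis='z')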

class imfusion.machinelearning.ConvolutionalCRFOperation(*args, **kwargs)

Bases: Operation

Adapts a segmentation map or raw model output to the image content.

Parameters:
  • adaptiveness – Indicates how much the segmentation should be adapted to the image content. Range [0, 1]. Default: 0.5

  • smooth_weight – Weight of the smoothness kernel. Higher values create a greater penalty for nearby pixels having different labels. Default: 0.1

  • radius – Radius of the message passing window in pixels. Default: 5

  • downsampling – Amount of downsampling used in message passing, makes the effective radius of the message passing window larger. Default: 2

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Other parameters accepted by configure():
  • label_compatibilites:

  • convergence_threshold: 0.001

  • max_num_iter: 50

  • smoothness_sigma: 1.0

  • appearance_sigma: 0.25

  • positive_label_score: 1.0

  • negative_label_score: -1.0

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ConvolutionalCRFOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ConvolutionalCRFOperation, adaptiveness: float = 0.5, smooth_weight: float = 0.1, radius: int = 5, downsampling: int = 2, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.CopyOperation(*args, **kwargs)

Bases: Operation

Copies a set of fields of a data item.

Parameters:
  • source – list of the elements to be copied

  • target – list of names of the new elements (must match the size of source)

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.CopyOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.CopyOperation, source: list[str], target: list[str], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.CropAroundLabelMapOperation(*args, **kwargs)

Bases: Operation

Crops the input image and label to the bounds of the specified label values, and in the resulting label sets the selected label values to 1 and all other values to 0.

Parameters:
  • label_values – Label values to select. Default: [1]

  • margin – Margin, in pixels. Default: 1

  • reorder – Whether the label values in the result should be remapped to 1, 2, 3, … following their order in label_values. Default: False

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.CropAroundLabelMapOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.CropAroundLabelMapOperation, label_values: list[int] = [1], margin: int = 1, reorder: bool = False, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.CropOperation(*args, **kwargs)

Bases: Operation

Crop input images and label maps with a given size and offset.

Parameters:
  • size – List of integers representing the target dimensions of the image to be cropped. If -1 is specified, the whole dimension will be kept, starting from the corresponding offset.

  • offset – List of integers representing the position of the lower corner of the cropped image

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.CropOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.CropOperation, size: numpy.ndarray[numpy.int32[3, 1]], offset: numpy.ndarray[numpy.int32[3, 1]], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
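A construction sketch with hypothetical numbers; size and offset are 3-vectors of int32, and a size entry of -1 keeps that whole dimension:

>>> import numpy as np
>>> import imfusion.machinelearning as ml
>>> crop = ml.CropOperation(
...     size=np.array([64, 64, -1], dtype=np.int32),  # keep the whole depth
...     offset=np.array([10, 10, 0], dtype=np.int32),
... )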

class imfusion.machinelearning.CutOutOperation(*args, **kwargs)

Bases: Operation

Cut out input images and label maps with a given size, offset and fill values.

Parameters:
  • size – List of 3-dim vectors representing the target dimensions of the image to be cut out. Default: [1, 1, 1]

  • offset – List of 3-dim vectors representing the position of the lower corner of the cut out area. Default: [0, 0, 0]

  • fill_value – List of intensity values (floats) for filling the cut-out region. Default: [0.0]

  • size_units – Units of the size parameter. Default: MM

  • offset_units – Units of the offset parameter. Default: VOXEL

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.CutOutOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.CutOutOperation, size: list[numpy.ndarray[numpy.float64[3, 1]]] = [array([1., 1., 1.])], offset: list[numpy.ndarray[numpy.float64[3, 1]]] = [array([0., 0., 0.])], fill_value: list[float] = [0.0], size_units: imfusion.machinelearning._bindings.ParamUnit = MM, offset_units: imfusion.machinelearning._bindings.ParamUnit = VOXEL, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.DataElement

Bases: pybind11_object

__iter__() Iterator
Return type:

Iterator

clone(self: DataElement, with_data: bool = True) DataElement

Create a copy of the element.

numpy(copy=False)
Parameters:

self (DataElement) –

split(self: DataElement) list[DataElement]

Split an element into several ones along the batch dimension.

static stack(elements: list[DataElement]) DataElement

Stack several elements along the batch dimension.

tag_as_target(self: DataElement) None

Mark this element as being a target.

torch(device: device = None, dtype: dtype = None, same_as: Tensor = None) Tensor

Convert SharedImageSet or a SharedImage to a torch.Tensor.

Parameters:
  • self (DataElement | SharedImageSet | SharedImage) – Instance of SharedImageSet or SharedImage (this function bound as a method to SharedImageSet and SharedImage)

  • device (device) – Target device for the new torch.Tensor

  • dtype (dtype) – Type of the new torch.Tensor

  • same_as (Tensor) – Template tensor whose device and dtype configuration should be matched. device and dtype are still applied afterwards.

Returns:

New torch.Tensor

Return type:

Tensor
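A usage sketch, assuming a PyTorch installation; element is a placeholder for an existing DataElement (or SharedImageSet / SharedImage) and template for an existing torch.Tensor:

>>> import torch
>>> tensor = element.torch(device=torch.device('cuda'), dtype=torch.float32)
>>> tensor = element.torch(same_as=template)  # match template's device and dtype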

untag_as_target(self: DataElement) None

Remove target status from this element.

property batch_size

Returns the batch size of the element.

property components

Returns the list of DataComponents for this element.

property content

Access to the underlying Data.

property dimension

Returns the dimensionality of the underlying data

property is_target

Returns true if this element is marked as a target.

property ndim

Returns the dimensionality of the underlying data

property type

Returns the type of the underlying data

class imfusion.machinelearning.DataItem(self: DataItem, elements: dict[str, DataElement] = {})

Bases: Data

Class managing a dictionary of DataElements. This class is used as the container for applying Operations to a collection of heterogeneous data in a consistent way. It implements the concept of batch size for the contained elements: a DataItem can be split or stacked along the batch axis like the contained DataElements, and it therefore enforces that all stored DataElements have a consistent batch size.

Construct a DataItem with existing Elements if provided.

Parameters:

elements (Dict[str, imfusion.DataElement]) – elements to be inserted into the DataItem, default: {}

__getitem__(self: DataItem, arg0: str) DataElement
__iter__(self: DataItem) Iterator[tuple[str, DataElement]]
__setitem__(*args, **kwargs)

Overloaded function.

  1. __setitem__(self: imfusion.machinelearning._bindings.DataItem, field: str, element: imfusion.machinelearning._bindings.DataElement) -> None

Set a DataElement in the DataItem.

Parameters:
  • field (str) – field name

  • element (DataElement) – element to be inserted into the DataItem; if the field exists, it is overwritten.

  2. __setitem__(self: imfusion.machinelearning._bindings.DataItem, field: str, element: imfusion.machinelearning._bindings.ImageElement) -> None

Set an ImageElement in the DataItem.

Parameters:
  • field (str) – field name

  • element (ImageElement) – element to be inserted into the DataItem; if the field exists, it is overwritten.

  3. __setitem__(self: imfusion.machinelearning._bindings.DataItem, field: str, element: imfusion.machinelearning._bindings.KeypointsElement) -> None

Set a KeypointsElement in the DataItem.

Parameters:
  • field (str) – field name

  • element (KeypointsElement) – element to be inserted into the DataItem; if the field exists, it is overwritten.

  4. __setitem__(self: imfusion.machinelearning._bindings.DataItem, field: str, element: imfusion.machinelearning._bindings.BoundingBoxElement) -> None

Set a BoundingBoxElement in the DataItem.

Parameters:
  • field (str) – field name

  • element (BoundingBoxElement) – element to be inserted into the DataItem; if the field exists, it is overwritten.

  5. __setitem__(self: imfusion.machinelearning._bindings.DataItem, field: str, element: imfusion.machinelearning._bindings.VectorElement) -> None

Set a VectorElement in the DataItem.

Parameters:
  • field (str) – field name

  • element (VectorElement) – element to be inserted into the DataItem; if the field exists, it is overwritten.

  6. __setitem__(self: imfusion.machinelearning._bindings.DataItem, field: str, element: imfusion.machinelearning._bindings.TensorElement) -> None

Set a TensorElement in the DataItem.

Parameters:
  • field (str) – field name

  • element (TensorElement) – element to be inserted into the DataItem; if the field exists, it is overwritten.

  7. __setitem__(self: imfusion.machinelearning._bindings.DataItem, field: str, shared_image_set: imfusion._bindings.SharedImageSet) -> None

Set a SharedImageSet in the DataItem.

Parameters:
  • field (str) – field name

  • element (SharedImageSet) – element to be inserted into the DataItem; if the field exists, it is overwritten.

  8. __setitem__(self: imfusion.machinelearning._bindings.DataItem, field: str, keypoint_set: imfusion.machinelearning._bindings.KeypointSet) -> None

Set a KeypointSet in the DataItem.

Parameters:
  • field (str) – field name

  • element (imfusion.KeypointSet) – element to be inserted into the DataItem; if the field exists, it is overwritten.

  9. __setitem__(self: imfusion.machinelearning._bindings.DataItem, field: str, bboxes: imfusion.machinelearning._bindings.BoundingBoxSet) -> None

Set a BoundingBoxSet in the DataItem.

Parameters:
  • field (str) – field name

  • element (BoundingBoxSet) – element to be inserted into the DataItem; if the field exists, it is overwritten.

  10. __setitem__(self: imfusion.machinelearning._bindings.DataItem, field: str, tensor: imfusion.machinelearning._bindings.Tensor, batch_size: int = 1) -> None

Set a Tensor in the DataItem.

Parameters:
  • field (str) – field name

  • element (Tensor) – element to be inserted into the DataItem; if the field exists, it is overwritten.
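A sketch of populating an item, where images and labels are placeholders for existing SharedImageSet instances (the overloads above wrap them into elements implicitly):

>>> import imfusion.machinelearning as ml
>>> item = ml.DataItem()
>>> item['image'] = images
>>> item['label'] = labels
>>> item['label'].tag_as_target()  # mark the label element as a target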

clear(self: DataItem) None

Clears the data item and leaves it empty.

clone(self: DataItem, with_data: bool = True) DataItem

Returns a deep copy of the data item.

contains(self: DataItem, arg0: str) bool

Checks if the data item contains a field with the given name.

get(*args, **kwargs)

Overloaded function.

  1. get(self: imfusion.machinelearning._bindings.DataItem, field: str) -> imfusion.machinelearning._bindings.DataElement

Returns a reference to an element (raises a KeyError if field is not in item)

Parameters:

field (str) – Name of the field to retrieve.

  2. get(self: imfusion.machinelearning._bindings.DataItem, field: str, default: imfusion.machinelearning._bindings.DataElement) -> imfusion.machinelearning._bindings.DataElement

Returns a reference to an element (or the default value if field is not in item)

Parameters:
  • field (str) – Name of the field to retrieve.

  • default (DataElement) – default value to return if field is not in DataItem.

get_all(self: DataItem, arg0: ElementType) set[DataElement]

Returns a set of all elements of the specified type.

items(self: DataItem) Iterator[tuple[str, DataElement]]
keys(self: DataItem) Iterator[str]
static load(location: str) DataItem

Load data item from ImFusion file.

Parameters:

location (str) – input path.

static merge(items: list[DataItem]) DataItem

Merge several data items by copying all their fields into the output item

Parameters:

items (List[DataItem]) – List of input items to merge.

Note

Raises an exception if the same field is contained in more than one item.

pop(self: DataItem, field: str) DataElement

Remove the DataElement associated to the given field and returns it.

Parameters:

field (str) – Name of the field to remove.

save(self: DataItem, location: str) None

Save data item as ImFusion file.

Parameters:

location (str) – output path.

set(*args, **kwargs)

Overloaded function.

  1. set(self: imfusion.machinelearning._bindings.DataItem, field: str, element: imfusion.machinelearning._bindings.DataElement) -> None

Set a DataElement in the DataItem.

Parameters:
  • field (str) – field name

  • element (DataElement) – element to be inserted into the DataItem; if the field exists, it is overwritten.

  2. set(self: imfusion.machinelearning._bindings.DataItem, field: str, element: imfusion.machinelearning._bindings.ImageElement) -> None

Set an ImageElement in the DataItem.

Parameters:
  • field (str) – field name

  • element (ImageElement) – element to be inserted into the DataItem; if the field exists, it is overwritten.

  3. set(self: imfusion.machinelearning._bindings.DataItem, field: str, element: imfusion.machinelearning._bindings.KeypointsElement) -> None

Set a KeypointsElement in the DataItem.

Parameters:
  • field (str) – field name

  • element (KeypointsElement) – element to be inserted into the DataItem; if the field exists, it is overwritten.

  4. set(self: imfusion.machinelearning._bindings.DataItem, field: str, element: imfusion.machinelearning._bindings.BoundingBoxElement) -> None

Set a BoundingBoxElement in the DataItem.

Parameters:
  • field (str) – field name

  • element (BoundingBoxElement) – element to be inserted into the DataItem; if the field exists, it is overwritten.

  5. set(self: imfusion.machinelearning._bindings.DataItem, field: str, element: imfusion.machinelearning._bindings.VectorElement) -> None

Set a VectorElement in the DataItem.

Parameters:
  • field (str) – field name

  • element (VectorElement) – element to be inserted into the DataItem; if the field exists, it is overwritten.

  6. set(self: imfusion.machinelearning._bindings.DataItem, field: str, element: imfusion.machinelearning._bindings.TensorElement) -> None

Set a TensorElement in the DataItem.

Parameters:
  • field (str) – field name

  • element (TensorElement) – element to be inserted into the DataItem; if the field exists, it is overwritten.

  7. set(self: imfusion.machinelearning._bindings.DataItem, field: str, shared_image_set: imfusion._bindings.SharedImageSet) -> None

Set a SharedImageSet in the DataItem.

Parameters:
  • field (str) – field name

  • element (SharedImageSet) – element to be inserted into the DataItem; if the field exists, it is overwritten.

  8. set(self: imfusion.machinelearning._bindings.DataItem, field: str, keypoint_set: imfusion.machinelearning._bindings.KeypointSet) -> None

Set a KeypointSet in the DataItem.

Parameters:
  • field (str) – field name

  • element (KeypointSet) – element to be inserted into the DataItem; if the field exists, it is overwritten.

  9. set(self: imfusion.machinelearning._bindings.DataItem, field: str, bounding_box_set: imfusion.machinelearning._bindings.BoundingBoxSet) -> None

Set a BoundingBoxSet in the DataItem.

Parameters:
  • field (str) – field name

  • element (BoundingBoxSet) – element to be inserted into the DataItem; if the field exists, it is overwritten.

  10. set(self: imfusion.machinelearning._bindings.DataItem, field: str, tensor: imfusion.machinelearning._bindings.Tensor, batch_size: int = 1) -> None

Set a Tensor in the DataItem.

Parameters:
  • field (str) – field name

  • element (Tensor) – element to be inserted into the DataItem; if the field exists, it is overwritten.

static split(item: DataItem) list[DataItem]

Split a data item along the batch dimension into items, each with batch size 1

Parameters:

item (DataItem) – Item to split.

static stack(items: list[DataItem]) DataItem

Stack several data items along the batch dimension.

Parameters:

items (List[DataItem]) – List of input items to stack.
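split() and stack() invert each other; a sketch where item is a placeholder for a DataItem with batch size greater than 1:

>>> import imfusion.machinelearning as ml
>>> singles = ml.DataItem.split(item)      # list of items with batch size 1
>>> restored = ml.DataItem.stack(singles)  # stacked back along the batch axis
>>> assert restored.batch_size == item.batch_size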

update(self: DataItem, other: DataItem, clone: bool = True) None

Update the contents of self with elements from other

Parameters:
  • other (DataItem) – DataItem that holds the information that should be added to self.

  • clone (bool) – Indicates whether other should be cloned before the update. If False, other will be invalidated. Default: True.

Note

Raises an exception if the batch_size of other does not match self.

values(self: DataItem) Iterator[DataElement]
property batch_size

Returns the batch size of the fields, zero if no elements are present, or None if there are inconsistencies within them.

property dimension

Returns the dimensionality of the elements, or zero if no elements are present or if there are inconsistencies within them.

property fields

Returns the set of fields contained in the data item.

property ndim

Returns the dimensionality of the elements, or zero if no elements are present or if there are inconsistencies within them.

class imfusion.machinelearning.DataLoaderSpecs(self: DataLoaderSpecs, arg0: str, arg1: Properties, arg2: Phase, arg3: list[str], arg4: str)

Bases: pybind11_object

property configuration
property inputs
property name
property output
property phase
class imfusion.machinelearning.Dataset(*args, **kwargs)

Bases: pybind11_object

Class for creating an iterable dataset by chaining data loading and transforming operations executed in a lazy fashion. The Dataset implements an iterable interface, which allows the use of the iter() and next() built-ins as well as for loops.

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.Dataset, data_lists: list[tuple[dict[int, str], list[str]]], shuffle: bool = False, verbose: bool = False) -> None

Constructs a dataset from lists of filenames.

Parameters:
  • data_lists (list) – list of txt files, each listing file paths; see the signature above for the complete type.

  • shuffle (bool) – shuffle file order. Default: false

  • verbose (bool) – enable verbose logging. Default: false

  2. __init__(self: imfusion.machinelearning._bindings.Dataset, read_from: str, reader_properties: imfusion._bindings.Properties, verbose: bool = False) -> None

Constructs a dataset by specifying a reader type as a string.

Parameters:
  • read_from (string) – specifies the type of reader that is created implicitly. Options: “filesystem”.

  • reader_properties (Properties) – properties used to configure the reader.

  • verbose (bool) – print debug information when running the data loader. Default: false

  3. __init__(self: imfusion.machinelearning._bindings.Dataset, verbose: bool = False) -> None

Constructs an empty dataset.

Parameters:

verbose (bool) – print debug information when running the data loader. Default: false
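A construction sketch using the first overload; the txt file name is hypothetical and the dict is assumed to map field indices to field names:

>>> import imfusion.machinelearning as ml
>>> dataset = ml.Dataset(
...     data_lists=[({0: 'image', 1: 'label'}, ['train_files.txt'])],
...     shuffle=True,
... )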

__iter__(self: Dataset) Dataset
__next__(self: Dataset) DataItem
static available_decorators() list[str]

Returns the keys of the registered decorator functions

static available_filter_functions() list[str]

Returns filter function keys to be used in Dataset.filter decorator function

static available_map_functions() list[str]

Returns map function keys to be used in Dataset.map decorator function

batch(self: Dataset, batch_size: int = 1, pad: bool = False, overlap: int = 0) Dataset

Batches the next batch_size items into a single one before returning it.

Parameters:
  • batch_size (int) – batch size.

  • pad (bool) – if true, the last batch is filled up with the last data item until it matches batch_size.

  • overlap (int) – number of items overlapping in consecutive batches. Must be less than batch_size.

build_pipeline(self: imfusion.machinelearning._bindings.Dataset, property_list: list[imfusion.machinelearning._bindings.DataLoaderSpecs], config_phase: imfusion.machinelearning._bindings.Phase = <Phase.ALWAYS: 7>) None

Configures the Dataset decorators to be used based on a list of Properties.

Parameters:
  • property_list (list) – list of imfusion.Properties or list of dict()

  • config_phase (Phase) – configuration phase.

cache(self: Dataset, make_exclusive_cpu: bool = True, lazy: bool = True, compression_level: int = 0, shuffle: bool = False) Dataset

Deprecated: please use memory_cache instead.

disk_cache(self: Dataset, location: str = '', lazy: bool = True, reload_from_disk: bool = True, compression: bool = False, shuffle: bool = False) Dataset

Caches the already loaded dataset in a persistent manner (at a disk location). Raises a DataLoaderError if the dataset is not countable.

Parameters:
  • location (string) – path to the folder where all the data will be cached.

  • lazy (bool) – if false, the cache is filled upon construction (otherwise as items are requested).

  • reload_from_disk (bool) – try to reload the cache from a previous session (reload is the deprecated name of this parameter).

  • compression (bool) – use ZStandard compression.

  • shuffle (bool) – re-shuffle the cache order every epoch.

filter(*args, **kwargs)

Overloaded function.

  1. filter(self: imfusion.machinelearning._bindings.Dataset, func: Callable[[imfusion.machinelearning._bindings.DataItem], bool]) -> imfusion.machinelearning._bindings.Dataset

Filters the dataset according to a user defined function. Note: Filtering makes the dataset uncountable, since the func output is conditional.

Parameters:

func (def func(dict) -> bool) – filtering criterion to be applied to each input item. The input must be of the form dict[str, SharedImageSet]

  2. filter(self: imfusion.machinelearning._bindings.Dataset, func_name: str) -> imfusion.machinelearning._bindings.Dataset

Filters the dataset according to a user defined function. Note: Filtering makes the dataset uncountable, since the func output is conditional.

Parameters:

func_name (str) – name of a registered filter function specifying a criterion to be applied to each input item. The input must be of the form dict[str, SharedImageSet]

map(*args, **kwargs)

Overloaded function.

  1. map(self: imfusion.machinelearning._bindings.Dataset, func: Callable[[imfusion.machinelearning._bindings.DataItem], None], num_parallel_calls: int = 1) -> imfusion.machinelearning._bindings.Dataset

Applies a mapping to each item of the dataset. Optionally specify the number of asynchronous threads (num_parallel_calls) used for the mapping.

Parameters:
  • func (def func(dict) -> dict) – function mapping the input items. The input and output must be of the form dict[str, SharedImageSet]

  • num_parallel_calls (int) – number of asynchronous threads used for the mapping. Defaults to 1.

  2. map(self: imfusion.machinelearning._bindings.Dataset, func_name: str, num_parallel_calls: int = 1) -> imfusion.machinelearning._bindings.Dataset

Applies a mapping to each item of the dataset. Optionally specify the number of asynchronous threads (num_parallel_calls) used for the mapping.

Parameters:
  • func_name (str) – name of a registered function, mapping the input items. The input and output must be of the form dict[str, SharedImageSet]

  • num_parallel_calls (int) – number of asynchronous threads used for the mapping. Defaults to 1.

memory_cache(self: Dataset, make_exclusive_cpu: bool = True, lazy: bool = True, compression_level: int = 0, shuffle: bool = False, num_threads: int = 1) Dataset

Caches the already loaded dataset in memory. Raises a DataLoaderError if the dataset is not countable. Raises a MemoryError if the system runs out of memory.

Parameters:
  • make_exclusive_cpu (bool) – keep the data exclusively on CPU.

  • lazy (bool) – if false, the cache is filled upon construction (otherwise as items are requested).

  • compression_level (int) – controls compression, valid values are between 0 and 20. Higher means more compression, but slower. 0 disables compression.

  • shuffle (bool) – re-shuffle the cache order every epoch.

  • num_threads (int) – number of threads to use for copying from the cache

prefetch(self: Dataset, prefetch_size: int, sync_to_gl: bool = True) Dataset

Prefetches items from the underlying loader in a background thread.

Parameters:
  • prefetch_size (int) – number of items to prefetch.

  • sync_to_gl (bool) – synchronize the objects to GL memory after being prefetched.

preprocess(*args, **kwargs)

Overloaded function.

  1. preprocess(self: imfusion.machinelearning._bindings.Dataset, preprocessing_pipeline: list[tuple[str, imfusion._bindings.Properties, imfusion.machinelearning._bindings.Phase]], exec_phase: imfusion.machinelearning._bindings.Phase = <Phase.ALWAYS: 7>, num_parallel_calls: int = 1) -> imfusion.machinelearning._bindings.Dataset

Adds a generic preprocessing step to the data pipeline. The processing is performed by the underlying sequence of Operations.

Parameters:
  • preprocessing_pipeline – List of specifications to construct the underlying OperationsSequence. Each specification must be a tuple consisting of the name of the operation, Properties for configuring it, and its Phase.

  • exec_phase

    Execution phase for the entire preprocessing pipeline. The execution will run only those operations whose phase (specified in the specs) corresponds to the current exec_phase, with the following exceptions:

    1. Operations marked with phase == Phase.Always are always run regardless of the exec_phase.

    2. If exec_phase == Phase.Always, all operations in the preprocessing pipeline are run regardless of their individual phase.

  • num_parallel_calls – number of asynchronous threads used for the preprocessing. Defaults to 1.

  2. preprocess(self: imfusion.machinelearning._bindings.Dataset, operations: list[imfusion.machinelearning._bindings.Operation], num_parallel_calls: int = 1) -> imfusion.machinelearning._bindings.Dataset

Adds a generic preprocessing step to the data pipeline. The processing is performed by the underlying sequence of Operations.

Parameters:
  • operations – List of operations that will do the actual processing.

  • num_parallel_calls – number of asynchronous threads used for the preprocessing. Defaults to 1.
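A pipeline sketch combining preprocess() with other methods documented here; dataset is a placeholder for an already constructed Dataset:

>>> import imfusion.machinelearning as ml
>>> pipeline = (dataset
...             .preprocess([ml.ClipOperation(min=0.0, max=1.0),
...                          ml.ConvertToGrayOperation()])
...             .shuffle(shuffle_buffer=128, seed=42)
...             .batch(batch_size=4, pad=True)
...             .prefetch(prefetch_size=2))
>>> for item in pipeline:  # Dataset implements __iter__ / __next__
...     pass               # each item is a DataItem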

read(self: Dataset, reader_type: str, reader_properties: Properties, verbose: bool = False) Dataset

Constructs a dataset by specifying a reader type as a string.

Parameters:
  • reader_type – specifies the type of reader that is created implicitly. Options: “filesystem” (MemoryReader needs to be fixed to work with properties)

  • reader_properties – properties used to configure the reader.

  • verbose – print debug information when running the data loader. Default: false

reinit(self: Dataset) None

Reinitializes the dataset, clearing state that survives reset() (e.g. data caches).

repeat(self: Dataset, num_epoch_repetitions: int, num_item_repetitions: int = 1) Dataset

Repeats the dataset num_epoch_repetitions times and each individual item num_item_repetitions times.

Parameters:
  • num_epoch_repetitions (int) – number of times the underlying dataset epoch is repeated. If num_epoch_repetitions == -1, it repeats the dataset infinitely.

  • num_item_repetitions (int) – number of times each item is repeated. If num_item_repetitions == -1, it repeats the item infinitely.

reset(self: Dataset) None

Resets the data loader.

sample(*args, **kwargs)

Overloaded function.

  1. sample(self: imfusion.machinelearning._bindings.Dataset, sampling_pipeline: list[tuple[str, imfusion._bindings.Properties]], *, num_parallel_calls: int = 1, sampler_selection_seed: int = 1) -> imfusion.machinelearning._bindings.Dataset

Adds a ROI sampling step to the data pipeline. During this step the loaded image is reduced to a region of interest (ROI). The strategy for sampling the location of this region is determined by an ImageROISampler, which is randomly chosen from the underlying sampler set each time this step executes.

Parameters:
  • sampling_pipeline – List of tuples of sampler name and corresponding Properties for configuring it.

  • num_parallel_calls – Number of asynchronous threads which are used for the preprocessing. Defaults to 1.

  • sampler_selection_seed – Seed for the random generator of the samplers selection

  2. sample(self: imfusion.machinelearning._bindings.Dataset, samplers: list[imfusion.machinelearning._bindings.ImageROISampler], weights: Optional[list[float]] = None, *, sampler_selection_seed: int = 1, num_parallel_calls: int = 1) -> imfusion.machinelearning._bindings.Dataset

Adds a ROI sampling step to the data pipeline. During this step the loaded image is reduced to a region of interest (ROI). The strategy for sampling the location of this region is determined by an ImageROISampler, which is randomly chosen from the underlying sampler set each time this step executes.

Parameters:
  • samplers – List of sampler to choose from when sampling.

  • weights – Probability weights for the samplers specifying the relative probability of choosing each sampler.

  • num_parallel_calls – Number of asynchronous threads which are used for the preprocessing. Defaults to 1.

  • sampler_selection_seed (unsigned int) – Seed for the random generator of the samplers selection

  3. sample(self: imfusion.machinelearning._bindings.Dataset, sampler: imfusion.machinelearning._bindings.ImageROISampler, *, num_parallel_calls: int = 1) -> imfusion.machinelearning._bindings.Dataset

Adds a ROI sampling step to the data pipeline. During this step the loaded image is reduced to a region of interest (ROI). The strategy for sampling the location of this region is determined by the given ImageROISampler.

Parameters:
  • sampler – Sampler to choose from when sampling.

  • num_parallel_calls – Number of asynchronous threads which are used for the preprocessing. Defaults to 1.

set_random_seed(self: Dataset, seed: int) None

Seeds the data loading pipeline.

shuffle(self: Dataset, shuffle_buffer: int = -1, seed: int = -1) Dataset

Shuffles the next shuffle_buffer items of the dataset. Defaults to -1, i.e. shuffles the entire dataset. If shuffle_buffer is not specified and the dataset is not countable, it throws a DataLoaderError.

Parameters:
  • shuffle_buffer (int) – number of consecutive items to shuffle. Defaults to all items of the dataset

  • seed (int) – seed for the random shuffling.

split(self: Dataset, num_items: int = -1) Dataset

Splits the content of the SharedImageSets into sets each containing a single image.

Parameters:

num_items – Keep only the first num_items frames. Default is -1, which keeps all frames.

Note

Calling this method will make the dataset uncountable

property size

Returns the length of the dataset or None if the set is uncountable.

property verbose

Flag indicating whether extra information is logged when fetching data items.

class imfusion.machinelearning.DefaultROISampler(*args, **kwargs)

Bases: ImageROISampler

Sampler which simply returns the image and the label map after padding to a specified dimension divisor: each spatial dimension of the output arrays will be divisible by dimension_divisor.

Parameters:
  • dimension_divisor – Divisor of dimensions of the output images

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Other parameters accepted by configure():
  • label_padding_mode: EnumStringParam assigned to clamp in {clamp; mirror; zero}

  • padding_mode: EnumStringParam assigned to clamp in {clamp; mirror; zero}

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.DefaultROISampler, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.DefaultROISampler, dimension_divisor: int, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.DeformationOperation(*args, **kwargs)

Bases: Operation

Apply a deformation to the image using a specified control point grid and specified displacements.

Parameters:
  • num_subdivisions – list specifying the number of subdivisions for each dimension (the number of control points is subdivisions+1). Default: [1, 1, 1]

  • displacements – list of 3-dim vectors specifying the displacement (mm) for each control point. Should have length equal to the number of control points. Default: []

  • padding_mode – defines which type of padding is used. Default: ZERO

  • adjust_size – configures whether the resulting image should adjust its size to encompass the deformation. Default: False

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.DeformationOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.DeformationOperation, num_subdivisions: numpy.ndarray[numpy.int32[3, 1]] = array([1, 1, 1], dtype=int32), displacements: list[numpy.ndarray[numpy.float32[3, 1]]] = [], padding_mode: imfusion._bindings.PaddingMode = <PaddingMode.ZERO: 0>, adjust_size: bool = False, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.DiceMetric(self: DiceMetric, ignore_background: bool = True)

Bases: Metric

compute_dice(self: DiceMetric, arg0: SharedImageSet, arg1: SharedImageSet) list[dict[int, float]]
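A usage sketch; prediction and reference are placeholders for label maps stored as SharedImageSet, and the result is one {label value: Dice score} dict per frame, as the signature suggests:

>>> import imfusion.machinelearning as ml
>>> metric = ml.DiceMetric(ignore_background=True)
>>> scores = metric.compute_dice(prediction, reference)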
class imfusion.machinelearning.ElementType(self: ElementType, value: int)

Bases: pybind11_object

Members:

IMAGE

KEYPOINT

BOUNDING_BOX

VECTOR

TENSOR

BOUNDING_BOX = <ElementType.BOUNDING_BOX: 1>
IMAGE = <ElementType.IMAGE: 0>
KEYPOINT = <ElementType.KEYPOINT: 2>
TENSOR = <ElementType.TENSOR: 4>
VECTOR = <ElementType.VECTOR: 3>
property name
property value
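These members are used, for example, with DataItem.get_all() to collect all elements of one type; item is a placeholder for an existing DataItem:

>>> import imfusion.machinelearning as ml
>>> image_elements = item.get_all(ml.ElementType.IMAGE)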
class imfusion.machinelearning.Engine(self: Engine, name: str)

Bases: pybind11_object

available_providers(self: Engine) list[ExecutionProvider]

Returns the execution providers available to the Engine

check_input_fields(self: Engine, input: DataItem) None

Checks that input fields specified in the model yaml config are present in the input item.

check_output_fields(self: Engine, input: DataItem) None

Checks that the output fields specified in the model yaml config are present in the item returned by predict.

configure(self: Engine, properties: Properties) None

Configures the Engine.

connect_signals(self: Engine) None

Connects signals like on_model_file_changed, on_force_cpu_changed.

init(self: Engine, properties: Properties) None

Initializes the Engine.

is_identical(self: Engine, other: Engine) bool

Compares this engine instance with another one.

on_force_cpu_changed(self: Engine) None

Signal triggered when p_force_cpu changes.

on_model_file_changed(self: Engine) None

Signal triggered when p_model_file changes.

predict(self: Engine, input: DataItem) DataItem

Runs the prediction.

provider(self: Engine) ExecutionProvider | None

Returns the execution provider currently used by the Engine.

property force_cpu

If set, forces the model to run on CPU.

property input_fields

Names of the model input heads.

property model_file

Path to the yaml model configuration.

property name
property output_fields

Names of the model output heads.

property output_fields_to_ignore

Model output heads to discard.

property version

Version of the model configuration.
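A prediction sketch using only the methods above; engine is a placeholder for a fully configured Engine instance and item for a DataItem carrying the fields listed in engine.input_fields:

>>> engine.check_input_fields(item)   # raises if a model input head is missing
>>> prediction = engine.predict(item)
>>> engine.check_output_fields(prediction)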

class imfusion.machinelearning.EngineConfiguration

Bases: pybind11_object

configure(self: EngineConfiguration, properties: Properties) None

Configures the EngineConfiguration.

to_properties(self: EngineConfiguration) Properties

Converts the EngineConfiguration to a Properties object.

default_input_name = 'Input'
default_output_name = 'Prediction'
property engine_specific_parameters

Parameters that are specific to the type of Engine.

property force_cpu

If set, forces the model to run on CPU.

property input_fields

Names of the model input heads.

property model_file

Path to the yaml model configuration.

property output_fields

Names of the model output heads.

property output_fields_to_ignore

Model output heads to discard.

property type

Type of Engine, e.g. torch, onnx, openvino…

property version

Version of the model configuration.

class imfusion.machinelearning.EnsureExplicitMaskOperation(*args, **kwargs)

Bases: Operation

Converts the existing mask of all input images into explicit masks. If an image does not have a mask, no mask will be created. Warning: This operation might be computationally expensive since it processes every frame of the SharedImageSet independently.

Parameters:
  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.EnsureExplicitMaskOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.EnsureExplicitMaskOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.EnsureOneToOneMatrixMappingOperation(*args, **kwargs)

Bases: Operation

Ensures that it is possible to get/set the matrix of each frame of the input image set independently. This operation is targeted at TrackedSharedImageSets, which might define their matrices via a tracking sequence with timestamps (there is then no one-to-one correspondence between matrices and images, but matrices are looked-up and interpolated via their timestamps). In such cases, the operation creates a new tracking sequence with as many samples as images and turns off the timestamp usage.

Parameters:
  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.EnsureOneToOneMatrixMappingOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.EnsureOneToOneMatrixMappingOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.ExecutionProvider(self: ExecutionProvider, value: int)

Bases: pybind11_object

Members:

CPU

CUDA

CUSTOM

DIRECTML

MPS

OPENVINO

CPU = <ExecutionProvider.CPU: 0>
CUDA = <ExecutionProvider.CUDA: 2>
CUSTOM = <ExecutionProvider.CUSTOM: 1>
DIRECTML = <ExecutionProvider.DIRECTML: 3>
MPS = <ExecutionProvider.MPS: 5>
OPENVINO = <ExecutionProvider.OPENVINO: 4>
property name
property value
class imfusion.machinelearning.ExtractRandomSubsetOperation(*args, **kwargs)

Bases: Operation

Extracts a random subset from a SharedImageSet.

Parameters:
  • subset_size – Size of the extracted subset of images. Default: 1

  • keep_order – If true the extracted subset will have the same ordering as the input. Default: False

  • probability – Probability of applying this Operation. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ExtractRandomSubsetOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ExtractRandomSubsetOperation, subset_size: int = 1, keep_order: bool = False, probability: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.ExtractSubsetOperation(*args, **kwargs)

Bases: Operation

Extracts a subset from a SharedImageSet.

Parameters:
  • subset – Indices of the selected images.

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ExtractSubsetOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ExtractSubsetOperation, subset: list[int], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
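
For example, to keep only specific frames (sketch, assuming sis is an existing SharedImageSet):

>>> from imfusion import machinelearning as ml
>>> op = ml.ExtractSubsetOperation(subset=[0, 2, 5])
>>> selected = op.process(sis)  # new SharedImageSet containing frames 0, 2 and 5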

class imfusion.machinelearning.ForegroundGuidedLabelUpsamplingOperation(*args, **kwargs)

Bases: Operation

Generates a high-resolution label map by upsampling a multi-class softmax prediction guided by a high-resolution binary segmentation. This operation combines a high-resolution binary segmentation (e.g., from a sigmoid prediction) with a lower-resolution multi-class one-hot encoded segmentation (e.g., from a softmax prediction) to produce a refined high-resolution multi-class label map. The approach is inspired by pan-sharpening techniques used in remote sensing (https://arxiv.org/abs/1504.04531). The multi-class one-hot image should contain the background class as the first channel.

Parameters:
  • apply_to – List of field names for input images, expected order: [“highResSigmoid”, “lowResSoftmax”]

  • output_field – Name for the output field. If not specified, overwrites first input field

  • remove_fields – Remove input fields after processing. Default: True

  • apply_sigmoid – Use sigmoid intensities to guide the foreground/background decision. If False, outputs the most likely non-background class from the softmax (if any, otherwise background). Default: True

  • guidance_weight – Weight of sigmoid vs softmax for foreground decision [0-1]. Lower values can reduce false positives. Ignored if apply_sigmoid=False. Default: 1.0

  • boundary_refinement_max_iter – Maximum iterations for boundary refinement at output resolution. Higher values may be needed for larger resolution differences. Ideal values depend on the data and boundary_refinement_smooth. Default: 3

  • boundary_refinement_smooth – Smoothing factor for boundary refinement. Larger values remove smaller label patches. Default: 1.0

  • boundary_refinement_add_only – Optional list of label values to restrict the refinement to additions only. Default: []

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ForegroundGuidedLabelUpsamplingOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ForegroundGuidedLabelUpsamplingOperation, apply_to: list[str] = ['highResSigmoid', 'lowResSoftmax'], output_field: Optional[str] = None, remove_fields: bool = True, apply_sigmoid: bool = True, guidance_weight: float = 1.0, boundary_refinement_max_iter: int = 3, boundary_refinement_smooth: float = 1.0, boundary_refinement_add_only: list[int] = [], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.GammaCorrectionOperation(*args, **kwargs)

Bases: Operation

Apply a gamma correction which changes the overall contrast (see https://en.wikipedia.org/wiki/Gamma_correction)

Parameters:
  • gamma – Power applied to the normalized intensities. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.GammaCorrectionOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.GammaCorrectionOperation, gamma: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
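
For example, a gamma above 1 darkens the mid-range of the normalized intensities, while a gamma below 1 brightens it (sketch, assuming sis is an existing SharedImageSet):

>>> from imfusion import machinelearning as ml
>>> op = ml.GammaCorrectionOperation(gamma=2.0)
>>> adjusted = op.process(sis)  # raises the normalized intensities to the power 2.0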

class imfusion.machinelearning.HighPassOperation(*args, **kwargs)

Bases: Operation

Smooths the input image with a Gaussian kernel of size half_kernel_size, then subtracts the smoothed image from the input, resulting in a reduction of low-frequency components.

Parameters:
  • half_kernel_size – half kernel size in pixels. Corresponding standard deviation is half_kernel_size / 3.

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.HighPassOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.HighPassOperation, half_kernel_size: int, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.ImageElement(self: ImageElement, image: SharedImageSet)

Bases: SISBasedElement

Initialize an ImageElement from a SharedImageSet.

Parameters:

image (SharedImageSet) – image to be converted to an ImageElement

from_torch()
class imfusion.machinelearning.ImageMattingOperation(*args, **kwargs)

Bases: Operation

Refine the edges of a label map based on the intensities of the input image. This can smooth coarse predictions or correct wrong predictions at the boundaries. It applies the method from the paper “Guided Image Filtering” by Kaiming He et al.

Parameters:
  • img_size – target image dimension. No downsampling if 0.

  • kernel_size – guided filter kernel size.

  • epsilon – guided filter epsilon.

  • num_iters – guided filter number of iterations.

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ImageMattingOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ImageMattingOperation, img_size: int, kernel_size: int, epsilon: float, num_iters: int, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.ImageROISampler

Bases: Operation

Base class for ROI samplers

static available_cpp_samplers() list[str]

Returns the list of registered C++ samplers.

compute_roi(self: ImageROISampler, image: SharedImageSet) RegionOfInterest | None

Compute ROI on the given image.

extract_roi(self: ImageROISampler, image: SharedImageSet, roi: RegionOfInterest | None) SharedImageSet

Extract ROIs from an image.

property label_padding_mode

The label padding mode property.

property padding_mode

The image padding mode property.

property requires_label

Bool indicating whether ROI must be computed on the label map.

class imfusion.machinelearning.ImagewiseClassificationMetrics(self: ImagewiseClassificationMetrics, num_classes: int = 2)

Bases: Metric

class Result

Bases: pybind11_object

property confusion_matrix
property prediction
property target
compute_results(self: ImagewiseClassificationMetrics, prediction: SharedImageSet, target: SharedImageSet) list[Result]
class imfusion.machinelearning.InterleaveMode(*args, **kwargs)

Bases: pybind11_object

Members:

Alternate

Proportional

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.InterleaveMode, value: int) -> None

  2. __init__(self: imfusion.machinelearning._bindings.InterleaveMode, arg0: str) -> None

Alternate = <InterleaveMode.Alternate: 0>
Proportional = <InterleaveMode.Proportional: 1>
property name
property value
class imfusion.machinelearning.InvertOperation(*args, **kwargs)

Bases: Operation

Invert the intensities of the image: \(\textnormal{output} = -\textnormal{input}\).

Parameters:
  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

__init__(self: imfusion.machinelearning._bindings.InvertOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.KeepLargestComponentOperation(*args, **kwargs)

Bases: Operation

Create a label map with the largest components above the specified threshold. The output label map encodes each component with a different label value (1 for the largest, 2 for the second largest, etc.). Input images may be float or integer; outputs are unsigned 8-bit integer images (i.e. max 255 components). The operation will automatically set the default processing policy based on its input (if the input contains more than one image, then only the label maps will be processed).

Parameters:
  • max_number_components – the maximum number of components to keep. Default: 1

  • min_component_size – the minimum size of a component to keep. Default: -1, i.e. no minimum

  • max_component_size – the maximum size of a component to keep Default: -1, i.e. no maximum

  • threshold – the threshold to use for the binarization. Default: 0.5

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.KeepLargestComponentOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.KeepLargestComponentOperation, max_number_components: int = 1, min_component_size: int = -1, max_component_size: int = -1, threshold: float = 0.5, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
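
A sketch of keeping the two largest components of a thresholded prediction (assuming prediction is an existing SharedImageSet):

>>> from imfusion import machinelearning as ml
>>> op = ml.KeepLargestComponentOperation(max_number_components=2, threshold=0.5)
>>> components = op.process(prediction)  # uint8 label map: 1 = largest, 2 = second largest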

class imfusion.machinelearning.KeypointSet(*args, **kwargs)

Bases: Data

Class for managing sets of keypoints

The class is meant to be used in parallel with SharedImageSet. For each frame in the set, and for each type of keypoint (e.g. body, pedicles, etc.), there is a list of points indicating an instance of that type in the reference image. In terms of tensor dimensions, this would be represented as [N, C, K], where N is the batch size, C is the number of channels (i.e. types of keypoints), and K is the number of keypoints for the same instance type. Each keypoint is a vec3, which adds a final dimension of size 3.

Note

This class API is experimental and might change soon.

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.KeypointSet, points: list[list[list[numpy.ndarray[numpy.float64[3, 1]]]]]) -> None

  2. __init__(self: imfusion.machinelearning._bindings.KeypointSet, points: list[list[list[list[float]]]]) -> None

  3. __init__(self: imfusion.machinelearning._bindings.KeypointSet, array: numpy.ndarray[numpy.float64]) -> None

static load(location: str) KeypointSet | None

Load a KeypointSet from an ImFusion file.

Parameters:

location (str) – input path.

save(self: KeypointSet, location: str) None

Save a KeypointSet as an ImFusion file.

Parameters:

location (str) – output path.

property data
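
A construction sketch from a numpy array of shape [N, C, K, 3]; the values and the output path are placeholders:

>>> import numpy as np
>>> from imfusion import machinelearning as ml
>>> points = np.zeros((1, 2, 3, 3))  # N=1 frame, C=2 keypoint types, K=3 instances, vec3
>>> kps = ml.KeypointSet(points)
>>> kps.save('keypoints.imf')                    # hypothetical output path
>>> reloaded = ml.KeypointSet.load('keypoints.imf')
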
class imfusion.machinelearning.KeypointsElement(self: KeypointsElement, keypoint_set: KeypointSet)

Bases: DataElement

Initialize a KeypointsElement.

Parameters:

keypoint_set – In case the argument is a numpy array, the array shape is expected to be [N, C, K, 3], where N is the batch size, C the number of different keypoint types (channel), K the number of instances of the same point type, which are expected to have dimension 3. If the argument is a nested list, the same concept applies also to the size of each level of nesting.

property keypoints

Access to the underlying KeypointSet.

class imfusion.machinelearning.KeypointsFromBlobsOperation(*args, **kwargs)

Bases: Operation

Extracts keypoints from a blob image. Takes the ImageElement specified in apply_to as input. If apply_to is not specified and there is only one image in the data item, this image will automatically be selected.

Parameters:
  • keypoints_field_name – Field name of the output keypoints. Default: “keypoints”

  • keypoint_extraction_mode – Extraction mode: 0: Max, 1: Mean, 2: Local Max. Default: 0

  • blob_intensity_cutoff – Minimum blob intensity to be considered in analysis. Default: 0.02

  • min_cluster_distance – In case of local aggregation methods, minimum distance allowed among clusters. Default: 10.0

  • min_cluster_weight – In case of local aggregation methods, minimum intensity for a cluster to be considered independent. Default: 0.1

  • max_internal_clusters – In case of local aggregation methods, maximum number of internal clusters to consider, to avoid excessive numbers that stall the algorithm. If there are more, the lowest-weighted ones are removed first. Default: 1000

  • run_smoothing – Runs a Gaussian smoothing with 1 pixel standard deviation to improve stability of local maxima. Default: False

  • smoothing_half_kernel – Half kernel size (in pixels) of the Gaussian smoothing. Default: 2

  • run_intensity_based_refinement – Runs blob intensity based refinement of clustered keypoints. Default: False

  • apply_to – Field containing the blob image. If not specified and if there is only one image in the data item, this image will automatically be selected. Default: []

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.KeypointsFromBlobsOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.KeypointsFromBlobsOperation, keypoints_field_name: str = 'keypoints', keypoint_extraction_mode: int = 0, blob_intensity_cutoff: float = 0.02, min_cluster_distance: float = 10.0, min_cluster_weight: float = 0.1, max_internal_clusters: int = 1000, run_smoothing: bool = False, smoothing_half_kernel: int = 2, run_intensity_based_refinement: bool = False, apply_to: list[str] = [], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.LabelROISampler(*args, **kwargs)

Bases: ImageROISampler

Sampler which samples ROIs from the input image and label map such that one particular label appears. For each ROI, one of the labels_values will be selected and the sampler will make sure that the ROI includes this label. If the sample_boundaries_only flag is set to true, regions will contain at least two different label values. If the constraints are not feasible, the sampler will either extract a random ROI with the target size or return an empty image, depending on the fallback_to_random flag. (Returning an empty image is intended for chaining this sampler with a FilterDataLoader, so that images without a valid label are skipped entirely.)

Parameters:
  • roi_size – Target size of the ROIs to be extracted as [Width, Height, Slices]

  • labels_values – List of integers representing the target labels

  • sample_boundaries_only – Make sure that the ROI contains a boundary (i.e. at least two different label values)

  • fallback_to_random – Whether to sample a random ROI or return an empty one when the target label values are not found. Default: True

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Other parameters accepted by configure():
  • label_padding_mode: EnumStringParam assigned to clamp in {clamp; mirror; zero}

  • padding_mode: EnumStringParam assigned to clamp in {clamp; mirror; zero}

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.LabelROISampler, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.LabelROISampler, roi_size: numpy.ndarray[numpy.int32[3, 1]], labels_values: list[int], sample_boundaries_only: bool, fallback_to_random: bool = True, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
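
A sampling sketch using the ImageROISampler interface (assuming image and label_map are existing SharedImageSets; computing the ROI on the label map is an assumption for illustration):

>>> from imfusion import machinelearning as ml
>>> sampler = ml.LabelROISampler(roi_size=[64, 64, 1], labels_values=[1], sample_boundaries_only=False, seed=7)
>>> roi = sampler.compute_roi(label_map)    # RegionOfInterest or None
>>> crop = sampler.extract_roi(image, roi)  # SharedImageSet cropped to the ROI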

class imfusion.machinelearning.LazyModule(name: str)

Bases: object

Wrapper that delays importing a package until its attributes are accessed. We need this to keep the import time of the imfusion package reasonable.

Note

This wrapper is fairly basic and does not support assignments to the modules, i.e. no monkey-patching.

Parameters:

name (str) –

class imfusion.machinelearning.LinearIntensityMappingOperation(*args, **kwargs)

Bases: Operation

Apply a linear shift and scale to the image intensities. \(\textnormal{output} = \textnormal{factor} * \textnormal{input} + \textnormal{bias}\)

Parameters:
  • factor – Multiplying factor (see formula). Default: 1.0

  • bias – Additive bias (see formula). Default: 0.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.LinearIntensityMappingOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.LinearIntensityMappingOperation, factor: float = 1.0, bias: float = 0.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
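
For instance, mapping intensities from [0; 1] to [-1; 1] (sketch, assuming sis is an existing SharedImageSet):

>>> from imfusion import machinelearning as ml
>>> op = ml.LinearIntensityMappingOperation(factor=2.0, bias=-1.0)
>>> remapped = op.process(sis)  # output = 2.0 * input - 1.0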

class imfusion.machinelearning.MRIBiasFieldCorrectionOperation(*args, **kwargs)

Bases: Operation

Perform bias field correction using an implicitly trained neural network (see MRIBiasFieldCorrectionAlgorithm for more details and the parameters description).

Parameters:
  • iterations – For values > 1, the field is iteratively refined. Default: 1

  • config_path – Path of the machine learning model (use “GENERIC3D” or “GENERIC2D” for the default models). Default: “GENERIC3D”

  • field_smoothing_half_kernel – For values > 0, additional smoothing with a Gaussian kernel. Default: -1

  • preserve_mean_intensity – Preserve the mean image intensity in the output. Default: True

  • output_is_field – Produce the field, not the corrected image. Default: False

  • field_dimensions – Internal field dimensions (zeroes represent the model default dimensions). Default: [0, 0, 0]

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.MRIBiasFieldCorrectionOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.MRIBiasFieldCorrectionOperation, iterations: int = 1, config_path: str = 'GENERIC3D', field_smoothing_half_kernel: int = -1, preserve_mean_intensity: bool = True, output_is_field: bool = False, field_dimensions: numpy.ndarray[numpy.int32[3, 1]] = array([0, 0, 0], dtype=int32), *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.MRIBiasFieldGenerationOperation(*args, **kwargs)

Bases: Operation

Apply or generate a multiplicative intensity modulation field. If the output is a field, it is shifted as close to mean 1 as possible while remaining positive everywhere. If the output is not a field, the image intensity is shifted so that the mean intensity of the input image is preserved.

Parameters:
  • length_scale_mm – Length scale (in mm) of the Gaussian radial basis function. Default: 100.0

  • field_amplitude – Total field amplitude (centered around one), e.g. 0.4 for a 40% field. Default: 0.4

  • center – Relative center of the Gaussian with respect to the image axes. Values from [0..1] for locations inside the image. Default: [0.25, 0.25, 0.25]

  • distance_scaling – Relative scaling of the x, y, z world coordinates for field anisotropy. Default: [1, 1, 1]

  • invert_field – Invert the final field: field <- 2 - field. Default: False

  • output_is_field – Produce the field, not the corrupted image. Note, the additive normalization method depends on this. Default: False

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.MRIBiasFieldGenerationOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.MRIBiasFieldGenerationOperation, length_scale_mm: float = 100.0, field_amplitude: float = 0.4, center: numpy.ndarray[numpy.float64[3, 1]] = array([0.25, 0.25, 0.25]), distance_scaling: numpy.ndarray[numpy.float64[3, 1]] = array([1., 1., 1.]), invert_field: bool = False, output_is_field: bool = False, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.MachineLearningModel(self: imfusion.machinelearning._bindings.MachineLearningModel, config_path: str, default_prediction_output: imfusion.machinelearning._bindings.PredictionOutput = <PredictionOutput.UNKNOWN: -1>)

Bases: pybind11_object

Class for creating a MachineLearningModel.

Create a MachineLearningModel. If the resource required by the MachineLearningModel could not be acquired, raises a RuntimeError.

Parameters:
  • config_path (str) – Path to the configuration file used to create ModelConfiguration object owned by the model.

  • default_prediction_output (PredictionOutput) – Parameter used to specify the prediction output of a model if this is missing from the config file. The prediction output type must be specified either here or in the configuration file under the key PredictionOutput. If it is specified in both places, the one from the config file is used.

engine(self: MachineLearningModel) Engine

Returns the underlying engine used by the model. This can be useful for setting CPU/GPU mode, querying whether CUDA is available, etc.

predict(*args, **kwargs)

Overloaded function.

  1. predict(self: imfusion.machinelearning._bindings.MachineLearningModel, input: imfusion.machinelearning._bindings.DataItem) -> imfusion.machinelearning._bindings.DataItem

Method to execute a generic multiple-input/multiple-output model. The input and output type of a machine learning model is the DataItem, which allows giving and retrieving a heterogeneous map-type container of the data needed and returned by the model.

Parameters:

input (DataItem) – Input data item containing all data to be used for inference

  2. predict(self: imfusion.machinelearning._bindings.MachineLearningModel, images: imfusion._bindings.SharedImageSet) -> imfusion._bindings.SharedImageSet

Convenience method to execute a single-input/single-output image-based model.

Parameters:

images (SharedImageSet) – Input image set to be used for inference

property label_names

Dict of the list of label names for each output. Keys are the engine output names if specified, else “Prediction”.
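
A minimal inference sketch (the configuration path is a placeholder; assuming sis is an existing SharedImageSet matching the model input):

>>> from imfusion import machinelearning as ml
>>> model = ml.MachineLearningModel('my_model_config.yaml')  # placeholder config path
>>> prediction = model.predict(sis)   # convenience single-input/single-output overload
>>> engine = model.engine()           # query or configure the underlying engine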

class imfusion.machinelearning.MakeFloatOperation(*args, **kwargs)

Bases: Operation

Convert the input image to float with original values (internal shifts and scales are baked in).

Parameters:
  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

__init__(self: imfusion.machinelearning._bindings.MakeFloatOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.MarkAsTargetOperation(*args, **kwargs)

Bases: Operation

Mark elements from the input data item as learning “target” which might affect the behaviour of the subsequent operations that rely on ProcessingPolicy or use other custom target-specific logic.

Parameters:
  • apply_to – fields to mark as targets (will initialize the underlying apply_to parameter)

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.MarkAsTargetOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.MarkAsTargetOperation, apply_to: list[str], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.MergeAsChannelsOperation(*args, **kwargs)

Bases: Operation

Merge multiple DataElements into a single one along the channel dimension. Only applicable for ImageElements and VectorElements.

Parameters:
  • apply_to – fields which should be merged.

  • output_field – name of the resulting field.

  • remove_fields – remove fields used for merging from the data item. Default: True

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.MergeAsChannelsOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.MergeAsChannelsOperation, apply_to: list[str] = [], output_field: str = '', remove_fields: bool = True, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
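
A sketch merging two fields of a DataItem along the channel dimension (the field names 'image' and 'mask' are hypothetical; assuming item is an existing DataItem containing them):

>>> from imfusion import machinelearning as ml
>>> op = ml.MergeAsChannelsOperation(apply_to=['image', 'mask'], output_field='merged')
>>> op.process(item)  # in-place: item now holds a multi-channel 'merged' field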

class imfusion.machinelearning.Metric

Bases: pybind11_object

compute(self: Metric, item: DataItem) list[dict[str, ndarray[numpy.float64[m, n]]]]
configuration(self: Metric) Properties
configure(self: Metric, properties: Properties) None
property data_scheme
class imfusion.machinelearning.ModelType(*args, **kwargs)

Bases: pybind11_object

Members:

NEURAL_NETWORK

NEURAL_NETWORK_LEGACY

RANDOM_FOREST

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ModelType, value: int) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ModelType, arg0: str) -> None

NEURAL_NETWORK = <ModelType.NEURAL_NETWORK: 2>
NEURAL_NETWORK_LEGACY = <ModelType.NEURAL_NETWORK_LEGACY: 1>
RANDOM_FOREST = <ModelType.RANDOM_FOREST: 0>
property name
property value
class imfusion.machinelearning.MorphologicalFilterOperation(*args, **kwargs)

Bases: Operation

Runs a morphological operation on the input.

Parameters:
  • mode – name of the operation in ['dilation', 'erosion', 'opening', 'closing']

  • op_size – size of the structuring element

  • use_l1_distance – flag to use L1 (absolute) or L2 (squared) distance in the local computations

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.MorphologicalFilterOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.MorphologicalFilterOperation, mode: str, op_size: int, use_l1_distance: bool, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.NormalizeMADOperation(*args, **kwargs)

Bases: Operation

Normalize the input image based on robust statistics. The image is shifted so that the median corresponds to 0 and normalized with the median absolute deviation (see https://en.wikipedia.org/wiki/Median_absolute_deviation). The operation is performed channel-wise.

Parameters:
  • selected_channels – channels selected for MAD normalization. If empty, all channels are normalized (default).

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Other parameters accepted by configure():
  • fix_median: False

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.NormalizeMADOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.NormalizeMADOperation, selected_channels: list[int] = [], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.NormalizeNormalOperation(*args, **kwargs)

Bases: Operation

Normalize the input image so that it has zero mean and unit standard deviation. A particular intensity value can be set to be ignored during the computations.

Parameters:
  • keep_background – Whether to ignore all intensities equal to background_value. Default: False

  • background_value – Intensity value to be potentially ignored. Default: 0.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.NormalizeNormalOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.NormalizeNormalOperation, keep_background: bool = False, background_value: float = 0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.NormalizePercentileOperation(*args, **kwargs)

Bases: Operation

Normalize the input image based on its intensity distribution, in particular on a lower and upper percentile. The output image is not guaranteed to be in [0;1] but the lower percentile will be mapped to 0 and the upper one to 1.

Parameters:
  • min_percentile – Lower percentile in [0;1]. Default: 0.0

  • max_percentile – Upper percentile in [0;1]. Default: 1.0

  • clamp_values – Intensities are clipped to the new range. Default: False

  • ignore_zeros – Whether to ignore zeros when computing the percentiles. Default: False

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.NormalizePercentileOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.NormalizePercentileOperation, min_percentile: float = 0.0, max_percentile: float = 1.0, clamp_values: bool = False, ignore_zeros: bool = False, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
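
A robust normalization sketch clipping to the 1st/99th percentiles (assuming sis is an existing SharedImageSet):

>>> from imfusion import machinelearning as ml
>>> op = ml.NormalizePercentileOperation(min_percentile=0.01, max_percentile=0.99, clamp_values=True)
>>> normalized = op.process(sis)  # 1st percentile -> 0, 99th percentile -> 1, values clamped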

class imfusion.machinelearning.NormalizeUniformOperation(*args, **kwargs)

Bases: Operation

Normalize the input image based on its minimum/maximum intensity so that the output image has a [min; max] range. The operation is performed channel-wise.

Parameters:
  • min – New minimum value of the image after normalization. Default: 0.0

  • max – New maximum value of the image after normalization. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.NormalizeUniformOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.NormalizeUniformOperation, min: float = 0.0, max: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.OneHotOperation(*args, **kwargs)

Bases: Operation

Encode a single-channel label image to a one-hot representation with num_channels channels. If encode_background is off, label 0 denotes the background and does not encode to anything; label 1 sets the value 1 in the first channel, label 2 sets the value 1 in the second channel, etc. If encode_background is on, label 0 is the background and sets the value 1 in the first channel, label 1 sets the value 1 in the second channel, etc. The number of channels must be large enough to contain this encoding.

Parameters:
  • num_channels – Number of channels in the output. Must be equal to or larger than the highest possible label value. Default: 0

  • encode_background – whether to encode background in first channel. Default: True

  • to_ubyte – return label as ubyte (=uint8) instead of float. Default: False

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.OneHotOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.OneHotOperation, num_channels: int = 0, encode_background: bool = True, to_ubyte: bool = False, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
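
For example, with num_channels=3 and encode_background=True, labels {0, 1, 2} map to channels {0, 1, 2} (sketch, assuming label_map is an existing single-channel SharedImageSet of labels):

>>> from imfusion import machinelearning as ml
>>> op = ml.OneHotOperation(num_channels=3, encode_background=True)
>>> one_hot = op.process(label_map)  # 3-channel one-hot encoded image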

class imfusion.machinelearning.Operation(self: imfusion.machinelearning._bindings.Operation, name: str, processing_policy: imfusion.machinelearning._bindings.Operation.ProcessingPolicy = <ProcessingPolicy.EVERYTHING_BUT_LABELS: 0>, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None)

Bases: pybind11_object

class ProcessingPolicy(*args, **kwargs)

Bases: pybind11_object

Members:

EVERYTHING_BUT_LABELS

EVERYTHING

ONLY_LABELS

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.Operation.ProcessingPolicy, value: int) -> None

  2. __init__(self: imfusion.machinelearning._bindings.Operation.ProcessingPolicy, arg0: str) -> None

EVERYTHING = <ProcessingPolicy.EVERYTHING: 1>
EVERYTHING_BUT_LABELS = <ProcessingPolicy.EVERYTHING_BUT_LABELS: 0>
ONLY_LABELS = <ProcessingPolicy.ONLY_LABELS: 2>
property name
property value
class Specs(*args, **kwargs)

Bases: pybind11_object

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.Operation.Specs) -> None

  2. __init__(self: imfusion.machinelearning._bindings.Operation.Specs, name: str, configuration: imfusion._bindings.Properties, when_to_apply: imfusion.machinelearning._bindings.Phase) -> None

property name
property prop
property when_to_apply
configuration(self: Operation) Properties
configure(self: Operation, properties: Properties) bool
process(*args, **kwargs)

Overloaded function.

  1. process(self: imfusion.machinelearning._bindings.Operation, item: imfusion.machinelearning._bindings.DataItem) -> None

Execute the operation on the input DataItem in-place, i.e. the input item will be modified.

  2. process(self: imfusion.machinelearning._bindings.Operation, images: imfusion._bindings.SharedImageSet, in_place: bool = False) -> imfusion._bindings.SharedImageSet

Execute the operation on the input images and return its output.

Parameters:
  • images (SharedImageSet) – the input images.

  • in_place (bool) – If False, the input is guaranteed to be unchanged and the function will return a new object. If True, the input will be changed and the function will return it. Default: False.

  3. process(self: imfusion.machinelearning._bindings.Operation, points: imfusion.machinelearning._bindings.KeypointSet, in_place: bool = False) -> imfusion.machinelearning._bindings.KeypointSet

Execute the operation on the input keypoints.

Parameters:
  • points (KeypointSet) – the input points.

  • in_place (bool) – If False, the input is guaranteed to be unchanged and the function will return a new object. If True, the input will be changed and the function will return it. Default: False.

  4. process(self: imfusion.machinelearning._bindings.Operation, boxes: imfusion.machinelearning._bindings.BoundingBoxSet, in_place: bool = False) -> imfusion.machinelearning._bindings.BoundingBoxSet

Execute the operation on the input bounding boxes.

Parameters:
  • boxes (BoundingBoxSet) – the input boxes.

  • in_place (bool) – If False, the input is guaranteed to be unchanged and the function will return a new object. If True, the input will be changed and the function will return it. Default: False.

seed_random_engine(self: Operation, seed: int) None
EVERYTHING = <ProcessingPolicy.EVERYTHING: 1>
EVERYTHING_BUT_LABELS = <ProcessingPolicy.EVERYTHING_BUT_LABELS: 0>
ONLY_LABELS = <ProcessingPolicy.ONLY_LABELS: 2>
property active_fields

Fields in the data item that this operation will process.

property computing_device

The computing device property.

property does_not_modify_input
property error_on_unexpected_behaviour

Treat unexpected behaviour warnings as errors.

property name
property processing_policy

The processing_policy property. Resetting it overrides the default operation behaviour on labels.

property seed
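
A sketch of the generic Operation workflow, which applies to any of the operation subclasses above: snapshot the configuration, restore it, and process images without modifying the input (assuming sis is an existing SharedImageSet):

>>> from imfusion import machinelearning as ml
>>> op = ml.InvertOperation()
>>> props = op.configuration()                  # Properties snapshot of the current parameters
>>> op.configure(props)                         # restore a previously saved configuration
>>> inverted = op.process(sis, in_place=False)  # returns a new SharedImageSet
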
class imfusion.machinelearning.OperationsSequence(*args, **kwargs)

Bases: pybind11_object

Helper class that executes a list of operations sequentially. This class tries to minimize the number of intermediate copies and should be used for performance reasons.

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.OperationsSequence) -> None

Default constructor that initializes the class with an empty list of operations.

  2. __init__(self: imfusion.machinelearning._bindings.OperationsSequence, pipeline_config: list[tuple[str, imfusion._bindings.Properties, imfusion.machinelearning._bindings.Phase]]) -> None

Init the sequential processing with a pipeline of Operations and their respective specs. The operations are executed according to their pipeline order.

Parameters:

pipeline_config – List of specs for the operations to add to the sequence.

add_operation(*args, **kwargs)

Overloaded function.

  1. add_operation(self: imfusion.machinelearning._bindings.OperationsSequence, operation: imfusion.machinelearning._bindings.Operation, phase: imfusion.machinelearning._bindings.Phase = <Phase.ALWAYS: 7>) -> bool

Add an operation to the sequential processing. The operations are executed according to the addition order.

Parameters:
  • operation – operation instance to add to the sequence.

  • phase – when to execute the added operation. Default: Phase.Always

  2. add_operation(self: imfusion.machinelearning._bindings.OperationsSequence, name: str, properties: imfusion._bindings.Properties, phase: imfusion.machinelearning._bindings.Phase = <Phase.ALWAYS: 7>) -> None

Add an operation to the sequential processing. The operations are executed according to the addition order.

Parameters:
  • name – name of the operation to add to the sequence. You must use the name under which the op was registered in the operation factory. A list of the available ops can be retrieved with available_operations().

  • properties – properties to configure the operation.

  • phase – specifies at which execution phase the operation should be run.

static available_cpp_operations() list[str]

Returns the list of registered C++ operations available for usage in OperationsSequence.

static available_operations() list[str]

Returns the list of all registered operations available for usage in OperationsSequence.

static available_py_operations() list[str]

Returns the list of registered Python operations available for usage in OperationsSequence.

ok(self: OperationsSequence) bool

Returns whether operation setup was successful.

operation_names(self: OperationsSequence) list[str]

Returns the operation names added to the sequence.

process(*args, **kwargs)

Overloaded function.

  1. process(self: imfusion.machinelearning._bindings.OperationsSequence, input: imfusion._bindings.SharedImageSet, exec_phase: imfusion.machinelearning._bindings.Phase = <Phase.ALWAYS: 7>, in_place: bool = True) -> imfusion._bindings.SharedImageSet

Execute the preprocessing pipeline on the given input images.

Parameters:
  • input – input image

  • exec_phase

    specifies the execution phase of the preprocessing pipeline. The execution will run only those operations whose phase (specified in the specs) corresponds to the current exec_phase, with the following exceptions:

    1. Operations marked with phase == Phase.Always are always run regardless of the exec_phase.

    2. If exec_phase == Phase.Always, all operations in the preprocessing pipeline are run regardless of their individual phase.

  • in_place – If False, the input is guaranteed to be unchanged and the function will return a new object. If True, the input will be changed and the function will return it. Default: True.

  2. process(self: imfusion.machinelearning._bindings.OperationsSequence, input: imfusion.machinelearning._bindings.DataItem, exec_phase: imfusion.machinelearning._bindings.Phase = <Phase.ALWAYS: 7>) -> bool

Execute the preprocessing pipeline on the given input. This function always works in-place, i.e. the input DataItem will be modified.

Parameters:
  • input – DataItem to be processed

  • exec_phase

    specifies the execution phase of the preprocessing pipeline. The execution will run only those operations whose phase (specified in the specs) corresponds to the current exec_phase, with the following exceptions:

    1. Operations marked with phase == Phase.Always are always run regardless of the exec_phase.

    2. If exec_phase == Phase.Always, all operations in the preprocessing pipeline are run regardless of their individual phase.

set_error_on_unexpected_behaviour(self: OperationsSequence, arg0: bool) None

Set a flag on all operations controlling whether to throw an error when an operation warns about unexpected behaviour.

property operations
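
A sketch chaining several of the operations documented above (assuming sis is an existing SharedImageSet; note that this overload of process defaults to in_place=True):

>>> from imfusion import machinelearning as ml
>>> seq = ml.OperationsSequence()
>>> seq.add_operation(ml.MakeFloatOperation())
>>> seq.add_operation(ml.NormalizeUniformOperation(min=0.0, max=1.0))
>>> seq.ok()
True
>>> preprocessed = seq.process(sis, in_place=False)  # leave the input unchanged
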
class imfusion.machinelearning.OrientedROISampler(*args, **kwargs)

Bases: ImageROISampler

The OrientedROISampler randomly draws num_samples ROIs of size roi_size with spacing roi_spacing per dataset. The sampler takes n_guided = floor(sample_from_labels_proportion * num_samples) label-guided samples and uniformly random samples for the rest. Labelmaps and Keypoints are supported for label-guided sampling; for labelmap sampling, the labelmap is interpreted as a probabilistic output and sampled accordingly (thus negative values break the sampling, and labelmaps need to be one-hot encoded in case of multiple label values). Random augmentations can be applied, including rotation, flipping, shearing, scaling and jitter. These augmentations directly change the matrix of the sample, so the samples are not guaranteed to be affine or even in a right-handed coordinate system. The samples retain their matrices, so they can be viewed in their original position. May throw an ImageSamplerError.

Parameters:
  • roi_size – Target size of the ROIs to be extracted as [Width, Height, Slices]

  • roi_spacing – Target spacing of the ROIs to be extracted in mm

  • num_samples – Number of samples to draw from one image

  • random_rotation_range – Vector defining deviation in quaternion rotation over the corresponding axis. Default: [0, 0, 0]

  • random_flipping_chance – Vector defining the chance that the corresponding dimension gets flipped. Default: [0, 0, 0]

  • random_shearing_range – Vector defining the range of proportional shearing in each dimension. Default: [0, 0, 0]

  • random_scaling_range – Vector defining the range of scaling in each dimension. Default: [0, 0, 0]

  • random_jitter_range – Vector defining the range of jitter applied on top of the crop location, defined as the standard deviation in mm in each dimension. Default: [0, 0, 0]

  • sample_from_labels_proportion – Proportion of ROIs that is sampled from the label values. Default: 0

  • avoid_borders – When taking random samples, the samples avoid the image border if this is turned on. Default: False

  • align_crop – Align crop to image grid system, before applying augmentations

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Other parameters accepted by configure():
  • padding_mode: EnumStringParam assigned to zero in {clamp; mirror; zero}

  • squeeze: False

  • label_padding_mode: EnumStringParam assigned to zero in {clamp; mirror; zero}

  • y_axis_down: False

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.OrientedROISampler, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.OrientedROISampler, roi_size: numpy.ndarray[numpy.int32[3, 1]], roi_spacing: numpy.ndarray[numpy.float64[3, 1]], num_samples: int, random_rotation_range: numpy.ndarray[numpy.float64[3, 1]], random_flipping_chance: numpy.ndarray[numpy.float64[3, 1]], random_shearing_range: numpy.ndarray[numpy.float64[3, 1]], random_scaling_range: numpy.ndarray[numpy.float64[3, 1]], random_jitter_range: numpy.ndarray[numpy.float64[3, 1]], sample_from_labels_proportion: float = 0.0, avoid_borders: bool = False, align_crop: bool = False, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.PadDimsOperation(*args, **kwargs)

Bases: Operation

This operation expands an image by adding padding pixels to any or all sides. The value of the border can be specified by the padding mode.

The padding mode can be one of the following:
  • Clamp: The border pixels are the same as the closest image pixel.
  • Mirror: The border pixels mirror the image content at the boundary.
  • Zero: Constant padding with zeros or, if provided, with paddingValue.

Note: the padding widths are evenly distributed to the left and right of the input image. If the difference delta between the target dimensions and the input dimensions is odd, the padding is distributed as floor(delta / 2) to the left and floor(delta / 2) + 1 to the right.

Parameters:
  • target_dims – Target dimensions [width, height, depth] for the padded image. Default: [1, 1, 1]

  • padding_mode – Mode for padding (Clamp, Mirror, Zero). Default: PaddingMode.CLAMP

  • padding_value – Value to use for padding when using Zero mode (optional). Default: None

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.PadDimsOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.PadDimsOperation, target_dims: numpy.ndarray[numpy.int32[3, 1]] = array([1, 1, 1], dtype=int32), padding_mode: imfusion._bindings.PaddingMode = <PaddingMode.CLAMP: 2>, padding_value: Optional[float] = None, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
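A short construction sketch (assuming PaddingMode is exposed at the top level as imfusion.PaddingMode, as the signature above suggests):

>>> import imfusion
>>> import imfusion.machinelearning as ml
>>> # pad each image to target dimensions 256x256x1, filling new pixels with -1000
>>> pad = ml.PadDimsOperation(target_dims=[256, 256, 1],
...                           padding_mode=imfusion.PaddingMode.ZERO,
...                           padding_value=-1000.0)  # doctest: +SKIP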

class imfusion.machinelearning.PadOperation(*args, **kwargs)

Bases: Operation

Pad an image to a specific padding size in each dimension

This operation expands an image by adding padding pixels to any or all sides. The value of the border can be specified by the padding mode.

The padding mode can be one of the following:

  • Clamp: The border pixels are the same as the closest image pixel.

  • Mirror: The border pixels mirror the image content across the boundary.

  • Zero: Constant padding with zeros or, if provided, with paddingValue.

Note: Padding sizes are specified in pixels and can be positive, negative, or mixed; negative padding means cropping.

Note: Both GPU and CPU implementations are provided.

Parameters:
  • pad_size_x – Padding width in pixels for X dimension [left, right]. Default: [0, 0]

  • pad_size_y – Padding width in pixels for Y dimension [top, bottom]. Default: [0, 0]

  • pad_size_z – Padding width in pixels for Z dimension [front, back]. Default: [0, 0]

  • padding_mode – Mode for padding (Clamp, Mirror, Zero). Default: PaddingMode.CLAMP

  • padding_value – Value to use for padding when using Zero mode (optional). Default: None

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.PadOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.PadOperation, pad_size_x: numpy.ndarray[numpy.int32[2, 1]] = array([0, 0], dtype=int32), pad_size_y: numpy.ndarray[numpy.int32[2, 1]] = array([0, 0], dtype=int32), pad_size_z: numpy.ndarray[numpy.int32[2, 1]] = array([0, 0], dtype=int32), padding_mode: imfusion._bindings.PaddingMode = <PaddingMode.CLAMP: 2>, padding_value: Optional[float] = None, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
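A sketch of asymmetric and negative padding (negative values crop, per the note above; CLAMP is one of the documented padding modes, assumed to be reachable as imfusion.PaddingMode.CLAMP):

>>> import imfusion
>>> import imfusion.machinelearning as ml
>>> op = ml.PadOperation(pad_size_x=[10, 10],   # add 10 px on the left and right
...                      pad_size_y=[0, -5],    # crop 5 px from the bottom
...                      padding_mode=imfusion.PaddingMode.CLAMP)  # doctest: +SKIP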

class imfusion.machinelearning.ParamUnit(*args, **kwargs)

Bases: pybind11_object

Members:

MM

VOXEL

FRACTION

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ParamUnit, value: int) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ParamUnit, arg0: str) -> None

FRACTION = FRACTION
MM = MM
VOXEL = VOXEL
property name
property value
class imfusion.machinelearning.Phase(*args, **kwargs)

Bases: pybind11_object

Members:

TRAIN

VALIDATION

TEST

ALWAYS

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.Phase, value: int) -> None

  2. __init__(self: imfusion.machinelearning._bindings.Phase, arg0: str) -> None

  3. __init__(self: imfusion.machinelearning._bindings.Phase, arg0: list[str]) -> None

ALWAYS = <Phase.ALWAYS: 7>
TEST = <Phase.TEST: 4>
TRAIN = <Phase.TRAIN: 1>
VALIDATION = <Phase.VALIDATION: 2>
property name
property value
class imfusion.machinelearning.PixelwiseClassificationMetrics(self: PixelwiseClassificationMetrics)

Bases: Metric

compute_per_label(self: PixelwiseClassificationMetrics, arg0: SharedImageSet, arg1: SharedImageSet) list[dict[str, dict[int, float]]]
class imfusion.machinelearning.PolyCropOperation(*args, **kwargs)

Bases: Operation

Masks the image with a convex polygon as described in Markova et al. 2022 (https://arxiv.org/abs/2205.03439).

Parameters:
  • points – Each point (in texture coordinates) defines a plane perpendicular to the direction from the image center to the point. Each plane splits the volume in two parts; the part that does not contain the image center is discarded.

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.PolyCropOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.PolyCropOperation, points: list[numpy.ndarray[numpy.float64[3, 1]]] = [], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
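A construction sketch with four hand-picked cutting planes (the point values are arbitrary texture coordinates chosen for illustration):

>>> import imfusion.machinelearning as ml
>>> op = ml.PolyCropOperation(points=[[0.9, 0.5, 0.5], [0.1, 0.5, 0.5],
...                                   [0.5, 0.9, 0.5], [0.5, 0.1, 0.5]])  # doctest: +SKIP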

class imfusion.machinelearning.PredictionOutput(*args, **kwargs)

Bases: pybind11_object

Members:

UNKNOWN

VECTOR

IMAGE

KEYPOINTS

BOUNDING_BOXES

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.PredictionOutput, value: int) -> None

  2. __init__(self: imfusion.machinelearning._bindings.PredictionOutput, arg0: str) -> None

BOUNDING_BOXES = <PredictionOutput.BOUNDING_BOXES: 3>
IMAGE = <PredictionOutput.IMAGE: 1>
KEYPOINTS = <PredictionOutput.KEYPOINTS: 2>
UNKNOWN = <PredictionOutput.UNKNOWN: -1>
VECTOR = <PredictionOutput.VECTOR: 0>
property name
property value
class imfusion.machinelearning.PredictionType(*args, **kwargs)

Bases: pybind11_object

Members:

UNKNOWN

CLASSIFICATION

REGRESSION

OBJECT_DETECTION

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.PredictionType, value: int) -> None

  2. __init__(self: imfusion.machinelearning._bindings.PredictionType, arg0: str) -> None

CLASSIFICATION = <PredictionType.CLASSIFICATION: 0>
OBJECT_DETECTION = <PredictionType.OBJECT_DETECTION: 2>
REGRESSION = <PredictionType.REGRESSION: 1>
UNKNOWN = <PredictionType.UNKNOWN: -1>
property name
property value
class imfusion.machinelearning.ProcessingRecordComponent(self: ProcessingRecordComponent)

Bases: DataComponentBase

class imfusion.machinelearning.RandomAddDegradedLabelAsChannelOperation(*args, **kwargs)

Bases: Operation

Append a channel to the image that contains a randomly degraded version of the label.

Parameters:
  • blob_radius – Radius of each blob, in pixel coordinates. Default: 5.0.

  • probability_no_blobs – Probability that zero blobs are chosen. Default: 0.1

  • probability_invert – Probability of inverting the blobs; in this case the extra channel is positive/negative based on the label except at blobs, where it is zero. Default: 0.0

  • mean_num_blobs – Mean of (Poisson-distributed) number of blobs to draw, conditional on probability_no_blobs. Default: 100.0

  • only_positive – If true, output channel is clamped to zero from below. Default: False

  • label_dilation_range – The label_dilation parameter of the underlying AddDegradedLabelAsChannelOperation is uniformly drawn from this range. Default: [0.0, 0.0]

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Other parameters accepted by configure():
  • dilation_range: 0 0

  • probability: 1.0

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomAddDegradedLabelAsChannelOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomAddDegradedLabelAsChannelOperation, blob_radius: float = 5.0, probability_no_blobs: float = 0.1, probability_invert: float = 0.0, mean_num_blobs: float = 100.0, only_positive: bool = False, dilation_range: numpy.ndarray[numpy.float64[2, 1]] = array([0., 0.]), *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RandomAddRandomNoiseOperation(*args, **kwargs)

Bases: Operation

Apply AddRandomNoiseOperation to images with randomized intensity parameter.

Parameters:
  • type – Distribution of the noise (‘uniform’, ‘gaussian’, ‘gamma’, ‘shot’). Default: ‘uniform’. See AddRandomNoiseOperation.

  • intensity_range – Range of the interval used to draw the intensity parameter; absolute values of drawn values are taken. Default: [0.0, 0.0]

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

  • probability – Float in [0, 1] defining the probability for the operation to be executed. Default: 1.0

Other parameters accepted by configure():
  • intensity_range: 0 0

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomAddRandomNoiseOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomAddRandomNoiseOperation, type: str = 'uniform', intensity_range: numpy.ndarray[numpy.float64[2, 1]] = array([0., 0.]), probability: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RandomAxisFlipOperation(*args, **kwargs)

Bases: Operation

Flip image content along specified set of axes, with independent sampling for each axis.

Parameters:
  • axes – List of strings from {‘x’, ‘y’, ‘z’} specifying the axes to flip. For 2D images, only ‘x’ and ‘y’ are valid.

  • probability – Float in [0;1] defining the probability for the operation to be executed. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomAxisFlipOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomAxisFlipOperation, axes: list[str] = [], probability: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
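For instance, to flip the x and y axes independently, each with 50% probability and a fixed seed:

>>> import imfusion.machinelearning as ml
>>> flip = ml.RandomAxisFlipOperation(axes=['x', 'y'], probability=0.5, seed=7)  # doctest: +SKIP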

class imfusion.machinelearning.RandomAxisRotationOperation(*args, **kwargs)

Bases: Operation

Rotate the image around its axes, with an independently drawn, axis-specific random rotation angle of ±{90, 180, 270} degrees per axis.

Parameters:
  • axes – List of strings from {‘x’, ‘y’, ‘z’} specifying the axes to rotate around. For 2D images, only [‘z’] is valid.

  • probability – Float in [0;1] defining the probability for the operation to be executed. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomAxisRotationOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomAxisRotationOperation, axes: list[str] = [], probability: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RandomChoiceOperation(*args, **kwargs)

Bases: Operation

Meta-operation that picks one operation from its configuration randomly and executes it. This is particularly useful for image samplers, where we might want to alternate between different ways of sampling the input images.

Parameters:
  • operation_specs – List of operation Specs to configure the operations to be added.

  • operation_weights – Weights associated with each operation during the sampling process. A higher relative weight given to an operation means that this operation will be sampled more often.

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomChoiceOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomChoiceOperation, operation_specs: list[imfusion.machinelearning._bindings.Operation.Specs] = [], operation_weights: list[float] = [], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  3. __init__(self: imfusion.machinelearning._bindings.RandomChoiceOperation, operation_specs: list[tuple[str, imfusion._bindings.Properties, imfusion.machinelearning._bindings.Phase]], operation_weights: list[float] = []) -> None

Meta-operation that picks one operation from its configuration randomly and executes it. This is particularly useful for image samplers, where we might want to alternate between different ways of sampling the input images.

Parameters:
  • operation_specs – List of operation (name, Properties, Phase) tuples, cast into Specs, used to configure the operations to be added.

  • operation_weights – Weights associated with each operation during the sampling process. A higher relative weight given to an operation means that this operation will be sampled more often.

  4. __init__(self: imfusion.machinelearning._bindings.RandomChoiceOperation, operations: list[imfusion.machinelearning._bindings.Operation], operation_weights: list[float]) -> None

Meta-operation that picks one operation from its configuration randomly and executes it. This is particularly useful for image samplers, where we might want to alternate between different ways of sampling the input images.

Parameters:
  • operations – List of operations to be added.

  • operation_weights – Weights associated with each operation during the sampling process. A higher relative weight given to an operation means that this operation will be sampled more often.
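A sketch using the operations overload above to alternate between two augmentations, with the first drawn roughly twice as often:

>>> import imfusion.machinelearning as ml
>>> choice = ml.RandomChoiceOperation(
...     operations=[ml.RandomAxisFlipOperation(axes=['x']),
...                 ml.RandomAxisRotationOperation(axes=['z'])],
...     operation_weights=[2.0, 1.0])  # doctest: +SKIP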

class imfusion.machinelearning.RandomCropAroundLabelMapOperation(*args, **kwargs)

Bases: Operation

Crops the input image and label to the bounds of a random label value; if reorder is enabled, the selected label value is set to 1 and all other values to zero in the resulting label.

Parameters:
  • margin – Margin, in pixels. Default: 1

  • reorder – Whether label value in result should be mapped to 1. Default: False

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Other parameters accepted by configure():
  • probability: 1.0

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomCropAroundLabelMapOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomCropAroundLabelMapOperation, margin: int = 1, reorder: bool = False, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RandomCropOperation(*args, **kwargs)

Bases: Operation

Crop input images and label maps with a matching random size and offset.

Parameters:
  • crop_range – List of floats from [0;1] specifying the maximum percentage of the dimension to crop. Default: [0.0, 0.0, 0.0]

  • probability – Float in [0;1] defining the probability for the operation to be executed. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomCropOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomCropOperation, crop_range: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), probability: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RandomCutOutOperation(*args, **kwargs)

Bases: Operation

Apply a random cutout to the image.

Parameters:
  • cutout_size_lower – List of doubles specifying the lower bound of the cutout region size for each dimension in mm. Default: [0, 0, 0]

  • cutout_size_upper – List of doubles specifying the upper bound of the cutout region size for each dimension in mm. Default: [0, 0, 0]

  • cutout_value_range – List of floats specifying the minimum and maximum fill value for cutout regions. Default: [0, 0]

  • cutout_number_range – List of integers specifying the minimum and maximum number of cutout regions. Default: [0, 0]

  • cutout_size_units – Units of the cutout size. Default: MM

  • probability – Float in [0;1] defining the probability for the operation to be executed. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomCutOutOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomCutOutOperation, cutout_size_lower: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), cutout_size_upper: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), cutout_value_range: numpy.ndarray[numpy.float32[2, 1]] = array([0., 0.], dtype=float32), cutout_number_range: numpy.ndarray[numpy.int32[2, 1]] = array([0, 0], dtype=int32), cutout_size_units: imfusion.machinelearning._bindings.ParamUnit = MM, probability: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
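For example, to cut out one to three boxes of 5–20 mm per side and fill them with zeros (ParamUnit.MM is documented above):

>>> import imfusion.machinelearning as ml
>>> cutout = ml.RandomCutOutOperation(
...     cutout_size_lower=[5.0, 5.0, 5.0],
...     cutout_size_upper=[20.0, 20.0, 20.0],
...     cutout_value_range=[0.0, 0.0],
...     cutout_number_range=[1, 3],
...     cutout_size_units=ml.ParamUnit.MM,
...     probability=0.8)  # doctest: +SKIP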

class imfusion.machinelearning.RandomDeformationOperation(*args, **kwargs)

Bases: Operation

Apply a deformation to the image using a specified control point grid and random displacements

Parameters:
  • num_subdivisions – list specifying the number of subdivisions for each dimension (the number of control points is subdivisions+1). Default: [1, 1, 1]

  • max_abs_displacement – absolute value of the maximum possible displacement (mm). Default: 1

  • padding_mode – defines which type of padding is used. Default: ZERO

  • probability – probability of applying this Operation. Default: 1.0

  • adjust_size – configures whether the resulting image should adjust its size to encompass the deformation. Default: False

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomDeformationOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomDeformationOperation, num_subdivisions: numpy.ndarray[numpy.int32[3, 1]] = array([1, 1, 1], dtype=int32), max_abs_displacement: float = 1, padding_mode: imfusion._bindings.PaddingMode = <PaddingMode.ZERO: 0>, probability: float = 1.0, adjust_size: bool = False, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RandomGammaCorrectionOperation(*args, **kwargs)

Bases: Operation

Apply a random gamma correction to the image intensities. Output = Unnormalize(pow(Normalize(Input), gamma)) where gamma is drawn uniformly in [1-random_range; 1+random_range].

Parameters:
  • random_range – Range of the interval used to draw the gamma correction, typically in [0; 0.5]. Default: 0.2

  • probability – Float in [0;1] defining the probability for the operation to be executed. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomGammaCorrectionOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomGammaCorrectionOperation, random_range: float = 0.2, probability: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RandomImageFromLabelOperation(*args, **kwargs)

Bases: Operation

Creates a random image from a label map; the intensity of each label is sampled from a Gaussian distribution. The parameters of each Gaussian distribution (mean and standard deviation) are uniformly sampled within the provided intervals (mean_range and standard_dev_range, respectively).

Parameters:
  • mean_range – Range of means for the intensities’ Gaussian distributions.

  • standard_dev_range – Range of standard deviations for the intensities’ Gaussian distributions.

  • output_field – Output field for the generated image.

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomImageFromLabelOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomImageFromLabelOperation, mean_range: numpy.ndarray[numpy.float64[2, 1]], standard_dev_range: numpy.ndarray[numpy.float64[2, 1]], output_field: str = 'ImageFromLabel', *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RandomInvertOperation(*args, **kwargs)

Bases: Operation

Invert the intensities of the image: \(\textnormal{output} = -\textnormal{input}\).

Parameters:
  • probability – Float in [0;1] defining the probability for the operation to be executed. Default: 0.5

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomInvertOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomInvertOperation, probability: float = 0.5, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RandomKeypointJitterOperation(*args, **kwargs)

Bases: Operation

Adds an individually and randomly sampled offset to each keypoint of each KeypointElement.

Parameters:
  • offset_std_dev – standard deviation of the normal distribution used to sample the jitter in mm

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomKeypointJitterOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomKeypointJitterOperation, offset_std_dev: float, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RandomLinearIntensityMappingOperation(*args, **kwargs)

Bases: Operation

Apply a random linear shift and scale to the image intensities: \(\textnormal{output} = \textnormal{factor}_\textnormal{random} * \textnormal{input} + \textnormal{bias}_\textnormal{random}\), where \(\textnormal{factor}_\textnormal{random}\) is drawn uniformly in \([1-\textnormal{random_range}, 1+\textnormal{random_range}]\) and \(\textnormal{bias}_\textnormal{random}\) is drawn uniformly in \([-\textnormal{random_range}*(\max(\textnormal{input})-\min(\textnormal{input})), \textnormal{random_range}*(\max(\textnormal{input})-\min(\textnormal{input}))]\).

Parameters:
  • random_range – Perturbation amplitude, typically in [0.0, 1.0]. Default: 0.2

  • probability – Float in [0, 1] defining the probability for the operation to be executed. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomLinearIntensityMappingOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomLinearIntensityMappingOperation, random_range: float = 0.2, probability: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RandomMRIBiasFieldGenerationOperation(*args, **kwargs)

Bases: Operation

Apply or generate a random multiplicative intensity modulation field. If the output is a field, it is shifted as close to mean 1 as possible while remaining positive everywhere. If the output is not a field, the image intensity is shifted so that the mean intensity of the input image is preserved.

Parameters:
  • center_beta_dist_params – Beta distribution parameters for sampling the relative center coordinate locations. Default: [0.0, 1.0]

  • field_amplitude_random_range – Amplitude of the field. Default: [0.2, 0.5]

  • length_scale_mm_random_range – Range of length scale of the distance kernel in mm. Default: [50.0, 400.0]

  • distance_scaling_random_range – Range of relative scaling of scanner space coordinates for anisotropic fields. Default: [0.5, 1.0]

  • invert_probability – Probability to invert the field (before normalization): field <- 2.0 - field. Default: 0.0

  • output_is_field – Produce field instead of corrupted image. Default: False

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Other parameters accepted by configure():
  • probability: 1.0

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomMRIBiasFieldGenerationOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomMRIBiasFieldGenerationOperation, center_beta_dist_params: numpy.ndarray[numpy.float64[2, 1]] = array([0., 1.]), field_amplitude_random_range: numpy.ndarray[numpy.float64[2, 1]] = array([0.2, 0.5]), length_scale_mm_random_range: numpy.ndarray[numpy.float64[2, 1]] = array([ 50., 400.]), distance_scaling_random_range: numpy.ndarray[numpy.float64[2, 1]] = array([0.5, 1. ]), invert_probability: float = 0.0, output_is_field: bool = False, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RandomPolyCropOperation(*args, **kwargs)

Bases: Operation

Masks the image with a random convex polygon as described in Markova et al. 2022 (https://arxiv.org/abs/2205.03439). The convex polygon mask is constructed by sampling random planes; each plane splits the volume in two parts, and the part of the image that doesn’t contain the image center is discarded.

Parameters:
  • number_range – Range of integers specifying the minimum and maximum number of cutting planes. Default: [5, 10]

  • min_radius – The minimum distance a cutting plane must have from the center (image coordinates are normalized to [-1, 1]). Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Other parameters accepted by configure():
  • probability: 1.0

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomPolyCropOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomPolyCropOperation, number_range: numpy.ndarray[numpy.int32[2, 1]] = array([ 5, 10], dtype=int32), min_radius: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RandomROISampler(*args, **kwargs)

Bases: ImageROISampler

Sampler which randomly samples ROIs of a target size from the input image and label map. The images will be padded if the target size is larger than the input image.

Parameters:
  • roi_size – Target size of the ROIs to be extracted as [Width, Height, Slices]

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Other parameters accepted by configure():
  • label_padding_mode: EnumStringParam assigned to clamp in {clamp; mirror; zero}

  • padding_mode: EnumStringParam assigned to clamp in {clamp; mirror; zero}

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomROISampler, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomROISampler, roi_size: numpy.ndarray[numpy.int32[3, 1]], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RandomResolutionReductionOperation(*args, **kwargs)

Bases: Operation

Downsamples the image to a target_spacing and upsamples again to the original spacing to reduce image information. The target_spacing is sampled uniformly and independently in each dimension between the corresponding image spacing and max_spacing.

Parameters:
  • max_spacing – maximum spacing per dimension from which the target spacing is randomly sampled. The minimum sampling spacing is the maximum (over all frames of the image set) spacing per dimension of the input SharedImageSet. Default: [0.0, 0.0, 0.0]

  • probability – probability of applying this Operation. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomResolutionReductionOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomResolutionReductionOperation, max_spacing: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), probability: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RandomRotationOperation(*args, **kwargs)

Bases: Operation

Rotate input images and label maps with random angles.

Parameters:
  • angles_range – List of floats specifying the upper bound (in degrees) of the range from which the rotation angles will be drawn uniformly. Default: [0, 0, 0]

  • adjust_size – Increase image size to include the whole rotated image or keep current dimensions. Default: False

  • apply_now – Bake transformation right away (otherwise, just changes the matrix). Default: False

  • probability – Float in [0;1] defining the probability for the operation to be executed. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomRotationOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomRotationOperation, angles_range: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), adjust_size: bool = False, apply_now: bool = False, probability: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
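For instance, to draw rotation angles bounded by 10 degrees around x and y and 30 degrees around z, enlarging the image so nothing is cut off and baking the result into the pixel data:

>>> import imfusion.machinelearning as ml
>>> rot = ml.RandomRotationOperation(angles_range=[10.0, 10.0, 30.0],
...                                  adjust_size=True, apply_now=True)  # doctest: +SKIP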

class imfusion.machinelearning.RandomScalingOperation(*args, **kwargs)

Bases: Operation

Scale input images and label maps with random factors.

Parameters:
  • scales_range (vec3) – List of floats specifying the upper bound of the range from which the scaling offset will be sampled. The scaling factor will be drawn uniformly within [1-scale, 1+scale]. Scale should be between 0 and 1. Default: [0.5, 0.5, 0.5]

  • log_scales_range (vec3) – List of floats specifying the upper bound of the range from which the scaling factor will be drawn uniformly in log scale. The scaling will then be distributed within [1/log_scale, log_scale]. Default: [2., 2., 2.]

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

  • log_parameterization (bool) – If true, uses the log scales range parameterization, otherwise uses the scales range parameterization. Default: False

  • apply_now (bool) – Bake transformation right away (otherwise, just changes the matrix). Default: False

  • probability (float) – Float in [0;1] defining the probability for the operation to be executed. Default: 1.0

Other parameters accepted by configure():
  • log_parametrization: False

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomScalingOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomScalingOperation, scales_range: numpy.ndarray[numpy.float64[3, 1]] = array([0.5, 0.5, 0.5]), log_scales_range: numpy.ndarray[numpy.float64[3, 1]] = array([2., 2., 2.]), log_parameterization: bool = False, apply_now: bool = False, probability: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
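A sketch using the log-scale parameterization, so each in-plane factor is drawn from [1/1.5, 1.5] while z is left unscaled:

>>> import imfusion.machinelearning as ml
>>> scale = ml.RandomScalingOperation(log_scales_range=[1.5, 1.5, 1.0],
...                                   log_parameterization=True)  # doctest: +SKIP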

class imfusion.machinelearning.RandomSmoothOperation(*args, **kwargs)

Bases: Operation

Apply a random smoothing on the image (Gaussian kernel). The kernel can be parameterized either in pixel or in mm, and can be anisotropic. The half kernel size is distributed uniformly between half_kernel_bounds[0] and half_kernel_bounds[1]. \(\textnormal{image_output} = \textnormal{image} * \textnormal{gaussian_kernel}(\sigma)\) , with \(\sigma \sim U(\textnormal{half_kernel_bounds}[0], \textnormal{half_kernel_bounds}[1])\)

Parameters:
  • half_kernel_bounds – Bounds for the half kernel size. The final kernel size is 2 times the sampled half kernel size plus one. Default: [1, 1]

  • kernel_size_in_mm – Interpret kernel size as mm. Otherwise uses pixels. Default: False

  • isotropic – Forces the randomly drawn kernel size to be isotropic. Default: True

  • probability – Value in [0.0; 1.0] indicating the probability of this operation to be performed. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomSmoothOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomSmoothOperation, half_kernel_bounds: numpy.ndarray[numpy.float64[2, 1]] = array([1, 1], dtype=int32), kernel_size_in_mm: bool = False, isotropic: bool = True, probability: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RandomTemplateInpaintingOperation(*args, **kwargs)

Bases: Operation

Inpaints a template into an image with randomly selected spatial and intensity transformation in a given range.

Parameters:
  • template_paths – paths from which a template .imf file is randomly loaded.

  • rotation_range – rotation of template in degrees per axis randomly sampled from [-rotation_range, rotation_range]. Default: [0, 0, 0]

  • translation_range – translation of template per axis, randomly sampled from [-translation_range, translation_range]. Default: [0, 0, 0]

  • template_mult_factor_range – Multiply template intensities with a factor randomly sampled from this range. Default: [0.0, 0.0]

  • add_values_to_existing – Adding values to input image rather than replacing them. Default: False

  • probability – Float in [0;1] defining the probability for the operation to be executed. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RandomTemplateInpaintingOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RandomTemplateInpaintingOperation, template_paths: list[str] = [], rotation_range: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), translation_range: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), template_mult_factor_range: numpy.ndarray[numpy.float64[2, 1]] = array([0., 0.]), add_values_to_existing: bool = False, probability: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RectifyRotationOperation(*args, **kwargs)

Bases: Operation

Sets the image matrix to the closest xyz-axis aligned rotation, effectively making every rotation angle a multiple of 90 degrees. This is useful when the values of the rotation are unimportant but the axis flips need to be preserved. If used before BakeTransformationOperation, this operation will avoid oblique angles and a lot of zero padding.

Parameters:
  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RectifyRotationOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RectifyRotationOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RemoveMaskOperation(*args, **kwargs)

Bases: Operation

Removes the mask of all input images.

Parameters:
  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RemoveMaskOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RemoveMaskOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RemoveOperation(*args, **kwargs)

Bases: Operation

Removes a set of fields from a data item.

Parameters:
  • apply_to – fields to mark as targets (will initialize the underlying apply_to parameter)

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RemoveOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RemoveOperation, source: set[str], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RenameOperation(*args, **kwargs)

Bases: Operation

Renames a set of fields of a data item.

Parameters:
  • source – list of the elements to be replaced

  • target – list of names of the new elements (must match the size of source)

  • throw_error_on_missing_source – if source field is missing, then throw an error (otherwise warn about unexpected behavior and do nothing). Default: True

  • throw_error_on_existing_target – if target field already exists, then throw an error (otherwise warn about unexpected behavior and overwrite it). Default: True

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RenameOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RenameOperation, source: list[str], target: list[str], throw_error_on_missing_source: bool = True, throw_error_on_existing_target: bool = True, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
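A sketch renaming one field of a data item (the field names here are purely illustrative):

>>> import imfusion.machinelearning as ml
>>> rename = ml.RenameOperation(source=['label'], target=['target'],
...                             throw_error_on_existing_target=False)  # doctest: +SKIP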

class imfusion.machinelearning.ReplaceLabelsValuesOperation(*args, **kwargs)

Bases: Operation

Replace some label values with other values (only works for integer-typed labels).

Parameters:
  • old_values – List of integer values to be replaced. All values that are not in this list will remain unchanged.

  • new_values – List of integer values to replace old_values. It must have the same size as old_values, since there should be a one-to-one mapping.

  • update_labelsdatacomponent – Replaces the old-values in the LabelsDataComponent with the mapped ones. Default: True

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ReplaceLabelsValuesOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ReplaceLabelsValuesOperation, old_values: list[int], new_values: list[int], update_labelsdatacomponent: bool = True, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
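
Example (minimal sketch; merges the hypothetical label values 2 and 3 into label 1):

>>> import imfusion.machinelearning as ml
>>> op = ml.ReplaceLabelsValuesOperation([2, 3], [1, 1])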

class imfusion.machinelearning.ResampleDimsOperation(*args, **kwargs)

Bases: Operation

Resample the input to fixed target dimensions.

Parameters:
  • target_dims – Target dimensions in pixels as [Width, Height, Slices].

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ResampleDimsOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ResampleDimsOperation, target_dims: numpy.ndarray[numpy.int32[3, 1]], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
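
Example (minimal sketch; target_dims is an integer 3-vector in [Width, Height, Slices]):

>>> import numpy as np
>>> import imfusion.machinelearning as ml
>>> op = ml.ResampleDimsOperation(np.array([128, 128, 64], dtype=np.int32))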

class imfusion.machinelearning.ResampleKeepingAspectRatioOperation(*args, **kwargs)

Bases: Operation

Resample the input to target dimensions while keeping the aspect ratio of the original images. The target dimensions are specified by either:

  1. one target dimension, e.g. target_dim_x: 128. In this case the resampling keeps the aspect ratio of dimensions y and z with respect to x.

  2. two target dimensions, e.g. target_dim_x: 128 and target_dim_y: 128, together with the dimension to consider for preserving the aspect ratio of the leftover dimension, e.g. keep_aspect_ratio_wrt: x.

Parameters:
  • keep_aspect_ratio_wrt – specifies the dimension to which the aspect ratio is locked; assign one of “”, “x”, “y” or “z”. If only one target_dim is specified, this can be empty (or must match the given target_dim). If all target_dim arguments are specified, this argument must be empty; in that case, however, ResampleDims should be preferred.

  • target_dim_x – either the target width or None if this dimension will be computed automatically by preserving the aspect ratio.

  • target_dim_y – either the target height or None if this dimension will be computed automatically by preserving the aspect ratio.

  • target_dim_z – for 3D images, either the target slices or None if this dimension will be computed automatically by preserving the aspect ratio.

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ResampleKeepingAspectRatioOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ResampleKeepingAspectRatioOperation, keep_aspect_ratio_wrt: str = '', target_dim_x: Optional[int] = 1, target_dim_y: Optional[int] = 1, target_dim_z: Optional[int] = 1, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
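
Example (minimal sketch of case 1 above; the unused dimensions must be set to None explicitly since they default to 1):

>>> import imfusion.machinelearning as ml
>>> op = ml.ResampleKeepingAspectRatioOperation(target_dim_x=128, target_dim_y=None, target_dim_z=None)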

class imfusion.machinelearning.ResampleOperation(*args, **kwargs)

Bases: Operation

Resample the input to a fixed target resolution.

Parameters:
  • resolution – Target spacing in mm.

  • preserve_extent – Preserve the exact spatial extent of the image, adjusting the output spacing accordingly (since the extent is not always a multiple of the resolution). Default: True

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ResampleOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ResampleOperation, resolution: numpy.ndarray[numpy.float64[3, 1]], preserve_extent: bool = True, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
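
Example (minimal sketch, resampling to an isotropic 1 mm spacing):

>>> import numpy as np
>>> import imfusion.machinelearning as ml
>>> op = ml.ResampleOperation(np.array([1.0, 1.0, 1.0]), preserve_extent=True)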

class imfusion.machinelearning.ResampleToInputOperation(*args, **kwargs)

Bases: Operation

Resample the input image with respect to the image in the ReferenceImageDataComponent.

Parameters:
  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

__init__(self: imfusion.machinelearning._bindings.ResampleToInputOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.ResetCriterion(*args, **kwargs)

Bases: pybind11_object

Members:

Fixed

SmallestLoader

LargestLoader

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ResetCriterion, value: int) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ResetCriterion, arg0: str) -> None

Fixed = <ResetCriterion.Fixed: 0>
LargestLoader = <ResetCriterion.LargestLoader: 2>
SmallestLoader = <ResetCriterion.SmallestLoader: 1>
property name
property value
class imfusion.machinelearning.ResolutionReductionOperation(*args, **kwargs)

Bases: Operation

Downsamples the image to the target_spacing and upsamples again to the original spacing to reduce image information.

Parameters:
  • target_spacing – spacing per dimension to which the image is resampled before it is resampled back

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ResolutionReductionOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ResolutionReductionOperation, target_spacing: numpy.ndarray[numpy.float64[3, 1]], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.RotationOperation(*args, **kwargs)

Bases: Operation

Rotate input images and label maps with fixed angles.

Parameters:
  • angles – Rotation angles in degrees. Default: [0, 0, 0]

  • adjust_size – Increase image size to include the whole rotated image or keep current dimensions. Default: False

  • apply_now – Bake the transformation right away (otherwise, just changes the matrix). Default: False

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RotationOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RotationOperation, angles: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), adjust_size: bool = False, apply_now: bool = False, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
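
Example (minimal sketch; which image axis each angle refers to is an assumption here):

>>> import numpy as np
>>> import imfusion.machinelearning as ml
>>> op = ml.RotationOperation(angles=np.array([0.0, 0.0, 90.0]), adjust_size=True, apply_now=True)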

class imfusion.machinelearning.RunModelOperation(*args, **kwargs)

Bases: Operation

Run a machine learning model on the input item and merge the prediction into the input item. The input field names specified in the model config YAML are used to determine on which fields of the input data item the model is run. If the model does not specify any input field (i.e. it is a single-input model), the user can either provide an input data item with a single image element, or use apply_to to specify on which field the model should be run. The input item will be populated with the model prediction; the field names are those specified in the model configuration. If no output name is specified (i.e. the single-output case), the prediction will be associated with the field “Prediction”.

Parameters:
  • config_path – path to the YAML configuration file of the pixelwise model

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.RunModelOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.RunModelOperation, config_path: str, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
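
Example (minimal sketch; the configuration path is hypothetical):

>>> import imfusion.machinelearning as ml
>>> op = ml.RunModelOperation("model/config.yaml")  # hypothetical path to a model YAML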

class imfusion.machinelearning.SISBasedElement

Bases: DataElement

to_sis(self: SISBasedElement) SharedImageSet
property sis

Access to the underlying SharedImageSet.

class imfusion.machinelearning.ScalingOperation(*args, **kwargs)

Bases: Operation

Scale input images and label maps with fixed factors.

Parameters:
  • scales – Scaling factor applied to each dimension. Default: [1, 1, 1]

  • apply_now – Bake the transformation right away (otherwise, just changes the matrix). Default: True

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ScalingOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ScalingOperation, scales: numpy.ndarray[numpy.float64[3, 1]] = array([1., 1., 1.]), apply_now: bool = True, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.SelectChannelsOperation(*args, **kwargs)

Bases: Operation

Keeps a subset of the input channels specified by the selected channel indices (0-based indexing).

Parameters:
  • selected_channels – List of channels to be selected in input. If empty, use all channels.

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.SelectChannelsOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.SelectChannelsOperation, selected_channels: list[int], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
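
Example (minimal sketch, keeping only the first channel):

>>> import imfusion.machinelearning as ml
>>> op = ml.SelectChannelsOperation([0])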

class imfusion.machinelearning.SetLabelModalityOperation(*args, **kwargs)

Bases: Operation

Sets the input modality. If the target modality is LABEL, warns and skips fields that are not unsigned 8-bit integer. The default processing policy is to apply to targets only.

Parameters:
  • label_names – List of non-background label names. The label with index zero is assigned the name ‘Background’.

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Other parameters accepted by configure():
  • modality: 8

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.SetLabelModalityOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.SetLabelModalityOperation, label_names: list[str], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.SetMatrixToIdentityOperation(*args, **kwargs)

Bases: Operation

Set the matrices of all images to identity (associated landmarks and boxes will be moved accordingly).

Parameters:
  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

__init__(self: imfusion.machinelearning._bindings.SetMatrixToIdentityOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.SetModalityOperation(*args, **kwargs)

Bases: Operation

Sets the input modality. If the target modality is LABEL, warns and skips fields that are not unsigned 8-bit integer. The default processing policy is to apply to all fields.

Parameters:
  • modality – Modality to set the input to.

  • label_names – List of non-background label names. The label with index zero is assigned the name ‘Background’. Default: [].

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.SetModalityOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.SetModalityOperation, modality: imfusion._bindings.Data.Modality, label_names: list[str] = [], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.SetSpacingOperation(*args, **kwargs)

Bases: Operation

Modify images so that image elements have the specified spacing (associated landmarks and boxes will be moved accordingly).

Parameters:
  • spacing – Target spacing.

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.SetSpacingOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.SetSpacingOperation, spacing: numpy.ndarray[numpy.float64[3, 1]] = array([1., 1., 1.]), *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.SigmoidOperation(*args, **kwargs)

Bases: Operation

Apply a sigmoid function on the input image. \(\textnormal{output} = 1.0/(1.0 + \exp(- \textnormal{scale} * \textnormal{input}))\)

Parameters:
  • scale – Scale parameter within the sigmoid function. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.SigmoidOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.SigmoidOperation, scale: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.SmoothOperation(*args, **kwargs)

Bases: Operation

Run a convolution with a Gaussian kernel on the input image. The kernel can be parameterized either in pixel or in mm, and can be anisotropic.

Parameters:
  • half_kernel_size – Half size of the convolution kernel in pixels or mm.

  • kernel_size_in_mm – Interpret kernel size as mm. Otherwise uses pixels. Default: False

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.SmoothOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.SmoothOperation, half_kernel_size: numpy.ndarray[numpy.float64[3, 1]], kernel_size_in_mm: bool = False, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
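
Example (minimal sketch with an anisotropic half-kernel given in pixels):

>>> import numpy as np
>>> import imfusion.machinelearning as ml
>>> op = ml.SmoothOperation(np.array([2.0, 2.0, 0.0]), kernel_size_in_mm=False)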

class imfusion.machinelearning.SoftmaxOperation(*args, **kwargs)

Bases: Operation

Computes channel-wise softmax on input.

Parameters:
  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

__init__(self: imfusion.machinelearning._bindings.SoftmaxOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.SplitROISampler(*args, **kwargs)

Bases: ImageROISampler

Sampler which extracts ROIs with a target size from the input image and label map, spaced regularly along each dimension. This sampler mimics the situation at test time, when one image may be regularly divided along all dimensions.

Parameters:
  • roi_size – Target size of the ROIs to be extracted as [Width, Height, Slices]

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Other parameters accepted by configure():
  • label_padding_mode: EnumStringParam assigned to clamp in {clamp; mirror; zero}

  • padding_mode: EnumStringParam assigned to clamp in {clamp; mirror; zero}

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.SplitROISampler, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.SplitROISampler, roi_size: numpy.ndarray[numpy.int32[3, 1]], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
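
Example (minimal sketch; roi_size is an integer 3-vector in [Width, Height, Slices]):

>>> import numpy as np
>>> import imfusion.machinelearning as ml
>>> sampler = ml.SplitROISampler(np.array([64, 64, 32], dtype=np.int32))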

class imfusion.machinelearning.SurfaceDistancesMetric(self: SurfaceDistancesMetric, symmetric: bool = True, crop_margin: int = -1)

Bases: Metric

class Results(self: Results)

Bases: pybind11_object

property all_distances
property max_absolute_distance
property mean_absolute_distance
property mean_signed_distance
compute_distances(self: SurfaceDistancesMetric, prediction: SharedImageSet, target: SharedImageSet) list[dict[int, Results]]
class imfusion.machinelearning.SwapImageAndLabelsOperation(*args, **kwargs)

Bases: Operation

Swaps image and label map.

Parameters:
  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

__init__(self: imfusion.machinelearning._bindings.SwapImageAndLabelsOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.SyncOperation(*args, **kwargs)

Bases: Operation

Synchronizes shared memory (CPU <-> OpenGL) of images.

Parameters:
  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

__init__(self: imfusion.machinelearning._bindings.SyncOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.TanhOperation(*args, **kwargs)

Bases: Operation

Apply a tanh function on the input image. \(\textnormal{output} = \tanh(\textnormal{scale} * \textnormal{input})\)

Parameters:
  • scale – Scale parameter within the tanh function. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.TanhOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.TanhOperation, scale: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.TargetTag(self: TargetTag)

Bases: DataComponentBase

class imfusion.machinelearning.TemplateInpaintingOperation(*args, **kwargs)

Bases: Operation

Inpaints a template into an image with specified spatial and intensity transformation.

Parameters:
  • template_path – path to load template .imf file.

  • template_rotation – rotation of template in degrees per axis. Default: [0, 0, 0]

  • template_translation – translation of the template along each axis. Default: [0, 0, 0]

  • add_values_to_existing – Adding values to input image rather than replacing them. Default: False

  • template_mult_factor – Multiply template intensities with this factor. Default: 1.0

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.TemplateInpaintingOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.TemplateInpaintingOperation, template_path: str = '', template_rotation: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), template_translation: numpy.ndarray[numpy.float64[3, 1]] = array([0., 0., 0.]), add_values_to_existing: bool = False, template_mult_factor: float = 1.0, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.Tensor(self: Tensor, tensor: Buffer)

Bases: Data

Class for managing raw Tensors

This class is meant to have direct control over tensors either passed to, or received from a MachineLearningModel. Unlike the SISBasedElements, there is no inherent stacking/permuting of tensors, and there are no constraints on the order of the Tensor.

Note

The API for this class is experimental and may change soon.

Create an ml.Tensor from a numpy array.

property shape

Return shape of tensor.
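
Example (minimal sketch; the array shape is arbitrary since Tensor imposes no ordering constraints):

>>> import numpy as np
>>> import imfusion.machinelearning as ml
>>> t = ml.Tensor(np.zeros((1, 3, 32, 32), dtype=np.float32))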

class imfusion.machinelearning.TensorElement(self: TensorElement, tensor: Tensor)

Bases: DataElement

Class for managing raw Tensors

This class is meant to have direct control over tensors either passed to, or received from a MachineLearningModel. Unlike the SISBasedElements, there is no inherent stacking/permuting of tensors, and there are no constraints on the order of the Tensor.

Note

The API for this class is experimental and may change soon.

Initialize a TensorElement from a Tensor.

Parameters:

tensor (imfusion.Tensor) –

property tensor

Access to the underlying Tensor.

class imfusion.machinelearning.ThresholdOperation(*args, **kwargs)

Bases: Operation

Threshold the input image to a binary map with only 0 or 1 values.

Parameters:
  • value – Threshold value (strictly) above which the pixel will be set to 1. Default: 0.0

  • to_ubyte – Output image must be unsigned byte instead of float. Default: False

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

  • apply_to – Specifies fields in a DataItem that this Operation should process (if empty, will select suitable fields based on the current processing_policy)

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.ThresholdOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.ThresholdOperation, value: float = 0.0, to_ubyte: bool = False, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, apply_to: Optional[list[str]] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None
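
Example (minimal sketch, setting pixels strictly above 0.5 to 1 and converting to unsigned byte):

>>> import imfusion.machinelearning as ml
>>> op = ml.ThresholdOperation(value=0.5, to_ubyte=True)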

class imfusion.machinelearning.UnmarkAsTargetOperation(*args, **kwargs)

Bases: Operation

Unmark elements from the input data item as learning “target”. This operation is the opposite of MarkAsTargetOperation.

Parameters:
  • apply_to – fields to unmark as targets (will initialize the underlying apply_to parameter)

  • device – Specifies whether this Operation should run on CPU or GPU.

  • seed – Specifies seeding for any randomness that might be contained in this operation.

  • error_on_unexpected_behaviour – Specifies whether to throw an exception instead of warning about unexpected behaviour.

Overloaded function.

  1. __init__(self: imfusion.machinelearning._bindings.UnmarkAsTargetOperation, *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

  2. __init__(self: imfusion.machinelearning._bindings.UnmarkAsTargetOperation, apply_to: list[str], *, device: Optional[imfusion.machinelearning._bindings.ComputingDevice] = None, seed: Optional[int] = None, error_on_unexpected_behaviour: Optional[bool] = None) -> None

class imfusion.machinelearning.VectorElement(self: VectorElement, vectors: SharedImageSet)

Bases: SISBasedElement

Initialize a VectorElement from a SharedImageSet.

Parameters:

vectors (SharedImageSet) – image set to be converted to a VectorElement

from_torch()
imfusion.machinelearning.available_cpp_engines() list[str]

Returns the list of registered C++ engines available for usage in MachineLearningModel.

imfusion.machinelearning.available_engines() list[str]

Returns the list of all registered engines available for usage in MachineLearningModel.

imfusion.machinelearning.available_py_engines() list[str]

Returns the list of registered Python engines available for usage in MachineLearningModel.

imfusion.machinelearning.is_semantic_segmentation_map(sis: SharedImageSet) bool
imfusion.machinelearning.is_target(sis: SharedImageSet) bool
imfusion.machinelearning.propertylist_to_data_loader_specs(properties: list[Properties]) list[DataLoaderSpecs]

Parse a properties object into a vector of DataLoaderSpecs.

imfusion.machinelearning.register_filter_func(arg0: str, arg1: Callable[[DataItem], bool]) None

Register user-defined function to be used in Dataset.filter decorator function

imfusion.machinelearning.register_map_func(arg0: str, arg1: Callable[[DataItem], None]) None

Register user-defined function to be used in Dataset.map decorator function

imfusion.machinelearning.register_py_op_cls(arg0: object) None
imfusion.machinelearning.tag_as_target(sis: SharedImageSet) None
imfusion.machinelearning.to_torch(self: DataElement | SharedImageSet | SharedImage, device: device = None, dtype: dtype = None, same_as: Tensor = None) Tensor

Convert SharedImageSet or a SharedImage to a torch.Tensor.

Parameters:
  • self (DataElement | SharedImageSet | SharedImage) – Instance of SharedImageSet or SharedImage (this function is bound as a method to SharedImageSet and SharedImage)

  • device (device) – Target device for the new torch.Tensor

  • dtype (dtype) – Type of the new torch.Tensor

  • same_as (Tensor) – Template tensor whose device and dtype configuration should be matched. device and dtype are still applied afterwards.

Returns:

New torch.Tensor

Return type:

Tensor
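
Example (minimal sketch; assumes sis is an existing SharedImageSet):

>>> import torch
>>> import imfusion.machinelearning as ml
>>> t = ml.to_torch(sis, device=torch.device("cpu"), dtype=torch.float32)  # sis loaded elsewhere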

imfusion.machinelearning.untag_as_target(sis: SharedImageSet) None

imfusion.mesh

Submodule containing routines for pre- and post-processing meshes.

class imfusion.mesh.PointDistanceResult(self: PointDistanceResult, mean_distance: float, median_distance: float, standard_deviation: float, min_distance: float, max_distance: float, distances: ndarray[numpy.float64[m, 1]])

Bases: pybind11_object

property distances
property max_distance
property mean_distance
property median_distance
property min_distance
property standard_deviation
class imfusion.mesh.Primitive(self: Primitive, value: int)

Bases: pybind11_object

Enumeration of supported mesh primitives.

Members:

SPHERE

CYLINDER

PYRAMID

CUBE

ICOSAHEDRON_SPHERE

CONE

GRID

CONE = <Primitive.CONE: 5>
CUBE = <Primitive.CUBE: 3>
CYLINDER = <Primitive.CYLINDER: 1>
GRID = <Primitive.GRID: 6>
ICOSAHEDRON_SPHERE = <Primitive.ICOSAHEDRON_SPHERE: 4>
PYRAMID = <Primitive.PYRAMID: 2>
SPHERE = <Primitive.SPHERE: 0>
property name
property value
imfusion.mesh.create(shape: Primitive) Mesh

Create a mesh primitive.

Args:

shape: The shape of the primitive to create.

imfusion.mesh.point_distance(*args, **kwargs)

Overloaded function.

  1. point_distance(target: imfusion._bindings.Mesh, source: imfusion._bindings.Mesh, signed_distance: bool = False, range_of_interest: Optional[tuple[int, int]] = None) -> imfusion._bindings.mesh.PointDistanceResult

    Compute point-wise distances between: (1) source mesh vertices and the target mesh surface, (2) the source point cloud and the target mesh surface, (3) source mesh vertices and target point cloud vertices, or (4) the source point cloud and the target point cloud.

    Args:

    target: Target data, defining the locations to estimate the distance to.

    source: Source data, defining the locations to estimate the distance from.

    signed_distance: Whether to compute signed distances (applicable to meshes only). Defaults to False.

    range_of_interest: Optional range of distances to consider (min, max) in percentage (integer-valued). Distances outside of this range will be set to NaN. Statistics are computed only over non-NaN distances. Defaults to None.

    Returns:

    A PointDistanceResult object containing the computed statistics and distances.

  2. point_distance(target: imfusion._bindings.PointCloud, source: imfusion._bindings.Mesh, signed_distance: bool = False, range_of_interest: Optional[tuple[int, int]] = None) -> imfusion._bindings.mesh.PointDistanceResult

  3. point_distance(target: imfusion._bindings.PointCloud, source: imfusion._bindings.PointCloud, signed_distance: bool = False, range_of_interest: Optional[tuple[int, int]] = None) -> imfusion._bindings.mesh.PointDistanceResult

  4. point_distance(target: imfusion._bindings.Mesh, source: imfusion._bindings.PointCloud, signed_distance: bool = False, range_of_interest: Optional[tuple[int, int]] = None) -> imfusion._bindings.mesh.PointDistanceResult
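
Example (minimal sketch, comparing two mesh primitives):

>>> import imfusion.mesh
>>> sphere = imfusion.mesh.create(imfusion.mesh.Primitive.SPHERE)
>>> icosphere = imfusion.mesh.create(imfusion.mesh.Primitive.ICOSAHEDRON_SPHERE)
>>> result = imfusion.mesh.point_distance(target=sphere, source=icosphere, signed_distance=True)
>>> mean = result.mean_distance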

imfusion.registration

This module contains functionality for all kinds of registration tasks. You can find a demonstration of how to perform image registration on our GitHub.

class imfusion.registration.AbstractImageRegistration

Bases: BaseAlgorithm

class imfusion.registration.DescriptorsRegistrationAlgorithm(self: DescriptorsRegistrationAlgorithm, arg0: SharedImageSet, arg1: SharedImageSet)

Bases: pybind11_object

Class for performing image registration using local feature descriptors.

This algorithm performs the following steps: 1) Preprocess the fixed and moving images to prepare them for feature extraction; this consists of resampling to the target spacing and baking in the rotation. 2) Extract feature descriptors using either DISAFeaturesAlgorithm or MINDDescriptorAlgorithm, depending on descriptor_type. 3) Compute the weight for the moving image features. 4) Instantiate and use FeatureMapsRegistrationAlgorithm to register the feature descriptor images. The computed registration is then applied to the moving image.

class DescriptorType(self: DescriptorType, value: int)

Bases: pybind11_object

Members:

DISA : Use the DISA descriptors defined in the paper “DISA: DIfferentiable Similarity Approximation for Universal Multimodal Registration”, Ronchetti et al. 2023

MIND

DISA = <DescriptorType.DISA: 0>
MIND = <DescriptorType.MIND: 1>
property name
property value
globalRegistration(self: DescriptorsRegistrationAlgorithm) None
heatmap(self: DescriptorsRegistrationAlgorithm, point: ndarray[numpy.float64[3, 1]]) SharedImageSet
initialize_pose(self: DescriptorsRegistrationAlgorithm) None
localRegistration(self: DescriptorsRegistrationAlgorithm) None
processed_fixed(self: DescriptorsRegistrationAlgorithm) SharedImageSet
processed_moving(self: DescriptorsRegistrationAlgorithm) SharedImageSet
DISA = <DescriptorType.DISA: 0>
MIND = <DescriptorType.MIND: 1>
property spacing
property type
property weight
class imfusion.registration.ImageRegistrationAlgorithm(self: imfusion.registration.ImageRegistrationAlgorithm, fixed: imfusion._bindings.SharedImageSet, moving: imfusion._bindings.SharedImageSet, model: imfusion.registration.ImageRegistrationAlgorithm.TransformationModel = <TransformationModel.LINEAR: 0>)

Bases: BaseAlgorithm

High-level interface for image registration. The image registration algorithm wraps several concrete image registration algorithms (e.g. linear and deformable) and extends them with pre-processing techniques. Available pre-processing options include downsampling and gradient-magnitude computation (used for LC2). On creation, the algorithm tries to find the best settings for the registration problem depending on the modality, size and other properties of the input images. The image registration comes with a default set of different transformation models.

Parameters:
  • fixed – Input image that stays fixed during the registration.

  • moving – Input image that will be moved during the registration.

  • model – Defines the registration approach to use. Defaults to rigid / affine registration.

class PreprocessingOptions(self: PreprocessingOptions, value: int)

Bases: pybind11_object

Flags to enable/disable certain preprocessing options.

Members:

NO_PREPROCESSING : Disable preprocessing completely (this cannot be ORed with other options)

RESTRICT_MEMORY : Downsamples the images so that the registration will not use more than a given maximum of (video) memory

ADJUST_SPACING : if the spacing difference of both images is large, the larger spacing is adjusted to the smaller one

IGNORE_FILTERING : Ignore any PreProcessingFilter required by the AbstractImageRegistration object

CACHE_RESULTS : Store PreProcessing results and only re-compute if necessary

NORMALIZE : Normalize images to float range [0.0, 1.0]

ADJUST_SPACING = <PreprocessingOptions.ADJUST_SPACING: 2>
CACHE_RESULTS = <PreprocessingOptions.CACHE_RESULTS: 16>
IGNORE_FILTERING = <PreprocessingOptions.IGNORE_FILTERING: 4>
NORMALIZE = <PreprocessingOptions.NORMALIZE: 32>
NO_PREPROCESSING = <PreprocessingOptions.NO_PREPROCESSING: 0>
RESTRICT_MEMORY = <PreprocessingOptions.RESTRICT_MEMORY: 1>
property name
property value
class TransformationModel(self: TransformationModel, value: int)

Bases: pybind11_object

Available transformation models. Each one represents a specific registration approach.

Members:

LINEAR : Rigid or affine DOF registration

FFD : Registration with non-linear Free-Form deformations

TPS : Registration with non-linear Thin-Plate-Splines deformations

DEMONS : Registration with non-linear dense (per-pixel) deformations

GREEDY_DEMONS : Registration with non-linear dense (per-pixel) deformations using patch-based SimilarityMeasures

POLY_RIGID : Registration with poly-rigid (i.e. partially piecewise rigid) deformations.

USER_DEFINED

DEMONS = <TransformationModel.DEMONS: 3>
FFD = <TransformationModel.FFD: 1>
GREEDY_DEMONS = <TransformationModel.GREEDY_DEMONS: 4>
LINEAR = <TransformationModel.LINEAR: 0>
POLY_RIGID = <TransformationModel.POLY_RIGID: 5>
TPS = <TransformationModel.TPS: 2>
USER_DEFINED = <TransformationModel.USER_DEFINED: 100>
property name
property value
compute_preprocessing(self: ImageRegistrationAlgorithm) bool

Applies the pre-processing options on the input images. Results are cached so this is a no-op if the preprocessing options have not changed. This function is automatically called by the compute method, and therefore does not have to be explicitly called in most cases.

reset(self: ImageRegistrationAlgorithm) None

Resets the transformation of moving to its initial transformation.

swap_fixed_and_moving(self: ImageRegistrationAlgorithm) None

Swaps which image is considered fixed and moving.

ADJUST_SPACING = <PreprocessingOptions.ADJUST_SPACING: 2>
CACHE_RESULTS = <PreprocessingOptions.CACHE_RESULTS: 16>
DEMONS = <TransformationModel.DEMONS: 3>
FFD = <TransformationModel.FFD: 1>
GREEDY_DEMONS = <TransformationModel.GREEDY_DEMONS: 4>
IGNORE_FILTERING = <PreprocessingOptions.IGNORE_FILTERING: 4>
LINEAR = <TransformationModel.LINEAR: 0>
NORMALIZE = <PreprocessingOptions.NORMALIZE: 32>
NO_PREPROCESSING = <PreprocessingOptions.NO_PREPROCESSING: 0>
POLY_RIGID = <TransformationModel.POLY_RIGID: 5>
RESTRICT_MEMORY = <PreprocessingOptions.RESTRICT_MEMORY: 1>
TPS = <TransformationModel.TPS: 2>
USER_DEFINED = <TransformationModel.USER_DEFINED: 100>
property best_similarity

Returns the best value of the similarity measure after optimization.

property fixed

Returns input image that is currently considered to be fixed.

property is_deformable

Indicates whether the current configuration uses a deformable registration

property max_memory

Restrict the memory used by the registration to the given amount in mebibyte. The value can be set in any case but will only have an effect if the RestrictMemory option is enabled. This will restrict video memory as well. The minimum size is 64 MB (the value will be clamped).

property moving

Returns input image that is currently considered to be moving.

property optimizer

Reference to the underlying optimizer.

property param_registration

Reference to the underlying parametric registration object that actually performs the computation (e.g. parametric registration, deformable registration, etc.). Will return None if the transformation model is not parametric.

property preprocessing_options

Which options should be enabled for preprocessing. The options are bitwise OR combination of PreprocessingOptions.

property registration

Reference to the underlying registration object that actually performs the computation (e.g. parametric registration, deformable registration, etc.)

property transformation_model

Transformation model to be used for the registration. If the transformation model changes, internal objects will be deleted and recreated. The configuration of the current model will be saved and the new model will be configured with any previously saved configuration for that model. Any attached identity deformations are removed from both images.

property verbose

Indicates whether the algorithm is going to print additional and detailed info messages.
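
Example (minimal end-to-end sketch; assumes fixed and moving are SharedImageSet instances loaded elsewhere):

>>> from imfusion.registration import ImageRegistrationAlgorithm
>>> algo = ImageRegistrationAlgorithm(fixed, moving, model=ImageRegistrationAlgorithm.TransformationModel.LINEAR)
>>> algo.compute()
>>> best = algo.best_similarity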

class imfusion.registration.ParametricImageRegistration

Bases: BaseAlgorithm

class imfusion.registration.RegistrationInitAlgorithm(self: RegistrationInitAlgorithm, image1: SharedImageSet, image2: SharedImageSet)

Bases: BaseAlgorithm

Initialize the registration of two volumes by moving the second one.

class Mode(self: Mode, value: int)

Bases: pybind11_object

Specifies how the distance between images should be computed.

Members:

BOUNDING_BOX

CENTER_OF_MASS

BOUNDING_BOX = <Mode.BOUNDING_BOX: 0>
CENTER_OF_MASS = <Mode.CENTER_OF_MASS: 1>
property name
property value
BOUNDING_BOX = <Mode.BOUNDING_BOX: 0>
CENTER_OF_MASS = <Mode.CENTER_OF_MASS: 1>
property mode

Initialization mode (align bounding box centers, or center of mass).

class imfusion.registration.VolumeBasedMeshRegistrationAlgorithm(self: VolumeBasedMeshRegistrationAlgorithm, fixed: Mesh, moving: Mesh, pointcloud: PointCloud = None)

Bases: BaseAlgorithm

Calculates a deformable registration between two meshes by computing a deformable registration between their distance volumes. Internally, an instance of the DemonsImageRegistration algorithm is used to register the “fixed” distance volume to the “moving” distance volume. As this registration computes the inverse of the mapping from the fixed to the moving volume, this directly yields a registration of the “moving” Mesh to the “fixed” Mesh.

imfusion.registration.apply_deformation(image: SharedImageSet, adjust_size: bool = True, nearest_interpolation: bool = False) SharedImageSet

Creates a deformed image from the input image and its deformation.

Parameters:
  • image (SharedImageSet) – Input image assumed to have a deformation.

  • adjust_size (bool) – Whether the resulting image should adjust its size to encompass the deformation.

  • nearest_interpolation (bool) – Whether nearest or linear interpolation is used.
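
Example (minimal sketch; assumes image is a SharedImageSet carrying a deformation, e.g. after a deformable registration):

>>> from imfusion.registration import apply_deformation
>>> deformed = apply_deformation(image, adjust_size=True, nearest_interpolation=False)  # image loaded elsewhere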