ImFusion SDK 4.3
MachineLearningModel Class Reference

#include <ImFusion/ML/MachineLearningModel.h>

Class for managing and executing a machine learning model on generic input data. More...

Detailed Description

Class for managing and executing a machine learning model on generic input data.

A model consists of several components:

  • A Preprocessing pipeline (see Data Pipelines) for preparing the input data.
  • An Engine for running the model on the prepared input.
  • A Postprocessing pipeline (see Data Pipelines) for modifying the prediction from the engine.
    Note
    An Engine is a proxy class representing a serialized model from a third-party ML framework, such as Torch, ONNX, TensorFlow, etc. Any specific implementation of the Engine interface resides in a dedicated plugin, which also wraps the logic and the libraries needed for correctly deserializing and running the model. See Engine for more details.
    Splitting and recombination of the input image into patches is specified in the configuration file as dedicated preprocessing (SplitIntoPatches) and postprocessing (RecombinePatches) operations. When specified, these are typically the last operation in the preprocessing and the first operation in the postprocessing. In that case, the MachineLearningModel uses both operations lazily, i.e. patches are extracted from the input image, fed into the engine, and the predictions recombined on the fly. This is useful for large images that would not fit in memory. If the splitting or recombination is not respectively the last/first operation in the preprocessing/postprocessing section, any operation specified after splitting or before recombining is executed on the input/output patch as part of the lazy prediction scheme.
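As a minimal usage sketch (the config path is a placeholder, the enclosing namespace is assumed to be ImFusion, and only the factory and predict overloads documented below are used):

```cpp
#include <ImFusion/ML/MachineLearningModel.h>

// Sketch only: "model.yaml" is a placeholder configuration path and the
// ImFusion namespace is an assumption. If the config contains
// SplitIntoPatches / RecombinePatches, predict() applies them lazily as
// described above.
std::unique_ptr<ImFusion::SharedImageSet> runModel(const ImFusion::SharedImageSet& images)
{
    auto model = ImFusion::MachineLearningModel::create("model.yaml");
    if (!model)
        return nullptr; // required resources could not be acquired

    // Convenience overload: pre-processing, engine execution and
    // post-processing from the configuration are applied in sequence.
    return model->predict(images);
}
```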

Classes

struct  Context
 

Public Member Functions

DataItem predict (const DataItem &input)
 Method to execute a generic multiple-input/multiple-output model. The input and output type of a machine learning model is DataItem, a heterogeneous map-like container holding the data needed and returned by the model.
 
std::unique_ptr< SharedImageSet > predict (const SharedImageSet &images)
 Convenience method to execute a single-input/single-output image-based model.
 
const ModelConfiguration & config () const
 Returns the configuration of this model as const.
 
ModelConfiguration & config ()
 Returns the configuration of this model.
 
const Engine * engine () const
 Returns a const pointer to the underlying engine.
 
Engine * engine ()
 Returns a pointer to the underlying engine. This is useful for setting CPU/GPU mode, querying whether CUDA is available, etc.
 
const OperationsSequence & preprocessingSequence () const
 Returns a const reference to the pre-processing operation sequence.
 
OperationsSequence & preprocessingSequence ()
 Returns a reference to the pre-processing operation sequence.
 
const OperationsSequence * postprocessingSequence () const
 Returns a const pointer to the post-processing operation sequence.
 
OperationsSequence * postprocessingSequence ()
 Returns a pointer to the post-processing operation sequence.
 
void setProgress (Progress *progress)
 Set the progress.
 
DataItem runEngine (const DataItem &input)
 Runs the machine-learning model without any pre-processing or post-processing operations.
 

Static Public Member Functions

static std::unique_ptr< MachineLearningModel > create (std::string configPath, PredictionOutput defaultPredictionOutput=PredictionOutput::Unknown) noexcept
 Factory function for creating a machine learning model.
 
static std::pair< std::unique_ptr< MachineLearningModel >, Status > createWithStatus (std::string configPath, PredictionOutput defaultPredictionOutput=PredictionOutput::Unknown) noexcept
 Factory function for creating a machine learning model and returning it together with its creation status. Useful for custom handling of failure cases, since the caller can consume the status object.
 

Protected Member Functions

DataItem applyPreProcessing (const DataItem &input) const
 
bool executeFrameByFrame (const DataItem &preprocessedInput, DataItem &outputItem, Progress::Task &task)
 
bool executeBatch (const DataItem &preprocessedInput, DataItem &outputItem, Progress::Task &task)
 
DataItem setupOutputItemContainers () const
 
bool executeFrameByFrameV2 (const DataItem &input, DataItem &outputItem, Progress::Task &task)
 
bool executeBatchV2 (const DataItem &input, DataItem &outputItem, Progress::Task &task)
 
 MachineLearningModel (std::string configPath, PredictionOutput defaultPredictionOutput=PredictionOutput::Unknown, bool delayEngineLoading=false)
 Constructor from configuration file.
 
Status init (std::string configPath, PredictionOutput defaultPredictionOutput=PredictionOutput::Unknown, bool delayEngineLoading=false)
 Protected function that is only used by the MachineLearningModelAlgorithm to delay the loading of the engine.
 
bool createEngine ()
 Internal function to create the engine object.
 

Protected Attributes

std::unique_ptr< ModelConfiguration > m_config
 Model configuration.
 
std::unique_ptr< OperationsSequence > m_preprocessingOp = nullptr
 Pre-processing operation sequence.
 
std::unique_ptr< OperationsSequence > m_postprocessingOp = nullptr
 Post-processing operation sequence.
 
std::unique_ptr< Engine > m_engine
 The engine running a model serialized by a specific ML framework.
 
std::unique_ptr< SplitImagesAlgorithm > m_splitter
 Algorithm for padding/splitting and unpadding/recombining images.
 
std::unique_ptr< OperationsSequence > m_preprocessingAfterSplitting = nullptr
 Pre-processing operations applied after the split step; used by the NeuralNetworkV2 implementation.
 
std::unique_ptr< OperationsSequence > m_postprocessingBeforeRecombine = nullptr
 Post-processing operations applied before the recombine step.
 
std::unique_ptr< Operation > m_splitOp = nullptr
 Split operation for optionally splitting an input image into patches.
 
std::unique_ptr< Operation > m_recombineOp = nullptr
 Recombine operation for recombining predicted patches.
 
Context m_context
 Context store for keeping track of relevant data across the ML model lifetime.
 
Progress * m_progress = nullptr
 Progress bar.
 

Constructor & Destructor Documentation

◆ MachineLearningModel()

MachineLearningModel ( std::string configPath,
PredictionOutput defaultPredictionOutput = PredictionOutput::Unknown,
bool delayEngineLoading = false )
explicit protected

Constructor from configuration file.

Parameters
configPath  Path to the configuration file used to create the ModelConfiguration object owned by the model.
defaultPredictionOutput  Specifies the default prediction output of the model if it is missing from the config file.
delayEngineLoading  Whether to delay loading the engine's saved model until predict() is called, instead of loading it immediately at construction; defaults to false.
Note
The prediction output type must be specified either in the constructor or in the configuration file under the key PredictionOutput. If no prediction output is specified, the model throws an error; if it is specified in both places, the one from the config file is used. This construct exists to support older config files where the prediction output type is not specified, and may be changed in the future.
Postponing loading of the engine is used only by the algorithm/controller pair associated with this class to improve the UI experience (loading the engine's saved model takes some time). It should not be used in the SDK, as it bypasses the checks that all resources required by an ML model can be acquired without problems.
Exceptions
MLModelException  if any of the model components is not configured correctly.

Member Function Documentation

◆ create()

static std::unique_ptr< MachineLearningModel > create ( std::string configPath,
PredictionOutput defaultPredictionOutput = PredictionOutput::Unknown )
static noexcept

Factory function for creating a machine learning model.

If the resources required by the MachineLearningModel could not be acquired, returns a null pointer.

Parameters
configPath  Path to the configuration file used to create the ModelConfiguration object owned by the model.
defaultPredictionOutput  Specifies the prediction output of the model if it is missing from the config file.
Note
The prediction output type must be specified either in the factory call or in the configuration file under the key PredictionOutput. If no prediction output is specified, creation fails; if it is specified in both places, the one from the config file is used. This construct exists to support older config files where the prediction output type is not specified, and may be changed in the future.

◆ createWithStatus()

static std::pair< std::unique_ptr< MachineLearningModel >, Status > createWithStatus ( std::string configPath,
PredictionOutput defaultPredictionOutput = PredictionOutput::Unknown )
static noexcept

Factory function for creating a machine learning model and returning it together with its creation status. Useful for custom handling of failure cases, since the caller can consume the status object.

Note
This method doesn't log any (error) messages; handling and reporting of the status is left to the user.
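A hedged sketch of consuming the result (the config path and the ImFusion namespace are assumptions; the Status interface is not documented on this page, so the status object is only forwarded, not inspected):

```cpp
#include <ImFusion/ML/MachineLearningModel.h>

void createModelWithCustomErrorHandling()
{
    // createWithStatus() returns both the model and its creation status.
    auto [model, status] = ImFusion::MachineLearningModel::createWithStatus("model.yaml");
    if (!model)
    {
        // createWithStatus() logs nothing itself: report the failure in
        // application code, e.g. by passing `status` to your own logging.
    }
}
```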

◆ predict() [1/2]

DataItem predict ( const DataItem & input)

Method to execute a generic multiple-input/multiple-output model. The input and output type of a machine learning model is DataItem, a heterogeneous map-like container holding the data needed and returned by the model.

Parameters
input  Input data item containing all data to be used for inference.
Returns
Post-processed prediction
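To contrast this overload with runEngine(), which skips the configured pipelines, a sketch (DataItem construction is omitted because its API is not documented on this page; the namespace is an assumption):

```cpp
#include <ImFusion/ML/MachineLearningModel.h>

void compare(ImFusion::MachineLearningModel& model, const ImFusion::DataItem& item)
{
    // Full pipeline: pre-processing -> engine -> post-processing.
    ImFusion::DataItem withPipelines = model.predict(item);

    // Engine only: the input is fed to the engine unmodified.
    ImFusion::DataItem engineOnly = model.runEngine(item);
}
```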

◆ predict() [2/2]

std::unique_ptr< SharedImageSet > predict ( const SharedImageSet & images)

Convenience method to execute a single-input/single-output image-based model.

Parameters
images  Input image set to be used for inference.
Returns
Post-processed prediction images
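Combining this overload with setProgress(), a sketch (the namespace is an assumption; setProgress() takes a raw pointer, so the caller presumably keeps ownership of the Progress object, though this page does not state that explicitly):

```cpp
#include <ImFusion/ML/MachineLearningModel.h>

// Sketch: image-based inference with optional progress reporting.
std::unique_ptr<ImFusion::SharedImageSet> infer(ImFusion::MachineLearningModel& model,
                                                const ImFusion::SharedImageSet& images,
                                                ImFusion::Progress* progress)
{
    model.setProgress(progress); // presumably may be nullptr (the default member value)
    return model.predict(images);
}
```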

Member Data Documentation

◆ m_preprocessingAfterSplitting

std::unique_ptr<OperationsSequence> m_preprocessingAfterSplitting = nullptr
protected

Pre-processing operations applied after the split step; used by the NeuralNetworkV2 implementation.


The documentation for this class was generated from the following file:
ImFusion/ML/MachineLearningModel.h