#include <ImFusion/ML/MachineLearningModel.h>
Class for managing and executing a machine learning model on generic input data.
A model consists of several components:
- A Preprocessing pipeline (see Data Pipelines) for preparing the input data.
- An Engine for running the model on the prepared input.
- A Postprocessing pipeline (see Data Pipelines) for modifying the prediction from the engine.
- Note
- An Engine is a proxy class representing a serialized model from a third-party ML framework, such as Torch, ONNX, or TensorFlow. Any specific implementation of the Engine interface resides in a dedicated plugin, which also wraps the logic and the libraries needed to correctly deserialize and run the model. See Engine for more details.
-
Splitting the input image into patches and recombining them is specified in the configuration file as dedicated preprocessing (SplitIntoPatches) and postprocessing (RecombinePatches) operations. When specified, these operations are typically the last operation in the preprocessing and the first operation in the postprocessing. In that case, the MachineLearningModel applies both operations lazily, i.e. patches are extracted from the input image, fed into the engine, and the predictions recombined on the fly. This is useful for large images that would not fit in memory. If the splitting or recombination is not, respectively, the last/first operation in the preprocessing/postprocessing section, any operation specified after the splitting or before the recombination is executed on the input/output patch as part of the lazy prediction scheme.
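The lazy patch scheme described above can be illustrated with a minimal, self-contained sketch (this is not the ImFusion implementation; the 1-D "image", patch size, and dummy engine are stand-ins chosen for clarity):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical engine stand-in: negates each value of a patch.
std::vector<float> runEngineOnPatch(const std::vector<float>& patch)
{
    std::vector<float> out(patch.size());
    std::transform(patch.begin(), patch.end(), out.begin(),
                   [](float v) { return -v; });
    return out;
}

// Lazy patch-based prediction on a 1-D "image": each patch is extracted,
// fed through the engine, and written back into the output immediately,
// so only one patch needs to be resident at a time.
std::vector<float> predictLazily(const std::vector<float>& image, std::size_t patchSize)
{
    std::vector<float> prediction(image.size());
    for (std::size_t start = 0; start < image.size(); start += patchSize)
    {
        std::size_t end = std::min(start + patchSize, image.size());
        std::vector<float> patch(image.begin() + start, image.begin() + end);      // SplitIntoPatches
        std::vector<float> predicted = runEngineOnPatch(patch);                    // Engine
        std::copy(predicted.begin(), predicted.end(), prediction.begin() + start); // RecombinePatches
    }
    return prediction;
}
```

In the real model, any operations configured after SplitIntoPatches or before RecombinePatches would run per patch inside this loop.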
DataItem predict(const DataItem &input)
    Executes a generic multiple-input/multiple-output model. The input and output type of a machine learning model is DataItem, a heterogeneous map-like container for the data consumed and returned by the model.

std::unique_ptr<SharedImageSet> predict(const SharedImageSet &images)
    Convenience method to execute a single-input/single-output image-based model.

const ModelConfiguration &config() const
    Returns the configuration of this model as const.

ModelConfiguration &config()
    Returns the configuration of this model.

const Engine *engine() const
    Returns a const pointer to the underlying engine.

Engine *engine()
    Returns a pointer to the underlying engine. This is useful for setting CPU/GPU mode, querying whether CUDA is available, etc.

const OperationsSequence &preprocessingSequence() const
    Returns a const reference to the pre-processing operation sequence.

OperationsSequence &preprocessingSequence()
    Returns a reference to the pre-processing operation sequence.

const OperationsSequence &postprocessingSequence() const
    Returns a const reference to the post-processing operation sequence.

OperationsSequence &postprocessingSequence()
    Returns a reference to the post-processing operation sequence.

void setProgress(Progress *progress)
    Sets the progress object.

DataItem runEngine(const DataItem &input)
    Runs the machine-learning model without any pre-processing or post-processing operations.

DataItem applyPreProcessing(const DataItem &input) const

bool executeFrameByFrame(const DataItem &preprocessedInput, DataItem &outputItem, Progress::Task &task)

bool executeBatch(const DataItem &preprocessedInput, DataItem &outputItem, Progress::Task &task)

DataItem setupOutputItemContainers() const

bool executeFrameByFrameV2(const DataItem &input, DataItem &outputItem, Progress::Task &task)

bool executeBatchV2(const DataItem &input, DataItem &outputItem, Progress::Task &task)

MachineLearningModel(std::string configPath, PredictionOutput defaultPredictionOutput=PredictionOutput::Unknown, bool delayEngineLoading=false)
    Constructor from configuration file.

Status init(std::string configPath, PredictionOutput defaultPredictionOutput=PredictionOutput::Unknown, bool delayEngineLoading=false)
    Protected function that is only used by the MachineLearningModelAlgorithm to delay the loading of the engine.

bool createEngine()
    Internal function to create the engine object.
◆ MachineLearningModel()
Constructor from configuration file.
- Parameters
-
| configPath | Path to the configuration file used to create the ModelConfiguration object owned by the model |
| defaultPredictionOutput | Specifies the default prediction output of the model if it is missing from the config file |
| delayEngineLoading | Whether to delay loading the engine's saved model until predict is called, instead of loading it immediately at construction; defaults to false |
- Note
- The prediction output type must be specified either in the constructor or in the configuration file under the key PredictionOutput. If no prediction output is specified, the model throws an error; if it is specified in both places, the one from the config file is used. This mechanism exists to support older config files in which the prediction output type is not specified, and thus might change in the future.
-
Postponing the loading of the engine is used only by the algorithm/controller pair associated with this class to improve the UI experience (loading the engine's saved model takes some time). It should not be used in the SDK, as it bypasses the checks that all resources required by an ML model can be acquired without problems.
- Exceptions
-
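The trade-off behind delayEngineLoading can be sketched with a minimal, self-contained stand-in (FakeEngine and Model below are hypothetical; they are not ImFusion types). Eager loading pays the deserialization cost, and surfaces any failure, at construction; delayed loading postpones both until the first prediction:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <utility>

// Hypothetical engine stand-in; constructing it represents the expensive
// deserialization of the saved model.
struct FakeEngine
{
    explicit FakeEngine(std::string path) : modelPath(std::move(path)) {}
    std::string modelPath;
};

// Sketch of the two loading strategies: eager (at construction) versus
// delayed (on first prediction).
class Model
{
public:
    Model(std::string configPath, bool delayEngineLoading)
        : m_configPath(std::move(configPath))
    {
        if (!delayEngineLoading)
            m_engine = std::make_unique<FakeEngine>(m_configPath); // eager: fail fast
    }

    int predict(int input)
    {
        if (!m_engine) // delayed: load on first use
            m_engine = std::make_unique<FakeEngine>(m_configPath);
        return input + 1; // dummy inference
    }

    bool engineLoaded() const { return m_engine != nullptr; }

private:
    std::string m_configPath;
    std::unique_ptr<FakeEngine> m_engine;
};
```

As the note above states, the delayed path skips the up-front check that the model's resources can be acquired, which is why it is reserved for the UI layer.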
◆ create()
Factory function for creating a machine learning model.
If the resources required by the MachineLearningModel could not be acquired, returns an invalid pointer.
- Parameters
-
| configPath | Path to the configuration file used to create the ModelConfiguration object owned by the model |
| defaultPredictionOutput | Specifies the prediction output of the model if it is missing from the config file |
- Note
- The prediction output type must be specified either in the constructor or in the configuration file under the key PredictionOutput. If no prediction output is specified, the model throws an error; if it is specified in both places, the one from the config file is used. This mechanism exists to support older config files in which the prediction output type is not specified, and thus might change in the future.
◆ createWithStatus()
Factory function for creating a machine learning model and returning it together with its creation status. Useful for custom handling of failure cases, since the user can consume the status object.
- Note
- This method does not log any (error) messages; handling of the status is left to the user.
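The pattern behind createWithStatus() can be sketched with self-contained stand-ins (FakeStatus, FakeModel, and createWithStatusSketch below are hypothetical, not ImFusion types): the factory returns the model together with a status object and logs nothing, so the caller decides how to surface failures:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <utility>

// Hypothetical status type standing in for a creation status.
struct FakeStatus
{
    bool ok;
    std::string message;
};

// Hypothetical model stand-in.
struct FakeModel
{
    explicit FakeModel(std::string path) : configPath(std::move(path)) {}
    std::string configPath;
};

// Factory returning the model together with its creation status; nothing is
// logged, so the caller consumes the status and reports errors as it sees fit.
std::pair<std::unique_ptr<FakeModel>, FakeStatus> createWithStatusSketch(const std::string& configPath)
{
    if (configPath.empty())
        return std::make_pair(std::unique_ptr<FakeModel>{}, FakeStatus{false, "empty config path"});
    return std::make_pair(std::make_unique<FakeModel>(configPath), FakeStatus{true, ""});
}
```
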
◆ predict() [1/2]
Executes a generic multiple-input/multiple-output model. The input and output type of a machine learning model is DataItem, a heterogeneous map-like container for the data consumed and returned by the model.
- Parameters
-
| input | Input data item containing all data to be used for inference |
- Returns
- Post-processed prediction
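The heterogeneous map-like container idea behind DataItem can be sketched with a self-contained stand-in (FakeDataItem, the entry keys "image"/"labelMap"/"confidence", and predictSketch are all hypothetical, chosen only to illustrate the multiple-input/multiple-output shape of predict):

```cpp
#include <any>
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Minimal stand-in for DataItem: a heterogeneous, string-keyed container.
using FakeDataItem = std::map<std::string, std::any>;

// A multiple-input/multiple-output "model": reads named entries from the
// input item and writes named entries of different types into the output.
FakeDataItem predictSketch(const FakeDataItem& input)
{
    const auto& image = std::any_cast<const std::vector<float>&>(input.at("image"));
    FakeDataItem output;
    output["labelMap"] = std::vector<int>(image.size(), 0); // dummy prediction
    output["confidence"] = 0.5f;                            // extra scalar output
    return output;
}
```
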
◆ predict() [2/2]
Convenience method to execute a single-input/single-output image-based model.
- Parameters
-
| images | Input image set to be used for inference |
- Returns
- Post-processed prediction images
◆ m_preprocessingAfterSplitting
Member used by the NeuralNetworkV2 implementation.
Pre-processing operation sequence
The documentation for this class was generated from the following file:
- ImFusion/ML/MachineLearningModel.h