Engines
The ImFusion::ML::Engine class is the interface that abstracts the specific framework implementation used to run a trained model. It allows the MachineLearningModel and the MLPlugin to be agnostic of the framework used to train the model, and it avoids an explicit dependency on that framework in the ImFusionSuite/SDK. Instead, the plugin mechanism is used to load the specific framework implementation.
The TorchPlugin, for instance, is where the TorchEngine is implemented. The TorchEngine is registered into the EngineFactory when the TorchPlugin is loaded, and it is then available at runtime to the MachineLearningModel as the execution Engine for a model configured via the Inference Yaml Config (see the Machine Learning Model page for more details). To this purpose, the user just needs to specify the Name of the Engine in the Inference Yaml Config, which in this case is torch, and the TorchPlugin will take care of loading the correct implementation.
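For illustration, selecting the engine by name in the Inference Yaml Config could look like the fragment below. The surrounding keys are a hypothetical sketch only; the authoritative schema is described on the Machine Learning Model page.

```yaml
# Hypothetical sketch, not the authoritative schema: the point is that
# the execution Engine is chosen by its registered Name, here "torch".
Engine:
  Name: torch
```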
#include "ImFusion/ML/Engine.h"

// Include the headers of the specific framework, e.g. <torch/torch.h>
#include <external/ml/framework.h>

class MyEngine : public ImFusion::ML::Engine
{
public:
	explicit MyEngine(const Properties& properties)
	{
		// Initialize the engine with the given properties
	}

	/// Compute predictions of all classes for multiple images
	ImFusion::ML::DataItem predict(const ImFusion::ML::DataItem& input) override
	{
		// Compute the predictions by using the specific framework implementation,
		// e.g. torch::Tensor output = model.forward(tensor);
		ImFusion::ML::DataItem result;
		// ... fill result with the computed predictions ...
		return result;
	}
};

// Register the engine in the factory in the plugin constructor.
// This engine will be available to the MachineLearningModel at runtime
// via the Inference Yaml Config using the Name "my_engine".
ML::getCppEngineFactory()->registerType<MyEngine>("my_engine");
This is the preferred way to integrate new frameworks into the ImFusionSuite/SDK when they are meant to be used in production. The drawback of this approach is that the user needs to set up a new plugin for each framework, and that plugin needs to consume the C++ version of the specific framework library, which can become very tedious and time-consuming.
As an alternative, if the user has access to the imfusion-sdk python package, it is possible to implement a new Engine in Python. This is useful for testing purposes or for prototyping new models, but it is not encouraged for production use-cases where the user needs to ship C++ code only. To this purpose, the imfusion-sdk python package provides bindings for the Engine class, and the user can implement a new Engine class in Python by subclassing imfusion.ml.Engine and implementing the predict(imfusion.ml.DataItem item) method.
import imfusion as imf
import imfusion.machinelearning as ml

# Try to import the specific framework; if it fails, re-raise the ImportError
# (this will prevent the engine from being registered).
# The user must ensure that the specific framework is installed in the
# environment where the ``imfusion-sdk`` python package is being used.
try:
    import my_framework  # e.g. import torch
except ImportError as e:
    imf.log_debug(f"Could not register 'my_engine' engine: {str(e)}")
    raise


# The factory_name is used to identify the engine in the Inference Yaml Config
class MyEngine(ml.Engine, factory_name=["my_engine"]):
    def __init__(self, properties: imf.Properties):
        # Initialize the engine with the given properties
        pass

    def predict(self, item: ml.DataItem) -> ml.DataItem:
        # Compute the predictions by using the specific framework implementation,
        # e.g. output = model.forward(some_tensor)
        pass
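The factory_name class keyword in the example above is handled through Python's subclass hooks. As a self-contained illustration of the general pattern (not the actual imfusion implementation, which may differ), a name-based registry built on `__init_subclass__` can be sketched as:

```python
class Engine:
    """Minimal sketch of a base class with a name-based subclass registry."""

    _factory: dict = {}

    def __init_subclass__(cls, factory_name=None, **kwargs):
        super().__init_subclass__(**kwargs)
        # Register the subclass under every name it declares
        for name in factory_name or []:
            Engine._factory[name] = cls

    @classmethod
    def create(cls, name):
        # Instantiate the subclass registered under the given name
        return Engine._factory[name]()


class MyEngine(Engine, factory_name=["my_engine"]):
    def predict(self, item):
        return item


engine = Engine.create("my_engine")
print(type(engine).__name__)  # prints "MyEngine"
```

Defining the subclass is enough to register it: no explicit registration call is needed, which matches how the engine becomes available to the factory as soon as its module is imported.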
The advantage of this approach is that the user just needs to create a new file and install the specific framework in the environment where the imfusion-sdk python package is being used.
Note
The imfusion-sdk python package is not self-sufficient: it needs to be installed in an environment where the specific python package (e.g. torch or onnxruntime) is also installed.