Run Model

This algorithm runs a machine learning model in inference mode on a given input. The model is specified by an Inference YAML Configuration file that controls how the model is loaded, how the input is prepared, optionally which sampling strategy is applied, and how the inference output is post-processed. For more information, see Machine Learning Model.
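As an illustration, such a configuration file might look like the sketch below. This is only a hypothetical example: the key names (Model, PreProcessing, Sampling, PostProcessing) and values are assumptions chosen to show the four responsibilities mentioned above, not the actual schema of the Inference YAML Configuration format; consult the Machine Learning Model documentation for the real keys.

```yaml
# Hypothetical inference configuration sketch (key names are illustrative only).
Model: path/to/model.onnx        # which model file to load
PreProcessing:                   # how the input is prepared before inference
  Normalize: true
Sampling:                        # optional sampling strategy over the input
  Mode: FullImage
PostProcessing:                  # how the raw model output is transformed
  ArgMax: true
```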

Input

Any set of Data supported by DataItem. Currently, the supported Data types are SharedImageSet, KeypointSet, BoundingBoxSet and Tensor.

Output

The result of the model inference: a set of Data supported by DataItem (see Input).

Description

In the controller, select the path to the YAML model configuration file and click Compute.