Frame Grabbing of Ultrasound Images
The acquisition and processing of ultrasound images involves transferring the content of each frame from the device into the ImFusion framework. Unfortunately, it is not always possible to use a dedicated API provided by the device manufacturer to retrieve the ultrasound image and imaging parameters natively. In many cases it is necessary to retrieve the images through a video (or frame) grabber, and to manually enter the imaging parameters displayed in the user interface of the machine into our software.

Raw video input, with a convex geometry defined and highlighted in the UI
The ImFusion Suite provides extensive functionalities to support this use case:
a number of framegrabbing devices are supported by our high-performance acquisition pipeline
it is possible to modify the image on-the-fly, making use of SIMD and GPU acceleration (when appropriate)
the frame geometry (the outline of the actual ultrasound image on screen) can be automatically recognized, or conveniently manipulated from the ImFusion Suite user interface, to extract the image content from the raw video stream
the processed images, consisting only of useful ultrasound image pixels, can be recorded in real time (with considerable disk space savings compared to storing the raw video output)
the relevant imaging parameters (such as physical image depth) can be conveniently set through the UI or passed to the methods contained in our library to derive the physical image spacing and enable freehand 3D ultrasound compounding
it is also possible to define the frame geometry at different imaging depths, and to automatically switch between these presets based on the UI currently shown on screen

Resulting processed ultrasound data
The processed Ultrasound Stream can be combined with a Tracking Stream (e.g. from a stereo optical tracking device) or with our image-based frame pose estimation to obtain an Ultrasound Sweep, where the position of each frame in space is known. A calibration step is required first to determine the relative position between the tracking target and the ultrasound image (see Ultrasound Calibration for more details).
The rest of this document will describe how to acquire raw video input through a frame grabbing device, and how to employ the functionalities of our software to refine such data into a usable Ultrasound Stream.
Preparing the Ultrasound Machine
The ultrasound machine should be set to B-Mode imaging. The imaging depth should be chosen to be compatible with the targeted clinical application. It is crucial that this setting can be reproduced in the future, as the location in space of each pixel will be computed on the basis of this information.
Other imaging parameters can be chosen freely and adapted at runtime, as long as the location and size of the image remains unaffected. This may not hold true if the UI of the device changes in response to the variation of some imaging parameter.
Please note that the machine may enter “freeze” mode, giving the impression that the frame grabbing stream has been interrupted. The “freeze” button of the US device can be used to unfreeze it and resume the acquisition.
Connecting a Framegrabber Device
A framegrabber device connects the workstation running the ImFusion software to one of the video outputs of the ultrasound machine. It is effectively recognized by the machine as a monitor, and forwards the received video signal in a format that can be interpreted by a client program.
Most ultrasound machines provide auxiliary video output sockets, in a format such as DVI or HDMI, to allow the connection of a secondary monitor. A compatible framegrabber should be chosen according to the video output present on the machine.
An increasing share of the framegrabbing devices available on the market can be used in a “driver-free” fashion, such that no explicit support is required from the ImFusion framework. However, a native integration with the Epiphan SDK is included in order to support older devices from this manufacturer (such as the VGA2USB family).
A “driver-free” device is recognized by the operating system as a webcam, so its output stream can be imported into the ImFusion Suite via the Video Camera Stream. Epiphan devices requiring the installation of the specific driver can be connected to using the Epiphan Stream from the Import menu. These operations create a generic ImFusion Video Stream, which is compatible with all generic image manipulation functionalities present in the framework. In the next sections we will see how to convert this stream into a processed Ultrasound Stream.
A Video Stream of either type can also be created within an ImFusion Workspace, as below. A convenient way to generate the required XML configuration is to use the “Save Workspace” functionality of the ImFusion Suite after achieving the desired setup through the user interface.
<!--ImFusion Suite, ...-->
<propertyfile version="...">
    ...
    <property name="Algorithms">
        <property name="Video Camera Stream">
            ...
            <param name="execute">1</param>
            <param name="inputUids"></param>
            <param name="outputUids">"data0" </param>
        </property>
    </property>
    ...
</propertyfile>
Warning
It is worth mentioning that the framegrabber may not recognize the resolution of the video output correctly, producing a deformed output or artifacts therein. Alternatively, the operating system of the ultrasound device may adopt the wrong resolution for the “virtual monitor” represented by the framegrabber.
Both issues may require using software tools provided by the manufacturer of the framegrabber to configure its internal parameters, or setting a static EDID configuration that the framegrabber should forward to the machine upon connection. Please refer to the operating manual of the framegrabber in such cases.
Processing the Video Stream into an Ultrasound Stream
The raw framegrabbed video is not yet suitable for most applications, such as 3D compounding:
the region of the video containing the B-Mode intensity information must be cut out. While this reduces to a simple cropping operation for linear probes, it entails a more complex operation for the curved edges of a convex or sector-shaped probe
the incoming video is most likely in color format (RGB), although B-Mode content is inherently grayscale
until the ultrasound frame geometry is known, the position in space of each pixel cannot be determined on a metric basis
further UI elements leaking into the image may need to be removed
Our ultrasound stream processing tools allow performing these operations on-the-fly, making use of GPU acceleration if available. The advantages of this approach include instantaneous feedback for the user on the quality of the recorded data, as well as a significant improvement in the performance of later stages of the processing pipeline and a noticeable reduction in the space required to store the recorded sweeps.
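The core of these operations can be sketched in plain Python (an illustrative example, not the ImFusion implementation): the frame is cropped to the region of interest and the RGB channels are collapsed into a single intensity channel.

```python
# Illustrative sketch of cropping and grayscale reduction; a frame is
# modeled as a nested list frame[row][col] = (r, g, b).

def process_frame(frame, crop):
    """Crop to (x, y, w, h) and reduce RGB to a single intensity channel."""
    x, y, w, h = crop
    out = []
    for row in frame[y:y + h]:
        # B-Mode content is inherently grayscale, so the channel mean is a
        # reasonable single-channel intensity.
        out.append([sum(px) // 3 for px in row[x:x + w]])
    return out

# A 4x4 RGB frame with a 2x2 region of interest:
frame = [[(10, 10, 10)] * 4 for _ in range(4)]
roi = process_frame(frame, crop=(1, 1, 2, 2))
# Storage per frame shrinks from 4*4*3 = 48 values to 2*2*1 = 4 values.
```

In practice these steps run at full resolution with SIMD/GPU acceleration; the sketch only illustrates why the processed stream is so much smaller than the raw video.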
A Processed Ultrasound Stream can be created from any Video Stream present in the Data Widget, for example one created by connecting to a framegrabbing device as described above. However, a stream playing back a raw video file can be used as well. In both cases, the required action is right-clicking on the Video Stream in the widget in the top-left of the screen, and selecting “Ultrasound” -> “Process Ultrasound Stream”. This creates a new stream that takes the old one as input, and forwards the result of processing each incoming frame in real time. Selecting either stream in the data widget will update the 2D view to show the original or the processed stream data.
<!--ImFusion Suite, ...-->
<propertyfile version="...">
    ...
    <property name="Algorithms">
        <property name="Video Camera Stream">
            ...
            <param name="outputUids">"data0" </param>
        </property>
        <property name="Process Ultrasound Stream">
            <param name="execute">1</param>
            <param name="inputUids">"data0" </param>
            ...
        </property>
    </property>
    ...
</propertyfile>
The next sections will describe how to configure the Process Ultrasound Stream algorithm to achieve a correct output.
Configuring a Frame Geometry
As described in the corresponding section, the Frame Geometry describes the shape and size of the set of pixels that make up the ultrasound image, as well as the physical shape and size of the region of space that was scanned to create the image itself. Making use of the pixel spacing, the spatial location of each pixel cut out from the captured video content can be computed with respect to the probe. The tracking information and the ultrasound calibration then allow us to compute the probe position in world space for each frame, and finally the world position of each individual pixel.
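This chain of transforms can be illustrated with a small sketch. It uses plain Python with hypothetical 2D homogeneous matrices and made-up numbers; the actual framework works with full 3D transforms through its own API.

```python
# Illustrative transform chain: pixel -> image (mm, via spacing)
# -> tracking target (calibration) -> world (tracking). All matrices
# are hypothetical 3x3 homogeneous 2D transforms.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, p):
    x, y = p
    return [m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2]]

spacing = 0.2  # mm per pixel, derived from the imaging depth
pixel_to_image = [[spacing, 0, 0], [0, spacing, 0], [0, 0, 1]]
calibration = [[1, 0, 5], [0, 1, -3], [0, 0, 1]]  # image -> tracking target
tracking = [[1, 0, 100], [0, 1, 200], [0, 0, 1]]  # target -> world (this frame)

pixel_to_world = matmul(tracking, matmul(calibration, pixel_to_image))
print(apply(pixel_to_world, (50, 80)))  # world position of pixel (50, 80)
```

Since the tracking matrix changes with every frame, the composed transform has to be recomputed per frame, while the calibration and spacing parts stay fixed for a given setup.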
Most native integrations with the ultrasound device APIs allow retrieving the geometry information programmatically, without user input. In the frame grabbing scenario, on the other hand, manual intervention is required. The ImFusion framework allows configuring a linear, convex or sector-shaped geometry through a small set of parameters. See the Fan section in Sweep Properties for more details on the different geometries. The geometry type and its parameters can be set through the user interface of the Suite, which provides a convenient set of interactive manipulation tools.
After visual confirmation, it is possible to save the configuration to a workspace file for later retrieval, as part of the Processed Ultrasound Stream configuration. The workspace file can also be written by hand, as follows:
<property name="Process Ultrasound Stream">
    ...
    <property name="processing">
        <property name="parameters">
            ...
            <property name="frameGeometry">
                <property name="FrameGeometryConvex">
                    <param name="offset">840 420 </param>
                    <param name="isTopDown">1</param>
                    <param name="indicatorPosition">0</param>
                    <param name="coordinateSystem">0</param>
                    <param name="shortRadius">525</param>
                    <param name="longRadius">1050</param>
                    <param name="openingAngle">35</param>
                </property>
            </property>
        </property>
    </property>
</property>
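To build some intuition for these parameters, the following sketch tests whether a pixel falls inside a convex fan. It assumes that offset is the fan apex in pixels, that openingAngle is the half-angle in degrees, and that the fan opens downwards; the actual ImFusion conventions may differ, so treat this purely as an illustration.

```python
import math

def in_convex_fan(px, py, offset, short_radius, long_radius, opening_angle):
    """Return True if pixel (px, py) lies inside the hypothetical fan."""
    dx, dy = px - offset[0], py - offset[1]
    r = math.hypot(dx, dy)
    # The pixel must lie between the two arcs...
    if not (short_radius <= r <= long_radius):
        return False
    # ...and within the opening angle, measured from the downward fan axis.
    angle = math.degrees(math.atan2(dx, dy))
    return abs(angle) <= opening_angle

# A pixel on the fan axis, between the two radii of the example geometry:
print(in_convex_fan(840, 1220, (840, 420), 525, 1050, 35))  # -> True
```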
When the Processed Ultrasound Stream is selected in the Data Widget, a purple line follows the edge of the defined Frame Geometry in the 2D view. A well-configured geometry should neatly cut the B-Mode image content out of the surrounding UI, without including parts of the background in the selected region. As long as the “Depth” checkbox is not activated (its function is described later), this outline is interactive and can be dragged at the edges and corners to adjust the parameters of the geometry in the controller widget on the left. These parameters can also be typed directly into the controller UI.
It is possible to toggle the “Mask pixels outside geometry” option to check that all foreign pixels have been excluded from the geometry. This option should be activated before acquiring an ultrasound sweep, to make sure that everything outside the geometry is ignored. The “Crop around geometry” option also reduces the image size to tightly fit around the defined geometry, decreasing the recorded image size. This option should also be activated before recording. It is helpful to turn up the gain of the ultrasound machine, so that the contours of the image are easier to see against the static dark background, in particular in the deeper regions of the image.
A few further elements of the controller UI can help in getting started. The “Default” button resets the geometry to the center of the screen, with a size that can be easily manipulated by grabbing the edges. The “Full image” button changes the geometry size and position to tightly fit the image; for non-linear geometries, this button should be pressed only after setting the opening angle manually. The “Detect” button attempts to automatically recognize the frame geometry; for optimal performance it is advised to turn the gain setting to the maximum. The result can still be adjusted manually if needed.
After the geometry is correctly configured, the imaging depth can be entered by enabling the “Depth” checkbox and typing the current setting as displayed by the user interface of the ultrasound machine. This sets the pixel spacing of the image, and a ruler consequently appears in the 2D view, so that the physical size of the objects appearing in the ultrasound frame can be estimated. The checkbox can be temporarily deactivated again to alter the frame geometry, if necessary.
Configuring other Processing Parameters
Further parameters specify additional operations on the video content to prepare it for usage as an ultrasound sweep. These parameters can also be set through the UI and saved to a workspace. Proficient users may want to generate the XML file manually.
<property name="Process Ultrasound Stream">
    ...
    <property name="processing">
        <property name="parameters">
            <param name="applyCrop">0</param>
            <param name="applyMask">0</param>
            <param name="applyDepth">1</param>
            <param name="depth">150</param>
            <param name="removeColorThreshold">0</param>
            <param name="inpaint">0</param>
            <param name="extraCrop">0 0 0 0 </param>
            <property name="frameGeometry">
                ...
            </property>
        </property>
    </property>
</property>
The depth parameter specifies the imaging range starting from the surface of the transducer, in millimeters. This setting is employed by the framework to compute the pixel spacing of the image, by dividing the physical depth by the number of pixels in the vertical direction from the center point of the top edge to the center point of the lower edge. Assuming that the image is displayed on a consumer monitor with square pixels, this value is used as the isotropic pixel spacing in both directions. This enables the measurement of objects through their appearance in the live image and in the recorded ultrasound sweep. Setting this parameter correctly is fundamental for most advanced use cases involving ultrasound images in our framework.
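The computation reduces to a single division, sketched here with illustrative values:

```python
# The imaging depth (mm) divided by the image height in pixels gives an
# isotropic pixel spacing, assuming square monitor pixels.
def pixel_spacing(depth_mm, height_px):
    return depth_mm / height_px

spacing = pixel_spacing(150, 1000)  # 150 mm of depth imaged over 1000 pixels
print(spacing)  # 0.15 mm per pixel
# An object spanning 200 pixels then measures 200 * 0.15 = 30 mm.
```

This is why the depth setting on the machine must match the value entered in the software exactly: a wrong depth scales every measurement and every compounded volume by the same factor.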
If the ultrasound image is cropped by the UI of the ultrasound device at one of the edges, it is still possible to add an additional cropping while preserving the parameters of the geometry, such as the opening angle and the radii of a convex or sector geometry. To this end, the “Extra crop margins” checkbox should be enabled. The number of pixels to be removed from each side can then be entered manually in the widgets that appear.
If the ultrasound device has a lower frame rate than the framegrabber, it is possible that the same ultrasound image is recorded multiple times before the content actually changes. In this case, it is possible to enable the “Remove duplicate frames” checkbox to have the framework compare incoming consecutive frames, and discard those that are identical to the last one received.
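The idea can be sketched as follows (illustrative, not the actual implementation):

```python
# Discard incoming frames that are identical to the previously kept one.
def drop_duplicates(frames):
    kept, last = [], None
    for frame in frames:
        if frame != last:
            kept.append(frame)
            last = frame
    return kept

# A 60 Hz grabber sampling a ~30 Hz device yields pairs of identical frames:
print(drop_duplicates(["A", "A", "B", "B", "C"]))  # ['A', 'B', 'C']
```

Note that only exact repetitions are removed; frames that differ in any pixel, e.g. because of a blinking UI element, are kept.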
If elements of the ultrasound device UI are leaking into the image, the ultrasound stream processing can detect these regions and blacken them out, or use sophisticated inpainting techniques to fill the invalid region with content that is consistent with the surrounding pixels. This can lead to fewer artifacts, for example in the resulting compounded volumes. The UI elements are recognized by their color: in contrast to the B-Mode content, they are not strictly grayscale, and their color channels are imbalanced. The “Threshold for color removal” specifies the minimum imbalance between a pixel’s RGB channels for it to be considered as belonging to a UI element. Increasing this threshold raises the amount of color imbalance required for a pixel to be processed. The “Inpaint masked regions” option enables the intelligent inpainting technique for the replaced pixels, in contrast to simple filling with zero values.
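The color-imbalance criterion can be sketched as follows (an illustrative approximation; the actual processing runs accelerated and may use a different color metric):

```python
# Flag a pixel as a UI element when the spread between its RGB channels
# exceeds the removal threshold; masked pixels are zeroed here (the
# inpainting variant would instead fill them from their neighborhood).
def mask_colored_pixels(pixels, threshold):
    out = []
    for r, g, b in pixels:
        imbalance = max(r, g, b) - min(r, g, b)
        if imbalance > threshold:
            out.append((0, 0, 0))   # colored UI element: blacken out
        else:
            out.append((r, g, b))   # grayscale B-Mode content: keep
    return out

pixels = [(100, 100, 100), (200, 80, 40)]  # gray tissue pixel, orange UI pixel
print(mask_colored_pixels(pixels, threshold=20))
```

With a threshold of 0 every slightly tinted pixel is masked; raising the threshold tolerates small imbalances, e.g. from video compression, at the cost of letting faintly colored UI elements through.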
Configuring Presets for the Frame Geometry
If the targeted clinical application requires working with different settings for the ultrasound imaging, such as different values for the imaging depth, having to manually change the parameters of the ImFusion ultrasound stream processing back and forth can be very tedious. This can be avoided by defining a preset for each imaging depth value that will be employed in regular usage. Applying a preset replaces the ultrasound frame geometry with the one that was set at the time the preset was defined. Furthermore, it is possible to instruct the framework to automatically switch to a particular preset when the UI of the ultrasound device shows that one of its internal imaging parameters has changed.

List of two presets
Creating a New Preset
The Process Ultrasound Stream controller, shown in the left panel of the Suite when a Video Stream is being processed into an Ultrasound Stream, contains the UI elements for creating a new preset. After defining a preset, saving a workspace will generate its XML representation as follows, so that the current state of the application can be restored. The XML representation can also be manually edited or generated.
<property name="Process Ultrasound Stream">
    ...
    <property name="presets">
        <property name="Convex 18cm">
            <param name="depth">180</param>
            <property name="frameGeometry">
                <property name="FrameGeometryConvex">
                    <param name="offset">840 420 </param>
                    <param name="isTopDown">1</param>
                    <param name="indicatorPosition">0</param>
                    <param name="coordinateSystem">0</param>
                    <param name="shortRadius">525</param>
                    <param name="longRadius">1050</param>
                    <param name="openingAngle">35</param>
                </property>
            </property>
        </property>
        <property name="Convex 15cm">
            <param name="depth">150</param>
            <property name="frameGeometry">
                <property name="FrameGeometryConvex">
                    <param name="offset">840 470 </param>
                    <param name="isTopDown">1</param>
                    <param name="indicatorPosition">0</param>
                    <param name="coordinateSystem">0</param>
                    <param name="shortRadius">535</param>
                    <param name="longRadius">1250</param>
                    <param name="openingAngle">35</param>
                </property>
            </property>
        </property>
    </property>
</property>
After configuring the Ultrasound Stream Processing as described in the previous sections, the presets widget of the Process Ultrasound Stream controller can be expanded to show the controls for creating, updating and deleting presets.
The name for the new preset should be entered into the respective text widget, and the Save button should be pressed. This creates the new preset in memory. Changing any of the processing parameters in the UI will deactivate the preset, so its name will disappear from the text widget.
Using an Existing Preset
After saving a preset, or after restoring a workspace containing a preset for the ultrasound stream processing, the preset can be activated again by selecting it in the text widget where the name was entered (the list of existing presets can be displayed by clicking on the small arrow pointing down in the text widget itself). Choosing a preset will overwrite the current processing configuration with the one saved in the preset.
Deleting an Existing Preset
A preset can be deleted by clicking the Remove button next to the text widget, while the preset is active.
Modifying an Existing Preset
A preset cannot be edited directly. It should first be deleted, then recreated under the old name after setting the parameters to the desired values.
Warning
Typing the name of the preset to be changed will reactivate it, overwriting the current configuration, which will then be lost! To avoid this, save the current configuration under a temporary name first.
Saving the Configured Presets
The Save Workspace functionality can be used to save the full state of the video acquisition and stream processing functionalities. This includes the defined presets as well.