module Frameworks
#include "diplib/framework.h"
Functions that form the basis of most pixel-based processing in DIPlib.
The various frameworks implement iterating over image pixels, giving access to a single pixel, a whole image line, or a pixel’s neighborhood. The programmer needs to define a function that loops over one dimension. The framework will call this function repeatedly to process all the image’s lines, thereby freeing the programmer from implementing loops over multiple dimensions. This process allows most of DIPlib’s filters to be dimensionality independent, with little effort from the programmer. See Frameworks.
There are three frameworks that represent three different types of image processing functions:
- The Scan framework, to process individual pixels across multiple input and output images: dip::Framework::Scan.
- The Separable framework, to apply separable filters: dip::Framework::Separable.
- The Full framework, to apply non-separable filters: dip::Framework::Full.
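As a quick taste of the Scan framework, the sketch below applies a per-pixel function to an image using dip::Framework::NewMonadicScanLineFilter and dip::Framework::ScanMonadic (both documented below). The image names, the constant, and the choice of dip::sfloat are placeholder assumptions, not part of the API.

```cpp
#include <array>
#include "diplib.h"
#include "diplib/framework.h"

// Sketch: compute out = in * 100 + offset, sample by sample, in sfloat buffers.
void ScaleAndOffset( dip::Image const& in, dip::Image& out ) {
   dip::dfloat offset = 40;
   auto sampleOperator = [ = ]( std::array< dip::sfloat const*, 1 > its ) {
      return static_cast< dip::sfloat >( *its[ 0 ] * 100 + offset );
   };
   auto lineFilter = dip::Framework::NewMonadicScanLineFilter< dip::sfloat >( sampleOperator );
   dip::Framework::ScanMonadic( in, out, dip::DT_SFLOAT, dip::DT_SFLOAT, 1, *lineFilter,
                                dip::Framework::ScanOption::TensorAsSpatialDim );
}
```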
Namespaces
- namespace dip::Framework - Frameworks are the basis of most pixel-based processing in DIPlib.
Classes
- struct dip::Framework::ScanBuffer - Structure that holds information about input or output pixel buffers for the dip::Framework::Scan callback function object.
- struct dip::Framework::ScanLineFilterParameters - Parameters to the line filter for dip::Framework::Scan.
- class dip::Framework::ScanLineFilter (abstract) - Prototype line filter for dip::Framework::Scan.
- template<dip::uint N, typename TPI, typename F> class dip::Framework::VariadicScanLineFilter - An implementation of the ScanLineFilter for N input images and 1 output image.
- struct dip::Framework::SeparableBuffer - Structure that holds information about input or output pixel buffers for the dip::Framework::Separable callback function object.
- struct dip::Framework::SeparableLineFilterParameters - Parameters to the line filter for dip::Framework::Separable.
- class dip::Framework::SeparableLineFilter (abstract) - Prototype line filter for dip::Framework::Separable.
- struct dip::Framework::FullBuffer - Structure that holds information about input or output pixel buffers for the dip::Framework::Full callback function object.
- struct dip::Framework::FullLineFilterParameters - Parameters to the line filter for dip::Framework::Full.
- class dip::Framework::FullLineFilter (abstract) - Prototype line filter for dip::Framework::Full.
- class dip::Framework::ProjectionFunction (abstract) - Prototype line filter for dip::Framework::Projection.
Aliases
- using dip::Framework::ScanOptions = dip::detail::Options - Combines any number of dip::Framework::ScanOption constants together.
- using dip::Framework::SeparableOptions = dip::detail::Options - Combines any number of dip::Framework::SeparableOption constants together.
- using dip::Framework::FullOptions = dip::detail::Options - Combines any number of dip::Framework::FullOption constants together.
- using dip::Framework::ProjectionOptions = dip::detail::Options - Combines any number of dip::Framework::ProjectionOption constants together.
Enums
- enum class dip::Framework::ScanOption : uint8 { NoMultiThreading, NeedCoordinates, TensorAsSpatialDim, ExpandTensorInBuffer, NoSingletonExpansion, NotInPlace } - Defines options to the dip::Framework::Scan function.
- enum class dip::Framework::SeparableOption : uint8 { NoMultiThreading, AsScalarImage, ExpandTensorInBuffer, UseOutputBorder, DontResizeOutput, UseInputBuffer, UseOutputBuffer, CanWorkInPlace, UseRealComponentOfOutput } - Defines options to the dip::Framework::Separable function.
- enum class dip::Framework::FullOption : uint8 { NoMultiThreading, AsScalarImage, ExpandTensorInBuffer, BorderAlreadyExpanded } - Defines options to the dip::Framework::Full function.
- enum class dip::Framework::ProjectionOption : uint8 { NoMultiThreading } - Defines options to the dip::Framework::Projection function.
Functions
- void dip::Framework::SingletonExpandedSize(dip::UnsignedArray& size1, dip::UnsignedArray const& size2) - Determines the singleton-expanded size as a combination of the two sizes.
- auto dip::Framework::SingletonExpandedSize(dip::ImageConstRefArray const& in) -> dip::UnsignedArray - Determines if images can be singleton-expanded to the same size, and what that size would be.
- auto dip::Framework::SingletonExpandedSize(dip::ImageArray const& in) -> dip::UnsignedArray - Determines if images can be singleton-expanded to the same size, and what that size would be.
- auto dip::Framework::SingletonExpendedTensorElements(dip::ImageArray const& in) -> dip::uint - Determines if tensors in images can be singleton-expanded to the same size, and what that size would be.
- auto dip::Framework::OptimalProcessingDim(dip::Image const& in) -> dip::uint - Determines the best processing dimension, which is the one with the smallest stride, except if that dimension is very small and there’s a longer dimension.
- auto dip::Framework::OptimalProcessingDim(dip::Image const& in, dip::UnsignedArray const& kernelSizes) -> dip::uint - Determines the best processing dimension as above, but giving preference to a dimension where kernelSizes is large also.
- void dip::Framework::Scan(dip::ImageConstRefArray const& in, dip::ImageRefArray& out, dip::DataTypeArray const& inBufferTypes, dip::DataTypeArray const& outBufferTypes, dip::DataTypeArray const& outImageTypes, dip::UnsignedArray const& nTensorElements, dip::Framework::ScanLineFilter& lineFilter, dip::Framework::ScanOptions opts = {}) - Framework for pixel-based processing of images.
- void dip::Framework::ScanSingleOutput(dip::Image& out, dip::DataType bufferType, dip::Framework::ScanLineFilter& lineFilter, dip::Framework::ScanOptions opts = {}) - Calls dip::Framework::Scan with one output image, which is already forged. The lineFilter will be called with an output buffer of type bufferType.
- void dip::Framework::ScanSingleInput(dip::Image const& in, dip::Image const& c_mask, dip::DataType bufferType, dip::Framework::ScanLineFilter& lineFilter, dip::Framework::ScanOptions opts = {}) - Calls dip::Framework::Scan with one input image and a mask image, and no output image.
- void dip::Framework::ScanMonadic(dip::Image const& in, dip::Image& out, dip::DataType bufferTypes, dip::DataType outImageType, dip::uint nTensorElements, dip::Framework::ScanLineFilter& lineFilter, dip::Framework::ScanOptions opts = {}) - Calls dip::Framework::Scan with one input image and one output image.
- void dip::Framework::ScanDyadic(dip::Image const& in1, dip::Image const& in2, dip::Image& out, dip::DataType inBufferType, dip::DataType outBufferType, dip::DataType outImageType, dip::Framework::ScanLineFilter& lineFilter, dip::Framework::ScanOptions opts = {}) - Calls dip::Framework::Scan with two input images and one output image.
- template<typename TPI, typename F> auto dip::Framework::NewMonadicScanLineFilter(F const& func, dip::uint cost = 1) -> std::unique_ptr<ScanLineFilter> - Support for quickly defining monadic operators (1 input image, 1 output image). See dip::Framework::VariadicScanLineFilter.
- template<typename TPI, typename F> auto dip::Framework::NewDyadicScanLineFilter(F const& func, dip::uint cost = 1) -> std::unique_ptr<ScanLineFilter> - Support for quickly defining dyadic operators (2 input images, 1 output image). See dip::Framework::VariadicScanLineFilter.
- template<typename TPI, typename F> auto dip::Framework::NewTriadicScanLineFilter(F const& func, dip::uint cost = 1) -> std::unique_ptr<ScanLineFilter> - Support for quickly defining triadic operators (3 input images, 1 output image). See dip::Framework::VariadicScanLineFilter.
- template<typename TPI, typename F> auto dip::Framework::NewTetradicScanLineFilter(F const& func, dip::uint cost = 1) -> std::unique_ptr<ScanLineFilter> - Support for quickly defining tetradic operators (4 input images, 1 output image). See dip::Framework::VariadicScanLineFilter.
- void dip::Framework::Separable(dip::Image const& in, dip::Image& out, dip::DataType bufferType, dip::DataType outImageType, dip::BooleanArray process, dip::UnsignedArray border, dip::BoundaryConditionArray boundaryCondition, dip::Framework::SeparableLineFilter& lineFilter, dip::Framework::SeparableOptions opts = {}) - Framework for separable filtering of images.
- void dip::Framework::OneDimensionalLineFilter(dip::Image const& in, dip::Image& out, dip::DataType inBufferType, dip::DataType outBufferType, dip::DataType outImageType, dip::uint processingDimension, dip::uint border, dip::BoundaryCondition boundaryCondition, dip::Framework::SeparableLineFilter& lineFilter, dip::Framework::SeparableOptions opts = {}) - Framework for filtering of image lines. This is a version of dip::Framework::Separable that works along one dimension only.
- void dip::Framework::Full(dip::Image const& in, dip::Image& out, dip::DataType inBufferType, dip::DataType outBufferType, dip::DataType outImageType, dip::uint nTensorElements, dip::BoundaryConditionArray const& boundaryCondition, dip::Kernel const& kernel, dip::Framework::FullLineFilter& lineFilter, dip::Framework::FullOptions opts = {}) - Framework for filtering of images with an arbitrary shape neighborhood.
- void dip::Framework::Projection(dip::Image const& in, dip::Image const& mask, dip::Image& out, dip::DataType outImageType, dip::BooleanArray process, dip::Framework::ProjectionFunction& projectionFunction, dip::Framework::ProjectionOptions opts = {}) - Framework for projecting one or more dimensions of an image.
Class documentation
struct dip::Framework::ScanBuffer
Structure that holds information about input or output pixel buffers for the dip::Framework::Scan callback function object.
The length of the buffer is given in a separate argument to the line filter. Depending on the arguments given to the framework function, you might assume that tensorLength is always 1, and consequently also ignore tensorStride.
Variables | |
---|---|
void* buffer | Pointer to pixel data for image line, to be cast to expected data type. |
dip::sint stride | Stride to walk along pixels. |
dip::sint tensorStride | Stride to walk along tensor elements. |
dip::uint tensorLength | Number of tensor elements. |
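For illustration, inside dip::Framework::ScanLineFilter::Filter a buffer expected to hold sfloat data could be walked like this (a sketch; params is the dip::Framework::ScanLineFilterParameters argument described next, and the sfloat type is an assumption):

```cpp
// Sketch: visit every sample of the first input buffer.
dip::Framework::ScanBuffer const& buf = params.inBuffer[ 0 ];
dip::sfloat const* ptr = static_cast< dip::sfloat const* >( buf.buffer );
for( dip::uint ii = 0; ii < params.bufferLength; ++ii, ptr += buf.stride ) {
   for( dip::uint jj = 0; jj < buf.tensorLength; ++jj ) {
      dip::sfloat sample = ptr[ static_cast< dip::sint >( jj ) * buf.tensorStride ];
      // ... use sample ...
   }
}
```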
struct dip::Framework::ScanLineFilterParameters
Parameters to the line filter for dip::Framework::Scan.
We have put all the parameters to the line filter dip::Framework::ScanLineFilter::Filter into a single struct to simplify writing those functions.
Note that dimension and position are within the images that have had their tensor dimension converted to spatial dimension, if dip::Framework::ScanOption::TensorAsSpatialDim was given and at least one input or output image is not scalar. In this case, tensorToSpatial is true, and the last dimension corresponds to the tensor dimension. dimension will never be equal to the last dimension in this case. That is, position will have one more element than the original image(s) we’re iterating over, but position[ dimension ] will always correspond to a position in the original image(s).
Variables | |
---|---|
std::vector<ScanBuffer> const& inBuffer | Input buffers (1D) |
std::vector<ScanBuffer>& outBuffer | Output buffers (1D) |
dip::uint bufferLength | Number of pixels in each buffer |
dip::uint dimension | Dimension along which the line filter is applied |
dip::UnsignedArray const& position | Coordinates of first pixel in line |
bool tensorToSpatial | true if the tensor dimension was converted to spatial dimension |
dip::uint thread | Thread number |
struct dip::Framework::SeparableBuffer
Structure that holds information about input or output pixel buffers for the dip::Framework::Separable callback function object.
The length of the buffer is given in a separate argument to the line filter. Depending on the arguments given to the framework function, you might assume that tensorLength is always 1, and consequently also ignore tensorStride.
Variables | |
---|---|
void* buffer | Pointer to pixel data for image line, to be cast to expected data type. |
dip::uint length | Length of the buffer, not counting the expanded boundary |
dip::uint border | Length of the expanded boundary at each side of the buffer. |
dip::sint stride | Stride to walk along pixels. |
dip::sint tensorStride | Stride to walk along tensor elements. |
dip::uint tensorLength | Number of tensor elements. |
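To illustrate border, here is a sketch of reading a 3-pixel neighborhood inside dip::Framework::SeparableLineFilter::Filter (assuming sfloat data, a scalar image, and that the framework was asked for a border of at least 1):

```cpp
// Sketch: buffer points at the first image pixel of the line; `border` pixels
// before index 0 and after index length - 1 are also valid to read.
dip::Framework::SeparableBuffer const& buf = params.inBuffer;
dip::sfloat const* in = static_cast< dip::sfloat const* >( buf.buffer );
for( dip::uint ii = 0; ii < buf.length; ++ii ) {
   dip::sint offset = static_cast< dip::sint >( ii ) * buf.stride;
   dip::sfloat mean = ( in[ offset - buf.stride ] + in[ offset ] + in[ offset + buf.stride ] ) / 3.0f;
   // ... write `mean` to the corresponding output buffer sample ...
}
```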
struct dip::Framework::SeparableLineFilterParameters
Parameters to the line filter for dip::Framework::Separable.
We have put all the parameters to the line filter dip::Framework::SeparableLineFilter::Filter into a single struct to simplify writing those functions.
Note that dimension and position are within the images that have had their tensor dimension converted to spatial dimension, if dip::Framework::SeparableOption::AsScalarImage was given and the input is not scalar. In this case, tensorToSpatial is true, and the last dimension corresponds to the tensor dimension. dimension will never be equal to the last dimension in this case. That is, position will have one more element than the original image(s) we’re iterating over, but position[ dimension ] will always correspond to a position in the original image(s).
Variables | |
---|---|
dip::Framework::SeparableBuffer const& inBuffer | Input buffer (1D) |
dip::Framework::SeparableBuffer& outBuffer | Output buffer (1D) |
dip::uint dimension | Dimension along which the line filter is applied |
dip::uint pass | Pass number (0..nPasses-1) |
dip::uint nPasses | Number of passes (typically nDims) |
dip::UnsignedArray const& position | Coordinates of first pixel in line |
bool tensorToSpatial | true if the tensor dimension was converted to spatial dimension |
dip::uint thread | Thread number |
struct dip::Framework::FullBuffer
Structure that holds information about input or output pixel buffers for the dip::Framework::Full callback function object.
Depending on the arguments given to the framework function, you might assume that tensorLength is always 1, and consequently also ignore tensorStride.
Variables | |
---|---|
void* buffer | Pointer to pixel data for image line, to be cast to expected data type. |
dip::sint stride | Stride to walk along pixels. |
dip::sint tensorStride | Stride to walk along tensor elements. |
dip::uint tensorLength | Number of tensor elements. |
struct dip::Framework::FullLineFilterParameters
Parameters to the line filter for dip::Framework::Full.
We have put all the parameters to the line filter dip::Framework::FullLineFilter::Filter into a single struct to simplify writing those functions.
Variables | |
---|---|
dip::Framework::FullBuffer const& inBuffer | Input buffer (1D) |
dip::Framework::FullBuffer& outBuffer | Output buffer (1D) |
dip::uint bufferLength | Number of pixels in each buffer |
dip::uint dimension | Dimension along which the line filter is applied |
dip::UnsignedArray const& position | Coordinates of first pixel in line |
dip::PixelTableOffsets const& pixelTable | The pixel table object describing the neighborhood |
dip::uint thread | Thread number |
Enum documentation
enum class dip::Framework::ScanOption : uint8
Defines options to the dip::Framework::Scan function.
Implicitly casts to dip::Framework::ScanOptions. Combine constants together with the + operator.
Enumerators | |
---|---|
NoMultiThreading = 0 | Do not call the line filter simultaneously from multiple threads (it is not thread safe). |
NeedCoordinates = 1 | The line filter needs the coordinates to the first pixel in the buffer. |
TensorAsSpatialDim = 2 | Tensor dimensions are treated as a spatial dimension for scanning, ensuring that the line scan filter always gets scalar pixels. |
ExpandTensorInBuffer = 3 | The line filter always gets input tensor elements as a standard, column-major matrix. |
NoSingletonExpansion = 4 | Inhibits singleton expansion of input images. |
NotInPlace = 5 | The line filter can write to the output buffers without affecting the input buffers. |
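For example, combining options looks like this (a sketch):

```cpp
// Combine ScanOption constants into a ScanOptions value using operator+.
dip::Framework::ScanOptions opts = dip::Framework::ScanOption::NeedCoordinates
                                 + dip::Framework::ScanOption::TensorAsSpatialDim;
```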
enum class dip::Framework::SeparableOption : uint8
Defines options to the dip::Framework::Separable function.
Implicitly casts to dip::Framework::SeparableOptions. Combine constants together with the + operator.
Enumerators | |
---|---|
NoMultiThreading = 0 | Do not call the line filter simultaneously from multiple threads (it is not thread safe). |
AsScalarImage = 1 | The line filter is called for each tensor element separately, and thus always sees pixels as scalar values. |
ExpandTensorInBuffer = 2 | The line filter always gets input tensor elements as a standard, column-major matrix. |
UseOutputBorder = 3 | The output line buffer also has space allocated for a border. |
DontResizeOutput = 4 | The output image has the right size; it can differ from the input size. |
UseInputBuffer = 5 | The line filter can modify the input data without affecting the input image; samples are guaranteed to be contiguous. |
UseOutputBuffer = 6 | The output buffer is guaranteed to have contiguous samples. |
CanWorkInPlace = 7 | The input and output buffer are allowed to both point to the same memory. |
UseRealComponentOfOutput = 8 | If the buffer type is complex, and the output type is not, cast by taking the real component of the complex data, rather than the modulus. |
enum class dip::Framework::FullOption : uint8
Defines options to the dip::Framework::Full function.
Implicitly casts to dip::Framework::FullOptions. Combine constants together with the + operator.
Enumerators | |
---|---|
NoMultiThreading = 0 | Do not call the line filter simultaneously from multiple threads (it is not thread safe). |
AsScalarImage = 1 | The line filter is called for each tensor element separately, and thus always sees pixels as scalar values. |
ExpandTensorInBuffer = 2 | The line filter always gets input tensor elements as a standard, column-major matrix. |
BorderAlreadyExpanded = 3 | The input image already has expanded boundaries (see dip::ExtendImage, use the "masked" option). |
enum class dip::Framework::ProjectionOption : uint8
Defines options to the dip::Framework::Projection function.
Implicitly casts to dip::Framework::ProjectionOptions. Combine constants together with the + operator.
Enumerators | |
---|---|
NoMultiThreading = 0 | Do not call the projection function simultaneously from multiple threads (it is not thread safe). |
Function documentation
void dip::Framework::SingletonExpandedSize(dip::UnsignedArray& size1, dip::UnsignedArray const& size2)
Determines the singleton-expanded size as a combination of the two sizes.
Singleton dimensions (size==1) can be expanded to match another image’s size. This function can be used to check if such expansion is possible, and what the resulting sizes would be. size1 is adjusted. An exception is thrown if the singleton expansion is not possible.
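A short sketch of the size arithmetic:

```cpp
// Singleton dimensions (size 1) take on the other array's size; other sizes must match.
dip::UnsignedArray size1{ 256, 1, 3 };
dip::UnsignedArray size2{ 1, 100, 3 };
dip::Framework::SingletonExpandedSize( size1, size2 );
// size1 is now { 256, 100, 3 }; an exception would have been thrown if the sizes were incompatible.
```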
dip::UnsignedArray dip::Framework::SingletonExpandedSize(dip::ImageConstRefArray const& in)
Determines if images can be singleton-expanded to the same size, and what that size would be.
Singleton dimensions (size==1) can be expanded to a larger size by setting their stride to 0. This change can be performed without modifying the data segment. If image dimensions differ such that singleton expansion cannot make them all the same size, an exception is thrown. Use dip::Image::ExpandSingletonDimensions to apply the transform to one image.
dip::UnsignedArray dip::Framework::SingletonExpandedSize(dip::ImageArray const& in)
Determines if images can be singleton-expanded to the same size, and what that size would be.
Singleton dimensions (size==1) can be expanded to a larger size by setting their stride to 0. This change can be performed without modifying the data segment. If image dimensions differ such that singleton expansion cannot make them all the same size, an exception is thrown. Use dip::Image::ExpandSingletonDimensions to apply the transform to one image.
dip::uint dip::Framework::SingletonExpendedTensorElements(dip::ImageArray const& in)
Determines if tensors in images can be singleton-expanded to the same size, and what that size would be.
The tensors must all be of the same size, or of size 1. The tensors with size 1 are singletons, and can be expanded to the size of the others by setting their stride to 0. This change can be performed without modifying the data segment. If singleton expansion cannot make them all the same size, an exception is thrown. Use dip::Image::ExpandSingletonTensor to apply the transform to one image.
void dip::Framework::Scan(dip::ImageConstRefArray const& in,
dip::ImageRefArray& out,
dip::DataTypeArray const& inBufferTypes,
dip::DataTypeArray const& outBufferTypes,
dip::DataTypeArray const& outImageTypes,
dip::UnsignedArray const& nTensorElements,
dip::Framework::ScanLineFilter& lineFilter,
dip::Framework::ScanOptions opts = {})
Framework for pixel-based processing of images.
The function object lineFilter is called for each image line, with input and output buffers either pointing directly to the input and output images, or pointing to temporary buffers that are handled by the framework and spare lineFilter from having to deal with too many different data types. The buffers are always of the type requested by the inBufferTypes and outBufferTypes parameters, but are passed as void*. lineFilter should cast these pointers to the right types.
Output buffers are not initialized; lineFilter is responsible for setting all their values.
Output images (unless protected) will be resized to match the (singleton-expanded) input, but have a number of tensor elements specified by nTensorElements, and their type will be set to that specified by outImageTypes. Protected output images must have the correct size and type, otherwise an exception will be thrown.
The scan function can be called without input images. In this case, at least one output image must be given. The dimensions of the first output image will be used to direct the scanning, and the remaining output images (if any) will be adjusted to the same size. It is also possible to give no output images, as would be the case for a reduction operation such as computing the average pixel value. However, it makes no sense to call the scan function with neither input nor output images.
Tensors are passed to lineFilter as vectors; if the shape is important, store this information in lineFilter.
nTensorElements gives the number of tensor elements for each output image. These are created as standard vectors. The calling function can reshape the tensors after the call to dip::Framework::Scan. It is neither necessary nor enforced that the tensors for each image (both input and output) are the same; the calling function must make sure the tensors satisfy whatever constraints apply.
However, if the option dip::Framework::ScanOption::TensorAsSpatialDim is given, then the tensor is cast to a spatial dimension, and singleton expansion is applied. Thus, lineFilter does not need to check inTensorLength or outTensorLength (they will be 1), and the output tensor size is guaranteed to match the largest input tensor. nTensorElements is ignored. Even with a single input image, where no singleton expansion can happen, it is beneficial to use the dip::Framework::ScanOption::TensorAsSpatialDim option, as lineFilter can be simpler and faster. Additionally, the output tensor shape is identical to the input image’s. In case of multiple inputs, the first input image that has as many tensor elements as the (singleton-expanded) output will model the output tensor shape.
If the option dip::Framework::ScanOption::ExpandTensorInBuffer is given, then the input buffers passed to lineFilter will contain the tensor elements as a standard, column-major matrix. If the image has tensors stored differently, buffers will be used. This option is not used when dip::Framework::ScanOption::TensorAsSpatialDim is set, as that forces the tensor to be a single sample. Use this option if you need to do computations with the tensors, but do not want to bother with all the different tensor shapes, which are meant only to save memory. Note, however, that this option does not apply to the output images. When expanding the input tensors in this way, it makes sense to set the output tensor to a full matrix. Don’t forget to specify the right size in nTensorElements.
The framework function sets the output pixel size to that of the first input image with a defined pixel size, and it sets the color space to that of the first input image with matching number of tensor elements. The calling function is expected to “correct” these values if necessary.
The buffers are not guaranteed to be contiguous; please use the stride and tensorStride values to access samples. All buffers contain bufferLength pixels. position gives the coordinates for the first pixel in the buffers, subsequent pixels occur along dimension dimension. position[dimension] is not necessarily zero. However, when dip::Framework::ScanOption::NeedCoordinates is not given, dimension and position are meaningless. The framework is allowed to treat all pixels in the image as a single image line in this case.
If in and out share an image, then it is possible that the corresponding input and output buffers point to the same memory. The input image will be overwritten with the processing result. That is, all processing can be performed in place. The scan framework is intended for pixel-wise processing, not neighborhood-based processing, so there is never a reason not to work in place. However, some types of tensor processing might want to write to the output without invalidating the input for that same pixel. In this case, give the option dip::Framework::ScanOption::NotInPlace. It will make sure that the output buffers given to the line filter do not alias the input buffers.
dip::Framework::Scan will process the image using multiple threads, so lineFilter will be called from multiple threads simultaneously. If it is not thread safe, specify dip::Framework::ScanOption::NoMultiThreading as an option. The SetNumberOfThreads method of lineFilter will be called once before the processing starts, when dip::Framework::Scan has determined how many threads will be used in the scan, even if dip::Framework::ScanOption::NoMultiThreading was specified.
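To make the above concrete, here is a sketch of a minimal line filter together with its dip::Framework::Scan call. It assumes sfloat buffers and uses dip::Framework::ScanOption::TensorAsSpatialDim so the tensor dimension can be ignored; the class and function names are placeholders, not DIPlib API.

```cpp
#include "diplib.h"
#include "diplib/framework.h"

// Sketch: negate every sample of a single input image.
class NegateLineFilter : public dip::Framework::ScanLineFilter {
   public:
      void Filter( dip::Framework::ScanLineFilterParameters const& params ) override {
         dip::sfloat const* in = static_cast< dip::sfloat const* >( params.inBuffer[ 0 ].buffer );
         dip::sfloat* out = static_cast< dip::sfloat* >( params.outBuffer[ 0 ].buffer );
         dip::sint inStride = params.inBuffer[ 0 ].stride;
         dip::sint outStride = params.outBuffer[ 0 ].stride;
         for( dip::uint ii = 0; ii < params.bufferLength; ++ii ) {
            *out = -( *in );
            in += inStride;
            out += outStride;
         }
      }
};

void Negate( dip::Image const& in, dip::Image& out ) {
   NegateLineFilter lineFilter;
   dip::ImageConstRefArray inar{ in };
   dip::ImageRefArray outar{ out };
   dip::Framework::Scan( inar, outar,
                         { dip::DT_SFLOAT },   // input buffer types
                         { dip::DT_SFLOAT },   // output buffer types
                         { dip::DT_SFLOAT },   // output image types
                         { 1 },                // tensor elements (ignored with TensorAsSpatialDim)
                         lineFilter,
                         dip::Framework::ScanOption::TensorAsSpatialDim );
}
```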
void dip::Framework::ScanSingleInput(dip::Image const& in,
dip::Image const& c_mask,
dip::DataType bufferType,
dip::Framework::ScanLineFilter& lineFilter,
dip::Framework::ScanOptions opts = {})
Calls dip::Framework::Scan with one input image and a mask image, and no output image.
If mask is forged, it is expected to be a scalar image of type dip::DT_BIN, and of size compatible with in. mask is singleton-expanded to the size of in, but not the other way around. Its pointer will be passed to lineFilter directly, without copies to change its data type. Thus, inBuffer[ 1 ].buffer is of type bin*, not of type bufferType.
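A sketch of how the mask samples would be read inside the line filter when a mask was passed (the sfloat type is an assumption; note the dip::bin cast for the second buffer):

```cpp
// Sketch: inBuffer[ 0 ] holds the image samples (of type bufferType),
// inBuffer[ 1 ] holds the mask samples and is always of type dip::bin.
dip::sfloat const* data = static_cast< dip::sfloat const* >( params.inBuffer[ 0 ].buffer );
dip::bin const* mask = static_cast< dip::bin const* >( params.inBuffer[ 1 ].buffer );
for( dip::uint ii = 0; ii < params.bufferLength; ++ii ) {
   if( mask[ static_cast< dip::sint >( ii ) * params.inBuffer[ 1 ].stride ] ) {
      dip::sfloat sample = data[ static_cast< dip::sint >( ii ) * params.inBuffer[ 0 ].stride ];
      // ... accumulate or otherwise use sample ...
   }
}
```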
void dip::Framework::ScanMonadic(dip::Image const& in,
dip::Image& out,
dip::DataType bufferTypes,
dip::DataType outImageType,
dip::uint nTensorElements,
dip::Framework::ScanLineFilter& lineFilter,
dip::Framework::ScanOptions opts = {})
Calls dip::Framework::Scan with one input image and one output image.
bufferTypes is the type for both the input and output buffer. The output image will be reforged to have the same sizes as the input image, with nTensorElements tensor elements and data type outImageType.
void dip::Framework::ScanDyadic(dip::Image const& in1,
dip::Image const& in2,
dip::Image& out,
dip::DataType inBufferType,
dip::DataType outBufferType,
dip::DataType outImageType,
dip::Framework::ScanLineFilter& lineFilter,
dip::Framework::ScanOptions opts = {})
Calls dip::Framework::Scan with two input images and one output image.
It handles some of the work for dyadic (binary) operators related to matching up tensor dimensions in the input images. Input tensors are expected to match, but a scalar is expanded to the size of the other tensor. The output tensor will be of the same size as the input tensors; its shape will match the input shape if one image is a scalar, or if both images have matching tensor shapes. Otherwise the output tensor will be a column-major matrix (or vector or scalar, as appropriate).
This function adds dip::Framework::ScanOption::TensorAsSpatialDim or dip::Framework::ScanOption::ExpandTensorInBuffer to opts, so don’t set these values yourself. This means that the tensors passed to lineFilter are either all scalars (the tensor can be converted to a spatial dimension) or full, column-major tensors of equal size.
Do not specify dip::Framework::ScanOption::NoSingletonExpansion in opts.
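A sketch of a dyadic operation defined this way, using dip::Framework::NewDyadicScanLineFilter (names and the dfloat type are placeholder assumptions):

```cpp
// Sketch: per-pixel weighted sum of two images, computed in dfloat buffers.
void WeightedSum( dip::Image const& a, dip::Image const& b, dip::Image& out ) {
   auto sampleOperator = []( std::array< dip::dfloat const*, 2 > its ) {
      return 0.7 * *its[ 0 ] + 0.3 * *its[ 1 ];
   };
   auto lineFilter = dip::Framework::NewDyadicScanLineFilter< dip::dfloat >( sampleOperator );
   dip::Framework::ScanDyadic( a, b, out, dip::DT_DFLOAT, dip::DT_DFLOAT, dip::DT_DFLOAT, *lineFilter );
}
```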
void dip::Framework::Separable(dip::Image const& in,
dip::Image& out,
dip::DataType bufferType,
dip::DataType outImageType,
dip::BooleanArray process,
dip::UnsignedArray border,
dip::BoundaryConditionArray boundaryCondition,
dip::Framework::SeparableLineFilter& lineFilter,
dip::Framework::SeparableOptions opts = {})
Framework for separable filtering of images.
The function object lineFilter is called for each image line, and along each dimension, with input and output buffers either pointing directly to the input and output images, or pointing to temporary buffers that are handled by the framework and present the line’s pixel data with a different data type, with expanded borders, etc. The buffers are always of the type specified in bufferType, but are passed as void*. lineFilter should cast these pointers to the right types. The output buffer is not initialized; lineFilter is responsible for setting all its values.
The process array specifies along which dimensions the filtering is applied. If it is an empty array, all dimensions will be processed. Otherwise, it must have one element per image dimension.
The output image (unless protected) will be resized to match the input, and its type will be set to that specified by outImageType. A protected output image must have the correct size and type, otherwise an exception will be thrown. The separable filter always has one input and one output image.
If the option dip::Framework::SeparableOption::DontResizeOutput is given, then the sizes of the output image will be kept (but it could still be reforged to change the data type). In this case, the length of the input and output buffers can differ, causing the intermediate result image to change size one dimension at a time, as each dimension is processed. For example, if the input image is of size 256x256, and the output is 1x1, then in a first step 256 lines are processed, each with 256 pixels as input and a single pixel as output. In a second step, a single line of 256 pixels is processed yielding the final single-pixel result. In the same case, but with an output of 64x512, 256 lines are processed, each with 256 pixels as input and 64 pixels as output. In the second step, 64 lines are processed, each with 256 pixels as input and 512 pixels as output. This option is useful for functions that scale and do other geometric transformations, as well as functions that compute projections.
Tensors are passed to lineFilter as vectors; if the shape is important, store this information in lineFilter. The output image will have the same tensor shape as the input except if the option dip::Framework::SeparableOption::ExpandTensorInBuffer is given. In this case, the input buffers passed to lineFilter will contain the tensor elements as a standard, column-major matrix, and the output image will be a full matrix of that size. If the input image has tensors stored differently, buffers will be used when processing the first dimension; for subsequent dimensions, the intermediate result will already contain the full matrix. Use this option if you need to do computations with the tensors, but do not want to bother with all the different tensor shapes, which are meant only to save memory.
However, if the option dip::Framework::SeparableOption::AsScalarImage is given, then the line filter is called for each tensor element, effectively causing the filter to process a sequence of scalar images, one for each tensor element. This is accomplished by converting the tensor into a spatial dimension for both the input and output image, and setting the process array for the new dimension to false. For example, given an input image in with 3 tensor elements, filter(in,out) will result in an output image out with 3 tensor elements, computed as if filter were called 3 times: filter(in[0],out[0]), filter(in[1],out[1]), and filter(in[2],out[2]).
The framework function sets the output tensor size to that of the input image, and it sets the color space to that of the input image if the two images have a matching number of tensor elements (these can differ if dip::Framework::SeparableOption::ExpandTensorInBuffer is given). The calling function is expected to “correct” these values if necessary. Note the difference here with the Scan and Full frameworks: it is not possible to apply a separable filter to a tensor image and obtain an output with a different tensor representation (because the question arises: in which image pass does this change occur?).
The buffers are not guaranteed to be contiguous; please use the stride and tensorStride values to access samples.
The dip::Framework::SeparableOption::UseInputBuffer and dip::Framework::SeparableOption::UseOutputBuffer options force the use of temporary buffers to store each image line. These temporary buffers always have contiguous samples, with the tensor stride equal to 1 and the spatial stride equal to the number of tensor elements. That is, the tensor elements for each pixel are contiguous, and the pixels are contiguous. This is useful when calling external code to process the buffers, and that external code expects input data to be contiguous. These buffers will also be aligned to a 32-byte boundary.
Forcing the use of an input buffer is also useful when the algorithm needs to write temporary data to its input, for example, to compute the median of the input data by sorting. If the input has a stride of 0 in the dimension being processed (this happens when expanding singleton dimensions), it means that a single pixel is repeated across the whole line. This property is preserved in the buffer. Thus, even when these two flags are used, you need to check the stride value and deal with the singleton dimension appropriately.
The input buffer contains bufferLength + 2 * border pixels. The pixel pointed to by the buffer pointer is the first pixel on that line in the input image. The lineFilter function object can read up to border pixels before that pixel, and up to border pixels after the last pixel on the line. These pixels are filled by the framework using the boundaryCondition value for the given dimension. The boundaryCondition array can be empty, in which case the default boundary condition value is used. If the option dip::Framework::SeparableOption::UseOutputBorder is given, then the output buffer also has border extra samples at each end. These extra samples are meant to help in the computation for some filters, and are not copied back to the output image. position gives the coordinates for the first pixel in the buffers, subsequent pixels occur along dimension dimension. position[dimension] is always zero.
If in and out share their data segments, then the input image might be overwritten with the processing result. However, the input and output buffers will not share memory. That is, the line filter can freely write in the output buffer without invalidating the input buffer, even when the filter is being applied in-place.
The dip::Framework::SeparableOption::CanWorkInPlace option causes the input and output buffer to potentially both point to the same image data, if input and output images are the same and everything else falls into place as well. It is meant to save some copy work for those algorithms that can work in-place, but does not guarantee that the output buffer points to the input data.
If in and out share their data segments (e.g. they are the same image), then the filtering operation can be applied completely in place, without any temporary images. For this to be possible, outImageType, bufferType and the input image data type must all be the same.
dip::Framework::Separable will process the image using multiple threads, so lineFilter will be called from multiple threads simultaneously. If it is not thread safe, specify dip::Framework::SeparableOption::NoMultiThreading as an option. The SetNumberOfThreads method of lineFilter will be called once before the processing starts, when dip::Framework::Separable has determined how many threads will be used in the processing, even if dip::Framework::SeparableOption::NoMultiThreading was specified.
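As an illustration, here is a sketch of a separable line filter (a three-pixel moving average) and the call that applies it along every dimension. It assumes sfloat buffers, a border of 1, and dip::Framework::SeparableOption::AsScalarImage so that pixels are scalar; the class and function names are placeholders.

```cpp
// Sketch: 3-pixel moving average applied along each dimension in turn.
class AverageLineFilter : public dip::Framework::SeparableLineFilter {
   public:
      void Filter( dip::Framework::SeparableLineFilterParameters const& params ) override {
         dip::sfloat const* in = static_cast< dip::sfloat const* >( params.inBuffer.buffer );
         dip::sfloat* out = static_cast< dip::sfloat* >( params.outBuffer.buffer );
         dip::sint inStride = params.inBuffer.stride;
         dip::sint outStride = params.outBuffer.stride;
         for( dip::uint ii = 0; ii < params.inBuffer.length; ++ii ) {
            // The border of 1 pixel makes in[ -inStride ] and in[ inStride ] valid at the line ends.
            *out = ( in[ -inStride ] + in[ 0 ] + in[ inStride ] ) / 3.0f;
            in += inStride;
            out += outStride;
         }
      }
};

void SmallSmooth( dip::Image const& in, dip::Image& out ) {
   AverageLineFilter lineFilter;
   dip::Framework::Separable( in, out, dip::DT_SFLOAT, dip::DT_SFLOAT,
                              {},      // process: empty array = all dimensions
                              { 1 },   // border: one pixel at each end of the line
                              { dip::BoundaryCondition::ADD_ZEROS },
                              lineFilter,
                              dip::Framework::SeparableOption::AsScalarImage );
}
```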
void dip::Framework::OneDimensionalLineFilter(dip::Image const& in,
dip::Image& out,
dip::DataType inBufferType,
dip::DataType outBufferType,
dip::DataType outImageType,
dip::uint processingDimension,
dip::uint border,
dip::BoundaryCondition boundaryCondition,
dip::Framework::SeparableLineFilter& lineFilter,
dip::Framework::SeparableOptions opts = {})
Framework for filtering of image lines. This is a version of dip::Framework::Separable that works along one dimension only.
Here we describe only the differences with dip::Framework::Separable. If it is not described here, refer to dip::Framework::Separable.
The input and output buffers can be of different types; inBufferType and outBufferType determine these two types. Note that this would not be possible in the separable framework function: the output of one pass is the input to the next pass, so the data types of input and output must be the same.
Instead of a process array, there is a processingDimension parameter, which specifies which dimension the filter will be applied along. Both border and boundaryCondition are scalars instead of arrays, and apply to processingDimension.
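For example (a sketch reusing the AverageLineFilter from the previous example), smoothing along dimension 0 only:

```cpp
// Sketch: apply the separable line filter along dimension 0 only.
AverageLineFilter lineFilter;
dip::Framework::OneDimensionalLineFilter( in, out, dip::DT_SFLOAT, dip::DT_SFLOAT, dip::DT_SFLOAT,
                                          0,  // processingDimension
                                          1,  // border
                                          dip::BoundaryCondition::SYMMETRIC_MIRROR,
                                          lineFilter,
                                          dip::Framework::SeparableOption::AsScalarImage );
```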
void dip::Framework::Full(dip::Image const& in,
dip::Image& out,
dip::DataType inBufferType,
dip::DataType outBufferType,
dip::DataType outImageType,
dip::uint nTensorElements,
dip::BoundaryConditionArray const& boundaryCondition,
dip::Kernel const& kernel,
dip::Framework::FullLineFilter& lineFilter,
dip::Framework::FullOptions opts = {})
Framework for filtering of images with an arbitrary shape neighborhood.
The function object lineFilter is called for each image line, with input and output buffers either pointing directly to the input and output images, or pointing to temporary buffers that are handled by the framework and present the line’s pixel data with a different data type, with expanded borders, etc. The buffers are always of the type specified in inBufferType and outBufferType, but are passed as void*. lineFilter should cast these pointers to the right types. The output buffer is not initialized; lineFilter is responsible for setting all its values.
lineFilter can access the pixels on the given line for all input and output images, as well as all pixels within the neighborhood for all input images. The neighborhood is given by kernel. This object defines the size of the border extension in the input buffer.
The output image out (unless protected) will be resized to match the input in, but have nTensorElements tensor elements, and its type will be set to that specified by outImageType. A protected output image must have the correct size and type, otherwise an exception will be thrown. The full filter always has one input and one output image.
Tensors are passed to lineFilter as vectors; if the shape is important, store this information in lineFilter. nTensorElements gives the number of tensor elements for the output image. These are created as standard vectors, unless the input image has the same number of tensor elements, in which case that tensor shape is copied. The calling function can reshape the tensors after the call to dip::Framework::Full. It is neither necessary nor enforced that the tensors for each image (both input and output) are the same; the calling function must make sure the tensors satisfy whatever constraints apply.
However, if the option dip::Framework::FullOption::AsScalarImage is given, then the line filter is called for each tensor element, effectively causing the filter to process a sequence of scalar images, one for each tensor element. nTensorElements is ignored, and set to the number of tensor elements of the input. For example, given an input image in with 3 tensor elements, filter(in,out) will result in an output image out with 3 tensor elements, computed as if filter were called 3 times: filter(in[0],out[0]), filter(in[1],out[1]), and filter(in[2],out[2]).
If the option dip::Framework::FullOption::ExpandTensorInBuffer is given, then the input buffer passed to lineFilter will contain the tensor elements as a standard, column-major matrix. If the image has tensors stored differently, buffers will be used. This option is not used when dip::Framework::FullOption::AsScalarImage is set, as that forces the tensor to be a single sample. Use this option if you need to do computations with the tensors, but do not want to bother with all the different tensor shapes, which are meant only to save memory. Note, however, that this option does not apply to the output image. When expanding the input tensor in this way, it makes sense to set the output tensor to a full matrix. Don’t forget to specify the right size in nTensorElements.
The framework function sets the output pixel size to that of the input image, and it sets the color space to that of the input image if the two images have matching number of tensor elements. The calling function is expected to “correct” these values if necessary.
The buffers are not guaranteed to be contiguous; please use the stride and tensorStride values to access samples. The pixel pointed to by the buffer pointer is the first pixel on that line in the input image. lineFilter can read any pixel within the neighborhood of all the pixels on the line. These pixels are filled by the framework using the boundaryCondition values. The boundaryCondition vector can be empty, in which case the default boundary condition value is used.
If the option dip::Framework::FullOption::BorderAlreadyExpanded is given, then the input image is presumed to have been expanded using the function dip::ExtendImage (specify the option "masked"). That is, it is possible to read outside the image bounds within an area given by the size of kernel. If the tensor doesn’t need to be expanded, and the image data type matches the buffer data type, then the input image will not be copied. In this case, a new data segment will always be allocated for the output image. That is, the operation cannot be performed in place. Also, boundaryCondition is ignored.
position gives the coordinates for the first pixel in the buffers, subsequent pixels occur along dimension dimension. position[dimension] is always zero. If dip::Framework::FullOption::AsScalarImage was given and the input image has more than one tensor element, then position will have an additional element. Use pixelTable.Dimensionality() to determine how many of the elements in position to use.
The input and output buffers will never share memory. That is, the line filter can freely write in the output buffer without invalidating the input buffer, even when the filter is being applied in-place.
dip::Framework::Full will process the image using multiple threads, so lineFilter will be called from multiple threads simultaneously. If it is not thread safe, specify dip::Framework::FullOption::NoMultiThreading as an option. The SetNumberOfThreads method of lineFilter will be called once before the processing starts, when dip::Framework::Full has determined how many threads will be used in the processing, even if dip::Framework::FullOption::NoMultiThreading was specified.
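To illustrate, here is a sketch of a full line filter that sums over the kernel neighborhood using the offsets provided by the pixel table, plus the corresponding dip::Framework::Full call. It assumes sfloat buffers, a scalar image, and the offset iteration provided by dip::PixelTableOffsets; the class and function names are placeholders.

```cpp
// Sketch: for every pixel on the line, sum the input over the kernel neighborhood.
class LocalSumLineFilter : public dip::Framework::FullLineFilter {
   public:
      void Filter( dip::Framework::FullLineFilterParameters const& params ) override {
         dip::sfloat const* in = static_cast< dip::sfloat const* >( params.inBuffer.buffer );
         dip::sfloat* out = static_cast< dip::sfloat* >( params.outBuffer.buffer );
         dip::sint inStride = params.inBuffer.stride;
         dip::sint outStride = params.outBuffer.stride;
         for( dip::uint ii = 0; ii < params.bufferLength; ++ii ) {
            dip::sfloat sum = 0;
            for( auto offset : params.pixelTable ) { // offsets into the input buffer
               sum += in[ offset ];
            }
            *out = sum;
            in += inStride;
            out += outStride;
         }
      }
};

void LocalSum( dip::Image const& in, dip::Image& out, dip::Kernel const& kernel ) {
   LocalSumLineFilter lineFilter;
   dip::Framework::Full( in, out, dip::DT_SFLOAT, dip::DT_SFLOAT, dip::DT_SFLOAT, 1,
                         { dip::BoundaryCondition::ADD_ZEROS },
                         kernel, lineFilter,
                         dip::Framework::FullOption::AsScalarImage );
}
```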
void dip::Framework::Projection(dip::Image const& in,
dip::Image const& mask,
dip::Image& out,
dip::DataType outImageType,
dip::BooleanArray process,
dip::Framework::ProjectionFunction& projectionFunction,
dip::Framework::ProjectionOptions opts = {})
Framework for projecting one or more dimensions of an image.
process determines which dimensions of the input image in will be collapsed. out will have the same dimensionality as in, but the dimensions that are true in process will have a size of 1 (i.e. be singleton dimensions); the remaining dimensions will be of the same size as in in.
The function object projectionFunction is called for each sub-image that projects onto a single sample. Each tensor element is processed independently, and so the sub-image is always a scalar image. For example, when computing the sum over the entire image, the projectionFunction is called once for each tensor element, with a scalar image the size of the full input image as input. When computing the sum over image rows, the projectionFunction is called once for each tensor element and each row of the image, with a scalar image the size of one image row.
The projection function cannot make any assumptions about contiguous data or input dimensionality. The input will be transformed such that it has as few dimensions as possible, just to make the looping inside the projection function more efficient.
The output image out (unless protected) will be resized to match the required output size, and its type will be set to that specified by outImageType. A protected output image must have the correct size (otherwise an exception will be thrown), but can have a different data type. The output sample in the projection function will always be of type outImageType, even if the output image cannot be converted to that type (in which case the framework function will take care of casting each output value generated by the projection function to the output type).
dip::Framework::Projection will process the image using multiple threads, so projectionFunction will be called from multiple threads simultaneously. If it is not thread safe, specify dip::Framework::ProjectionOption::NoMultiThreading as an option. The SetNumberOfThreads method of projectionFunction will be called once before the processing starts, when dip::Framework::Projection has determined how many threads will be used in the processing, even if dip::Framework::ProjectionOption::NoMultiThreading was specified.