Transforms module
The Fourier and other transforms.
Classes

template<typename T> class dip::DFT  An object that encapsulates the Discrete Fourier Transform (DFT).
Functions

auto dip::GetOptimalDFTSize (size_t size0) -> size_t  Returns a size equal to or larger than size0 that is efficient for our DFT implementation.
void dip::FourierTransform (dip::Image const& in, dip::Image& out, dip::StringSet const& options = {}, dip::BooleanArray process = {})  Computes the forward and inverse Fourier Transform.

auto dip::OptimalFourierTransformSize (dip::uint size) -> dip::uint  Returns the next higher multiple of {2, 3, 5}. The largest value that can be returned is 2125764000 (smaller than 2^31 - 1, the largest possible value of an int on most platforms).
void dip::RieszTransform (dip::Image const& in, dip::Image& out, dip::String const& inRepresentation = S::SPATIAL, dip::String const& outRepresentation = S::SPATIAL, dip::BooleanArray process = {})  Computes the Riesz transform of a scalar image.

void dip::StationaryWaveletTransform (dip::Image const& in, dip::Image& out, dip::uint nLevels = 4, dip::StringArray const& boundaryCondition = {}, dip::BooleanArray const& process = {})  Computes a stationary wavelet transform (also called à trous wavelet decomposition).
Variables

size_t const dip::maximumDFTSize constexpr  The largest size supported by the DFT (both the internal code and FFTW use int for sizes).
Function documentation
size_t dip::GetOptimalDFTSize (size_t size0)
#include "diplib/dft.h"
Returns a size equal to or larger than size0 that is efficient for our DFT implementation. Returns 0 if size0 is too large for our DFT implementation.
Prefer to use dip::OptimalFourierTransformSize in your applications; it will throw an error if the transform size is too large.
void
dip::FourierTransform (dip::Image const& in,
dip::Image& out,
dip::StringSet const& options = {},
dip::BooleanArray process = {})
#include "diplib/transform.h"
Computes the forward and inverse Fourier Transform.
The Fourier transform as implemented here places the origin (frequency 0) in the middle of the image. If the image has N pixels along a dimension, then the origin will be at pixel N/2 along that dimension, where N/2 is the integer division, and hence truncates the result for odd values of N. For example, an image of 256 pixels wide will have the origin at pixel 128 (right of the center), whereas an image of 255 pixels will have the origin at pixel 127 (dead in the middle). The same is true for the spatial domain, which is only obvious when computing the Fourier transform of a convolution kernel.
As it is commonly defined, the Fourier transform is not normalized, and the inverse transform is normalized by 1/size for each dimension. This normalization is necessary for the sequence of forward and inverse transform to yield the original image. However, it is possible to change where the normalization is applied. For example, DIPlib 2 used identical normalization for each of the two transforms. The advantage of using the common definition without normalization in the forward transform is that it is straightforward to transform an image and a convolution kernel, multiply them, and apply the inverse transform, as an efficient way to compute the convolution. With any other normalization, this process would require an extra multiplication by a constant to undo the normalization in the forward transform of the convolution kernel.
This function will compute the Fourier Transform along the dimensions indicated by process. If process is an empty array, all dimensions will be processed (normal multidimensional transform).
options is a set of strings that indicate how the transform is applied:
- “inverse”: compute the inverse transform; not providing this string causes the forward transform to be computed.
- “real”: assumes that the (complex) input is conjugate symmetric, and returns a real-valued result. Only to be used together with “inverse”.
- “fast”: pads the input to a “nice” size, a multiple of 2, 3 and 5, which can be processed faster. Note that “fast” causes the output to be interpolated. This is not always a problem when computing convolutions or correlations, but will introduce e.g. edge effects in the result of the convolution.
- “corner”: sets the origin to the top-left corner of the image (both in the spatial and the frequency domain). This yields a standard DFT (Discrete Fourier Transform).
- “symmetric”: the normalization is made symmetric, where both forward and inverse transforms are normalized by the same amount. Each transform is multiplied by 1/sqrt(size) for each dimension. This makes the transform identical to how it was in DIPlib 2.
For tensor images, each plane is transformed independently.
With the “fast” mode, the input will be padded. If “corner” is given, the padding is to the right. Otherwise it is split evenly on both sides, in such a way that the origin remains in the middle pixel. For the forward transform, the padding applied is the “zero order” boundary condition (see dip::BoundaryCondition). Its effect is similar to padding with zeros, but with reduced edge effects. For the inverse transform, padding is with zeros (“add zeros” boundary condition). However, the combination of “fast”, “corner” and “inverse” is not allowed, since padding in that case is non-trivial.
dip::uint dip::OptimalFourierTransformSize (dip::uint size)
#include "diplib/transform.h"
Returns the next higher multiple of {2, 3, 5}. The largest value that can be returned is 2125764000 (smaller than 2^31 - 1, the largest possible value of an int on most platforms).
void
dip::RieszTransform (dip::Image const& in,
dip::Image& out,
dip::String const& inRepresentation = S::SPATIAL,
dip::String const& outRepresentation = S::SPATIAL,
dip::BooleanArray process = {})
#include "diplib/transform.h"
Computes the Riesz transform of a scalar image.
The Riesz transform is the multidimensional generalization of the Hilbert transform, and identical to it for one-dimensional images. It is computed through the Fourier domain by

R{f} = F^{-1}{ -i (x/||x||) F{f} },

where f is the input image and x is the coordinate vector.
out is a vector image with one element per image dimension. If process is given, it specifies which dimensions to include in the output vector image. in must be scalar.
inRepresentation and outRepresentation can be "spatial" or "frequency", and indicate in which domain the input image is, and in which domain the output image should be. If inRepresentation is "frequency", the input image must already be in the frequency domain, and will not be transformed again. Likewise, if outRepresentation is "frequency", the output image will not be transformed to the spatial domain. Use these flags to prevent redundant back-and-forth transformations if other processing in the frequency domain is necessary.
void
dip::StationaryWaveletTransform (dip::Image const& in,
dip::Image& out,
dip::uint nLevels = 4,
dip::StringArray const& boundaryCondition = {},
dip::BooleanArray const& process = {})
#include "diplib/transform.h"
Computes a stationary wavelet transform (also called à trous wavelet decomposition).
For an n-dimensional input image, creates an (n+1)-dimensional output image where each slice corresponds to one level of the wavelet transform. The first slice is the lowest level (finest detail), and subsequent slices correspond to increasingly coarser levels. The last slice corresponds to the residue. There are nLevels + 1 slices in total.
The filter used to smooth the image for the first level is [1/16, 1/4, 3/8, 1/4, 1/16], applied to each dimension in sequence through dip::SeparableConvolution. For subsequent levels, zeros are inserted into this filter.
boundaryCondition is passed to dip::SeparableConvolution to determine how to extend the input image past its boundary. process can be used to exclude some dimensions from the filtering.
in can have any number of dimensions, any number of tensor elements, and any data type. out will have the smallest signed data type that can hold all values of in (see dip::DataType::SuggestSigned). Note that the first nLevels slices will contain negative values, even if in is purely positive, as these levels are the difference between two differently smoothed images.
Summing the output image along its last dimension will yield the input image:

dip::Image img = ...;
dip::Image swt = StationaryWaveletTransform( img );
dip::BooleanArray process( swt.Dimensionality(), false );
process.back() = true;
img == dip::Sum( swt, {}, process ).Squeeze();
Variable documentation
size_t const dip::maximumDFTSize constexpr
#include "diplib/dft.h"
The largest size supported by the DFT (both the internal code and FFTW use int for sizes).