
Resample


Bases: SpatialTransform

Resample image to a different physical space.

This is a powerful transform that can be used to change the image shape or spatial metadata, or to apply a spatial transformation.

Parameters:

target (TypeTarget, default: ONE_MILLIMITER_ISOTROPIC)

    Argument to define the output space. Can be one of:

      • Output spacing \((s_w, s_h, s_d)\), in mm. If only one value \(s\) is specified, then \(s_w = s_h = s_d = s\).
      • Path to an image that will be used as reference.
      • Instance of Image.
      • Name of an image key in the subject.
      • Tuple (spatial_shape, affine) defining the output space.

pre_affine_name (str | None, default: None)

    Name of the image key (not subject key) storing an affine matrix that will be applied to the image header before resampling. If None, the image is resampled with an identity transform. See usage in the example below.

image_interpolation (str, default: 'linear')

    See Interpolation.

label_interpolation (str, default: 'nearest')

    See Interpolation.

scalars_only (bool, default: False)

    Apply only to instances of ScalarImage. Used internally by RandomAnisotropy.

antialias (bool, default: False)

    If True, apply Gaussian smoothing before downsampling along any dimension that will be downsampled. For example, if the input image has spacing (0.5, 0.5, 4) and the target spacing is (1, 1, 1), the image will be smoothed along the first two dimensions before resampling. Label maps are not smoothed. The standard deviations of the Gaussian kernels are computed according to the method described in Cardoso et al., Scale factor point spread function matching: beyond aliasing in image resampling, MICCAI 2015.

**kwargs (default: {})

    See Transform for additional keyword arguments.

Examples:

>>> import torch
>>> import torchio as tio
>>> transform = tio.Resample()                      # resample all images to 1mm isotropic
>>> transform = tio.Resample(2)                     # resample all images to 2mm isotropic
>>> transform = tio.Resample('t1')                  # resample all images to 't1' image space
>>> # Example: using a precomputed transform to MNI space
>>> ref_path = tio.datasets.Colin27().t1.path  # this image is in MNI space, so we can use it as reference/target
>>> affine_matrix = tio.io.read_matrix('transform_to_mni.txt')  # from a NiftyReg registration; would also work with e.g. a .tfm file from SimpleITK
>>> image = tio.ScalarImage(tensor=torch.rand(1, 256, 256, 180), to_mni=affine_matrix)  # 'to_mni' is an arbitrary name
>>> transform = tio.Resample(ref_path, pre_affine_name='to_mni')  # apply the stored affine, then resample to the reference space
>>> transformed = transform(image)  # "image" is now in MNI space
Note

The antialias option is recommended when large (e.g. > 2×) downsampling factors are expected, particularly for offline (before training) preprocessing, when run times are not a concern.

__call__(data)

__call__(data: Subject) -> Subject
__call__(data: ImageT) -> ImageT
__call__(data: torch.Tensor) -> torch.Tensor
__call__(data: np.ndarray) -> np.ndarray
__call__(data: sitk.Image) -> sitk.Image
__call__(data: dict[str, object]) -> dict[str, object]
__call__(data: nib.Nifti1Image) -> nib.Nifti1Image

Transform data and return a result of the same type.

Parameters:

data (TypeTransformInput, required)

    Instance of torchio.Subject, 4D torch.Tensor or numpy.ndarray with dimensions \((C, W, H, D)\), where \(C\) is the number of channels and \(W, H, D\) are the spatial dimensions. If the input is a tensor, the affine matrix will be set to identity. Other valid input types are a SimpleITK image, a torchio.Image, a NiBabel Nifti1 image or a dict. The output type is the same as the input type.

to_hydra_config()

Return a dictionary representation of the transform for Hydra instantiation.

plot

Source code
import torchio as tio
subject = tio.datasets.FPG()
subject.remove_image('seg')
resample = tio.Resample(8)
t1_resampled = resample(subject.t1)
subject.add_image(t1_resampled, 'Downsampled')
subject.plot()