
Resize

Bases: SpatialTransform

Resample images so the output shape matches the given target shape.

The field of view remains the same.

Warning

In most medical imaging applications this transform should not be used, as it deforms the physical object by scaling anisotropically along the different dimensions. The typical way to change an image size is to apply Resample followed by CropOrPad.
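To see why the warning matters, the relation between shape and spacing can be sketched: because the field of view (shape × spacing) is preserved per axis, resizing rescales the voxel spacing independently along each axis, so the voxels become anisotropic unless the target shape matches the input aspect ratio. The helper name below is illustrative, not part of the library.

```python
def spacing_after_resize(shape_in, spacing_in, shape_out):
    """Output spacing that keeps the field of view fixed per axis
    (illustrative helper, not the library's internals)."""
    return tuple(
        sp * si / so for sp, si, so in zip(spacing_in, shape_in, shape_out)
    )

# An isotropic 1 mm volume resized to a cube becomes anisotropic:
spacing = spacing_after_resize((128, 128, 64), (1.0, 1.0, 1.0), (64, 64, 64))
# spacing == (2.0, 2.0, 1.0): W and H are now scaled differently from D
```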

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `target_shape` | `TypeSpatialShape` | Tuple \((W, H, D)\). If a single value \(N\) is provided, then \(W = H = D = N\). Dimensions set to -1 keep their current size. | required |
| `image_interpolation` | `str` | See Interpolation. | `'linear'` |
| `label_interpolation` | `str` | See Interpolation. | `'nearest'` |
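The `target_shape` conventions above (a single int expands to a cube, -1 keeps an axis) can be sketched as follows; `resolve_target_shape` is a hypothetical helper mirroring the documented behaviour, not a library function.

```python
def resolve_target_shape(target_shape, current_shape):
    """Expand a single int N to (N, N, N) and keep the current size for
    any axis whose target is -1 (hypothetical helper, for illustration)."""
    if isinstance(target_shape, int):
        target_shape = (target_shape,) * 3
    return tuple(c if t == -1 else t for t, c in zip(target_shape, current_shape))

# Resize only W and D, keeping H as-is:
resolve_target_shape((64, -1, 32), (128, 96, 80))  # (64, 96, 32)
```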

__call__(data)

__call__(data: Subject) -> Subject
__call__(data: ImageT) -> ImageT
__call__(data: torch.Tensor) -> torch.Tensor
__call__(data: np.ndarray) -> np.ndarray
__call__(data: sitk.Image) -> sitk.Image
__call__(data: dict[str, object]) -> dict[str, object]
__call__(data: nib.Nifti1Image) -> nib.Nifti1Image

Transform data and return a result of the same type.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `data` | `TypeTransformInput` | Instance of `torchio.Subject`, 4D `torch.Tensor` or `numpy.ndarray` with dimensions \((C, W, H, D)\), where \(C\) is the number of channels and \(W, H, D\) are the spatial dimensions. If the input is a tensor, the affine matrix is set to identity. Other valid input types are a SimpleITK image, a `torchio.Image`, a NiBabel Nifti1 image, and a dict. The output type matches the input type. | required |
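As a rough illustration of the \((C, W, H, D)\) shape contract for array inputs, here is a minimal nearest-neighbour resize over the spatial axes of a 4D NumPy array. This is a sketch only; the actual transform resamples through SimpleITK with the configured interpolation and also handles the other input types listed above.

```python
import numpy as np

def nearest_resize_4d(data, target_shape):
    """Nearest-neighbour resize of a (C, W, H, D) array (shape-contract
    sketch only; not the library's actual resampling)."""
    _, w, h, d = data.shape
    # Map each output index to its nearest source index along each spatial axis
    idx = [
        np.minimum(np.arange(t) * s // t, s - 1)
        for s, t in zip((w, h, d), target_shape)
    ]
    # Broadcast the three index arrays to build the resized volume
    return data[:, idx[0][:, None, None], idx[1][None, :, None], idx[2][None, None, :]]

volume = np.random.rand(2, 8, 8, 4)   # two channels, 8x8x4 voxels
resized = nearest_resize_4d(volume, (4, 4, 4))
# resized.shape == (2, 4, 4, 4); the channel axis is untouched
```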

to_hydra_config()

Return a dictionary representation of the transform for Hydra instantiation.
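The returned dictionary presumably follows Hydra's `_target_` convention for `hydra.utils.instantiate`; the exact keys and import path below are assumptions for illustration, not confirmed by this page.

```python
# Hypothetical shape of the dict returned by to_hydra_config(); the
# "_target_" path and key names are assumed, not taken from the library.
config = {
    "_target_": "torchio.transforms.Resize",  # assumed import path
    "target_shape": [64, 64, 64],
    "image_interpolation": "linear",
    "label_interpolation": "nearest",
}
# hydra.utils.instantiate(config) would then rebuild an equivalent transform
```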