RandomAffine

Bases: RandomTransform, SpatialTransform

Apply a random affine transformation and resample the image.

Parameters:

scales (TypeOneToSixFloat), default: 0.1

Tuple \((a_1, b_1, a_2, b_2, a_3, b_3)\) defining the scaling ranges. The scaling values along each dimension are \((s_1, s_2, s_3)\), where \(s_i \sim \mathcal{U}(a_i, b_i)\). If two values \((a, b)\) are provided, then \(s_i \sim \mathcal{U}(a, b)\). If only one value \(x\) is provided, then \(s_i \sim \mathcal{U}(1 - x, 1 + x)\). If three values \((x_1, x_2, x_3)\) are provided, then \(s_i \sim \mathcal{U}(1 - x_i, 1 + x_i)\). For example, using scales=(0.5, 0.5) will zoom out the image, making the objects inside look twice as small while preserving the physical size and position of the image bounds.

degrees (TypeOneToSixFloat), default: 10

Tuple \((a_1, b_1, a_2, b_2, a_3, b_3)\) defining the rotation ranges in degrees. Rotation angles around each axis are \((\theta_1, \theta_2, \theta_3)\), where \(\theta_i \sim \mathcal{U}(a_i, b_i)\). If two values \((a, b)\) are provided, then \(\theta_i \sim \mathcal{U}(a, b)\). If only one value \(x\) is provided, then \(\theta_i \sim \mathcal{U}(-x, x)\). If three values \((x_1, x_2, x_3)\) are provided, then \(\theta_i \sim \mathcal{U}(-x_i, x_i)\).

translation (TypeOneToSixFloat), default: 0

Tuple \((a_1, b_1, a_2, b_2, a_3, b_3)\) defining the translation ranges in mm. Translation along each axis is \((t_1, t_2, t_3)\), where \(t_i \sim \mathcal{U}(a_i, b_i)\). If two values \((a, b)\) are provided, then \(t_i \sim \mathcal{U}(a, b)\). If only one value \(x\) is provided, then \(t_i \sim \mathcal{U}(-x, x)\). If three values \((x_1, x_2, x_3)\) are provided, then \(t_i \sim \mathcal{U}(-x_i, x_i)\). For example, if the image is in RAS+ orientation (e.g., after applying ToCanonical) and the translation is \((10, 20, 30)\), the sample will move 10 mm to the right, 20 mm to the front, and 30 mm upwards. If the image was in, e.g., PIR+ orientation, the sample will move 10 mm to the back, 20 mm downwards, and 30 mm to the right.

isotropic (bool), default: False

If True, only one scaling factor will be sampled for all dimensions, i.e. \(s_1 = s_2 = s_3\). If one value \(x\) is provided in scales, the scaling factor along all dimensions will be \(s \sim \mathcal{U}(1 - x, 1 + x)\). If two values \((a, b)\) are provided in scales, the scaling factor along all dimensions will be \(s \sim \mathcal{U}(a, b)\).

center (str), default: 'image'

If 'image', rotations and scaling will be performed around the image center. If 'origin', rotations and scaling will be performed around the origin in world coordinates.

default_pad_value (str | float), default: 'minimum'

As the image is rotated, some values near the borders will be undefined. If 'minimum', the fill value will be the image minimum. If 'mean', the fill value is the mean of the border values. If 'otsu', the fill value is the mean of the values at the border that lie under an Otsu threshold. If it is a number, that value will be used. This parameter applies to intensity images only.

default_pad_label (int | float), default: 0

As the label map is rotated, some values near the borders will be undefined. This numeric value will be used to fill those undefined regions. This parameter applies to label maps only.

image_interpolation (str), default: 'linear'

See Interpolation.

label_interpolation (str), default: 'nearest'

See Interpolation.

check_shape (bool), default: True

If True, an error will be raised if the images are in different physical spaces. If False, center should probably not be 'image' but 'origin'.

**kwargs, default: {}

See Transform for additional keyword arguments.
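The one/two/three/six-value shorthands accepted by scales, degrees, and translation all expand to three per-axis sampling ranges \((a_i, b_i)\). The following stdlib-only sketch illustrates that expansion; expand_ranges and sample are hypothetical helpers written for this page, not TorchIO's actual parsing code.

```python
import random

def expand_ranges(value, around=0.0):
    """Expand a TorchIO-style shorthand into three (low, high) ranges.

    Illustrative only. `around` is the symmetric center of the range:
    1.0 for scales, 0.0 for degrees and translation.
    """
    if isinstance(value, (int, float)):
        # One value x: U(around - x, around + x) on every axis
        return [(around - value, around + value)] * 3
    value = tuple(value)
    if len(value) == 2:
        # (a, b): the same range U(a, b) on every axis
        return [tuple(value)] * 3
    if len(value) == 3:
        # (x1, x2, x3): a symmetric range per axis
        return [(around - x, around + x) for x in value]
    if len(value) == 6:
        # (a1, b1, a2, b2, a3, b3): an explicit range per axis
        return [tuple(value[i:i + 2]) for i in range(0, 6, 2)]
    raise ValueError('Expected 1, 2, 3 or 6 values')

def sample(ranges):
    """Draw one value per axis, uniformly within each range."""
    return [random.uniform(a, b) for a, b in ranges]
```

For instance, expand_ranges(10) yields [(-10, 10)] * 3, matching how degrees=10 is described above, while expand_ranges(0.5, around=1.0) yields [(0.5, 1.5)] * 3 for scales.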

Examples:

>>> import torchio as tio
>>> image = tio.datasets.Colin27().t1
>>> transform = tio.RandomAffine(
...     scales=(0.9, 1.2),
...     degrees=15,
... )
>>> transformed = transform(image)
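The difference between center='image' and center='origin' can be sketched with a 2D analogue using only the standard library. This is a toy illustration of rotating about a chosen center point, not TorchIO code; the point and center values are made up for the example.

```python
import math

def rotate_point(point, degrees, center=(0.0, 0.0)):
    """Rotate a 2D point counterclockwise about `center` by `degrees`."""
    theta = math.radians(degrees)
    # Translate so the center is at the origin, rotate, translate back
    x, y = point[0] - center[0], point[1] - center[1]
    return (center[0] + x * math.cos(theta) - y * math.sin(theta),
            center[1] + x * math.sin(theta) + y * math.cos(theta))

# Rotating about the origin moves the point far from where it started:
about_origin = rotate_point((2.0, 0.0), 90, center=(0.0, 0.0))
# Rotating about the "image center" (1, 1) keeps it near the image bounds:
about_center = rotate_point((2.0, 0.0), 90, center=(1.0, 1.0))
```

The same trade-off applies in 3D: rotating about the world origin can move the sampled region far from the original field of view, which is why 'image' is the default.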

__call__(data)

__call__(data: Subject) -> Subject
__call__(data: ImageT) -> ImageT
__call__(data: torch.Tensor) -> torch.Tensor
__call__(data: np.ndarray) -> np.ndarray
__call__(data: sitk.Image) -> sitk.Image
__call__(data: dict[str, object]) -> dict[str, object]
__call__(data: nib.Nifti1Image) -> nib.Nifti1Image

Transform data and return a result of the same type.

Parameters:

data (TypeTransformInput), required

Instance of torchio.Subject, 4D torch.Tensor or numpy.ndarray with dimensions \((C, W, H, D)\), where \(C\) is the number of channels and \(W, H, D\) are the spatial dimensions. If the input is a tensor, the affine matrix will be set to identity. Other valid input types are a SimpleITK image, a torchio.Image, a NiBabel Nifti1 image or a dict. The output type is the same as the input type.
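The type-preserving behavior of the overloads above can be sketched with a toy dispatcher. This is a hypothetical stdlib-only analogue, not TorchIO's implementation: the idea is that container inputs are unwrapped, the operation is applied, and the result is rewrapped in the original type.

```python
def type_preserving_call(data, operation):
    """Toy analogue of a transform's __call__: same type out as in.

    `data` is a list (standing in for a tensor), a tuple, or a dict
    mapping names to such values. `operation` acts on a list.
    """
    if isinstance(data, dict):
        # Dict inputs are transformed value by value, keys preserved
        return {key: type_preserving_call(value, operation)
                for key, value in data.items()}
    if isinstance(data, tuple):
        # Convert to the common representation, apply, convert back
        return tuple(operation(list(data)))
    return operation(data)

# A reversal stands in for the spatial resampling:
flipped = type_preserving_call({'t1': [1, 2, 3]}, lambda x: x[::-1])
```

Each branch returns the same container type it received, mirroring how the transform returns a Subject for a Subject, a tensor for a tensor, and so on.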

to_hydra_config()

Return a dictionary representation of the transform for Hydra instantiation.
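As a rough illustration only (the exact keys returned by to_hydra_config are not documented here), a Hydra-instantiable config for the transform in the example above might take a shape like the following, using Hydra's standard _target_ key:

```yaml
_target_: torchio.transforms.RandomAffine
scales: [0.9, 1.2]
degrees: 15
```

Such a dict lets hydra.utils.instantiate rebuild the transform from configuration files.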

plot

Source code
import torchio as tio
# Download a sample subject containing a chest CT image
subject = tio.datasets.Slicer('CTChest')
ct = subject.CT_chest
# Apply a random affine transformation with default parameters
transform = tio.RandomAffine()
ct_transformed = transform(ct)
# Add the result to the subject to plot it next to the original
subject.add_image(ct_transformed, 'Transformed')
subject.plot()