ToReferenceSpace

Bases: SpatialTransform

Modify the spatial metadata so it matches a reference space.

This is useful, for example, to set meaningful spatial metadata of a neural network embedding, for visualization or further processing such as resampling a segmentation output.

Examples:

```python
import torchio as tio

image = tio.datasets.FPG().t1
embedding_tensor = my_network(image.tensor)  # we lose metadata here
embedding_image = tio.ToReferenceSpace.from_tensor(embedding_tensor, image)
```

__call__(data)

```python
__call__(data: Subject) -> Subject
__call__(data: ImageT) -> ImageT
__call__(data: torch.Tensor) -> torch.Tensor
__call__(data: np.ndarray) -> np.ndarray
__call__(data: sitk.Image) -> sitk.Image
__call__(data: dict[str, object]) -> dict[str, object]
__call__(data: nib.Nifti1Image) -> nib.Nifti1Image
```

Transform data and return a result of the same type.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `data` | `TypeTransformInput` | Instance of `torchio.Subject`, 4D `torch.Tensor` or `numpy.ndarray` with dimensions \((C, W, H, D)\), where \(C\) is the number of channels and \(W, H, D\) are the spatial dimensions. If the input is a tensor, the affine matrix will be set to identity. Other valid input types are a SimpleITK image, a `torchio.Image`, a NiBabel Nifti1 image or a `dict`. The output type is the same as the input type. | *required* |
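For bare tensor or array inputs, the expected layout is \((C, W, H, D)\): channel axis first, then the three spatial dimensions. A minimal sketch with NumPy (the shape values are arbitrary, not from the source):

```python
import numpy as np

# 4D input in the (C, W, H, D) layout described above:
# one channel followed by three spatial dimensions of arbitrary size.
array = np.zeros((1, 181, 217, 181), dtype=np.float32)

channels, width, height, depth = array.shape
assert channels == 1
assert (width, height, depth) == (181, 217, 181)

# When an array like this is passed to __call__, the affine is set to
# identity, so any real-world spacing/orientation must be restored
# afterwards, e.g. with ToReferenceSpace.
```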

to_hydra_config()

Return a dictionary representation of the transform for Hydra instantiation.
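The exact contents of the returned dictionary are not shown here, but Hydra's convention is a `_target_` key naming the class to instantiate, which `hydra.utils.instantiate` then resolves. A minimal sketch of that convention (the key values below are assumptions, not taken from the source):

```python
# Hypothetical Hydra-style config dict; Hydra's convention is a `_target_`
# key holding the dotted path of the class to instantiate.
config = {"_target_": "torchio.transforms.ToReferenceSpace"}

# hydra.utils.instantiate(config) would rebuild the transform from this dict.
# Here we only show how the target path splits into module and class name.
module_path, class_name = config["_target_"].rsplit(".", 1)
print(module_path, class_name)  # torchio.transforms ToReferenceSpace
```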

from_tensor(tensor, reference) staticmethod

Build a TorchIO image from a tensor and a reference image.