Subject-Matched Templates for Spatial Normalization

Citation

Rohlfing, T., Sullivan, E.V., Pfefferbaum, A. (2009). Subject-Matched Templates for Spatial Normalization. In: Yang, GZ., Hawkes, D., Rueckert, D., Noble, A., Taylor, C. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2009. MICCAI 2009. Lecture Notes in Computer Science, vol 5762. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04271-3_28

Abstract

Spatial normalization of images from multiple subjects is a common problem in group comparison studies, such as voxel-based and deformation-based morphometric analyses. Use of a study-specific template for normalization may improve normalization accuracy over a study-independent standard template (Good et al., NeuroImage, 14(1):21-36, 2001). Here, we develop this approach further by introducing the concept of subject-matched templates. Rather than using a single template for the entire population, a different template is used for every subject, with the template matched to the subject in terms of age, sex, and potentially other parameters (e.g., disease). All subject-matched templates are created from a single generative regression model of atlas appearance, thus providing a priori template-to-template correspondence without registration. We demonstrate that such an approach is technically feasible and significantly improves spatial normalization accuracy over using a single template.
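To make the idea concrete, the generative model described above can be sketched as a voxelwise linear regression of image appearance on subject covariates: fit one coefficient set per voxel from the training images, then synthesize a template for any (age, sex) combination. This is a minimal illustration, not the paper's actual model; all variable names, the simulated data, and the plain least-squares fit are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated training set: n_subjects pre-aligned images flattened to n_voxels,
# each with covariates (age, sex). Real data would be registered brain images.
n_subjects, n_voxels = 20, 1000
ages = rng.uniform(20, 80, n_subjects)
sexes = rng.integers(0, 2, n_subjects).astype(float)   # 0 = female, 1 = male
images = rng.normal(size=(n_subjects, n_voxels))       # stand-in intensities

# Design matrix with intercept, age, and sex columns.
X = np.column_stack([np.ones(n_subjects), ages, sexes])

# Least-squares fit: one (intercept, age, sex) coefficient triple per voxel.
beta, *_ = np.linalg.lstsq(X, images, rcond=None)      # shape (3, n_voxels)

def subject_matched_template(age, sex):
    """Synthesize a template image matched to a subject's age and sex."""
    return np.array([1.0, age, sex]) @ beta            # shape (n_voxels,)

t_young = subject_matched_template(25.0, 0.0)
t_old = subject_matched_template(75.0, 0.0)
```

Because every template is generated from the same coefficient array, voxel *i* in one template corresponds to voxel *i* in every other, which illustrates the a priori template-to-template correspondence (without registration) that the abstract highlights.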
