Building quantitative models to summarize the structural variability of the human brain is an essential task in brain image analysis. Such models can be used to measure the normative variation of healthy brains, to capture their change over time, and to find imaging patterns characteristic of a diseased group. These models can be further applied to individual brain scans for tissue segmentation, lesion delineation, abnormality detection, and image registration. A common approach to deriving a representation of a population is the use of atlases (i.e., characteristic brains) that are either manually determined or automatically inferred. However, atlases are first-order statistical measures that convey no information about the amount and direction of variability within a population, and they are therefore inadequate for many applications. Most previous work on statistical modeling of imaging data has resorted to voxel-based constructions in which image values at different voxels are assumed to be statistically independent. Although voxel-based methods can identify structural variations that are well localized, they are blind to correlations between different regions and cannot capture global patterns in the underlying data. In contrast, classical multivariate statistical methods can find the most dominant trends of variability, but with a limited number of samples they cannot provide a statistically consistent estimate of the full covariance structure or the joint probability density function of high-dimensional image data. In this thesis, we introduce a multivariate framework for learning probability distributions over high-dimensional image data to capture the inter-subject structural variability of the brain. Specifically, we adopt a divide-and-conquer strategy, breaking the challenging task of modeling high-dimensional image data into a collection of smaller, more tractable problems.
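The divide-and-conquer idea can be sketched as follows. This is an illustrative example, not the thesis implementation: a volume is decomposed into overlapping local regions, and an independent low-dimensional statistical model (here, a per-location PCA via SVD) is fitted at each region across subjects. The patch size, stride, and number of components are assumptions chosen for illustration.

```python
import numpy as np

def extract_patches(volume, size=8, stride=4):
    """Slide an overlapping window over a 3-D volume; return flattened patches."""
    patches = []
    sx, sy, sz = volume.shape
    for x in range(0, sx - size + 1, stride):
        for y in range(0, sy - size + 1, stride):
            for z in range(0, sz - size + 1, stride):
                patches.append(volume[x:x+size, y:y+size, z:z+size].ravel())
    return np.array(patches)

def fit_local_models(volumes, n_components=5, **kw):
    """Fit an independent PCA-style model to each patch location across
    subjects; returns a (mean, principal axes) pair per location."""
    stacks = np.stack([extract_patches(v, **kw) for v in volumes])  # (subjects, locations, dim)
    models = []
    for loc in range(stacks.shape[1]):
        data = stacks[:, loc, :]                  # one location, all subjects
        mean = data.mean(axis=0)
        # SVD of the centered data gives the dominant local modes of variation
        _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
        models.append((mean, vt[:n_components]))
    return models
```

Each local problem involves only a few hundred dimensions rather than millions, which is what makes covariance estimation tractable from a limited number of subjects.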
In Chapter 2, we present a generative model built upon this strategy to capture normative variations of image appearance. The model is incorporated within a novel framework for locating imaging abnormalities. In particular, a three-dimensional image volume is modeled as an ensemble of overlapping local regions. A sparse probabilistic model approximates the marginal distribution of local intensity patterns, while pairwise potentials account for correlations across local regions. To tackle the difficulties of registering an image of a healthy brain to a scan of a diseased brain, we develop an iterative procedure that interleaves abnormality detection with registration. The method was evaluated on simulated data and tested on images with real lesions. Experimental results demonstrate that the framework achieves accurate registration and abnormality detection simultaneously.

In Chapter 3, we introduce a generative probabilistic model of high-dimensional spatial transformations. To make use of linear statistical methods while preserving diffeomorphisms, we adopt the Log-Euclidean framework and parametrize diffeomorphisms as exponentials of stationary velocity fields. Following the divide-and-conquer principle, we treat a velocity field as a collection of local velocities that reside in much lower-dimensional subspaces. In contrast to the model for image appearance, principal component analysis is used to estimate the covariance structure of each local velocity, and canonical correlation analysis is used to learn the dependencies between pairs of local velocities. The learned model serves as the foundation of a statistically constrained diffeomorphic registration algorithm. The method was tested on both simulated and real data.
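The Log-Euclidean parametrization above rests on computing the exponential of a stationary velocity field, which is conventionally done by scaling and squaring. The following is a minimal 2-D sketch of that standard construction; the grid size, interpolation order, and number of squaring steps are illustrative choices, not the thesis configuration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def exp_velocity_field(v, n_steps=6):
    """v: (2, H, W) stationary velocity field in voxel units.
    Returns the displacement field of exp(v) via scaling and squaring."""
    phi = v / (2 ** n_steps)                      # small initial displacement
    h, w = v.shape[1:]
    grid = np.mgrid[0:h, 0:w].astype(float)       # identity coordinates
    for _ in range(n_steps):
        # compose phi with itself: phi_new(x) = phi(x) + phi(x + phi(x))
        coords = grid + phi
        warped = np.stack([
            map_coordinates(phi[c], coords, order=1, mode='nearest')
            for c in range(2)
        ])
        phi = phi + warped
    return phi
```

Because the velocity field is stationary, each squaring step doubles the integration time, so exponentiation costs only a logarithmic number of compositions while the result remains invertible (its inverse is obtained by exponentiating the negated field).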
The results indicate that the proposed model captures the normative variations of deformations with sub-millimeter accuracy and that the learned statistical constraints lead to substantially more robust registration in the presence of abnormalities. Lastly, in Chapter 4, we shift our attention to the segmentation of specific pathological structures in a supervised setting. In particular, we demonstrate how a generative model similar to the one described in Chapter 2 can be combined with discriminative learning techniques to form a hybrid segmentation framework. The hybrid method was validated on 132 scans of patients with high-grade gliomas. Quantitative evaluation shows that the hybrid approach outperforms both the baseline generative and the baseline discriminative methods.
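One common way such a generative-discriminative hybrid can be arranged, sketched here purely for illustration and not as the thesis design, is to feed per-voxel posterior scores from an unsupervised generative model into a supervised discriminative classifier alongside the raw intensities. The mixture model, classifier choice, and feature set below are all assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import RandomForestClassifier

def hybrid_fit(intensities, labels, n_tissue_classes=3):
    """intensities: (n_voxels, n_channels); labels: (n_voxels,) lesion mask."""
    # Generative stage: unsupervised mixture over intensity channels
    gmm = GaussianMixture(n_components=n_tissue_classes, random_state=0)
    gmm.fit(intensities)
    posteriors = gmm.predict_proba(intensities)   # soft tissue memberships
    # Discriminative stage: classifier on raw + generative features
    features = np.hstack([intensities, posteriors])
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(features, labels)
    return gmm, clf

def hybrid_predict(gmm, clf, intensities):
    features = np.hstack([intensities, gmm.predict_proba(intensities)])
    return clf.predict(features)
```

The appeal of this arrangement is that the generative stage encodes population-level tissue structure without requiring lesion labels, while the discriminative stage learns the lesion-specific decision boundary from the labeled training scans.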
Ph.D. dissertation, University of Pennsylvania, 2018. Department of Applied Mathematics and Computational Science. Supervisor: Christos Davatzikos. Includes bibliographical references.