Imaging genomics is a multimodal research area focused on analyzing the influence of genetic variation on brain function and structure. Because such data are high-dimensional, a critical step is to apply a feature extraction/dimensionality reduction method. Often, unimodal methods are applied to each dataset separately, failing to capture subtle interactions between the modalities. In this paper, we propose a multimodal sparse representation model that jointly extracts features of interest by effectively coupling genomic and neuroimaging data. More precisely, we reconstruct the neuroimaging data as a sparse linear combination of dictionary atoms, while accounting for contributions from the genomic data during this decomposition. This is achieved through an explicit constraint: a mapping function links the genomic data to the set of subject-wise coefficients associated with the imaging dictionary atoms. The motivation of this work is to extract generative features as well as the intrinsic relationships between the two modalities. The model can be expressed as a constrained optimization problem, for which a complete algorithmic procedure is provided. The proposed method is applied to analyze the differences between two young adult populations whose verbal ability differs significantly (low vs. high achievers), relying on both imaging and genomic data.
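To make the coupling idea concrete, here is a minimal sketch of one way such a model could be set up: imaging data X is approximated as A @ D (sparse codes A times dictionary D), while a linear map W from genomic data G constrains the codes via a penalty ||A - G W||. The objective form, the penalty weights `lam` and `gamma`, and the alternating-minimization updates below are all illustrative assumptions, not the authors' exact formulation or algorithm.

```python
import numpy as np

def coupled_sparse_coding(X, G, n_atoms=8, lam=0.1, gamma=0.5,
                          n_iter=50, seed=0):
    """Illustrative coupled model (assumed objective, not the paper's):

        min_{A, D, W}  ||X - A D||_F^2 + lam * ||A||_1
                       + gamma * ||A - G W||_F^2

    X : (n_subjects, n_voxels) imaging data
    G : (n_subjects, n_snps)   genomic data
    A : (n_subjects, n_atoms)  subject-wise sparse coefficients
    D : (n_atoms, n_voxels)    imaging dictionary
    W : (n_snps, n_atoms)      genomics-to-coefficients mapping
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    D = rng.standard_normal((n_atoms, p))
    D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm atoms
    A = np.zeros((n, n_atoms))
    W = np.zeros((G.shape[1], n_atoms))
    for _ in range(n_iter):
        # A-step: one ISTA step (gradient on the smooth terms,
        # then soft-thresholding for the l1 penalty).
        L = 2.0 * (np.linalg.norm(D, 2) ** 2 + gamma) + 1e-8  # Lipschitz bound
        grad = 2.0 * (A @ D - X) @ D.T + 2.0 * gamma * (A - G @ W)
        A = A - grad / L
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)
        # W-step: least squares for  min_W ||A - G W||_F^2.
        W = np.linalg.lstsq(G, A, rcond=None)[0]
        # D-step: least squares for  min_D ||X - A D||_F^2,
        # then renormalize atoms and rescale A so A @ D is unchanged.
        if np.abs(A).sum() > 0:
            D = np.linalg.lstsq(A, X, rcond=None)[0]
            scale = np.maximum(np.linalg.norm(D, axis=1, keepdims=True), 1e-12)
            D /= scale
            A *= scale.T
    return A, D, W
```

With this structure, the learned W exposes how genomic variables relate to the imaging components, which is the kind of cross-modal relationship the abstract aims to recover.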
Zille, P., & Wang, Y. P. (2017). Coupled dimensionality-reduction model for imaging genomics. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10551 LNCS, pp. 241–248). Springer Verlag. https://doi.org/10.1007/978-3-319-67675-3_22