WavTrans: Synergizing Wavelet and Cross-Attention Transformer for Multi-contrast MRI Super-Resolution


Abstract

Current multi-contrast MRI super-resolution (SR) methods often harness convolutional neural networks (CNNs) for feature extraction and fusion. However, existing models have several shortcomings that prevent them from producing satisfactory results. First, some high-frequency details are lost during feature extraction, resulting in blurred boundaries in the reconstructed images, which may impede subsequent diagnosis and treatment. Second, the receptive field of the convolution kernel is limited, making it difficult for these networks to capture long-range/non-local features. Third, most of these models are driven solely by training data, neglecting prior knowledge about the correlations among different contrasts, which, once well leveraged, can effectively enhance performance with limited training data. In this paper, we propose a novel model that synergizes wavelet transforms with a new cross-attention transformer to comprehensively tackle these challenges; we call it WavTrans. Specifically, we apply a one-level wavelet transformation to obtain the detail and approximation coefficients of the reference-contrast MR images (Ref). While the approximation coefficients compress the low-frequency global information, the detail coefficients represent the high-frequency local structure and texture information. We then propose a new residual cross-attention swin transformer to extract and fuse features, establishing long-distance dependencies between them and maximizing the restoration of high-frequency information in the target-contrast images (Tar). In addition, a multi-residual fusion module is designed to fuse the high-frequency information in the upsampled Tar and the original Ref to ensure the restoration of detailed information. Extensive experiments demonstrate that WavTrans outperforms state-of-the-art methods by a considerable margin at 2-fold and 4-fold upsampling factors.
Code will be available at https://github.com/XAIMI-Lab/WavTrans.
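The two core ingredients the abstract describes, a one-level wavelet decomposition of the Ref image and cross-attention from Tar features onto Ref features, can be illustrated with a minimal numpy sketch. This is not the paper's implementation: it assumes a Haar wavelet basis and plain scaled dot-product cross-attention without learned projections, whereas WavTrans uses a residual cross-attention swin transformer with windowed attention.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar wavelet transform (illustrative).

    Returns the approximation band (LL, low-frequency global structure)
    and three detail bands (LH, HL, HH, high-frequency edges/texture),
    mirroring the Ref-image decomposition described in the abstract.
    Assumes img has even height and width.
    """
    # Orthonormal Haar filtering along rows.
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # Then along columns.
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, (lh, hl, hh)

def cross_attention(tar_tokens, ref_tokens):
    """Scaled dot-product cross-attention (illustrative).

    Queries come from the target (Tar) features and keys/values from the
    reference (Ref) features, so every Tar token can attend to all Ref
    tokens -- the long-range cross-contrast fusion the abstract refers to.
    """
    d = tar_tokens.shape[-1]
    scores = tar_tokens @ ref_tokens.T / np.sqrt(d)
    # Row-wise softmax, numerically stabilized.
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ ref_tokens
```

Because the Haar filters above are orthonormal, the decomposition preserves image energy, so the high-frequency content isolated in the detail bands is separated without loss, which is why the detail coefficients can serve as a faithful carrier of edge and texture information for the fusion stage.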

Citation (APA)

Li, G., Lyu, J., Wang, C., Dou, Q., & Qin, J. (2022). WavTrans: Synergizing Wavelet and Cross-Attention Transformer for Multi-contrast MRI Super-Resolution. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13436 LNCS, pp. 463–473). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-16446-0_44
