Limited view tomographic reconstruction aims to reconstruct a tomographic image from a limited number of projection views arising from sparse view or limited angle acquisitions, which reduce radiation dose or shorten scanning time. However, such a reconstruction suffers from severe artifacts due to the incompleteness of the sinogram. To derive a quality reconstruction, previous methods use UNet-like neural architectures to directly predict the full view reconstruction from limited view data; but these methods leave the deep network architecture issue largely unaddressed and cannot guarantee consistency between the sinogram of the reconstructed image and the acquired sinogram, leading to suboptimal reconstructions. In this work, we propose a cascaded residual dense spatial-channel attention network consisting of residual dense spatial-channel attention sub-networks and projection data fidelity layers. We evaluate our method on two datasets. Our experimental results on the AAPM Low Dose CT Grand Challenge dataset demonstrate that our algorithm achieves a consistent and substantial improvement over existing neural network methods on both limited angle reconstruction and sparse view reconstruction. In addition, our experimental results on the DeepLesion dataset demonstrate that our method generates high-quality reconstructions for eight major lesion types.
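To make the cascaded design described above concrete, below is a minimal PyTorch-style sketch: image-domain sub-networks with spatial-channel attention are interleaved with a projection data fidelity step that restores the acquired sinogram values at the measured views. The module names, the simplified attention block, and the generic `forward_op`/`back_op` projection callables are illustrative assumptions for exposition, not the authors' exact RDSCAN or projection data fidelity layer implementation.

```python
# Minimal sketch of a cascade of image-domain attention sub-networks and
# projection data fidelity layers. Placeholder modules, not the paper's code.
import torch
import torch.nn as nn


class SpatialChannelAttention(nn.Module):
    """Lightweight channel + spatial attention (stand-in for the paper's module)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_gate(x)     # channel-wise reweighting
        return x * self.spatial_gate(x)  # spatial reweighting


class ImageSubNet(nn.Module):
    """Residual image-domain sub-network with attention (placeholder for RDSCAN)."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            SpatialChannelAttention(channels),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual refinement of the current estimate


class ProjectionDataFidelity(nn.Module):
    """Enforce consistency with the acquired sinogram at the measured views.

    `forward_op` maps an image to a sinogram; `back_op` maps a sinogram back to
    an image (e.g., filtered back-projection). Both are assumed to be
    differentiable callables supplied by a tomography library.
    """
    def __init__(self, forward_op, back_op):
        super().__init__()
        self.forward_op = forward_op
        self.back_op = back_op

    def forward(self, image, measured_sino, view_mask):
        sino = self.forward_op(image)
        # Keep the network's prediction at missing views, restore acquired data.
        sino = view_mask * measured_sino + (1.0 - view_mask) * sino
        return self.back_op(sino)


class CascadedReconstructor(nn.Module):
    """Cascade of image sub-networks interleaved with data fidelity layers."""
    def __init__(self, forward_op, back_op, n_stages: int = 3):
        super().__init__()
        self.subnets = nn.ModuleList([ImageSubNet() for _ in range(n_stages)])
        self.fidelity = ProjectionDataFidelity(forward_op, back_op)

    def forward(self, x0, measured_sino, view_mask):
        x = x0  # initial reconstruction from the limited view data
        for subnet in self.subnets:
            x = subnet(x)
            x = self.fidelity(x, measured_sino, view_mask)
        return x
```

In practice, `forward_op` and `back_op` would be differentiable Radon transform and back-projection operators from a tomography toolbox; they are left as injected callables here so the sketch stays library-agnostic.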
Citation
Zhou, B., Zhou, S. K., Duncan, J. S., & Liu, C. (2021). Limited View Tomographic Reconstruction Using a Cascaded Residual Dense Spatial-Channel Attention Network With Projection Data Fidelity Layer. IEEE Transactions on Medical Imaging, 40(7), 1792–1804. https://doi.org/10.1109/TMI.2021.3066318