Multi-Modal Super-Resolution with Deep Guided Filtering

Abstract

Despite their visually appealing results, most deep learning-based super-resolution approaches lack the comprehensibility required for medical applications. We propose a modified version of the locally linear guided filter for super-resolution in medical imaging. The guidance map is learned end-to-end from multi-modal inputs, while the actual data is processed only with known operators. This ensures comprehensibility of the results and simplifies the implementation of guarantees. We demonstrate our approach on multi-modal MR and cross-modal CT and MR data. On both datasets, it clearly outperforms bicubic upsampling. For projection images, we achieve SSIMs of up to 0.99, while slice image data yields SSIMs of up to 0.98 for four-fold upsampling, given an image of the respective other modality at full resolution. In addition, end-to-end learning of the guidance map considerably improves the quality of the results.
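
At the core of the approach is the locally linear guided filter: within each local window, the output is modeled as an affine function of a guidance image, q = a·I + b, with the coefficients obtained in closed form from local means and covariances. The paper learns the guidance map end-to-end with a network; the sketch below shows only the underlying fixed-guidance filter combined with a known upsampling operator. It is a minimal NumPy/SciPy illustration, not the authors' implementation, and the function names (guided_filter, guided_upsample) and defaults (radius, eps) are assumptions made for this example.

```python
# Minimal sketch of a locally linear guided filter for super-resolution.
# NOTE: names and defaults here are illustrative assumptions, not the
# authors' implementation; the paper additionally learns the guidance
# map end-to-end with a network.
import numpy as np
from scipy.ndimage import uniform_filter, zoom


def box(x, r):
    # Mean over a (2r+1) x (2r+1) window.
    return uniform_filter(x, size=2 * r + 1, mode="reflect")


def guided_filter(guide, src, radius=4, eps=1e-3):
    # Locally linear model q = a * guide + b (He et al.), solved in
    # closed form per window from local means and (co)variances.
    mean_I = box(guide, radius)
    mean_p = box(src, radius)
    var_I = box(guide * guide, radius) - mean_I * mean_I
    cov_Ip = box(guide * src, radius) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)        # eps regularizes flat regions
    b = mean_p - a * mean_I
    # Average the coefficients over all windows covering each pixel.
    return box(a, radius) * guide + box(b, radius)


def guided_upsample(low_res, guide_hr, factor=4):
    # Known operator: cubic-spline upsampling (a stand-in for bicubic),
    # refined with the full-resolution guidance image.  guide_hr must
    # match the shape of the upsampled image.
    up = zoom(low_res.astype(np.float64), factor, order=3)
    return guided_filter(guide_hr.astype(np.float64), up)
```

In the cross-modal setting described in the abstract, low_res would be the modality to be upsampled and guide_hr the full-resolution image of the respective other modality, registered to the same grid.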

Cite

APA

Stimpel, B., Syben, C., Schirrmacher, F., Hoelter, P., Dörfler, A., & Maier, A. (2019). Multi-Modal Super-Resolution with Deep Guided Filtering. In Informatik aktuell (pp. 110–115). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-658-25326-4_25
