Deep Image Translation for Enhancing Simulated Ultrasound Images


Abstract

Ultrasound simulation based on ray tracing enables the synthesis of highly realistic images. As an educational tool, it can provide an interactive training environment for sonographers. However, due to the high computational demand, there is a trade-off between image quality and interactivity, potentially leading to sub-optimal results at interactive rates. In this work, we introduce a deep learning approach based on adversarial training that mitigates this trade-off by improving the quality of simulated images at constant computation time. An image-to-image translation framework is used to translate low-quality images into high-quality versions. To incorporate anatomical information potentially lost in low-quality images, we additionally provide segmentation maps as input to the translation network. Furthermore, we propose to leverage information from acoustic attenuation maps to better preserve acoustic shadows and directional artifacts, an invaluable feature for ultrasound image interpretation. The proposed method yields an improvement of 7.2% in Fréchet Inception Distance and 8.9% in patch-based Kullback-Leibler divergence.
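To illustrate the patch-based Kullback-Leibler divergence cited as an evaluation metric, the sketch below compares intensity histograms of corresponding patches in two grayscale images and averages KL(p‖q) over all patches. This is a hypothetical minimal re-implementation for intuition only; the patch size, bin count, and normalization are assumptions, not the authors' exact protocol.

```python
import numpy as np

def patch_kl_divergence(img_a, img_b, patch=8, bins=32, eps=1e-10):
    """Average KL divergence between intensity histograms of
    corresponding patches of two grayscale images in [0, 1].
    Illustrative sketch; patch/bin settings are assumptions."""
    h, w = img_a.shape
    kls = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            # Histogram each corresponding patch over a fixed range
            ha, _ = np.histogram(img_a[y:y+patch, x:x+patch],
                                 bins=bins, range=(0.0, 1.0))
            hb, _ = np.histogram(img_b[y:y+patch, x:x+patch],
                                 bins=bins, range=(0.0, 1.0))
            # Normalize to probabilities; eps avoids log(0)
            p = ha / ha.sum() + eps
            q = hb / hb.sum() + eps
            kls.append(float(np.sum(p * np.log(p / q))))
    return float(np.mean(kls))
```

A lower value indicates that the local intensity statistics of the enhanced image match those of the high-quality reference more closely; identical images score zero.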

Citation (APA)
Zhang, L., Portenier, T., Paulus, C., & Goksel, O. (2020). Deep Image Translation for Enhancing Simulated Ultrasound Images. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12437 LNCS, pp. 85–94). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-60334-2_9
