Normal Appearance Autoencoder for Lung Cancer Detection and Segmentation

Abstract

One of the major differences between the training of medical doctors and of machine learning models is that doctors are taught to recognize normal, healthy anatomy first. Knowing the healthy appearance of anatomical structures helps doctors make better judgements when an abnormality appears in an image. In this study, we propose a normal appearance autoencoder (NAA) that removes abnormalities from a diseased image. The autoencoder is trained semi-automatically using a partial convolutional in-painting network that is itself trained on healthy subjects only. The output of the autoencoder is then fed to a segmentation network together with the original input image, i.e. the segmentation network receives both the diseased image and a simulated healthy image in which the lesion has been artificially removed. We hypothesized that, given access to how the abnormal region should look, the segmentation network would perform better than when shown the original slice alone. We tested the proposed network on the LIDC-IDRI dataset for lung cancer detection and segmentation. The preliminary results show that the NAA approach improves segmentation accuracy substantially compared with a conventional U-Net architecture.
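The pipeline can be illustrated with a minimal sketch: an encoder-decoder that maps a possibly diseased slice to a simulated healthy slice, and a segmentation network that receives the original and the reconstructed image stacked as two input channels. The module names, layer sizes, and two-channel concatenation below are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal PyTorch sketch of the NAA-plus-segmentation idea described in the
# abstract. Layer choices and sizes are assumptions made for illustration.
import torch
import torch.nn as nn


class NormalAppearanceAutoencoder(nn.Module):
    """Toy encoder-decoder mapping a (possibly diseased) CT slice to a
    simulated healthy-looking slice. In the paper this is trained
    semi-automatically with targets from a partial-convolution
    in-painting network trained on healthy subjects only."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


class TwoChannelSegmenter(nn.Module):
    """Stand-in for the segmentation U-Net: it takes the original slice and
    the NAA output stacked as two channels and predicts per-pixel lesion
    logits."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, diseased, simulated_healthy):
        return self.net(torch.cat([diseased, simulated_healthy], dim=1))


if __name__ == "__main__":
    slice_ = torch.randn(1, 1, 128, 128)   # diseased CT slice
    naa = NormalAppearanceAutoencoder()
    seg = TwoChannelSegmenter()
    healthy = naa(slice_)                   # abnormality "removed"
    lesion_logits = seg(slice_, healthy)    # segmenter sees both images
    print(lesion_logits.shape)              # torch.Size([1, 1, 128, 128])
```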

Citation (APA)

Astaraki, M., Toma-Dasu, I., Smedby, Ö., & Wang, C. (2019). Normal Appearance Autoencoder for Lung Cancer Detection and Segmentation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11769 LNCS, pp. 249–256). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-32226-7_28
