Lesion Detection with Deep Aggregated 3D Contextual Feature and Auxiliary Information

Abstract

Detecting different kinds of lesions in computed tomography (CT) scans at the same time is a difficult but important task for a computer-aided diagnosis (CADx) system. Compared with methods that detect a single lesion type, our lesion detection method must handle larger intra-class differences. In this work, we present a CT image analysis framework for lesion detection. Our model is developed based on a dense region-based fully convolutional network (Dense R-FCN) using 3D context and is equipped with a dense auxiliary loss (DAL) scheme for end-to-end learning. It fuses shallow, medium, and deep features to meet the needs of detecting lesions of various sizes. Owing to its densely connected structure, it is called Dense R-FCN. Meanwhile, the DAL supervises the intermediate hidden layers in order to maximize the use of the shallow layer information, which benefits the detection results, especially for small lesions. Experimental results on the DeepLesion dataset corroborate the efficacy of our method.
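To make the two ideas named in the abstract concrete, the sketch below shows (a) fusing shallow, medium, and deep 3D feature maps into one multi-scale representation and (b) attaching auxiliary losses to intermediate layers. This is a minimal illustration, not the authors' implementation: the module names, channel sizes, the upsample-and-concatenate fusion, and the voxel-wise classification head standing in for the R-FCN detection head are all assumptions.

```python
# Minimal sketch (PyTorch) of multi-scale 3D feature fusion plus a
# "dense auxiliary loss" on intermediate layers. Illustrative only; the
# real Dense R-FCN uses a region-based detection head, not the simple
# voxel-wise classifier shown here.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv3d_block(in_ch, out_ch, stride=1):
    """3x3x3 conv + norm + ReLU, the basic unit of this toy backbone."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )


class ToyDenseFusion3D(nn.Module):
    """Produces shallow, medium, and deep 3D features and fuses all of them."""

    def __init__(self, in_ch=1, base_ch=16, num_classes=2):
        super().__init__()
        self.stage1 = conv3d_block(in_ch, base_ch)               # shallow
        self.stage2 = conv3d_block(base_ch, base_ch * 2, 2)      # medium
        self.stage3 = conv3d_block(base_ch * 2, base_ch * 4, 2)  # deep

        fused_ch = base_ch + base_ch * 2 + base_ch * 4
        self.head = nn.Conv3d(fused_ch, num_classes, kernel_size=1)

        # Auxiliary heads supervise intermediate layers so shallow-layer
        # information (useful for small lesions) receives a direct gradient.
        self.aux1 = nn.Conv3d(base_ch, num_classes, kernel_size=1)
        self.aux2 = nn.Conv3d(base_ch * 2, num_classes, kernel_size=1)

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)

        # Upsample deeper maps to the shallow resolution and concatenate,
        # so the prediction head sees all scales at once.
        size = f1.shape[2:]
        f2_up = F.interpolate(f2, size=size, mode="trilinear", align_corners=False)
        f3_up = F.interpolate(f3, size=size, mode="trilinear", align_corners=False)
        fused = torch.cat([f1, f2_up, f3_up], dim=1)

        return self.head(fused), (self.aux1(f1), self.aux2(f2))


def dense_auxiliary_loss(main_out, aux_outs, target, aux_weight=0.4):
    """Main loss plus down-weighted losses on each auxiliary head."""
    loss = F.cross_entropy(main_out, target)
    for aux in aux_outs:
        aux_up = F.interpolate(aux, size=target.shape[1:], mode="trilinear",
                               align_corners=False)
        loss = loss + aux_weight * F.cross_entropy(aux_up, target)
    return loss
```

In an R-FCN-style detector, the 1x1x1 classification head above would be replaced by position-sensitive score maps fed by a proposal stage; the sketch only shows how multi-scale fusion and auxiliary supervision of intermediate layers fit together during end-to-end training.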

Citation (APA)

Zhang, H., & Chung, A. C. S. (2019). Lesion Detection with Deep Aggregated 3D Contextual Feature and Auxiliary Information. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11861 LNCS, pp. 45–53). Springer. https://doi.org/10.1007/978-3-030-32692-0_6
