Automatic detection and classification of lesions in medical images remains one of the most important and challenging problems in medical image analysis. In this paper, we present a new multi-task convolutional neural network (CNN) approach for the detection and semantic description of lesions in diagnostic images. The proposed CNN-based architecture is trained to generate and rank rectangular regions of interest (ROIs) surrounding suspicious areas. The highest-scoring candidates are fed into the subsequent network layers, which are trained to generate semantic descriptions of the remaining ROIs. During training, our approach uses rectangular ground-truth boxes and does not require accurately delineated lesion contours, which is a clear advantage for supervised training on large datasets. Our system learns discriminative features that are shared between the detection and description stages. This eliminates the need for hand-crafted features and allows the method to be applied to new modalities and organs with minimal overhead. The proposed approach generates a medical report by estimating standard radiological lexicon descriptors, which form the basis for diagnosis, and should help radiologists understand the diagnostic decisions of a computer-aided diagnosis (CADx) system. We evaluate the proposed method on proprietary and publicly available breast imaging databases and show that it outperforms competing approaches.
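The two-stage pipeline summarized above — scoring candidate ROIs with a detection head, keeping only the top-ranked candidates, and predicting semantic descriptors for those from shared features — can be sketched as follows. This is a minimal illustration with randomly initialized linear layers standing in for the CNN, not the authors' actual network; the names `W_shared`, `w_detect`, `W_describe`, and `detect_and_describe` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared backbone: a single linear map from flattened ROI crops
# (64-dim here) to a shared feature space used by both task heads.
FEAT_DIM, NUM_DESCRIPTORS = 32, 5  # e.g. shape, margin, density, ...
W_shared = rng.standard_normal((64, FEAT_DIM))
w_detect = rng.standard_normal(FEAT_DIM)            # lesion-vs-background score
W_describe = rng.standard_normal((FEAT_DIM, NUM_DESCRIPTORS))

def detect_and_describe(roi_crops, top_k=3):
    """Score candidate ROIs, keep the top_k, and describe only those."""
    feats = np.tanh(roi_crops @ W_shared)           # features shared by both stages
    scores = feats @ w_detect                       # detection (ranking) scores
    keep = np.argsort(scores)[::-1][:top_k]         # highest-scoring candidates
    descriptors = feats[keep] @ W_describe          # semantic description head
    return keep, scores[keep], descriptors

# Toy batch: 10 candidate ROIs, each flattened to a 64-dim crop.
rois = rng.standard_normal((10, 64))
idx, sc, desc = detect_and_describe(rois)
print(idx.shape, desc.shape)  # one descriptor vector per surviving ROI
```

Because the feature extractor is shared, a gradient step on either the detection loss or the description loss updates the same backbone weights, which is the core of the multi-task-loss idea.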
Kisilev, P., Sason, E., Barkan, E., & Hashoul, S. (2016). Medical image description using multi-task-loss CNN. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10008 LNCS, pp. 121–129). Springer Verlag. https://doi.org/10.1007/978-3-319-46976-8_13