Attention-based medical caption generation with image modality classification and clinical concept mapping

Abstract

This paper proposes an attention-based deep learning framework for caption generation from medical images. We further utilize the same framework for clinical concept prediction, formulating the task as sequence-to-sequence learning to improve caption generation. The predicted concept IDs are then mapped to their corresponding terms in a clinical ontology to produce an image caption. We also investigate whether learning to classify images by modality (e.g., CT scan, MRI) can aid in generating more precise captions.
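
To make the described pipeline concrete, the sketch below (not the authors' implementation) shows one way such an architecture can be wired up in PyTorch: a CNN encoder over the image, an attention-based recurrent decoder that emits a sequence of clinical concept IDs, an auxiliary modality-classification head, and a lookup that maps predicted concept IDs to ontology terms to form the caption. All layer sizes, vocabulary sizes, and the toy concept-ID-to-term table are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of an attention-based caption/concept generator with a
# modality head. Sizes and the ONTOLOGY table are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CONCEPTS = 100     # assumed concept-ID vocabulary (0 = <sos>, 1 = <eos>)
NUM_MODALITIES = 4     # e.g. CT, MRI, X-ray, ultrasound (assumed)
HIDDEN = 256

class Encoder(nn.Module):
    """Small CNN that yields a spatial feature grid for attention."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, HIDDEN, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(7),                     # 7x7 feature grid
        )
        self.modality_head = nn.Linear(HIDDEN, NUM_MODALITIES)

    def forward(self, images):
        feats = self.conv(images)                        # (B, H, 7, 7)
        grid = feats.flatten(2).transpose(1, 2)          # (B, 49, H)
        modality_logits = self.modality_head(grid.mean(1))
        return grid, modality_logits

class AttnDecoder(nn.Module):
    """GRU decoder with additive attention over the encoder feature grid."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CONCEPTS, HIDDEN)
        self.attn = nn.Linear(2 * HIDDEN, 1)
        self.gru = nn.GRUCell(2 * HIDDEN, HIDDEN)
        self.out = nn.Linear(HIDDEN, NUM_CONCEPTS)

    def step(self, prev_id, hidden, grid):
        emb = self.embed(prev_id)                                    # (B, H)
        scores = self.attn(torch.cat(
            [grid, hidden.unsqueeze(1).expand_as(grid)], dim=-1))    # (B, 49, 1)
        weights = F.softmax(scores, dim=1)
        context = (weights * grid).sum(1)                            # (B, H)
        hidden = self.gru(torch.cat([emb, context], dim=-1), hidden)
        return self.out(hidden), hidden

# Hypothetical concept-ID -> term table standing in for the clinical-ontology
# lookup described in the abstract.
ONTOLOGY = {2: "computed tomography", 3: "thorax", 4: "pulmonary nodule"}

def generate_caption(encoder, decoder, image, max_len=10):
    grid, modality_logits = encoder(image)
    hidden = grid.mean(1)                  # initialize decoder state from image
    prev = torch.zeros(image.size(0), dtype=torch.long)   # <sos> token
    concept_ids = []
    for _ in range(max_len):
        logits, hidden = decoder.step(prev, hidden, grid)
        prev = logits.argmax(-1)
        if prev.item() == 1:               # stop at <eos>
            break
        concept_ids.append(prev.item())
    terms = [ONTOLOGY.get(i, f"concept_{i}") for i in concept_ids]
    return ", ".join(terms), modality_logits.argmax(-1).item()

if __name__ == "__main__":
    torch.manual_seed(0)
    enc, dec = Encoder(), AttnDecoder()
    caption, modality = generate_caption(enc, dec, torch.randn(1, 3, 224, 224))
    print("predicted modality id:", modality)
    print("caption from mapped concepts:", caption)
```

The modality logits are shown here as an auxiliary output of the encoder; how exactly the modality signal is combined with caption generation in the paper is not specified in this abstract.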

Citation (APA)

Hasan, S. A., Ling, Y., Liu, J., Sreenivasan, R., Anand, S., Arora, T. R., … Farri, O. (2018). Attention-based medical caption generation with image modality classification and clinical concept mapping. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11018 LNCS, pp. 224–230). Springer Verlag. https://doi.org/10.1007/978-3-319-98932-7_21
