Automated medical image modality recognition by fusion of visual and text information

Abstract

In this work, we present a framework for medical image modality recognition based on the fusion of visual and text classification methods. Experiments are performed on the public ImageCLEF 2013 medical image modality dataset, which provides figure images and the associated full-text PubMed articles as components of the benchmark. The visual subsystem builds ensemble models over a broad set of visual features using a multi-stage learning approach that optimizes feature selection per class while still using all available data for training. The text subsystem uses a pseudo-probabilistic scoring method based on the detection of suggestive patterns, analyzing both figure captions and mentions of the figures in the main text. The proposed system yields state-of-the-art performance in all three categories: visual-only (82.2%), text-only (69.6%), and fused (83.5%).
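
The abstract names the text subsystem's technique but not its pattern set or scoring rule. The sketch below illustrates the general idea of pseudo-probabilistic scoring from suggestive patterns in captions and figure mentions; the modality labels, regular expressions, and hit-count normalization are illustrative assumptions, not the paper's implementation.

```python
import re
from collections import defaultdict

# Illustrative suggestive patterns per modality class. The paper's actual
# pattern set is not given in the abstract; these are assumptions.
PATTERNS = {
    "CT":    [r"\bcomputed tomography\b", r"\bCT\b"],
    "MRI":   [r"\bmagnetic resonance\b", r"\bMRI?\b", r"\bT[12]-weighted\b"],
    "X-ray": [r"\bx-?ray\b", r"\bradiograph"],
}

def text_scores(caption, mentions):
    """Pseudo-probabilistic class scores from pattern hits in the figure
    caption and in-text mentions of the figure, normalized to sum to 1."""
    hits = defaultdict(float)
    for passage in [caption, *mentions]:
        for label, patterns in PATTERNS.items():
            for pattern in patterns:
                if re.search(pattern, passage, flags=re.IGNORECASE):
                    hits[label] += 1.0
    total = sum(hits.values())
    if total == 0:  # no pattern fired: fall back to a uniform distribution
        return {label: 1.0 / len(PATTERNS) for label in PATTERNS}
    return {label: hits[label] / total for label in PATTERNS}

print(text_scores("Axial CT of the abdomen.", ["Figure 2 shows the CT scan."]))
```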
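
The fusion step is likewise only named, not specified. A simple baseline consistent with the description is a weighted late fusion of per-class scores from the two subsystems; the convex-combination rule and the weight `alpha` below are assumed for illustration.

```python
# Weighted late fusion of visual and text per-class scores. The fusion
# rule and alpha are illustrative assumptions; the abstract does not
# specify how the two subsystems are combined.
def fuse(visual, text, alpha=0.7):
    """Convex combination of per-class scores; alpha favors the stronger
    visual subsystem."""
    labels = set(visual) | set(text)
    return {c: alpha * visual.get(c, 0.0) + (1.0 - alpha) * text.get(c, 0.0)
            for c in labels}

visual = {"CT": 0.6, "MRI": 0.3, "X-ray": 0.1}  # placeholder visual-ensemble scores
text = {"CT": 1.0, "MRI": 0.0, "X-ray": 0.0}    # e.g., from the pattern scorer above
fused = fuse(visual, text)
print(max(fused, key=fused.get))  # -> "CT"
```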

Citation (APA)

Codella, N., Connell, J., Pankanti, S., Merler, M., & Smith, J. R. (2014). Automated medical image modality recognition by fusion of visual and text information. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8674 LNCS, pp. 487–495). Springer Verlag. https://doi.org/10.1007/978-3-319-10470-6_61
