Biomedical image retrieval using multimodal context and concept feature spaces


Abstract

This paper presents a unified medical image retrieval method that integrates visual features and text keywords through multimodal classification and filtering. For content-based image search, concepts derived from visual features are modeled using support vector machine (SVM)-based classification of patches extracted from local image regions. Text keywords from the associated metadata provide the context and are indexed using the vector space model of information retrieval. The concept and context vectors are combined and used to train a global-level SVM classifier for image modality (e.g., CT, MR, x-ray) detection. The probabilistic outputs of this modality categorization are then used to filter images so that the search is performed only on a candidate subset. An evaluation on the ImageCLEFmed 2010 dataset of 77,000 images, XML annotations, and topics yields a mean average precision (MAP) score of 0.1125, demonstrating the effectiveness and efficiency of the proposed multimodal framework compared to using a single modality alone or no classification information. © 2012 Springer-Verlag.
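The pipeline described above — text context indexed with the vector space model, concept and context vectors fused, and modality probabilities used to prune the candidate set — can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the TF-IDF weighting, the fusion weight `alpha`, the `modality_probs` field, and the 0.5 threshold are all hypothetical choices standing in for the paper's trained SVM components.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Vector space model: TF-IDF vectors for the text context (metadata keywords)."""
    vocab = sorted({t for d in docs for t in d})
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    n = len(docs)
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append([tf[t] * math.log(n / df[t]) if t in tf else 0.0
                     for t in vocab])
    return vocab, vecs

def combine(concept_vec, context_vec, alpha=0.5):
    """Fuse the visual concept vector and text context vector by weighted
    concatenation (alpha is an assumed fusion weight, not from the paper)."""
    return [alpha * x for x in concept_vec] + [(1 - alpha) * x for x in context_vec]

def filter_by_modality(images, query_modality, threshold=0.5):
    """Keep only images whose predicted modality probability (here, a stand-in
    for the SVM's probabilistic output) passes the threshold, so similarity
    search runs on a smaller candidate subset."""
    return [img for img in images
            if img["modality_probs"].get(query_modality, 0.0) >= threshold]

# Toy corpus: two annotated images with made-up modality probabilities.
docs = [["chest", "xray", "pneumonia"], ["brain", "mr", "tumor"]]
vocab, context_vecs = tfidf_vectors(docs)
images = [
    {"id": "img1", "modality_probs": {"x-ray": 0.9, "MR": 0.1}},
    {"id": "img2", "modality_probs": {"x-ray": 0.2, "MR": 0.8}},
]
candidates = filter_by_modality(images, "x-ray", threshold=0.5)
print([c["id"] for c in candidates])  # → ['img1']
```

In the paper this filtering is what makes the framework efficient: content-based similarity is computed only over images whose predicted modality matches the query, rather than over all 77,000 images.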

Citation (APA)

Rahman, M. M., Antani, S. K., Fushman, D. D., & Thoma, G. R. (2012). Biomedical image retrieval using multimodal context and concept feature spaces. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7075 LNCS, pp. 24–35). https://doi.org/10.1007/978-3-642-28460-1_3
