Learning to re-rank medical images using a Bayesian network-based thesaurus

Abstract

In this paper, we argue that representing queries and images with specific medical features helps bridge the gap between the user's information need and the retrieved images. Queries can be classified into three categories: textual, visual, and combined. In this work, we present a list of specific medical features, such as image modality and image dimensionality, and exploit them in a new medical image re-ranking method based on a Bayesian network. Using a learning algorithm, we construct a Bayesian network that represents the relationships among the specific features appearing in a given image collection; this network is then treated as a thesaurus specific to that collection. The relevance of an image to a given query is obtained through an inference process over the Bayesian network. Finally, the images are re-ranked by combining their initial scores with the new scores. Experiments performed on the Medical ImageCLEF datasets from 2009 to 2012 show that the proposed model significantly improves image retrieval performance compared with the BM25 model.
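The abstract describes re-ranking images by combining each image's initial retrieval score (e.g. from BM25) with a new score inferred from the Bayesian-network thesaurus. The sketch below illustrates one plausible form of that final combination step only; the `rerank` function, the linear interpolation weight `alpha`, and the min-max normalization are assumptions made for illustration and are not taken from the paper.

```python
from typing import Dict, List, Tuple


def rerank(initial_scores: Dict[str, float],
           bn_scores: Dict[str, float],
           alpha: float = 0.5) -> List[Tuple[str, float]]:
    """Combine an initial retrieval score with a Bayesian-network-derived
    score for each image and return the images sorted by the combined score.

    `alpha` is a hypothetical interpolation weight; the abstract does not
    specify how the two scores are combined, so a linear mixture of
    min-max normalized scores is assumed here purely as a sketch.
    """
    def normalize(scores: Dict[str, float]) -> Dict[str, float]:
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0  # avoid division by zero when all scores are equal
        return {k: (v - lo) / span for k, v in scores.items()}

    init_n = normalize(initial_scores)
    bn_n = normalize(bn_scores)
    combined = {img: alpha * init_n[img] + (1 - alpha) * bn_n.get(img, 0.0)
                for img in init_n}
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    # Hypothetical scores for three images: BM25 from the initial run,
    # and relevance probabilities inferred from the Bayesian network.
    bm25 = {"img_001": 12.3, "img_002": 9.8, "img_003": 7.1}
    bn = {"img_001": 0.42, "img_002": 0.91, "img_003": 0.15}
    print(rerank(bm25, bn))
```

The construction of the Bayesian network itself (learning its structure from the specific medical features in the collection and running inference for a query) is not reproduced here, as the abstract does not give enough detail to sketch it faithfully.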

Citation (APA)

Ayadi, H., Khemakhem, M. T., Huang, J. X., Daoud, M., & Jemaa, M. B. (2017). Learning to re-rank medical images using a Bayesian network-based thesaurus. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10193 LNCS, pp. 160–172). Springer Verlag. https://doi.org/10.1007/978-3-319-56608-5_13
