Bridging the semantic gap in image search via visual semantic descriptors by integrating text and visual features

Abstract

To facilitate access to the enormous and ever-growing number of images on the web, existing image search engines use various re-ranking methods to improve the quality of search results. These engines retrieve results based on the keyword provided by the user. A major challenge is that the query keyword alone cannot correlate the similarities of low-level visual features with an image's high-level semantic meaning, which induces a semantic gap. The proposed image re-ranking method identifies the visual semantic descriptors associated with different images and then re-ranks the images by comparing their descriptors. Another limitation of current systems is that duplicate images sometimes appear among the similar images, which reduces search diversity. The proposed work overcomes this limitation through perceptual hashing. Improved re-ranking results have been obtained on a real-world image dataset collected from a commercial search engine.
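The abstract does not specify which perceptual-hash variant is used for duplicate removal; a common choice is a difference hash (dHash), where near-duplicates are detected by comparing hashes with a small Hamming-distance threshold. The sketch below is illustrative only, with hypothetical function names and a hand-picked threshold:

```python
# Illustrative perceptual-hash (dHash) sketch for near-duplicate detection.
# The paper's exact hashing scheme is not stated; this is a common
# difference-hash variant shown purely for illustration.

def dhash(pixels):
    """Compute a difference hash from a 2D grid of grayscale values.

    Each bit records whether a pixel is brighter than its right neighbor,
    so the hash captures the image's gradient structure, which survives
    rescaling and mild compression.
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

def is_near_duplicate(pixels_a, pixels_b, threshold=4):
    """Treat two images as duplicates if their hashes differ in few bits."""
    return hamming(dhash(pixels_a), dhash(pixels_b)) <= threshold

# Example: an image and a uniformly brightened copy hash identically,
# since brightening preserves every left/right brightness comparison.
img = [[10, 40, 20, 60], [80, 30, 50, 10], [5, 90, 70, 25]]
brightened = [[v + 3 for v in row] for row in img]
print(is_near_duplicate(img, brightened))  # True
```

In a full pipeline, images would first be resized to a small fixed grid (e.g. 9x8) and converted to grayscale before hashing, so that hashes are comparable across resolutions.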

Citation (APA)

Lekshmi, V. L., & John, A. (2016). Bridging the semantic gap in image search via visual semantic descriptors by integrating text and visual features. In Advances in Intelligent Systems and Computing (Vol. 412, pp. 207–215). Springer Verlag. https://doi.org/10.1007/978-981-10-0251-9_21
