Experimenting a visual attention model in the context of CBIR systems

ISSN: 1613-0073


Many novel applications in the field of object recognition and pose estimation rely on local invariant features extracted from selected keypoints of the images. Such keypoints usually lie on high-contrast regions of the image, such as object edges. However, the visual saliency of those regions is not considered by state-of-the-art detection algorithms, which assume the user is interested in the whole image. Moreover, the most common approaches discard all the color information by limiting their analysis to monochromatic versions of the input images. In this paper we present the experimental results of the application of a biologically-inspired visual attention model to the problem of local feature selection in landmark and object recognition tasks. The model uses color information and restricts the matching between the images to the areas showing a strong saliency. The results show that the approach improves the accuracy of the classifier in the object recognition task and preserves good accuracy in the landmark recognition task when a high percentage of visual features is filtered out. In both cases, the reduction of the average number of local features results in high efficiency gains during the search phase, which typically requires costly searches of candidate images for matches and geometric consistency checks.
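The abstract does not detail the attention model itself, but the core idea it describes — discarding detector keypoints that fall outside salient image regions — can be sketched in a few lines. The snippet below is an illustrative stand-in, not the authors' method: it uses a simple spectral-residual saliency map (in the style of Hou & Zhang) computed with NumPy, and all function names and the `(row, col)` keypoint representation are assumptions for the sake of the example.

```python
import numpy as np

def spectral_residual_saliency(gray):
    """Illustrative spectral-residual saliency map for a 2-D float image.

    A stand-in for the biologically-inspired attention model discussed in
    the paper (which, unlike this sketch, also exploits color information).
    """
    f = np.fft.fft2(gray)
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    # Smooth the log-amplitude spectrum with a 3x3 box filter.
    k = 3
    pad = np.pad(log_amp, k // 2, mode="edge")
    h, w = log_amp.shape
    smooth = sum(
        pad[i:i + h, j:j + w] for i in range(k) for j in range(k)
    ) / (k * k)
    # The "residual" spectrum highlights statistically unusual structure.
    residual = log_amp - smooth
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    # Normalize to [0, 1] so a single threshold is meaningful.
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)

def filter_keypoints(keypoints, saliency, threshold=0.5):
    """Keep only keypoints lying on sufficiently salient pixels.

    `keypoints` is a list of (row, col) tuples; a real CBIR pipeline would
    use detector-specific structures (e.g. SIFT keypoints with descriptors).
    """
    return [(r, c) for r, c in keypoints if saliency[r, c] >= threshold]
```

Raising `threshold` filters out a larger fraction of features, which is the trade-off the abstract reports on: fewer local features per image means cheaper matching and geometric-consistency checks during search, at some cost in recognition accuracy.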




Cardillo, F. A., Amato, G., & Falchi, F. (2013). Experimenting a visual attention model in the context of CBIR systems. In CEUR Workshop Proceedings (Vol. 964, pp. 45–56). CEUR-WS.
