Context-Aware Keypoint Extraction for Robust Image Representation

Abstract

Tasks such as image retrieval, scene classification, and object recognition often make use of local image features, which are intended to provide a reliable and efficient image representation. However, local feature extractors are designed to respond to a limited set of structures (e.g., blobs or corners), which might not suffice to capture the most relevant image content. We discuss the lack of coverage of relevant image information by local features, as well as the often neglected complementarity between sets of features. In response, we propose an information-theoretic keypoint extraction method that responds to complementary local structures and is aware of the image composition. We empirically assess the validity of the method by analysing the completeness, complementarity, and repeatability of context-aware features on standard datasets. In light of these results, we discuss the applicability of the method.
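The abstract gives no implementation details, but a minimal sketch may help illustrate the general information-theoretic idea it refers to: scoring local image patches by the Shannon entropy of their intensity distribution and keeping the most informative ones. This is not the authors' algorithm; the function name, its parameters (`patch`, `n_bins`, `top_k`), and the toy input image are illustrative assumptions only.

```python
import numpy as np

def local_entropy_keypoints(image, patch=16, n_bins=32, top_k=200):
    """Toy entropy-based keypoint scoring (illustrative sketch only).

    Scores each non-overlapping patch by the Shannon entropy of its
    grey-level histogram and returns the centres of the top_k patches.
    `image` is a 2-D array with values in [0, 255].
    """
    h, w = image.shape
    scores = []
    for y in range(0, h - patch, patch):
        for x in range(0, w - patch, patch):
            block = image[y:y + patch, x:x + patch]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, 255))
            p = hist / hist.sum()
            p = p[p > 0]
            entropy = -np.sum(p * np.log2(p))  # Shannon entropy in bits
            scores.append((entropy, y + patch // 2, x + patch // 2))
    scores.sort(reverse=True)  # most informative patches first
    return [(y, x) for _, y, x in scores[:top_k]]

if __name__ == "__main__":
    # A random test image stands in for real data.
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(256, 256)).astype(np.float64)
    kps = local_entropy_keypoints(img)
    print(f"{len(kps)} keypoints, first: {kps[0]}")
```

In the paper's setting, such a saliency measure would be combined with responses from complementary detectors (e.g., blob and corner extractors) so that the selected keypoints cover the image content more completely; the sketch above only conveys the entropy-scoring ingredient.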

Cite

Martins, P., Carvalho, P., & Gatta, C. (2012). Context-aware keypoint extraction for robust image representation. In BMVC 2012 - Electronic Proceedings of the British Machine Vision Conference 2012. British Machine Vision Association, BMVA. https://doi.org/10.5244/C.26.100
