Image correspondences matching using multiple features fusion

Citations: 2 · Mendeley readers: 10

This article is free to access.

Abstract

In this paper, we present a novel framework that significantly increases the accuracy of correspondence matching between two images under various image transformations. We first define a retina-inspired patch structure that mimics the topology of the human retina, and use highly discriminative convolutional neural network (CNN) features to represent those patches. We then employ conventional salient-point methods to locate salient points, and finally fuse the local descriptor of each salient point with the CNN feature of the local patch to which the salient point belongs. The evaluation results show the effectiveness of the proposed multiple features fusion (MFF) framework and that it improves on the accuracy of leading approaches on two popular benchmark datasets.
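The fusion step described in the abstract — combining a salient point's local descriptor with the CNN feature of the patch containing it — can be sketched roughly as follows. This is an illustrative, hedged reconstruction, not the authors' code: the function names, the L2-normalize-and-concatenate scheme, and the `alpha` weighting are all assumptions, and a real pipeline would obtain the descriptors from a detector such as SIFT and the patch features from a pretrained CNN.

```python
# Illustrative sketch of descriptor fusion (assumed scheme, not the paper's
# exact method): each salient point carries a local descriptor plus the CNN
# feature of the retina-inspired patch it falls in; both vectors are
# L2-normalized and concatenated before nearest-neighbour matching.
import math

def l2_normalize(v):
    """Scale a vector to unit Euclidean norm (no-op on the zero vector)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n > 0 else list(v)

def fuse(local_desc, patch_feat, alpha=0.5):
    """Fuse a local descriptor with its patch's CNN feature.

    `alpha` balances the two modalities; the 0.5 default is an assumption,
    not a value taken from the paper.
    """
    a = [alpha * x for x in l2_normalize(local_desc)]
    b = [(1.0 - alpha) * x for x in l2_normalize(patch_feat)]
    return a + b

def match(descs1, descs2):
    """Match each fused descriptor in descs1 to its nearest neighbour
    in descs2 by squared Euclidean distance."""
    matches = []
    for i, d1 in enumerate(descs1):
        dists = [sum((x - y) ** 2 for x, y in zip(d1, d2)) for d2 in descs2]
        matches.append((i, min(range(len(dists)), key=dists.__getitem__)))
    return matches
```

In use, one would detect salient points in both images, compute the two feature types per point, fuse them, and run `match` on the resulting descriptor lists; correspondences whose combined distance is small in both modalities survive, which is the intuition behind fusing complementary features.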

Citation (APA)

Wu, S., & Lew, M. S. (2016). Image correspondences matching using multiple features fusion. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9915 LNCS, pp. 737–746). Springer Verlag. https://doi.org/10.1007/978-3-319-49409-8_61
