H-Net: Neural network for cross-domain image patch matching

Abstract

Describing the same scene with different imaging styles, or rendering images from its 3D model, produces images from different domains. Such images tend to exhibit a domain gap and differing local appearances, which constitute the main challenge in cross-domain image patch matching. In this paper, we propose to incorporate an autoencoder into a Siamese network, yielding a model named H-Net, whose structure resembles the letter H. H-Net achieves state-of-the-art performance on cross-domain image patch matching. Furthermore, we improve H-Net to H-Net++, which extracts invariant feature descriptors from cross-domain image patches and achieves state-of-the-art performance through feature retrieval in Euclidean space. As there is no benchmark dataset of cross-domain images, we built one consisting of camera images, images rendered from a UAV 3D model, and images generated by the CycleGAN algorithm. Experiments show that the proposed H-Net and H-Net++ outperform existing algorithms. Our code and cross-domain image dataset are available at https://github.com/Xylon-Sean/H-Net.
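
The authors' released implementation lives in the linked repository; the following is only a minimal PyTorch sketch of the idea the abstract describes: two weight-sharing encoder branches (the Siamese part) produce patch descriptors compared by Euclidean distance, while decoder branches (the autoencoder part) reconstruct each patch from its descriptor, giving the data flow its H shape. The patch size (64x64 grayscale), layer widths, latent dimension, contrastive margin, and loss weighting are illustrative assumptions, not values taken from the paper.

```python
# Minimal H-Net-style sketch (not the authors' code): Siamese encoders +
# autoencoder decoders. Assumes 64x64 grayscale patches scaled to [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class HNetSketch(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        # One encoder shared by both branches (Siamese weight sharing).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
        # Decoder reconstructs a patch from its descriptor (autoencoder part).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, patch_a, patch_b):
        # Both domains pass through the same encoder; descriptors are
        # L2-normalized so Euclidean distance is a meaningful similarity.
        desc_a = F.normalize(self.encoder(patch_a), dim=1)
        desc_b = F.normalize(self.encoder(patch_b), dim=1)
        return desc_a, desc_b, self.decoder(desc_a), self.decoder(desc_b)

def loss_fn(desc_a, desc_b, rec_a, rec_b, patch_a, patch_b, label, margin=1.0):
    # Contrastive loss on Euclidean descriptor distance (label 1 = matching
    # pair) plus reconstruction terms; the equal weighting is an assumption.
    dist = F.pairwise_distance(desc_a, desc_b)
    match = label * dist.pow(2) + (1 - label) * F.relu(margin - dist).pow(2)
    recon = F.mse_loss(rec_a, patch_a) + F.mse_loss(rec_b, patch_b)
    return match.mean() + recon

if __name__ == "__main__":
    net = HNetSketch()
    d_a, d_b, _, _ = net(torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64))
    # Because descriptors live in Euclidean space, test-time matching
    # reduces to nearest-neighbor retrieval over descriptor vectors.
    nn_idx = torch.cdist(d_a, d_b).argmin(dim=1)
    print(nn_idx)
```

Because the descriptors are compared with plain Euclidean distance, matching at test time reduces to nearest-neighbor retrieval over descriptor vectors, which is the retrieval setting the abstract describes for H-Net++.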

Citation (APA)

Liu, W., Shen, X., Wang, C., Zhang, Z., Wen, C., & Li, J. (2018). H-Net: Neural network for cross-domain image patch matching. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 856–863). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/119
