PlaneMatch: patch coplanarity prediction for robust RGB-D reconstruction

Abstract

We introduce a novel RGB-D patch descriptor designed for detecting coplanar surfaces in SLAM reconstruction. The core of our method is a deep convolutional neural network that takes in RGB, depth, and normal information of a planar patch in an image and outputs a descriptor that can be used to find coplanar patches in other images. We train the network on 10 million triplets of coplanar and non-coplanar patches and evaluate it on a new coplanarity benchmark created from commodity RGB-D scans. Experiments show that our learned descriptor outperforms alternatives extended for this new task by a significant margin. In addition, we demonstrate the benefits of coplanarity matching in a robust RGB-D reconstruction formulation. We find that coplanarity constraints detected with our method are sufficient to achieve reconstruction results comparable to state-of-the-art frameworks on most scenes, and that combining them with traditional keypoint matching outperforms other methods on established benchmarks.
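To make the described training setup concrete, below is a minimal sketch (not the authors' implementation) of a patch descriptor network trained with a triplet loss in PyTorch. The 7-channel input stacks RGB, depth, and normals for a patch; the layer sizes, descriptor dimension, margin, and patch size are illustrative assumptions, and the random tensors stand in for real coplanar / non-coplanar patch triplets sampled from registered RGB-D scans.

```python
# Hypothetical sketch of a coplanarity descriptor trained with a triplet loss.
# Channel counts, layer sizes, and the margin are illustrative assumptions,
# not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDescriptor(nn.Module):
    """Maps an RGB + depth + normal patch (7 channels) to a unit-length descriptor."""
    def __init__(self, descriptor_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(7, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, descriptor_dim)

    def forward(self, patch):
        x = self.features(patch).flatten(1)
        # L2-normalize so descriptor distances are directly comparable.
        return F.normalize(self.fc(x), dim=1)

# Triplet training step: pull coplanar patches together, push non-coplanar apart.
net = PatchDescriptor()
criterion = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

anchor = torch.randn(8, 7, 64, 64)    # stand-ins for real RGB-D-normal patches
positive = torch.randn(8, 7, 64, 64)  # coplanar with the anchor
negative = torch.randn(8, 7, 64, 64)  # not coplanar with the anchor

loss = criterion(net(anchor), net(positive), net(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At test time, descriptors computed this way could be compared by Euclidean distance to propose coplanar patch matches across frames, which would then feed into a reconstruction formulation as constraints.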

Cite (APA)

Shi, Y., Xu, K., Nießner, M., Rusinkiewicz, S., & Funkhouser, T. (2018). PlaneMatch: Patch coplanarity prediction for robust RGB-D reconstruction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11212 LNCS, pp. 767–784). Springer Verlag. https://doi.org/10.1007/978-3-030-01237-3_46
