Predicting the Next Best View for 3D Mesh Refinement


Abstract

3D reconstruction is a core task in many applications such as robot navigation or site inspection. Finding the best poses from which to capture part of the scene is one of the most challenging topics, known as Next Best View. Recently, many volumetric methods have been proposed; they choose the Next Best View by reasoning in a 3D voxelized space and by finding the pose that minimizes the uncertainty encoded in the voxels. Such methods are effective, but they do not scale well, since the underlying representation requires a huge amount of memory. In this paper we propose a novel mesh-based approach that focuses the Next Best View on the worst reconstructed region of the environment. We define a photo-consistency index to evaluate model accuracy, and an energy function over the worst regions of the mesh that takes into account the mutual parallax with respect to the previous cameras, the angle of incidence of the viewing ray to the surface, and the visibility of the region. We tested our approach on a well-known dataset and achieved state-of-the-art results.
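To make the abstract's energy function concrete, the following is a minimal sketch of how a candidate camera pose might be scored against one poorly reconstructed mesh region. It is an illustrative assumption, not the paper's exact formulation: the linear combination, the weights, and all function and variable names (e.g. `view_energy`, `visible_fraction`) are hypothetical.

```python
import numpy as np

def view_energy(candidate_pos, region_center, region_normal,
                prev_cam_positions, visible_fraction,
                w_parallax=1.0, w_incidence=1.0, w_visibility=1.0):
    """Score a candidate camera position against one poorly
    reconstructed mesh region. Higher is better.

    Illustrative sketch: the terms mirror the three cues named in
    the abstract (parallax, incidence angle, visibility), but the
    weighting and combination are assumptions.
    """
    # Viewing ray from the region toward the candidate camera.
    view_dir = candidate_pos - region_center
    view_dir = view_dir / np.linalg.norm(view_dir)

    # Mutual parallax: reward baselines that triangulate well
    # against the previous cameras (sin peaks near 90 degrees).
    parallax_score = 0.0
    for prev in prev_cam_positions:
        prev_dir = prev - region_center
        prev_dir = prev_dir / np.linalg.norm(prev_dir)
        angle = np.arccos(np.clip(np.dot(view_dir, prev_dir), -1.0, 1.0))
        parallax_score = max(parallax_score, np.sin(angle))

    # Angle of incidence: prefer rays close to the surface normal.
    incidence_score = max(0.0, float(np.dot(view_dir, region_normal)))

    # Visibility: fraction of the region unoccluded from this pose
    # (assumed precomputed, e.g. by ray casting against the mesh).
    return (w_parallax * parallax_score
            + w_incidence * incidence_score
            + w_visibility * visible_fraction)


# Pick the candidate pose that maximizes the energy for the worst
# region (which the paper identifies via its photo-consistency index).
candidates = [np.array([2.0, 0.0, 1.0]), np.array([0.0, 2.0, 1.5])]
region_c = np.array([0.0, 0.0, 0.0])
region_n = np.array([0.0, 0.0, 1.0])
prev_cams = [np.array([1.5, 1.5, 1.0])]
best = max(candidates,
           key=lambda c: view_energy(c, region_c, region_n, prev_cams,
                                     visible_fraction=0.8))
print("next best view:", best)
```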

Citation (APA)

Morreale, L., Romanoni, A., & Matteucci, M. (2019). Predicting the next best view for 3D mesh refinement. In Advances in Intelligent Systems and Computing (Vol. 867, pp. 760–772). Springer. https://doi.org/10.1007/978-3-030-01370-7_59
