3D Reconstruction with Multi-view Texture Mapping

Abstract

In this paper, a novel 3D reconstruction method with multi-view texture mapping based on the Kinect 2 is proposed. The camera poses of all chosen key frames are optimized for photometric consistency, so that the projections of mesh vertices into different views align more closely. A small-range translation search with limited computation is added to this optimization. New forms of the data term and smoothness term in the Markov Random Field (MRF) objective function are presented. Outlier images are rejected before view selection, and Poisson blending is applied at the end. Experimental results show that our method achieves a high-quality 3D model with high-fidelity texture.
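The MRF view-selection step described above can be illustrated with a minimal sketch. The abstract does not give the paper's exact data and smoothness terms or its solver, so the following assumes a generic per-face data cost, a constant Potts-style smoothness penalty, and iterated conditional modes (ICM) as a simple approximate inference method; all function and variable names are hypothetical.

```python
# Hypothetical sketch of MRF-based view selection for texture mapping.
# Each mesh face is labeled with one input view, trading off a data term
# (how well that view sees the face) against a smoothness term that
# penalizes neighboring faces textured from different views.
# Inference uses iterated conditional modes (ICM), a simple local
# optimizer; the paper's actual terms and solver are not specified here.

def select_views(data_cost, neighbors, smooth_weight=1.0, iters=10):
    """data_cost[f][v]: cost of texturing face f from view v.
    neighbors[f]: indices of faces adjacent to face f.
    Returns one view label per face."""
    n_faces = len(data_cost)
    n_views = len(data_cost[0])
    # Initialize each face with its best data term alone.
    labels = [min(range(n_views), key=lambda v: data_cost[f][v])
              for f in range(n_faces)]
    for _ in range(iters):
        changed = False
        for f in range(n_faces):
            def total_cost(v):
                # Potts smoothness: pay smooth_weight per neighbor
                # currently assigned a different view.
                pairwise = sum(smooth_weight
                               for g in neighbors[f] if labels[g] != v)
                return data_cost[f][v] + pairwise
            best = min(range(n_views), key=total_cost)
            if best != labels[f]:
                labels[f] = best
                changed = True
        if not changed:  # converged to a local minimum
            break
    return labels
```

For example, with two adjacent faces where the second face only slightly prefers a different view, the smoothness term pulls both onto the same view, avoiding a texture seam between them.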

Citation (APA)

Ye, X., Wang, L., Li, D., & Zhang, M. (2017). 3D Reconstruction with Multi-view Texture Mapping. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10636 LNCS, pp. 198–207). Springer Verlag. https://doi.org/10.1007/978-3-319-70090-8_21
