A next best view method based on self-occlusion information in depth images for moving object

Abstract

Determining the next best view of a camera for a moving object has wide application in dynamic-object scenarios such as unmanned aerial vehicles and automatic recognition. The major challenge of this problem is determining the next best view while the visual object is moving. In this work, a novel next best view method based on self-occlusion information in depth images of a moving object is proposed. Firstly, a depth image of the moving object is acquired and self-occlusion detection is performed on the acquired image. On this basis, the self-occlusion regions are modeled using space quadrilateral subdivision. Secondly, based on the modeling result, a method built on the idea of mean shift is proposed to compute the self-occlusion avoidance result for the current object. Thirdly, a second depth image of the moving object is acquired, feature points in the two images are detected and matched, and 3D motion estimation is performed from the 3D coordinates of the matched feature points. Finally, the next best view is determined by combining the self-occlusion avoidance result with the 3D motion estimation. Experimental results validate that the proposed method is feasible and achieves relatively high real-time performance.
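The abstract only outlines the pipeline, so the following Python sketch is a rough illustration rather than the authors' implementation. It assumes the 3D motion estimation step can be solved with the standard SVD-based (Kabsch) least-squares alignment of matched 3D feature points, and it stands in for the mean-shift-style self-occlusion avoidance with a simple averaged-offset direction; the function names, the avoidance heuristic, and the final combination step are all assumptions made for illustration.

```python
import numpy as np

def estimate_rigid_motion(points_prev, points_curr):
    """Estimate rotation R and translation t with points_curr ~ R @ points_prev + t
    from matched 3D feature points (standard SVD-based least-squares solution)."""
    centroid_prev = points_prev.mean(axis=0)
    centroid_curr = points_curr.mean(axis=0)
    P = points_prev - centroid_prev            # centered points from the first depth image
    Q = points_curr - centroid_curr            # centered points from the second depth image
    H = P.T @ Q                                # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = centroid_curr - R @ centroid_prev
    return R, t

def next_best_view_direction(view_dir, occluded_points, object_center, R):
    """Hypothetical combination step: push the viewing direction away from the
    mean of the self-occluded region (a crude stand-in for the mean-shift update),
    then rotate it by the estimated object motion so the chosen view remains
    valid as the object keeps moving."""
    occlusion_dir = (occluded_points - object_center).mean(axis=0)
    avoidance_dir = view_dir - occlusion_dir / (np.linalg.norm(occlusion_dir) + 1e-9)
    avoidance_dir /= np.linalg.norm(avoidance_dir)
    return R @ avoidance_dir                   # apply the estimated object rotation

if __name__ == "__main__":
    # Toy usage: matched 3D points from two depth frames of a moving object.
    rng = np.random.default_rng(0)
    pts_prev = rng.uniform(-1, 1, size=(50, 3))
    true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
    true_R *= np.sign(np.linalg.det(true_R))            # make it a proper rotation
    pts_curr = pts_prev @ true_R.T + np.array([0.1, 0.0, 0.05])
    R, t = estimate_rigid_motion(pts_prev, pts_curr)
    view = next_best_view_direction(np.array([0.0, 0.0, 1.0]),
                                    occluded_points=pts_prev[:10],
                                    object_center=pts_prev.mean(axis=0),
                                    R=R)
    print("estimated translation:", np.round(t, 3))
    print("suggested view direction:", np.round(view, 3))
```

In the paper itself, the occluded regions are modeled with space quadrilateral subdivision and the avoidance result is derived with mean shift; the averaged-offset heuristic above only shows where those results would enter the pipeline.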

Citation (APA)

Zhang, S., Li, X., He, H., & Miao, Y. (2018). A next best view method based on self-occlusion information in depth images for moving object. Multimedia Tools and Applications, 77(8), 9753–9777. https://doi.org/10.1007/s11042-018-5822-y
