Mosaicking of Unmanned Aerial Vehicle imagery in the absence of camera poses

Abstract

The mosaicking of Unmanned Aerial Vehicle (UAV) imagery usually requires information from additional sensors, such as the Global Positioning System (GPS) and an Inertial Measurement Unit (IMU), to facilitate direct orientation, or 3D reconstruction approaches (e.g., structure-from-motion) to recover the camera poses. In this paper, we propose a novel mosaicking method for UAV imagery in which neither direct nor indirect orientation procedures are required. Inspired by the embedded deformation model, a widely used non-rigid mesh deformation model, we present a novel objective function for image mosaicking. First, we construct a feature correspondence energy term that minimizes the sum of the squared distances between matched feature pairs to align the images geometrically. Second, we model a regularization term that constrains the image transformation parameters directly by keeping all transformations as rigid as possible, thereby avoiding global distortion in the final mosaic. Experimental results presented herein demonstrate that the accuracy of our method is twice that of an existing (purely image-based) approach, with the associated benefits of significantly faster processing times and improved robustness with respect to reference image selection.
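
From the description above, the objective function appears to combine a feature-correspondence data term with an as-rigid-as-possible regularizer on the per-image transformations. A minimal sketch of such an objective, using illustrative notation (the symbols T_i, A_i, t_i, \lambda, and the matched-point sets below are assumptions for exposition, not the paper's own notation), could read:

E(T_1, \dots, T_N) \;=\; \sum_{(i,j)} \sum_{k \in \mathcal{M}_{ij}} \bigl\lVert T_i(p_k) - T_j(q_k) \bigr\rVert^2 \;+\; \lambda \sum_{i=1}^{N} \bigl\lVert A_i^{\top} A_i - I \bigr\rVert_F^2 ,

where T_i(x) = A_i x + t_i denotes the transformation applied to image i, \mathcal{M}_{ij} is the set of matched feature pairs (p_k, q_k) between images i and j, and \lambda weights the rigidity term, which penalizes deviation of each A_i from an orthogonal (rigid) matrix in the spirit of the embedded deformation model.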

Cite

APA

Xu, Y., Ou, J., He, H., Zhang, X., & Mills, J. (2016). Mosaicking of Unmanned Aerial Vehicle imagery in the absence of camera poses. Remote Sensing, 8(3). https://doi.org/10.3390/rs8030204
