The rôle of self-calibration in Euclidean reconstruction from two rotating and zooming cameras

Abstract

Reconstructing the scene from image sequences captured by moving cameras with varying intrinsic parameters is one of the major achievements of computer vision research in recent years. However, gaps remain in the knowledge of what is reliably recoverable when the camera motion is constrained in particular ways. This paper considers the special case of multiple cameras whose optic centres are fixed in space, but which are allowed to rotate and zoom freely, an arrangement seen widely in practical applications. The analysis is restricted to two such cameras, although the methods are readily extended to more than two. As a starting point, an initial self-calibration of each camera is obtained independently. The first contribution of this paper is an analysis of near-ambiguities which commonly arise in the self-calibration of rotating cameras. Secondly, we demonstrate how their effects may be mitigated by exploiting the epipolar geometry. Results on simulated and real data are presented to demonstrate how a number of self-calibration methods perform, including a final bundle adjustment of all motion and structure parameters.
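The self-calibration of a rotating camera mentioned in the abstract rests on a standard geometric fact: when a camera rotates about its optic centre, successive images are related by the infinite homography H = K R K⁻¹, and this homography leaves the image of the absolute conic ω = (K Kᵀ)⁻¹ invariant. The following NumPy sketch verifies that invariance for hypothetical intrinsics K and rotation R; it assumes fixed intrinsics (the paper treats the harder zooming case, where K varies between views), and the numbers are illustrative only.

```python
import numpy as np

# Hypothetical fixed intrinsics: focal lengths and principal point.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def rotation_z(theta):
    """Rotation about the camera's optical axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

R = rotation_z(0.1)

# For a purely rotating camera, two images are related by the
# infinite homography H = K R K^{-1}.
H = K @ R @ np.linalg.inv(K)

# The image of the absolute conic, omega = (K K^T)^{-1}, satisfies
# H^{-T} omega H^{-1} = omega. Self-calibration methods for rotating
# cameras solve this constraint for omega over several homographies,
# then recover K from omega by Cholesky factorisation.
omega = np.linalg.inv(K @ K.T)
Hinv = np.linalg.inv(H)
omega_mapped = Hinv.T @ omega @ Hinv

print(np.allclose(omega_mapped, omega))  # prints True
```

With a single rotation the constraint is underdetermined; in practice several inter-image homographies are stacked into a linear system for the entries of ω, which is why the near-ambiguities analysed in the paper (small rotations, limited axes) matter.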

Citation (APA)

Hayman, E., de Agapito, L., Reid, I. D., & Murray, D. W. (2000). The rôle of self-calibration in Euclidean reconstruction from two rotating and zooming cameras. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1843, pp. 477–492). Springer-Verlag. https://doi.org/10.1007/3-540-45053-x_31
