Learning multi-view correspondences via subspace-based temporal coincidences

Abstract

In this work we present an approach to automatically learn pixel correspondences between pairs of cameras. We build on the method of Temporal Coincidence Analysis (TCA) and extend it from the purely temporal (i.e., single-pixel) to the spatiotemporal domain. Our approach is based on learning a statistical model for local spatiotemporal image patches, determining rare and expressive events from this model, and matching these events across multiple views. Accumulating multi-image coincidences of such events over time makes it possible to learn the desired geometric and photometric relations. The presented method also works for strongly differing viewpoints and camera settings, including substantial rotation and translation. The only assumptions made are that the relative orientation of each camera pair may be arbitrary but must be fixed, and that the observed scene shows visual activity. We show that the proposed method outperforms the single-pixel approach to TCA in terms of both learning speed and accuracy. © 2013 Springer-Verlag.
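The core idea the abstract describes, that rare events co-occurring across views over time reveal which pixels correspond, can be illustrated with a minimal toy sketch. This is not the authors' implementation: the "events" here are synthetic binary detections, and the unknown geometric relation is modeled as a simple pixel permutation, both assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumption): camera B sees camera A's pixels through a
# fixed but unknown permutation, standing in for the geometric relation.
n_pixels, n_frames, event_rate = 8, 2000, 0.05
true_map = rng.permutation(n_pixels)

# Coincidence counts: rows index pixels in camera A, columns in camera B.
counts = np.zeros((n_pixels, n_pixels))

for _ in range(n_frames):
    # Rare binary "events" fire in camera A; camera B observes the same
    # events at permuted pixel locations, plus independent clutter events.
    events_a = rng.random(n_pixels) < event_rate
    events_b = np.zeros(n_pixels, dtype=bool)
    events_b[true_map] = events_a
    events_b |= rng.random(n_pixels) < 0.01  # independent clutter
    # Accumulate co-occurrences of events across the two views.
    counts += np.outer(events_a, events_b)

# Estimated correspondence: each A-pixel's most frequent coincidence
# partner in B. Because true coincidences accumulate systematically and
# clutter does not, the argmax recovers the hidden mapping.
estimated_map = counts.argmax(axis=1)
accuracy = (estimated_map == true_map).mean()
print(accuracy)
```

The key property, exploited far more efficiently by the paper's subspace-based event selection, is that rare events make accidental coincidences unlikely, so even a plain counting scheme converges to the correct correspondences given enough frames.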

Citation (APA)

Conrad, C., & Mester, R. (2013). Learning multi-view correspondences via subspace-based temporal coincidences. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7944 LNCS, pp. 456–467). https://doi.org/10.1007/978-3-642-38886-6_43
