Information fusion algorithms have been successful in many vision tasks, such as stereo, motion estimation, registration, and robot localization. Stereo and motion image analysis are intimately connected and can provide complementary information for obtaining robust estimates of scene structure and motion. We present an information-fusion-based approach to multi-camera and multi-body structure and motion that combines bottom-up and top-down knowledge of scene structure and motion. The only assumption we make is that all scene motion is rigid. We present experimental results on synthetic and real data sets, demonstrating excellent performance compared to state-of-the-art binocular approaches to structure and motion. © Springer-Verlag Berlin Heidelberg 2007.
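The rigid-motion assumption means every scene point of a given body moves under a single rotation plus translation, which preserves all pairwise distances. A minimal sketch of this constraint, using hypothetical points and motion parameters not taken from the paper:

```python
import numpy as np

# Rigid motion: x' = R x + t, where R is a rotation (R^T R = I, det R = 1).
# All values below are illustrative, not from the paper's experiments.
theta = np.pi / 6  # 30-degree rotation about the z-axis
R = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])
t = np.array([1.0, -2.0, 0.5])  # translation vector

# Three hypothetical scene points (one per row).
points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])

moved = points @ R.T + t  # apply the same rigid transform to every point

# Rigidity check: pairwise distances are unchanged by the motion.
d_before = np.linalg.norm(points[0] - points[1])
d_after = np.linalg.norm(moved[0] - moved[1])
assert np.isclose(d_before, d_after)
```

In the multi-body setting the paper addresses, each independently moving rigid body has its own (R, t) pair per frame, and the task is to recover these motions together with the scene structure.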
CITATION STYLE
Andreopoulos, A., & Tsotsos, J. K. (2007). Information fusion for multi-camera and multi-body structure and motion. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4843 LNCS, pp. 385–396). Springer Verlag. https://doi.org/10.1007/978-3-540-76386-4_36