Manhattan scene understanding using monocular, stereo, and 3D features

Abstract

This paper addresses scene understanding in the context of a moving camera, integrating semantic reasoning ideas from monocular vision with 3D information available through structure-from-motion. We combine geometric and photometric cues in a Bayesian framework, building on recent successes leveraging the indoor Manhattan assumption in monocular vision. We focus on indoor environments and show how to extract key boundaries while ignoring clutter and decorations. To achieve this we present a graphical model that relates photometric cues learned from labeled data, stereo photo-consistency across multiple views, and depth cues derived from structure-from-motion point clouds. We show how to solve MAP inference using dynamic programming, allowing exact, global inference in ∼100 ms (plus under one second of feature computation) without specialized hardware. Experiments show our system outperforming the state of the art. © 2011 IEEE.
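To illustrate the dynamic-programming claim in the abstract, the sketch below shows exact MAP inference over a chain of image columns. This is not the authors' implementation: it assumes a simplified model in which each column takes one discrete state (e.g. a wall orientation), the per-column unary costs are presumed to already fuse the photometric, stereo, and structure-from-motion cues as summed negative log-likelihoods, and transitions follow a uniform Potts penalty rather than the paper's richer transition structure. The names `map_inference_dp`, `unary`, and `switch_cost` are invented for this sketch.

```python
import numpy as np

def map_inference_dp(unary, switch_cost=1.0):
    """Exact MAP inference over a chain of image columns via Viterbi-style DP.

    unary[c, s]  -- cost (negative log-likelihood, summed over all cues)
                    of assigning state s to column c.  (Hypothetical input;
                    the paper fuses cues inside a richer graphical model.)
    switch_cost  -- Potts penalty for changing state between adjacent
                    columns (a simplification of the paper's transitions).
    Returns the globally optimal state sequence, one label per column.
    """
    n_cols, n_states = unary.shape
    cost = unary[0].copy()                      # best cost ending in each state
    back = np.empty((n_cols, n_states), dtype=int)
    back[0] = np.arange(n_states)

    for c in range(1, n_cols):
        # Potts trick: the best predecessor is either the same state
        # (no penalty) or the globally cheapest state plus switch_cost.
        best = int(np.argmin(cost))
        jump = cost[best] + switch_cost
        keep = cost <= jump                     # cheaper to stay in-state?
        back[c] = np.where(keep, np.arange(n_states), best)
        cost = np.where(keep, cost, jump) + unary[c]

    # Backtrack from the cheapest terminal state.
    labels = np.empty(n_cols, dtype=int)
    labels[-1] = int(np.argmin(cost))
    for c in range(n_cols - 1, 0, -1):
        labels[c - 1] = back[c, labels[c]]
    return labels

# Toy usage: 640 columns, 3 hypothetical wall-orientation states.
rng = np.random.default_rng(0)
unary = rng.random((640, 3))
labels = map_inference_dp(unary, switch_cost=0.5)
```

With n states and C columns this runs in O(nC), consistent with the exact global inference at sub-second speeds that the abstract reports, though the actual state space and costs in the paper differ from this toy setup.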

Citation (APA)

Flint, A., Murray, D., & Reid, I. (2011). Manhattan scene understanding using monocular, stereo, and 3D features. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2228–2235). https://doi.org/10.1109/ICCV.2011.6126501
