Human-centric indoor environment modeling from depth videos

3 citations · 11 Mendeley readers

This article is free to access.

Abstract

We propose an approach to model indoor environments from depth videos (the camera is stationary while recording), which includes extracting the 3-D spatial layout of rooms and modeling objects as 3-D cuboids. Unlike previous work, which relies purely on image appearance, we argue that indoor environment modeling should be human-centric: not only because humans are an important part of indoor environments, but also because the interaction between humans and their environment conveys much useful information about the environment. In this paper, we develop an approach to extract physical constraints from human poses and motion to better recover the spatial layout and model the objects inside. We observe that the cues provided by human-environment interaction are very powerful: even with little training data, our method achieves promising performance. Our approach is built on depth videos, which makes it more user-friendly. © 2012 Springer-Verlag.
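
The abstract does not give implementation details, but to make the idea of human-derived physical constraints concrete, here is a minimal Python/NumPy sketch of two such cues: space a person walks through cannot lie inside an object, and a horizontal surface should sit just below the hips when a person sits. All function names, thresholds, and the scoring scheme are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def cuboid_contains(cuboid, points):
    """cuboid = (min_corner, max_corner), points = (N, 3) array.
    Boolean mask of points lying inside the axis-aligned cuboid."""
    lo, hi = cuboid
    return np.all((points >= lo) & (points <= hi), axis=1)

def free_space_penalty(cuboid, walked_positions):
    """Penalize cuboid hypotheses that overlap space the person moved through."""
    return int(cuboid_contains(cuboid, walked_positions).sum())

def sitting_support_score(cuboid, sitting_hips, tol=0.10):
    """Reward cuboids whose top face lies within `tol` metres below/above the
    hips during sitting, suggesting a supporting surface (chair, sofa, bed)."""
    lo, hi = cuboid
    hips = np.asarray(sitting_hips)
    over_footprint = np.all((hips[:, :2] >= lo[:2]) & (hips[:, :2] <= hi[:2]), axis=1)
    near_top = np.abs(hips[:, 2] - hi[2]) < tol
    return int(np.sum(over_footprint & near_top))

def score_cuboid(cuboid, walked_positions, sitting_hips, w_free=1.0, w_support=2.0):
    """Combine the two human-derived cues into one score (higher is better).
    Weights are arbitrary placeholders for illustration."""
    return (w_support * sitting_support_score(cuboid, sitting_hips)
            - w_free * free_space_penalty(cuboid, walked_positions))

if __name__ == "__main__":
    # Toy example: one candidate cuboid (axis-aligned box in metres, z up),
    # hip positions from a hypothetical depth-sensor skeleton tracker.
    cuboid = (np.array([1.0, 1.0, 0.0]), np.array([1.6, 1.6, 0.45]))
    walked = np.array([[0.2, 0.5, 0.9], [3.0, 2.0, 0.9]])       # hips while walking
    sitting = np.array([[1.3, 1.3, 0.52], [1.25, 1.35, 0.50]])  # hips while seated
    print("cuboid score:", score_cuboid(cuboid, walked, sitting))
```

In a full system such scores would presumably be combined with appearance-based cuboid and layout hypotheses; the sketch only shows how observed poses can be turned into geometric constraints.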

Citation (APA)

Lu, J., & Wang, G. (2012). Human-centric indoor environment modeling from depth videos. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7584 LNCS, pp. 42–51). Springer Verlag. https://doi.org/10.1007/978-3-642-33868-7_5
