Indoor Scene Understanding in 2.5/3D for Autonomous Agents: A Survey


Abstract

With the availability of low-cost and compact 2.5/3D visual sensing devices, the computer vision community is experiencing growing interest in visual scene understanding of indoor environments. This survey provides a comprehensive background on this research topic. We begin with a historical perspective, followed by popular 3D data representations and a comparative analysis of available datasets. Before delving into application-specific details, the survey offers a succinct introduction to the core technologies that underlie the methods reviewed here. Afterwards, we review the developed techniques according to a taxonomy based on scene understanding tasks. This covers holistic indoor scene understanding as well as subtasks such as scene classification, object detection, pose estimation, semantic segmentation, 3D reconstruction, saliency detection, physics-based reasoning, and affordance prediction. We then summarize the performance metrics used for evaluation in the different tasks and provide a quantitative comparison of recent state-of-the-art techniques. We conclude this review with the current challenges and an outlook on open research problems requiring further investigation.

Citation (APA)

Naseer, M., Khan, S., & Porikli, F. (2019). Indoor Scene Understanding in 2.5/3D for Autonomous Agents: A Survey. IEEE Access, 7, 1859–1887. https://doi.org/10.1109/ACCESS.2018.2886133
