3D Spatial Analysis Method with First-Person Viewpoint by Deep Convolutional Neural Network with Omnidirectional RGB and Depth Images

Citations: 6
Mendeley readers: 14

Abstract

Image-based spatial analysis is widely applied in architecture and urban planning. However, many features influence spatial conditions, and not all of them can be explicitly defined. In this research, we propose a new deep learning framework that extracts spatial features without specifying them explicitly and uses these features for spatial analysis and prediction. As a first step, we formulate a deep convolutional neural network (DCNN) learning problem on omnidirectional images that include depth images as well as ordinary RGB images. We then use omnidirectional images rendered in a game engine as explanatory variables to predict subjects' preferences regarding a virtual urban space. The DCNN learns the relationship between the evaluation results and the omnidirectional camera images, and we confirm the prediction accuracy on the verification data.
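The setup described in the abstract can be summarized in a few lines of code. The following is a minimal sketch, not the authors' implementation: it assumes 256x512 equirectangular RGB-D frames, illustrative layer sizes, and a scalar regression target for the preference score, expressed in PyTorch.

```python
# Hypothetical sketch of the abstract's setup: a small convolutional
# network that takes a 4-channel omnidirectional image (RGB + depth)
# and regresses a scalar preference score. The 256x512 equirectangular
# resolution and all layer sizes are illustrative assumptions, not
# values from the paper.
import torch
import torch.nn as nn

class PreferenceDCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=5, stride=2, padding=2),  # RGB+D input
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling over the panorama
        )
        self.head = nn.Linear(128, 1)  # scalar preference score

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.head(f)

# Example: one equirectangular RGB-D frame, e.g. rendered in a game engine.
model = PreferenceDCNN()
frame = torch.rand(1, 4, 256, 512)  # (batch, channels, height, width)
score = model(frame)
print(score.shape)  # torch.Size([1, 1])
```

Stacking depth as a fourth input channel alongside RGB is one straightforward way to feed both modalities to a single DCNN; the paper's actual network architecture may differ.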

Citation (APA)

Takizawa, A., & Furuta, A. (2017). 3D Spatial Analysis Method with First-Person Viewpoint by Deep Convolutional Neural Network with Omnidirectional RGB and Depth Images. In Proceedings of the International Conference on Education and Research in Computer Aided Architectural Design in Europe (Vol. 2, pp. 693–702). Education and Research in Computer Aided Architectural Design in Europe. https://doi.org/10.52842/conf.ecaade.2017.2.693
