Cities are complex, open environments with multidimensional aspects, including urban form, urban imagery, and urban energy performance. A platform that supports dialogue between the user and the machine is therefore crucial in urban computational modeling (UCM). In this paper, we present a novel UCM framework that integrates urban geometry and urban visual appearance. The framework applies unsupervised machine learning, specifically the self-organizing map (SOM), together with information retrieval techniques. We propose an instrument that helps designers navigate among references from the built environment. The framework incorporates geometric and imagery aspects by encoding urban spatial and visual appearance characteristics with isovist analysis and semantic segmentation, yielding integrated geometry and imagery features (IGIF). A ray SOM and a mask SOM are trained on the IGIF, using building footprints and street view images of Nanjing as the dataset. By interlinking the two SOMs, the program retrieves urban plots with similar spatial traits, similar visual appearance, or both. It thus provides urban designers with a navigable explorer space of references from the built environment, to inspire design ideas and to learn from. Our framework supports architects and urban designers in both design inspiration and decision making by bringing human intelligence into UCM. Future research directions using and extending the framework are also discussed.
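The retrieval idea in the abstract — train a SOM on plot feature vectors, then return plots whose best-matching unit (BMU) coincides with a query's — can be sketched in a few lines of NumPy. This is a minimal, self-contained illustration, not the paper's implementation: the grid size, decay schedules, and the synthetic "IGIF-like" feature vectors are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(6, 6), iters=2000, lr0=0.5, sigma0=2.0):
    """Train a rectangular SOM on row-vector features (illustrative settings)."""
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        frac = t / iters
        lr = lr0 * (1 - frac)                 # linearly decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5     # shrinking neighbourhood radius
        # best-matching unit: grid cell whose weight vector is nearest to x
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Gaussian neighbourhood update pulls the BMU region toward x
        g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                   / (2 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights

def bmu_of(weights, x):
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

def retrieve(weights, data, query):
    """Indices of samples mapped to the same BMU as the query."""
    q = bmu_of(weights, query)
    return [i for i, x in enumerate(data) if bmu_of(weights, x) == q]

# Synthetic stand-in for IGIF vectors: two well-separated clusters of plots
feats = np.vstack([0.2 + 0.05 * rng.random((20, 8)),
                   0.8 + 0.05 * rng.random((20, 8))])
weights = train_som(feats)
hits = retrieve(weights, feats, feats[0])  # plots resembling plot 0
```

In the paper's framework this lookup would run twice, once on a ray SOM (isovist features) and once on a mask SOM (segmentation features), with the two hit lists intersected or merged to retrieve plots similar in geometry, appearance, or both.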
Citation:
Cai, C., Zaghloul, M., & Li, B. (2022). Data Clustering in Urban Computational Modeling by Integrated Geometry and Imagery Features for Probabilistic Navigation. Applied Sciences (Switzerland), 12(24). https://doi.org/10.3390/app122412704