Diffusion Models for Environment Visualization: Leveraging Stable Diffusion as a Generator for Architectural Spatial Design

Abstract

Artificial intelligence continues to integrate seamlessly into our daily lives, sparking immense interest in its optimization across various sectors. Architectural design and digital environment creation stand to benefit substantially from this technological evolution. This study delves into a specific application of AI: the generation of both rendered and conceptual images to facilitate project communication. Leveraging Stable Diffusion, a generative diffusion model, in conjunction with ControlNet, a multi-layer control tool for image generation, multiple iterations were conducted. A deliberate configuration of variables allowed for localized variations stemming from standardized input instructions, showcasing the breadth of possibilities these technologies offer in architectural communication and digital creation. Classifying 720 images based on the applied control type, this research systematically compares the characteristics and biases of the variables used to scrutinize their influence on the resulting outputs. Rooted in an exploratory and propositional approach, this work sets the stage for subsequent in-depth investigations and advancements in architectural AI research.
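
The paper does not publish its implementation, but the workflow it describes (a Stable Diffusion model guided by ControlNet conditioning to produce controlled variations from standardized prompts) can be approximated with common open-source tooling. The sketch below assumes the Hugging Face diffusers library, a Canny-edge ControlNet, and illustrative checkpoint names and parameter values; none of these details are taken from the study itself.

    # Minimal sketch of a Stable Diffusion + ControlNet generation loop.
    # Assumptions (not from the paper): diffusers library, Canny-edge conditioning,
    # and the specific checkpoints and sampler settings shown below.
    import torch
    import cv2
    import numpy as np
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # Derive an edge map from a reference image of the space (hypothetical file name).
    reference = np.array(Image.open("atrium_reference.png").convert("L"))
    edges = cv2.Canny(reference, 100, 200)
    control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    # Standardized base prompt with localized variations, mirroring the study's design.
    base_prompt = "interior view of a double-height atrium, concrete and timber, soft daylight"
    variations = ["photorealistic render", "conceptual sketch, loose linework"]

    for i, style in enumerate(variations):
        result = pipe(
            prompt=f"{base_prompt}, {style}",
            image=control_image,
            num_inference_steps=30,
            guidance_scale=7.5,
            generator=torch.Generator("cuda").manual_seed(42),  # fixed seed so outputs stay comparable
        ).images[0]
        result.save(f"output_{i}.png")

Holding the seed and the conditioning image fixed while varying only the prompt is one way to obtain the kind of comparable, control-type-classified outputs the abstract describes.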

Citation (APA)

Meira-Rodríguez, P., & López-Chao, V. (2024). Diffusion Models for Environment Visualization: Leveraging Stable Diffusion as a Generator for Architectural Spatial Design. In Springer Series in Design and Innovation (Vol. 43, pp. 417–426). Springer Nature. https://doi.org/10.1007/978-3-031-57575-4_49
