Abstract
Graphical user interface (GUI) prototyping helps clarify requirements and keep stakeholders engaged in software development. While contemporary approaches retrieve GUIs relevant to a user's query, little support exists for the actual reuse, i.e., for using an existing GUI to create a new one. To bridge this gap, we investigate GUI-centered reuse via one of the latest artificial intelligence (AI) techniques: vision-language models (VLMs). We report an empirical study involving 73 university students working on ten GUI reuse tasks. Each task is associated with different reuse directions recommended by VLMs and by a natural language (NL) method. In addition, a focused GUI element is provided as a starting point for making the actual changes. Our results show that VLMs significantly outperform the NL method in making reuse recommendations, but surprisingly, the focused GUI elements are not consistently modified during reuse. Drawing on assessments by four experienced designers, we further offer insights into the creativity of human-reuse and AI-reuse results.
Citation
Niu, V., Alshammari, W., Iluru, N. M., Teeleti, P. V., Niu, N., Bhowmik, T., & Zhang, J. (2025). Exploiting Vision-Language Models in GUI Reuse. In Proceedings - 2025 IEEE/ACM 22nd International Conference on Software and Systems Reuse, ICSR 2025 (pp. 21–32). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICSR66718.2025.00009