WebUI: A Dataset for Enhancing Visual UI Understanding with Web Semantics


Abstract

Modeling user interfaces (UIs) from visual information allows systems to make inferences about the functionality and semantics needed to support use cases in accessibility, app automation, and testing. Current datasets for training machine learning models are limited in size due to the costly and time-consuming process of manually collecting and annotating UIs. We crawled the web to construct WebUI, a large dataset of 400,000 rendered web pages associated with automatically extracted metadata. We analyze the composition of WebUI and show that while automatically extracted data is noisy, most examples meet basic criteria for visual UI modeling. We applied several strategies for incorporating semantics found in web pages to increase the performance of visual UI understanding models in the mobile domain, where less labeled data is available, evaluating on three tasks: (i) element detection, (ii) screen classification, and (iii) screen similarity.
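
To make the crawl concrete, the sketch below shows one way a rendering-and-extraction step of this kind could look, using Playwright to pair a page screenshot with automatically extracted element metadata. This is an illustration in the spirit of the abstract, not the authors' released pipeline; the URL, viewport size, and chosen attributes are assumptions.

# pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

def capture_page(url: str, screenshot_path: str):
    """Render a page, save its screenshot, and return per-element metadata."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page(viewport={"width": 1280, "height": 800})
        page.goto(url, wait_until="networkidle")
        page.screenshot(path=screenshot_path)
        # Collect, for every rendered element, its tag name, ARIA role, and
        # bounding box -- the kind of web semantics that can supervise visual
        # UI models without manual annotation.
        elements = page.evaluate("""
            () => [...document.querySelectorAll('*')]
                .map(el => {
                    const r = el.getBoundingClientRect();
                    return {
                        tag: el.tagName.toLowerCase(),
                        role: el.getAttribute('role'),
                        box: [r.x, r.y, r.width, r.height],
                    };
                })
                .filter(e => e.box[2] > 0 && e.box[3] > 0)
        """)
        browser.close()
        return elements

if __name__ == "__main__":
    meta = capture_page("https://example.com", "example.png")
    print(f"extracted metadata for {len(meta)} rendered elements")

Applied across many pages, a loop like this yields screenshot/metadata pairs at a scale that would be impractical to annotate by hand, which is the trade-off (scale versus label noise) the abstract highlights.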

Cite

APA

Wu, J., Wang, S., Shen, S., Peng, Y.-H., Nichols, J., & Bigham, J. P. (2023). WebUI: A dataset for enhancing visual UI understanding with web semantics. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23). Association for Computing Machinery. https://doi.org/10.1145/3544548.3581158
