Learning design semantics for mobile apps

Abstract

Recently, researchers have developed black-box approaches to mine design and interaction data from mobile apps. Although the data captured during this interaction mining is descriptive, it does not expose the design semantics of UIs: what elements on the screen mean and how they are used. This paper introduces an automatic approach for generating semantic annotations for mobile app UIs. Through an iterative open coding of 73k UI elements and 720 screens, we contribute a lexical database of 25 types of UI components, 197 text button concepts, and 135 icon classes shared across apps. We use this labeled data to learn code-based patterns to detect UI components and to train a convolutional neural network that distinguishes between icon classes with 94% accuracy. To demonstrate the efficacy of our approach at scale, we compute semantic annotations for the 72k unique UIs in the Rico dataset, assigning labels for 78% of the total visible, non-redundant elements.
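
The abstract mentions training a convolutional neural network to distinguish between the 135 icon classes. The sketch below is only an illustration of that idea, not the authors' model: it assumes 32×32 grayscale icon crops and an arbitrary small architecture, since the paper's exact input size, layers, and training setup are not given here.

```python
# Minimal sketch of an icon classifier in the spirit of the abstract.
# Assumptions (not from the paper): 32x32 grayscale icon crops and an
# illustrative two-block conv net; the authors' architecture may differ.
import torch
import torch.nn as nn

NUM_ICON_CLASSES = 135  # number of icon classes reported in the abstract


class IconCNN(nn.Module):
    def __init__(self, num_classes: int = NUM_ICON_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 256), nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = IconCNN()
    icons = torch.randn(8, 1, 32, 32)    # dummy batch of icon crops
    logits = model(icons)                # shape: (8, 135)
    predictions = logits.argmax(dim=1)   # predicted icon class per crop
    print(predictions.shape)
```

In the paper's pipeline, a classifier like this would label icon elements after UI components have been detected from the screen's view hierarchy; the dummy batch above simply stands in for cropped icon images.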

Cite

APA

Liu, T. F., Craft, M., Situ, J., Yumer, E., Mech, R., & Kumar, R. (2018). Learning design semantics for mobile apps. In UIST 2018 - Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology (pp. 569–579). Association for Computing Machinery, Inc. https://doi.org/10.1145/3242587.3242650
