Given the ever-increasing number of research tools that automatically generate inputs to test Android applications (or simply apps), researchers recently asked the question "Are we there yet?" (in terms of the practicality of the tools). By conducting an empirical study of the various tools, the researchers found that Monkey (the most widely used tool of this category in industrial settings) outperformed all of the research tools in the study. In this paper, we present two significant extensions of that study. First, we conduct the first industrial case study of applying Monkey against WeChat, a popular messenger app with over 762 million monthly active users, and report the empirical findings on Monkey's limitations in an industrial setting. Second, we develop a new approach to address major limitations of Monkey and accomplish substantial code-coverage improvements over Monkey. We conclude the paper with empirical insights for future enhancements to both Monkey and our approach.
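For context, Monkey ships with the Android SDK and is driven through adb. The sketch below shows one way such a run against an app might be scripted; it is a minimal illustration under stated assumptions, not the study's actual setup. The package name com.tencent.mm (WeChat's identifier), the event count, seed, and throttle values are all assumptions chosen for the example.

    import subprocess

    # Minimal sketch: drive the SDK's Monkey tool through adb.
    # Assumptions (not from the paper): adb is on PATH, a device is
    # attached, and the target package is WeChat (com.tencent.mm).
    PACKAGE = "com.tencent.mm"

    def run_monkey(package: str, events: int = 1000, seed: int = 42,
                   throttle_ms: int = 100) -> None:
        """Fire a fixed-seed stream of pseudo-random UI events at one app."""
        subprocess.run(
            ["adb", "shell", "monkey",
             "-p", package,                    # restrict events to this package
             "-s", str(seed),                  # fixed seed for repeatable runs
             "--throttle", str(throttle_ms),   # pause between events (ms)
             "-v",                             # verbose progress output
             str(events)],                     # total number of events to inject
            check=True,
        )

    if __name__ == "__main__":
        run_monkey(PACKAGE)

Fixing the seed (-s) makes a Monkey run repeatable, which is what allows coverage results from different configurations to be compared across runs.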
Zeng, X., Li, D., Zheng, W., Xia, F., Deng, Y., Lam, W., … Xie, T. (2016). Automated test input generation for Android: Are we really there yet in an industrial case? In Proceedings of the ACM SIGSOFT Symposium on the Foundations of Software Engineering (pp. 987–992). Association for Computing Machinery. https://doi.org/10.1145/2950290.2983958