Automated test generation techniques for graphical user interfaces include model-based approaches that generate tests from a graph or state-machine model, capture-replay methods that require the user to demonstrate each test case, and pattern-based approaches that provide templates for abstract test cases. There has been little work, however, in automated goal-based testing, where the goal is a realistic user task, a function, or an abstract behavior. Recent work in human performance regression testing has shown a need to generate multiple test cases that execute the same user task in different ways; however, that work lacks an efficient way to generate tests and considers only a single type of goal. In this paper we expand the notion of goal-based interface testing to generate tests for a variety of goals. We develop a direct test generation technique, EventFlowSlicer, that is more efficient than the one used in human performance regression testing, reducing run times by 92.5% on average for test suites of 9 to 26 steps and by 63.1% across all test suites. Our evaluation shows that the number of tests generated is non-trivial: more than can easily be captured manually. On average, EventFlowSlicer generated 38 test cases per suite, and as many as 200 test cases, all of which achieve the same goal for a specified task.
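For intuition, the following is a minimal sketch of goal-based test generation over an event-flow model: it enumerates event sequences (test cases) that all cover a given set of goal events, so that one abstract goal yields many concrete tests. This is not the EventFlowSlicer algorithm or its API; the function, event names, and length bound are illustrative assumptions only.

```python
# Hypothetical sketch of goal-based test generation from an event-flow graph.
# NOT the EventFlowSlicer implementation; names and structure are assumptions.

from typing import Dict, List, Set

def generate_goal_tests(
    event_flow: Dict[str, List[str]],   # event -> events that may follow it
    initial_events: List[str],          # events available in the GUI's start state
    goal_events: Set[str],              # events that must all appear in a test
    max_length: int,                    # bound on test-case length
) -> List[List[str]]:
    """Return every event sequence (up to max_length) that covers goal_events."""
    tests: List[List[str]] = []

    def extend(path: List[str], covered: Set[str]) -> None:
        if goal_events <= covered:
            tests.append(list(path))     # goal reached; record this test case
            return
        if len(path) >= max_length:
            return                       # too long; prune this branch
        for nxt in event_flow.get(path[-1], []):
            extend(path + [nxt], covered | ({nxt} & goal_events))

    for start in initial_events:
        extend([start], {start} & goal_events)
    return tests


if __name__ == "__main__":
    # Toy GUI model: several different event orderings reach the same goal.
    flow = {
        "open_file": ["edit_text", "open_save_dialog"],
        "edit_text": ["edit_text", "open_save_dialog"],
        "open_save_dialog": ["click_save"],
        "click_save": [],
    }
    suite = generate_goal_tests(flow, ["open_file"], {"edit_text", "click_save"}, 6)
    for test in suite:
        print(" -> ".join(test))
```

On this toy model the sketch prints several distinct event sequences that all edit and then save the document, illustrating the idea of generating multiple test cases that achieve the same goal in different ways.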
Saddler, J., & Cohen, M. B. (2016). EventFlowSlicer: Goal based test generation for graphical user interfaces. In A-TEST 2016 - Proceedings of the 7th International Workshop on Automating Test Case Design, Selection, and Evaluation, co-located with FSE 2016 (pp. 8–15). Association for Computing Machinery, Inc. https://doi.org/10.1145/2994291.2994293