Despite the multitude of available software testing tools, the literature cites the lack of suitable tools and their costs as obstacles to tool adoption. We conducted a case study to analyze how a group of practitioners familiar with Robot Framework (an open-source, generic test automation framework) evaluate the tool. We based the case and the unit of analysis on our academia-industry relations, i.e., availability. We used a survey (n = 68) and interviews (n = 6) with convenience sampling to develop a comprehensive view of the phenomenon. The study reveals the importance of understanding how different evaluation criteria are interconnected and how strongly context influences them. Our results show that unconfirmed or unfocused opinions about criteria, e.g., Costs or Programming Skills, can lead to misinterpretations or hamper strategic decisions if the required technical competence is overlooked. We conclude that surveys can serve as a useful instrument for collecting empirical knowledge about tool evaluation, but experiential reasoning collected with a complementary method is needed to develop a comprehensive understanding of it.
CITATION
Raulamo-Jurvanen, P., Hosio, S., & Mäntylä, M. V. (2019). Applying Surveys and Interviews in Software Test Tool Evaluation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11915 LNCS, pp. 20–36). Springer. https://doi.org/10.1007/978-3-030-35333-9_2