Identifying and Classifying User Requirements in Online Feedback via Crowdsourcing

Citations of this article: 23
Mendeley readers who have this article in their library: 46
Abstract

[Context and motivation] App stores and social media channels such as Twitter enable users to share feedback regarding software. Due to its high volume, it is hard to effectively and systematically process such feedback to obtain a good understanding of users’ opinions about a software product. [Question/problem] Tools based on natural language processing and machine learning have been proposed as an inexpensive mechanism for classifying user feedback. Unfortunately, the accuracy of these tools is imperfect, which jeopardizes the reliability of the analysis results. We investigate whether assigning micro-tasks to crowd workers could be an alternative technique for identifying and classifying requirements in user feedback. [Principal ideas/results] We present a crowdsourcing method for filtering out irrelevant app store reviews and for identifying features and qualities. A validation study has shown positive results in terms of feasibility, accuracy, and cost. [Contribution] We provide evidence that crowd workers can be an inexpensive yet accurate resource for classifying user reviews. Our findings contribute to the debate on the roles of and synergies between humans and AI techniques.
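The abstract does not detail how the crowdsourcing micro-tasks are structured or how workers' answers are combined. Purely as an illustrative sketch, the Python snippet below shows one generic way such a pipeline could look: reviews are split into small batches that individual crowd workers can classify, and several workers' labels per review are aggregated by majority vote. The label set (feature, quality, irrelevant) follows the categories named in the abstract, but the batch size, function names, and aggregation rule are assumptions made for this example, not the authors' actual task design.

    from collections import Counter
    from typing import Dict, List

    # Assumed label set, based on the categories mentioned in the abstract.
    LABELS = {"feature", "quality", "irrelevant"}

    def make_micro_tasks(reviews: List[str], batch_size: int = 10) -> List[List[str]]:
        """Split app store reviews into small batches (micro-tasks)
        that a single crowd worker can classify in one sitting."""
        return [reviews[i:i + batch_size] for i in range(0, len(reviews), batch_size)]

    def aggregate_judgments(judgments: Dict[str, List[str]]) -> Dict[str, str]:
        """Aggregate multiple workers' labels per review by majority vote.
        `judgments` maps a review ID to the labels assigned by different workers."""
        aggregated = {}
        for review_id, labels in judgments.items():
            label, _count = Counter(labels).most_common(1)[0]
            aggregated[review_id] = label
        return aggregated

    if __name__ == "__main__":
        reviews = ["The app crashes on startup.",
                   "Please add a dark mode.",
                   "Five stars!!!"]
        print(make_micro_tasks(reviews, batch_size=2))
        print(aggregate_judgments({"r1": ["quality", "quality", "feature"],
                                   "r2": ["feature", "feature", "feature"],
                                   "r3": ["irrelevant", "quality", "irrelevant"]}))

Majority voting is only one common way to reconcile redundant crowd judgments; the paper's validation study may use a different aggregation or quality-control scheme.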

Citation (APA)

van Vliet, M., Groen, E. C., Dalpiaz, F., & Brinkkemper, S. (2020). Identifying and Classifying User Requirements in Online Feedback via Crowdsourcing. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12045 LNCS, pp. 143–159). Springer. https://doi.org/10.1007/978-3-030-44429-7_11
