Associating assessment items with hypothesized knowledge components (KCs) enables us to gain fine-grained data on students’ performance within an ed-tech system. However, creating this association is a time-consuming process that requires substantial instructor effort. In this study, we present the results of crowdsourcing insights into the underlying concepts of problems in mathematics and English writing, as a first step toward leveraging the crowd to expedite the task of generating KCs. We presented crowdworkers with two problems in each domain and asked them to provide three explanations of why one problem is more challenging than the other. These explanations were then independently analyzed through (1) a series of qualitative coding methods and (2) several topic modeling techniques, to compare how each might assist in extracting KCs and other insights from the participant contributions. Results of our qualitative coding showed that crowdworkers were able to generate KCs that approximately matched those generated by domain experts. The topic models’ outputs were in turn evaluated against both the expert-generated KCs and the results of the qualitative coding to determine their effectiveness. Ultimately, we found that while topic modeling did not reach parity with the qualitative coding methods, it did assist in identifying useful clusters of explanations. This work demonstrates a method to leverage both the crowd’s knowledge and topic modeling to assist in the process of generating KCs for assessment items.
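The abstract does not name the specific topic modeling techniques used. As a rough illustration of the general approach described (clustering free-text crowdworker explanations into topics whose top terms can be reviewed as candidate KCs), the following sketch uses scikit-learn's LDA implementation; the library choice, parameters, and example explanations are assumptions for illustration only, not the authors' method.

```python
# Hypothetical sketch: grouping crowdworker explanations with LDA topic modeling.
# Topic terms would then be inspected by domain experts as candidate KCs.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented example explanations of why one problem is harder than another.
explanations = [
    "Problem A needs you to combine like terms before solving for x.",
    "The second problem involves fractions, which students find harder.",
    "You have to know the order of operations to get the first one right.",
    "Problem B requires writing a topic sentence that states the main claim.",
    "The harder essay prompt asks for supporting evidence from the passage.",
    "It is more difficult because you must organize paragraphs logically.",
]

# Bag-of-words representation; stop words removed to focus on content terms.
vectorizer = CountVectorizer(stop_words="english")
doc_term_matrix = vectorizer.fit_transform(explanations)

# Fit a small LDA model; the number of topics would be tuned in practice.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term_matrix)

# Print the top words per topic as candidate KC labels for expert review.
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")
```

In practice, the resulting topic clusters would be compared against expert-generated KCs and the qualitative codes, as the study describes, rather than used on their own.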
CITATION STYLE
Moore, S., Nguyen, H. A., & Stamper, J. (2020). Evaluating crowdsourcing and topic modeling in generating knowledge components from explanations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12163 LNAI, pp. 398–410). Springer. https://doi.org/10.1007/978-3-030-52237-7_32