Recent educational standards stress that students should learn how to read and understand scientific explanations and create explanations of their own. But these skills are difficult for teachers to evaluate, so they often assess them at a shallow level or avoid giving such assignments. Previous approaches for automatically evaluating explanatory and other types of structured essays have relied on shallow features or bag-of-words methods. These methods may allow a reasonable holistic assessment of an essay, but they fail to identify which concepts students included and which causal connections they made. In this paper, we investigate which natural language processing methods are most successful at locating the concepts in student explanations and the causal connections between those concepts. We found that a recurrent neural network for identifying concepts, combined with a novel causal relation parser, produced very good accuracy in two different scientific domains, significantly improving on the prior state of the art.
CITATION STYLE
Hughes, S., Hastings, P., & Britt, M. A. (2019). Identifying the structure of students’ explanatory essays. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11626 LNAI, pp. 110–115). Springer Verlag. https://doi.org/10.1007/978-3-030-23207-8_21