Investigating neural architectures for short answer scoring


Abstract

Neural approaches to automated essay scoring have recently shown state-of-the-art performance. The automated essay scoring task typically involves a broad notion of writing quality that encompasses content, grammar, organization, and conventions. This differs from the short answer content scoring task, which focuses on content accuracy. The inputs to neural essay scoring models, n-grams and embeddings, are arguably well suited to evaluating content in short answer scoring tasks. We investigate how several basic neural approaches similar to those used for automated essay scoring perform on short answer scoring. We show that neural architectures can outperform a strong non-neural baseline, but that performance and optimal parameter settings vary across the more diverse prompt types typical of short answer scoring.
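To make the kind of model the abstract describes concrete, below is a minimal sketch of a neural short answer scorer: token embeddings are passed through a recurrent encoder, pooled over time, and mapped to a normalized score. This is an illustrative sketch under assumptions, not the authors' exact architecture; the LSTM encoder, mean-over-time pooling, layer dimensions, and vocabulary size are all assumed for the example.

# A minimal sketch (not the paper's exact model) of a neural short answer
# scorer: embed tokens, encode with an RNN, pool over time, regress to a
# score in [0, 1]. All sizes and the encoder choice are illustrative.
import torch
import torch.nn as nn

class AnswerScorer(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 50, hidden_dim: int = 100):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # One plausible encoder; the paper compares several basic architectures.
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.embedding(token_ids)       # (batch, seq, embed_dim)
        states, _ = self.encoder(embedded)         # (batch, seq, hidden_dim)
        pooled = states.mean(dim=1)                # mean-over-time pooling
        return torch.sigmoid(self.output(pooled))  # normalized score in [0, 1]

# Usage: score a batch of two padded answers of length 6.
model = AnswerScorer(vocab_size=5000)
batch = torch.randint(1, 5000, (2, 6))
print(model(batch))  # tensor of shape (2, 1)

The sigmoid output can be rescaled to a prompt's score range at prediction time, a common design choice in neural scoring models that keeps training targets normalized across prompts.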

Citation (APA)

Riordan, B., Horbach, A., Cahill, A., Zesch, T., & Lee, C. M. (2017). Investigating neural architectures for short answer scoring. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2017) (pp. 159–168). Association for Computational Linguistics. https://doi.org/10.18653/v1/w17-5017
