Practical use of a latent semantic analysis (LSA) model for automatic evaluation of written answers


Abstract

This paper presents research on the application of a latent semantic analysis (LSA) model to the automatic evaluation of short answers (25 to 70 words) to open-ended questions. In order to reach a viable application of this LSA model, the research goals were as follows: (1) to improve robustness, (2) to increase accuracy, and (3) to widen portability. The methods consisted of the following tasks: first, the implementation of word bigrams; second, the combination of unigram and bigram models using multiple linear regression; and, finally, the addition of an adjustment step, applied after score attribution, that takes into account the average word count of the answers. The corpus was composed of 359 answers written in response to two questions from a Brazilian public university's entrance examination, previously scored by human evaluators. The results demonstrate that the experiments achieved an accuracy of about 84.94 %, while the agreement between the two human evaluators was about 84.93 %. In conclusion, automatic evaluation technology is shown to be reaching a high level of efficiency.
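The pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the toy answers, reference text, scores, and the scikit-learn-based LSA (CountVectorizer + TruncatedSVD) are all assumptions, and the paper's actual adjustment step may differ in form.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LinearRegression
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical toy data: student answers with human-assigned scores,
# plus one reference (ideal) answer. Not the paper's corpus.
answers = [
    "photosynthesis converts light energy into chemical energy",
    "plants use light to make chemical energy through photosynthesis",
    "the mitochondria is the powerhouse of the cell",
    "cells produce energy in the mitochondria",
]
human_scores = np.array([5.0, 4.5, 1.0, 1.5])
reference = ["photosynthesis turns light energy into chemical energy in plants"]

def lsa_similarity(ngram_range, k=2):
    """Cosine similarity between each answer and the reference answer
    in a k-dimensional LSA space built from the given n-gram range."""
    vec = CountVectorizer(ngram_range=ngram_range)
    X = vec.fit_transform(answers + reference)
    Z = TruncatedSVD(n_components=k, random_state=0).fit_transform(X)
    return cosine_similarity(Z[:-1], Z[-1:]).ravel()

# Combined model: unigram and bigram LSA similarities as features,
# fitted to human scores with linear regression.
features = np.column_stack([
    lsa_similarity((1, 1)),   # unigram LSA model
    lsa_similarity((2, 2)),   # bigram LSA model
])
model = LinearRegression().fit(features, human_scores)
scores = model.predict(features)

# Adjustment step (one plausible reading): scale down answers whose
# word count falls short of the corpus average.
lengths = np.array([len(a.split()) for a in answers])
adjusted = scores * np.minimum(1.0, lengths / lengths.mean())
```

In a real setting the regression would be fitted on a held-out training split of human-scored answers and then used to score unseen ones; here everything is fitted on the toy set purely to show the data flow.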

Citation (APA):

Alves dos Santos, J. C., & Favero, E. L. (2015). Practical use of a latent semantic analysis (LSA) model for automatic evaluation of written answers. Journal of the Brazilian Computer Society, 21(1), 1–8. https://doi.org/10.1186/s13173-015-0039-7
