Comparing the performance of latent semantic analysis and probability latent semantic analysis models on autoscoring essay tasks


Abstract

This paper evaluates the performance differences between Latent Semantic Analysis (LSA) and Probability Latent Semantic Analysis (PLSA) as automated essay scoring (AES) tools for judging essay text quality. A correlational research design was used to examine the relationship between LSA performance and PLSA performance. We introduced three weighting methods and performed six experiments to compare the scoring performance of LSA and PLSA on a total of 2,444 Chinese essays. The results show strong correlations between the LSA scores and the PLSA scores. While the overall performance of PLSA is better than that of LSA, the findings of the current study do not corroborate previous claims that PLSA methods yield a significant improvement. The implication for AES is that both LSA and PLSA have limited capability at this point, and that more reliable measures for automated essay analysis and scoring, such as text formats and forms, still need to be a component of text-quality analysis.
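To illustrate the LSA side of the comparison, the following is a minimal, hypothetical sketch of LSA-based essay similarity scoring: build a term-document matrix, truncate its SVD to a low-rank latent space, and score each essay by cosine similarity to a reference text. The toy corpus, the vocabulary handling, and the choice of k are illustrative assumptions, not the authors' method or data; a real AES system would use a large corpus of graded essays and one of the weighting schemes the paper compares.

```python
# Hypothetical LSA scoring sketch (toy data; not the authors' implementation).
import numpy as np

essays = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are animals",
]
reference = "the cat sat on the log"  # stand-in for a high-quality model essay

# Term-document count matrix (rows: terms, columns: essays).
vocab = sorted({w for doc in essays + [reference] for w in doc.split()})
index = {w: i for i, w in enumerate(vocab)}

def vectorize(doc):
    v = np.zeros(len(vocab))
    for w in doc.split():
        v[index[w]] += 1.0
    return v

X = np.column_stack([vectorize(d) for d in essays])

# LSA: truncated SVD of the term-document matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2  # number of latent dimensions; a tuning choice
Uk, sk = U[:, :k], s[:k]

def to_latent(v):
    # Project a raw term vector into the k-dimensional latent space.
    return (v @ Uk) / sk

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Score each essay by latent-space similarity to the reference.
ref_latent = to_latent(vectorize(reference))
scores = [cosine(to_latent(vectorize(d)), ref_latent) for d in essays]
```

PLSA would replace the SVD step with an EM-fitted probabilistic topic model over the same term-document counts; the paper finds the two produce strongly correlated scores on its essay corpus.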

Citation (APA)

Ke, X., & Luo, H. (2017). Comparing the performance of latent semantic analysis and probability latent semantic analysis models on autoscoring essay tasks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10108 LNCS, pp. 401–411). Springer Verlag. https://doi.org/10.1007/978-3-319-52836-6_42
