High Precision Latent Semantic Evaluation for descriptive answer assessment

Abstract

This paper proposes an approach to evaluating students' descriptive answers using a comparison-based method in which the student's answer is compared with a standard answer. The standard answer contains domain-specific knowledge appropriate to the category (how, why, what, etc.) of question asked in the examination. Several state-of-the-art studies claim that LSA correlates with a human assessor's way of evaluating. With this as background, we investigated the evaluation of students' descriptive answers using Latent Semantic Analysis (LSA). In the course of the research, we found that standard LSA has limitations: LSA research usually involves heterogeneous text (text from various domains), which may include irrelevant terms that are highly susceptible to noisy, missing and inconsistent data. We propose a new technique inspired by LSA, denoted "High Precision Latent Semantic Evaluation" (HPLSE), in which LSA is modified to overcome some of these limitations; this also increases precision. With the proposed technique (HPLSE), for the same datasets, the average score difference and the standard deviation between the human assessor and the computer assessor are reduced, and the Pearson correlation coefficient (r) increases considerably. The new technique is discussed and demonstrated on various problem classes.
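To make the comparison-based setup concrete, the following is a minimal sketch of the *baseline* LSA pipeline the paper builds on (not the authors' HPLSE modification): answers are embedded in a low-rank latent semantic space learned from a small domain corpus, and the student's answer is scored by its cosine similarity to the standard answer. All texts, the corpus, and the rank parameter here are illustrative assumptions.

```python
# Sketch of standard LSA-based answer scoring (baseline, not HPLSE).
# Corpus, answers, and n_components are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Small domain-specific corpus used to learn the latent semantic space.
corpus = [
    "photosynthesis converts light energy into chemical energy in plants",
    "chlorophyll absorbs light to drive photosynthesis",
    "respiration releases the energy stored in glucose",
    "plants use carbon dioxide and water to make glucose",
]

standard_answer = "photosynthesis uses light, carbon dioxide and water to make glucose"
student_answer = "plants make glucose from light, water and carbon dioxide"

# Term-document matrix over corpus + both answers.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus + [standard_answer, student_answer])

# Project into a low-rank latent space (the rank is a tunable assumption).
lsa = TruncatedSVD(n_components=2, random_state=0)
Z = lsa.fit_transform(X)

# Score: cosine similarity between student and standard answers in latent space.
score = cosine_similarity(Z[-1:], Z[-2:-1])[0, 0]
print(round(float(score), 3))
```

A human-assigned mark can then be compared against this similarity score, which is how the average score difference, standard deviation, and Pearson correlation reported in the abstract would be computed over a dataset of graded answers.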

Citation (APA)

Kaur, A., & Sasi Kumar, M. (2018). High Precision Latent Semantic Evaluation for descriptive answer assessment. Journal of Computer Science, 14(10), 1293–1302. https://doi.org/10.3844/jcssp.2018.1293.1302
