Extending LMS to support IRT-based assessment test calibration


Abstract

Developing unambiguous and challenging assessment material for measuring educational attainment is a time-consuming, labor-intensive process. As a result, Computer Aided Assessment (CAA) tools are becoming widely adopted in academic environments in an effort to improve assessment quality and deliver reliable measurements of examinee performance. This paper introduces a methodological and architectural framework that embeds a CAA tool in a Learning Management System (LMS) to assist test developers in refining the items that make up assessment tests. An Item Response Theory (IRT) based analysis is applied to the dynamic assessment profile maintained by the LMS, and test developers define a set of validity rules over the statistical indices this analysis produces. By applying these rules, the LMS detects items with discrepancies (e.g., implausible difficulty or discrimination estimates) and flags them for content review. Repeating this procedure over successive test administrations can improve the overall efficiency of the testing process. © 2010 Springer-Verlag Berlin Heidelberg.
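As a concrete illustration of the rule-based calibration step the abstract describes, the sketch below assumes the IRT analysis yields two-parameter logistic (2PL) estimates per item: discrimination a and difficulty b. The threshold values, the `ItemEstimate` class, and the `flag_items` helper are illustrative assumptions standing in for the validity rules a test developer would configure in the LMS; they are not taken from the paper.

```python
import math
from dataclasses import dataclass


@dataclass
class ItemEstimate:
    """IRT parameter estimates for one test item (names are illustrative)."""
    item_id: str
    discrimination: float  # a: slope of the item characteristic curve
    difficulty: float      # b: ability level at which P(correct) = 0.5


def icc_2pl(theta: float, item: ItemEstimate) -> float:
    """2PL model: P(correct | theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-item.discrimination * (theta - item.difficulty)))


# Validity rules over the IRT indices; the thresholds below are assumed
# example values, mimicking rules a test developer might define.
VALIDITY_RULES = {
    "low discrimination": lambda it: it.discrimination < 0.5,
    "difficulty out of range": lambda it: not -3.0 <= it.difficulty <= 3.0,
}


def flag_items(estimates: list[ItemEstimate]) -> dict[str, list[str]]:
    """Return a mapping of item_id -> violated rules, for content review."""
    flagged: dict[str, list[str]] = {}
    for item in estimates:
        violations = [name for name, rule in VALIDITY_RULES.items() if rule(item)]
        if violations:
            flagged[item.item_id] = violations
    return flagged


if __name__ == "__main__":
    bank = [
        ItemEstimate("Q1", discrimination=1.2, difficulty=0.3),
        ItemEstimate("Q2", discrimination=0.2, difficulty=4.1),  # both rules fail
    ]
    print(flag_items(bank))
    # {'Q2': ['low discrimination', 'difficulty out of range']}
```

In the framework described by the paper, items flagged this way would be returned to their authors for content revision, and the analysis would be rerun on the next round of assessment data.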

Citation (APA)

Fotaris, P., Mastoras, T., Mavridis, I., & Manitsaris, A. (2010). Extending LMS to support IRT-based assessment test calibration. In Communications in Computer and Information Science (CCIS, Vol. 73, pp. 534–543). Springer. https://doi.org/10.1007/978-3-642-13166-0_75
