Modeling students' natural language explanations


Abstract

Intelligent tutoring systems have achieved demonstrable success in supporting formal problem solving. More recently, such systems have begun incorporating student explanations of problem solutions. Typically, these natural language explanations are entered with menus, but some ITSs accept open-ended typed input. Typed input requires more work from both developers and students, and evaluations of its added value for learning outcomes have been mixed. This paper examines whether typed input can yield more accurate student modeling than menu-based input, applying Knowledge Tracing student modeling to natural language inputs and revisiting the standard Knowledge Tracing definition of errors. The analyses indicate that typed explanations can yield more predictive models of student test performance than menu-based explanations, and that focusing on semantic errors can further improve predictive accuracy. © Springer-Verlag Berlin Heidelberg 2007.
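
For readers unfamiliar with the student-modeling approach the abstract refers to, the sketch below illustrates the standard Bayesian Knowledge Tracing update, in which the estimated probability that a student knows a skill is revised after each observed response. The parameter names and values are illustrative defaults, not taken from this paper.

    # Minimal sketch of a standard Bayesian Knowledge Tracing update.
    # Parameter values are illustrative, not taken from the paper.

    def bkt_update(p_known, correct, p_transit=0.1, p_guess=0.2, p_slip=0.1):
        """Return the updated probability that the student knows the skill."""
        if correct:
            # Posterior that the skill was known, given a correct response
            evidence = p_known * (1 - p_slip) + (1 - p_known) * p_guess
            posterior = p_known * (1 - p_slip) / evidence
        else:
            # Posterior that the skill was known, given an incorrect response
            evidence = p_known * p_slip + (1 - p_known) * (1 - p_guess)
            posterior = p_known * p_slip / evidence
        # The student may also learn the skill at this opportunity
        return posterior + (1 - posterior) * p_transit

    def predict_correct(p_known, p_guess=0.2, p_slip=0.1):
        """Predicted probability of a correct response on the next opportunity."""
        return p_known * (1 - p_slip) + (1 - p_known) * p_guess

    # Example: trace one skill across a short sequence of observed responses
    p_known = 0.3  # illustrative prior probability that the skill is known
    for correct in [True, False, True, True]:
        print(f"predicted P(correct) = {predict_correct(p_known):.2f}, observed {correct}")
        p_known = bkt_update(p_known, correct)
    print(f"final P(known) = {p_known:.2f}")

The paper's contribution concerns what counts as an error (e.g., semantic errors in typed explanations) when feeding observations into such a model, rather than the update equations themselves.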

Citation (APA)

Corbett, A., Wagner, A., Lesgold, S., Ulrich, H., & Stevens, S. (2007). Modeling students’ natural language explanations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4511 LNCS, pp. 117–126). Springer Verlag. https://doi.org/10.1007/978-3-540-73078-1_15
