Learning what works in ITS from non-traditional randomized controlled trial data

Abstract

The traditional, well-established approach to finding out what works in education research is to run a randomized controlled trial (RCT) using a standard pretest and posttest design. RCTs have been used in the intelligent tutoring community for decades to determine which questions and tutorial feedback work best. Practically speaking, however, ITS creators need to decide what content to deploy without the benefit of having run an RCT in advance. Additionally, most log data produced by an ITS is not in a form that can easily be evaluated with traditional methods. As a result, much of the data produced by tutoring systems that we would like to learn from goes unanalyzed. In prior work we introduced a potential solution to this problem: a Bayesian network method that analyzes the log data of a tutoring system to determine which items are most effective for learning among a set of items of the same skill. The method was validated by way of simulations. In this work we further evaluate the method by applying it to real-world data from 11 experiment datasets that investigate the effectiveness of various forms of tutorial help in a web-based math tutoring system. The goal of the method is to determine which questions and tutorial strategies cause the most learning. We compared these results with a more traditional hypothesis-testing analysis, adapted to our particular datasets. We analyzed experiments in mastery learning problem sets as well as experiments in problem sets that, although they were not planned RCTs, took on the standard RCT form. We found that the tutorial help or item chosen by the Bayesian method as having the highest rate of learning agreed with the traditional analysis in 9 of the 11 experiments. The practical impact of this work is an abundance of knowledge about what works that can now be learned from the thousands of experimental designs intrinsic to the datasets of tutoring systems that assign items in a random order.
© 2010 Springer-Verlag.
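The abstract does not describe the model itself, so as a rough illustration only, the sketch below shows the intuition behind mining randomly ordered items for "intrinsic experiments": credit an item with learning when the student's next attempt on the same skill is answered correctly. This is not the authors' Bayesian network method; the function name, data layout, and toy log data are all invented for this example.

```python
from collections import defaultdict

def item_learning_rates(sequences):
    """Crude per-item 'learning rate': the fraction of times a student
    answers the *next* item correctly after attempting this item.
    sequences: one list per student of (item_id, answered_correctly)
    pairs, with items of a single skill assigned in random order.
    """
    next_correct = defaultdict(int)  # times the following item was correct
    attempts = defaultdict(int)      # times this item had a following item
    for seq in sequences:
        # Pair each attempt with the attempt that follows it.
        for (item, _), (_, nxt_ok) in zip(seq, seq[1:]):
            attempts[item] += 1
            next_correct[item] += int(nxt_ok)
    return {i: next_correct[i] / attempts[i] for i in attempts}

# Toy logs: students see items A and B of one skill in random order.
logs = [
    [("A", 0), ("B", 1)],
    [("A", 1), ("B", 1)],
    [("B", 0), ("A", 0)],
    [("B", 1), ("A", 1)],
]
rates = item_learning_rates(logs)
best = max(rates, key=rates.get)  # item with the highest estimated rate
```

In this toy data, performance after seeing item A is better than after item B, so A would be flagged as the more effective item; the paper's method instead fits these tendencies jointly in a Bayesian network and compares them against a hypothesis-testing analysis.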

Citation (APA)
Pardos, Z. A., Dailey, M. D., & Heffernan, N. T. (2010). Learning what works in ITS from non-traditional randomized controlled trial data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6095 LNCS, pp. 41–50). https://doi.org/10.1007/978-3-642-13437-1_5
