Automated feedback on the structure of hypothesis tests


Abstract

Hypothesis testing is a challenging topic for many students in introductory university statistics courses. In this paper we explore how automated feedback in an Intelligent Tutoring System can foster students’ ability to carry out hypothesis tests. Students in an experimental group (N = 163) received elaborate feedback on the structure of the hypothesis testing procedure, while students in a control group (N = 151) only received verification feedback. Immediate feedback effects were measured by comparing numbers of attempted tasks, complete solutions, and errors between the groups, while transfer of feedback effects was measured by student performance on follow-up tasks. Results show that students receiving elaborate feedback solved more tasks and made fewer errors than students receiving only verification feedback, which suggests that students benefited from the elaborate feedback.
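The comparison described in the abstract — contrasting counts of solved tasks and errors between an experimental and a control group — is itself an instance of the two-sample hypothesis testing the paper teaches. As a minimal sketch (with made-up illustrative counts, not the paper's data), a Welch two-sample t statistic for such a comparison can be computed as:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic for possibly unequal variances.

    Returns (t, approximate degrees of freedom via Welch-Satterthwaite).
    """
    na, nb = len(sample_a), len(sample_b)
    ma, mb = mean(sample_a), mean(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se2 = va / na + vb / nb                          # squared standard error
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical solved-task counts per student (illustrative only)
elaborate = [8, 9, 7, 10, 9, 8, 9]       # elaborate-feedback group
verification = [6, 7, 5, 8, 6, 7, 6]     # verification-only group

t, df = welch_t(elaborate, verification)
print(f"t = {t:.2f}, df = {df:.1f}")
```

A large positive t here would correspond to the paper's finding that the elaborate-feedback group solved more tasks; the actual analysis and effect sizes in the study may of course differ from this sketch.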

Citation (APA)

Tacoma, S., Heeren, B., Jeuring, J., & Drijvers, P. (2019). Automated feedback on the structure of hypothesis tests. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11626 LNAI, pp. 281–285). Springer Verlag. https://doi.org/10.1007/978-3-030-23207-8_52
