Comparing the Effect of Contextualized Versus Generic Automated Feedback on Students' Scientific Argumentation

Abstract

This study uses a computerized formative assessment system that provides automated scoring and feedback to help students write scientific arguments in a climate change curriculum. We compared the effect of contextualized versus generic automated feedback on students' explanations of scientific claims and their attributions of uncertainty to those claims. Classes were randomly assigned to the contextualized feedback condition (227 students from 11 classes) or to the generic feedback condition (138 students from 9 classes). The results indicate that the formative assessment helped students improve both their explanation and uncertainty attribution scores, with larger gains in uncertainty attribution. Although contextualized feedback was associated with higher final scores, this effect was moderated by the number of revisions made, the initial score, and gender. We discuss how these results may relate to students' relative familiarity with writing scientific explanations versus uncertainty attributions at school.

Citation (APA)

Olivera-Aguilar, M., Lee, H. S., Pallant, A., Belur, V., Mulholland, M., & Liu, O. L. (2022). Comparing the Effect of Contextualized Versus Generic Automated Feedback on Students’ Scientific Argumentation. ETS Research Report Series, 2022(1), 1–14. https://doi.org/10.1002/ets2.12344
