The quality of reporting of experimental results in computing education literature has previously been shown to be less than rigorous. In this study, we first examined research standards set forth by four organizations: the American Psychological Association (APA), the American Educational Research Association (AERA), the What Works Clearinghouse (WWC), and the CONsolidated Standards of Reporting Trials (CONSORT). We selected the five most important data standards based on their prominence across all four sets and on the most typical study designs in computing education research. We then examined 76 articles designated as quantitative K-12 research studies published in ten venues (2012-2018) to determine whether the reporting in these articles met these five standards. Findings indicate that only 48% of these articles report effect size, and even fewer (11%) report confidence intervals and levels. We found that the reported data did not meet the standard that data should be reported in a way that allows readers to construct effect-size estimates and confidence intervals beyond those supplied in the paper. Additionally, authors used existing instruments less than a quarter of the time (24%) and used instruments with evidence of reliability and validity less than half of the time (39%). We conclude with recommendations for the K-12 computing education research community to consider when reporting statistical data in future work, so that the level of rigorous reporting in this growing field can increase.
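To make the effect-size and confidence-interval standard concrete, the Python sketch below (a hypothetical illustration, not drawn from the paper) shows how reporting group means, standard deviations, and sample sizes lets a reader reconstruct Cohen's d and an approximate 95% confidence interval; the summary numbers are invented for the example.

import math

def cohens_d_with_ci(m1, s1, n1, m2, s2, n2, z=1.96):
    """Cohen's d and an approximate confidence interval from the
    six summary statistics (mean, SD, n) of two groups."""
    # Pooled standard deviation across the two groups.
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    # Large-sample standard error of d (Hedges & Olkin approximation).
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Hypothetical summary data: treatment vs. control post-test scores.
d, (lo, hi) = cohens_d_with_ci(m1=78.2, s1=9.4, n1=31, m2=72.5, s2=10.1, n2=29)
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")

An article that tabulates these six summary numbers per condition satisfies the reconstruction standard even if it never prints d itself, which is the spirit of the recommendation above.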
McGill, M. M., & Decker, A. (2020). A gap analysis of statistical data reporting in k-12 computing education research: Recommendations for improvement. In SIGCSE 2020 - Proceedings of the 51st ACM Technical Symposium on Computer Science Education (pp. 591–597). https://doi.org/10.1145/3328778.3366842