Abstract
Feedback is a critical element of student-instructor interaction: it provides a direct means for students to learn from their mistakes. However, with student-to-teacher ratios growing rapidly, it is increasingly challenging for instructors to provide quality feedback to individual students. While significant effort has been directed at automating feedback generation, relatively little attention has been given to the underlying characteristics of feedback. We develop a methodology for analyzing instructor-provided feedback and determining how it correlates with changes in student grades, using data from online higher education engineering classrooms. Specifically, we featurize written feedback on individual assignments using Natural Language Processing (NLP) techniques including sentiment analysis, bigram splitting, and Named Entity Recognition (NER) to quantify post-, sentence-, and word-dependent attributes of grader writing. We demonstrate that student grade improvement can be well approximated by a multivariate linear model, with average fits across course sections between 67% and 83%. We identify several statistically significant contributors to and detractors from student success contained in instructor feedback. For example, our results reveal that inclusion of the student's name is significantly correlated with an improvement in post-feedback grades, as is inclusion of specific assignment-related keywords. Finally, we discuss how this methodology can be incorporated into educational technology systems to make recommendations for feedback content based on observed student behavior.
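The pipeline the abstract describes can be sketched in simplified form: featurize each feedback post (sentiment, student-name mention, assignment-keyword hits, bigram count), then fit a multivariate linear model of grade change on those features. This is a minimal illustration, not the authors' implementation; the lexicons, keywords, and toy data below are invented for demonstration, and a tiny word-list sentiment score stands in for a real sentiment model, as plain least squares stands in for their fitted model.

```python
import re
import numpy as np

# Hypothetical mini-lexicons standing in for a real sentiment analyzer.
POSITIVE = {"good", "great", "clear", "nice", "excellent"}
NEGATIVE = {"wrong", "missing", "unclear", "incorrect", "poor"}

def featurize(feedback, student_name, keywords):
    """Map one feedback post to numeric features:
    [intercept, sentiment score, name mentioned, keyword hits, bigram count]."""
    tokens = re.findall(r"[a-z']+", feedback.lower())
    sentiment = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    name_flag = 1.0 if student_name.lower() in tokens else 0.0
    kw_hits = sum(t in keywords for t in tokens)
    bigrams = list(zip(tokens, tokens[1:]))  # "bigram splitting"
    return [1.0, float(sentiment), name_flag, float(kw_hits), float(len(bigrams))]

# Invented toy data: (feedback text, student name, observed grade change).
posts = [
    ("Good work, Alice! The integral setup is clear.", "Alice", 4.0),
    ("Incorrect boundary condition; the derivation is unclear.", "Bob", -2.0),
    ("Nice proof, Carol, but the lemma is missing a step.", "Carol", 1.0),
    ("Wrong sign in the integral and a missing term.", "Dave", -3.0),
]
KEYWORDS = {"integral", "derivation", "lemma", "proof"}  # assumed assignment terms

X = np.array([featurize(text, name, KEYWORDS) for text, name, _ in posts])
y = np.array([change for *_, change in posts])

# Multivariate linear fit of grade change on feedback features.
coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
```

The fitted coefficients would then indicate which feedback attributes correlate with grade improvement, analogous to the significance analysis the paper reports.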
Nicoll, S., Douglas, K., & Brinton, C. (2022). Giving Feedback on Feedback: An Assessment of Grader Feedback Construction on Student Performance. In ACM International Conference Proceeding Series (pp. 239–249). Association for Computing Machinery. https://doi.org/10.1145/3506860.3506897