Writing effective autograded exercises using Bloom's taxonomy

Abstract

Computer Science (CS) enrollment continues to grow every year, and many CS instructors have turned to auto-graded exercises to ease the grading load while still allowing students to practice concepts. As the use of autograders becomes more common, it is important that exercise sets be written to maximize student benefit. In this paper, we use Bloom's Taxonomy (BT) to create auto-graded exercise sets that scale up from lower to higher levels of complexity. We conducted a field experiment in an introductory programming course (264 students), focusing on learning efficiency, code quality, and students' perception of their learning experience. We found that students need more submission attempts in the auto-grader when they are given BT Apply/Analyze-type questions that contain some starter code. Students complete the auto-graded assignments with fewer submissions when there is no starter code and they must write their solution from scratch, i.e., BT Create-type questions. However, when writing code from scratch, students' code quality can suffer because they are not required to actually understand the concept being tested and may find a workaround that passes the auto-grader's tests.
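
To make the distinction concrete, the following sketch contrasts the two exercise forms in Python. The exercise, function names, and test are hypothetical illustrations, not material from the study.

# Hypothetical BT Apply-type exercise: starter code is provided and
# the student fills in only the marked line.
def sum_evens(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:  # STUDENT TODO: keep only the even values
            total += n
    return total

# Hypothetical BT Create-type exercise: no starter code; the student
# writes the entire function from scratch.
def sum_evens_from_scratch(numbers):
    return sum(n for n in numbers if n % 2 == 0)

# A weak auto-grader check: a student who simply hard-codes
# "return 6" would pass this single test without understanding the
# concept, which is the workaround risk the abstract describes.
assert sum_evens([1, 2, 3, 4]) == 6
assert sum_evens_from_scratch([1, 2, 3, 4]) == 6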

Citation (APA)

Battestilli, L., & Korkes, S. (2020). Writing effective autograded exercises using Bloom’s taxonomy. In ASEE Annual Conference and Exposition, Conference Proceedings (Vol. 2020-June). American Society for Engineering Education. https://doi.org/10.18260/1-2--35711
