Software models are increasingly used in practice. To educate the next generation of software engineers, it is important that they learn how to model software systems well, so that they can design them effectively in industry. It is equally important that instructors have tools that help them assess students' models efficiently. In this paper, we investigate how a tool that combines a simple heuristic with machine learning techniques can be used to help assess student submissions in model-driven engineering courses. We apply our proposed technique first to identify submissions of high quality and second to predict approximate letter grades. For the former, the results are comparable to human grading and to a complex rule-based technique; for the latter, they are surprisingly accurate.
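To make the general idea concrete, the following is a purely illustrative sketch, not the authors' actual pipeline: a toy heuristic scores a student model by its overlap with an instructor solution, and a nearest-neighbour lookup over previously graded submissions maps that score to a letter grade. All names, data, and grade thresholds here are invented for illustration.

```python
# Hypothetical sketch: heuristic scoring plus a trivial learned mapping.
# None of this reflects the heuristic or ML model used in the paper.

def heuristic_score(student_elements, solution_elements):
    """Fraction of the expected model elements present in the submission."""
    solution = set(solution_elements)
    return len(set(student_elements) & solution) / len(solution)

def predict_grade(score, graded_examples):
    """1-nearest-neighbour over (score, letter_grade) pairs from past cohorts."""
    return min(graded_examples, key=lambda ex: abs(ex[0] - score))[1]

# Invented historical data: heuristic scores of past submissions and their grades.
past = [(0.95, "A"), (0.80, "B"), (0.60, "C"), (0.40, "D")]

# Invented class-diagram elements for a student submission and the solution.
student = {"Student", "Course", "enrolls"}
solution = {"Student", "Course", "Instructor", "enrolls"}

score = heuristic_score(student, solution)
print(score, predict_grade(score, past))  # → 0.75 B
```

In practice the features would be richer than a single overlap ratio and the classifier would be trained rather than a raw nearest-neighbour lookup, but the division of labour, a cheap heuristic feeding a learned grade predictor, is the same.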
CITATION STYLE
Boubekeur, Y., Mussbacher, G., & McIntosh, S. (2020). Automatic assessment of students’ software models using a simple heuristic and machine learning. In Proceedings - 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems, MODELS-C 2020 - Companion Proceedings (pp. 84–93). Association for Computing Machinery, Inc. https://doi.org/10.1145/3417990.3418741