This paper presents the ACTA system, which performs automated short-answer grading in the domain of high-stakes medical exams. The system builds on previous work on neural similarity-based grading approaches, applying them to the medical domain and using contrastive learning to optimize the similarity metric. ACTA is evaluated against three strong baselines and is developed in alignment with operational needs, under which low-confidence responses are flagged for human review. Learning curves are explored to understand the effect of training-data size on performance. The results demonstrate that ACTA leads to a substantially lower number of responses being flagged for human review, while maintaining high classification accuracy.
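The grading-with-review workflow described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the ACTA implementation): it scores a response by its maximum cosine similarity to reference correct answers, accepts high-similarity responses, and flags borderline scores for human review. All function names, thresholds, and vectors are illustrative assumptions.

```python
import math


def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


def grade(response_vec, correct_vecs, accept=0.8, review_band=0.1):
    """Grade a response embedding against reference correct-answer embeddings.

    Scores are max similarity to any reference answer; scores just below
    the acceptance threshold fall into a band flagged for human review.
    Thresholds here are illustrative, not values from the paper.
    """
    score = max(cosine(response_vec, ref) for ref in correct_vecs)
    if score >= accept:
        return "correct", score
    if score >= accept - review_band:
        return "flag_for_review", score  # low-confidence: route to a human
    return "incorrect", score


# Toy usage with 2-d "embeddings" standing in for encoder outputs.
refs = [[1.0, 0.0], [0.7, 0.7]]
label, score = grade([0.9, 0.1], refs)
```

In a contrastive-learning setup, the encoder producing these vectors would be trained so that correct responses lie close to the reference answers and incorrect ones are pushed apart, which is what makes a simple threshold on similarity a usable confidence signal.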
CITATION STYLE
Suen, K. Y., Yaneva, V., Ha, L. A., Mee, J., Zhou, Y., & Harik, P. (2023). ACTA: Short-Answer Grading in High-Stakes Medical Exams. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 443–447). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.bea-1.36