Socratic Questioning of Novice Debuggers: A Benchmark Dataset and Preliminary Evaluations


Abstract

Socratic questioning is a teaching strategy where the student is guided towards solving a problem on their own, instead of being given the solution directly. In this paper, we introduce a dataset of Socratic conversations where an instructor helps a novice programmer fix buggy solutions to simple computational problems. The dataset is then used for benchmarking the Socratic debugging abilities of GPT-based language models. While GPT-4 is observed to perform much better than GPT-3.5, its precision and recall still fall short of human expert abilities, motivating further work in this area.
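The abstract reports precision and recall of model-generated Socratic questions against expert-written ones. The sketch below is not the authors' evaluation code; it is a minimal, hypothetical illustration of how micro-averaged precision and recall could be computed once each conversation turn has been annotated with match judgments (the `TurnJudgment` fields are assumptions, not fields of the released dataset).

```python
# Hypothetical sketch: micro-averaged precision/recall over generated
# Socratic questions, assuming per-turn human judgments of which generated
# questions are valid and which expert ("gold") questions they cover.

from dataclasses import dataclass


@dataclass
class TurnJudgment:
    """Assumed per-turn annotation (illustrative only)."""
    num_generated: int       # questions the model produced for this turn
    num_good_generated: int  # of those, how many were judged valid Socratic questions
    num_gold: int            # expert-written questions for this turn
    num_gold_covered: int    # gold questions covered by at least one generated question


def precision_recall(turns: list[TurnJudgment]) -> tuple[float, float]:
    """Micro-average precision and recall across all conversation turns."""
    generated = sum(t.num_generated for t in turns)
    good = sum(t.num_good_generated for t in turns)
    gold = sum(t.num_gold for t in turns)
    covered = sum(t.num_gold_covered for t in turns)
    precision = good / generated if generated else 0.0
    recall = covered / gold if gold else 0.0
    return precision, recall


if __name__ == "__main__":
    # Toy numbers purely for illustration.
    judgments = [
        TurnJudgment(num_generated=3, num_good_generated=2, num_gold=2, num_gold_covered=1),
        TurnJudgment(num_generated=2, num_good_generated=2, num_gold=3, num_gold_covered=2),
    ]
    p, r = precision_recall(judgments)
    print(f"precision={p:.2f} recall={r:.2f}")
```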

Cite

CITATION STYLE

APA

Al-Hossami, E., Bunescu, R., Teehan, R., Powell, L., Mahajan, K., & Dorodchi, M. (2023). Socratic Questioning of Novice Debuggers: A Benchmark Dataset and Preliminary Evaluations. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 709–726). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.bea-1.57
