Instructor Perceptions of AI Code Generation Tools - A Multi-Institutional Interview Study

Abstract

Much of the recent work investigating large language models and AI Code Generation tools in computing education has focused on assessing their capabilities for solving typical programming problems and for generating resources such as code explanations and exercises. If progress is to be made toward the inevitable lasting pedagogical change, there is a need for research that explores the instructor voice, seeking to understand how instructors with a range of experiences plan to adapt. In this paper, we report the results of an interview study involving 12 instructors from Australia, Finland and New Zealand, in which we investigate educators' current practices, concerns, and planned adaptations relating to these tools. Through this empirical study, our goal is to prompt dialogue between researchers and educators to inform new pedagogical strategies in response to the rapidly evolving landscape of AI code generation tools.

Citation (APA)

Sheard, J., Denny, P., Hellas, A., Leinonen, J., Malmi, L., & Simon. (2024). Instructor Perceptions of AI Code Generation Tools - A Multi-Institutional Interview Study. In SIGCSE 2024 - Proceedings of the 55th ACM Technical Symposium on Computer Science Education (Vol. 1, pp. 1223–1229). Association for Computing Machinery, Inc. https://doi.org/10.1145/3626252.3630880
