Analyzing AI and student responses through the lens of sensemaking and mechanistic reasoning

Citations of this article: 3
Mendeley readers: 48

Abstract

Physics education research (PER) has a rich tradition of designing learning environments that promote valued epistemic practices such as sensemaking and mechanistic reasoning. Recent technological advancements, particularly artificial intelligence (AI), have gained significant traction in the PER community due to AI's human-like, sophisticated responses to physics tasks. In this study, we contribute to these ongoing efforts by comparing AI (ChatGPT) and student responses to a physics task through the cognitive frameworks of sensemaking and mechanistic reasoning. Findings highlight that, by virtue of its training data set, ChatGPT's responses provide evidence of mechanistic reasoning and mimic the vocabulary of experts. On the other hand, half of the students' responses evidenced sensemaking and reflected an effective amalgamation of diagram-based and mathematical reasoning, showcasing a comprehensive problem-solving approach. In other words, while AI responses reflected how physics is talked about, a portion of students' responses reflected how physics is practiced during problem solving. We discuss the implications of these findings with an emphasis on the epistemology of AI responses and the design of next-generation assessments in physics.

Citation (APA)

Zollman, D., Sirnoorkar, A., & Laverty, J. T. (2023). Analyzing AI and student responses through the lens of sensemaking and mechanistic reasoning. In Physics Education Research Conference Proceedings (pp. 415–420). American Association of Physics Teachers. https://doi.org/10.1119/perc.2023.pr.Zollman
