Challenging ChatGPT with Different Types of Physics Education Questions

  • López-Simó V
  • Rezende M


Abstract

…with students to critique and fix them.2 On the other hand, some investigations have analyzed how well this AI tool performs in solving physics problems. According to Ref. 3, ChatGPT would narrowly pass a calculus-based physics course while exhibiting many of the preconceptions and errors of a beginning learner. In parallel, Ref. 4 found that ChatGPT-3.5 can match or exceed the median performance of a university student who has completed one semester of college physics, and Ref. 5 found very impressive basic problem-solving capabilities of ChatGPT in interpreting simple physics problems, assuming relevant parameters, and writing correct code. These previous contributions focus either on identifying good practices for ChatGPT-based physics education or on testing ChatGPT's physics performance against real students, whereas our particular interest lies in understanding how different types of physics education problems may influence both the correctness and the variability of the answers the tool provides. It is well known in physics education research that the type of physics education question strongly affects the ways students reason and arrive at an answer.6,7 For this reason, our research question is: How are the correctness and the variability of the answers provided by ChatGPT affected by the type of physics education question?

Citation (APA)

López-Simó, V., & Rezende, M. F. (2024). Challenging ChatGPT with Different Types of Physics Education Questions. The Physics Teacher, 62(4), 290–294. https://doi.org/10.1119/5.0160160
