This work explores the mathematical modelling capabilities of various iterations of ChatGPT, focusing on their performance across tasks of differing complexity and openness. The study examines the abilities of GPT-3.5, GPT-4.0, and a more specifically instructed version, GPT-MM, in multiple scenarios. All versions demonstrate basic mathematical problem-solving skills, but their effectiveness declines as task complexity increases. While GPT-4.0 and GPT-MM show marginal improvements in providing detailed solutions, significant challenges persist, especially in moderate to complex modelling contexts where the models struggle to grasp the nuances of the tasks. Additionally, the study suggests that the openness of modelling tasks has a limited impact on performance, indicating that mathematical and contextual complexity play more critical roles. The implications of these observations are discussed in terms of potential enhancements to teaching methodologies and the integration of AI tools such as GPT in educational settings. This underscores the need for further research to fully understand the capabilities and limitations of AI tools and to ensure their effective use in education.
Citation
Spreitzer, C., Straser, O., Zehetmeier, S., & Maaß, K. (2024). Mathematical Modelling Abilities of Artificial Intelligence Tools: The Case of ChatGPT. Education Sciences, 14(7). https://doi.org/10.3390/educsci14070698