Abstract
Background: Refractive surgery research aims to optimally precategorize patients by their suitability for various types of surgery. Recent advances have led to the development of artificial intelligence–powered algorithms, including machine learning approaches, to assess risk and enhance workflow. Large language models (LLMs) such as ChatGPT-4 (OpenAI LP) have emerged as potential general artificial intelligence tools that can assist across various disciplines, possibly including refractive surgery decision-making. However, their actual capability to precategorize refractive surgery patients based on real-world parameters remains unexplored.
Objective: This exploratory study aimed to validate ChatGPT-4's capability to precategorize refractive surgery patients based on commonly used clinical parameters. The goal was to assess whether ChatGPT-4's performance when categorizing batch inputs is comparable to that of a refractive surgeon. A simple binary categorization (patient suitable for laser refractive surgery or not) as well as a more detailed set of categories were compared.
Methods: Data from 100 consecutive patients of a refractive clinic were anonymized and analyzed. Parameters included age, sex, manifest refraction, visual acuity, and various corneal measurements and indices from Scheimpflug imaging. The study compared ChatGPT-4's categorizations with a clinician's using the Cohen κ coefficient, a chi-square test, a confusion matrix, accuracy, precision, recall, F1-score, and the area under the receiver operating characteristic curve.
Results: A statistically significant, noncoincidental accordance was found between ChatGPT-4's and the clinician's categorizations, with a Cohen κ coefficient of 0.399 for 6 categories (95% CI 0.256-0.537) and 0.610 for binary categorization (95% CI 0.372-0.792). However, the model showed temporal instability and response variability.
The chi-square test on 6 categories indicated an association between the 2 raters' distributions (χ²₅=94.7, P
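The Cohen κ coefficient reported above measures how much two raters (here, ChatGPT-4 and the clinician) agree beyond what chance alone would produce. A minimal sketch of the computation, using purely hypothetical binary ratings (the study's real patient data are not reproduced here):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the agreement expected by chance from each rater's
    marginal label frequencies.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of cases where both raters match.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the marginal distributions of each rater.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    p_expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical binary labels (1 = suitable for laser refractive surgery).
surgeon = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
model   = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(round(cohen_kappa(surgeon, model), 3))  # → 0.583
```

The same function generalizes unchanged to the 6-category case, since it iterates over whatever label set the two raters use. In practice, `sklearn.metrics.cohen_kappa_score` computes the identical statistic.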
Author-supplied keywords
- AI-powered algorithm
- ChatGPT
- ChatGPT-4
- artificial intelligence
- categorization
- clinical
- data analysis
- decision support systems
- decision-making
- eHealth
- health informatics
- large language model
- machine learning
- medical decision-making
- ophthalmology
- predictive modeling
- refractive surgery
- refractive surgical procedures
- risk assessment
Cite
Ćirković, A., & Katz, T. (2023). Exploring the Potential of ChatGPT-4 in Predicting Refractive Surgery Categorizations: Comparative Study. JMIR Formative Research, 7(1). https://doi.org/10.2196/51798