Background: Over the past year, the world has been captivated by the potential of artificial intelligence (AI). The appetite for AI in science, and specifically in healthcare, is substantial. It is imperative to understand the credibility of large language models in assisting the public with medical queries. Objective: To evaluate the ability of ChatGPT to provide reasonably accurate answers to public queries within the domain of otolaryngology. Methods: Two board-certified otolaryngologists (HZ, RS) entered 30 text-based patient queries into the ChatGPT-3.5 model. ChatGPT responses were rated by the physicians on a 3-point scale (accurate, partially accurate, incorrect), while layperson reviewers rated the responses on a similar 3-point confidence scale. Demographic data on gender and education level were recorded for the public reviewers. Inter-rater agreement percentages were calculated, with 95% confidence intervals and significance tests based on the binomial distribution. Statistical significance was defined as p
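The methods describe inter-rater agreement percentages with binomial-based 95% confidence intervals. The abstract does not specify the exact interval construction, so the following is a minimal sketch assuming a standard normal-approximation (Wald) interval on the binomial agreement proportion; the counts used are hypothetical, not the study's data.

```python
from math import sqrt

def agreement_ci(n_agree, n_total, z=1.96):
    """Observed agreement proportion with a normal-approximation (Wald)
    95% confidence interval based on the binomial distribution.

    Note: the Wald interval is an assumption here; the study may have
    used an exact (Clopper-Pearson) or other binomial interval.
    """
    p = n_agree / n_total
    se = sqrt(p * (1 - p) / n_total)  # binomial standard error of the proportion
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical example: two raters agree on 24 of 30 queries
p, lo, hi = agreement_ci(24, 30)
```

With these illustrative counts, the observed agreement is 0.80 with an interval of roughly 0.66 to 0.94; an exact binomial interval would be somewhat wider at this small sample size.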
Zalzal, H. G., Abraham, A., Cheng, J., & Shah, R. K. (2024). Can ChatGPT help patients answer their otolaryngology questions? Laryngoscope Investigative Otolaryngology, 9(1). https://doi.org/10.1002/lio2.1193