ChatGPT in Occupational Medicine: A Comparative Study with Human Experts


Abstract

The objective of this study is to evaluate ChatGPT’s accuracy and reliability in answering complex medical questions related to occupational health and to explore the implications and limitations of AI in occupational health medicine. The study also provides recommendations for future research in this area and informs decision-makers about AI’s impact on healthcare. A group of physicians was enlisted to create a dataset of questions and answers on Italian occupational medicine legislation. The physicians were divided into two teams, and each team member was assigned a different subject area. ChatGPT was used to generate an answer for each question, both with and without the relevant legislative text as context. The two teams then evaluated the human- and AI-generated answers in a blinded manner, each team reviewing the other team’s work. Rated on a 5-point Likert scale, occupational physicians outperformed ChatGPT in generating accurate answers, while the answers provided by ChatGPT with access to the legislative texts were comparable to those of the professional doctors. Still, we found that evaluators tended to prefer answers generated by humans, indicating that while ChatGPT is useful, users still value the expertise of occupational medicine professionals.

Citation (APA)

Padovan, M., Cosci, B., Petillo, A., Nerli, G., Porciatti, F., Scarinci, S., … Palla, A. (2024). ChatGPT in Occupational Medicine: A Comparative Study with Human Experts. Bioengineering, 11(1). https://doi.org/10.3390/bioengineering11010057
