Assessment of readability, reliability, and quality of ChatGPT®, BARD®, Gemini®, Copilot®, Perplexity® responses on palliative care


Abstract

No study has comprehensively evaluated the readability and quality of "palliative care" information provided by the artificial intelligence (AI) chatbots ChatGPT®, Bard®, Gemini®, Copilot®, and Perplexity®. Our study is an observational, cross-sectional original research study. Each of the AI chatbots ChatGPT®, Bard®, Gemini®, Copilot®, and Perplexity® was asked to answer the 100 questions most frequently asked by patients about palliative care. Responses from each of the 5 AI chatbots were analyzed separately. This study did not involve any human participants. Study results revealed significant differences between the readability assessments of responses from all 5 AI chatbots (P

Citation (APA)

Hanci, V., Ergün, B., Gül, Ş., Uzun, Ö., Erdemir, İ., & Hanci, F. B. (2024). Assessment of readability, reliability, and quality of ChatGPT®, BARD®, Gemini®, Copilot®, Perplexity® responses on palliative care. Medicine (United States), 103(33), e39305. https://doi.org/10.1097/MD.0000000000039305
