Recent advances in large language models (LLMs) have generated significant interest in their application across various domains, including healthcare. However, there is limited data on their safety and performance in real-world scenarios. This study uses data collected by an autonomous telemedicine clinical assistant. The assistant asks symptom-based questions to elicit patient concerns and allows patients to ask questions about their post-operative recovery. We utilise real-world postoperative questions posed to the assistant by a cohort of 120 patients to examine the safety and appropriateness of responses generated by ChatGPT, a recent popular LLM from OpenAI. We demonstrate that LLMs have the potential to helpfully address routine patient queries following cataract surgery. However, today's models have important safety limitations which must be considered.
CITATION
Chowdhury, M., Lim, E., Higham, A., McKinnon, R., Ventoura, N., He, Y. V., & De Pennington, N. (2023). Can Large Language Models Safely Address Patient Questions Following Cataract Surgery? In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 131–137). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.clinicalnlp-1.17