Background: Hospital discharge summaries play an essential role in informing GPs of recent admissions, ensuring continuity of care and preventing adverse events; however, they are notoriously poorly written, time-consuming to produce, and can result in delayed discharge.
Aim: To evaluate the potential of artificial intelligence (AI) to produce high-quality discharge summaries equivalent to those of a doctor who has completed the UK Foundation Programme.
Design & setting: Feasibility study using 25 mock patient vignettes.
Method: Twenty-five mock patient vignettes were written by the authors. Five junior doctors each wrote discharge summaries from five of the case vignettes. The same case vignettes were input into ChatGPT. In total, 50 discharge summaries were generated: 25 by AI and 25 by junior doctors. Quality and suitability were determined both by independent GP evaluators and by adherence to a minimum dataset.
Results: Of the 25 AI-written discharge summaries, 100% were deemed by GPs to be of an acceptable quality, compared with 92% of the junior doctor summaries. Both groups showed a mean compliance of 97% with the minimum dataset. In addition, GPs were poor at determining whether a summary had been written by ChatGPT, with a detection accuracy of only 60%. Similarly, when run through an AI-detection tool, all summaries were recognised as very unlikely to have been written by AI.
Conclusion: AI produced discharge summaries of quality equivalent to those of a junior doctor who has completed the UK Foundation Programme; however, larger studies using real-world patient data and NHS-approved AI tools will need to be conducted.
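The abstract reports three headline metrics: GP-judged acceptability, mean compliance with a minimum dataset, and GP detection accuracy. The minimal Python sketch below shows one way such metrics could be computed; the record structure, field names, and scoring approach are illustrative assumptions and are not taken from the paper.

```python
# Illustrative sketch only: the paper does not publish its scoring code.
# Record fields and function names below are assumptions for demonstration.

from dataclasses import dataclass
from statistics import mean


@dataclass
class SummaryEvaluation:
    """One evaluated discharge summary (hypothetical record structure)."""
    author: str                 # "ai" or "junior_doctor"
    gp_acceptable: bool         # GP judged the summary acceptable
    dataset_items_met: int      # minimum-dataset items present
    dataset_items_total: int    # minimum-dataset items assessed
    gp_guessed_ai: bool         # GP's guess: written by ChatGPT?


def acceptability_rate(evals: list[SummaryEvaluation]) -> float:
    """Proportion of summaries GPs deemed of acceptable quality."""
    return sum(e.gp_acceptable for e in evals) / len(evals)


def mean_dataset_compliance(evals: list[SummaryEvaluation]) -> float:
    """Mean proportion of minimum-dataset items covered per summary."""
    return mean(e.dataset_items_met / e.dataset_items_total for e in evals)


def detection_accuracy(evals: list[SummaryEvaluation]) -> float:
    """Proportion of summaries whose authorship GPs identified correctly."""
    correct = sum((e.author == "ai") == e.gp_guessed_ai for e in evals)
    return correct / len(evals)
```

Applied per author group, functions like these would yield the kind of figures quoted in the Results (for example, 100% versus 92% acceptability, 97% mean compliance, 60% detection accuracy).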
Citation:
Clough, R. A. J., Sparkes, W. A., Clough, O. T., Sykes, J. T., Steventon, A. T., & King, K. (2024). Transforming healthcare documentation: harnessing the potential of AI to generate discharge summaries. BJGP Open, 8(1). https://doi.org/10.3399/BJGPO.2023.0116