Abstract
Background Hospital discharge summaries play an essential role in informing GPs of recent admissions, ensuring continuity of care and preventing adverse events. However, they are notoriously poorly written, time-consuming to produce, and can contribute to delayed discharge.
Aim To evaluate the potential of AI to produce high-quality discharge summaries equivalent to the level of a doctor who has completed the UK Foundation Programme.
Design & setting Feasibility study using 25 mock patient vignettes.
Method From the 25 mock patient vignettes, 25 ChatGPT-written and 25 junior doctor-written discharge summaries were generated. Quality and suitability were assessed both by independent GP evaluators and by adherence to a minimum dataset.
Results Of the 25 AI-written discharge summaries, 100% were deemed by GPs to be of an acceptable quality, compared with 92% of the junior doctor summaries. Both groups showed a mean compliance of 97% with the minimum dataset. In addition, GPs' ability to determine whether a summary was written by ChatGPT was poor, with a detection accuracy of only 60%. Similarly, when run through an AI detection tool, all summaries were classified as very unlikely to have been written by AI.
Conclusion AI can produce discharge summaries of a quality equivalent to those written by a doctor who has completed the UK Foundation Programme. However, larger studies using real-world patient data and NHS-approved AI tools will need to be conducted.
- Received June 25, 2023.
- Revision received August 14, 2023.
- Accepted September 1, 2023.
- Copyright © 2023, The Authors
This article is Open Access: CC BY license (https://creativecommons.org/licenses/by/4.0/)