The study examined how large language models (LLMs) can make clinical notes more understandable for patients. The tool developed by the authors simplifies clinical notes, extracts key information from them, and adds explanatory context. The study found that these augmentations significantly increased patient understanding. However, the authors also observed that errors were more common in real donated notes than in synthetic ones, underscoring the importance of carefully written clinical notes. The study concludes that while LLMs can improve the patient experience of reading clinical notes, human review is still necessary to catch and correct model errors.
Publication date: 17 Jan 2024
Abstract: https://arxiv.org/abs/2401.09637v1
Paper: https://arxiv.org/pdf/2401.09637