Can Current Explainability Help Provide References in Clinical Notes to Support Humans Annotate Medical Codes?

Abstract

The medical codes prediction problem from clinical notes has received substantial interest in the NLP community, and several recent studies have shown state-of-the-art (SOTA) code prediction results from full-fledged deep learning-based methods. However, most previous SOTA works based on deep learning are still at an early stage in terms of providing textual references and explanations for the predicted codes, even though this level of explainability of the prediction outcomes is critical to gaining trust from professional medical coders. This raises the important question of how well current explainability methods apply to advanced neural network models such as transformers to predict correct codes and present references in clinical notes that support code prediction. First, we present an explainable Read, Attend, and Code (xRAC) framework and assess two approaches, attention score-based xRAC-ATTN and model-agnostic knowledge-distillation-based xRAC-KD, through simplified but thorough human-grounded evaluations with the SOTA transformer-based model, RAC. We find that the supporting evidence text highlighted by xRAC-ATTN is of higher quality than that of xRAC-KD, whereas xRAC-KD has potential advantages in production deployment scenarios. More importantly, we show for the first time that, given the current state of explainability methodologies, using the SOTA medical codes prediction system still requires the expertise and competencies of professional coders, even though its prediction accuracy is superior to that of human coders. This, we believe, is a very meaningful step toward developing explainable and accurate machine learning systems for fully autonomous medical code prediction from clinical notes.
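The abstract contrasts two explainability routes. The toy sketch below illustrates the general idea behind the attention-based route in the spirit of xRAC-ATTN: a per-label attention head scores each token, and the highest-scoring tokens are surfaced as supporting evidence for a predicted code. Everything here (the ToyLabelAttention module, the 64-dimensional hidden size, the random stand-in encoder states) is an illustrative assumption, not the paper's actual implementation.

```python
# Illustrative sketch only: per-label attention over token representations
# (CAML-style), where attention weights double as evidence scores.
import torch
import torch.nn as nn

class ToyLabelAttention(nn.Module):
    """One attention query per medical code; attention picks evidence tokens."""
    def __init__(self, hidden: int, num_labels: int):
        super().__init__()
        self.query = nn.Linear(hidden, num_labels, bias=False)  # one query per code
        self.classify = nn.Linear(hidden, num_labels)           # per-label classifier

    def forward(self, token_states: torch.Tensor):
        # token_states: (seq_len, hidden) for a single clinical note
        attn = torch.softmax(self.query(token_states), dim=0)   # (seq_len, num_labels)
        label_ctx = attn.t() @ token_states                     # (num_labels, hidden)
        logits = (self.classify.weight * label_ctx).sum(-1) + self.classify.bias
        return logits, attn

tokens = ["pt", "admitted", "with", "acute", "renal", "failure"]
states = torch.randn(len(tokens), 64)          # stand-in for encoder outputs
model = ToyLabelAttention(hidden=64, num_labels=3)
logits, attn = model(states)

label = int(logits.argmax())                   # predicted code index
topk = attn[:, label].topk(k=3).indices        # tokens with highest attention
print("evidence:", [tokens[i] for i in sorted(topk.tolist())])
```

The knowledge-distillation route in the spirit of xRAC-KD can be sketched similarly: a simple, interpretable student is trained to mimic a black-box teacher's code probabilities, and the student's per-word weights then act as token-level evidence. The teacher stub, toy vocabulary, and training setup below are hypothetical placeholders.

```python
# Illustrative sketch only: distill teacher probabilities into a linear
# bag-of-words student whose weights provide token-level explanations.
import torch
import torch.nn as nn

vocab = ["pt", "admitted", "acute", "renal", "failure", "cough"]
num_labels = 3

def bow(note):  # multi-hot bag-of-words features over the toy vocabulary
    return torch.tensor([float(w in note) for w in vocab])

notes = [["pt", "acute", "renal", "failure"], ["pt", "admitted", "cough"]]
X = torch.stack([bow(n) for n in notes])

torch.manual_seed(0)
W_teacher = torch.randn(len(vocab), num_labels)  # stand-in for a black-box teacher
soft_targets = torch.sigmoid(X @ W_teacher)      # teacher's per-code probabilities

student = nn.Linear(len(vocab), num_labels)
opt = torch.optim.Adam(student.parameters(), lr=0.1)
for _ in range(200):  # match the teacher's soft targets (distillation objective)
    loss = nn.functional.binary_cross_entropy_with_logits(student(X), soft_targets)
    opt.zero_grad()
    loss.backward()
    opt.step()

# For a given note and code, the present words with the largest student
# weights serve as the highlighted evidence.
code = 0
present = bow(notes[0]).bool()
scores = torch.where(present, student.weight[code], torch.tensor(float("-inf")))
top = scores.topk(k=2).indices
print("evidence:", [vocab[i] for i in top.tolist()])
```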
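Because the student is a plain linear model over word counts, its explanations are cheap to compute and deploy without the teacher, which matches the abstract's observation that xRAC-KD has potential advantages in production deployment even though its highlighted evidence is of lower quality than xRAC-ATTN's.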

Citation (APA)

Kim, B. H., Deng, Z., Yu, P., & Ganapathi, V. (2022). Can Current Explainability Help Provide References in Clinical Notes to Support Humans Annotate Medical Codes? In LOUHI 2022 - 13th International Workshop on Health Text Mining and Information Analysis, Proceedings of the Workshop (pp. 26–34). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.louhi-1.3
