On the Contribution of Per-ICD Attention Mechanisms to Classify Health Records in Languages With Fewer Resources than English

Abstract

We introduce a multi-label text classifier with per-label attention for classifying Electronic Health Records (EHRs) according to the International Classification of Diseases (ICD). We apply the model to two EHR datasets of discharge summaries in Spanish and Swedish, two languages with fewer resources than English. The model leverages Multilingual BERT, which was pre-trained on the 104 languages with the largest Wikipedia dumps, including Spanish and Swedish, to share language-modelling capabilities across languages. With per-label attention, the model computes the relevance of each word in the EHR to the prediction of each label. In the experimental framework, we use the 157 labels from Chapter XI, Diseases of the Digestive System, of the ICD, which makes the attention especially important, as the model has to discriminate between similar diseases.
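The per-label attention idea described above can be sketched as follows: each of the 157 ICD labels owns its own attention vector, scores every token representation (e.g. from Multilingual BERT), and pools a label-specific document vector from which that label's logit is computed. This is a minimal illustration in PyTorch with hypothetical names, not the authors' exact implementation:

```python
import torch
import torch.nn as nn

class PerLabelAttention(nn.Module):
    """Per-label attention: each label attends over all token
    representations and gets its own pooled document vector."""
    def __init__(self, hidden_dim, num_labels):
        super().__init__()
        # One attention query vector per label (num_labels x hidden_dim)
        self.U = nn.Linear(hidden_dim, num_labels, bias=False)
        # One binary classifier per label over its pooled vector
        self.W = nn.Linear(hidden_dim, num_labels)

    def forward(self, H):
        # H: (batch, seq_len, hidden_dim) token representations from the encoder
        # Attention weights per label over tokens: (batch, num_labels, seq_len)
        A = torch.softmax(self.U(H).transpose(1, 2), dim=-1)
        # Label-specific document vectors: (batch, num_labels, hidden_dim)
        V = A @ H
        # Per-label logits: dot product of each label's classifier with its vector
        logits = (self.W.weight * V).sum(dim=-1) + self.W.bias
        # A exposes each word's relevance to each label's prediction
        return logits, A

# Usage sketch with made-up shapes (2 documents, 128 tokens, BERT-base width)
model = PerLabelAttention(hidden_dim=768, num_labels=157)
H = torch.randn(2, 128, 768)
logits, attn = model(H)
print(logits.shape)  # torch.Size([2, 157])
print(attn.shape)    # torch.Size([2, 157, 128])
```

Because the attention weights `attn` are computed separately per label, inspecting row `l` shows which words drove the prediction of ICD code `l`, which is what lets the model discriminate between similar digestive-system diseases.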

Citation (APA)
Blanco, A., Remmer, S., Pérez, A., Dalianis, H., & Casillas, A. (2021). On the Contribution of Per-ICD Attention Mechanisms to Classify Health Records in Languages With Fewer Resources than English. In International Conference Recent Advances in Natural Language Processing, RANLP (pp. 165–172). Incoma Ltd. https://doi.org/10.26615/978-954-452-072-4_020
