What Kind of Transformer Models to Use for the ICD-10 Codes Classification Task


Abstract

Coding according to the International Classification of Diseases (ICD)-10 and its clinical modifications (CM) is inherently complex and expensive. Natural Language Processing (NLP) assists by simplifying the analysis of unstructured data from electronic health records, thereby facilitating diagnosis coding. This study investigates the suitability of transformer models for ICD-10 classification, considering both encoder and encoder-decoder architectures. The analysis is performed on clinical discharge summaries from the Medical Information Mart for Intensive Care (MIMIC)-IV dataset, which contains an extensive collection of electronic health records. Pre-trained models such as BioBERT, ClinicalBERT, ClinicalLongformer, and ClinicalBigBird are adapted for the coding task, incorporating specific preprocessing techniques to enhance performance. The findings indicate that increasing context length improves accuracy, and that the difference in accuracy between encoder and encoder-decoder models is negligible.
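The setup the abstract describes — an encoder over a discharge summary followed by a multi-label decision over the ICD-10 label space — can be sketched minimally as below. This is an illustrative assumption, not the paper's implementation: a small randomly initialized `TransformerEncoder` stands in for a pre-trained model such as BioBERT or ClinicalLongformer, and the vocabulary size, context length, and number of codes are placeholder values.

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 1000   # placeholder vocabulary size (assumption)
MAX_LEN = 128       # context length; the paper reports longer contexts help
NUM_CODES = 50      # placeholder size of the ICD-10 label space (assumption)

class ICDClassifier(nn.Module):
    """Encoder + linear head for multi-label ICD-10 code prediction."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, 64)
        layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
        # Stand-in for a pre-trained clinical encoder (e.g. BioBERT).
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(64, NUM_CODES)  # one logit per ICD-10 code

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))
        return self.head(h.mean(dim=1))  # mean-pool tokens, project to codes

model = ICDClassifier()
tokens = torch.randint(0, VOCAB_SIZE, (2, MAX_LEN))  # two dummy summaries
logits = model(tokens)                               # shape (2, NUM_CODES)

# Multi-label setup: each code is an independent binary decision, so a
# sigmoid-based loss over a 0/1 target matrix is the standard choice.
targets = torch.zeros(2, NUM_CODES)
loss = nn.BCEWithLogitsLoss()(logits, targets)
print(tuple(logits.shape))
```

Because ICD coding assigns many codes per document, the output is one independent sigmoid per code rather than a single softmax over codes, which is why `BCEWithLogitsLoss` appears instead of cross-entropy.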

Citation (APA)

Mansour, M., Yilmaz, F., Miletic, M., & Sariyar, M. (2024). What Kind of Transformer Models to Use for the ICD-10 Codes Classification Task. In Studies in Health Technology and Informatics (Vol. 316, pp. 1008–1012). IOS Press BV. https://doi.org/10.3233/SHTI240580
