Probing Pre-Trained Language Models for Disease Knowledge


Abstract

Pre-trained language models such as ClinicalBERT have achieved impressive results on tasks such as medical Natural Language Inference. At first glance, this may suggest that these models are able to perform medical reasoning tasks, such as mapping symptoms to diseases. However, we find that standard benchmarks such as MedNLI contain relatively few examples that require such forms of reasoning. To better understand the medical reasoning capabilities of existing language models, in this paper we introduce DisKnE, a new benchmark for Disease Knowledge Evaluation. To construct this benchmark, we annotated each positive MedNLI example with the types of medical reasoning that are needed. We then created negative examples by corrupting these positive examples in an adversarial way. Furthermore, we define training-test splits per disease, ensuring that no knowledge about test diseases can be learned from the training data, and we canonicalize the formulation of the hypotheses to avoid the presence of artefacts. This leads to a number of binary classification problems, one for each type of reasoning and each disease. When analysing pre-trained models for the clinical/biomedical domain on the proposed benchmark, we find that their performance drops considerably.
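To make the per-disease split described in the abstract concrete, below is a minimal sketch of holding out all examples for the test diseases so that no knowledge about them can leak from the training data. The record fields (`premise`, `hypothesis`, `disease`, `label`) and the example instances are hypothetical illustrations, not the actual DisKnE data format, which is specified in the paper itself.

```python
# A minimal sketch of the per-disease train/test split idea.
# Field names and examples are hypothetical; see the paper for
# the actual DisKnE construction.

def split_by_disease(examples, test_diseases):
    """Hold out every example targeting a test disease, so that no
    knowledge about test diseases can be learned from training data."""
    train, test = [], []
    for ex in examples:
        if ex["disease"] in test_diseases:
            test.append(ex)
        else:
            train.append(ex)
    return train, test

# Hypothetical MedNLI-style examples annotated with a target disease.
examples = [
    {"premise": "Patient reports chest pain radiating to the left arm.",
     "hypothesis": "The patient may have a myocardial infarction.",
     "disease": "myocardial infarction", "label": 1},
    {"premise": "Blood glucose was 250 mg/dL on admission.",
     "hypothesis": "The patient has diabetes.",
     "disease": "diabetes", "label": 1},
]

train, test = split_by_disease(examples, test_diseases={"diabetes"})
assert all(ex["disease"] != "diabetes" for ex in train)
```

Keeping the split at the disease level, rather than the example level, is what forces a model to rely on pre-trained disease knowledge instead of memorizing disease-specific patterns from the training portion.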

Citation (APA)

Alghanmi, I., Espinosa-Anke, L., & Schockaert, S. (2021). Probing Pre-Trained Language Models for Disease Knowledge. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 3023–3033). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.266
