Deep Text Prior: Weakly Supervised Learning for Assertion Classification

Abstract

The success of neural networks is typically attributed to their ability to closely mimic relationships between features and labels observed in the training dataset. This, however, is only part of the answer: in addition to being fit to data, neural networks have been shown to be useful priors on the conditional distribution of labels given features, and can be used as such even in the absence of trustworthy training labels. This property of neural networks can be harnessed to train high-quality models on low-quality training data in tasks for which large, high-quality ground-truth datasets do not exist. One such problem is assertion classification in biomedical texts: discriminating between positive, negative and speculative statements about certain pathologies a patient may have. We present an assertion classification methodology based on recurrent neural networks, an attention mechanism and two flavours of transfer learning (language modelling and heuristic annotation) that achieves state-of-the-art results on MIMIC-CXR radiology reports.
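To make the "heuristic annotation" flavour of transfer learning concrete, the sketch below shows one plausible way to produce noisy assertion labels from cue phrases. The cue lists and the function name are illustrative assumptions, not the heuristics actually used in the paper; real weak-labelling rules for radiology text would be considerably richer.

```python
import re

# Hypothetical cue lists -- illustrative only, not the paper's actual heuristics.
NEGATION_CUES = re.compile(
    r"\b(no|not|without|denies|negative for|free of)\b", re.IGNORECASE
)
SPECULATION_CUES = re.compile(
    r"\b(may|might|possible|possibly|suspicious for|cannot exclude)\b", re.IGNORECASE
)

def heuristic_assertion_label(sentence: str) -> str:
    """Assign a noisy assertion label to a sentence mentioning a pathology.

    Precedence is negation > speculation > positive, so a sentence
    matching both cue types is labelled 'negative'.
    """
    if NEGATION_CUES.search(sentence):
        return "negative"
    if SPECULATION_CUES.search(sentence):
        return "speculative"
    return "positive"
```

Labels produced this way are noisy by design; the point of the methodology is that a neural model pre-trained as a language model and then fit to such weak labels can still generalise well.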

Citation (APA)
Liventsev, V., Fedulova, I., & Dylov, D. (2019). Deep Text Prior: Weakly Supervised Learning for Assertion Classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11731 LNCS, pp. 243–257). Springer Verlag. https://doi.org/10.1007/978-3-030-30493-5_26
