Shallow Training is cheap but is it good enough? Experiments with Medical Fact Coding


Abstract

A typical NLP system for medical fact coding uses multiple layers of supervision involving fact attributes, relations, and coding. Training such a system requires an expensive and laborious annotation process spanning all layers of the pipeline. In this work, we investigate the feasibility of a shallow medical coding model that trains only on fact annotations, disregarding fact attributes and relations, potentially saving considerable annotation time and cost. Our results show that the shallow system, despite using less supervision, is only 1.4 F1 points behind the multi-layered system on Disorder facts and, contrary to expectation, improves over the latter by about 2.4 F1 points on Procedure facts. Further, our experiments show that training the shallow system using only sentence-level fact labels, with no span information, has no negative effect on performance, indicating further cost savings through weak supervision.
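The sentence-level weak-supervision setting described above amounts to collapsing span-level fact annotations into per-sentence label sets. A minimal sketch of that conversion, with illustrative field names and data that are assumptions rather than anything from the paper:

```python
# Hypothetical sketch: deriving sentence-level fact labels from span annotations.
# The annotation schema ("sent", "start", "end", "fact") is illustrative only.

def sentence_labels(sentences, span_annotations):
    """Collapse span-level fact annotations into per-sentence label sets,
    discarding the span offsets (the weak-supervision setting)."""
    labels = []
    for i, _ in enumerate(sentences):
        facts = {ann["fact"] for ann in span_annotations if ann["sent"] == i}
        labels.append(facts)
    return labels

sents = ["Patient denies chest pain.", "CT scan of the abdomen performed."]
spans = [
    {"sent": 0, "start": 15, "end": 25, "fact": "Disorder"},
    {"sent": 1, "start": 0, "end": 7, "fact": "Procedure"},
]
print(sentence_labels(sents, spans))  # [{'Disorder'}, {'Procedure'}]
```

Annotating at this granularity avoids marking character offsets entirely, which is where the additional annotation savings reported in the abstract would come from.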

Citation (APA)

Nallapati, R., & Florian, R. (2015). Shallow Training is cheap but is it good enough? Experiments with Medical Fact Coding. In ACL-IJCNLP 2015 - BioNLP 2015: Workshop on Biomedical Natural Language Processing, Proceedings of the Workshop (pp. 52–60). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w15-3806
