Neural token representations and negation and speculation scope detection in biomedical and general domain text


Abstract

Since the introduction of context-aware token representation techniques such as Embeddings from Language Models (ELMo) and Bidirectional Encoder Representations from Transformers (BERT), there have been numerous reports of improved performance on a variety of natural language tasks. Nevertheless, the degree to which the resulting context-aware representations encode information about the morpho-syntactic properties of tokens in a sentence remains unclear. In this paper, we investigate the application and impact of state-of-the-art neural token representations for automatic cue-conditional speculation and negation scope detection, coupled with independently computed morpho-syntactic information. Through this work, we establish a new state of the art for the BioScope and NegPar corpora. Furthermore, we provide a thorough analysis of the interactions between neural representations and additional features, examine the cue representation used for conditioning, discuss model behavior on different datasets, and, finally, address annotation-induced biases in the learned representations.

Citation (APA)

Sergeeva, E., Zhu, H., Tahmasebi, A., & Szolovits, P. (2019). Neural token representations and negation and speculation scope detection in biomedical and general domain text. In LOUHI@EMNLP 2019 - 10th International Workshop on Health Text Mining and Information Analysis, Proceedings (pp. 178–187). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d19-6221
