Adversarial BiLSTM-CRF Architectures for Extra-Propositional Scope Resolution


Abstract

Due to their ability to expressively represent narrative structures, proposition-aware learning models for text have been drawing increasing attention in information extraction. Following this trend, recent studies go deeper into learning fine-grained extra-propositional structures, such as negation and speculation. However, carefully designed experiments reveal that existing extra-propositional models either fail to learn from the context or neglect cross-domain adaptation. In this paper, we attempt to systematically address these challenges with an adversarial BiLSTM-CRF model that jointly models potential extra-propositions and their contexts. This is motivated by the superiority of sequential architectures in encoding order information and long-range context dependencies. On this basis, we propose an adversarial neural architecture to learn invariant yet discriminative latent features across domains. Experimental results on the standard BioScope corpus show the superiority of the proposed neural architecture, which significantly outperforms the state of the art on scope resolution in both in-domain and cross-domain scenarios.
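To illustrate the CRF half of the BiLSTM-CRF described above, here is a minimal, hypothetical sketch of Viterbi decoding, the inference step that picks the highest-scoring scope-label sequence from per-token emission scores and label-transition scores. The label set (O / I for out-of-scope and in-scope tokens) and the score values are assumptions for illustration, not taken from the paper; in the actual model, emissions would come from the BiLSTM and transitions would be learned CRF parameters.

```python
# Hypothetical Viterbi decoder for a linear-chain CRF.
# emissions: list of {label: score} dicts, one per token (assumed BiLSTM output).
# transitions: {(prev_label, curr_label): score} (assumed learned CRF weights).

def viterbi_decode(emissions, transitions):
    """Return the highest-scoring label sequence for one sentence."""
    labels = list(emissions[0])
    # Best path score ending in each label at the first token.
    best = {lab: emissions[0][lab] for lab in labels}
    backptr = []
    for emit in emissions[1:]:
        step, ptrs = {}, {}
        for curr in labels:
            # Pick the best previous label to transition from.
            prev, score = max(
                ((p, best[p] + transitions[(p, curr)]) for p in labels),
                key=lambda pair: pair[1],
            )
            step[curr] = score + emit[curr]
            ptrs[curr] = prev
        backptr.append(ptrs)
        best = step
    # Backtrack from the highest-scoring final label.
    last = max(best, key=best.get)
    path = [last]
    for ptrs in reversed(backptr):
        last = ptrs[last]
        path.append(last)
    return list(reversed(path))
```

For example, with two tokens whose emissions favor O then I, and transitions that reward entering and staying in a scope, the decoder returns the sequence `["O", "I"]`.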

APA

Huang, R., Ye, J., Zou, B., Hong, Y., & Zhou, G. (2020). Adversarial BiLSTM-CRF Architectures for Extra-Propositional Scope Resolution. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12431 LNAI, pp. 156–168). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-60457-8_13
