Distilling discrimination and generalization knowledge for event detection via Δ-representation learning

Abstract

Event detection systems rely on discrimination knowledge to distinguish ambiguous trigger words and generalization knowledge to detect unseen/sparse trigger words. Current neural event detection approaches focus on trigger-centric representations, which work well on distilling discrimination knowledge, but poorly on learning generalization knowledge. To address this problem, this paper proposes a Δ-learning approach to distill discrimination and generalization knowledge by effectively decoupling, incrementally learning and adaptively fusing event representation. Experiments show that our method significantly outperforms previous approaches on unseen/sparse trigger words, and achieves state-of-the-art performance on both ACE2005 and KBP2017 datasets.
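The abstract describes the approach only at a high level. The snippet below is a minimal, hypothetical sketch (not the authors' implementation) of the adaptive-fusion idea: a trigger-centric vector carrying discrimination knowledge and a context-based vector carrying generalization knowledge are combined through a learned gate before event-type classification. The module names, dimensions, class count, and use of PyTorch are all illustrative assumptions.

```python
# Minimal sketch of adaptively fusing two event representations (assumed PyTorch).
import torch
import torch.nn as nn

class AdaptiveFusionClassifier(nn.Module):
    def __init__(self, dim: int, num_event_types: int):
        super().__init__()
        # Gate decides, per example, how much to rely on each representation.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.classifier = nn.Linear(dim, num_event_types)

    def forward(self, trigger_repr: torch.Tensor, context_repr: torch.Tensor):
        # trigger_repr, context_repr: (batch, dim) vectors from separate encoders
        g = self.gate(torch.cat([trigger_repr, context_repr], dim=-1))
        fused = g * trigger_repr + (1.0 - g) * context_repr  # adaptive fusion
        return self.classifier(fused)

# Toy usage with random vectors standing in for encoder outputs;
# 34 classes is an illustrative figure (e.g., event subtypes plus a NIL class).
model = AdaptiveFusionClassifier(dim=128, num_event_types=34)
logits = model(torch.randn(4, 128), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 34])
```

Under this reading, the gate lets the classifier lean on the trigger-centric view for frequent but ambiguous triggers and on the contextual view for unseen or sparse ones.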

Cite

CITATION STYLE

APA

Lu, Y., Lin, H., Han, X., & Sun, L. (2019). Distilling discrimination and generalization knowledge for event detection via Δ-representation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019) (pp. 4366–4376). Association for Computational Linguistics. https://doi.org/10.18653/v1/p19-1429
