How does context matter? On the robustness of event detection with context-selective mask generalization


Abstract

Event detection (ED), a crucial subtask of event extraction (EE), aims to identify and classify event triggers in text. Despite many advances in ED, existing studies typically focus on improving the overall performance of an ED model and rarely consider its robustness. This paper aims to fill that research gap by stressing the importance of robustness modeling in ED. We first pinpoint three stark cases demonstrating the brittleness of existing ED models. After analyzing the underlying causes, we propose a new training mechanism for ED, called context-selective mask generalization, which effectively mines context-specific patterns for learning and robustifies an ED model. The experimental results confirm the effectiveness of our model in defending against adversarial attacks, handling unseen predicates, and resolving ambiguous cases. Moreover, a deeper analysis suggests that our approach learns a predictive bias complementary to that of most ED models, which use the full context for feature learning.
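The core idea of the abstract — training on views of a sentence in which parts of the context are masked out, so the model cannot over-rely on any one contextual cue — can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the authors' actual selection strategy: the masking probability, the `[MASK]` token, and the function name are all illustrative.

```python
import random

MASK = "[MASK]"  # placeholder token; the paper's choice may differ

def context_selective_mask(tokens, trigger_idx, keep_prob=0.5, rng=None):
    """Produce a masked training view of a sentence for event detection.

    The candidate trigger token is always kept; each context token is
    independently kept with probability `keep_prob`, otherwise replaced
    by MASK. (Illustrative sketch of context-selective masking.)
    """
    rng = rng or random.Random()
    return [
        tok if i == trigger_idx or rng.random() < keep_prob else MASK
        for i, tok in enumerate(tokens)
    ]

sentence = "The company fired its CEO after the scandal".split()
# "fired" (index 2) is the candidate trigger; context is partially masked.
masked_view = context_selective_mask(sentence, trigger_idx=2,
                                     keep_prob=0.6,
                                     rng=random.Random(0))
```

Training an ED classifier on many such randomized views forces it to learn patterns that hold under different context subsets, which is one way to read the robustness gains the abstract reports.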

Cite

APA

Liu, J., Chen, Y., Liu, K., Jia, Y., & Sheng, Z. (2020). How does context matter? On the robustness of event detection with context-selective mask generalization. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 2523–2532). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.findings-emnlp.229
