SeqAttack: On Adversarial Attacks for Named Entity Recognition


Abstract

Named Entity Recognition is a fundamental task in information extraction and an essential element of many Natural Language Processing pipelines. Adversarial attacks have been shown to greatly degrade the performance of text classification systems, but little is known about their effectiveness against named entity recognition models. This paper investigates the effectiveness and portability of adversarial attacks from text classification to named entity recognition, and the ability of adversarial training to counteract these attacks. We find that character-level and word-level attacks are the most effective, but adversarial training can grant significant protection at little to no cost in standard performance. Alongside our results, we also release SeqAttack, a framework for conducting adversarial attacks against token classification models (used in this work for named entity recognition), and a companion web application to inspect and cherry-pick adversarial examples.
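To make the idea of a character-level attack against a named entity recognition model concrete, the sketch below perturbs a single entity mention and checks whether the predicted tags change. This is an illustrative toy example only, not SeqAttack's API: the HuggingFace pipeline, the example checkpoint, and the swap-two-characters perturbation are assumptions chosen for demonstration.

# Naive character-level perturbation against a token-classification (NER) model.
# Illustrative only; not the SeqAttack framework itself.
from transformers import pipeline

# Any token-classification checkpoint works; this CoNLL-2003 model is used
# here purely as an example.
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

sentence = "George Washington lived in Virginia."
original = ner(sentence)

def perturb(text: str, target: str) -> str:
    """Swap two adjacent characters inside `target` (a crude character-level attack)."""
    i = text.index(target)
    j = i + len(target) // 2
    return text[:j] + text[j + 1] + text[j] + text[j + 2:]

adversarial = perturb(sentence, "Washington")
attacked = ner(adversarial)

print("original:   ", [(e["word"], e["entity_group"]) for e in original])
print("perturbed:  ", adversarial)
print("adversarial:", [(e["word"], e["entity_group"]) for e in attacked])

A successful attack is one where the perturbed sentence remains readable to a human but the model drops or mislabels the entity; SeqAttack automates this search over many perturbation strategies and constraints.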

Cite (APA)

Simoncini, W., & Spanakis, G. (2021). SeqAttack: On Adversarial Attacks for Named Entity Recognition. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations (pp. 308–318). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-demo.35
