Detecting Annotation Errors in Morphological Data with the Transformer

Abstract

Annotation errors that stem from various sources are usually unavoidable when performing large-scale annotation of linguistic data. In this paper, we evaluate the feasibility of using the Transformer model to detect various types of annotator errors in type-based morphological datasets that contain inflected word forms. We evaluate our error detection model on four languages by injecting three different types of artificial errors into the data: (1) typographic errors, where single characters in the data are inserted, replaced, or deleted; (2) linguistic confusion errors, where two inflected forms are systematically swapped; and (3) self-adversarial errors, where the Transformer model itself is used to generate plausible-looking but erroneous forms by retrieving high-scoring predictions from a Transformer search beam. Results show that the model can detect errors with perfect or near-perfect recall in all three scenarios, even when significant amounts of the annotated data (5%–30%) are corrupted, across all languages tested. Precision varies across the languages and types of errors, but is high enough that the model can reliably be used to flag suspicious entries in large datasets for further scrutiny by human annotators.
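As an illustration of the first corruption scenario described in the abstract, a minimal sketch of single-character typographic-error injection might look as follows. This is an assumption-based sketch, not the authors' implementation: the function name, character inventory, and corruption rate are hypothetical.

```python
import random

def inject_typo(word: str, alphabet: str = "abcdefghijklmnopqrstuvwxyz") -> str:
    """Corrupt a word form with one character insertion, replacement, or deletion.

    Hypothetical sketch of the 'typographic error' scenario; the paper's actual
    corruption procedure may differ (e.g. in the character inventory used).
    """
    op = random.choice(["insert", "replace", "delete"])
    pos = random.randrange(len(word))
    if op == "insert":
        return word[:pos] + random.choice(alphabet) + word[pos:]
    if op == "replace":
        # Replace with a character different from the original one.
        new_char = random.choice([c for c in alphabet if c != word[pos]])
        return word[:pos] + new_char + word[pos + 1:]
    # Deletion: drop the character at the sampled position.
    return word[:pos] + word[pos + 1:]

# Example: corrupt roughly 5% of the inflected forms in a toy dataset.
forms = ["walked", "running", "gehen", "ging"]
corrupted = [inject_typo(w) if random.random() < 0.05 else w for w in forms]
```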

Citation (APA)

Liu, L., & Hulden, M. (2022). Detecting Annotation Errors in Morphological Data with the Transformer. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2, pp. 166–174). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-short.19
