Training a Neural Network in a Low-Resource Setting on Automatically Annotated Noisy Data

Citations: 22
Readers: 120 (Mendeley users who have this article in their library)

Abstract

Manually labeled corpora are expensive to create and often not available for low-resource languages or domains. Automatic labeling approaches are a quicker and cheaper alternative for obtaining labeled data. However, these labels often contain more errors, which can degrade a classifier's performance when it is trained on such data. We propose a noise layer that is added to a neural network architecture. This allows modeling the noise and training on a combination of clean and noisy data. We show that in a low-resource NER task we can improve performance by up to 35% by using additional, noisy data and handling the noise.
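The noise layer described in the abstract is, in general terms, a learned confusion matrix that maps the classifier's clean-label distribution to a noisy-label distribution, so that automatically annotated examples are trained through this extra layer while clean examples bypass it. The following is a minimal sketch of that idea, assuming PyTorch; `NoiseLayer`, `base_model`, and `loss_fn` are illustrative names, not the authors' released implementation.

```python
# Minimal sketch of a confusion-matrix noise layer (assumes PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseLayer(nn.Module):
    """Maps p(clean label | x) to p(noisy label | x) through a learned
    row-stochastic confusion matrix, initialised close to the identity."""

    def __init__(self, num_labels: int):
        super().__init__()
        # Logits of the confusion matrix; a softmax over each row yields
        # p(noisy label = j | clean label = i).
        self.confusion_logits = nn.Parameter(5.0 * torch.eye(num_labels))

    def forward(self, clean_probs: torch.Tensor) -> torch.Tensor:
        confusion = F.softmax(self.confusion_logits, dim=1)
        # (batch, L) @ (L, L) -> noisy-label probabilities, shape (batch, L)
        return clean_probs @ confusion

# Illustrative base classifier; in the paper's NER setting this would be a
# sequence tagger (e.g. a BiLSTM), here just a toy linear model.
num_labels = 9
base_model = nn.Sequential(nn.Linear(100, num_labels), nn.Softmax(dim=1))
noise_layer = NoiseLayer(num_labels)

def loss_fn(x: torch.Tensor, y: torch.Tensor, is_noisy: bool) -> torch.Tensor:
    """Clean batches are scored directly; automatically annotated (noisy)
    batches are scored through the noise layer."""
    clean_probs = base_model(x)
    probs = noise_layer(clean_probs) if is_noisy else clean_probs
    return F.nll_loss(torch.log(probs + 1e-8), y)

# Toy usage: alternate between clean and noisy mini-batches.
x, y = torch.randn(4, 100), torch.randint(0, num_labels, (4,))
loss = loss_fn(x, y, is_noisy=True)
loss.backward()
```

In this sketch the confusion matrix starts close to the identity, so training begins from the assumption that the automatic labels are mostly correct, and the off-diagonal noise probabilities are learned jointly with the base classifier.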



Citation (APA)

Hedderich, M. A., & Klakow, D. (2018). Training a Neural Network in a Low-Resource Setting on Automatically Annotated Noisy Data. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 12–18). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w18-3402

Readers over time: annual Mendeley reader counts, 2018–2025 (chart not reproduced).

Readers' Seniority

PhD / Post grad / Masters / Doc: 36 (69%)
Researcher: 12 (23%)
Lecturer / Post doc: 3 (6%)
Professor / Associate Prof.: 1 (2%)

Readers' Discipline

Computer Science: 52 (81%)
Engineering: 5 (8%)
Linguistics: 5 (8%)
Business, Management and Accounting: 2 (3%)
