Modeling input uncertainty in neural network dependency parsing


Abstract

Recently introduced neural network parsers allow for new approaches to circumventing data sparsity by modeling character-level information and by exploiting raw data in a semi-supervised setting. Data sparsity is especially prevalent when transferring to nonstandard domains; in this setting, lexical normalization has often been used in the past to circumvent it. In this paper, we investigate whether these new neural approaches provide functionality similar to lexical normalization, or whether the two are complementary. We provide experimental results showing that a separate normalization component improves the performance of a neural network parser even when it has access to character-level information as well as external word embeddings. Further improvements are obtained by a straightforward but novel approach in which the top-N best candidates provided by the normalization component are made available to the parser.
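The core idea of the abstract — letting the parser see the normalizer's top-N candidates instead of committing to a single normalized form — can be illustrated with a minimal sketch. Everything here is hypothetical: the paper's actual normalizer is a trained model, whereas this toy ranks lexicon words by shared-prefix length purely to show the interface shape.

```python
def common_prefix_len(a, b):
    """Length of the shared prefix of two strings."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n


def top_n_candidates(token, lexicon, n=3):
    """Return the top-N candidate normalizations of a noisy token.

    Toy scoring: shared-prefix length, ties broken alphabetically.
    A real normalization component would rank candidates with a
    trained model and could also pass scores to the parser.
    """
    scored = sorted(lexicon, key=lambda w: (-common_prefix_len(token, w), w))
    return scored[:n]


# A downstream parser would receive this candidate list (plus the
# original token) rather than a single hard normalization decision.
lexicon = ["tomorrow", "to", "today", "morrow"]
print(top_n_candidates("tmrw", lexicon, n=2))
```

The design point is that deferring the normalization decision lets the parser recover when the 1-best candidate is wrong, at the cost of a slightly larger input representation.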


CITATION STYLE

APA

van der Goot, R., & van Noord, G. (2018). Modeling input uncertainty in neural network dependency parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018 (pp. 4984–4991). Association for Computational Linguistics. https://doi.org/10.18653/v1/d18-1542

Readers' Seniority

PhD / Post grad / Masters / Doc: 22 (69%)
Researcher: 6 (19%)
Professor / Associate Prof.: 2 (6%)
Lecturer / Post doc: 2 (6%)

Readers' Discipline

Computer Science: 29 (74%)
Linguistics: 5 (13%)
Engineering: 3 (8%)
Business, Management and Accounting: 2 (5%)
