Unsupervised evaluation of parser robustness

5 Citations · 7 Readers

Abstract

This article describes an automatic evaluation procedure for NLP system robustness under the strain of noisy and ill-formed input. The procedure requires no manual work or annotated resources. It is independent of language and annotation scheme and produces reliable estimates of the robustness of NLP systems. The only requirement is an estimate of the NLP system's accuracy. The procedure was applied to five parsers and one part-of-speech tagger on Swedish text. To establish the reliability of the procedure, a comparative evaluation involving annotated resources was carried out on the tagger and three of the parsers. © Springer-Verlag Berlin Heidelberg 2005.
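The abstract does not spell out the procedure, but a minimal sketch of the general idea, in which a parser's output on clean text is compared with its output on the same text after artificial errors are introduced, and the observed agreement is combined with the known accuracy estimate, might look as follows. All names here (add_noise, estimate_robustness, parse, error_rate) are hypothetical, and the simple character-swap noise model is an assumption for illustration, not the error model used in the article.

import random

def add_noise(tokens, error_rate=0.1):
    """Introduce artificial spelling errors (hypothetical noise model:
    swap two adjacent characters in a fraction of the tokens)."""
    noisy = []
    for tok in tokens:
        if len(tok) > 3 and random.random() < error_rate:
            i = random.randrange(len(tok) - 1)
            tok = tok[:i] + tok[i + 1] + tok[i] + tok[i + 2:]
        noisy.append(tok)
    return noisy

def estimate_robustness(parse, sentences, accuracy, error_rate=0.1):
    """Estimate degradation of parser output on noisy input.

    `parse` is assumed to map a token list to one label per token;
    `accuracy` is the parser's known accuracy on clean text.
    """
    agree = total = 0
    for tokens in sentences:
        clean_labels = parse(tokens)
        noisy_labels = parse(add_noise(tokens, error_rate))
        agree += sum(c == n for c, n in zip(clean_labels, noisy_labels))
        total += len(tokens)
    agreement = agree / total
    # Assumed reasoning: if disagreements on noised text are treated as
    # errors, accuracy - (1 - agreement) gives a pessimistic bound on
    # accuracy under noise, using only the clean-text accuracy estimate.
    lower_bound = accuracy - (1 - agreement)
    return agreement, lower_bound

This only illustrates why a single accuracy estimate suffices as input: no annotated noisy text is needed, since the comparison is made against the parser's own output on the clean version of the same sentences.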

Citation (APA)

Bigert, J., Sjöbergh, J., Knutsson, O., & Sahlgren, M. (2005). Unsupervised evaluation of parser robustness. In Lecture Notes in Computer Science (Vol. 3406, pp. 142–154). Springer Verlag. https://doi.org/10.1007/978-3-540-30586-6_14
