This article describes an automatic procedure for evaluating the robustness of NLP systems under noisy and ill-formed input. The procedure requires no manual work or annotated resources; it is independent of language and annotation scheme, and it produces reliable estimates of an NLP system's robustness. The only requirement is an estimate of the system's accuracy. The procedure was applied to five parsers and one part-of-speech tagger on Swedish text. To establish its reliability, a comparative evaluation using annotated resources was carried out on the tagger and three of the parsers. © Springer-Verlag Berlin Heidelberg 2005.
Citation
Bigert, J., Sjöbergh, J., Knutsson, O., & Sahlgren, M. (2005). Unsupervised evaluation of parser robustness. In Lecture Notes in Computer Science (Vol. 3406, pp. 142–154). Springer Verlag. https://doi.org/10.1007/978-3-540-30586-6_14