Various metrics have been proposed for estimating how well a parser-produced syntactic tree matches the correct tree in a treebank. Measurement has focused on the number of correct constituents, in terms of constituent labels and bracketing accuracy. This article proposes the NIST scheme as a better alternative for evaluating parser output in terms of correct matches, substitutions, deletions, and insertions. It describes an experiment measuring the performance of the Survey Parser, which was used to complete the syntactic annotation of the International Corpus of English. The article concludes with empirical performance scores for the parser and an outline of future research. © Springer-Verlag Berlin Heidelberg 2006.
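The four categories named in the abstract (match, substitution, deletion, insertion) are those produced by a minimum-edit-distance alignment, as in NIST-style speech-recognition scoring (e.g. the sclite tool). As an illustrative sketch only, not the article's or the Survey Parser's actual code, the same counting can be applied to sequences of constituent labels; the label sequences below are invented for the example.

```python
def nist_align(ref, hyp):
    """Return (match, substitution, deletion, insertion) counts
    from a minimum-edit-distance alignment of hyp against ref."""
    n, m = len(ref), len(hyp)
    # dp[i][j] = minimal edit cost aligning ref[:i] with hyp[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i
    for j in range(1, m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j - 1] + sub,  # match / substitution
                           dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1)        # insertion
    # Backtrace through the table to count the four operation types.
    match = subs = dels = ins = 0
    i, j = n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                dp[i][j] == dp[i - 1][j - 1] + (0 if ref[i - 1] == hyp[j - 1] else 1)):
            if ref[i - 1] == hyp[j - 1]:
                match += 1
            else:
                subs += 1
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            dels += 1
            i -= 1
        else:
            ins += 1
            j -= 1
    return match, subs, dels, ins

# Hypothetical reference vs. parser-produced constituent label sequences:
ref = ["NP", "VP", "PP", "NP"]
hyp = ["NP", "VP", "NP", "NP", "AdvP"]
print(nist_align(ref, hyp))  # → (3, 1, 0, 1): 3 matches, 1 substitution, 1 insertion
```

From such counts one can derive an overall accuracy in the usual NIST fashion, e.g. matches divided by the number of reference constituents, with insertions penalised separately.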
Fang, A. C. (2006). Evaluating the performance of the survey parser with the NIST scheme. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3878 LNCS, pp. 168–179). https://doi.org/10.1007/11671299_19