ABSTRACT

Natural language processing modules such as part-of-speech taggers, named-entity recognizers, and syntactic parsers are commonly evaluated in isolation, under the assumption that intrinsic evaluation metrics for individual components are predictive of the practical performance of the larger language technology systems built from them. Although this assumption is important in the design and engineering of systems that process natural language input, it is often unclear how the accuracy of an end-user application is affected by parameters that govern individual NLP modules. We explore this issue in the context of a specific task, examining the relationship between the accuracy of a syntactic parser and the overall performance of an information extraction system for biomedical text that includes the parser as one of its components. We present an empirical investigation of factors that affect the accuracy of syntactic analysis, in particular the size of the treebank used to train the parser, and of how differences in parse accuracy affect the overall system.
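The methodology described here is extrinsic evaluation: vary a factor that influences parser quality (such as the amount of treebank training data), then record both the parser's intrinsic accuracy and the end-task performance of the IE system that consumes its output. The sketch below illustrates that experimental loop; it is not the authors' code, and the run sizes, scores, and counts are placeholder assumptions, not results from the paper.

```python
# Hypothetical sketch of an extrinsic evaluation loop: compare intrinsic
# parse accuracy against end-task IE F1 across treebank training sizes.
# All numbers below are placeholders, not results from the paper.

def f1(tp, fp, fn):
    """Balanced F-score from true-positive / false-positive / false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Each run: (treebank sentences used to train the parser, parser accuracy,
# downstream IE counts as (tp, fp, fn)) -- placeholder values.
runs = [
    (5_000,  0.82, (120, 60, 80)),
    (20_000, 0.87, (140, 45, 60)),
    (40_000, 0.89, (150, 40, 50)),
]

for size, parse_acc, (tp, fp, fn) in runs:
    print(f"treebank={size:>6}  parse accuracy={parse_acc:.2f}  IE F1={f1(tp, fp, fn):.3f}")
```

Reading the two score columns together shows whether gains in intrinsic parse accuracy translate into comparable gains on the end task, which is the relationship the paper probes empirically.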
CITATION STYLE
Sagae, K., Miyao, Y., Sætre, R., & Tsujii, J. (2008). Evaluating the effects of treebank size in a practical application for parsing. In ACL-08: HLT Workshop on Software Engineering, Testing, and Quality Assurance for Natural Language Processing (pp. 14–20). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1622110.1622114