Abstract
Natural Language Processing systems are large-scale software systems whose development involves many man-years of work, in terms of both coding and resource development. Given a dictionary of 110k lemmas, a few hundred syntactic analysis rules, 20k n-gram matrices and other resources, what will be the impact on a syntactic analyzer of adding a new possible category to a given verb? What will be the consequences of adding a new syntactic rule? Beyond its intended effect, any modification may have unforeseen side effects, and the complexity of the system makes it difficult to predict the overall impact of even small changes. We present a framework designed to improve the accuracy of our linguistic analyzer LIMA through iterative refinement of its linguistic resources. Each refinement is continuously assessed by evaluating the analyzer's performance against a reference corpus. Our first results show that this framework effectively supports this goal.
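The abstract describes a regression-style workflow: after each resource change, the analyzer is re-run over a reference corpus and the resulting accuracy is compared with the previous run. The sketch below is a minimal, hypothetical illustration of that loop; it does not use the LIMA API, and the `analyze` callables and corpus format are assumptions for illustration only.

```python
# Hypothetical sketch of a continuous-evaluation loop (not the authors' LIMA code):
# re-run an analyzer over a gold reference corpus after a resource change and
# report the accuracy delta, so side effects of small edits become visible.
from typing import Callable, Sequence, Tuple


def accuracy(analyze: Callable[[str], str],
             corpus: Sequence[Tuple[str, str]]) -> float:
    """Fraction of sentences whose analysis matches the gold annotation."""
    if not corpus:
        return 0.0
    correct = sum(1 for sentence, gold in corpus if analyze(sentence) == gold)
    return correct / len(corpus)


def evaluate_change(analyze_before: Callable[[str], str],
                    analyze_after: Callable[[str], str],
                    corpus: Sequence[Tuple[str, str]]) -> float:
    """Compare analyzer accuracy before and after a resource modification."""
    before = accuracy(analyze_before, corpus)
    after = accuracy(analyze_after, corpus)
    print(f"accuracy: {before:.3f} -> {after:.3f} (delta {after - before:+.3f})")
    return after - before
```

In practice the two callables would wrap the analyzer configured with the old and the new resources, and the delta (overall or per diagnostic category) tells the resource developer whether the change helped or introduced regressions.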
Citation
de Chalendar, G., & Nouvel, D. (2009). Modular resource development and diagnostic evaluation framework for fast NLP system improvement. In NAACL HLT 2009 - Software Engineering, Testing, and Quality Assurance for Natural Language Processing, SETQA-NLP 2009 - Proceedings of the Workshop (pp. 65–73). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1621947.1621958