Modular resource development and diagnostic evaluation framework for fast NLP system improvement

Abstract

Natural Language Processing systems are large-scale software systems whose development requires many person-years of work, both for coding and for resource development. Given a dictionary of 110k lemmas, a few hundred syntactic analysis rules, 20k n-gram matrices, and other resources, what will be the impact on a syntactic analyzer of adding a new possible category to a given verb? What will be the consequences of adding a new syntactic rule? Beyond its intended effect, any modification may have unforeseeable side effects, and the complexity of the system makes it difficult to predict the overall impact of even small changes. We present a framework designed to improve the accuracy of our linguistic analyzer LIMA through iterative refinements of its linguistic resources. These improvements are continuously assessed by evaluating the analyzer's performance against a reference corpus. Our first results show that this framework is genuinely helpful toward this goal.
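The paper itself contains no code, but the core idea of the framework, re-evaluating the analyzer against a gold-annotated reference corpus after every resource change to catch regressions, can be illustrated with a minimal sketch. The file names, the tab-separated token/tag format, and the helper functions (load_annotations, evaluate) below are illustrative assumptions, not the paper's actual tooling:

```python
# Minimal sketch of diagnostic evaluation against a reference corpus.
# File paths and the one-(token, tag)-pair-per-line format are assumptions,
# not the format LIMA actually uses.

from collections import Counter

def load_annotations(path):
    """Read one tab-separated (token, tag) pair per non-empty line."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                token, tag = line.split("\t")
                pairs.append((token, tag))
    return pairs

def evaluate(gold, predicted):
    """Token-level accuracy plus per-tag confusion counts, for diagnosis."""
    assert len(gold) == len(predicted), "corpus alignment mismatch"
    correct = sum(1 for g, p in zip(gold, predicted) if g[1] == p[1])
    confusions = Counter(
        (g[1], p[1]) for g, p in zip(gold, predicted) if g[1] != p[1]
    )
    return correct / len(gold), confusions

# After each resource modification, re-run the analyzer on the reference
# corpus and compare against the gold annotations; a drop in accuracy or
# a new dominant confusion flags an unforeseen side effect.
gold = load_annotations("reference_corpus.gold")
pred = load_annotations("analyzer_output.txt")  # produced by the analyzer
accuracy, confusions = evaluate(gold, pred)

print(f"accuracy: {accuracy:.3%}")
for (gold_tag, pred_tag), n in confusions.most_common(5):
    print(f"  {gold_tag} -> {pred_tag}: {n}")
```

Storing the accuracy and confusion counts from the previous run and diffing them after each change is what turns this one-shot evaluation into the continuous, iterative assessment the abstract describes.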

Cite

APA

de Chalendar, G., & Nouvel, D. (2009). Modular resource development and diagnostic evaluation framework for fast NLP system improvement. In NAACL HLT 2009 - Software Engineering, Testing, and Quality Assurance for Natural Language Processing, SETQA-NLP 2009 - Proceedings of the Workshop (pp. 65–73). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1621947.1621958
