Towards a leaner evaluation process: Application to error correction systems

Abstract

Although they follow similar procedures, evaluations of state-of-the-art error correction systems always rely on different resources (document collections, evaluation metrics, dictionaries, …). As a result, error correction approaches cannot be compared directly without being re-implemented from scratch each time they must be compared with a new one. In other domains, such as Information Retrieval, this problem is solved through Cranfield-like experiments such as the TREC [5] evaluation campaigns. We propose a generic solution to these evaluation difficulties: a modular evaluation platform that formalizes the similarities between evaluation procedures and provides standard sets of instantiated resources for particular domains. In this article, the set of resources is dedicated to the evaluation of error correction systems, our original motivating problem. The idea is to provide the leanest possible way to evaluate an error correction system: implement only the core algorithm and rely on the platform for everything else.
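To make the idea concrete, here is a minimal sketch of what such a plugin-style contract could look like: the system author implements only a `correct` method, while the platform supplies the test collection and the scoring loop. The interface, class names, and toy data below are our own illustration, not the paper's actual platform.

```python
from abc import ABC, abstractmethod

class Corrector(ABC):
    """Hypothetical plugin interface: a new system implements only correct()."""
    @abstractmethod
    def correct(self, token: str) -> str: ...

def evaluate(corrector: Corrector, pairs) -> float:
    """Platform-side loop (invented here): apply the corrector to erroneous
    tokens and score word-level accuracy against the gold corrections."""
    hits = sum(corrector.correct(err) == gold for err, gold in pairs)
    return hits / len(pairs)

# A toy corrector and a tiny gold-standard collection, both made up for
# illustration; a real platform would load a shared standard collection.
class SwapCorrector(Corrector):
    def correct(self, token: str) -> str:
        return {"teh": "the", "adn": "and"}.get(token, token)

gold = [("teh", "the"), ("adn", "and"), ("hte", "the")]
print(evaluate(SwapCorrector(), gold))
```

Under this kind of contract, comparing two systems means swapping in another `Corrector` subclass; the collection and metric stay fixed, which is precisely what makes the results comparable.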

Citation (APA)

Renard, A., Calabretto, S., & Rumpler, B. (2013). Towards a leaner evaluation process: Application to error correction systems. In Lecture Notes in Business Information Processing (Vol. 141, pp. 228–242). Springer Verlag. https://doi.org/10.1007/978-3-642-40654-6_14
