A qualitative evaluation framework for paraphrase identification

Citations: 10 · Mendeley readers: 65

Abstract

In this paper, we present a new approach for the evaluation, error analysis, and interpretation of supervised and unsupervised Paraphrase Identification (PI) systems. Our evaluation framework uses a PI corpus annotated with linguistic phenomena to provide a better understanding and interpretation of the performance of various PI systems. Our approach allows for a qualitative evaluation and comparison of the PI models using human-interpretable categories. It does not require modification of the training objective of the systems and does not place an additional burden on developers. We replicate several popular supervised and unsupervised PI systems. Using our evaluation framework we show that: (1) each system performs differently with respect to a set of linguistic phenomena and makes qualitatively different kinds of errors; (2) some linguistic phenomena are more challenging than others across all systems.
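
The paper does not include code in this abstract; the sketch below is only a minimal illustration of the general idea of a phenomenon-level evaluation, assuming a corpus where each sentence pair carries gold and predicted paraphrase labels plus a list of annotated linguistic phenomena. The field names (`gold_label`, `predicted_label`, `phenomena`) are hypothetical and not taken from the paper.

```python
# Illustrative sketch (not the authors' implementation): break down a PI
# system's accuracy by the linguistic phenomena annotated on each pair.
from collections import defaultdict

def per_phenomenon_accuracy(examples):
    """examples: iterable of dicts with hypothetical keys 'gold_label',
    'predicted_label', and 'phenomena' (list of phenomenon names)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for ex in examples:
        hit = ex["predicted_label"] == ex["gold_label"]
        for phenomenon in ex["phenomena"]:
            total[phenomenon] += 1
            correct[phenomenon] += int(hit)
    # Accuracy per human-interpretable category.
    return {p: correct[p] / total[p] for p in total}

# Example usage with toy annotations.
corpus = [
    {"gold_label": 1, "predicted_label": 1, "phenomena": ["synonymy", "word order"]},
    {"gold_label": 1, "predicted_label": 0, "phenomena": ["negation"]},
    {"gold_label": 0, "predicted_label": 0, "phenomena": ["negation", "ellipsis"]},
]
print(per_phenomenon_accuracy(corpus))
```

Comparing these per-phenomenon scores across systems surfaces qualitatively different error profiles without changing any system's training objective, which is the kind of analysis the framework is designed to support.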

Cite (APA)

Kovatchev, V., Antònia Martí, M., Salamó, M., & Beltran, J. (2019). A qualitative evaluation framework for paraphrase identification. In International Conference Recent Advances in Natural Language Processing, RANLP (Vol. 2019-September, pp. 568–577). Incoma Ltd. https://doi.org/10.26615/978-954-452-056-4_067
