Towards Reference-free Text Simplification Evaluation with a BERT Siamese Network Architecture

Abstract

Text simplification (TS) aims to modify sentences to make both their content and structure easier to understand. Traditional n-gram-matching TS evaluation metrics rely heavily on exact token matches and on human-annotated simplified reference sentences. In this paper, we present BETS, a novel neural-network-based reference-free TS metric that leverages pre-trained contextualized language representation models and large-scale paraphrasing datasets to evaluate simplicity and meaning preservation. We show that our metric, without requiring any costly human simplification references, correlates better than existing metrics with human judgments of overall simplification quality (+7.7%) and of its key aspects, i.e., comparative simplicity (+11.2%) and meaning preservation (+9.2%).
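The abstract describes two reference-free scores computed over a (source, simplified) sentence pair: meaning preservation via a Siamese contextual encoder, and comparative simplicity learned from paraphrase data. The sketch below is a toy illustration of that interface only, not the authors' implementation: the deterministic token-hash encoder stands in for a pre-trained BERT Siamese encoder, and the length-based readability proxy stands in for the learned simplicity model.

```python
import math
import random

def embed(sentence, dim=16):
    """Stand-in sentence encoder: deterministic pseudo-random token vectors,
    mean-pooled. In BETS this role is played by a pre-trained contextualized
    (BERT) Siamese encoder; this toy version only mimics the interface."""
    toks = sentence.lower().split()
    pooled = [0.0] * dim
    for tok in toks:
        rng = random.Random(tok)  # seeding by the token string is deterministic
        for i in range(dim):
            pooled[i] += rng.gauss(0.0, 1.0)
    return [x / len(toks) for x in pooled]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def meaning_preservation(source, simplified):
    """Reference-free meaning score: embedding similarity of the pair,
    with no human-written simplification reference involved."""
    return cosine(embed(source), embed(simplified))

def comparative_simplicity(source, simplified):
    """Crude readability proxy (assumption): fewer tokens and shorter words
    read as simpler. BETS instead learns this from large-scale paraphrase
    data. A positive value means the simplified sentence looks simpler."""
    def complexity(s):
        toks = s.split()
        return len(toks) + sum(len(t) for t in toks) / len(toks)
    return complexity(source) - complexity(simplified)
```

For example, `meaning_preservation(s, s)` is 1.0 for any sentence `s`, and `comparative_simplicity("The committee promulgated regulations", "The group made rules")` is positive, since the second sentence uses shorter words.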

Citation (APA)

Zhao, X., Durmus, E., & Yeung, D. Y. (2023). Towards Reference-free Text Simplification Evaluation with a BERT Siamese Network Architecture. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 13250–13264). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-acl.838
