Edit Aware Representation Learning via Levenshtein Prediction

Abstract

We propose a novel approach that employs token-level Levenshtein operations to learn a continuous latent space of vector representations capturing the underlying semantics of the document editing process. Although our model outperforms strong baselines when fine-tuned on edit-centric tasks, it is unclear whether these gains stem from domain similarities between the fine-tuning and pre-training data, suggesting that the benefits of our approach over regular masked language-modelling pre-training are limited.
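
For intuition, the sketch below shows one way token-level Levenshtein operations, as mentioned in the abstract, could be derived from a source/edited token-sequence pair and used as edit-operation labels. The function name, the label set (KEEP, SUB, DEL, INS), and the example sentences are illustrative assumptions for this sketch, not the authors' implementation.

    from typing import List, Tuple

    def token_levenshtein_ops(src: List[str], tgt: List[str]) -> List[Tuple[str, str, str]]:
        """Compute a token-level Levenshtein alignment between two token sequences.

        Returns (operation, source_token, target_token) triples, where operation
        is one of KEEP, SUB, DEL, INS. Illustrative sketch only.
        """
        n, m = len(src), len(tgt)
        # dp[i][j] = minimum number of edits turning src[:i] into tgt[:j]
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = i
        for j in range(1, m + 1):
            dp[0][j] = j
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = 0 if src[i - 1] == tgt[j - 1] else 1
                dp[i][j] = min(
                    dp[i - 1][j] + 1,         # delete a source token
                    dp[i][j - 1] + 1,         # insert a target token
                    dp[i - 1][j - 1] + cost,  # keep or substitute
                )
        # Backtrace to recover the sequence of edit operations.
        ops = []
        i, j = n, m
        while i > 0 or j > 0:
            if (i > 0 and j > 0 and
                    dp[i][j] == dp[i - 1][j - 1] + (0 if src[i - 1] == tgt[j - 1] else 1)):
                ops.append(("KEEP" if src[i - 1] == tgt[j - 1] else "SUB", src[i - 1], tgt[j - 1]))
                i, j = i - 1, j - 1
            elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
                ops.append(("DEL", src[i - 1], ""))
                i -= 1
            else:
                ops.append(("INS", "", tgt[j - 1]))
                j -= 1
        return list(reversed(ops))

    if __name__ == "__main__":
        before = "the model was trained on raw text".split()
        after = "the model is pre-trained on edited text".split()
        for op in token_levenshtein_ops(before, after):
            print(op)

In a pre-training setup of this kind, such per-token operation labels could serve as prediction targets alongside or instead of masked-token targets; the specific training objective used in the paper is described in the full text.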

Citation (APA)

Marrese-Taylor, E., Reid, M., & Solano, A. (2023). Edit Aware Representation Learning via Levenshtein Prediction. In ACL 2023 - 4th Workshop on Insights from Negative Results in NLP, Proceedings (pp. 53–58). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.insights-1.6
