RewriteLM: An Instruction-Tuned Large Language Model for Text Rewriting

Citations: 3 · Mendeley readers: 20

Abstract

Large Language Models (LLMs) have demonstrated impressive capabilities in creative tasks such as storytelling and email generation. However, because LLMs are primarily trained on final text rather than intermediate revisions, text rewriting tasks can be challenging for them. Most prior studies of rewriting focus on a single transformation type within the boundaries of a single sentence. In this work, we develop new strategies for instruction tuning and reinforcement learning to better align LLMs for cross-sentence rewriting tasks with diverse wording and structures expressed through natural language, including: 1) generating rewriting instruction data from Wiki edits and public corpora through instruction generation and chain-of-thought prompting; and 2) collecting comparison data for reward model training through a new ranking function. To facilitate this research, we introduce OPENREWRITEEVAL, a novel benchmark that covers a wide variety of rewriting types expressed through natural language instructions. Our results show significant improvements over a variety of baselines.
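The abstract mentions collecting comparison data for reward-model training via a ranking function. A minimal sketch of how ranked rewrite candidates can be turned into pairwise (chosen, rejected) training examples is shown below; the scoring heuristic is an illustrative placeholder (the paper's actual ranking function is not given in this abstract), and all function names are hypothetical.

```python
# Hypothetical sketch: turning ranked rewrite candidates into pairwise
# comparison data for reward-model training. The scoring heuristic
# (length-normalized edit distance from the source) is an illustrative
# placeholder, NOT the paper's actual ranking function.
from itertools import combinations


def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]


def score(source: str, rewrite: str) -> float:
    # Placeholder ranking: prefer rewrites that change the source
    # substantially while penalizing large length deviations.
    dist = edit_distance(source, rewrite)
    length_penalty = abs(len(rewrite) - len(source)) / max(len(source), 1)
    return dist / max(len(source), 1) - length_penalty


def make_comparison_pairs(source, candidates):
    # Sort candidates by the placeholder score, then emit
    # (chosen, rejected) pairs following the ranking order.
    ranked = sorted(candidates, key=lambda c: score(source, c), reverse=True)
    return list(combinations(ranked, 2))


source = "the cat sat"
candidates = ["the cat sat", "a cat was sitting down", "the feline sat"]
pairs = make_comparison_pairs(source, candidates)  # 3 (chosen, rejected) pairs
```

In a real pipeline, each pair would be fed to a reward model trained to score the chosen rewrite above the rejected one.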

Citation (APA)

Shu, L., Luo, L., Hoskere, J., Zhu, Y., Liu, Y., Tong, S., … Meng, L. (2024). RewriteLM: An Instruction-Tuned Large Language Model for Text Rewriting. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 18970–18980). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i17.29863
