Rapformer: Conditional Rap Lyrics Generation with Denoising Autoencoders

Abstract

The ability to combine symbols to generate language is a defining characteristic of human intelligence, particularly in the context of artistic storytelling through lyrics. We develop a method for synthesizing a rap verse based on the content of any text (e.g., a news article), or for augmenting pre-existing rap lyrics. Our method, called RAPFORMER, trains a Transformer-based denoising autoencoder to reconstruct rap lyrics from content words extracted from the lyrics, preserving the essential meaning while matching the target style. RAPFORMER features a novel BERT-based paraphrasing scheme for rhyme enhancement which increases the average rhyme density of output lyrics by 10%. Experimental results on three diverse input domains show that RAPFORMER is capable of generating technically fluent verses that offer a good trade-off between content preservation and style transfer. Furthermore, a Turing-test-like experiment reveals that RAPFORMER fools human lyrics experts 25% of the time.
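The core training setup described above — reconstructing a verse from its extracted content words — can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the actual content-word extraction, the stopword list, and the example verse lines are all assumptions made here for demonstration, and the Transformer decoder itself is omitted.

```python
# Hedged sketch of preparing "denoising" inputs for a DAE like RAPFORMER:
# keep only content words from each lyric line (assumed here to mean
# non-stopwords) and shuffle them, so the model must learn to reconstruct
# a fluent, stylistically matching line from its bag of content words.
import random

# Small illustrative stopword list (an assumption; the paper's extraction
# procedure may differ, e.g. POS-based filtering).
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "on", "is",
             "are", "it", "i", "you", "my", "me", "with", "for", "at", "that"}

def content_words(line: str) -> list[str]:
    """Return the content words of one lyric line, lowercased and
    stripped of punctuation."""
    tokens = [t.strip(".,!?;:\"'").lower() for t in line.split()]
    return [t for t in tokens if t and t not in STOPWORDS]

def make_dae_input(verse: list[str], seed: int = 0) -> list[list[str]]:
    """Build the noisy encoder input: per-line content words in
    shuffled order. The original verse serves as the decoder target."""
    rng = random.Random(seed)
    noisy = []
    for line in verse:
        words = content_words(line)
        rng.shuffle(words)
        noisy.append(words)
    return noisy

# Hypothetical example verse (not from the paper).
verse = ["I came to win the game tonight",
         "My rhymes stay tight and the flow feels right"]
print(make_dae_input(verse))
```

At inference time, the same extraction step applied to an arbitrary input text (e.g., a news article) yields content words the trained decoder can turn into verse, which is how the method transfers style while preserving content.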

Citation (APA)

Nikolov, N. I., Malmi, E., Northcutt, C. G., & Parisi, L. (2020). Rapformer: Conditional Rap Lyrics Generation with Denoising Autoencoders. In INLG 2020 - 13th International Conference on Natural Language Generation, Proceedings (pp. 360–373). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.inlg-1.42
