This paper addresses the generation of stylized texts in a multilingual setup. A long short-term memory (LSTM) language model with extended phonetic and semantic embeddings is shown to capture poetic style when trained end-to-end without any expert knowledge. Phonetic information appears to contribute to overall model performance to a degree comparable to that of information about the target author. The quality of the generated texts is estimated through bilingual evaluation understudy (BLEU), a new cross-entropy-based metric, and a survey of human peers. When human judges recognize the style of the target author, they do not seem to distinguish generated texts from originals.
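The abstract describes an LSTM language model whose input embeddings are extended with phonetic and author information. The following is a minimal sketch of such an architecture in PyTorch; the layer sizes, vocabulary sizes, and the specific way of concatenating word, phonetic, and author embeddings are illustrative assumptions, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

class StylizedLM(nn.Module):
    """Hypothetical LSTM language model with word, phonetic, and author embeddings."""

    def __init__(self, vocab_size, phoneme_vocab, n_authors,
                 word_dim=128, phon_dim=32, author_dim=32, hidden=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.phon_emb = nn.Embedding(phoneme_vocab, phon_dim)
        self.author_emb = nn.Embedding(n_authors, author_dim)
        # Concatenated embeddings feed a single LSTM layer (assumed depth).
        self.lstm = nn.LSTM(word_dim + phon_dim + author_dim,
                            hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, words, phonemes, author):
        # words, phonemes: (batch, seq_len); author: (batch,)
        a = self.author_emb(author).unsqueeze(1).expand(-1, words.size(1), -1)
        x = torch.cat([self.word_emb(words), self.phon_emb(phonemes), a], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)  # next-token logits over the word vocabulary

# Example forward pass with random indices (illustrative sizes only).
model = StylizedLM(vocab_size=10000, phoneme_vocab=50, n_authors=20)
logits = model(torch.randint(0, 10000, (2, 16)),
               torch.randint(0, 50, (2, 16)),
               torch.tensor([3, 7]))
```

Training such a model end-to-end with a standard cross-entropy objective over next tokens would match the abstract's claim that no expert knowledge is required beyond the extended input representation.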
Yamshchikov, I. P., & Tikhonov, A. (2019). Learning literary style end-to-end with artificial neural networks. Advances in Science, Technology and Engineering Systems, 4(6), 115–125. https://doi.org/10.25046/aj040614