In recent years, generative adversarial networks (GANs) have begun to achieve promising results in natural language generation as well. However, existing models have paid limited attention to the semantic coherence of the generated sentences. For this reason, in this paper we propose a novel network - the Controlled TExt generation Relational Memory GAN (CTERM-GAN) - that uses an external input to condition the coherence of sentence generation. The network is composed of three main components: a generator based on a Relational Memory conditioned on the external input; a syntactic discriminator, which learns to distinguish real from generated sentences; and a semantic discriminator, which assesses coherence with the external conditioning. Our experiments on six probing datasets show that the model retains or improves the syntactic quality of the generated sentences while significantly improving their semantic coherence with the given input.
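The sketch below illustrates the three-component layout described in the abstract: a conditioned generator, a syntactic discriminator, and a semantic discriminator. It is a minimal PyTorch approximation for illustration only; the module names, layer sizes, and the GRU used as a stand-in for the Relational Memory core are assumptions and do not reproduce the authors' implementation.

```python
# Minimal architectural sketch of the three components described above (PyTorch).
# All names and sizes are illustrative assumptions; the GRU stands in for the
# Relational Memory core used in the paper.
import torch
import torch.nn as nn


class ConditionedGenerator(nn.Module):
    """Autoregressive generator whose hidden state is initialised from an
    external conditioning vector (stand-in for the Relational Memory core)."""

    def __init__(self, vocab_size, emb_dim=128, mem_dim=256, cond_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.cond_proj = nn.Linear(cond_dim, mem_dim)  # inject the external input
        self.rnn = nn.GRU(emb_dim, mem_dim, batch_first=True)
        self.out = nn.Linear(mem_dim, vocab_size)

    def forward(self, tokens, condition):
        h0 = torch.tanh(self.cond_proj(condition)).unsqueeze(0)  # (1, B, mem_dim)
        hidden, _ = self.rnn(self.embed(tokens), h0)
        return self.out(hidden)  # per-token vocabulary logits


class SyntacticDiscriminator(nn.Module):
    """Scores whether a token sequence looks like a real sentence."""

    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.score = nn.Linear(hid_dim, 1)

    def forward(self, tokens):
        _, h = self.encoder(self.embed(tokens))
        return self.score(h.squeeze(0))  # real/generated logit per sentence


class SemanticDiscriminator(nn.Module):
    """Scores the coherence of a sentence with the external conditioning vector."""

    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, cond_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.score = nn.Linear(hid_dim + cond_dim, 1)

    def forward(self, tokens, condition):
        _, h = self.encoder(self.embed(tokens))
        return self.score(torch.cat([h.squeeze(0), condition], dim=-1))


# Toy forward pass: batch of 4 sentences of length 10, 256-dim conditioning.
vocab, cond_dim = 1000, 256
gen, d_syn, d_sem = ConditionedGenerator(vocab), SyntacticDiscriminator(vocab), SemanticDiscriminator(vocab)
tokens = torch.randint(0, vocab, (4, 10))
condition = torch.randn(4, cond_dim)
logits = gen(tokens, condition)       # (4, 10, vocab)
syn_score = d_syn(tokens)             # (4, 1) real-vs-generated score
sem_score = d_sem(tokens, condition)  # (4, 1) coherence-with-condition score
```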
Betti, F., Ramponi, G., & Piccardi, M. (2020). Controlled Text Generation with Adversarial Learning. In INLG 2020 - 13th International Conference on Natural Language Generation, Proceedings (pp. 29–34). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.inlg-1.5