Text generation is a fundamental task in natural language processing and plays an important role in dialogue systems and machine translation. As a deep learning framework, the Generative Adversarial Network (GAN) has been widely applied to text generation. When combined with reinforcement learning, a GAN uses the discriminator's output as the reward signal that guides generator training, but this reward is a single scalar and therefore provides only weak guidance. This paper proposes a text generation model named Feature-Guiding Generative Adversarial Networks (FGGAN). To address the insufficient feedback guidance from the discriminator network, FGGAN uses a feature guidance module that extracts text features from the discriminator network, converts them into feature guidance vectors, and feeds them into the generator network as guidance. In addition, during text generation a partial sequence must be completed by sampling before it can be fed into the discriminator to obtain a feedback signal, and the randomness and insufficiency of this sampling degrade the quality of the generated text. This paper therefore formulates text semantic rules that restrict the candidate token at the next time step during sequence generation and remove semantically unreasonable tokens, improving the quality of the generated text. Finally, text generation experiments on different datasets verify the effectiveness and superiority of FGGAN.
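The first idea described in the abstract is to supplement the scalar reward with a richer guidance signal taken from the discriminator's internal text features. The sketch below illustrates that idea in PyTorch; the class names, layer choices, and dimensions are assumptions made for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of feature guidance: the discriminator exposes an
# intermediate text-feature vector, a small module maps it to a guidance
# vector, and the generator consumes that vector at every decoding step.

class Discriminator(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, feat_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, feat_dim, batch_first=True)
        self.cls = nn.Linear(feat_dim, 1)  # real/fake score (the scalar reward)

    def forward(self, tokens):
        _, h = self.encoder(self.emb(tokens))   # h: (1, batch, feat_dim)
        features = h.squeeze(0)                 # intermediate text features
        return torch.sigmoid(self.cls(features)), features

class FeatureGuidance(nn.Module):
    """Maps discriminator features to a guidance vector for the generator."""
    def __init__(self, feat_dim=128, guide_dim=64):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(feat_dim, guide_dim), nn.Tanh())

    def forward(self, features):
        return self.proj(features)

class Generator(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128, guide_dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRUCell(emb_dim + guide_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def step(self, prev_token, h, guide_vec):
        # Concatenate the token embedding with the guidance vector each step,
        # so the generator receives more than a single scalar reward.
        x = torch.cat([self.emb(prev_token), guide_vec], dim=-1)
        h = self.rnn(x, h)
        return torch.log_softmax(self.out(h), dim=-1), h
```

The second idea, restricting next-token candidates with semantic rules during sampling, can be pictured as masking disallowed tokens before sampling from the generator's output distribution. The rule table `forbidden_next` below is a hypothetical placeholder for whatever semantic rules the paper actually defines (for example, forbidding an immediate repetition or punctuation following punctuation).

```python
import torch

def masked_sample(logits, prev_token, forbidden_next):
    """Sample the next token while removing semantically unreasonable candidates.

    logits:         1-D tensor over the vocabulary for the next time step.
    prev_token:     id of the previously generated token.
    forbidden_next: dict mapping a token id to the set of token ids that the
                    semantic rules forbid from following it (illustrative).
    """
    mask = torch.zeros_like(logits, dtype=torch.bool)
    for t in forbidden_next.get(int(prev_token), set()):
        mask[t] = True
    masked_logits = logits.masked_fill(mask, float('-inf'))
    probs = torch.softmax(masked_logits, dim=-1)
    return torch.multinomial(probs, num_samples=1)
```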
Citation:
Yang, Y., Dan, X., Qiu, X., & Gao, Z. (2020). FGGAN: Feature-Guiding Generative Adversarial Networks for Text Generation. IEEE Access, 8, 105217–105225. https://doi.org/10.1109/ACCESS.2020.2993928