Text-Guided Image Manipulation via Generative Adversarial Network with Referring Image Segmentation-Based Guidance


Abstract

This study proposes a novel text-guided image manipulation method that introduces referring image segmentation into a generative adversarial network. The method aims to manipulate images containing multiple objects while preserving text-unrelated regions. It delegates the task of distinguishing between text-related and text-unrelated regions in an image to segmentation guidance based on referring image segmentation. With this architecture, the generative adversarial network can focus on generating new attributes according to the text description and on reconstructing text-unrelated regions. For challenging input images containing multiple objects, experimental results demonstrate that the proposed method outperforms conventional methods in terms of image manipulation precision.
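The abstract describes combining a segmentation mask with generator output so that text-unrelated pixels are preserved from the input. A minimal sketch of that idea is mask-gated blending; note that the function name, shapes, and blending rule below are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def blend_with_mask(input_img, generated_img, mask):
    """Combine generator output and input image with a soft mask.

    mask values in [0, 1]: 1 marks text-related pixels (taken from
    the generator), 0 marks text-unrelated pixels (copied from the
    input, i.e. preserved). This is a hypothetical illustration of
    segmentation-based guidance, not the paper's exact architecture.
    """
    mask = mask[..., None]  # broadcast mask over RGB channels
    return mask * generated_img + (1.0 - mask) * input_img

# Toy example: 4x4 RGB image; only the top-left 2x2 block is
# "text-related" and should be replaced by generated content.
inp = np.zeros((4, 4, 3))   # original image (all zeros)
gen = np.ones((4, 4, 3))    # generator output (all ones)
m = np.zeros((4, 4))
m[:2, :2] = 1.0             # referring-segmentation mask

out = blend_with_mask(inp, gen, m)
```

In this toy case the masked block takes the generated values while every other pixel remains identical to the input, mirroring the stated goal of preserving text-unrelated regions.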

Citation (APA)

Watanabe, Y., Togo, R., Maeda, K., Ogawa, T., & Haseyama, M. (2023). Text-Guided Image Manipulation via Generative Adversarial Network with Referring Image Segmentation-Based Guidance. IEEE Access, 11, 42534–42545. https://doi.org/10.1109/ACCESS.2023.3269847
