Collaborative Generative AI: Integrating GPT-k for Efficient Editing in Text-to-Image Generation

Abstract

The field of text-to-image (T2I) generation has garnered significant attention from both the research community and everyday users. Despite the advances in T2I models, a common issue users encounter is the need to repeatedly edit input prompts to obtain a satisfactory image, which is time-consuming and labor-intensive. Given the demonstrated text generation capabilities of large-scale language models such as GPT-k, we investigate their potential for improving the prompt editing process in T2I generation. We conduct a series of experiments to compare the typical edits made by humans and by GPT-k, evaluate the performance of GPT-k in prompting T2I models, and examine factors that may influence this process. We find that GPT-k models focus on inserting modifiers, whereas humans tend to replace words and phrases, including changes to the subject matter. Experimental results show that GPT-k models are more effective at adjusting modifiers than at predicting spontaneous changes to the primary subject matter. Adopting the edits suggested by GPT-k models may reduce the percentage of remaining edits by 20-30%.
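The collaboration described in the abstract can be pictured as a simple loop: the user drafts a prompt, a GPT-k model proposes an edited version (typically by inserting or adjusting modifiers), and the revised prompt is passed to the T2I model. The sketch below is a minimal illustration of that loop, not the paper's implementation; the model names ("gpt-4o-mini", "runwayml/stable-diffusion-v1-5") and the instruction template are assumptions chosen for the example.

# Minimal sketch of GPT-assisted prompt editing for T2I generation (illustrative only).
# Assumptions: OPENAI_API_KEY is set, a CUDA GPU is available, and the model
# names below are placeholders rather than the models used in the paper.
import torch
from openai import OpenAI
from diffusers import StableDiffusionPipeline

client = OpenAI()

def suggest_edit(prompt: str) -> str:
    """Ask a GPT-style model for a single revised T2I prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": ("Rewrite the user's text-to-image prompt by inserting or "
                         "adjusting modifiers (style, lighting, level of detail). "
                         "Keep the subject unchanged and return only the revised prompt.")},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content.strip()

def render(prompt: str):
    """Generate an image for the prompt with a public Stable Diffusion checkpoint."""
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    return pipe(prompt).images[0]

if __name__ == "__main__":
    user_prompt = "a cabin in the woods"
    revised = suggest_edit(user_prompt)
    print("Suggested edit:", revised)
    render(revised).save("revised.png")

In a collaborative setting, the user would review the suggested prompt (and the resulting image) and either accept it or continue editing; the paper's reported 20-30% reduction in remaining edits refers to this accept-or-edit stage.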

Citation (APA)

Zhu, W., Wang, X., Lu, Y., Fu, T. J., Wang, X. E., Eckstein, M., & Wang, W. Y. (2023). Collaborative Generative AI: Integrating GPT-k for Efficient Editing in Text-to-Image Generation. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 11113–11122). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.685
