Brush Your Text: Synthesize Any Scene Text on Images via Diffusion Model


Abstract

Diffusion-based image generation methods have recently been credited with remarkable text-to-image generation capabilities, yet they still struggle to accurately generate multilingual scene text images. To tackle this problem, we propose Diff-Text, a training-free scene text generation framework for any language. Given text in any language together with a textual description of a scene, our model outputs a photo-realistic image. The model leverages rendered sketch images as priors, thereby unlocking the latent multilingual generation ability of pre-trained Stable Diffusion. Based on the observation that the cross-attention map influences object placement in generated images, we propose a localized attention constraint in the cross-attention layers to address the unreasonable positioning of scene text. Additionally, we introduce contrastive image-level prompts to further refine the position of the textual region and achieve more accurate scene text generation. Experiments demonstrate that our method outperforms existing methods in both the accuracy of text recognition and the naturalness of foreground-background blending. Code: https://github.com/ecnuljzhang/brush-your-text.
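
To make the localized attention constraint concrete, the sketch below illustrates one plausible reading of the idea: cross-attention scores between image-patch queries and the prompt tokens describing the scene text are masked so that those tokens can only be attended to inside the region given by the rendered sketch. This is a minimal PyTorch illustration under stated assumptions, not the paper's implementation; the function name, text_region_mask, and text_token_ids are hypothetical and introduced here only for exposition.

    import torch
    import torch.nn.functional as F

    def localized_cross_attention(q, k, v, text_region_mask, text_token_ids):
        """Scaled dot-product cross-attention with a localized constraint.

        q:                (B, N_img, d) image-patch queries
        k, v:             (B, N_txt, d) prompt-token keys / values
        text_region_mask: (B, N_img)    1 inside the sketch's text region, else 0
        text_token_ids:   indices of prompt tokens that describe the scene text
        """
        scale = q.shape[-1] ** -0.5
        attn = torch.einsum("bnd,bmd->bnm", q, k) * scale  # (B, N_img, N_txt)

        # Forbid the scene-text tokens from receiving attention from image
        # patches that lie outside the rendered-sketch text region.
        bias = torch.zeros_like(attn)
        outside = (text_region_mask == 0).unsqueeze(-1)  # (B, N_img, 1)
        bias[..., text_token_ids] = bias[..., text_token_ids].masked_fill(
            outside, float("-inf")
        )

        attn = F.softmax(attn + bias, dim=-1)
        return torch.einsum("bnm,bmd->bnd", attn, v)

In the actual framework, a constraint of this kind would be applied inside the denoising U-Net's cross-attention layers at each diffusion step, in combination with the sketch-image prior; consult the linked repository for the authors' implementation.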

Citation (APA)

Zhang, L., Chen, X., Wang, Y., Lu, Y., & Qiao, Y. (2024). Brush Your Text: Synthesize Any Scene Text on Images via Diffusion Model. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 7215–7223). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i7.28550
