Rapid Diffusion: Building Domain-Specific Text-to-Image Synthesizers with Fast Inference Speed


Abstract

Text-to-Image Synthesis (TIS) aims to generate images based on textual inputs. Recently, several large pre-trained diffusion models have been released that create high-quality images with pre-trained text encoders and diffusion-based image synthesizers. However, popular diffusion-based models from the open-source community cannot support industrial domain-specific applications due to their lack of entity knowledge and low inference speed. In this paper, we propose Rapid Diffusion, a novel framework for training and deploying super-resolution, text-to-image latent diffusion models with rich injected entity knowledge and optimized networks. Furthermore, we employ BladeDISC, an end-to-end Artificial Intelligence (AI) compiler, together with FlashAttention to optimize the computational graphs of the generated models for online deployment. Experiments verify the effectiveness of our approach in terms of image quality and inference speed. In addition, we present industrial use cases and integrate Rapid Diffusion into an AI platform to show its practical value.
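The abstract mentions FlashAttention as one of the inference optimizations. As a hedged illustration only (the paper's actual integration into its diffusion U-Net is not shown here), the sketch below uses PyTorch's `scaled_dot_product_attention`, which dispatches to fused FlashAttention-style kernels when available, and checks it against a naive softmax-attention reference:

```python
import torch
import torch.nn.functional as F

def fused_attention(q, k, v):
    """Fused scaled dot-product attention.

    q, k, v: tensors of shape (batch, heads, seq_len, head_dim).
    PyTorch selects a memory-efficient / FlashAttention backend
    when the hardware and dtypes allow it; otherwise it falls back
    to a plain math implementation with the same result.
    """
    return F.scaled_dot_product_attention(q, k, v)

def naive_attention(q, k, v):
    """Reference implementation: softmax(QK^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / (d ** 0.5)
    return torch.softmax(scores, dim=-1) @ v

# Hypothetical shapes for a small attention layer (illustrative only).
q = torch.randn(1, 8, 64, 64)
k = torch.randn(1, 8, 64, 64)
v = torch.randn(1, 8, 64, 64)

out = fused_attention(q, k, v)
ref = naive_attention(q, k, v)
```

The fused kernel avoids materializing the full attention-score matrix in memory, which is the source of FlashAttention's speed and memory savings at inference time.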

Citation (APA)

Liu, B., Lin, W., Duan, Z., Wang, C., Wu, Z., Zhang, Z., … Huang, J. (2023). Rapid Diffusion: Building Domain-Specific Text-to-Image Synthesizers with Fast Inference Speed. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 5, pp. 295–304). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-industry.28
