Cosplay has grown from its origins at fan conventions into a billion-dollar global fashion phenomenon. To help fans imagine and reinterpret animated characters as real garments, this paper presents an automatic costume-image generation method based on image-to-image translation. Cosplay items vary widely in style and shape, so conventional methods cannot be directly applied to the broad range of clothing images that are the focus of this study. To address this problem, our method first collects and preprocesses web images to build a cleaned, paired dataset spanning the anime and real domains. We then present a novel generative adversarial network (GAN) architecture for high-quality cosplay image generation. Our GAN incorporates several techniques that bridge the two domains and improve both the global and local consistency of the generated images. Experiments with quantitative evaluation metrics demonstrated that the proposed GAN outperforms conventional methods and produces more realistic images. Our code and pretrained model are available on the web.
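The abstract does not spell out the training objective, but paired image-to-image translation methods of this kind typically combine an adversarial loss with an L1 reconstruction term against the paired target (the pix2pix-style objective). The sketch below is an illustrative assumption, not the paper's actual loss; the function name, the weight `lam`, and the toy NumPy inputs are all hypothetical.

```python
import numpy as np

def paired_translation_loss(d_fake, fake, target, lam=100.0):
    """Illustrative pix2pix-style generator objective (an assumption,
    not the paper's exact formulation).

    d_fake : discriminator scores in (0, 1] for the generated images
    fake   : generated (translated) images
    target : paired ground-truth real-domain images
    lam    : weight on the L1 reconstruction term
    """
    # Adversarial term: the generator is rewarded when the
    # discriminator scores its outputs close to 1 ("real").
    adv = -np.mean(np.log(d_fake + 1e-8))
    # L1 term: keeps the translated image pixel-wise close to the
    # paired target, encouraging global/local consistency.
    l1 = np.mean(np.abs(fake - target))
    return adv + lam * l1
```

With a perfectly fooled discriminator (`d_fake = 1`) and an exact reconstruction, the loss approaches zero; the L1 weight then trades off realism against fidelity to the paired target.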
Tango, K., Katsurai, M., Maki, H., & Goto, R. (2022). Anime-to-real clothing: Cosplay costume generation via image-to-image translation. Multimedia Tools and Applications, 81(20), 29505–29523. https://doi.org/10.1007/s11042-022-12576-x