Scalable Font Reconstruction with Dual Latent Manifolds


Abstract

We propose a deep generative model that performs typography analysis and font reconstruction by learning disentangled manifolds of both font style and character shape. Our approach enables us to massively scale up the number of character types we can effectively model compared to previous methods. Specifically, we infer separate latent variables representing character and font via a pair of inference networks which take as input sets of glyphs that either all share a character type, or belong to the same font. This design allows our model to generalize to characters that were not observed during training time, an important task in light of the relative sparsity of most fonts. We also put forward a new loss, adapted from prior work that measures likelihood using an adaptive distribution in a projected space, resulting in more natural images without requiring a discriminator. We evaluate on the task of font reconstruction over various datasets representing character types of many languages, and compare favorably to modern style transfer systems according to both automatic and manually-evaluated metrics.
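The core architectural idea above can be illustrated with a minimal sketch: two permutation-invariant set encoders, one over glyphs sharing a character type and one over glyphs sharing a font, each producing a latent vector; a decoder then combines the two latents to reconstruct a glyph. The mean-pooled MLP encoders, latent sizes, and flattened-bitmap glyph representation below are illustrative assumptions, not the paper's actual architecture, and the weights are untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8   # hypothetical latent size
GLYPH_DIM = 64   # hypothetical flattened glyph size (e.g. 8x8 bitmaps)

def set_encoder(glyphs, weights):
    """Permutation-invariant set encoder: embed each glyph, then mean-pool.

    `glyphs` is an (n, GLYPH_DIM) array whose rows all share either a
    character type or a font; pooling over the set yields one latent.
    """
    hidden = np.tanh(glyphs @ weights)  # per-glyph embedding
    return hidden.mean(axis=0)          # mean-pool over the set

def decoder(z_char, z_font, weights):
    """Combine the character and font latents, project back to glyph space."""
    z = np.concatenate([z_char, z_font])
    return np.tanh(z @ weights)

# Randomly initialized (untrained) parameters for the sketch.
W_char = rng.normal(size=(GLYPH_DIM, LATENT_DIM))
W_font = rng.normal(size=(GLYPH_DIM, LATENT_DIM))
W_dec = rng.normal(size=(2 * LATENT_DIM, GLYPH_DIM))

# One set of glyphs of the same character (drawn from different fonts),
# and one set of glyphs from the same font (different characters).
same_char_glyphs = rng.normal(size=(5, GLYPH_DIM))
same_font_glyphs = rng.normal(size=(3, GLYPH_DIM))

z_char = set_encoder(same_char_glyphs, W_char)
z_font = set_encoder(same_font_glyphs, W_font)
glyph = decoder(z_char, z_font, W_dec)
print(glyph.shape)  # one reconstructed glyph of shape (GLYPH_DIM,)
```

Because the encoders pool over their input sets, either latent can be inferred from whatever glyphs happen to be available, which is what lets the model handle characters unseen at training time.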

Citation (APA)

Srivatsan, N., Wu, S., Barron, J. T., & Berg-Kirkpatrick, T. (2021). Scalable Font Reconstruction with Dual Latent Manifolds. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 3060–3072). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.244
