DeepFacePencil: Creating Face Images from Freehand Sketches


Abstract

In this paper, we explore the task of generating photo-realistic face images from hand-drawn sketches. Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision, and they typically use synthesized edge maps of face images as training data. However, these synthesized edge maps align strictly with the edges of the corresponding face images, which limits generalization to real hand-drawn sketches with their vast stroke diversity. To address this problem, we propose DeepFacePencil, an effective tool that generates photo-realistic face images from hand-drawn sketches, based on a novel dual-generator image translation network used during training. A novel spatial attention pooling (SAP) module is designed to adaptively handle spatially varying stroke distortions, supporting diverse stroke styles and different levels of detail. Extensive experiments demonstrate the superiority of our model over existing methods in both image quality and generalization to hand-drawn sketches.
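To make the idea of spatial attention pooling concrete, the following is a minimal NumPy sketch of attention-weighted pooling over local windows. It is an illustration only: the attention logits here are simply the channel-mean response (an assumption; the paper learns its attention weights end-to-end inside the dual-generator network), but it shows how a pooling operator can adapt per location instead of applying a fixed max or average, which is the mechanism that lets SAP tolerate locally distorted strokes.

```python
import numpy as np

def spatial_attention_pool(features, kernel=3):
    """Hypothetical sketch of spatial attention pooling (SAP).

    features: (C, H, W) feature map, e.g. from a sketch encoder.
    Within each kernel x kernel window, features are aggregated with
    softmax attention weights instead of plain max/average pooling,
    so aggregation adapts to spatially varying stroke distortions.
    """
    C, H, W = features.shape
    pad = kernel // 2
    padded = np.pad(features, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    # Attention logits per spatial location: channel-mean response here
    # (an assumption; the real module would learn this mapping).
    logits = padded.mean(axis=0)
    out = np.zeros_like(features)
    for i in range(H):
        for j in range(W):
            win_feat = padded[:, i:i + kernel, j:j + kernel].reshape(C, -1)
            win_log = logits[i:i + kernel, j:j + kernel].reshape(-1)
            w = np.exp(win_log - win_log.max())   # stable softmax
            w /= w.sum()
            out[:, i, j] = win_feat @ w           # attention-weighted sum
    return out
```

Because the softmax weights are convex, each output value stays within the range of its local window, unlike a learned convolution, which keeps the operator stable under noisy strokes.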

Citation (APA)

Li, Y., Chen, X., Yang, B., Chen, Z., Cheng, Z., & Zha, Z. J. (2020). DeepFacePencil: Creating Face Images from Freehand Sketches. In MM 2020 - Proceedings of the 28th ACM International Conference on Multimedia (pp. 991–999). Association for Computing Machinery, Inc. https://doi.org/10.1145/3394171.3413684
