Unified Application of Style Transfer for Face Swapping and Reenactment


Abstract

Face reenactment and face swapping have attracted considerable attention due to their broad range of applications in computer vision. Although both tasks share similar objectives (e.g., manipulating expression and pose), existing methods do not exploit the benefits of combining them. In this paper, we introduce a unified end-to-end pipeline for face swapping and reenactment. We propose a novel approach to learning disentangled representations of specific visual attributes in an unsupervised manner. A combination of the proposed training losses allows us to synthesize results in a one-shot manner, without subject-specific training. We compare our method against state-of-the-art methods on multiple public datasets of varying complexity. The proposed method outperforms other SOTA methods in terms of the realism of the synthesized face images.
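The abstract does not detail the style-transfer mechanism used for attribute manipulation. In style-transfer-based face synthesis pipelines, adaptive instance normalization (AdaIN) is a common building block for re-targeting the channel-wise statistics of one face's features to another's. The following NumPy sketch is purely illustrative (it is not the authors' implementation; the function name and tensor layout are assumptions):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization (illustrative sketch).

    Normalizes the content feature map channel-wise, then rescales it
    with the style feature map's channel-wise mean and std.
    content, style: float arrays of shape (C, H, W).
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # eps guards against division by zero for constant channels
    return s_std * (content - c_mean) / (c_std + eps) + s_mean

# Usage: transfer per-channel statistics of "style" features onto "content"
content = np.random.randn(8, 16, 16).astype(np.float32)
style = 2.0 * np.random.randn(8, 16, 16).astype(np.float32) + 1.0
out = adain(content, style)
```

After this operation, each channel of `out` has (approximately) the mean and standard deviation of the corresponding channel in `style`, while preserving the spatial structure of `content`, which is the intuition behind treating identity or expression attributes as feature statistics.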

Citation (APA)

Ngô, L. M., aan de Wiel, C., Karaoğlu, S., & Gevers, T. (2021). Unified Application of Style Transfer for Face Swapping and Reenactment. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12626 LNCS, pp. 241–257). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-69541-5_15
