Arbitrary talking face generation via attentional audio-visual coherence learning


Abstract

Talking face generation aims to synthesize a face video with precise lip synchronization and a smooth transition of facial motion over the entire video, from a given speech clip and facial image. Most existing methods focus on either disentangling the information in a single image or learning temporal information between frames. However, cross-modality coherence between the audio and video signals has not been well addressed during synthesis. In this paper, we propose a novel arbitrary talking face generation framework that discovers audio-visual coherence via a proposed Asymmetric Mutual Information Estimator (AMIE). In addition, we propose a Dynamic Attention (DA) block that selectively focuses on the lip area of the input image during training, to further enhance lip synchronization. Experimental results on the benchmark LRW and GRID datasets surpass state-of-the-art methods on prevalent metrics, with robust high-resolution synthesis across gender and pose variations.
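The paper's AMIE is not specified in this abstract; as a rough intuition for how audio-visual coherence can be measured via mutual information, the sketch below estimates a MINE-style Donsker-Varadhan lower bound on I(audio; visual) from paired versus shuffled feature samples. The toy features, the fixed bilinear critic, and the scaling constant are all hypothetical stand-ins (in practice the critic would be a trained network, and the authors' estimator is asymmetric, which this generic bound is not).

```python
import numpy as np

def dv_lower_bound(t_joint, t_marginal):
    """Donsker-Varadhan lower bound on mutual information:
    I(A; V) >= E_joint[T] - log E_marginal[exp(T)]."""
    return t_joint.mean() - np.log(np.exp(t_marginal).mean())

rng = np.random.default_rng(0)

# Toy 1-D "audio" and "visual" features: v is correlated with a,
# so paired samples come from the joint distribution, while
# shuffling v simulates the product of marginals.
a = rng.normal(size=1000)
v = a + 0.1 * rng.normal(size=1000)

# Hypothetical fixed critic T(a, v); a real AMIE would learn this.
critic = lambda x, y: 0.5 * x * y

t_joint = critic(a, v)                    # coherent audio-visual pairs
t_marginal = critic(a, rng.permutation(v))  # mismatched pairs

mi_estimate = dv_lower_bound(t_joint, t_marginal)
# A positive estimate indicates the critic detects coherence
# between the paired audio and visual features.
```

Maximizing such a bound with respect to the critic (and the generator's features) encourages the synthesized lip motion to stay statistically coherent with the driving audio.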

Citation (APA)

Zhu, H., Huang, H., Li, Y., Zheng, A., & He, R. (2020). Arbitrary talking face generation via attentional audio-visual coherence learning. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 2362–2368). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/327
