Parallel and High-Fidelity Text-to-Lip Generation

4 citations of this article · 5 Mendeley users have this article in their library.

Abstract

As a key component of talking face generation, lip-movement generation determines the naturalness and coherence of the generated talking face video. Prior literature focuses mainly on speech-to-lip generation, while text-to-lip (T2L) generation remains underexplored. T2L is a challenging task, and existing end-to-end works depend on an attention mechanism and an autoregressive (AR) decoding manner. However, AR decoding generates the current lip frame conditioned on previously generated frames, which inherently limits inference speed and also degrades the quality of the generated lip frames through error propagation. This motivates research on parallel T2L generation. In this work, we propose a parallel decoding model for fast and high-fidelity text-to-lip generation (ParaLip). Specifically, we predict the duration of the encoded linguistic features and model the target lip frames conditioned on the encoded linguistic features and their durations in a non-autoregressive manner. Furthermore, we incorporate the structural similarity index (SSIM) loss and adversarial learning to improve the perceptual quality of the generated lip frames and alleviate the blurry-prediction problem. Extensive experiments on the GRID and TCD-TIMIT datasets demonstrate the superiority of the proposed method.
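The pipeline the abstract describes, encoding the text, predicting a duration for each encoded token, expanding the features to frame rate, and decoding all lip frames at once, can be illustrated with a minimal PyTorch sketch. Everything below (the class names ParaLipSketch and DurationPredictor, the helper length_regulate, and all layer sizes) is an illustrative assumption rather than the authors' exact architecture; positional encodings and masking are omitted for brevity.

import torch
import torch.nn as nn

class DurationPredictor(nn.Module):
    # Predicts, for each encoded text token, how many lip frames it spans.
    def __init__(self, hidden):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, enc):                   # enc: (T_text, hidden)
        return self.net(enc).squeeze(-1)      # (T_text,) log-scale durations

def length_regulate(enc, durations):
    # Repeat each token vector by its duration so the expanded sequence
    # has exactly one vector per target lip frame.
    return torch.repeat_interleave(enc, durations, dim=0)

class ParaLipSketch(nn.Module):
    # Hypothetical layer sizes; not the paper's exact configuration.
    def __init__(self, vocab=64, hidden=256, frame_dim=64 * 64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(hidden, 4, batch_first=True), 2)
        self.duration = DurationPredictor(hidden)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(hidden, 4, batch_first=True), 2)
        self.to_frame = nn.Linear(hidden, frame_dim)   # flattened lip frame

    def forward(self, tokens):                         # tokens: (1, T_text)
        enc = self.encoder(self.embed(tokens))         # (1, T_text, hidden)
        dur = torch.clamp(self.duration(enc[0]).exp().round().long(), min=1)
        expanded = length_regulate(enc[0], dur)        # (T_frames, hidden)
        dec = self.decoder(expanded.unsqueeze(0))      # all frames in parallel
        return self.to_frame(dec)                      # (1, T_frames, frame_dim)

model = ParaLipSketch()
frames = model(torch.randint(0, 64, (1, 12)))  # 12 tokens -> lip frames in one pass

The structural similarity term mentioned in the abstract can be sketched in the same spirit. The version below uses a uniform averaging window instead of the Gaussian window of Wang et al.'s original SSIM, and the window size and stability constants are common defaults rather than values from the paper; in training it would typically be combined with a reconstruction loss and an adversarial loss, e.g. L = L_rec + lambda_ssim * L_ssim + lambda_adv * L_adv, with assumed weights.

import torch.nn.functional as F

def ssim_loss(x, y, win=7, c1=0.01 ** 2, c2=0.03 ** 2):
    # x, y: (N, 1, H, W) lip frames scaled to [0, 1]; local means,
    # variances, and covariance via a uniform averaging window.
    mu_x = F.avg_pool2d(x, win, 1, win // 2)
    mu_y = F.avg_pool2d(y, win, 1, win // 2)
    var_x = F.avg_pool2d(x * x, win, 1, win // 2) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, win // 2) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, win // 2) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1.0 - ssim.mean()  # minimizing this maximizes structural similarity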

Citation (APA)

Liu, J., Zhu, Z., Ren, Y., Huang, W., Huai, B., Yuan, N., & Zhao, Z. (2022). Parallel and High-Fidelity Text-to-Lip Generation. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (Vol. 36, pp. 1738–1746). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v36i2.20066

