Speech-synchronized facial animation that controls only the movement of the mouth is typically perceived as wooden and unnatural. We propose a method to generate additional facial expressions such as movement of the head, the eyes, and the eyebrows fully automatically from the input speech signal. This is achieved by extracting prosodic parameters such as pitch flow and power spectrum from the speech signal and using them to control facial animation parameters in accordance with results from paralinguistic research.
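A minimal sketch of the kind of prosody-to-animation pipeline the abstract describes: framing the speech signal, estimating per-frame power and pitch, and mapping pitch excursions to an expression parameter. The frame sizes, the naive autocorrelation pitch estimator, and the `eyebrow_raise` mapping are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping frames."""
    n = 1 + max(0, len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def short_time_power(frames):
    """Mean squared amplitude per frame (a simple power measure)."""
    return np.mean(frames ** 2, axis=1)

def pitch_autocorr(frame, sr, fmin=60.0, fmax=400.0):
    """Naive autocorrelation pitch estimate in Hz (0.0 if no voiced peak)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), min(int(sr / fmin), len(ac) - 1)
    lag = lo + int(np.argmax(ac[lo:hi + 1]))
    return sr / lag if ac[lag] > 0 else 0.0

def eyebrow_raise(pitch_hz, baseline=120.0, span=100.0):
    """Map pitch excursion above a speaker baseline to [0, 1].
    (Hypothetical mapping; baseline and span are made-up values.)"""
    return float(np.clip((pitch_hz - baseline) / span, 0.0, 1.0))

# Demo on a synthetic 200 Hz tone standing in for a voiced segment.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 200.0 * t)
frames = frame_signal(x, 640, 320)       # 40 ms frames, 20 ms hop
p = pitch_autocorr(frames[0], sr)        # estimated pitch of first frame
raise_amount = eyebrow_raise(p)          # animation parameter in [0, 1]
```

In a full system, per-frame power and pitch contours like these would drive several animation channels at once (head, eyes, eyebrows), with the mapping rules derived from paralinguistic findings rather than the ad hoc linear scaling used here.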
Albrecht, I., Haber, J., & Seidel, H.-P. (2002). Automatic Generation of Non-Verbal Facial Expressions from Speech. In Advances in Modelling, Animation and Rendering (pp. 283–293). Springer London. https://doi.org/10.1007/978-1-4471-0103-1_18