Describing and animating complex communicative verbal and nonverbal behavior using EVA-framework

Abstract

Multimodal interfaces incorporating embodied conversational agents enable novel interaction-management concepts in responsive human-machine interfaces. Such interfaces provide several additional nonverbal communication channels, such as natural visualized speech, facial expression, and various body motions. Simulating reactive, humanlike communicative behavior and attitude requires combining different behavioral analyses with different realization strategies. This article proposes a novel environment for "online" visual modeling of humanlike communicative behavior, named EVA-framework. The study focuses on visual speech and nonverbal behavior synthesis using hierarchical XML-based behavioral events and expressively adjustable motion templates. The main goal of the presented abstract motion notation scheme, named EVA-Script, is to enable the synthesis of unique and responsive behavior. Copyright © 2014 Taylor & Francis Group, LLC.
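To make the idea of hierarchical XML-based behavioral events with adjustable motion templates concrete, the sketch below shows what such a description *could* look like. This is a hypothetical illustration only — the element and attribute names (`event`, `channel`, `motion-template`, `intensity`, `duration`) are invented for this example and are not the actual EVA-Script syntax, which is defined in the article itself.

```xml
<!-- Hypothetical sketch of a hierarchical behavioral event; not actual EVA-Script syntax. -->
<event id="greeting" start="0.0s">
  <!-- Channels group modality-specific behavior under one event. -->
  <channel type="speech">
    <utterance text="Hello!" viseme-track="auto"/>
  </channel>
  <channel type="face">
    <!-- A reusable motion template, expressively adjusted via attributes. -->
    <motion-template name="smile" intensity="0.7" duration="1.2s"/>
  </channel>
  <channel type="gesture">
    <motion-template name="wave-right-hand" intensity="0.5" duration="1.5s"/>
  </channel>
</event>
```

The hierarchy lets one high-level event coordinate several modalities at once, while the template attributes allow the same motion to be reused with different expressive shadings — the property the abstract describes as "expressively adjustable".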

Citation (APA)

Mlakar, I., Kačič, Z., & Rojc, M. (2014). Describing and animating complex communicative verbal and nonverbal behavior using eva-framework. Applied Artificial Intelligence, 28(5), 470–503. https://doi.org/10.1080/08839514.2014.905819
