Predicting Co-verbal Gestures

  • Brinkman, W. P.
  • Broekens, J.
  • Heylen, D.
ISSN: 1611-3349
Citations: N/A
Readers: 1 (Mendeley users who have this article in their library)

Abstract

We present a dialogue architecture that addresses perception, planning, and execution of multimodal dialogue behavior. Motivated by real-time human performance and modular architectural principles, the architecture is full-duplex (“open-mic”): prosody is continuously analyzed and used for mixed-control turn-taking behaviors (reactive and deliberative) and incremental utterance production. The architecture is fine-grained and highly expandable; we are currently applying it to more complex multimodal interaction and dynamic task environments. We describe the theoretical underpinnings of the architecture, compare it to prior efforts, discuss the methodology, and give a brief overview of its current runtime characteristics.

Citation (APA)

Brinkman, W. P., Broekens, J., & Heylen, D. (2015). Predicting Co-verbal Gestures. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2792, pp. 152–166). Retrieved from http://link.springer.com/10.1007/b12026
