Toward RNN Based Micro Non-verbal Behavior Generation for Virtual Listener Agents


Abstract

This work aims to develop a model that generates fine-grained, reactive non-verbal idling behaviors for a virtual listener agent while a human user is talking to it. The target micro behaviors are facial expressions, head movements, and postures. Two research questions then emerge: can these listener behaviors be learned from the corresponding behaviors of the user, and if so, what kind of learning model achieves high precision? We explored two recurrent neural network (RNN) models, the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM), to learn these behaviors from a human-human data corpus of active-listening conversation. The corpus, which we collected ourselves, contains 16 elderly-speaker/young-listener sessions. The results show that the task can be achieved to some degree even with a baseline multi-layer perceptron model, and that the GRU showed the best performance among the three compared architectures.
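To make the GRU model concrete, the following is a minimal, self-contained sketch of a single GRU cell's gating equations in pure Python. It is not the authors' implementation: the scalar hidden state, the placeholder weights, and the example input stream are all illustrative assumptions; a real model would use a deep-learning library with vector-valued states trained on the speaker-behavior features.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class GRUCell:
    """Scalar GRU cell (hidden size 1) illustrating the gating equations.
    All weights are illustrative placeholders, not trained values."""
    def __init__(self, w_z, u_z, w_r, u_r, w_h, u_h):
        self.w_z, self.u_z = w_z, u_z  # update-gate weights
        self.w_r, self.u_r = w_r, u_r  # reset-gate weights
        self.w_h, self.u_h = w_h, u_h  # candidate-state weights

    def step(self, x, h):
        z = sigmoid(self.w_z * x + self.u_z * h)   # update gate
        r = sigmoid(self.w_r * x + self.u_r * h)   # reset gate
        # Candidate state: the reset gate scales how much of the old state is used
        h_tilde = math.tanh(self.w_h * x + self.u_h * (r * h))
        # New state: interpolation between old state and candidate
        return (1.0 - z) * h + z * h_tilde

cell = GRUCell(0.5, 0.3, 0.4, 0.2, 0.8, 0.6)
h = 0.0
for x in [0.1, 0.5, -0.2]:  # e.g. a stream of speaker-behavior features
    h = cell.step(x, h)
```

The update gate lets the cell carry state over long spans of input, which is the property that motivates RNNs over a plain multi-layer perceptron for reactive behavior generation.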

APA

Huang, H. H., Fukuda, M., & Nishida, T. (2019). Toward RNN Based Micro Non-verbal Behavior Generation for Virtual Listener Agents. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11578 LNCS, pp. 53–63). Springer Verlag. https://doi.org/10.1007/978-3-030-21902-4_5
