We present work in progress on (verbal, facial, and gestural) modality selection in an embodied multilingual and multicultural conversation agent. In contrast to most recent proposals, which treat non-verbal behavior as superimposed on or derived from the verbal modality, we argue for a holistic model that assigns modalities to individual content elements in accordance with semantic and contextual constraints as well as with the cultural and personal characteristics of the addressee. Our model is thus in line with the SAIBA framework, although methodological differences become apparent at a more fine-grained level of realization.
Ten-Ventura, C., Carlini, R., Dasiopoulou, S., Llorach Tó, G., & Wanner, L. (2017). Towards reasoned modality selection in an embodied conversation agent. In Lecture Notes in Computer Science (Vol. 10498 LNAI, pp. 423–432). Springer. https://doi.org/10.1007/978-3-319-67401-8_52