Multimodal and mobile conversational Health and Fitness Companions

37 citations · 95 Mendeley readers
Abstract

Multimodal spoken conversational dialogues using physical and virtual agents provide a potential interface for motivating and supporting users in the domain of health and fitness. This paper describes how such multimodal conversational Companions can be implemented to support their owners in various pervasive and mobile settings. We present concrete system architectures; virtual, physical, and mobile multimodal interfaces; and interaction management techniques for such Companions. In particular, we show how knowledge representation and the separation of low-level interaction modelling from high-level reasoning at the domain level make it possible to implement distributed, yet coherent, interaction with Companions. The distribution is enabled by using a dialogue plan to communicate information from the domain-level planner to the dialogue manager, and from there to a separate mobile interface. This model enables each part of the system to handle the same information from its own perspective without duplicating logic, and makes it possible to separate task-specific and conversational dialogue management from each other. In addition to the technical descriptions, results from the first evaluations of the Companion interfaces are presented. © 2010 Elsevier Ltd. All rights reserved.
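The architecture sketched in the abstract, where a domain-level planner produces a dialogue plan that the dialogue manager and a separate mobile interface each consume from their own perspective, can be illustrated with a minimal sketch. All class and field names below (`DomainPlanner`, `DialogueManager`, `PlanStep`, etc.) are hypothetical, since the paper's actual data structures are not given here; the point is only the separation of responsibilities the abstract describes.

```python
from dataclasses import dataclass

@dataclass
class PlanStep:
    topic: str   # domain-level item, e.g. "log_exercise"
    prompt: str  # what the Companion should ask or say

@dataclass
class DialoguePlan:
    steps: list  # ordered PlanSteps produced by the domain planner

class DomainPlanner:
    """High-level domain reasoning: decides WHAT to discuss, not how."""
    def plan(self) -> DialoguePlan:
        # Illustrative fixed plan; a real planner would reason over user data.
        return DialoguePlan(steps=[
            PlanStep("greeting", "Good morning! Ready to review your fitness goals?"),
            PlanStep("log_exercise", "Did you go for your planned 30-minute walk?"),
        ])

class DialogueManager:
    """Low-level interaction modelling: walks the shared plan turn by turn,
    without duplicating the planner's domain logic."""
    def __init__(self, plan: DialoguePlan):
        self.plan = plan
        self.index = 0

    def next_turn(self):
        if self.index >= len(self.plan.steps):
            return None
        step = self.plan.steps[self.index]
        self.index += 1
        return step.prompt

class MobileInterface:
    """Separate front end: renders the same plan from its own perspective."""
    def render(self, utterance: str) -> str:
        return f"[mobile UI] {utterance}"

# Wire the three parts together: planner -> dialogue manager -> interface.
dm = DialogueManager(DomainPlanner().plan())
ui = MobileInterface()
turns = []
while (utterance := dm.next_turn()) is not None:
    turns.append(ui.render(utterance))
```

Because the plan is the only artifact shared between the components, each part can run distributed (e.g. the interface on a phone, the planner on a server) while the interaction stays coherent, which is the design property the abstract emphasizes.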

Citation (APA)

Turunen, M., Hakulinen, J., Ståhl, O., Gambäck, B., Hansen, P., Rodríguez Gancedo, M. C., … Cavazza, M. (2011). Multimodal and mobile conversational Health and Fitness Companions. Computer Speech and Language, 25(2), 192–209. https://doi.org/10.1016/j.csl.2010.04.004
