Cognitive Principles in Robust Multimodal Interpretation

13 citations · 18 Mendeley readers

Abstract

Multimodal conversational interfaces provide a natural means for users to communicate with computer systems through multiple modalities such as speech and gesture. To build effective multimodal interfaces, automated interpretation of user multimodal inputs is important. Inspired by previous investigations of cognitive status in multimodal human-machine interaction, we have developed a greedy algorithm for interpreting user referring expressions (i.e., multimodal reference resolution). This algorithm incorporates the cognitive principles of Conversational Implicature and the Givenness Hierarchy and applies constraints from various sources (e.g., temporal, semantic, and contextual) to resolve references. Our empirical results have shown the advantage of this algorithm in efficiently resolving a variety of user references. Because of its simplicity and generality, this approach has the potential to improve the robustness of multimodal input interpretation. © 2006 AI Access Foundation. All rights reserved.
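The greedy strategy described in the abstract — filtering candidates by semantic constraints, then preferring referents that are temporally close to a gesture and high on the Givenness Hierarchy — can be sketched roughly as follows. All class names, weights, and the scoring function here are illustrative assumptions, not the paper's actual implementation:

```python
# Hedged sketch of greedy multimodal reference resolution.
# The statuses, scoring weights, and data model are assumptions for
# illustration; the paper's algorithm is more elaborate.
from dataclasses import dataclass
from typing import Optional

# Givenness Hierarchy statuses, most accessible first (simplified).
GIVENNESS = ["in-focus", "activated", "familiar", "uniquely-identifiable"]

@dataclass
class Candidate:
    name: str
    semantic_type: str                     # e.g. "house"
    status: str                            # cognitive status from GIVENNESS
    gesture_time: Optional[float] = None   # time of a pointing gesture, if any

@dataclass
class Expression:
    text: str
    semantic_type: str                     # type constraint from the phrase
    speech_time: float                     # when the phrase was uttered

def score(expr: Expression, cand: Candidate) -> float:
    """Combine semantic, contextual, and temporal constraints."""
    if cand.semantic_type != expr.semantic_type:
        return 0.0                         # semantic constraint: hard filter
    # Contextual constraint: prefer candidates with higher cognitive status.
    s = 1.0 / (1 + GIVENNESS.index(cand.status))
    # Temporal constraint: prefer gestures close in time to the speech.
    if cand.gesture_time is not None:
        s += 1.0 / (1 + abs(cand.gesture_time - expr.speech_time))
    return s

def resolve(expressions, candidates):
    """Greedily assign each expression the best-scoring unused candidate."""
    assignments, used = {}, set()
    for expr in sorted(expressions, key=lambda e: e.speech_time):
        best = max((c for c in candidates if c.name not in used),
                   key=lambda c: score(expr, c), default=None)
        if best is not None and score(expr, best) > 0:
            assignments[expr.text] = best.name
            used.add(best.name)
    return assignments
```

For example, if the user says "this house" while pointing near an in-focus house object, that object outscores a merely familiar house and any semantically incompatible candidates, so the greedy pass binds the expression to it.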

Citation (APA)

Chai, J. Y., Prasov, Z., & Qu, S. (2006). Cognitive principles in robust multimodal interpretation. Journal of Artificial Intelligence Research, 27, 55–83. https://doi.org/10.1613/jair.1936
