K2: Animated agents that understand speech commands and perform actions


Abstract

This paper presents a prototype dialogue system, K2, in which a user can instruct agents through speech input to manipulate various objects in a 3-D virtual world. The agents' action is presented to the user as an animation. To build such a system, we have to deal with some of the deeper issues of natural language processing such as ellipsis and anaphora resolution, handling vagueness, and so on. In this paper, we focus on three distinctive features of the K2 system: handling ill-formed speech input, plan-based anaphora resolution and handling vagueness in spatial expressions. After an overview of the system architecture, each of these features is described. We also look at the future research agenda of this system. © Springer-Verlag Berlin Heidelberg 2004.

Citation (APA)

Tokunaga, T., Funakoshi, K., & Tanaka, H. (2004). K2: Animated agents that understand speech commands and perform actions. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 3157, pp. 635–643). Springer-Verlag. https://doi.org/10.1007/978-3-540-28633-2_67
