Humans make ample use of deictic gesture and spoken reference when referring to perceived phenomena in the spatial environment, such as visible objects, sound sources, tactile objects, or even sources of smell and taste. Developers of multimodal and natural interactive systems are beginning to face the challenges involved in making systems correctly interpret user input belonging to this general class of multimodal references. This paper addresses a first fragment of the general problem, namely spoken and/or 2D on-screen deictic gesture reference to graphics output scenes. The approach is to confront existing, sketchy theory with new data and to generalise the results towards a more comprehensive understanding of the problem. © Springer-Verlag Berlin Heidelberg 2006.
CITATION STYLE
Bernsen, N. O. (2006). Speech and 2D deictic gesture reference to virtual scenes. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4021 LNAI, pp. 129–140). Springer-Verlag. https://doi.org/10.1007/11768029_13