Speaking to see: A feasibility study of voice-assisted visual search

Abstract

The paper presents the concept, implementation, and a feasibility study of a user interface technique named VAVS ("voice-assisted visual search"). VAVS employs the user's voice input to assist the user in searching for objects of interest in complex displays. The user's voice input is compared with attributes of visually presented objects and, if there is a match, the matching object is highlighted to help the user visually locate it. The paper discusses the differences between VAVS, on the one hand, and voice commands and multimodal input techniques, on the other. An interactive prototype implementing the VAVS concept and employing a standard voice recognition program is described. The paper reports an empirical study in which an object location task was carried out with and without VAVS. The VAVS condition was found to be associated with higher performance and user satisfaction. The paper concludes with a discussion of directions for future work. © 2011 IFIP International Federation for Information Processing.
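The matching step described in the abstract can be sketched as follows. This is a minimal illustration only: the object data and the function name are hypothetical, and the paper's actual prototype relied on a standard voice recognition program rather than text input.

```python
# Minimal sketch of the VAVS matching idea (hypothetical names; the paper's
# prototype used a standard voice-recognition program, not this code).

def match_spoken_input(spoken_word, objects):
    """Return the objects whose attributes match the recognized spoken word."""
    spoken = spoken_word.strip().lower()
    return [obj for obj in objects
            if any(spoken == str(value).lower()
                   for value in obj["attributes"].values())]

# Objects on a complex display, each with searchable attributes.
display_objects = [
    {"id": 1, "attributes": {"color": "red", "shape": "circle"}},
    {"id": 2, "attributes": {"color": "blue", "shape": "square"}},
    {"id": 3, "attributes": {"color": "red", "shape": "triangle"}},
]

# A recognized utterance such as "red" selects all matching objects,
# which the interface would then highlight for the user.
matches = match_spoken_input("red", display_objects)
print([obj["id"] for obj in matches])
```

In a real system, the highlighted set would be updated as recognition results arrive, steering the user's visual attention to the matching objects.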

Citation (APA)

Kaptelinin, V., & Wåhlen, H. (2011). Speaking to see: A feasibility study of voice-assisted visual search. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6946 LNCS, pp. 444–451). https://doi.org/10.1007/978-3-642-23774-4_37
