Using visual cues to leverage the use of speech input in the vehicle

Touch and speech input often exist side by side in multimodal systems. Speech input has a number of advantages over touch that are especially relevant in safety-critical environments such as driving. However, information on large screens tempts drivers to use touch input for interaction, and drivers lack an effective trigger that reminds them that speech input might be the better choice. This work investigates the efficacy of visual cues for leveraging the use of speech input while driving. We conducted a driving simulator experiment with 45 participants that examined the influence of visual cues, task type, driving scenario, and audio signals on the driver's choice of modality, glance behavior, and subjective ratings. The results indicate that visual cues can effectively promote speech input without increasing visual distraction or restricting the driver's freedom to choose. We propose that our results can be transferred to other applications, such as smartphones or smart home systems.




Roider, F., Rümelin, S., & Gross, T. (2018). Using visual cues to leverage the use of speech input in the vehicle. In Lecture Notes in Computer Science (Vol. 10809 LNCS, pp. 120–131). Springer.
