Towards more transactional voice assistants: Investigating the potential for a multimodal voice-activated indoor navigation assistant for blind and sighted travelers


Abstract

Voice assistants (VAs) - like Amazon Alexa or Siri - offer hands/eyes-free interactions that are beneficial to a range of users, including individuals who are blind, to fulfill tasks that are otherwise difficult or inaccessible. While these interfaces model conversational interactions to achieve simple tasks, there have been recent calls for VAs that model more transactional interactions for a wider range of complex activities. In this study, we explored the extension of VAs' capabilities in the context of indoor navigation through mixed-ability focus groups with blind and sighted airport travelers. We found high overlap in the difficulties encountered by blind and sighted travelers, as well as shared interest in a voice-activated travel assistant to improve travel experiences. Leveraging user-elicited recommendations, we present interaction design examples that showcase customization of different and multiple modalities, which collectively demonstrate how VAs can more broadly achieve transactional interactions in complex task scenarios.

Citation (APA)

Abdolrahmani, A., Gupta, M., & Vader, M. L. (2021). Towards more transactional voice assistants: Investigating the potential for a multimodal voice-activated indoor navigation assistant for blind and sighted travelers. In Conference on Human Factors in Computing Systems - Proceedings. Association for Computing Machinery. https://doi.org/10.1145/3411764.3445638
