Interrogating Alexa: Holding Voice Assistants Accountable for Their Answers


Abstract

This paper reports on a preliminary comparative study of the Alexa, Siri, and Google Assistant voice assistants (VAs) that explores the origins of answers provided on each platform in an attempt to determine the extent to which these origins influence responses. Questions were selected from the Text REtrieval Conference (TREC) 2017 LiveQA Track data, a collection of pre-assessed questions that are part of the National Institute of Standards and Technology (NIST) TREC QA track. Responses were collected as voice memos and screen captures, then analyzed to determine the origins of each answer or set of answers provided. Results indicate that the answers originate from different search engines, and that algorithm-centered processes in each voice assistant produce vast differences in answers to the same questions. Because the online search results provided as answers by voice assistants are influenced by content and structured data, technical communicators and UX practitioners can help ensure that voice assistants provide accurate, complete, and ethical responses to users' questions.

Citation (APA)

Hocutt, D. (2021). Interrogating Alexa: Holding Voice Assistants Accountable for Their Answers. In Proceedings of the 39th ACM International Conference on the Design of Communication: Building Coalitions. Worldwide, SIGDOC 2021 (pp. 142–150). Association for Computing Machinery, Inc. https://doi.org/10.1145/3472714.3473634
