Since the introduction of the Amazon Echo, smart speakers have increasingly found their way into private households. What if the voice assistant could not only be heard but also seen? How would people then evaluate smart speakers? Based on the trend that smart speakers will integrate displays or even become displays themselves, this article (1) presents a research prototype of a visualized smart speaker and (2) investigates how people perceive a visualized voice assistant (VA) by comparing three different human-like visualizations of the prototype. A software solution built in Unity and combined with a commercial smart speaker makes it possible to visualize the voice assistant. The prototype can record the interaction with the VA without sending sensitive data to the VA provider. Based on this prototype, we created three visualizations of a VA that differ in the number of human-like facial features. An online study with 51 participants reveals that visualizations with more facial features were perceived as significantly more human-like than visualizations with fewer features. Furthermore, our results indicate that perceived anthropomorphism significantly influences how other human-like characteristics are attributed to the visualizations. Overall, our study provides initial insights into the growing segment of visualized VAs, with implications for future use cases and design.
Citation
Wienrich, C., Ebner, F., & Carolus, A. (2022). Giving Alexa a Face - Implementing a New Research Prototype and Examining the Influences of Different Human-Like Visualizations on the Perception of Voice Assistants. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13304 LNCS, pp. 605–625). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-05412-9_41