Privacy Concerns in Chatbot Interactions

Abstract

Chatbots are increasingly used in commercial contexts to make product- or service-related recommendations. In doing so, they collect personal information from the user, much like other online services. While privacy concerns in an online (website) context are widely studied, research in the context of chatbot interaction is lacking. This study investigates the extent to which chatbots with human-like cues influence perceptions of anthropomorphism (i.e., the attribution of human-like characteristics), privacy concerns, and, consequently, information disclosure, attitudes, and recommendation adherence. Findings show that, compared with a machine-like chatbot, a human-like chatbot leads to more information disclosure and recommendation adherence, mediated by higher perceived anthropomorphism and, subsequently, lower privacy concerns. This result does not hold in comparison to a website: the human-like chatbot and the website were perceived as equally high in anthropomorphism. The results underscore the importance of both mediating concepts for attitudinal and behavioral outcomes when interacting with chatbots.

Citation (APA)

Ischen, C., Araujo, T., Voorveld, H., van Noort, G., & Smit, E. (2020). Privacy Concerns in Chatbot Interactions. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11970 LNCS, pp. 34–48). Springer. https://doi.org/10.1007/978-3-030-39540-7_3