Socio-cognitive biases in folk AI ethics and risk discourse

  • Laakasuo M
  • Herzon V
  • Perander S
  • et al.
Citations: N/A
Readers: 28 (Mendeley users who have this article in their library)

This article is free to access.

Abstract

The ongoing conversation on AI ethics and politics is in full swing and has spread to the general public. Rather than contributing by engaging with the issues and views discussed, we want to step back and comment on the widening conversation itself. We consider evolved human cognitive tendencies and biases, and how they frame and hinder the conversation on AI ethics. Primarily, we describe our innate human capacities known as folk theories and how we apply them to phenomena of different implicit categories. Through examples and empirical findings, we show that such tendencies specifically affect the key issues discussed in AI ethics. The central claim is that much of our mostly opaque intuitive thinking has not evolved to match the nature of AI, and this causes problems in democratizing AI ethics and politics. Developing awareness of how our intuitive thinking affects our more explicit views will add to the quality of the conversation.

Citation (APA)

Laakasuo, M., Herzon, V., Perander, S., Drosinou, M., Sundvall, J., Palomäki, J., & Visala, A. (2021). Socio-cognitive biases in folk AI ethics and risk discourse. AI and Ethics, 1(4), 593–610. https://doi.org/10.1007/s43681-021-00060-5
