This article argues in defence of human–robot friendship. I begin by outlining the standard Aristotelian view of friendship, according to which there are certain necessary conditions that x must meet in order to 'be a friend'. I explain how the current literature typically uses this Aristotelian view to object to human–robot friendships on theoretical and ethical grounds. Theoretically, a robot cannot be our friend because it cannot meet the necessary conditions for friendship. Ethically, human–robot friendships are wrong because they are deceptive (the robot does not actually meet the conditions for being a friend), and because they could make it more likely that we will favour 'perfect' robots and disrespect, exploit, or exclude other human beings. To argue against this position, I first outline and assess current attempts to reject the theoretical argument that we cannot befriend robots. I argue that these attempts are problematic, and do little to support the claim that we can be friends with robots now (rather than at some future time). I then use the standard Aristotelian view as a touchstone to develop a new degrees-of-friendship view. On my view, it is theoretically possible for humans to have some degree of friendship with social robots now. I explain how my view avoids ethical concerns about human–robot friendships being deceptive, and/or leading to the disrespect, exploitation, or exclusion of other human beings.