Robots that refuse to admit losing – A case study in game playing using self-serving bias in the humanoid robot MARC

Abstract

The research presented in this paper is part of a wider study investigating the role cognitive bias plays in developing long-term companionship between a robot and a human. In this paper we discuss how the self-serving cognitive bias can play a role in robot–human interaction. One of the robots used in this study, MARC (see Fig. 1), was given a series of self-serving trait behaviours, such as denying its own faults, blaming others for failures, and bragging. These fallible behaviours were compared with the robot's non-biased friendly behaviours. In the current paper, we present comparisons of two case studies using the self-serving bias and a non-biased algorithm. It is hoped that such human-like fallible characteristics can help in developing a more natural and believable companionship between robots and humans. The results of the current experiments show that the participants initially warmed to the robot with the self-serving traits.
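The abstract contrasts a self-serving behaviour set (denying faults, blaming others, bragging) with a non-biased friendly baseline. As a purely illustrative sketch, and not the authors' actual implementation, the response names and tables below are hypothetical, such a biased-versus-neutral response selector for game outcomes might look like this:

```python
import random

# Hypothetical response tables (illustrative only, not from the paper).
# The self-serving variant credits itself for wins and deflects blame
# for losses; the neutral variant responds in a friendly, unbiased way.
SELF_SERVING_RESPONSES = {
    "win": ["I won because I am a very skilled player!"],          # bragging
    "loss": ["That game was unfair.",                              # blaming others
             "I did not really lose; something must have gone wrong."],  # denying faults
}

NEUTRAL_RESPONSES = {
    "win": ["Good game! Thank you for playing with me."],
    "loss": ["Well played! You won this time."],
}

def select_response(outcome: str, biased: bool) -> str:
    """Pick a verbal response to a game outcome ('win' or 'loss')."""
    table = SELF_SERVING_RESPONSES if biased else NEUTRAL_RESPONSES
    return random.choice(table[outcome])
```

For example, `select_response("loss", biased=True)` would return a blaming or fault-denying utterance, while the same outcome with `biased=False` yields a gracious one, mirroring the two conditions compared in the case studies.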

Citation (APA)

Biswas, M., & Murray, J. (2016). Robots that refuse to admit losing – A case study in game playing using self-serving bias in the humanoid robot MARC. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9834 LNCS, pp. 538–548). Springer Verlag. https://doi.org/10.1007/978-3-319-43506-0_47
