Joint Commitments and Group Identification in Human-Robot Interaction

Abstract

This paper investigates the possibility of designing robots that are able to participate in commitments with human agents. In the first part of the article, we identify several features that, we claim, make commitments crucial for human-human interaction. In particular, we focus on reasons for believing that commitments can facilitate the planning and coordination of actions involving multiple agents: not only can commitments stabilize and perhaps even increase the motivation to contribute to other agents' goals and to shared goals, they also reinforce agents' willingness to rely on other agents' contributions. In the second part, we turn our attention to human-robot interaction. Here, we elaborate on five problems that roboticists could encounter when attempting to implement commitments in human-robot interactions, and we argue in favor of possible solutions to those problems. Finally, in the last part of the paper we zoom in on joint commitments, i.e., commitments held by a plurality of agents towards shared goals. Given that the concept of joint commitment invokes the notion of a group, we discuss some more specific challenges that would have to be met for human agents to group-identify with robots.

Citation

Salice, A., & Michael, J. (2017). Joint Commitments and Group Identification in Human-Robot Interaction. In Studies in the Philosophy of Sociality (pp. 179–199). Springer Science and Business Media B.V. https://doi.org/10.1007/978-3-319-53133-5_9
