Networks of social and moral norms in human and robot agents

Abstract

The most intriguing and ethically challenging roles of robots in society are those of collaborator and social partner. We propose that such robots must have the capacity to learn, represent, activate, and apply social and moral norms—they must have a norm capacity. We offer a theoretical analysis of two parallel questions: what constitutes this norm capacity in humans and how might we implement it in robots? We propose that the human norm system has four properties: flexible learning despite a general logical format, structured representations, context-sensitive activation, and continuous updating. We explore two possible models that describe how norms are cognitively represented and activated in context-specific ways and draw implications for robotic architectures that would implement either model.
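
As an illustration only, and not a model taken from the paper, the sketch below shows one minimal way a context-sensitive norm network could be encoded in software: each norm carries weighted associations to context cues, and only norms whose summed weights cross a threshold become active in the current situation. All norm contents, context labels, weights, and the threshold are hypothetical placeholders.

```python
# Minimal sketch (illustrative, not the authors' model): norms as nodes
# with context-weighted activation.

from dataclasses import dataclass, field


@dataclass
class Norm:
    content: str                              # e.g., "do not interrupt a speaker"
    context_weights: dict = field(default_factory=dict)  # cue -> association strength

    def activation(self, context: set) -> float:
        """Sum the weights of the context cues currently present."""
        return sum(self.context_weights.get(cue, 0.0) for cue in context)


def active_norms(norms, context, threshold=0.5):
    """Return the norms whose activation in the given context reaches the threshold."""
    return [n for n in norms if n.activation(context) >= threshold]


if __name__ == "__main__":
    norms = [
        Norm("speak quietly", {"library": 0.9, "party": 0.0}),
        Norm("greet the host", {"party": 0.8}),
    ]
    print([n.content for n in active_norms(norms, {"library"})])
    # -> ['speak quietly']
```

Continuous updating, in this toy framing, would amount to adjusting the context weights as new evidence about a norm's applicability arrives.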

Citation

Malle, B. F., Scheutz, M., & Austerweil, J. L. (2017). Networks of social and moral norms in human and robot agents. In Intelligent Systems, Control and Automation: Science and Engineering (Vol. 84, pp. 3–17). Springer International Publishing. https://doi.org/10.1007/978-3-319-46667-5_1
