Combining moral theory, modal logic and MAS to create well-behaving artificial agents

Abstract

As we interact with, and are represented by, a growing number of increasingly autonomous software agents operating under circumstances that are not fully known in advance, we argue that these agents need to be provided with moral reasoning capabilities. In the current literature on behaviour constraints and multi-agent (software) systems (MAS), three topics can be distinguished. The first is the analysis of the various forms of restraint and their basis; this lies at the core of moral philosophy. The second is the formal specification of, and reasoning about, these constraints; research here focuses predominantly on logic, mostly modal and defeasible logic. The third is the MAS- and implementation-oriented topic of designing a working system in which rules can be enforced and deviant behaviour detected. We argue that all three topics must be addressed and strongly integrated. The moral-philosophical analysis provides a detailed conceptualization of the various forms of behaviour constraint and direction, going beyond what is usual in more technically or design-focused work. The (modal) logic provides the rigour required for implementation, and the implementation itself is the ultimate objective. We outline the three components and demonstrate how they can be integrated. We do not intend, or claim, that this moral reasoning is on a par with human moral reasoning; our claim is that the analysis of human moral reasoning can provide a useful model for constraining software agent behaviour. Equally important, such reasoning is recognizable to humans, a key characteristic in human-artificial agent interaction. Recognizing and understanding the precise basis for the behaviour constraints of an artificial entity will make the agent more trustworthy, which, in turn, will facilitate acceptance of, and interaction with, artificial agents. © The Author(s) 2009.
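The three-layer architecture the abstract describes (moral-philosophical conceptualization, modal-logic formalization, MAS implementation) can be illustrated with a small sketch. In standard deontic logic, a common modal logic for obligation and permission, the operators are interdefinable: Pφ ≡ ¬O¬φ (permitted means not obligatory to refrain) and Fφ ≡ O¬φ (forbidden means obligatory to refrain). The Python below is a hypothetical illustration, not the authors' system: all names (Norm, NormGovernedAgent, detect_deviation) are invented here, and the deontic operators are reduced to labels on actions rather than a full modal semantics.

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    OBLIGATORY = "O"  # O(a): the agent must perform a
    PERMITTED = "P"   # P(a): the agent may perform a
    FORBIDDEN = "F"   # F(a): the agent must not perform a


@dataclass(frozen=True)
class Norm:
    action: str
    status: Status


class NormGovernedAgent:
    """Toy agent whose action selection is filtered by deontic norms."""

    def __init__(self, norms):
        self.norms = {n.action: n.status for n in norms}

    def may_do(self, action: str) -> bool:
        # Anything not explicitly forbidden is treated as permitted.
        return self.norms.get(action, Status.PERMITTED) != Status.FORBIDDEN

    def choose(self, candidate_actions):
        # Obligations take priority; otherwise pick any permitted action.
        for a in candidate_actions:
            if self.norms.get(a) == Status.OBLIGATORY:
                return a
        for a in candidate_actions:
            if self.may_do(a):
                return a
        return None  # no norm-compliant action available


def detect_deviation(agent: NormGovernedAgent, performed_action: str) -> bool:
    """MAS-side monitor: flag behaviour that violates the norm base."""
    return not agent.may_do(performed_action)


if __name__ == "__main__":
    norms = [Norm("share_data", Status.FORBIDDEN),
             Norm("notify_user", Status.OBLIGATORY)]
    agent = NormGovernedAgent(norms)
    print(agent.choose(["share_data", "notify_user"]))  # -> notify_user
    print(detect_deviation(agent, "share_data"))        # -> True
```

Run as a script, the agent prefers the obligatory notify_user action, and the MAS-level monitor flags the forbidden share_data action, mirroring the abstract's requirement that rules be enforceable and deviant behaviour detectable.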

Citation (APA)

Wiegel, V., & van den Berg, J. (2009). Combining moral theory, modal logic and mas to create well-behaving artificial agents. International Journal of Social Robotics, 1(3), 233–242. https://doi.org/10.1007/s12369-009-0023-5
