Autonomous agents are increasingly required to make moral decisions. In these situations, the agent should be able to reason about the ethical bases of a decision and explain it in terms of the moral values involved. This is especially important when the agent interacts with a user and must understand the user's value priorities in order to provide adequate support. This paper presents a model of agent behavior that takes into account user preferences and moral values.
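The abstract does not reproduce the paper's formalism, but the core idea of value-based plan selection in a BDI agent can be sketched: annotate each applicable plan with how it promotes or demotes a set of moral values, then select the plan that best aligns with the user's value priorities. The following Python sketch is illustrative only; the plan names, value labels, and weighted-sum scoring are assumptions, not the paper's actual mechanism.

```python
from dataclasses import dataclass

# Hypothetical representation: each plan is annotated with its effect on
# a set of moral values (positive = promotes the value, negative = demotes it).
@dataclass
class Plan:
    name: str
    value_effects: dict[str, float]  # value name -> promotion/demotion score

def select_plan(applicable_plans: list[Plan],
                user_priorities: dict[str, float]) -> Plan:
    """Pick the applicable plan whose value effects best match the user's
    value priorities (higher weight = more important value)."""
    def alignment(plan: Plan) -> float:
        # Weighted sum of value effects, using the user's priorities as weights.
        return sum(user_priorities.get(value, 0.0) * effect
                   for value, effect in plan.value_effects.items())
    return max(applicable_plans, key=alignment)

# Toy example inspired by the paper's title: choosing a meal for a user
# who prioritises health over pleasure.
plans = [
    Plan("order_pizza", {"pleasure": 1.0, "health": -0.5}),
    Plan("order_salad", {"pleasure": 0.2, "health": 1.0}),
]
priorities = {"health": 0.8, "pleasure": 0.2}
print(select_plan(plans, priorities).name)  # -> order_salad
```

Under these assumed weights the salad plan scores 0.84 against the pizza plan's -0.2, so the agent declines the pizza, matching the intuition behind the paper's title.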
Cranefield, S., Winikoff, M., Dignum, V., & Dignum, F. (2017). No pizza for you: Value-based plan selection in BDI agents. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI 2017) (pp. 178–184). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2017/26