Responsible autonomy

75 citations · 160 Mendeley readers

Abstract

As intelligent systems increasingly make decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical implications of their actions. Means are needed to integrate moral, societal and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms employed by these systems. In this paper, we describe leading ethics theories and propose alternative ways to ensure ethical behavior by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of and trust in artificial autonomous systems.

Citation (APA)

Dignum, V. (2017). Responsible autonomy. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 0, pp. 4698–4704). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2017/655
