Can human and artificial agents share an autonomy, categorical imperative-based ethics and “moral” selfhood?

  • Nowak E

Abstract

As recent developments show, AI designers endeavour to improve ‘autonomy’ in artificial intelligent devices. This chapter first argues against attributing metaphysical attitudes to AI and, at the same time, in favor of improving autonomous AI that has been enabled to respect autonomy in human agents. This seems to be the only responsible way of making further advances in the field of autonomous social AI. We examine what is meant by claims such as designing our artificial alter egos, sharing moral selves with artificial humanoid devices, and providing autonomous AI with an ethical framework modelled upon the core aspects of moral selfhood, e.g., making decisions based on autonomous law-giving, in Kantian terms.

Citation (APA)

Nowak, E. (2018). Can human and artificial agents share an autonomy, categorical imperative-based ethics and “moral” selfhood? Filozofia Publiczna i Edukacja Demokratyczna, 6(2), 169–208. https://doi.org/10.14746/fped.2017.6.2.20
