In the field of artificial intelligence, autonomy is understood as the capacity of a system to behave without human control, and many systems with such capabilities already exist. However, the mechanisms by which autonomous behavior is generated in these systems differ from those in humans, so machine autonomy is often misunderstood. In this paper, we rethink the autonomy of artificial systems: how they should function, what autonomy truly means, and how society might come to accept such systems. First, we review the general meaning of autonomy against the ideological background of the relations between human beings and objects. We then discuss how machine autonomy can be realized and how it differs from human autonomy. On these premises, we treat this as a relative difference rather than a deficiency of machines, and seek ways in which the concept of autonomy can be fully extended to machines. We begin with a key precursor of autonomy, an early concept of the self. Next, a pain nervous system, which is expected to give rise to empathy, morality, and ethics, is introduced and compared with the human autonomic nervous system. Finally, a hierarchy of autonomous machines is introduced and discussed in the context of responsibility, which raises social questions about how artificial systems should be treated when they cause fatal errors. All stakeholders should consider such cases together for the healthy development of science and technology, which will play important roles in our future symbiotic society.
Asada, M. (2020). Rethinking Autonomy of Humans and Robots. Journal of Artificial Intelligence and Consciousness, 7(2), 141–153. https://doi.org/10.1142/S2705078520500083