The paper analyses which features of an agent architecture determine its Autonomy. I claim that Autonomy is a relational concept. First, Autonomy from the environment (stimuli) is analysed, and the notion of Cognitive Reactivity is introduced to show how the cognitive architecture of the agent guarantees Stimulus-Autonomy and deals with the "Descartes problem" of the external "causes" of behaviour. Second, Social Autonomy (Autonomy from others) is analysed, and a distinction between Executive Autonomy and Motivational Autonomy is introduced. Some limitations that current postulates on Rational interacting agents could impose on their Autonomy are discussed. Architectural properties and postulates that guarantee sufficient Autonomy in cognitive social agents are defined; these properties give the agent control over its own mental states (Beliefs and Goals). In particular, a "double filter" architecture against influence is described. What guarantees the agent's control over its own Beliefs is specified: relevance, credibility, and introspective competence. Particular attention is devoted to the "non-negotiability of Beliefs" (Pascal law): the fact that you cannot change another agent's Beliefs by using promises or threats. What guarantees the agent's control over its Goals is also specified: self-interested goal adoption and indirect influencing. Finally, it is argued how and why social dependence and power relations should limit the agent's Autonomy.
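The "double filter" idea can be rendered as a toy sketch: one filter governs belief adoption (epistemic criteria only, so promises or threats cannot buy a belief, per the Pascal law), and a second governs goal adoption (a requested goal is adopted only if it serves one of the agent's own goals). All class names, criteria, and the trust model below are illustrative assumptions of this sketch, not the paper's formalism:

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    """Minimal cognitive agent with a 'double filter' against social influence.

    This is an illustrative sketch: real belief revision would also weigh
    relevance and introspective competence, which are reduced here to a
    simple source-credibility check."""
    beliefs: set = field(default_factory=set)
    goals: set = field(default_factory=set)
    trusted_sources: set = field(default_factory=set)

    # --- First filter: control over Beliefs ---
    def consider_belief(self, claim, source, incentive=0):
        """Adopt a communicated claim on epistemic grounds only.

        `incentive` (a promise or threat attached to the message) is
        deliberately ignored: beliefs are non-negotiable."""
        credible = source in self.trusted_sources
        if credible:
            self.beliefs.add(claim)
        return credible

    # --- Second filter: control over Goals ---
    def consider_goal(self, requested_goal, serves):
        """Self-interested goal adoption: adopt another agent's request
        only if it is instrumental to one of the agent's own goals."""
        if serves in self.goals:
            self.goals.add(requested_goal)
            return True
        return False


agent = Agent(goals={"stay-healthy"}, trusted_sources={"doctor"})
agent.consider_belief("smoking harms you", source="doctor")           # adopted: credible source
agent.consider_belief("smoking is safe", source="ad", incentive=100)  # rejected: bribe is ignored
agent.consider_goal("quit smoking", serves="stay-healthy")            # adopted: serves own goal
```

The two filters are independent: influencing such an agent's goals must go the "indirect" route of giving it reasons, i.e. beliefs from credible sources about how a request serves its own ends.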
Castelfranchi, C. (1995). Guarantees for autonomy in cognitive agent architecture. In Lecture Notes in Computer Science (Vol. 890, pp. 56–70). Springer. https://doi.org/10.1007/3-540-58855-8_3