On reasoning about other agents

Abstract

Drawing on our work in Distributed Artificial Intelligence, we propose the rudiments of an approach to reasoning about other agents. We attempt to relate current philosophical intuitions to theoretical foundations, concentrating on the issue of modeling. The philosophical position we put forth combines Daniel Dennett's philosophy of the ladder of agenthood (consisting of rationality, intentionality, stance, reciprocity, communication, and consciousness) with the utilitarian philosophy of selfish utility maximization. Dennett's notion of a stance is fundamental to the issue of modeling other agents, and in interesting special cases it leads to a nesting of models. Our framework, the Recursive Modeling Method (RMM), represents this nesting and lets an agent coordinate its actions with those of other agents, cooperate with them when appropriate, and rationally choose an optimal form of communication with them.
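The nesting of models described above can be illustrated with a minimal sketch. This is not the paper's RMM implementation; it only shows the general idea of bounded recursive modeling: an agent predicts another agent's action by modeling that agent's reasoning about itself, down to a fixed depth, and then chooses a utility-maximizing response. The payoff matrices and the depth-0 default action are hypothetical assumptions made for illustration.

```python
# Illustrative sketch of bounded recursive modeling (not the authors' RMM).
# Payoff matrices are indexed [own action][other agent's action].

def best_response(payoff, predicted_other):
    # Choose the action maximizing this agent's payoff, given the
    # action it predicts the other agent will take.
    return max(range(len(payoff)), key=lambda a: payoff[a][predicted_other])

def recursive_choice(my_payoff, other_payoff, depth):
    """Nested modeling: at depth 0 fall back on a default model of the
    other agent; otherwise predict the other's choice by modeling its
    reasoning one level shallower, then best-respond to that prediction."""
    if depth == 0:
        return 0  # assumed default action for an uninformed model
    predicted_other = recursive_choice(other_payoff, my_payoff, depth - 1)
    return best_response(my_payoff, predicted_other)

# Hypothetical symmetric 2x2 coordination game.
A = [[1, 0], [0, 2]]  # first agent's payoffs
B = [[1, 0], [0, 2]]  # second agent's payoffs

print(recursive_choice(A, B, depth=3))  # prints 0
```

With these particular payoffs the recursion bottoms out at the default action, and every level of the nesting best-responds to it, so both agents settle on action 0; richer models of the other agent (or a different depth-0 assumption) would change the prediction.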

Citation (APA)

Gmytrasiewicz, P. J. (1996). On reasoning about other agents. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1037, pp. 143–155). Springer Verlag. https://doi.org/10.1007/3540608052_64
