Trust in distributed artificial intelligence

Abstract

A discussion of trust is presented which focuses on multiagent systems, from the point of view of one agent in a system. The roles trust plays in various forms of interaction are considered, with the view that trust allows interactions between agents where no effective interaction may have been possible before. Trust allows parties to acknowledge that, whilst there is a risk in relationships with potentially malevolent agents, some form of interaction may produce benefits, whereas no interaction at all cannot. In addition, accepting the risk allows the trusting agent to prepare itself for possibly irresponsible or untrustworthy behaviour, thus minimizing the potential damage caused. A formalism is introduced to clarify these notions and to permit computer simulations. An important contribution of this work is that the formalism is not all-encompassing: there are some notions of trust that are excluded. What it describes is a specific view of trust.
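The formalism itself is developed in the paper; as a rough illustration of the kind of computer simulation it permits, the Python sketch below encodes a Marsh-style situational trust value and a risk-based cooperation threshold. The class and parameter names (Situation, Agent, will_cooperate, competence), the numeric values, and the exact threshold rule are illustrative assumptions, not the paper's definitions.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    utility: float      # U_x(alpha): utility x expects if cooperation succeeds
    importance: float   # I_x(alpha): importance of the situation to x
    risk: float         # perceived risk of relying on the other agent

@dataclass
class Agent:
    name: str
    general_trust: dict  # T_x(y): general trust in each other agent, in [-1, 1)

    def situational_trust(self, other, s):
        # Situational trust scales general trust by what is at stake:
        # T_x(y, alpha) = U_x(alpha) * I_x(alpha) * T_x(y)
        return s.utility * s.importance * self.general_trust.get(other, 0.0)

    def will_cooperate(self, other, s, competence):
        # Cooperate only if situational trust clears a risk-based threshold
        # (an assumed rule: risk weighed against competence plus trust).
        # The guard avoids division by zero for a wholly distrusted partner.
        denom = max(competence + self.general_trust.get(other, 0.0), 1e-9)
        threshold = (s.risk / denom) * s.importance
        return self.situational_trust(other, s) > threshold

# Example: x weighs delegating a moderately important task to y.
x = Agent("x", general_trust={"y": 0.6})
task = Situation(utility=0.8, importance=0.5, risk=0.3)
print(x.will_cooperate("y", task, competence=0.7))  # True for these values
```

The point of making the threshold explicit is the one the abstract draws: the agent accepts a quantified risk rather than ignoring it, so an untrustworthy partner can cause only bounded damage.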

Citation (APA)

Marsh, S. (1994). Trust in distributed artificial intelligence. In Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence, Vol. 830, pp. 94–112). Springer. https://doi.org/10.1007/3-540-58266-5_6
