In this paper, we propose a multi-agent belief revision algorithm that utilizes knowledge about the reliability or trustworthiness (reputation) of information sources. Incorporating reliability information into belief revision mechanisms is essential for agents in real-world multi-agent systems. This research assumes that the global truth is not available to individual agents and that each agent maintains only a local, subjective perspective, which often differs from the perspectives of others. This assumption holds for many domains where the global truth is unavailable or infeasible to acquire, and where the cost of collecting and maintaining a centralized global perspective is prohibitive. As an agent builds its local perspective, the quality of incoming information varies with the originating source. Modeling this quality is useful regardless of the level and type of security in a given system. This paper defines trust as an agent's confidence in the ability and intention of an information source to deliver correct information, and reputation as the amount of trust an information source has created for itself through interactions with other agents. This economic (or monetary) perspective, which views reputation as an asset, serves as a social law that mandates remaining trustworthy to other agents. Algorithms (direct and indirect) for maintaining a model of the reputations of other information sources are also introduced.
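The abstract distinguishes direct reputation maintenance (an agent revises a source's reputation from its own experience) from indirect maintenance (an agent revises it from other agents' reports). The sketch below is a hypothetical illustration of that distinction, not the authors' algorithm: it assumes reputation is a scalar in [0, 1], uses a simple exponential-moving-average update for direct experience, and discounts indirect reports by the reporter's own reputation. All class and method names are invented for illustration.

```python
class ReputationModel:
    """Hypothetical per-agent model of information-source reputations.

    Assumptions (not from the paper): reputation is a scalar in [0, 1],
    unknown sources start at a neutral prior, and updates are simple
    exponential moving averages.
    """

    def __init__(self, prior=0.5, learning_rate=0.2):
        self.prior = prior          # reputation assigned to unseen sources
        self.lr = learning_rate     # step size for each revision
        self.rep = {}               # source name -> current reputation

    def reputation(self, source):
        return self.rep.get(source, self.prior)

    def direct_update(self, source, was_correct):
        # Direct maintenance: move reputation toward 1 if the source's
        # information proved correct, toward 0 if it proved incorrect.
        r = self.reputation(source)
        target = 1.0 if was_correct else 0.0
        self.rep[source] = r + self.lr * (target - r)

    def indirect_update(self, source, reporter, reported_rep):
        # Indirect maintenance: adopt another agent's reported reputation,
        # but weight it by how much we trust the reporter itself.
        weight = self.lr * self.reputation(reporter)
        r = self.reputation(source)
        self.rep[source] = r + weight * (reported_rep - r)


model = ReputationModel()
model.direct_update("sensor_A", was_correct=True)    # own experience
model.indirect_update("sensor_B", reporter="sensor_A", reported_rep=1.0)
```

Discounting indirect reports by the reporter's reputation captures the paper's social-law intuition: a source that lies squanders the asset that makes its future reports influential.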
CITATION STYLE
Barber, K. S., & Kim, J. (2001). Belief revision process based on trust: Agents evaluating reputation of information sources. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 2246, pp. 73–82). Springer-Verlag. https://doi.org/10.1007/3-540-45547-7_5