Reasoning with Uncertain and Conflicting Opinions in Open Reputation Systems

Reputation systems help users distinguish between trustworthy and malicious or unreliable services. They collect and evaluate available user opinions about services and about other users in order to estimate the trustworthiness of a given service. The usefulness of a reputation system depends strongly on its underlying trust model, i.e., the representation of trust values and the methods used to compute with these trust values. Several proposed trust models that can represent degrees of trust, ignorance and distrust show undesired properties when conflicting opinions are combined: the proposed consensus operators usually eliminate the incurred degree of conflict and perform a re-normalization. We argue that this elimination causes counterintuitive effects and should therefore be avoided. We propose a new representation of trust values that also reflects the degree of conflict, and we develop a calculus and operators to compute reputation values. Our approach requires no re-normalization and thus avoids the undesired effects it causes. © 2009 Elsevier B.V. All rights reserved.
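The contrast the abstract draws can be illustrated with a small sketch. The paper's concrete operator definitions are not reproduced here; the code below assumes a subjective-logic-style opinion triple (trust, distrust, ignorance) and contrasts a Dempster-style consensus operator, which discards the conflicting mass and re-normalizes, with a hypothetical conflict-preserving combination that keeps the degree of conflict as an explicit fourth component, in the spirit of the proposed representation.

```python
def combine_renormalized(a, b):
    """Dempster-style consensus of two (trust, distrust, ignorance)
    triples: the conflicting mass (trust vs. distrust) is discarded
    and the remainder re-normalized to sum to 1."""
    t1, d1, u1 = a
    t2, d2, u2 = b
    conflict = t1 * d2 + d1 * t2
    k = 1.0 - conflict  # re-normalization factor
    t = (t1 * t2 + t1 * u2 + u1 * t2) / k
    d = (d1 * d2 + d1 * u2 + u1 * d2) / k
    u = (u1 * u2) / k
    return (t, d, u)


def combine_with_conflict(a, b):
    """Hypothetical conflict-preserving combination: the same products,
    but the conflicting mass is kept as a fourth component instead of
    being eliminated (illustrating the paper's general idea, not its
    exact operators)."""
    t1, d1, u1 = a
    t2, d2, u2 = b
    t = t1 * t2 + t1 * u2 + u1 * t2
    d = d1 * d2 + d1 * u2 + u1 * d2
    u = u1 * u2
    c = t1 * d2 + d1 * t2  # degree of conflict, retained
    return (t, d, u, c)


# Two strongly conflicting opinions about the same service:
a = (0.9, 0.0, 0.1)  # strong trust
b = (0.0, 0.9, 0.1)  # strong distrust
print(combine_renormalized(a, b))   # conflict eliminated, components re-scaled
print(combine_with_conflict(a, b))  # conflict visible as fourth component
```

With these inputs the re-normalized result hides the fact that 81% of the combined mass was conflicting, while the conflict-preserving result makes that degree of disagreement explicit.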




Gutscher, A. (2009). Reasoning with Uncertain and Conflicting Opinions in Open Reputation Systems. Electronic Notes in Theoretical Computer Science, 244, 67–79.
