How to use information theory to mitigate unfair rating attacks

Abstract

In rating systems, users want to form accurate opinions based on ratings. That accuracy is bounded by the amount of information transmitted (leaked) by the ratings. Rating systems are susceptible to unfair rating attacks, which may decrease the amount of leaked information by introducing noise. A robust trust system attempts to mitigate the effect of these attacks on the information leakage. Defenders cannot influence the ratings themselves, whether they come from honest advisors or from attackers. They can, however, keep the information leakage high in other ways: blocking or selecting the right advisors, observing transactions, and offering more choices. Blocking suspicious advisors can only decrease robustness. If only a limited number of ratings can be used, however, then less suspicious advisors are better, and in case of a tie, newer advisors are better. Observing transactions increases robustness. Offering more choices may increase robustness.
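To illustrate the central idea — that attacker noise bounds how much a rating can tell a user — the sketch below models ratings as a discrete channel from a subject's true quality to an observed rating and computes the mutual information. The specific attacker model (a fraction of advisors rating uniformly at random) and the binary good/bad setting are illustrative assumptions, not the paper's exact model.

```python
import math

def mutual_information(p_x, channel):
    """I(X;Y) in bits, given prior P(X) and channel P(Y|X)."""
    ys = range(len(channel[0]))
    # Marginal P(Y) = sum_x P(X=x) P(Y=y|X=x)
    p_y = [sum(p_x[x] * channel[x][y] for x in range(len(p_x))) for y in ys]
    mi = 0.0
    for x in range(len(p_x)):
        for y in ys:
            joint = p_x[x] * channel[x][y]
            if joint > 0:
                mi += joint * math.log2(joint / (p_x[x] * p_y[y]))
    return mi

def rating_channel(attacker_fraction):
    """Channel from true quality (good/bad) to a binary rating.

    Assumption: honest advisors rate truthfully; attackers rate
    uniformly at random, injecting noise into the channel.
    """
    a = attacker_fraction
    honest = [[1.0, 0.0], [0.0, 1.0]]   # identity channel
    noise = [[0.5, 0.5], [0.5, 0.5]]    # uninformative channel
    return [[(1 - a) * honest[x][y] + a * noise[x][y] for y in (0, 1)]
            for x in (0, 1)]

p_x = [0.5, 0.5]  # uniform prior over the subject's true quality
for a in (0.0, 0.3, 0.6):
    print(a, round(mutual_information(p_x, rating_channel(a)), 3))
```

With no attackers the rating leaks a full bit about the subject's quality; as the attacker fraction grows, the leakage (and hence the achievable opinion accuracy) shrinks monotonically, which is the quantity a robust trust system tries to keep high.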

Cite

APA

Muller, T., Wang, D., Liu, Y., & Zhang, J. (2016). How to use information theory to mitigate unfair rating attacks. In IFIP Advances in Information and Communication Technology (Vol. 473, pp. 17–32). Springer New York LLC. https://doi.org/10.1007/978-3-319-41354-9_2
