Multi-armed bandit policies for reputation systems


Abstract

The robustness of reputation systems against manipulations has been widely studied. However, how to use the reputation values computed by those systems has rarely been studied. In this paper, we draw an analogy between reputation systems and multi-armed bandit problems. We investigate how multi-armed bandit selection policies can be used to increase the robustness of reputation systems against malicious agents. To this end, we propose a model of an abstract service-sharing system which uses such a bandit-based reputation system. Finally, in an empirical study, we show that some multi-armed bandit policies are robust against manipulations but cost-free for the malicious agents, whereas other policies are manipulable but costly. © 2014 Springer International Publishing Switzerland.
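The abstract does not specify which bandit policies the paper evaluates. As an illustration only, the following sketch shows how one standard selection policy, UCB1, could pick service providers from observed interaction outcomes; the provider behaviors (two honest providers and one whitewashing-style malicious provider) are hypothetical and not taken from the paper.

```python
import math
import random

def ucb1_select(counts, rewards):
    """Pick the provider with the highest UCB1 index.
    counts[i]:  number of past interactions with provider i.
    rewards[i]: sum of observed outcomes (1 = good service, 0 = bad).
    """
    total = sum(counts)
    # Try every provider at least once first.
    for i, n in enumerate(counts):
        if n == 0:
            return i
    # UCB1 index: empirical mean plus an exploration bonus.
    return max(
        range(len(counts)),
        key=lambda i: rewards[i] / counts[i]
                      + math.sqrt(2 * math.log(total) / counts[i]),
    )

# Hypothetical service-sharing rounds: providers 0 and 1 are honest
# (success rates 0.9 and 0.6); provider 2 is malicious and gives good
# service only while building reputation, then defects.
def provider_outcome(i, t):
    if i == 2:                       # malicious: good early, bad later
        return 1 if t < 20 else 0
    return 1 if random.random() < (0.9 if i == 0 else 0.6) else 0

random.seed(0)
counts, rewards = [0, 0, 0], [0, 0, 0]
for t in range(500):
    i = ucb1_select(counts, rewards)
    counts[i] += 1
    rewards[i] += provider_outcome(i, t)
```

Under this policy the malicious provider keeps paying the cost of good early service yet loses interactions once it defects, which is the kind of robustness-versus-cost trade-off the abstract describes.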

Citation (APA)

Vallée, T., Bonnet, G., & Bourdon, F. (2014). Multi-armed bandit policies for reputation systems. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8473 LNAI, pp. 279–290). Springer Verlag. https://doi.org/10.1007/978-3-319-07551-8_24
