Prediction mechanisms that do not incentivize undesirable actions

22 citations · 21 Mendeley readers

Abstract

A potential downside of prediction markets is that they may incentivize agents to take undesirable actions in the real world. For example, a prediction market on whether a terrorist attack will happen may incentivize terrorism, and an in-house prediction market on whether a product will be successfully released may incentivize sabotage. In this paper, we study principal-aligned prediction mechanisms: mechanisms that do not incentivize undesirable actions. We characterize all principal-aligned proper scoring rules, and we show an "overpayment" result, which roughly states that with n agents, any principal-aligned prediction mechanism will, in the worst case, require the principal to pay Θ(n) times as much as a mechanism that is not principal-aligned. We extend the model to allow uncertainty about the principal's utility and restrictions on the agents' actions, obtaining a richer characterization and a similar overpayment result.
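As a point of reference only (this is not the paper's mechanism or characterization), the following Python sketch illustrates the incentive problem the abstract describes, using the standard quadratic (Brier) scoring rule for a binary event. It first checks that the rule is proper (truthful reporting maximizes expected score) and then shows that the truthful expected score grows as the event probability moves toward 0 or 1, so an agent who can influence the event is rewarded for pushing it toward an extreme even if the principal disprefers that outcome. The helper names quadratic_score and expected_score are illustrative, not taken from the paper.

```python
# Minimal sketch (assumed example, not the authors' construction): the
# quadratic (Brier) scoring rule for a binary event, and why an
# un-aligned proper scoring rule can reward influencing the outcome.

def quadratic_score(report: float, outcome: int) -> float:
    """Quadratic (Brier-style) proper scoring rule.
    `report` is the predicted probability that the event occurs (outcome = 1)."""
    return 1.0 - (outcome - report) ** 2

def expected_score(true_p: float, report: float) -> float:
    """Expected score of an agent who believes the event occurs with probability true_p."""
    return true_p * quadratic_score(report, 1) + (1 - true_p) * quadratic_score(report, 0)

# Properness: for any belief p, the expected score is maximized by reporting p itself.
p = 0.3
best_report = max((q / 100 for q in range(101)), key=lambda q: expected_score(p, q))
print(best_report)  # 0.3

# The misalignment: under truthful reporting the expected score is p^2 - p + 1,
# which increases as p moves toward 0 or 1. An agent who can make the event
# more (or less) likely therefore earns more in expectation by doing so,
# even though the principal may strictly prefer the probability to stay low.
for p in (0.1, 0.5, 0.9):
    print(p, expected_score(p, p))  # 0.91, 0.75, 0.91
```

The paper's principal-aligned scoring rules are, roughly, those that remove this kind of reward for steering the outcome against the principal's interests; the sketch above only demonstrates why an unmodified proper scoring rule fails that requirement.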

Citation (APA)

Shi, P., Conitzer, V., & Guo, M. (2009). Prediction mechanisms that do not incentivize undesirable actions. In Lecture Notes in Computer Science (Vol. 5929, pp. 89–100). Springer. https://doi.org/10.1007/978-3-642-10841-9_10
