Peer prediction mechanisms incentivize agents to truthfully report their signals, in the absence of a verification mechanism, by comparing their reports with those of their peers. Prior work in this area is essentially restricted to the case of homogeneous agents, whose signal distributions are identical. This is limiting in many domains, where we would expect agents to differ in taste, judgment, and reliability. Although the Correlated Agreement (CA) mechanism [Shnayder et al. 2016a] can be extended to handle heterogeneous agents, this introduces a new challenge: efficiently estimating agent signal types. We solve this problem by clustering agents based on their reporting behavior, proposing a mechanism that works with clusters of agents, and designing algorithms that learn such a clustering. In this way, we also connect peer prediction with the Dawid and Skene [1979] literature on latent types. We retain the CA mechanism's robustness against coordinated misreports, achieving an approximate incentive guarantee of ϵ-informed truthfulness. We show on real data that this incentive approximation is reasonable in practice, even with a small number of clusters.
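The following is a minimal sketch, not the paper's actual algorithm, of how cluster-based CA scoring might look. It assumes clusters are learned with k-means on each agent's empirical report-frequency vector and that the Delta (correlation) matrices are estimated from empirical joint report distributions between cluster pairs; all function names and parameters are illustrative.

```python
# Hypothetical sketch of cluster-based Correlated Agreement (CA) scoring.
# Assumptions (not from the paper): k-means on report frequencies stands in
# for the clustering step, and Delta matrices are estimated empirically.
import numpy as np
from sklearn.cluster import KMeans


def learn_clusters(reports, n_signals, n_clusters):
    """Cluster agents by their empirical report distributions.

    reports: (n_agents, n_tasks) integer array of signal reports.
    """
    n_agents = reports.shape[0]
    freq = np.zeros((n_agents, n_signals))
    for s in range(n_signals):
        freq[:, s] = (reports == s).mean(axis=1)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(freq)


def delta_matrix(reports_a, reports_b, n_signals):
    """Estimate Delta = joint report distribution minus product of marginals
    for a pair of clusters, from tasks reported on by both clusters."""
    joint = np.zeros((n_signals, n_signals))
    for x, y in zip(reports_a, reports_b):
        joint[x, y] += 1
    joint /= joint.sum()
    return joint - np.outer(joint.sum(axis=1), joint.sum(axis=0))


def ca_score(delta, r_i_bonus, r_j_bonus, r_i_penalty, r_j_penalty):
    """CA payment: scoring-matrix value on a shared (bonus) task minus its
    value on independently drawn (penalty) tasks, where the scoring matrix
    is the indicator of positive correlation, sign(Delta)+."""
    T = (delta > 0).astype(float)
    return T[r_i_bonus, r_j_bonus] - T[r_i_penalty, r_j_penalty]
```

In this sketch, heterogeneity is handled by scoring each pair of agents with the Delta matrix of their respective clusters rather than a single population-wide matrix; the choice of k-means is only a placeholder for whatever clustering procedure is used to learn agent signal types.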
CITATION STYLE
Agarwal, A., Mandal, D., Parkes, D. C., & Shah, N. (2020). Peer Prediction with Heterogeneous Users. ACM Transactions on Economics and Computation, 8(1). https://doi.org/10.1145/3381519