Resource Allocation Among Agents with MDP-Induced Preferences

  • Dolgov, D. A.
  • Durfee, E. H.

Abstract

Allocating scarce resources among agents to maximize global utility is, in general, computationally challenging. We focus on problems where resources enable agents to execute actions in stochastic environments, modeled as Markov decision processes (MDPs), such that the value of a resource bundle is defined as the expected value of the optimal MDP policy realizable given these resources. We present an algorithm that simultaneously solves the resource-allocation and the policy-optimization problems. This allows us to avoid explicitly representing utilities over exponentially many resource bundles, leading to drastic (often exponential) reductions in computational complexity. We then use this algorithm in the context of self-interested agents to design a combinatorial auction for allocating resources. We empirically demonstrate the effectiveness of our approach by showing that it can, in minutes, optimally solve problems for which a straightforward combinatorial resource-allocation technique would require the agents to enumerate up to 2^100 resource bundles and the auctioneer to solve an NP-complete problem with an input of that size.
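As a rough illustration of what "MDP-induced preferences" means here, the sketch below (Python, not from the paper) computes the value of a resource bundle as the optimal value of a toy MDP in which only actions whose resource requirements are covered by the bundle are available. The states, actions, resource names, and the bundle_value helper are all hypothetical assumptions; the paper's contribution is precisely an algorithm that avoids enumerating bundle values this way.

```python
# Hypothetical sketch (not the authors' implementation): the value of a
# resource bundle is taken to be the optimal value of a small MDP in which
# only actions whose resource requirements are covered by the bundle are
# executable. All names and numbers below are illustrative assumptions.

# Toy MDP: states, actions with per-action resource requirements,
# transition probabilities P[(s, a)] -> {s': prob}, and rewards R[(s, a)].
STATES = ["s0", "s1"]
ACTIONS = {"noop": set(), "move": {"fuel"}, "lift": {"fuel", "crane"}}
P = {
    ("s0", "noop"): {"s0": 1.0},
    ("s0", "move"): {"s1": 0.9, "s0": 0.1},
    ("s0", "lift"): {"s1": 1.0},
    ("s1", "noop"): {"s1": 1.0},
    ("s1", "move"): {"s1": 1.0},
    ("s1", "lift"): {"s1": 1.0},
}
R = {("s0", "noop"): 0.0, ("s0", "move"): 1.0, ("s0", "lift"): 2.0,
     ("s1", "noop"): 0.5, ("s1", "move"): 0.5, ("s1", "lift"): 0.5}
GAMMA = 0.95


def bundle_value(bundle, iters=200):
    """Value of a resource bundle: optimal MDP value (from s0) using only
    actions whose resource requirements are a subset of the bundle."""
    allowed = [a for a, req in ACTIONS.items() if req <= set(bundle)]
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):  # plain value iteration on the restricted MDP
        V = {s: max(R[(s, a)] + GAMMA * sum(p * V[s2]
                    for s2, p in P[(s, a)].items())
                    for a in allowed)
             for s in STATES}
    return V["s0"]


if __name__ == "__main__":
    # Enumerating bundle values explicitly, as done here, is exactly what
    # the paper's joint formulation avoids; this only makes the definition
    # of an MDP-induced valuation concrete.
    for bundle in [set(), {"fuel"}, {"fuel", "crane"}]:
        print(sorted(bundle), round(bundle_value(bundle), 3))
```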

Citation (APA)

Dolgov, D. A., & Durfee, E. H. (2006). Resource Allocation Among Agents with MDP-Induced Preferences. Journal of Artificial Intelligence Research, 27, 505–549. https://doi.org/10.1613/jair.2102
