Semi-Markov reinforcement learning for stochastic resource collection

Abstract

We show that the task of collecting stochastic, spatially distributed resources (Stochastic Resource Collection, SRC) can be modeled as a Semi-Markov Decision Process. Our Deep Q-Network (DQN) based approach uses a novel scalable and transferable artificial neural network architecture. The concrete use case of SRC is an officer (single agent) trying to maximize the number of fined parking violations in their area. We evaluate our approach on an environment based on real-world parking data from the city of Melbourne. In small, hence simple, settings with short distances between resources and few simultaneous violations, our approach is comparable to previous work. As the network grows (and with it the number of resources), our solution significantly outperforms preceding methods. Moreover, an agent trained on one area and applied to a non-overlapping new area still outperforms existing approaches.
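For readers unfamiliar with the semi-Markov setting, the key difference from a standard MDP is that actions (e.g., travelling to a parking resource) take a variable duration tau, so future value is discounted by gamma ** tau rather than by a fixed gamma per step. The following is a minimal tabular sketch of that duration-aware update rule; the paper's actual method is a DQN with the scalable architecture described above, so the function, states, and actions here are purely illustrative assumptions.

```python
# Illustrative sketch only: tabular SMDP Q-learning with duration-aware
# discounting (gamma ** tau). The paper itself uses a DQN, not a table;
# all names below (smdp_q_update, corner_1, go_resource_A, ...) are
# hypothetical.

def smdp_q_update(Q, action_space, s, a, r, s_next, tau,
                  alpha=0.1, gamma=0.99):
    """One SMDP Q-learning step for a transition of duration tau."""
    # Best achievable value from the successor state.
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in action_space(s_next))
    # Semi-Markov target: reward collected during the transition plus
    # the future value discounted over the variable duration tau.
    target = r + (gamma ** tau) * best_next
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (target - old)


# Toy usage: an officer at an intersection picks which parking resource
# to travel to next; travelling takes tau time steps and yields the
# fines issued on arrival as reward.
Q = {}
action_space = lambda s: ("go_resource_A", "go_resource_B")
smdp_q_update(Q, action_space, s="corner_1", a="go_resource_A",
              r=1.0, s_next="corner_2", tau=5)
print(Q[("corner_1", "go_resource_A")])  # 0.1 * (1.0 + 0.99**5 * 0.0) = 0.1
```

The gamma ** tau factor is what makes distant resources less attractive than nearby ones with comparable violation probability, which is the core trade-off the SRC agent has to learn.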

Cite

APA

Schmoll, S., & Schubert, M. (2020). Semi-Markov reinforcement learning for stochastic resource collection. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20) (pp. 3349–3355). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/463
