Target surveillance in adversarial environments using POMDPs


Abstract

This paper introduces an extension of the target surveillance problem in which the surveillance agent is exposed to an adversarial ballistic threat. The problem is formulated as a mixed observability Markov decision process (MOMDP), a factored variant of the partially observable Markov decision process, to account for uncertainty in both the state and the dynamics. The control policy obtained by solving the MOMDP aims to maximize the frequency of target observations while minimizing exposure to the ballistic threat. The adversary's behavior is modeled with a level-k policy, which is used to construct the state transition model of the MOMDP. The approach is empirically evaluated against a MOMDP adversary and against a human opponent in a target surveillance computer game. The empirical results demonstrate that, on average, level-3 MOMDP policies outperform both lower-level reasoning policies and human players.
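The level-k idea referenced in the abstract can be illustrated outside the MOMDP setting. The sketch below (not code from the paper) applies level-k reasoning to a hypothetical one-shot zero-sum matrix game: a level-0 player acts uniformly at random, and a level-k player best-responds to a level-(k-1) model of its opponent. The `PAYOFF` values are invented for illustration.

```python
# Hedged sketch of level-k reasoning (illustrative, not the paper's model):
# level-0 acts uniformly at random; level-k best-responds to a
# level-(k-1) model of the opponent in a zero-sum matrix game.
# PAYOFF is a hypothetical 3x3 game: rows = agent actions,
# columns = adversary actions, entries = agent reward.

PAYOFF = [
    [2.0, -1.0, 0.0],
    [-1.0, 2.0, 0.0],
    [1.0, 1.0, 2.0],
]

def level_policy(k, payoff):
    """Return a distribution over row actions for a level-k player."""
    n = len(payoff)
    if k == 0:
        return [1.0 / n] * n  # level-0: uniform random
    # Model the opponent as level-(k-1) playing the negated,
    # transposed game (zero-sum assumption).
    opp_payoff = [[-payoff[i][j] for i in range(n)] for j in range(n)]
    opp = level_policy(k - 1, opp_payoff)
    # Expected reward of each of our actions against that opponent model.
    values = [sum(p * payoff[a][b] for b, p in enumerate(opp))
              for a in range(n)]
    best = max(range(n), key=lambda a: values[a])
    return [1.0 if a == best else 0.0 for a in range(n)]
```

In the paper, the same recursion is applied to full MOMDP policies rather than one-shot actions: the level-(k-1) policy defines the adversary's contribution to the state transition model, and the level-k policy is the solution of the resulting MOMDP.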

Citation (APA)

Egorov, M., Kochenderfer, M. J., & Uudmae, J. J. (2016). Target surveillance in adversarial environments using POMDPs. In 30th AAAI Conference on Artificial Intelligence, AAAI 2016 (pp. 2473–2479). AAAI press. https://doi.org/10.1609/aaai.v30i1.10126
