Collision Avoidance Using Partially Controlled Markov Decision Processes

Abstract

Optimal collision avoidance in stochastic environments requires accounting for the likelihood and costs of future sequences of outcomes in response to different sequences of actions. Prior work has investigated formulating the problem as a Markov decision process, discretizing the state space, and solving for the optimal strategy using dynamic programming. Experiments have shown that such an approach can be very effective, but scaling to higher-dimensional problems can be challenging due to the exponential growth of the discrete state space. This paper presents an approach that can greatly reduce the complexity of computing the optimal strategy in problems where only some of the dimensions of the problem are controllable. The approach is applied to aircraft collision avoidance where the system must recommend maneuvers to an imperfect pilot. © Springer-Verlag Berlin Heidelberg 2013.
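To make the formulation concrete, the sketch below runs value iteration on a tiny discretized collision-avoidance MDP. This is not the paper's model: the state (a relative-altitude bin), the action set, the pilot-compliance probability, and all costs are hypothetical placeholders chosen only to illustrate the dynamic-programming step the abstract refers to.

```python
import numpy as np

# Hypothetical toy model: state = discretized relative-altitude bin of the
# intruder; action = advised altitude-rate change. The pilot is imperfect,
# complying with an advisory only with probability P_COMPLY. All numbers
# below are illustrative, not taken from the paper.
N_BINS = 11                  # relative-altitude bins
ACTIONS = [-1, 0, +1]        # descend / level / climb (bins per step)
P_COMPLY = 0.8               # probability the pilot follows the advisory
COLLISION_BIN = N_BINS // 2  # co-altitude bin: near-collision, high cost
GAMMA = 0.95                 # discount factor

def step_cost(s, a):
    # Heavy penalty near co-altitude plus a small penalty for maneuvering.
    return (10.0 if s == COLLISION_BIN else 0.0) + 0.1 * abs(a)

def transition(s, a):
    # Returns [(prob, next_state)]: the pilot complies with prob P_COMPLY,
    # otherwise the relative altitude is unchanged.
    s_move = min(max(s + a, 0), N_BINS - 1)
    return [(P_COMPLY, s_move), (1.0 - P_COMPLY, s)]

def value_iteration(tol=1e-6):
    # Standard cost-minimizing dynamic programming over the discrete states.
    V = np.zeros(N_BINS)
    while True:
        Q = np.array([[step_cost(s, a)
                       + GAMMA * sum(p * V[s2] for p, s2 in transition(s, a))
                       for a in ACTIONS]
                      for s in range(N_BINS)])
        V_new = Q.min(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmin(axis=1)
        V = V_new

V, policy = value_iteration()
# At the co-altitude bin the optimal advisory is a maneuver (climb or
# descend); far from conflict the cheapest action is to stay level.
```

Even in this toy, the table `Q` has one entry per (state, action) pair, which is why discretizing a higher-dimensional state space grows exponentially; the paper's contribution is exploiting the split between controlled and uncontrolled state dimensions to avoid paying that full cost.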

Citation (APA)

Kochenderfer, M. J., & Chryssanthacopoulos, J. P. (2013). Collision Avoidance Using Partially Controlled Markov Decision Processes. In Communications in Computer and Information Science (Vol. 271, pp. 86–100). Springer Verlag. https://doi.org/10.1007/978-3-642-29966-7_6
