FTC of hidden Markov process with application to resource allocation in air operation


Abstract

This paper investigates the feedback control of a hidden Markov process (HMP) when some of the observation processes are lost. The control action facilitates or impedes particular transitions from the inferred current state in an attempt to maximize the probability that the HMP is driven to a desirable absorbing state. This control problem is motivated by the need for judicious resource allocation to win an air operation involving two opposing forces. The effectiveness of a receding horizon control scheme based on the inferred discrete state is examined. Tolerance to the loss of sensors that help determine the state of the air operation is achieved through a decentralized scheme that estimates a continuous state from measurements of linear models with additive noise. The discrete state of the HMP is identified using three well-known detection schemes. The sub-optimal control policy based on the detected state is implemented on-line in closed loop, where the air operation is simulated as a stochastic process with SimEvents and the measurement process is simulated for a range of single-sensor loss rates.
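The abstract describes an inference-plus-control loop: filter a belief over the discrete HMP state from noisy observations, then pick the control action that best drives the chain toward the absorbing "win" state over a receding horizon. The sketch below illustrates that loop in a minimal form. The three-state model, transition matrices, observation likelihoods, and horizon length are all hypothetical placeholders, not the paper's actual air-operation model or its three detection schemes.

```python
import numpy as np

# Hypothetical 3-state HMP: state 2 is the desirable absorbing ("win") state.
# Transition matrices are indexed by control action; all values are illustrative.
P = {
    0: np.array([[0.7, 0.2, 0.1],    # action 0: passive resource allocation
                 [0.3, 0.6, 0.1],
                 [0.0, 0.0, 1.0]]),
    1: np.array([[0.5, 0.2, 0.3],    # action 1: aggressive resource allocation
                 [0.2, 0.5, 0.3],
                 [0.0, 0.0, 1.0]]),
}
# Observation likelihoods B[s, o]: probability of observing symbol o in state s.
B = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

def filter_step(belief, action, obs):
    """One forward-algorithm update: predict with P[action], correct with B[:, obs]."""
    predicted = belief @ P[action]
    corrected = predicted * B[:, obs]
    return corrected / corrected.sum()

def choose_action(belief, horizon=3):
    """Crude receding-horizon policy: pick the action whose repeated application
    maximizes the probability of occupying the absorbing state after `horizon` steps."""
    best_action, best_p_win = None, -1.0
    for a, Pa in P.items():
        p_win = (belief @ np.linalg.matrix_power(Pa, horizon))[2]
        if p_win > best_p_win:
            best_action, best_p_win = a, p_win
    return best_action

# Usage: start from a uniform belief and process a short observation sequence.
belief = np.ones(3) / 3
for obs in [0, 1, 1, 2]:
    action = choose_action(belief)
    belief = filter_step(belief, action, obs)
    print(f"obs={obs} action={action} belief={np.round(belief, 3)}")
```

In the paper the detected discrete state (rather than the full belief vector) drives the sub-optimal policy, and the continuous measurements are fused by a decentralized estimator before detection; the sketch collapses those stages into a single belief update for brevity.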

Citation (APA)

Wu, N. E., & Ruschmann, M. C. (2011). FTC of hidden Markov process with application to resource allocation in air operation. Journal of Systems Engineering and Electronics, 22(1), 12–21. https://doi.org/10.3969/j.issn.1004-4132.2011.01.002
