Dynamic programming and stochastic control processes

Citations: 0
Readers: 44 (Mendeley users who have this article in their library)

Abstract

Consider a system S specified at any time t by a finite-dimensional vector x(t) satisfying the vector differential equation dx/dt = g[x, r(t), f(t)], x(0) = c, where c is the initial state, r(t) is a random forcing term possessing a known distribution, and f(t) is a forcing term chosen, via a feedback process, either to minimize the expected value of the functional J(x) = ∫_0^T h(x − y, t) dG(t), where y(t) is a known function, or to minimize the probability that max_{0 ≤ t ≤ T} h(x − y, t) exceeds a specified bound. It is shown how the functional equation technique of dynamic programming may be used to obtain a new computational and analytic approach to problems of this genre. The limited memory capacity of present-day digital computers restricts the routine application of these techniques to first- and second-order systems at the moment, with only limited application to higher-order systems.

Citation (APA)

Bellman, R. (1958). Dynamic programming and stochastic control processes. Information and Control, 1(3), 228–239. https://doi.org/10.1016/S0019-9958(58)80003-0
