Approximating Markov Chain Approach to Optimal Feedback Control of a Flexible Needle

Abstract

We present a computationally efficient approach for the intra-operative update of the feedback control policy for a steerable needle in the presence of motion uncertainty. Solving the dynamic programming (DP) equations to obtain the optimal control policy is difficult or intractable for nonlinear problems such as steering a flexible needle in soft tissue. We use the method of approximating Markov chains to approximate the continuous (and controlled) process with a discrete, locally consistent counterpart. This provides the ground to examine a linear programming (LP) approach to the posed DP problem that significantly reduces the computational demand. A concrete example of two-dimensional (2D) needle steering is considered to investigate the effectiveness of the LP method for both deterministic and stochastic systems. We compare the performance of the LP-based policy with the results obtained through a more computationally demanding algorithm, iterative policy space approximation. Finally, we investigate the reliability of the LP-based policy under motion and parametric uncertainties as well as the effect of the insertion point/angle on the probability of success.
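The core idea in the abstract — replacing the DP equations on a locally consistent Markov chain with a linear program — can be sketched on a toy discounted-cost Markov decision process. The 3-state, 2-action chain, costs, and discount factor below are illustrative assumptions, not the paper's needle model; the LP maximizes the summed value function subject to the Bellman inequalities, which is the standard LP formulation of discounted DP.

```python
# Sketch: LP solution of the DP equations for a small discounted-cost MDP.
# The chain, costs, and discount factor are toy assumptions, NOT the
# flexible-needle model from the paper.
import numpy as np
from scipy.optimize import linprog

gamma = 0.9                      # discount factor (assumed)
n_states, n_actions = 3, 2

# Transition matrices P[a][s, s'] and stage costs c[s, a] (toy data)
P = np.array([
    [[0.8, 0.2, 0.0],
     [0.1, 0.8, 0.1],
     [0.0, 0.2, 0.8]],
    [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]],
])
c = np.array([[2.0, 1.0],
              [1.0, 3.0],
              [4.0, 0.5]])

# LP: maximize sum_s V(s) subject to the Bellman inequalities
#   V(s) - gamma * sum_{s'} P[a][s, s'] V(s') <= c[s, a]  for all s, a.
# linprog minimizes, so the objective is negated.
A_ub = np.vstack([np.eye(n_states) - gamma * P[a] for a in range(n_actions)])
b_ub = np.hstack([c[:, a] for a in range(n_actions)])
res = linprog(-np.ones(n_states), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n_states)
V = res.x

# Recover the greedy (optimal) policy from the LP value function
Q = c + gamma * np.stack([P[a] @ V for a in range(n_actions)], axis=1)
policy = Q.argmin(axis=1)
print("V* =", np.round(V, 3), "policy:", policy)
```

At the LP optimum the value vector coincides with the fixed point of the Bellman operator, so the greedy policy extracted from it is optimal for the discretized chain; the paper applies this idea to the (much larger) chain obtained by discretizing the needle dynamics.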

CITATION STYLE

APA

Sovizi, J., Kumar, S., & Krovi, V. (2016). Approximating Markov Chain Approach to Optimal Feedback Control of a Flexible Needle. Journal of Dynamic Systems, Measurement and Control, Transactions of the ASME, 138(11). https://doi.org/10.1115/1.4033834
