In minimally invasive surgery, dense 3D surface reconstruction is important for surgical navigation and for integrating pre- and intra-operative data. Despite recent developments in 3D tissue deformation recovery, the general applicability of existing techniques is limited by specific constraints and underlying assumptions. The need for accurate and robust tissue deformation recovery has motivated research into fusing multiple visual cues for depth recovery. In this paper, a Markov Random Field (MRF) based Bayesian belief propagation framework is proposed for fusing different depth cues. By using the underlying MRF structure to enforce spatial continuity across the image, the proposed method infers surface depth by combining, at each node, the posterior probabilities from the node's Markov blanket with the monocular and stereo depth maps. Detailed phantom validation and in vivo results demonstrate the accuracy, robustness, and practical value of the technique. © 2008 Springer Berlin Heidelberg.
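To illustrate the kind of fusion the abstract describes, the sketch below runs min-sum loopy belief propagation on a 4-connected grid MRF, with unary costs measuring agreement of each candidate depth with a stereo and a monocular depth map and a truncated-linear smoothness term between neighbours. This is a generic illustration under assumed cost functions and weights (`w_stereo`, `w_mono`, `smooth`, `trunc` are hypothetical parameters), not the paper's exact model.

```python
import numpy as np

def fuse_depth_bp(stereo, mono, labels, w_stereo=1.0, w_mono=0.5,
                  smooth=1.0, trunc=2.0, iters=10):
    """Fuse two depth maps by min-sum loopy BP on a 4-connected grid MRF.

    Illustrative sketch: unary cost = weighted squared disagreement with
    each depth cue; pairwise cost = truncated-linear smoothness.
    """
    H, W = stereo.shape
    K = len(labels)
    # Unary data cost per pixel and candidate depth label, shape (H, W, K).
    unary = (w_stereo * (labels[None, None, :] - stereo[:, :, None]) ** 2
             + w_mono * (labels[None, None, :] - mono[:, :, None]) ** 2)
    # Pairwise smoothness cost between neighbouring labels, shape (K, K).
    pair = smooth * np.minimum(np.abs(labels[:, None] - labels[None, :]), trunc)
    # msgs[d][y, x] = message arriving at pixel (y, x) from its neighbour
    # in direction d ('u' = from above, 'd' = from below, etc.).
    msgs = {d: np.zeros((H, W, K)) for d in "udlr"}
    opposite = {"u": "d", "d": "u", "l": "r", "r": "l"}
    for _ in range(iters):
        new = {}
        for d in "udlr":
            # Sender's belief excluding the message coming back from the
            # receiver (standard BP message exclusion).
            h = unary + sum(msgs[e] for e in "udlr" if e != opposite[d])
            # Minimise over the sender's label for each receiver label.
            m = (h[:, :, :, None] + pair[None, None]).min(axis=2)
            m -= m.min(axis=-1, keepdims=True)  # normalise for stability
            # Shift the message grid so it indexes the receiving pixel.
            shifted = np.zeros_like(m)
            if d == "u":
                shifted[1:, :] = m[:-1, :]
            elif d == "d":
                shifted[:-1, :] = m[1:, :]
            elif d == "l":
                shifted[:, 1:] = m[:, :-1]
            else:
                shifted[:, :-1] = m[:, 1:]
            new[d] = shifted
        msgs = new
    # Final belief per pixel; pick the minimum-cost depth label.
    belief = unary + sum(msgs.values())
    return labels[belief.argmin(axis=2)]
```

With agreeing cues and one outlier in the stereo map, the smoothness prior pulls the outlying pixel back toward its neighbours, which is the spatial-continuity effect the MRF structure provides.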
CITATION STYLE
Lo, B., Scarzanella, M. V., Stoyanov, D., & Yang, G. Z. (2008). Belief propagation for depth cue fusion in minimally invasive surgery. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5242 LNCS, pp. 104–112). Springer Verlag. https://doi.org/10.1007/978-3-540-85990-1_13