Learning from user behavior during image segmentation, so as to replicate the innate human ability to adapt shape delineation to contextually specific local information, is an important area of study in image understanding. Current approaches to image segmentation typically rely either on generic image features or on specific prior knowledge, which limits their application across different contextual settings. In this paper, a general segmentation framework based on reinforcement learning is proposed, demonstrating how user-specific behavior can be assimilated in situ for effective model adaptation and learning. The framework incorporates a two-layer reinforcement learning algorithm that constructs the model from experience accumulated during user interaction. Because the algorithm learns 'pervasively' whilst the user performs manual segmentation, no additional training steps are required, allowing the method to adapt and improve its accuracy as experience is acquired. Detailed validation on in-vivo magnetic resonance (MR) data demonstrates the practical value of the technique in significantly reducing the level of user interaction required, whilst maintaining overall segmentation accuracy.
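The paper itself does not include code, and the abstract does not specify the structure of the two-layer algorithm. As a rough illustration only, the sketch below shows how user corrections during interactive segmentation could drive a standard tabular Q-learning update; all names (QSegmenter, reward_from_user, the action set and the coarse context states) are hypothetical assumptions and do not reproduce the authors' method.

```python
# Hypothetical sketch: tabular Q-learning driven by user corrections during
# interactive segmentation. This is NOT the authors' two-layer algorithm;
# the state, action, and reward definitions are illustrative assumptions.
import random
from collections import defaultdict

ACTIONS = ["expand", "shrink", "keep"]  # assumed local contour-adjustment actions


class QSegmenter:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)      # Q[(state, action)] -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # epsilon-greedy choice over the assumed action set
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        # standard one-step Q-learning update from user feedback
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])


def reward_from_user(proposed, corrected):
    # Assumed reward: positive when the user accepts the proposal unchanged,
    # negative when the user has to override it.
    return 1.0 if proposed == corrected else -1.0


if __name__ == "__main__":
    agent = QSegmenter()
    # Toy interaction loop: states are coarse local-context labels (hypothetical).
    states = ["bright_edge", "weak_edge", "homogeneous"]
    for _ in range(1000):
        s = random.choice(states)
        a = agent.act(s)
        # Simulated user correction: prefers "expand" on weak edges, "keep" otherwise.
        user_choice = "expand" if s == "weak_edge" else "keep"
        r = reward_from_user(a, user_choice)
        agent.learn(s, a, r, random.choice(states))
    print({s: max(ACTIONS, key=lambda a: agent.q[(s, a)]) for s in states})
```

In this toy setup the learned policy converges toward the simulated user's preferences as corrections accumulate, which loosely mirrors the abstract's point that accuracy improves with experience while the burden of explicit training is avoided.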
CITATION STYLE
Wang, L., Merrifield, R., & Yang, G. Z. (2011). Reinforcement learning for context aware segmentation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6893 LNCS, pp. 627–634). https://doi.org/10.1007/978-3-642-23626-6_77