A Concurrent SOM-Based Chan-Vese Model for Image Segmentation

Abstract

Concurrent Self Organizing Maps (CSOMs) deal with the pattern classification problem in a parallel processing way, aiming to minimize a suitable objective function. Similarly, Active Contour Models (ACMs) (e.g., the Chan-Vese (CV) model) treat the image segmentation problem as an optimization problem, minimizing a suitable energy functional. Ensuring the effectiveness of ACMs remains a real challenge in many computer vision applications. In this paper, we propose a novel regional ACM, which relies on a CSOM to approximate the foreground and background image intensity distributions in a supervised way, and to drive the active-contour evolution accordingly. We term our model the Concurrent Self Organizing Map-based Chan-Vese (CSOM-CV) model. Its main idea is to concurrently integrate the global information extracted by a CSOM from a few supervised pixels into the level-set framework of the CV model, so as to build an effective ACM. Experimental results show the effectiveness of CSOM-CV in segmenting synthetic and real images when compared with the stand-alone CV and CSOM models. © Springer International Publishing Switzerland 2014.
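For reference, the classical CV model that CSOM-CV builds on segments an image $I$ by minimizing the region-based energy

\[
E(c_1, c_2, C) = \mu \,\mathrm{Length}(C) + \nu \,\mathrm{Area}\big(\mathrm{in}(C)\big)
+ \lambda_1 \int_{\mathrm{in}(C)} |I(x) - c_1|^2 \, dx
+ \lambda_2 \int_{\mathrm{out}(C)} |I(x) - c_2|^2 \, dx ,
\]

where $C$ is the evolving contour and $c_1$, $c_2$ are the mean intensities inside and outside $C$. This is the standard Chan-Vese formulation, not the exact CSOM-CV functional. Based on the abstract, the CSOM-CV idea can be read as replacing the constant fitting terms $|I(x)-c_1|^2$ and $|I(x)-c_2|^2$ with terms derived from the prototypes of two SOMs trained concurrently on a few supervised foreground and background pixels; the precise functional and its level-set evolution equation are given in the paper itself.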

Citation (APA)

Abdelsamea, M. M., Gnecco, G., & Gaber, M. M. (2014). A Concurrent SOM-Based Chan-Vese Model for Image Segmentation. In Advances in Intelligent Systems and Computing (Vol. 295, pp. 199–208). Springer Verlag. https://doi.org/10.1007/978-3-319-07695-9_19
