An automatic method for video character segmentation

Abstract

This paper presents an automatic segmentation system for characters in color text images cropped from natural images or videos. It is based on a new neural architecture ensuring fast processing and robustness against noise, variations in illumination, complex backgrounds and low resolution. An off-line training phase on a set of synthetic color text images, in which the exact character positions are known, adjusts the neural parameters and thus builds an optimal nonlinear filter that extracts the most discriminative features for robustly detecting the border positions between characters. The proposed method is tested on a set of synthetic text images to precisely evaluate its performance with respect to noise, and on a set of complex text images collected from video frames and web pages to evaluate its performance on real images. The results are encouraging, with a segmentation rate of 89.12% and a recognition rate of 81.94% on a set of difficult text images collected from video frames and web pages. © 2008 Springer-Verlag Berlin Heidelberg.
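To make the idea of the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' actual architecture) of a convolutional network that maps a fixed-height color text image to a per-column probability of lying on a boundary between two characters, trained on synthetic images where the boundary columns are known. All layer sizes, names and the training-target format are assumptions for illustration only.

```python
# Hypothetical sketch of inter-character border detection with a small CNN.
# Layer sizes, names and the target encoding are illustrative assumptions,
# not the architecture described in the paper.

import torch
import torch.nn as nn

class BorderDetector(nn.Module):
    def __init__(self, height: int = 32):
        super().__init__()
        # Convolutions keep the image width so that every output column
        # stays aligned with the corresponding input column.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Collapse the vertical dimension and score each column.
        self.score = nn.Conv1d(32 * height, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, height, width) color text image
        f = self.features(x)                 # (batch, 32, height, width)
        b, c, h, w = f.shape
        f = f.reshape(b, c * h, w)           # stack features per column
        return torch.sigmoid(self.score(f))  # (batch, 1, width) boundary probs

# Off-line training step on synthetic images with known boundary columns.
model = BorderDetector(height=32)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

images = torch.rand(8, 3, 32, 128)    # stand-in for synthetic text images
targets = torch.zeros(8, 1, 128)      # 1 at known inter-character columns
targets[:, :, ::16] = 1.0             # dummy ground-truth boundaries

optimiser.zero_grad()
pred = model(images)
loss = loss_fn(pred, targets)
loss.backward()
optimiser.step()
```

At inference time, local maxima of the per-column boundary probabilities would give candidate cut positions between characters; this post-processing step is likewise an assumption of the sketch.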

Citation (APA)

Saidane, Z., & Garcia, C. (2008). An automatic method for video character segmentation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5112 LNCS, pp. 557–566). https://doi.org/10.1007/978-3-540-69812-8_55
