Robust scene text detection for multi-script languages using deep learning

Abstract

Text detection in natural images is in high demand for many real-life applications such as image retrieval and self-navigation. This work addresses the problem of robust text detection, especially for multi-script text in natural scene images. Unlike existing works that treat multi-script characters as groups of text fragments, we treat them as non-connected components. Specifically, we first propose a novel representation named Linked Extremal Regions (LER) to extract full characters instead of fragments of scene characters. Second, we propose a two-stage convolutional neural network for discriminating multi-script text in cluttered background images, yielding more robust text detection. Experimental results on three well-known datasets, namely ICDAR 2011, ICDAR 2013, and MSRA-TD500, demonstrate that the proposed method outperforms state-of-the-art methods and is also language independent.
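For orientation, the sketch below illustrates the general flavor of such a pipeline: candidate character regions are extracted from a grayscale scene image, loosely linked into character-level boxes, and then filtered by a small text/non-text classifier. It is not the authors' method: OpenCV's MSER stands in for extremal-region extraction, the link_regions heuristic is a naive placeholder for the proposed Linked Extremal Regions (LER) linking, a single untrained PyTorch CNN stands in for the two-stage network, and the input file name scene.jpg is hypothetical.

```python
# Illustrative sketch only; see the caveats above. Not the paper's actual LER
# representation or its two-stage CNN.
import cv2
import numpy as np
import torch
import torch.nn as nn


def extract_candidate_regions(gray_image):
    """Detect extremal regions (via MSER) as character candidates."""
    mser = cv2.MSER_create()
    _regions, boxes = mser.detectRegions(gray_image)
    return boxes  # each box is (x, y, w, h)


def link_regions(boxes, max_gap=10):
    """Naively merge horizontally adjacent boxes into character-level boxes
    (a simplified placeholder for the LER linking described in the paper)."""
    linked = []
    for x, y, w, h in sorted(boxes, key=lambda b: b[0]):
        if linked and x - (linked[-1][0] + linked[-1][2]) < max_gap:
            lx, ly, lw, lh = linked[-1]
            nx, ny = lx, min(ly, y)
            nw = max(lx + lw, x + w) - nx
            nh = max(ly + lh, y + h) - ny
            linked[-1] = (nx, ny, nw, nh)
        else:
            linked.append((int(x), int(y), int(w), int(h)))
    return linked


class TextNonTextCNN(nn.Module):
    """Small binary classifier: does a candidate patch contain text?"""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, 2)

    def forward(self, x):  # x: (N, 1, 32, 32) grayscale patches
        x = self.features(x)
        return self.classifier(x.flatten(1))


if __name__ == "__main__":
    img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    boxes = link_regions(extract_candidate_regions(img))
    model = TextNonTextCNN().eval()  # untrained; for structure only
    for x, y, w, h in boxes:
        patch = cv2.resize(img[y:y + h, x:x + w], (32, 32)).astype(np.float32) / 255.0
        logits = model(torch.from_numpy(patch)[None, None])
        if logits.argmax(1).item() == 1:  # class 1 = text (meaningless until trained)
            print("text candidate at", (x, y, w, h))
```

In the paper itself, the candidate-filtering stage is a two-stage CNN trained to reject cluttered background regions across scripts; the single small network here only indicates where such a classifier would sit in the pipeline.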

Cite

APA

Liu, R. Z., Sun, X., Xu, H., Shivakumara, P., Su, F., Lu, T., & Yang, R. (2017). Robust scene text detection for multi-script languages using deep learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10132 LNCS, pp. 329–340). Springer Verlag. https://doi.org/10.1007/978-3-319-51811-4_27
