DeepStroke: Understanding glyph structure with semantic segmentation and tabu search


Abstract

Glyphs in many writing systems (e.g., Chinese) are composed of a sequence of strokes written in a specific order. Glyph structure interpretation (i.e., stroke extraction) is one of the most important processing steps in many tasks, including aesthetic quality evaluation, handwriting synthesis, and character recognition. However, existing methods that rely heavily on accurate shape matching are not only time-consuming but also unsatisfactory in stroke extraction performance. In this paper, we propose a novel method based on semantic segmentation and tabu search to interpret the structure of Chinese glyphs. Specifically, we first employ an improved Fully Convolutional Network (FCN), DeepStroke, to extract strokes, and then use tabu search to obtain the order in which these strokes are drawn. We also build the Chinese Character Stroke Segmentation Dataset (CCSSD), consisting of 67,630 character images evenly distributed across 10 different font styles. This dataset provides a benchmark for both stroke extraction and semantic segmentation tasks. Experimental results demonstrate the effectiveness and efficiency of our method and validate its superiority over the state of the art.
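The abstract names tabu search as the mechanism for recovering stroke order from the extracted strokes. The sketch below is only an illustration of the general tabu search scheme over stroke permutations, not the authors' implementation; the cost function `order_cost` is a hypothetical placeholder that penalizes deviation from a reference writing order.

```python
# Minimal tabu-search sketch for ordering extracted strokes.
# Assumptions (not from the paper): `order_cost` and the swap neighborhood.

import random
from itertools import combinations

def order_cost(order, reference):
    """Hypothetical cost: positions where the candidate order
    disagrees with a reference stroke order."""
    return sum(1 for a, b in zip(order, reference) if a != b)

def tabu_search(reference, n_iter=200, tabu_tenure=7, seed=0):
    rng = random.Random(seed)
    n = len(reference)
    current = list(range(n))
    rng.shuffle(current)
    best, best_cost = current[:], order_cost(current, reference)
    tabu = {}  # move (i, j) -> iteration until which the swap stays tabu

    for it in range(n_iter):
        candidates = []
        for i, j in combinations(range(n), 2):
            neighbor = current[:]
            neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
            cost = order_cost(neighbor, reference)
            is_tabu = tabu.get((i, j), -1) >= it
            # Aspiration criterion: accept a tabu move if it beats the best so far.
            if not is_tabu or cost < best_cost:
                candidates.append((cost, (i, j), neighbor))
        if not candidates:
            continue
        cost, move, current = min(candidates, key=lambda c: c[0])
        tabu[move] = it + tabu_tenure  # forbid reversing this swap for a while
        if cost < best_cost:
            best, best_cost = current[:], cost
    return best, best_cost

if __name__ == "__main__":
    ref = list(range(8))  # reference order of 8 extracted strokes
    order, cost = tabu_search(ref)
    print(order, cost)
```

A swap neighborhood with a short tabu tenure is a common default for permutation problems; the actual objective in the paper would be driven by the segmented stroke masks rather than a known reference order.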

Citation (APA)

Wang, W., Lian, Z., Tang, Y., & Xiao, J. (2020). DeepStroke: Understanding glyph structure with semantic segmentation and tabu search. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11961 LNCS, pp. 353–364). Springer. https://doi.org/10.1007/978-3-030-37731-1_29
