In this paper, two new techniques for correcting OCR errors are proposed: recurrent neural networks with Long Short-Term Memory (LSTM), and Weighted Finite-State Transducers (WFSTs) with context-dependent confusion rules. Both methods are applied to OCR results for Latin and Urdu script; Urdu script is especially challenging for OCR. To build an error model with context-dependent confusion rules, the OCR confusions that appear in the recognition outputs are translated into edit operations using the Levenshtein edit distance algorithm. The new LSTM model avoids the search computations over the language model and also enables the model to correct unseen incorrect words. Our generic approaches are language independent. The proposed supervised LSTM model is compared with the context-dependent error model and state-of-the-art single-rule-based methods. On Latin script, the error rate of the LSTM is 0.48%, that of the error model is 0.68%, and that of the rule-based model is 1.0%. On the Urdu test set, the error rate of the LSTM model is 1.58%, while the error model reaches 3.8% and the raw OCR recognition output 6.9%. LSTM achieved the best performance on both Latin and Urdu script. These experiments show that LSTM performs very well on language-modeling tasks, in particular post-processing.
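The confusion-rule extraction step can be illustrated with a minimal sketch: aligning an OCR hypothesis against its ground truth with the Levenshtein (Wagner-Fischer) dynamic program and backtracing to recover the character-level edit operations. The function name and sample strings here are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: extract character-level confusions (sub/ins/del)
# from an OCR output aligned to ground truth via Levenshtein distance.

def edit_operations(ocr, truth):
    """Return the edit operations turning `ocr` into `truth`."""
    m, n = len(ocr), len(truth)
    # Standard edit-distance DP table.
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ocr[i - 1] == truth[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # delete from OCR
                          d[i][j - 1] + 1,          # insert into OCR
                          d[i - 1][j - 1] + cost)   # match / substitute
    # Backtrace to recover the operations along an optimal path.
    ops, i, j = [], m, n
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1]
                and ocr[i - 1] == truth[j - 1]):
            i, j = i - 1, j - 1                     # characters match
        elif i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + 1:
            ops.append(('sub', ocr[i - 1], truth[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ops.append(('del', ocr[i - 1], ''))
            i -= 1
        else:
            ops.append(('ins', '', truth[j - 1]))
            j -= 1
    return list(reversed(ops))

# A classic OCR confusion: 'rn' misread as 'm'.
print(edit_operations("modem", "modern"))
# → [('ins', '', 'r'), ('sub', 'm', 'n')]
```

Operations collected this way over a training corpus can be aggregated into context-dependent confusion rules and compiled into the WFST error model.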
Al Azawi, M., Ul Hasan, A., Liwicki, M., & Breuel, T. M. (2014). Character-level alignment using WFST and LSTM for post-processing in multi-script recognition systems - A comparative study. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8814, pp. 379–386). Springer Verlag. https://doi.org/10.1007/978-3-319-11758-4_41