On probability calibration of recurrent text recognition network

Abstract

Optical text recognition has seen continual improvement in character accuracy over the past decade. However, because errors persist, it is crucial to know when and where a recognition error occurs. Studies have shown that recent deep convolutional neural networks tend to exhibit larger calibration errors than traditional classifiers such as SVMs. Yet the calibration error of deep neural networks for sequential text recognition has not been studied in the literature. This paper addresses the probability misalignment problem in unsegmented text recognition models. We analyze the causes of probability misalignment in a popular recurrent text recognition model, the attention encoder-decoder model, and propose a novel probability calibration algorithm for individual character predictions. Experiments show that the proposed methods not only reduce expected calibration error but also improve character prediction accuracy. In our experiments, calibration error on authentic industrial datasets was reduced by as much as 68% compared to the original text recognizer outputs.
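The expected calibration error (ECE) mentioned in the abstract is the standard metric for probability misalignment: predictions are binned by confidence, and the per-bin gap between mean confidence and accuracy is averaged, weighted by bin size. A minimal sketch of the common equal-width-bin formulation (the bin count and data here are illustrative, not taken from the paper):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average over confidence bins of
    |mean confidence - accuracy| within each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # half-open bins (lo, hi]; the first bin also catches exact 0.0
        mask = (confidences > lo) & (confidences <= hi)
        if lo == 0.0:
            mask |= confidences == 0.0
        if not mask.any():
            continue
        gap = abs(confidences[mask].mean() - correct[mask].mean())
        ece += (mask.sum() / n) * gap
    return ece

# Overconfident model: 90% confidence but 100% of predictions correct
# still counts as miscalibrated (gap of 0.1 in its bin).
print(expected_calibration_error([0.9] * 10, [1] * 10))
```

A perfectly calibrated model (e.g. 80% confidence with exactly 80% of those predictions correct) yields an ECE of zero under this definition.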

Citation (APA)

Zhu, X., Wang, J., Hong, Z., Guo, J., & Xiao, J. (2019). On probability calibration of recurrent text recognition network. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11955 LNCS, pp. 425–436). Springer. https://doi.org/10.1007/978-3-030-36718-3_36
