Self-attention Networks for Non-recurrent Handwritten Text Recognition

Abstract

Handwritten text recognition remains an unsolved problem in the field of machine learning. Nevertheless, the technology has improved considerably over the last decade, in part thanks to advances in recurrent neural networks. Unfortunately, due to their sequential nature, recurrent models cannot be effectively parallelised during training. Meanwhile, in natural language processing research, the transformer has recently become the dominant architecture, replacing the recurrent networks that were once popular. These new models are far more efficient to train than their predecessors because their primary building block, the self-attention network, processes sequences entirely non-recurrently. This work demonstrates that self-attention networks can replace the recurrent networks of state-of-the-art handwriting recognition models and achieve competitive error rates, while significantly reducing training time and the number of parameters.
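As a rough illustration of the non-recurrent building block the abstract refers to, the sketch below shows a single-head scaled dot-product self-attention layer in PyTorch. The class name, dimensions, and the idea of applying it to column features from text-line images are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttention(nn.Module):
    """Minimal single-head self-attention layer (illustrative sketch only)."""

    def __init__(self, d_model: int):
        super().__init__()
        # Learned projections for queries, keys, and values.
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.scale = d_model ** 0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sequence_length, d_model). Every position attends to every
        # other position in a single matrix multiplication, so there is no
        # step-by-step recurrence to serialise during training.
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        attn = F.softmax(q @ k.transpose(-2, -1) / self.scale, dim=-1)
        return attn @ v


# Hypothetical usage: a batch of 8 feature sequences of length 128 with 256
# channels, e.g. visual features extracted from handwritten text-line images.
features = torch.randn(8, 128, 256)
out = SelfAttention(256)(features)  # same shape: (8, 128, 256)
```

Because the attention weights for all positions are computed at once, such layers parallelise across the sequence dimension, which is the training-efficiency advantage the abstract contrasts with recurrent models.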

Citation (APA)

d’Arce, R., Norton, T., Hannuna, S., & Cristianini, N. (2022). Self-attention Networks for Non-recurrent Handwritten Text Recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13639 LNCS, pp. 389–403). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-21648-0_27
