Lacuna Reconstruction: Self-Supervised Pre-Training for Low-Resource Historical Document Transcription


Abstract

We present a self-supervised pre-training approach for learning rich visual language representations for both handwritten and printed historical document transcription. After supervised fine-tuning of our pre-trained encoder representations for low-resource document transcription on two languages: (1) a heterogeneous set of handwritten Islamicate manuscript images and (2) early modern English printed documents, we show a meaningful improvement in recognition accuracy over the same supervised model trained from scratch with as few as 30 line image transcriptions for training. Our masked language model-style pre-training strategy, where the model is trained to identify the true masked visual representation from distractors sampled from within the same line, encourages learning robust contextualized language representations invariant to scribal writing style and printing noise present across documents.
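To make the pre-training objective concrete, below is a minimal sketch of a contrastive masked-prediction loss of the kind the abstract describes: for each masked position in a line image, the encoder output is scored against the true target representation and distractor representations sampled from other positions in the same line. This is an illustrative reconstruction, not the paper's implementation; the function name, tensor shapes, temperature, and distractor count are assumptions.

```python
import torch
import torch.nn.functional as F

def masked_contrastive_loss(context, targets, mask_idx,
                            num_distractors=10, temperature=0.1):
    """InfoNCE-style loss: for each masked position, identify the true
    target representation among distractors drawn from the same line.

    context:  (T, D) encoder outputs computed from the masked line image
    targets:  (T, D) target representations of the unmasked line image
    mask_idx: (M,)   indices of the masked positions
    """
    T, D = targets.shape
    losses = []
    for t in mask_idx.tolist():
        # Sample distractor positions from the same line, excluding t.
        candidates = torch.tensor([i for i in range(T) if i != t])
        distractors = candidates[torch.randperm(len(candidates))[:num_distractors]]
        cand_idx = torch.cat([torch.tensor([t]), distractors])  # true target first
        cand = targets[cand_idx]                                 # (K+1, D)
        # Similarity between the contextual vector and each candidate.
        sims = F.cosine_similarity(context[t].unsqueeze(0), cand, dim=-1) / temperature
        # The true target sits at index 0, so the label is 0.
        losses.append(F.cross_entropy(sims.unsqueeze(0),
                                      torch.zeros(1, dtype=torch.long)))
    return torch.stack(losses).mean()

# Toy usage: a line of 50 frames with 64-d representations, 12 of them masked.
context = torch.randn(50, 64)
targets = torch.randn(50, 64)
mask_idx = torch.randperm(50)[:12]
loss = masked_contrastive_loss(context, targets, mask_idx)
```

Drawing distractors from within the same line (rather than across the batch) is what pushes the model toward representations that distinguish content while remaining invariant to the writing style and printing noise shared by a line.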

Citation (APA)

Vogler, N., Allen, J. P., Miller, M. T., & Berg-Kirkpatrick, T. (2022). Lacuna Reconstruction: Self-Supervised Pre-Training for Low-Resource Historical Document Transcription. In Findings of the Association for Computational Linguistics: NAACL 2022 (pp. 206–216). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-naacl.15
