Multimodal interactive transcription of ancient text images

Abstract

The amount of digitized legacy documents has risen dramatically in recent years, mainly due to the growing number of on-line digital libraries publishing documents of this kind. On the one hand, the vast majority of these documents are still waiting to be transcribed into an electronic text format (such as ASCII or PDF) that would give historians and other researchers new ways of indexing, consulting and querying them. On the other hand, in some cases adequate transcriptions of the handwritten text images are already available, which creates a growing need to align the images with their transcriptions so that the documents can be consulted more conveniently. This work presents two systems that address these issues. The first transcribes the documents using an interactive-predictive approach, which integrates the user's corrective-feedback actions into the recognition process itself. The second is an alignment method, based on the Viterbi algorithm, that finds mappings between the word images of a given handwritten document and the corresponding (ASCII) words of its transcription. © 2012 Springer-Verlag Berlin Heidelberg.
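The alignment idea mentioned above can be illustrated with a minimal monotone dynamic-programming (Viterbi-style) sketch: each transcript word is assigned a contiguous, ordered span of image segments, and the assignment maximizing a total log-score is recovered by backtracking. The function names and the segment-scoring interface here are hypothetical, chosen for illustration only; they are not the authors' implementation, which operates on HMM-based recognition scores.

```python
def viterbi_align(n_frames, words, seg_logscore):
    """Monotone alignment of an image-frame sequence to a word sequence.

    seg_logscore(s, e, w) -> log-score that frames [s, e) render word w
    (a hypothetical scoring callback; in the paper this role is played
    by HMM likelihoods). Returns (best_logscore, spans), where spans[k]
    is the (start, end) frame span assigned to words[k].
    """
    NEG = float("-inf")
    W = len(words)
    # D[t][k]: best score aligning the first t frames to the first k words
    D = [[NEG] * (W + 1) for _ in range(n_frames + 1)]
    back = [[None] * (W + 1) for _ in range(n_frames + 1)]
    D[0][0] = 0.0
    for k in range(1, W + 1):
        for t in range(k, n_frames + 1):  # each word consumes >= 1 frame
            for s in range(k - 1, t):     # s: start frame of word k-1
                cand = D[s][k - 1] + seg_logscore(s, t, words[k - 1])
                if cand > D[t][k]:
                    D[t][k] = cand
                    back[t][k] = s
    # Backtrack the word boundaries from the final cell
    spans, t, k = [], n_frames, W
    while k > 0:
        s = back[t][k]
        spans.append((s, t))
        t, k = s, k - 1
    spans.reverse()
    return D[n_frames][W], spans
```

A toy usage, with a synthetic score table that rewards the intended segmentation:

```python
good = {(0, 2, "a"): 0.0, (2, 4, "b"): 0.0}
score = lambda s, e, w: good.get((s, e, w), -5.0)
best, spans = viterbi_align(4, ["a", "b"], score)
# spans -> [(0, 2), (2, 4)]
```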

APA

Romero, V., Sánchez, J. A., Toselli, A. H., & Vidal, E. (2012). Multimodal interactive transcription of ancient text images. In Communications in Computer and Information Science (Vol. 247 CCIS, pp. 63–73). https://doi.org/10.1007/978-3-642-27978-2_6
