WUKONG-READER: Multi-modal Pre-training for Fine-grained Visual Document Understanding


Abstract

Unsupervised pre-training on millions of digital-born or scanned documents has shown promising advances in visual document understanding (VDU). While various vision-language pre-training objectives are studied in existing solutions, the document textline, as an intrinsic granularity in VDU, has seldom been explored so far. A document textline usually contains words that are spatially and semantically correlated, which can be easily obtained from OCR engines. In this paper, we propose WUKONG-READER, trained with new pre-training objectives to leverage the structural knowledge nested in document textlines. We introduce textline-region contrastive learning to achieve fine-grained alignment between the visual regions and texts of document textlines. Furthermore, masked region modeling and textline-grid matching are also designed to enhance the visual and layout representations of textlines. Experiments show that WUKONG-READER brings superior performance on various VDU tasks in both English and Chinese. The fine-grained alignment over textlines also empowers WUKONG-READER with promising localization ability.
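The textline-region contrastive objective described above aligns the visual embedding of each textline region with the text embedding of its words. As a rough illustration (not the authors' implementation), this kind of alignment is commonly realized as a symmetric InfoNCE loss over matched region/text pairs; the function below is a minimal NumPy sketch under that assumption, with hypothetical names:

```python
import numpy as np

def textline_contrastive_loss(vis, txt, tau=0.07):
    """Symmetric InfoNCE-style loss (illustrative sketch, not WUKONG-READER's code).

    vis: (N, d) visual embeddings of N textline regions.
    txt: (N, d) text embeddings of the same N textlines, row-aligned.
    tau: temperature scaling the cosine-similarity logits (assumed value).
    """
    # L2-normalize so the dot product is cosine similarity.
    vis = vis / np.linalg.norm(vis, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = vis @ txt.T / tau            # (N, N): row i vs. all textline texts
    idx = np.arange(len(vis))             # region i matches textline i

    def xent(l):
        # Cross-entropy of each row against the diagonal (matched pair).
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # Average region-to-text and text-to-region directions.
    return 0.5 * (xent(logits) + xent(logits.T))
```

Matched pairs produce a lower loss than mismatched ones, which is what drives the fine-grained region–text alignment the abstract refers to.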

Citation (APA)

Bai, H., Liu, Z., Meng, X., Li, W., Liu, S., Luo, Y., … Liu, Q. (2023). WUKONG-READER: Multi-modal Pre-training for Fine-grained Visual Document Understanding. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 13386–13401). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.748
