DUBLIN: Visual Document Understanding By Language-Image Network


Abstract

In this paper, we present DUBLIN, a pixel-based model for visual document understanding that does not rely on OCR. DUBLIN processes both the text and the images in a document directly from pixels and handles diverse document types and tasks. DUBLIN is pretrained on a large corpus of document images with novel tasks that enhance its visual and linguistic abilities. We evaluate DUBLIN on various benchmarks and show that it achieves state-of-the-art performance on extractive tasks such as DocVQA, InfoVQA, AI2D, OCR-VQA, RefExp, and CORD, as well as strong performance on abstractive tasks such as VisualMRC and text captioning. Our model demonstrates the potential of OCR-free document processing and opens new avenues for applications and research.

Citation (APA)
Aggarwal, K., Khandelwal, A., Tanmay, K., Khan, O. M., Liu, Q., Choudhury, M., … Tiwary, S. (2023). DUBLIN: Visual Document Understanding By Language-Image Network. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Industry Track (pp. 693–706). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-industry.65
