DocGraphLM: Documental Graph Language Model for Information Extraction

Abstract

Advances in Visually Rich Document Understanding (VrDU) have enabled information extraction and question answering over documents with complex layouts. Two classes of architecture have emerged: transformer-based models inspired by LLMs, and graph neural networks. In this paper, we introduce DocGraphLM, a novel framework that combines pre-trained language models with graph semantics. To achieve this, we propose 1) a joint encoder architecture to represent documents, and 2) a novel link prediction approach to reconstruct document graphs. DocGraphLM predicts both the direction and the distance between nodes using a convergent joint loss function that prioritizes neighborhood restoration and down-weights distant node detection. Our experiments on three SotA datasets show consistent improvement on IE and QA tasks with the adoption of graph features. Moreover, adopting the graph features accelerates convergence during training, despite the graph being constructed solely through link prediction.
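To make the link-prediction objective concrete, below is a minimal sketch (not the authors' released code) of a joint loss that classifies the direction between two node embeddings into discretized bins and regresses their distance, with an inverse-distance weighting so that errors on nearby neighbors dominate. All names (`JointLinkPredictionLoss`, `direction_head`, `distance_head`), the number of direction bins, and the exact weighting scheme are assumptions for illustration; the paper's own formulation may differ.

```python
# Sketch of a joint direction + distance link-prediction loss over node pairs.
# Hypothetical implementation; names and weighting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointLinkPredictionLoss(nn.Module):
    def __init__(self, hidden_dim: int = 256, num_directions: int = 8):
        super().__init__()
        # Prediction heads over concatenated pairs of node embeddings.
        self.direction_head = nn.Linear(2 * hidden_dim, num_directions)
        self.distance_head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, src, dst, direction_labels, distances):
        # src, dst: (num_pairs, hidden_dim) node embeddings
        # direction_labels: (num_pairs,) integer direction bins
        # distances: (num_pairs,) ground-truth pairwise distances (>= 0)
        pair = torch.cat([src, dst], dim=-1)

        # Direction: multi-class classification over discretized angles.
        dir_loss = F.cross_entropy(
            self.direction_head(pair), direction_labels, reduction="none"
        )

        # Distance: regress a log-scaled distance to compress its range.
        pred_dist = self.distance_head(pair).squeeze(-1)
        dist_loss = F.smooth_l1_loss(
            pred_dist, torch.log1p(distances), reduction="none"
        )

        # Down-weight distant pairs so neighborhood restoration dominates
        # (assumed inverse-distance weighting).
        weights = 1.0 / (1.0 + distances)
        return (weights * (dir_loss + dist_loss)).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    loss_fn = JointLinkPredictionLoss(hidden_dim=256, num_directions=8)
    src, dst = torch.randn(32, 256), torch.randn(32, 256)
    labels = torch.randint(0, 8, (32,))
    dists = torch.rand(32) * 100.0
    print(loss_fn(src, dst, labels, dists).item())
```

In this reading, the weighting term is what makes the objective "prioritize neighborhood restoration": gradient contributions from far-apart node pairs shrink, so the reconstructed graph is most faithful in each node's local neighborhood.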

Citation (APA)

Wang, D., Ma, Z., Nourbakhsh, A., Gu, K., & Shah, S. (2023). DocGraphLM: Documental Graph Language Model for Information Extraction. In SIGIR 2023 - Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 1944–1948). Association for Computing Machinery, Inc. https://doi.org/10.1145/3539618.3591975
