Layout-Aware Information Extraction for Document-Grounded Dialogue: Dataset, Method and Demonstration

Citations: 7
Readers (Mendeley): 15

Abstract

Building document-grounded dialogue systems has received growing interest, as documents convey a wealth of human knowledge and are common in enterprises. Within such systems, how to comprehend and retrieve information from documents is a challenging research problem. Previous work ignores the visual properties of documents and treats them as plain text, resulting in incomplete modality information. In this paper, we propose a Layout-Aware document-level Information Extraction dataset, LIE, to facilitate the study of extracting both structural and semantic knowledge from visually rich documents (VRDs), so as to generate accurate responses in dialogue systems. LIE contains 62k annotations across three extraction tasks from 4,061 pages of product and official documents, making it, to the best of our knowledge, the largest VRD-based information extraction dataset. We also develop benchmark methods that extend token-based language models to incorporate layout features, much as humans do. Empirical results show that layout is critical for VRD-based extraction, and a system demonstration verifies that the extracted knowledge helps locate the answers that users care about.
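The benchmark idea in the abstract, extending a token-based language model with layout features, can be illustrated with a LayoutLM-style embedding layer. The sketch below is one common way to realize such a design, not the paper's actual implementation; the class name, dimensions, and coordinate scheme are illustrative assumptions.

import torch
import torch.nn as nn

class LayoutAwareEmbedding(nn.Module):
    """Token embeddings augmented with 2D layout (bounding-box) embeddings.

    A minimal sketch of the layout-aware idea, assuming a LayoutLM-style
    design: each token carries the normalized (x0, y0, x1, y1) coordinates
    of its bounding box on the page, and learned coordinate embeddings are
    summed with the word embedding. All names and sizes are hypothetical.
    """

    def __init__(self, vocab_size=30522, hidden=768, max_coord=1024):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, hidden)
        # Separate embedding tables for x- and y-coordinates,
        # shared across the four corners of the bounding box.
        self.x_emb = nn.Embedding(max_coord, hidden)
        self.y_emb = nn.Embedding(max_coord, hidden)

    def forward(self, token_ids, bboxes):
        # token_ids: (batch, seq_len)
        # bboxes:    (batch, seq_len, 4) integer coords in [0, max_coord)
        w = self.word_emb(token_ids)
        layout = (
            self.x_emb(bboxes[..., 0]) + self.y_emb(bboxes[..., 1])
            + self.x_emb(bboxes[..., 2]) + self.y_emb(bboxes[..., 3])
        )
        return w + layout  # fed into a standard Transformer encoder

Because the layout signal enters only through the embedding sum, the same Transformer encoder and sequence-labeling heads used for plain-text extraction can be reused unchanged on top of it.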

Citation (APA)

Zhang, Z., Yu, B., Yu, H., Liu, T., Fu, C., Li, J., … Li, Y. (2022). Layout-Aware information extraction for document-grounded dialogue: Dataset, method and demonstration. In MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia (pp. 7252–7260). Association for Computing Machinery, Inc. https://doi.org/10.1145/3503161.3548765
