XFUND: A Benchmark Dataset for Multilingual Visually Rich Form Understanding

Abstract

Multimodal pre-training with text, layout, and image has recently achieved state-of-the-art (SOTA) performance on visually rich document understanding tasks, demonstrating the great potential of joint learning across modalities. However, existing research has focused only on English documents and neglected multilingual generalization. In this paper, we introduce XFUND, a human-annotated multilingual form understanding benchmark dataset that includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese). We also present LayoutXLM, a multimodal pre-trained model for multilingual document understanding that aims to bridge language barriers in visually rich document understanding. Experimental results show that LayoutXLM significantly outperforms existing SOTA cross-lingual pre-trained models on the XFUND dataset. The XFUND dataset and the pre-trained LayoutXLM models are publicly available at https://aka.ms/layoutxlm.
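
For readers who want to try the released model, below is a minimal sketch of loading LayoutXLM for token classification on an XFUND-style form. It assumes the Hugging Face Transformers port of the checkpoint (model id `microsoft/layoutxlm-base`) and a Tesseract OCR backend; the paper itself only points to https://aka.ms/layoutxlm, so the model id, processor, image path, and label set here are illustrative, not prescribed by the authors.

```python
# Sketch only: assumes the Hugging Face Transformers port of LayoutXLM
# (model id "microsoft/layoutxlm-base") and pytesseract installed for OCR.
from PIL import Image
from transformers import LayoutXLMProcessor, LayoutLMv2ForTokenClassification

# XFUND-style BIO labels over semantic entities (header/question/answer/other).
labels = ["O", "B-HEADER", "I-HEADER", "B-QUESTION", "I-QUESTION", "B-ANSWER", "I-ANSWER"]

# LayoutXLM reuses the LayoutLMv2 architecture, so the LayoutLMv2 model class
# is paired with the LayoutXLM processor/tokenizer.
processor = LayoutXLMProcessor.from_pretrained("microsoft/layoutxlm-base")
model = LayoutLMv2ForTokenClassification.from_pretrained(
    "microsoft/layoutxlm-base", num_labels=len(labels)
)

# "form.png" is a placeholder scanned form; the processor runs OCR by default
# to obtain words and bounding boxes, then builds text + layout + image inputs.
image = Image.open("form.png").convert("RGB")
encoding = processor(image, return_tensors="pt", truncation=True)

outputs = model(**encoding)
predictions = outputs.logits.argmax(-1)  # per-token label ids (head is untrained here)
```

In practice the classification head would first be fine-tuned on XFUND annotations for the target language before the predictions are meaningful; the snippet only demonstrates how the text, layout, and image modalities are fed to the model together.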

Citation (APA)

Xu, Y., Lv, T., Cui, L., Wang, G., Lu, Y., Florencio, D., … Wei, F. (2022). XFUND: A Benchmark Dataset for Multilingual Visually Rich Form Understanding. In Findings of the Association for Computational Linguistics: ACL 2022 (pp. 3214–3224). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-acl.253
