LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding
Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei
Code
- github.com/microsoft/unilm/tree/master/layoutxlm (Official, PyTorch) — ★ 0
- github.com/huggingface/transformers (PyTorch) — ★ 158,292
- github.com/PaddlePaddle/PaddleOCR (Paddle) — ★ 72,845
- github.com/facebookresearch/data2vec_vision (PyTorch) — ★ 80
- github.com/2024-MindSpore-1/Code3/tree/main/VI-LayoutXLM (MindSpore) — ★ 0
- github.com/PaddlePaddle/PaddleNLP/tree/develop/paddlenlp/transformers/layoutxlm (Paddle) — ★ 0
Abstract
Multimodal pre-training with text, layout, and image has recently achieved SOTA performance on visually-rich document understanding tasks, demonstrating the great potential of joint learning across different modalities. In this paper, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually-rich document understanding. To accurately evaluate LayoutXLM, we also introduce a multilingual form understanding benchmark dataset named XFUND, which includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese), with manually labeled key-value pairs for each language. Experimental results show that LayoutXLM significantly outperforms the existing SOTA cross-lingual pre-trained models on the XFUND dataset. The pre-trained LayoutXLM model and the XFUND dataset are publicly available at https://aka.ms/layoutxlm.
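For readers who want to try the released checkpoint, below is a minimal sketch using the Hugging Face `transformers` integration listed above. LayoutXLM shares the LayoutLMv2 architecture, so the LayoutLMv2 model classes are used together with `LayoutXLMProcessor`. The image path and the label count are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch: load the public LayoutXLM checkpoint for token classification
# (e.g. key-value extraction on XFUND-style forms).
# Note: the LayoutLMv2/LayoutXLM visual backbone requires detectron2 to be installed.
from PIL import Image
from transformers import LayoutXLMProcessor, LayoutLMv2ForTokenClassification

processor = LayoutXLMProcessor.from_pretrained("microsoft/layoutxlm-base")
model = LayoutLMv2ForTokenClassification.from_pretrained(
    "microsoft/layoutxlm-base",
    num_labels=7,  # hypothetical label count, e.g. BIO tags for key-value pairs
)

image = Image.open("form.png").convert("RGB")  # placeholder document image

# The processor runs OCR on the image (Tesseract by default), tokenizes the
# recognized words, and normalizes their bounding boxes into the 0-1000
# layout coordinate space the model expects.
encoding = processor(image, return_tensors="pt")

outputs = model(**encoding)
predictions = outputs.logits.argmax(-1)  # one predicted label id per token
```

Note that the `microsoft/layoutxlm-base` checkpoint ships without a fine-tuned classification head, so in practice the head would be trained on labeled data (such as XFUND) before the predictions are meaningful.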
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| RVL-CDIP | LayoutXLM | Accuracy | 95.21 | — | Unverified |