LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking
Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei
Code
- github.com/microsoft/unilm/tree/master/layoutlmv3 (official, PyTorch)
- github.com/huggingface/transformers (PyTorch)
- github.com/MindSpore-scientific-2/code-14/tree/main/layoutlmv3 (MindSpore)
- github.com/pwc-1/Paper-9/tree/main/layoutlmv3 (MindSpore)
Abstract
Self-supervised pre-training techniques have achieved remarkable progress in Document AI. Most multimodal pre-trained models use a masked language modeling objective to learn bidirectional representations on the text modality, but they differ in pre-training objectives for the image modality. This discrepancy adds difficulty to multimodal representation learning. In this paper, we propose LayoutLMv3 to pre-train multimodal Transformers for Document AI with unified text and image masking. Additionally, LayoutLMv3 is pre-trained with a word-patch alignment objective to learn cross-modal alignment by predicting whether the corresponding image patch of a text word is masked. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model for both text-centric and image-centric Document AI tasks. Experimental results show that LayoutLMv3 achieves state-of-the-art performance not only in text-centric tasks, including form understanding, receipt understanding, and document visual question answering, but also in image-centric tasks such as document image classification and document layout analysis. The code and models are publicly available at https://aka.ms/layoutlmv3.
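The word-patch alignment (WPA) objective described above can be sketched as a target-construction step: for each text word, the label records whether the image patch covering that word was masked. The function below is a minimal illustration under assumed geometry (a 224×224 page image split into 16×16 patches, words located by their bounding-box centers); the function name and signature are hypothetical, not the paper's implementation.

```python
def wpa_targets(token_boxes, masked_patches, image_size=224, patch_size=16):
    """Sketch of WPA target construction (illustrative, not the paper's code).

    token_boxes    : list of (x0, y0, x1, y1) word boxes in pixel coordinates.
    masked_patches : set of row-major indices of image patches that were masked.
    Returns one binary label per word: 1 if the patch under the word's center
    was masked ("unaligned"), 0 otherwise ("aligned").
    """
    patches_per_row = image_size // patch_size
    labels = []
    for x0, y0, x1, y1 in token_boxes:
        # Locate the patch containing the word's center point.
        cx = min((x0 + x1) // 2 // patch_size, patches_per_row - 1)
        cy = min((y0 + y1) // 2 // patch_size, patches_per_row - 1)
        idx = cy * patches_per_row + cx
        labels.append(1 if idx in masked_patches else 0)
    return labels
```

A model pre-trained with this objective would classify each word token against these labels, encouraging it to learn which image regions correspond to which words.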