
PP-DocBee2: Improved Baselines with Efficient Data for Multimodal Document Understanding

2025-06-22

Kui Huang, Xinrong Chen, Wenyu Lv, Jincheng Liao, Guanzhong Wang, Yi Liu


Abstract

This report introduces PP-DocBee2, an advanced version of PP-DocBee designed to enhance multimodal document understanding. Built on a large multimodal model architecture, PP-DocBee2 addresses the limitations of its predecessor through key technological improvements, including enhanced synthetic data quality, an improved visual feature fusion strategy, and optimized inference methodologies. These enhancements yield an 11.4% performance boost on internal benchmarks for Chinese business documents and reduce inference latency by 73.0% relative to the vanilla version. A key innovation of our work is a data quality optimization strategy for multimodal document tasks. By employing a large-scale multimodal pre-trained model to evaluate data, we apply a novel statistical criterion to filter outliers, ensuring high-quality training data. Inspired by insights into underutilized intermediate features in multimodal models, we enhance the ViT's representational capacity by decomposing it into layers and applying a novel feature fusion strategy to improve complex reasoning. The source code and pre-trained model are available at https://github.com/PaddlePaddle/PaddleMIX.
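The abstract's data-quality pipeline (score each training sample with a pre-trained model, then filter statistical outliers) can be sketched as follows. The paper's exact statistical criterion is not given here, so this minimal sketch assumes model-assigned quality scores and uses a simple z-score cutoff as a stand-in; `filter_outliers` and its threshold are illustrative names, not the authors' API.

```python
def filter_outliers(samples, scores, z_thresh=2.0):
    """Keep samples whose quality score lies within z_thresh standard
    deviations of the mean score.

    Hypothetical sketch: `scores` stands in for quality ratings produced
    by a large multimodal pre-trained model; the z-score rule is an
    assumed criterion, not the one from the paper.
    """
    n = len(scores)
    mean = sum(scores) / n
    std = (sum((s - mean) ** 2 for s in scores) / n) ** 0.5
    if std == 0:
        # All scores identical: nothing is an outlier.
        return list(samples)
    return [x for x, s in zip(samples, scores)
            if abs(s - mean) / std <= z_thresh]


# Usage: nine well-rated samples and one poorly rated outlier.
samples = list(range(10))
scores = [0.9] * 9 + [0.0]
kept = filter_outliers(samples, scores)
# The low-scoring sample is dropped; the rest are retained.
```

In practice the cutoff would be tuned on a held-out set, since an overly aggressive threshold discards rare but valid document layouts along with genuinely noisy annotations.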
