SOTAVerified

Rethinking the Evaluation of Pre-trained Text-and-Layout Models from an Entity-Centric Perspective

2024-02-04 · Code Available

Chong Zhang, Yixi Zhao, Chenshu Yuan, Yi Tu, Ya Guo, Qi Zhang

Abstract

Recently developed pre-trained text-and-layout models (PTLMs) have shown remarkable success in multiple information extraction tasks on visually-rich documents. However, the prevailing evaluation pipeline may not be sufficiently robust for assessing the information extraction ability of PTLMs, due to inadequate annotations within the benchmarks. Therefore, we outline the necessary standards for an ideal benchmark to evaluate the information extraction ability of PTLMs. We then introduce EC-FUNSD, an entity-centric benchmark designed for the evaluation of semantic entity recognition and entity linking on visually-rich documents. This dataset contains diverse formats of document layouts and annotations of semantics-driven entities and their relations. Moreover, this dataset disentangles the falsely coupled annotation of segment and entity that arises from the block-level annotation of FUNSD. Experimental results demonstrate that state-of-the-art PTLMs exhibit overfitting tendencies on the prevailing benchmarks, as their performance sharply decreases when the dataset bias is removed.

Benchmark Results

| Dataset  | Model       | Metric | Claimed | Verified | Status     |
|----------|-------------|--------|---------|----------|------------|
| EC-FUNSD | GeoLayoutLM | F1     | 83.62   | —        | Unverified |
| EC-FUNSD | GeoLayoutLM | F1     | 86.18   | —        | Unverified |