Texts or Images? A Fine-grained Analysis on the Effectiveness of Input Representations and Models for Table Question Answering

2025-05-20 · Code Available

Wei Zhou, Mohsen Mesgar, Heike Adel, Annemarie Friedrich

Abstract

In table question answering (TQA), tables are encoded as either texts or images. Prior work suggests that passing images of tables to multi-modal large language models (MLLMs) performs comparably to or even better than using textual input with large language models (LLMs). However, the lack of controlled setups limits fine-grained distinctions between these approaches. In this paper, we conduct the first controlled study on the effectiveness of several combinations of table representations and models from two perspectives: question complexity and table size. We build a new benchmark based on existing TQA datasets. In a systematic analysis of seven pairs of MLLMs and LLMs, we find that the best combination of table representation and model varies across setups. We propose FRES, a method selecting table representations dynamically, and observe a 10% average performance improvement compared to using both representations indiscriminately.
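The core idea of dynamic representation selection can be illustrated with a minimal sketch. The function name, the size-based rule, and the threshold below are illustrative assumptions for exposition, not the paper's actual FRES method, which the abstract does not specify in detail.

```python
# Hypothetical sketch of choosing between textual and image table input,
# in the spirit of dynamic representation selection. The heuristic and
# threshold here are assumptions, not the paper's FRES implementation.

def select_representation(n_rows: int, n_cols: int,
                          size_threshold: int = 200) -> str:
    """Choose 'text' or 'image' input for a TQA example.

    Illustrative heuristic: very large tables are costly to serialize
    as text, so render them as an image for an MLLM; smaller tables
    stay textual for an LLM.
    """
    if n_rows * n_cols > size_threshold:
        return "image"
    return "text"
```

A selector like this would route each (question, table) pair to either an LLM with serialized text or an MLLM with a rendered table image, matching the setups compared in the paper.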
