
Table Transformers for Imputing Textual Attributes

2024-08-04

Ting-Ruen Wei, YuAn Wang, Yoshitaka Inoue, Hsin-Tai Wu, Yi Fang


Abstract

Missing data in tabular datasets is a common issue, as the performance of downstream tasks usually depends on the completeness of the training data. Previous missing-data imputation methods focus on numeric and categorical columns; we propose a novel end-to-end approach, Table Transformers for Imputing Textual Attributes (TTITA), which uses a transformer to impute unstructured textual columns from the other columns in the table. We conduct extensive experiments on three datasets, and our approach shows competitive performance, outperforming baseline models such as recurrent neural networks and Llama2. The performance improvement is more significant when the target sequence is longer. Additionally, we incorporate multi-task learning to impute heterogeneous columns simultaneously, which further boosts the performance of text imputation. We also qualitatively compare TTITA with ChatGPT in realistic applications.
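The approach described in the abstract can be pictured as a conditional text decoder: the table's non-textual columns are encoded into a context representation, and a transformer decoder attends to that context while generating the missing text token by token. The following is a minimal PyTorch sketch of that idea under our own assumptions; the class and parameter names (`TextImputer`, `numeric_proj`, etc.) are illustrative and not taken from the paper's actual implementation.

```python
# Sketch of a TTITA-style textual imputer (assumed architecture, not the
# authors' code). Numeric and categorical columns are fused into a single
# context vector that conditions a transformer decoder over text tokens.
import torch
import torch.nn as nn

class TextImputer(nn.Module):
    def __init__(self, vocab_size, num_numeric, cat_cardinalities,
                 d_model=128, nhead=4, num_layers=2, max_len=64):
        super().__init__()
        # Numeric columns -> linear projection; categorical columns -> embeddings.
        self.numeric_proj = nn.Linear(num_numeric, d_model)
        self.cat_embs = nn.ModuleList(
            [nn.Embedding(card, d_model) for card in cat_cardinalities])
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, numeric, categorical, tgt_tokens):
        # numeric: (B, num_numeric); categorical: (B, num_cat)
        # tgt_tokens: (B, T) shifted-right tokens of the target text column
        ctx = self.numeric_proj(numeric)
        for i, emb in enumerate(self.cat_embs):
            ctx = ctx + emb(categorical[:, i])
        memory = ctx.unsqueeze(1)                       # (B, 1, d_model) context
        pos = torch.arange(tgt_tokens.size(1), device=tgt_tokens.device)
        tgt = self.tok_emb(tgt_tokens) + self.pos_emb(pos)
        mask = nn.Transformer.generate_square_subsequent_mask(
            tgt_tokens.size(1)).to(tgt_tokens.device)
        out = self.decoder(tgt, memory, tgt_mask=mask)  # causal decoding
        return self.lm_head(out)                        # (B, T, vocab_size) logits
```

Training such a model would use a standard cross-entropy loss over the vocabulary; the multi-task variant mentioned in the abstract could be approximated by adding regression or classification heads on the shared context vector for other incomplete columns.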
