Are We Really Making Much Progress in Text Classification? A Comparative Review

2022-04-08

Lukas Galke, Ansgar Scherp, Andor Diera, Fabian Karl, Bao Xin Lin, Bhakti Khera, Tim Meuser, Tushar Singhal


Abstract

We analyze various methods for single-label and multi-label text classification across well-known datasets, categorizing them into bag-of-words, sequence-based, graph-based, and hierarchical approaches. Despite the surge in methods such as graph-based models, encoder-only pre-trained language models, notably BERT, remain state-of-the-art. Moreover, recent findings suggest that simple baselines such as logistic regression and trigram-based SVMs outperform many newer techniques. While decoder-only generative language models show promise for learning with limited data, they lag behind encoder-only models in performance. We emphasize the superiority of discriminative language models like BERT over generative models for supervised tasks. Additionally, we highlight the literature's lack of robustness in method comparisons, particularly concerning basic hyperparameter optimizations, such as the learning rate used when fine-tuning encoder-only language models. Data availability: The source code is available at https://github.com/drndr/multilabel-text-clf. All datasets used for our experiments are publicly available except the NYT dataset.
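The trigram-based SVM baseline mentioned in the abstract can be sketched in a few lines with scikit-learn: TF-IDF features over word uni- to trigrams feeding a linear SVM. The toy texts and labels below are illustrative only; the paper's experiments use standard benchmark datasets, and the exact feature settings here are assumptions, not the authors' configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Illustrative toy corpus; the paper evaluates on standard benchmarks instead.
texts = [
    "the market rallied today",
    "the team won the final",
    "stocks fell sharply after the report",
    "a thrilling championship game last night",
]
labels = ["business", "sports", "business", "sports"]

# TF-IDF over word uni-, bi-, and trigrams, followed by a linear SVM --
# the kind of simple bag-of-words baseline the review finds competitive.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 3)), LinearSVC())
clf.fit(texts, labels)

print(clf.predict(["stocks fell sharply"]))
```

Despite its simplicity, a pipeline like this is cheap to tune and serves as a strong reference point when comparing against sequence- or graph-based models.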
