SOTAVerified

SubRegWeigh: Effective and Efficient Annotation Weighing with Subword Regularization

2024-09-10 · Code Available

Kohei Tsuji, Tatsuya Hiraoka, Yuchang Cheng, Tomoya Iwakura


Abstract

NLP datasets may still contain annotation errors, even when they are manually annotated. Researchers have attempted to develop methods to automatically reduce the adverse effect of errors in datasets. However, existing methods are time-consuming because they require many trained models to detect errors. This paper proposes a time-saving method that utilizes a tokenization technique called subword regularization to simulate multiple error detection models for detecting errors. Our proposed method, SubRegWeigh, can perform annotation weighting four to five times faster than the existing method. Additionally, SubRegWeigh improved performance in document classification and named entity recognition tasks. In experiments with pseudo-incorrect labels, SubRegWeigh clearly identifies pseudo-incorrect labels as annotation errors. Our code is available at https://github.com/4ldk/SubRegWeigh.
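The core idea of the abstract can be illustrated with a toy sketch: instead of training many models, sample several subword segmentations of the same input (subword regularization), run a single model on each, and weight an annotation by how often the predictions agree with its label. Everything below is hypothetical scaffolding, not the paper's implementation: the vocabulary, the `toy_model` stand-in, and the agreement-based weight are illustrative assumptions (real systems would use a trained tokenizer such as SentencePiece with sampling enabled, plus a trained classifier).

```python
import random

# Toy subword vocabulary (assumption for illustration; a real setup would use
# a trained BPE/unigram model, e.g. SentencePiece with enable_sampling=True).
VOCAB = {"un", "happy", "unhap", "py", "u", "n", "h", "a", "p", "y"}


def sample_segmentation(word, vocab, rng):
    """Sample one subword segmentation by randomly picking a matching
    vocabulary prefix at each position (single characters as fallback)."""
    pieces, i = [], 0
    while i < len(word):
        candidates = [word[i:j] for j in range(i + 1, len(word) + 1)
                      if word[i:j] in vocab]
        piece = rng.choice(candidates) if candidates else word[i]
        pieces.append(piece)
        i += len(piece)
    return tuple(pieces)


def toy_model(segmentation):
    """Hypothetical stand-in for a trained classifier: its prediction depends
    on the tokenization, which is what subword regularization exploits."""
    return 1 if "happy" in segmentation else 0


def annotation_weight(word, gold_label, vocab, k=50, seed=0):
    """Weight an annotation by the fraction of k sampled tokenizations whose
    prediction agrees with the gold label (agreement-based variant; the paper
    also clusters model outputs with K-means)."""
    rng = random.Random(seed)
    samples = [sample_segmentation(word, vocab, rng) for _ in range(k)]
    agree = sum(toy_model(s) == gold_label for s in samples)
    return agree / k, samples
```

A low weight flags a likely annotation error: predictions across many simulated "models" (tokenizations) consistently disagree with the label, so the example is down-weighted during training rather than trusted fully.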

Benchmark Results

Dataset               Model                            Metric  Claimed  Verified  Status
CoNLL++               LUKE + SubRegWeigh (K-means)     F1      96.12    —         Unverified
CoNLL++               RoBERTa + SubRegWeigh (K-means)  F1      95.45    —         Unverified
CoNLL 2003 (English)  RoBERTa + SubRegWeigh (K-means)  F1      93.81    —         Unverified
CoNLL 2003 (English)  LUKE + SubRegWeigh (K-means)     F1      94.20    —         Unverified
CoNLL-2020            RoBERTa + SubRegWeigh (K-means)  F1      94.96    —         Unverified
CoNLL-2020            LUKE + SubRegWeigh (K-means)     F1      95.31    —         Unverified
WNUT 2017             RoBERTa + SubRegWeigh (K-means)  F1      60.29    —         Unverified
