Features or Spurious Artifacts? Data-centric Baselines for Fair and Robust Hate Speech Detection

2022-07-01 · NAACL 2022 · Code Available

Alan Ramponi, Sara Tonelli


Abstract

Avoiding reliance on dataset artifacts to predict hate speech is a cornerstone of robust and fair hate speech detection. In this paper, we critically analyze lexical biases in hate speech detection via a cross-platform study, disentangling various types of spurious and authentic artifacts and analyzing their impact on out-of-distribution fairness and robustness. We experiment with existing approaches and propose simple yet surprisingly effective data-centric baselines. Our results on English data across four platforms show that distinct spurious artifacts require different treatments to ultimately attain both robustness and fairness in hate speech detection. To encourage research in this direction, we release all baseline models and the code to compute artifacts, highlighting it as a complementary and necessary addition to the data statements practice.
