SOTAVerified

Bias Beyond English: Evaluating Social Bias and Debiasing Methods in a Low-Resource Setting

2025-04-15

Ej Zhou, Weiming Lu


Abstract

Social bias in language models can exacerbate social inequalities. Although the problem has garnered wide attention, most research focuses on English data. In low-resource scenarios, models often perform worse due to insufficient training data. This study leverages high-resource language corpora to evaluate bias and experiment with debiasing methods in low-resource languages. We evaluate the performance of recent multilingual models in five languages: English (eng), Chinese (zho), Russian (rus), Indonesian (ind), and Thai (tha), and analyze four bias dimensions: gender, religion, nationality, and race-color. By constructing multilingual bias evaluation datasets, this study enables fair comparisons between models across languages. We further investigate three debiasing methods (CDA, Dropout, and SenDeb) and demonstrate that debiasing methods developed for high-resource languages can be effectively transferred to low-resource ones, providing actionable insights for fairness research in multilingual NLP.
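The abstract names CDA (counterfactual data augmentation) among the transferred debiasing methods. A minimal sketch of the core idea, swapping demographic terms to produce counterfactual training sentences, might look like the following; the word-pair list and function names are illustrative assumptions, not taken from the paper:

```python
# Illustrative gendered word pairs; a real CDA setup would use a curated,
# language-specific list rather than this toy English example.
GENDER_PAIRS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "man": "woman", "woman": "man",
}

def counterfactual(sentence: str) -> str:
    """Swap gendered terms to produce a counterfactual copy of a sentence."""
    tokens = sentence.split()
    return " ".join(GENDER_PAIRS.get(t.lower(), t) for t in tokens)

def augment(corpus: list[str]) -> list[str]:
    """CDA: train on the original sentences plus their counterfactuals."""
    return corpus + [counterfactual(s) for s in corpus]
```

The augmented corpus doubles in size, balancing demographic associations so the model sees each context with both term variants.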
