SOTAVerified

Towards Detection of Subjective Bias using Contextualized Word Embeddings

2020-02-16

Tanvi Dadu, Kartikey Pant, Radhika Mamidi


Abstract

Subjective bias detection is critical for applications such as propaganda detection, content recommendation, sentiment analysis, and bias neutralization. This bias is introduced into natural language through inflammatory words and phrases, by casting doubt on facts, and by presupposing the truth. In this work, we perform comprehensive experiments for detecting subjective bias using BERT-based models on the Wiki Neutrality Corpus (WNC). The dataset consists of 360k labeled instances drawn from Wikipedia edits that remove various instances of the bias. We further propose BERT-based ensembles that outperform state-of-the-art methods such as BERT_large by a margin of 5.6 F1 score.
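The abstract's BERT-based ensembles can be illustrated with a minimal sketch of probability-averaging over two fine-tuned classifiers. This is an illustrative assumption, not the paper's exact method: the model names, logit values, and the averaging scheme are hypothetical stand-ins for combining per-model predictions on the binary neutral-vs-biased task.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logits_per_model):
    """Average class probabilities across models and take the argmax.

    `logits_per_model` is a list of (n_examples, n_classes) arrays,
    one per fine-tuned model (e.g. a RoBERTa and an ALBERT classifier).
    """
    probs = np.mean([softmax(l) for l in logits_per_model], axis=0)
    return probs.argmax(axis=-1)

# Toy logits for 2 sentences, 2 classes (0 = neutral, 1 = biased):
roberta_logits = np.array([[2.0, -1.0], [-0.5, 1.5]])
albert_logits = np.array([[1.0, 0.0], [0.2, 0.8]])
preds = ensemble_predict([roberta_logits, albert_logits])
print(preds.tolist())  # [0, 1]
```

Averaging calibrated probabilities (rather than majority-voting hard labels) lets a confident model outweigh an uncertain one, which is a common reason such ensembles edge out a single large model.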

Benchmark Results

| Dataset                | Model          | Metric | Claimed | Verified | Status     |
|------------------------|----------------|--------|---------|----------|------------|
| Wiki Neutrality Corpus | RoBERTa+ALBERT | F1     | 70.4    |          | Unverified |
