SOTAVerified

Hate Speech Detection

Hate speech detection is the task of determining whether a piece of communication (text, audio, and so on) expresses hatred towards, or encourages violence against, a person or a group of people. This is usually based on prejudice against 'protected characteristics' such as ethnicity, gender, sexual orientation, religion, or age. Example benchmarks include ETHOS and HateXplain. Models are typically evaluated with metrics such as the F-score (also called the F-measure).
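To make the metric concrete, the F-score for a binary hate/non-hate classifier can be computed in a few lines of plain Python. This is an illustrative sketch: the labels below are invented, not drawn from any benchmark on this page.

```python
def f1_score(y_true, y_pred, positive=1):
    """F-score: harmonic mean of precision and recall for the given positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Invented gold labels and predictions (1 = hate, 0 = not hate).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print(f1_score(y_true, y_pred))  # precision = 3/4, recall = 3/4 -> F1 = 0.75

# Macro F1, reported in several tables below, averages the per-class F-scores.
macro = (f1_score(y_true, y_pred, positive=1) + f1_score(y_true, y_pred, positive=0)) / 2
print(macro)  # 0.75
```

Macro averaging weights each class equally, which matters for hate speech datasets where the hateful class is usually the minority.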

Papers

Showing 276–300 of 507 papers

Title | Status | Hype
TweetBLM: A Hate Speech Dataset and Analysis of Black Lives Matter-related Microblogs on Twitter |  | 0
Tw-StAR at SemEval-2019 Task 5: N-gram embeddings for Hate Speech Detection in Multilingual Tweets |  | 0
UA at SemEval-2019 Task 5: Setting A Strong Linear Baseline for Hate Speech Detection |  | 0
UMUTextStats: A linguistic feature extraction tool for Spanish |  | 0
Understanding and Interpreting the Impact of User Context in Hate Speech Detection |  | 0
Unraveling Social Perceptions & Behaviors towards Migrants on Twitter |  | 0
Unsupervised Domain Adaptation for Hate Speech Detection Using a Data Augmentation Approach |  | 0
Unsupervised Embeddings with Graph Auto-Encoders for Multi-domain and Multilingual Hate Speech Detection |  | 0
Untangling Hate Speech Definitions: A Semantic Componential Analysis Across Cultures and Domains |  | 0
Unveiling Social Media Comments with a Novel Named Entity Recognition System for Identity Groups |  | 0
VAIS Hate Speech Detection System: A Deep Learning based Approach for System Combination |  | 0
Vista.ue at SemEval-2019 Task 5: Single Multilingual Hate Speech Detection Model |  | 0
Voice for the Voiceless: Active Sampling to Detect Comments Supporting the Rohingyas |  | 0
Watching the Watchers: A Comparative Fairness Audit of Cloud-based Content Moderation Services |  | 0
What is the social benefit of hate speech detection research? A Systematic Review |  | 0
When More is not Necessary Better: Multilingual Auxiliary Tasks for Zero-Shot Cross-Lingual Transfer of Hate Speech Detection Models |  | 0
When the Majority is Wrong: Modeling Annotator Disagreement for Subjective Tasks |  | 0
Whose Emotions and Moral Sentiments Do Language Models Reflect? |  | 0
Why Swear? Analyzing and Inferring the Intentions of Vulgar Expressions |  | 0
YNU NLP at SemEval-2019 Task 5: Attention and Capsule Ensemble for Identifying Hate Speech |  | 0
You Are What You Tweet: Profiling Users by Past Tweets to Improve Hate Speech Detection |  | 0
Z-AGI Labs at ClimateActivism 2024: Stance and Hate Event Detection on Social Media |  | 0
Zero-shot Cross-lingual Content Filtering: Offensive Language and Hate Speech Detection |  | 0
Effect of Word Embedding Models on Hate and Offensive Speech Detection |  | 0
On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning |  | 0
Page 12 of 21

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | BiLSTM + static BE | F1-score | 0.8 |  | Unverified
2 | BERT | F1-score | 0.79 |  | Unverified
3 | BiLSTM+Attention+FT | F1-score | 0.77 |  | Unverified
4 | OPT-175B (few-shot) | F1-score | 0.76 |  | Unverified
5 | CNN+Attention+FT+GV | F1-score | 0.74 |  | Unverified
6 | OPT-175B (one-shot) | F1-score | 0.71 |  | Unverified
7 | OPT-175B (zero-shot) | F1-score | 0.67 |  | Unverified
8 | SVM | F1-score | 0.66 |  | Unverified
9 | Random Forests | F1-score | 0.64 |  | Unverified
10 | Davinci (zero-shot) | F1-score | 0.63 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT-MRP | AUROC | 0.86 |  | Unverified
2 | BERT-RP | AUROC | 0.85 |  | Unverified
3 | BERT-HateXplain [Attn] | AUROC | 0.85 |  | Unverified
4 | BERT-HateXplain [LIME] | AUROC | 0.85 |  | Unverified
5 | BERT [Attn] | AUROC | 0.84 |  | Unverified
6 | BiRNN-HateXplain [Attn] | AUROC | 0.81 |  | Unverified
7 | BiRNN-Attn [Attn] | AUROC | 0.8 |  | Unverified
8 | CNN-GRU [LIME] | AUROC | 0.79 |  | Unverified
9 | BiRNN [LIME] | AUROC | 0.77 |  | Unverified
10 | XG-HSI-BERT | Accuracy | 0.75 |  | Unverified
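AUROC, the metric used in the table above, is the probability that the model scores a randomly chosen positive (hateful) example higher than a randomly chosen negative one. A minimal pure-Python sketch, with labels and scores invented for illustration:

```python
def auroc(y_true, scores):
    """Area under the ROC curve via pairwise comparisons; ties count half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented classifier scores: two hateful (1) and two non-hateful (0) examples.
print(auroc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # 3 of 4 pairs ranked correctly -> 0.75
```

Unlike the F-score, AUROC is threshold-free: it evaluates the ranking induced by the scores rather than a fixed decision boundary.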

# | Model | Metric | Claimed | Verified | Status
1 | MLARAM | Hamming Loss | 0.29 |  | Unverified
2 | MLkNN | Hamming Loss | 0.16 |  | Unverified
3 | Binary Relevance | Hamming Loss | 0.14 |  | Unverified
4 | Neural Classifier Chains | Hamming Loss | 0.13 |  | Unverified
5 | Neural Binary Relevance | Hamming Loss | 0.11 |  | Unverified
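The table above reports Hamming Loss, a multi-label metric where lower is better: the fraction of individual label slots predicted incorrectly across all samples. A short sketch with invented labels (the three label slots are hypothetical, chosen only to illustrate the computation):

```python
def hamming_loss(y_true, y_pred):
    """Fraction of individual label predictions that are wrong (lower is better)."""
    errors = sum(t != p
                 for row_t, row_p in zip(y_true, y_pred)
                 for t, p in zip(row_t, row_p))
    total = sum(len(row) for row in y_true)
    return errors / total

# Two samples, three hypothetical target labels each (e.g. violence / gender / religion).
y_true = [[1, 0, 1], [0, 1, 0]]
y_pred = [[1, 1, 1], [0, 0, 0]]
print(hamming_loss(y_true, y_pred))  # 2 wrong slots out of 6 -> 0.333...
```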

# | Model | Metric | Claimed | Verified | Status
1 | Mozafari et al., 2019 | AAA | 50.94 |  | Unverified
2 | SVM | AAA | 46.51 |  | Unverified
3 | Kennedy et al., 2020 | AAA | 45.5 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HateBERT | Macro F1 | 0.74 |  | Unverified
2 | BERT | Macro F1 | 0.72 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | mBert | Accuracy | 0.83 |  | Unverified
2 | Logistic Regression | Accuracy | 0.7 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HXP + CLAP + CLIP | TEST F1 (macro) | 0.85 |  | Unverified
2 | BERT + ViT + MFCC | TEST F1 (macro) | 0.79 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HateBERT | Macro F1 | 0.49 |  | Unverified
2 | BERT | Macro F1 | 0.48 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | HateBERT | Macro F1 | 0.81 |  | Unverified
2 | BERT | Macro F1 | 0.8 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Multilingual BERT | F1-score | 0.75 |  | Unverified
2 | AutoML | F1-score | 0.74 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | AOM mBERT | F1 | 0.85 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Baseline | F1 | 0.7 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-large-ST | Macro F1 | 80.7 |  | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Baseline BERT (task A) | F1 | 0.77 |  | Unverified