SOTAVerified

Sarcasm Detection

The goal of Sarcasm Detection is to determine whether a sentence is sarcastic or non-sarcastic. Sarcasm is a linguistic phenomenon with specific perlocutionary effects on the hearer, such as breaking their pattern of expectation. Consequently, correctly understanding sarcasm often requires a deep understanding of multiple sources of information, including the utterance, the conversational context, and, frequently, some real-world facts.

Source: Attentional Multi-Reading Sarcasm Detection
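To make the task framing concrete, here is a minimal, toy sketch of one of the simplest baselines listed in the results below, a bag-of-bigrams classifier. This is not the method of any cited paper; the class name, scoring rule, and training examples are all invented for illustration:

```python
from collections import Counter

def bigrams(text):
    """Lowercase word bigrams of a sentence."""
    words = text.lower().split()
    return [" ".join(pair) for pair in zip(words, words[1:])]

class BigramSarcasmClassifier:
    """Toy bag-of-bigrams classifier (illustrative only): score a sentence
    by how many of its bigrams were seen in sarcastic vs. literal training
    examples, and predict sarcastic when the sarcastic count wins."""

    def __init__(self):
        self.sarcastic = Counter()
        self.literal = Counter()

    def fit(self, examples):
        for text, is_sarcastic in examples:
            target = self.sarcastic if is_sarcastic else self.literal
            target.update(bigrams(text))

    def predict(self, text):
        score = sum(self.sarcastic[b] - self.literal[b] for b in bigrams(text))
        return score > 0

# Invented toy data: (sentence, is_sarcastic)
train = [
    ("oh great another monday", True),
    ("great job everyone", False),
    ("oh great more rain", True),
    ("the weather is nice today", False),
]
clf = BigramSarcasmClassifier()
clf.fit(train)
print(clf.predict("oh great another meeting"))  # True
```

Real systems on this leaderboard replace the bigram counts with contextual encoders (BERT, RoBERTa, CLIP), but the task interface is the same: text in, binary sarcastic/non-sarcastic label out.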

Papers

Showing 31–40 of 266 papers

Title | Status | Hype
InterCLIP-MEP: Interactive CLIP and Memory-Enhanced Predictor for Multi-modal Sarcasm Detection | Code | 1
Impact of emoji exclusion on the performance of Arabic sarcasm detection models | — | 0
CofiPara: A Coarse-to-fine Paradigm for Multimodal Sarcasm Target Identification with Large Multimodal Models | Code | 1
Generalizable Sarcasm Detection Is Just Around The Corner, Of Course! | Code | 0
On Prompt Sensitivity of ChatGPT in Affective Computing | — | 0
Mixture-of-Prompt-Experts for Multi-modal Semantic Understanding | — | 0
Multi-modal Semantic Understanding with Contrastive Cross-modal Feature Alignment | Code | 0
MIKO: Multimodal Intention Knowledge Distillation from Large Language Models for Social-Media Commonsense Discovery | — | 0
KoCoSa: Korean Context-aware Sarcasm Detection Dataset | Code | 0
InfFeed: Influence Functions as a Feedback to Improve the Performance of Subjective Tasks | — | 0
Page 4 of 27

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | PaLM 2 (few-shot, k=3, CoT) | Accuracy | 84.8 | — | Unverified
2 | PaLM 2 (few-shot, k=3, Direct) | Accuracy | 78.7 | — | Unverified
3 | PaLM 540B (few-shot, k=3) | Accuracy | 78.1 | — | Unverified
4 | BLOOM 176B (few-shot, k=3) | Accuracy | 72.47 | — | Unverified
5 | Bloomberg GPT (few-shot, k=3) | Accuracy | 69.66 | — | Unverified
6 | GPT-NeoX (few-shot, k=3) | Accuracy | 62.36 | — | Unverified
7 | Chinchilla-70B (few-shot, k=5) | Accuracy | 58.6 | — | Unverified
8 | Gopher-280B (few-shot, k=5) | Accuracy | 48.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BERT + Aspect-based approaches | F1 | 0.74 | — | Unverified
2 | RoBERTa_large (Separated Context-Response) | F1 | 0.72 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa_large (Context-Response) | F1 | 0.77 | — | Unverified
2 | BERT | F1 | 0.73 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CASCADE | Accuracy | 77 | — | Unverified
2 | Bag-of-Bigrams | Accuracy | 75.8 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Bag-of-Bigrams | Accuracy | 76.5 | — | Unverified
2 | CASCADE | Accuracy | 74 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa + Mutation Data Augmentation | F1-Score | 0.41 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MUStARD++ | Precision | 70.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Bag-of-Words | Avg F1 | 27 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BART | R1 | 36.88 | — | Unverified
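The tables above report Accuracy, Precision, and F1 on the positive (sarcastic) class. As a reminder of how those numbers are derived from raw predictions, here is a minimal sketch; the function names and the tiny gold/prediction lists are illustrative, not taken from any benchmark:

```python
def accuracy(gold, pred):
    """Fraction of examples where prediction matches the gold label."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def precision_recall_f1(gold, pred):
    """Precision, recall, and F1 for the positive (sarcastic) class."""
    tp = sum(1 for g, p in zip(gold, pred) if g and p)        # true positives
    fp = sum(1 for g, p in zip(gold, pred) if not g and p)    # false positives
    fn = sum(1 for g, p in zip(gold, pred) if g and not p)    # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Invented toy labels: True = sarcastic
gold = [True, True, False, False, True]
pred = [True, False, False, True, True]
print(accuracy(gold, pred))              # 0.6
print(precision_recall_f1(gold, pred))   # (0.666..., 0.666..., 0.666...)
```

Note that some rows report F1 on a 0–1 scale and others as a percentage; the "Avg F1" and "R1" (ROUGE-1) rows come from multi-class and generation-style variants of the task, respectively.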