
Sarcasm Detection

The goal of Sarcasm Detection is to determine whether a sentence is sarcastic or non-sarcastic. Sarcasm is a linguistic phenomenon with specific perlocutionary effects on the hearer, such as breaking their pattern of expectation. Correctly understanding sarcasm therefore often requires drawing on multiple sources of information: the utterance itself, the conversational context, and, frequently, real-world facts.

Source: Attentional Multi-Reading Sarcasm Detection
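To make the task concrete, one of the simplest baselines appearing in the benchmark tables below, Bag-of-Bigrams, can be sketched as a multinomial Naive Bayes classifier over word-bigram counts. This is a minimal illustration, not the implementation used in any of the listed papers; the toy sentences and labels are invented for demonstration.

```python
from collections import Counter, defaultdict
import math

def bigrams(text):
    """Lowercase whitespace tokenization, then adjacent word pairs."""
    toks = text.lower().split()
    return list(zip(toks, toks[1:]))

class BigramNaiveBayes:
    """Multinomial Naive Bayes over bigram counts with Laplace smoothing."""

    def fit(self, texts, labels):
        self.class_counts = Counter(labels)
        self.bigram_counts = defaultdict(Counter)
        self.vocab = set()
        for text, label in zip(texts, labels):
            for bg in bigrams(text):
                self.bigram_counts[label][bg] += 1
                self.vocab.add(bg)
        # Total bigram tokens per class, used as the smoothing denominator.
        self.total = {c: sum(self.bigram_counts[c].values()) for c in self.class_counts}
        self.n_docs = len(texts)
        return self

    def predict(self, text):
        v = len(self.vocab)
        best_label, best_logprob = None, -math.inf
        for c in self.class_counts:
            # Log prior + smoothed log likelihood of each bigram.
            lp = math.log(self.class_counts[c] / self.n_docs)
            for bg in bigrams(text):
                lp += math.log((self.bigram_counts[c][bg] + 1) / (self.total[c] + v))
            if lp > best_logprob:
                best_label, best_logprob = c, lp
        return best_label

# Toy training data (invented for illustration only).
texts = [
    "oh great another monday",
    "wow great job breaking it",
    "the meeting starts at noon",
    "please send the report today",
]
labels = ["sarcastic", "sarcastic", "literal", "literal"]
clf = BigramNaiveBayes().fit(texts, labels)
```

In practice the leaderboard baselines are trained on corpora such as the datasets these benchmark tables come from, not four toy sentences; the sketch only shows the shape of the bigram-count approach.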

Papers

Showing 11–20 of 266 papers

Title | Status | Hype
CofiPara: A Coarse-to-fine Paradigm for Multimodal Sarcasm Target Identification with Large Multimodal Models | Code | 1
Multi-Modal Sarcasm Detection via Cross-Modal Graph Convolutional Network | Code | 1
Overview of the WANLP 2021 Shared Task on Sarcasm and Sentiment Detection in Arabic | Code | 1
Perceived and Intended Sarcasm Detection with Graph Attention Networks | Code | 1
A Multimodal Corpus for Emotion Recognition in Sarcasm | Code | 1
“Did you really mean what you said?”: Sarcasm Detection in Hindi-English Code-Mixed Data using Bilingual Word Embeddings | Code | 1
Affective and Contextual Embedding for Sarcasm Detection | Code | 1
DIP: Dual Incongruity Perceiving Network for Sarcasm Detection | Code | 1
MMoE: Enhancing Multimodal Models with Mixtures of Multimodal Interaction Experts | Code | 1
InterCLIP-MEP: Interactive CLIP and Memory-Enhanced Predictor for Multi-modal Sarcasm Detection | Code | 1
Page 2 of 27

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | PaLM 2 (few-shot, k=3, CoT) | Accuracy | 84.8 | - | Unverified
2 | PaLM 2 (few-shot, k=3, Direct) | Accuracy | 78.7 | - | Unverified
3 | PaLM 540B (few-shot, k=3) | Accuracy | 78.1 | - | Unverified
4 | BLOOM 176B (few-shot, k=3) | Accuracy | 72.47 | - | Unverified
5 | Bloomberg GPT (few-shot, k=3) | Accuracy | 69.66 | - | Unverified
6 | GPT-NeoX (few-shot, k=3) | Accuracy | 62.36 | - | Unverified
7 | Chinchilla-70B (few-shot, k=5) | Accuracy | 58.6 | - | Unverified
8 | Gopher-280B (few-shot, k=5) | Accuracy | 48.3 | - | Unverified
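The entries above are few-shot prompting setups: k labeled exemplars are placed in the prompt, optionally with chain-of-thought rationales (the "CoT" row) versus answering directly (the "Direct" row). A minimal sketch of how such a prompt might be assembled follows; the prompt wording, field names, and example sentences are assumptions for illustration, not taken from any of the listed papers.

```python
def build_few_shot_prompt(exemplars, query, k=3, cot=False):
    """Assemble a k-shot sarcasm-detection prompt.

    exemplars: list of (sentence, label, rationale) tuples; the rationale
    is included only when cot=True (chain-of-thought style prompting).
    """
    lines = ["Decide whether each sentence is sarcastic. Answer 'sarcastic' or 'not sarcastic'."]
    for sentence, label, rationale in exemplars[:k]:
        lines.append(f"Sentence: {sentence}")
        if cot:
            # CoT prompting: show a rationale before each exemplar's label.
            lines.append(f"Reasoning: {rationale}")
        lines.append(f"Label: {label}")
    # The query is left unanswered for the model to complete.
    lines.append(f"Sentence: {query}")
    lines.append("Label:")
    return "\n".join(lines)

# Toy exemplars (invented for illustration).
exemplars = [
    ("oh great, rain again", "sarcastic", "praising bad weather is insincere"),
    ("the bus arrives at nine", "not sarcastic", "plain statement of fact"),
    ("wow, what a surprise", "sarcastic", "feigned surprise at an expected event"),
    ("extra example", "not sarcastic", "unused beyond k=3"),
]
prompt = build_few_shot_prompt(exemplars, "sure, I love meetings", k=3, cot=True)
```

The assembled string would then be sent to the language model, whose completion after the final "Label:" is parsed as the prediction.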
# | Model | Metric | Claimed | Verified | Status
1 | BERT+Aspect-based approaches | F1 | 0.74 | - | Unverified
2 | RoBERTa_large (Separated Context-Response) | F1 | 0.72 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa_large (Context-Response) | F1 | 0.77 | - | Unverified
2 | BERT | F1 | 0.73 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CASCADE | Accuracy | 77 | - | Unverified
2 | Bag-of-Bigrams | Accuracy | 75.8 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Bag-of-Bigrams | Accuracy | 76.5 | - | Unverified
2 | CASCADE | Accuracy | 74 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa + Mutation Data Augmentation | F1-Score | 0.41 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | MUStARD++ | Precision | 70.2 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Bag-of-Words | Avg F1 | 27 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | BART | R1 | 36.88 | - | Unverified
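The tables report Accuracy, F1, and Precision for the Claimed column. As a reference for how these standard metrics are computed on binary sarcasm labels, a minimal sketch (label encoding is an assumption; the papers may report macro or averaged variants):

```python
def accuracy(y_true, y_pred):
    """Fraction of labels predicted correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive (sarcastic) class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy predictions: 1 = sarcastic, 0 = not sarcastic.
y_true = [1, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1]
```

For these toy labels, accuracy is 3/5 and F1 is 2/3 (precision 2/3, recall 2/3). "Avg F1" in the Bag-of-Words row typically means the F1 averaged over classes, and "R1" in the BART row is presumably ROUGE-1 for a generation-style evaluation.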