SOTAVerified

Sarcasm Detection

The goal of Sarcasm Detection is to determine whether a sentence is sarcastic or non-sarcastic. Sarcasm is a phenomenon with specific perlocutionary effects on the hearer, such as breaking their pattern of expectation. Consequently, correctly understanding sarcasm often requires drawing on multiple sources of information, including the utterance, the conversational context, and, frequently, some real-world facts.
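In its simplest form, the task is binary text classification, and n-gram baselines (such as the Bag-of-Bigrams entries in the benchmark results below) remain common reference points. A minimal sketch of such a baseline, here a Naive Bayes classifier over word bigrams; the training sentences and labels are invented for illustration only:

```python
import math
from collections import Counter

def bigrams(sentence):
    """Extract word-bigram features from a lowercased sentence."""
    words = sentence.lower().split()
    return [(a, b) for a, b in zip(words, words[1:])]

class BigramNaiveBayes:
    """Tiny multinomial Naive Bayes over bag-of-bigram features."""

    def fit(self, sentences, labels):
        self.counts = {0: Counter(), 1: Counter()}
        self.priors = Counter(labels)
        for sent, lab in zip(sentences, labels):
            self.counts[lab].update(bigrams(sent))
        self.vocab = set(self.counts[0]) | set(self.counts[1])
        return self

    def predict(self, sentence):
        scores = {}
        for lab in (0, 1):
            total = sum(self.counts[lab].values())
            score = math.log(self.priors[lab] / sum(self.priors.values()))
            for bg in bigrams(sentence):
                # Laplace smoothing over the shared bigram vocabulary
                score += math.log(
                    (self.counts[lab][bg] + 1) / (total + len(self.vocab) + 1)
                )
            scores[lab] = score
        return max(scores, key=scores.get)

# Invented toy data: 1 = sarcastic, 0 = non-sarcastic
train = [
    ("oh great another monday morning", 1),
    ("yeah right that will totally work", 1),
    ("the meeting starts at nine", 0),
    ("please send the report today", 0),
]
clf = BigramNaiveBayes().fit([s for s, _ in train], [l for _, l in train])
print(clf.predict("oh great another meeting"))  # prints 1
```

Such surface-level baselines miss sarcasm that hinges on context or world knowledge, which is what motivates the contextual and multimodal models listed in the results.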

Source: Attentional Multi-Reading Sarcasm Detection

Papers

Showing 221–230 of 266 papers

| Title | Status | Hype |
| --- | --- | --- |
| Bi-ISCA: Bidirectional Inter-Sentence Contextual Attention Mechanism for Detecting Sarcasm in User Generated Noisy Short Text | | 0 |
| BloombergGPT: A Large Language Model for Finance | | 0 |
| BNS-Net: A Dual-channel Sarcasm Detection Method Considering Behavior-level and Sentence-level Conflicts | | 0 |
| Bootstrapped Learning of Emotion Hashtags #hashtags4you | | 0 |
| Building a Bridge: A Method for Image-Text Sarcasm Detection Without Pretraining on Image-Text Data | | 0 |
| C-Net: Contextual Network for Sarcasm Detection | | 0 |
| CNN- and LSTM-based Claim Classification in Online User Comments | | 0 |
| Combining Context-Free and Contextualized Representations for Arabic Sarcasm Detection and Sentiment Identification | | 0 |
| Commander-GPT: Fully Unleashing the Sarcasm Detection Capability of Multi-Modal Large Language Models | | 0 |
| Computational Sarcasm | | 0 |
Page 23 of 27

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PaLM 2 (few-shot, k=3, CoT) | Accuracy | 84.8 | | Unverified |
| 2 | PaLM 2 (few-shot, k=3, Direct) | Accuracy | 78.7 | | Unverified |
| 3 | PaLM 540B (few-shot, k=3) | Accuracy | 78.1 | | Unverified |
| 4 | BLOOM 176B (few-shot, k=3) | Accuracy | 72.47 | | Unverified |
| 5 | Bloomberg GPT (few-shot, k=3) | Accuracy | 69.66 | | Unverified |
| 6 | GPT-NeoX (few-shot, k=3) | Accuracy | 62.36 | | Unverified |
| 7 | Chinchilla-70B (few-shot, k=5) | Accuracy | 58.6 | | Unverified |
| 8 | Gopher-280B (few-shot, k=5) | Accuracy | 48.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BERT + Aspect-based approaches | F1 | 0.74 | | Unverified |
| 2 | RoBERTa_large (Separated Context-Response) | F1 | 0.72 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | RoBERTa_large (Context-Response) | F1 | 0.77 | | Unverified |
| 2 | BERT | F1 | 0.73 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CASCADE | Accuracy | 77 | | Unverified |
| 2 | Bag-of-Bigrams | Accuracy | 75.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Bag-of-Bigrams | Accuracy | 76.5 | | Unverified |
| 2 | CASCADE | Accuracy | 74 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | RoBERTa + Mutation Data Augmentation | F1-Score | 0.41 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | MUStARD++ | Precision | 70.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Bag-of-Words | Avg F1 | 27 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | BART | R1 | 36.88 | | Unverified |