SOTAVerified

Sarcasm Detection

The goal of Sarcasm Detection is to determine whether a sentence is sarcastic or non-sarcastic. Sarcasm is a linguistic phenomenon with specific perlocutionary effects on the hearer, such as breaking their pattern of expectation. Consequently, correctly understanding sarcasm often requires drawing on multiple sources of information, including the utterance itself, the conversational context, and, frequently, real-world facts.
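As a toy illustration (not from the source), the task can be framed as binary sentence classification. The cue-word baseline below is deliberately naive; the cue list and threshold are hypothetical, and its shallowness illustrates why the context and world knowledge mentioned above matter:

```python
# Hypothetical cue-word list; real systems learn features from data.
CUE_WORDS = {"totally", "sure", "great", "yeah"}

def naive_sarcasm_score(sentence: str) -> float:
    """Fraction of tokens that match the (hypothetical) cue-word list."""
    tokens = sentence.lower().split()
    hits = sum(t.strip(".,!?") in CUE_WORDS for t in tokens)
    return hits / max(len(tokens), 1)

def classify(sentence: str, threshold: float = 0.2) -> str:
    """Binary decision: 'sarcastic' vs. 'non-sarcastic'."""
    return "sarcastic" if naive_sarcasm_score(sentence) >= threshold else "non-sarcastic"
```

A baseline like this fires on surface cues ("Yeah, great, another Monday.") but has no access to context or real-world expectations, which is exactly the gap the models in the tables below try to close.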

Source: Attentional Multi-Reading Sarcasm Detection

Papers

Showing 181–190 of 266 papers

Title | Status | Hype
Reasoning with Multimodal Sarcastic Tweets via Modeling Cross-Modality Contrast and Semantic Association | - | 0
A Comprehensive Analysis of Preprocessing for Word Representation Learning in Affective Tasks | - | 0
Sentiment and Emotion help Sarcasm? A Multi-task Learning Framework for Multi-Modal Sarcasm, Sentiment and Emotion Analysis | - | 0
Sarcasm Detection in Tweets with BERT and GloVe Embeddings | - | 0
Augmenting Data for Sarcasm Detection with Unlabeled Conversation Context | - | 0
Sarcasm Detection using Context Separators in Online Discourse | - | 0
Happy Are Those Who Grade without Seeing: A Multi-Task Learning Approach to Grade Essays Using Gaze Behaviour | Code | 0
Transformer-based Context-aware Sarcasm Detection in Conversation Threads from Social Media | - | 0
A Report on the 2020 Sarcasm Detection Shared Task | - | 0
Urban Dictionary Embeddings for Slang NLP Applications | - | 0
Page 19 of 27

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | PaLM 2 (few-shot, k=3, CoT) | Accuracy | 84.8 | - | Unverified
2 | PaLM 2 (few-shot, k=3, Direct) | Accuracy | 78.7 | - | Unverified
3 | PaLM 540B (few-shot, k=3) | Accuracy | 78.1 | - | Unverified
4 | BLOOM 176B (few-shot, k=3) | Accuracy | 72.47 | - | Unverified
5 | Bloomberg GPT (few-shot, k=3) | Accuracy | 69.66 | - | Unverified
6 | GPT-NeoX (few-shot, k=3) | Accuracy | 62.36 | - | Unverified
7 | Chinchilla-70B (few-shot, k=5) | Accuracy | 58.6 | - | Unverified
8 | Gopher-280B (few-shot, k=5) | Accuracy | 48.3 | - | Unverified
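The entries above evaluate large language models with few-shot prompting (k in-context examples). The exact prompt templates are not given on this page, so the sketch below uses an assumed format with hypothetical exemplars purely to show what a k=3 setup looks like:

```python
# Hypothetical labeled exemplars; the real benchmarks draw these
# from the evaluation dataset's training split.
EXEMPLARS = [
    ("Oh great, my flight is delayed again.", "sarcastic"),
    ("The library closes at 9pm on weekdays.", "non-sarcastic"),
    ("Wow, I just love grading 200 essays.", "sarcastic"),
]

def build_prompt(sentence: str, k: int = 3) -> str:
    """Assemble a few-shot classification prompt with k exemplars."""
    parts = ["Label each sentence as sarcastic or non-sarcastic."]
    for text, label in EXEMPLARS[:k]:
        parts.append(f"Sentence: {text}\nLabel: {label}")
    # The model is asked to complete the final label.
    parts.append(f"Sentence: {sentence}\nLabel:")
    return "\n\n".join(parts)
```

In the Direct variant the model emits the label immediately; in the CoT (chain-of-thought) variant the prompt additionally elicits a short reasoning step before the label.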
# | Model | Metric | Claimed | Verified | Status
1 | BERT + Aspect-based approaches | F1 | 0.74 | - | Unverified
2 | RoBERTa_large (Separated Context-Response) | F1 | 0.72 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa_large (Context-Response) | F1 | 0.77 | - | Unverified
2 | BERT | F1 | 0.73 | - | Unverified
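Several of these leaderboards report F1 rather than accuracy, since sarcasm datasets are often class-imbalanced. For reference, F1 is the harmonic mean of precision and recall over the positive (sarcastic) class:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 from raw counts: true positives, false positives, false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For example, with hypothetical counts tp=77, fp=23, fn=23, both precision and recall are 0.77, so F1 is 0.77, matching the scale of the scores above.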
# | Model | Metric | Claimed | Verified | Status
1 | CASCADE | Accuracy | 77 | - | Unverified
2 | Bag-of-Bigrams | Accuracy | 75.8 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Bag-of-Bigrams | Accuracy | 76.5 | - | Unverified
2 | CASCADE | Accuracy | 74 | - | Unverified
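The Bag-of-Bigrams baseline in the tables above is a simple feature-based model. A minimal sketch of the featurization step (the linear classifier trained on top of these counts is omitted, and the tokenization here is an assumed whitespace split):

```python
from collections import Counter

def bigram_features(sentence: str) -> Counter:
    """Count adjacent-token pairs; these counts feed a linear classifier."""
    tokens = sentence.lower().split()
    return Counter(zip(tokens, tokens[1:]))
```

Despite its simplicity, this family of baselines remains competitive with neural models like CASCADE on some splits, as the two tables above show.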
# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa + Mutation Data Augmentation | F1-Score | 0.41 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | MUStARD++ | Precision | 70.2 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Bag-of-Words | Avg F1 | 27 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | BART | R1 | 36.88 | - | Unverified