SOTAVerified

Sentiment Analysis

Sentiment Analysis is the task of classifying the polarity of a given text. For instance, a text-based tweet can be categorized as "positive", "negative", or "neutral". Given texts and their accompanying labels, a model can be trained to predict the correct sentiment.
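As a minimal sketch of the supervised setup described above, the following trains a tiny multinomial Naive Bayes classifier from scratch on a handful of hypothetical labeled tweets (the lexicon, tokenizer, and data are illustrative toys, not a real system):

```python
from collections import Counter, defaultdict
import math

def tokenize(text):
    return text.lower().split()

def train_nb(examples):
    """Train a multinomial Naive Bayes model from (text, label) pairs."""
    label_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for w in tokenize(text):
            word_counts[label][w] += 1
            vocab.add(w)
    return label_counts, word_counts, vocab

def predict(model, text):
    """Pick the label maximizing log prior + log likelihood (Laplace smoothing)."""
    label_counts, word_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label in label_counts:
        lp = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokenize(text):
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Toy training data standing in for labeled tweets.
tweets = [
    ("i love this phone", "positive"),
    ("great battery and screen", "positive"),
    ("i hate the camera", "negative"),
    ("terrible battery life", "negative"),
]
model = train_nb(tweets)
```

Real systems replace the bag-of-words counts with learned representations, but the train-on-labels, predict-polarity loop is the same.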

Sentiment Analysis techniques can be categorized into machine learning approaches, lexicon-based approaches, and hybrid methods. Subcategories of research in sentiment analysis include multimodal sentiment analysis, aspect-based sentiment analysis, fine-grained opinion analysis, and language-specific sentiment analysis.
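A lexicon-based approach, in contrast to the learned classifier, scores a text directly against a dictionary of word polarities. A minimal sketch, assuming a toy hand-written lexicon and simple negation flipping (real systems use curated resources such as VADER or SentiWordNet):

```python
# Toy polarity lexicon; values are illustrative, not taken from any real resource.
LEXICON = {"good": 1, "great": 2, "love": 2, "bad": -1, "terrible": -2, "hate": -2}
NEGATORS = {"not", "never", "no"}

def lexicon_score(text):
    """Sum word polarities, flipping the sign of the word after a negator."""
    score, negate = 0, False
    for word in text.lower().split():
        if word in NEGATORS:
            negate = True
            continue
        polarity = LEXICON.get(word, 0)
        score += -polarity if negate else polarity
        negate = False
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

A hybrid method would typically feed such lexicon scores into a machine-learned model as additional features.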

More recently, deep learning models such as RoBERTa and T5 have been used to train high-performing sentiment classifiers, which are evaluated using metrics like precision, recall, and F1. To evaluate sentiment analysis systems, benchmark datasets like SST, IMDB movie reviews, and GLUE are used.
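The evaluation metrics mentioned above reduce to counts of true positives, false positives, and false negatives for the class of interest. A self-contained sketch of how they are computed from parallel lists of gold and predicted labels:

```python
def precision_recall_f1(y_true, y_pred, positive="positive"):
    """Compute precision, recall, and F1 for one target class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For multi-class benchmarks like fine-grained SST, these per-class scores are usually macro- or micro-averaged across labels.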


Papers

Showing 2801–2850 of 5630 papers

We Usually Don't Like Going to the Dentist: Using Common Sense to Detect Irony on Twitter
What's up on Twitter? Catch up with TWIST!
What BERTs and GPTs know about your brand? Probing contextual language models for affect associations
What confuses BERT? Linguistic Evaluation of Sentiment Analysis on Telecom Customer Opinion
What Does a TextCNN Learn?
What do LLMs Know about Financial Markets? A Case Study on Reddit Market Sentiment Analysis
What Emotions Make One or Five Stars? Understanding Ratings of Online Product Reviews by Sentiment Analysis and XAI
What is Sentiment Meant to Mean to Language Models?
"What Is Your Evidence?" A Study of Controversial Topics on Social Media
What Models Know About Their Attackers: Deriving Attacker Information From Latent Representations
What Sentiment and Fun Facts We Learnt Before FIFA World Cup Qatar 2022 Using Twitter and AI
What Sentiments Can Be Found in Medical Forums?
What we really want to find by Sentiment Analysis: The Relationship between Computational Models and Psychological State
What we write about when we write about causality: Features of causal statements across large-scale social discourse
What Would Elsa Do? Freezing Layers During Transformer Fine-Tuning
Wheel of Life: an initial investigation. Topic-Related Polarity Visualization in Personal Stories
When and Why a Model Fails? A Human-in-the-loop Error Detection Framework for Sentiment Analysis
When and Why does a Model Fail? A Human-in-the-loop Error Detection Framework for Sentiment Analysis
When Are Tree Structures Necessary for Deep Learning of Representations?
When Crowd Meets Persona: Creating a Large-Scale Open-Domain Persona Dialogue Corpus
When does CLIP generalize better than unimodal models? When judging human-centric concepts
When does deep multi-task learning work for loosely related document classification tasks?
When More is not Necessary Better: Multilingual Auxiliary Tasks for Zero-Shot Cross-Lingual Transfer of Hate Speech Detection Models
When Saliency Meets Sentiment: Understanding How Image Content Invokes Emotion and Sentiment
When Word Embeddings Become Endangered
Where does active travel fit within local community narratives of mobility space and place?
Which is Making the Contribution: Modulating Unimodal and Cross-modal Dynamics for Multimodal Sentiment Analysis
Who cares about Sarcastic Tweets? Investigating the Impact of Sarcasm on Sentiment Analysis
Who Did What to Whom? A Contrastive Study of Syntacto-Semantic Dependencies
'Who would have thought of that!': A Hierarchical Topic Model for Extraction of Sarcasm-prevalent Topics and Sarcasm Detection
Why is "Problems" Predictive of Positive Sentiment? A Case Study of Explaining Unintuitive Features in Sentiment Classification
Why Question Answering using Sentiment Analysis and Word Classes
Why Words Alone Are Not Enough: Error Analysis of Lexicon-based Polarity Classifier for Czech
WiDe-analysis: Enabling One-click Content Moderation Analysis on Wikipedia's Articles for Deletion
Wikipedia Titles As Noun Tag Predictors
Will Affective Computing Emerge from Foundation Models and General AI? A First Evaluation on ChatGPT
Will_go at SemEval-2020 Task 9: An Accurate Approach for Sentiment Analysis on Hindi-English Tweets Based on Bert and Pesudo Label Strategy
Will it Blend? Blending Weak and Strong Labeled Data in a Neural Network for Argumentation Mining
WisdoM: Improving Multimodal Sentiment Analysis by Fusing Contextual World Knowledge
WLV at SemEval-2018 Task 3: Dissecting Tweets in Search of Irony
Word2rate: training and evaluating multiple word embeddings as statistical transitions
Word2Vec and Doc2Vec in Unsupervised Sentiment Analysis of Clinical Discharge Summaries
Word Affect Intensities
Word Embedding Algorithms as Generalized Low Rank Models and their Canonical Form
Word Embedding and Topic Modeling Enhanced Multiple Features for Content Linking and Argument / Sentiment Labeling in Online Forums
Word Embedding and WordNet Based Metaphor Identification and Interpretation
Word Embedding-based Antonym Detection using Thesauri and Distributional Information
Word Embedding Evaluation for Sinhala
Word Embeddings for Banking Industry
Word Embeddings for Code-Mixed Language Processing

Benchmark Results

#  | Model                        | Metric              | Claimed | Verified | Status
1  | Word+ES (Scratch)            | Attack Success Rate | 100     |          | Unverified
2  | MT-DNN-SMART                 | Accuracy            | 97.5    |          | Unverified
3  | T5-11B                       | Accuracy            | 97.5    |          | Unverified
4  | MUPPET Roberta Large         | Accuracy            | 97.4    |          | Unverified
5  | T5-3B                        | Accuracy            | 97.4    |          | Unverified
6  | ALBERT                       | Accuracy            | 97.1    |          | Unverified
7  | StructBERT RoBERTa ensemble  | Accuracy            | 97.1    |          | Unverified
8  | XLNet (single model)         | Accuracy            | 97      |          | Unverified
9  | SMART-RoBERTa                | Dev Accuracy        | 96.9    |          | Unverified
10 | ELECTRA                      | Accuracy            | 96.9    |          | Unverified
#  | Model                                                | Metric   | Claimed | Verified | Status
1  | RoBERTa-large with LlamBERT                          | Accuracy | 96.68   |          | Unverified
2  | RoBERTa-large                                        | Accuracy | 96.54   |          | Unverified
3  | XLNet                                                | Accuracy | 96.21   |          | Unverified
4  | Heinsen Routing + RoBERTa Large                      | Accuracy | 96.2    |          | Unverified
5  | RoBERTa-large 355M + Entailment as Few-shot Learner  | Accuracy | 96.1    |          | Unverified
6  | GraphStar                                            | Accuracy | 96      |          | Unverified
7  | DV-ngrams-cosine with NB sub-sampling + RoBERTa.base | Accuracy | 95.94   |          | Unverified
8  | DV-ngrams-cosine + RoBERTa.base                      | Accuracy | 95.92   |          | Unverified
9  | Roberta_Large ST + Cosine Similarity Loss            | Accuracy | 95.9    |          | Unverified
10 | BERT large finetune UDA                              | Accuracy | 95.8    |          | Unverified
#  | Model                           | Metric   | Claimed | Verified | Status
1  | Llama-3.3-70B + CAPO            | Accuracy | 62.27   |          | Unverified
2  | Mistral-Small-24B + CAPO        | Accuracy | 60.2    |          | Unverified
3  | Heinsen Routing + RoBERTa Large | Accuracy | 59.8    |          | Unverified
4  | RoBERTa-large+Self-Explaining   | Accuracy | 59.1    |          | Unverified
5  | Qwen2.5-32B + CAPO              | Accuracy | 59.07   |          | Unverified
6  | Heinsen Routing + GPT-2         | Accuracy | 58.5    |          | Unverified
7  | BCN+Suffix BiLSTM-Tied+CoVe     | Accuracy | 56.2    |          | Unverified
8  | BERT Large                      | Accuracy | 55.5    |          | Unverified
9  | LM-CPPF RoBERTa-base            | Accuracy | 54.9    |          | Unverified
10 | BCN+ELMo                        | Accuracy | 54.7    |          | Unverified
#  | Model                  | Metric | Claimed | Verified | Status
1  | Char-level CNN         | Error  | 4.88    |          | Unverified
2  | SVDCNN                 | Error  | 4.74    |          | Unverified
3  | LEAM                   | Error  | 4.69    |          | Unverified
4  | fastText, h=10, bigram | Error  | 4.3     |          | Unverified
5  | SWEM-hier              | Error  | 4.19    |          | Unverified
6  | SRNN                   | Error  | 3.96    |          | Unverified
7  | M-ACNN                 | Error  | 3.89    |          | Unverified
8  | DNC+CUW                | Error  | 3.6     |          | Unverified
9  | CCCapsNet              | Error  | 3.52    |          | Unverified
10 | Block-sparse LSTM      | Error  | 3.27    |          | Unverified
#  | Model                                               | Metric        | Claimed | Verified | Status
1  | Millions of Emoji                                   | Training Time | 1,500   |          | Unverified
2  | VLAWE                                               | Accuracy      | 93.3    |          | Unverified
3  | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy      | 92.5    |          | Unverified
4  | AnglE-LLaMA-7B                                      | Accuracy      | 91.09   |          | Unverified
5  | byte mLSTM7                                         | Accuracy      | 86.8    |          | Unverified
6  | MEAN                                                | Accuracy      | 84.5    |          | Unverified
7  | RNN-Capsule                                         | Accuracy      | 83.8    |          | Unverified
8  | Capsule-B                                           | Accuracy      | 82.3    |          | Unverified
9  | SuBiLSTM-Tied                                       | Accuracy      | 81.6    |          | Unverified
10 | USE_T+CNN                                           | Accuracy      | 81.59   |          | Unverified