SOTAVerified

Sentiment Analysis

Sentiment Analysis is the task of classifying the polarity of a given text. For instance, a text-based tweet can be categorized as "positive", "negative", or "neutral". Given texts and accompanying labels, a model can be trained to predict the correct sentiment.
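As a minimal sketch of this supervised setup, the code below trains a from-scratch multinomial Naive Bayes classifier on a handful of labeled examples. The tiny corpus is invented for illustration; real systems train on large datasets and typically use stronger models.

```python
import math
from collections import Counter, defaultdict

# Toy multinomial Naive Bayes sentiment classifier (illustrative only).
class NaiveBayesSentiment:
    def __init__(self):
        self.class_counts = Counter()
        self.word_counts = defaultdict(Counter)
        self.vocab = set()

    def fit(self, texts, labels):
        for text, label in zip(texts, labels):
            self.class_counts[label] += 1
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        words = text.lower().split()
        best_label, best_score = None, float("-inf")
        total_docs = sum(self.class_counts.values())
        for label in self.class_counts:
            # log prior + log likelihood with add-one (Laplace) smoothing
            score = math.log(self.class_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in words:
                score += math.log((self.word_counts[label][word] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

clf = NaiveBayesSentiment()
clf.fit(
    ["great movie loved it", "wonderful acting great plot",
     "terrible film hated it", "awful plot terrible acting"],
    ["positive", "positive", "negative", "negative"],
)
print(clf.predict("loved the great acting"))  # → positive
```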

Sentiment Analysis techniques can be categorized into machine learning approaches, lexicon-based approaches, and hybrid methods. Research subareas include multimodal sentiment analysis, aspect-based sentiment analysis, fine-grained opinion analysis, and language-specific sentiment analysis.
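A lexicon-based approach can be sketched in a few lines: sum per-word polarity scores from a polarity lexicon and threshold the total. The miniature lexicon below is hypothetical; production systems draw on resources such as VADER or SentiWordNet with thousands of scored entries.

```python
# Hypothetical miniature polarity lexicon (illustrative only).
LEXICON = {"good": 1, "great": 2, "love": 2, "bad": -1, "awful": -2, "hate": -2}

def lexicon_sentiment(text):
    """Classify by summing word polarity scores; a zero total is 'neutral'."""
    score = sum(LEXICON.get(w, 0) for w in text.lower().split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(lexicon_sentiment("great food but awful service"))  # 2 - 2 = 0 → neutral
```

Lexicon methods need no training data, which is why hybrid systems often use them to bootstrap or complement learned classifiers.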

More recently, deep learning techniques such as RoBERTa and T5 have been used to train high-performing sentiment classifiers, evaluated with metrics like F1, recall, and precision on benchmark datasets such as SST, GLUE, and IMDb movie reviews.
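The metrics named above can be computed directly from gold and predicted labels. The sketch below shows per-class precision, recall, and F1 on invented toy predictions:

```python
def precision_recall_f1(y_true, y_pred, positive="positive"):
    """Compute precision, recall, and F1 for one target class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = ["positive", "negative", "positive", "negative"]
pred = ["positive", "positive", "positive", "negative"]
p, r, f = precision_recall_f1(gold, pred)
print(round(p, 3), round(r, 3), round(f, 3))  # → 0.667 1.0 0.8
```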

Papers

Showing 4901–4950 of 5630 papers

Title | Status | Hype
Learning to select data for transfer learning with Bayesian Optimization | Code | 0
FABSA: An aspect-based sentiment analysis dataset of user reviews | Code | 0
Do Transformer Models Show Similar Attention Patterns to Task-Specific Human Gaze? | Code | 0
FaiMA: Feature-aware In-context Learning for Multi-domain Aspect-based Sentiment Analysis | Code | 0
The interplay between language similarity and script on a novel multi-layer Algerian dialect corpus | Code | 0
Don't Count, Predict! An Automatic Approach to Learning Sentiment Lexicons for Short Text | Code | 0
Learning to Skim Text | Code | 0
FASSILA: A Corpus for Algerian Dialect Fake News Detection and Sentiment Analysis | Code | 0
The Many-to-Many Mapping Between the Concordance Correlation Coefficient and the Mean Square Error | Code | 0
Fast and accurate sentiment classification using an enhanced Naive Bayes model | Code | 0
AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles | Code | 0
Fast Dawid-Skene: A Fast Vote Aggregation Scheme for Sentiment Classification | Code | 0
RomanSetu: Efficiently unlocking multilingual capabilities of Large Language Models via Romanization | Code | 0
Domain-Specific Language Model Post-Training for Indonesian Financial NLP | Code | 0
Word2Vec: Optimal Hyper-Parameters and Their Impact on NLP Downstream Tasks | Code | 0
FastTrees: Parallel Latent Tree-Induction for Faster Sequence Encoding | Code | 0
A review of Spanish corpora annotated with negation | Code | 0
RPN: A Word Vector Level Data Augmentation Algorithm in Deep Learning for Language Understanding | Code | 0
On the Applicability of Zero-Shot Cross-Lingual Transfer Learning for Sentiment Classification in Distant Language Pairs | Code | 0
Adversarial Deep Averaging Networks for Cross-Lingual Sentiment Classification | Code | 0
RSM-NLP at BLP-2023 Task 2: Bangla Sentiment Analysis using Weighted and Majority Voted Fine-Tuned Transformers | Code | 0
Learning Word Importance with the Neural Bag-of-Words Model | Code | 0
Learning Word Meta-Embeddings by Autoencoding | Code | 0
Do LLMs Think Fast and Slow? A Causal Study on Sentiment Analysis | Code | 0
Unsupervised Sentiment Analysis for Code-mixed Data | Code | 0
Analyzing sports commentary in order to automatically recognize events and extract insights | Code | 0
SubRegWeigh: Effective and Efficient Annotation Weighing with Subword Regularization | Code | 0
Learning Word Vectors for Sentiment Analysis | Code | 0
A Review of Different Word Embeddings for Sentiment Classification using Deep Learning | Code | 0
Domain-Expanded ASTE: Rethinking Generalization in Aspect Sentiment Triplet Extraction | Code | 0
Domain-Adversarial Neural Networks | Code | 0
Left-Center-Right Separated Neural Network for Aspect-based Sentiment Analysis with Rotatory Attention | Code | 0
Domain Adversarial Fine-Tuning as an Effective Regularizer | Code | 0
Sentiment-based Candidate Selection for NMT | Code | 0
FEET: A Framework for Evaluating Embedding Techniques | Code | 0
ferret: a Framework for Benchmarking Explainers on Transformers | Code | 0
The merits of Universal Language Model Fine-tuning for Small Datasets -- a case with Dutch book reviews | Code | 0
FEUP at SemEval-2017 Task 5: Predicting Sentiment Polarity and Intensity with Financial Word Embeddings | Code | 0
Domain Adapted Word Embeddings for Improved Sentiment Classification | Code | 0
LemmaTag: Jointly Tagging and Lemmatizing for Morphologically Rich Languages with BRNNs | Code | 0
Less Grammar, More Features | Code | 0
Domain Adaptation from Scratch | Code | 0
TraceNet: Tracing and Locating the Key Elements in Sentiment Analysis | Code | 0
FIESTA: Fast IdEntification of State-of-The-Art models using adaptive bandit algorithms | Code | 0
Domain Adaptation for Arabic Cross-Domain and Cross-Dialect Sentiment Analysis from Contextualized Word Embedding | Code | 0
Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning | Code | 0
FiLMing Multimodal Sarcasm Detection with Attention | Code | 0
Less Learn Shortcut: Analyzing and Mitigating Learning of Spurious Feature-Label Correlation | Code | 0
"Let's Eat Grandma": Does Punctuation Matter in Sentence Representation? | Code | 0
Page 99 of 113

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Word+ES (Scratch) | Attack Success Rate | 100 | | Unverified
2 | MT-DNN-SMART | Accuracy | 97.5 | | Unverified
3 | T5-11B | Accuracy | 97.5 | | Unverified
4 | MUPPET Roberta Large | Accuracy | 97.4 | | Unverified
5 | T5-3B | Accuracy | 97.4 | | Unverified
6 | ALBERT | Accuracy | 97.1 | | Unverified
7 | StructBERTRoBERTa ensemble | Accuracy | 97.1 | | Unverified
8 | XLNet (single model) | Accuracy | 97 | | Unverified
9 | SMARTRoBERTa | Dev Accuracy | 96.9 | | Unverified
10 | ELECTRA | Accuracy | 96.9 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-large with LlamBERT | Accuracy | 96.68 | | Unverified
2 | RoBERTa-large | Accuracy | 96.54 | | Unverified
3 | XLNet | Accuracy | 96.21 | | Unverified
4 | Heinsen Routing + RoBERTa Large | Accuracy | 96.2 | | Unverified
5 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 96.1 | | Unverified
6 | GraphStar | Accuracy | 96 | | Unverified
7 | DV-ngrams-cosine with NB sub-sampling + RoBERTa.base | Accuracy | 95.94 | | Unverified
8 | DV-ngrams-cosine + RoBERTa.base | Accuracy | 95.92 | | Unverified
9 | Roberta_Large ST + Cosine Similarity Loss | Accuracy | 95.9 | | Unverified
10 | BERT large finetune UDA | Accuracy | 95.8 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Llama-3.3-70B + CAPO | Accuracy | 62.27 | | Unverified
2 | Mistral-Small-24B + CAPO | Accuracy | 60.2 | | Unverified
3 | Heinsen Routing + RoBERTa Large | Accuracy | 59.8 | | Unverified
4 | RoBERTa-large+Self-Explaining | Accuracy | 59.1 | | Unverified
5 | Qwen2.5-32B + CAPO | Accuracy | 59.07 | | Unverified
6 | Heinsen Routing + GPT-2 | Accuracy | 58.5 | | Unverified
7 | BCN+Suffix BiLSTM-Tied+CoVe | Accuracy | 56.2 | | Unverified
8 | BERT Large | Accuracy | 55.5 | | Unverified
9 | LM-CPPF RoBERTa-base | Accuracy | 54.9 | | Unverified
10 | BCN+ELMo | Accuracy | 54.7 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Char-level CNN | Error | 4.88 | | Unverified
2 | SVDCNN | Error | 4.74 | | Unverified
3 | LEAM | Error | 4.69 | | Unverified
4 | fastText, h=10, bigram | Error | 4.3 | | Unverified
5 | SWEM-hier | Error | 4.19 | | Unverified
6 | SRNN | Error | 3.96 | | Unverified
7 | M-ACNN | Error | 3.89 | | Unverified
8 | DNC+CUW | Error | 3.6 | | Unverified
9 | CCCapsNet | Error | 3.52 | | Unverified
10 | Block-sparse LSTM | Error | 3.27 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Millions of Emoji | Training Time | 1,500 | | Unverified
2 | VLAWE | Accuracy | 93.3 | | Unverified
3 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 92.5 | | Unverified
4 | AnglE-LLaMA-7B | Accuracy | 91.09 | | Unverified
5 | byte mLSTM7 | Accuracy | 86.8 | | Unverified
6 | MEAN | Accuracy | 84.5 | | Unverified
7 | RNN-Capsule | Accuracy | 83.8 | | Unverified
8 | Capsule-B | Accuracy | 82.3 | | Unverified
9 | SuBiLSTM-Tied | Accuracy | 81.6 | | Unverified
10 | USE_T+CNN | Accuracy | 81.59 | | Unverified