SOTAVerified

Sentiment Analysis

Sentiment Analysis is the task of classifying the polarity of a given text. For instance, a text-based tweet can be categorized as "positive", "negative", or "neutral". Given texts and accompanying labels, a model can be trained to predict the correct sentiment.
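The supervised setting described above can be sketched with a tiny from-scratch classifier. The training tweets below are invented toy data, and multinomial Naive Bayes stands in for whatever model a real system would use; this is a minimal illustration, not a production approach.

```python
import math
from collections import Counter, defaultdict

# Hypothetical toy corpus: (tweet, label) pairs covering the three polarities.
TRAIN = [
    ("i love this phone", "positive"),
    ("what a great day", "positive"),
    ("this movie was terrible", "negative"),
    ("i hate waiting in line", "negative"),
    ("the meeting is at noon", "neutral"),
    ("the package arrived today", "neutral"),
]

def train_nb(examples):
    """Fit a multinomial Naive Bayes model with add-one smoothing."""
    class_counts = Counter(label for _, label in examples)
    word_counts = defaultdict(Counter)  # label -> word -> count
    vocab = set()
    for text, label in examples:
        for word in text.split():
            word_counts[label][word] += 1
            vocab.add(word)
    return class_counts, word_counts, vocab

def predict(model, text):
    """Return the label maximizing log P(label) + sum of log P(word | label)."""
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            if word in vocab:  # ignore out-of-vocabulary words
                score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train_nb(TRAIN)
print(predict(model, "i love this great movie"))  # -> "positive"
```

Real systems replace the bag-of-words counts with learned representations, but the train-on-labels, predict-polarity loop is the same.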

Sentiment Analysis techniques can be categorized into machine learning approaches, lexicon-based approaches, and hybrid methods that combine the two. Subcategories of research in sentiment analysis include multimodal sentiment analysis, aspect-based sentiment analysis, fine-grained opinion analysis, and language-specific sentiment analysis.
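To make the lexicon-based category concrete, here is a minimal scorer: it counts hits against hand-picked positive and negative word lists and flips the sign after a negator. The word lists are illustrative placeholders, not a real lexicon such as VADER or SentiWordNet.

```python
# Illustrative mini-lexicon; real lexicons contain thousands of scored entries.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}
NEGATORS = {"not", "never", "no"}

def lexicon_polarity(text):
    """Score a text by counting lexicon hits; an immediately preceding negator flips the sign."""
    score = 0
    words = text.lower().split()
    for i, word in enumerate(words):
        hit = 1 if word in POSITIVE else -1 if word in NEGATIVE else 0
        if hit and i > 0 and words[i - 1] in NEGATORS:
            hit = -hit
        score += hit
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(lexicon_polarity("not bad at all"))  # negation flips "bad" -> "positive"
```

Unlike the machine learning approach, no training data is needed, which is why lexicon methods remain popular for low-resource languages and domains.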

More recently, deep learning models such as RoBERTa and T5 have been used to train high-performing sentiment classifiers. These systems are evaluated on benchmark datasets such as SST, IMDB movie reviews, and the GLUE suite, using metrics such as precision, recall, and F1.
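The evaluation metrics named above are straightforward to compute from parallel lists of gold and predicted labels; the label lists below are made-up examples for illustration.

```python
def precision_recall_f1(gold, pred, positive_class="positive"):
    """Compute per-class precision, recall, and F1 from parallel label lists."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p == positive_class)
    fp = sum(1 for g, p in zip(gold, pred) if p == positive_class and g != positive_class)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive_class and p != positive_class)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy gold/predicted labels: 2 true positives, 1 false positive, 1 false negative.
gold = ["positive", "negative", "positive", "neutral", "positive"]
pred = ["positive", "positive", "positive", "neutral", "negative"]
print(precision_recall_f1(gold, pred))  # precision, recall, and F1 all equal 2/3
```

Reported benchmark scores are typically the macro- or micro-average of these per-class values across all polarity labels.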

Papers

Showing 501-550 of 5,630 papers

Title | Status | Hype
Decision Stream: Cultivating Deep Decision Trees | Code | 1
Emergence of Grounded Compositional Language in Multi-Agent Populations | Code | 1
A Structured Self-attentive Sentence Embedding | Code | 1
ATR4S: Toolkit with State-of-the-art Automatic Terms Recognition Methods in Scala | Code | 1
Sentiment Analysis of Twitter Data for Predicting Stock Market Movements | Code | 1
Bag of Tricks for Efficient Text Classification | Code | 1
MOSI: Multimodal Corpus of Sentiment Intensity and Subjectivity Analysis in Online Opinion Videos | Code | 1
Adversarial Training Methods for Semi-Supervised Text Classification | Code | 1
Character-level Convolutional Networks for Text Classification | Code | 1
Domain-Adversarial Training of Neural Networks | Code | 1
Convolutional Neural Networks for Sentence Classification | Code | 1
Good Debt or Bad Debt: Detecting Semantic Orientations in Economic Texts | Code | 1
Behavioral Factors in Interactive Training of Text Classifiers | Code | 1
Closing the Loop: Fast, Interactive Semi-Supervised Annotation With Queries on Features and Instances | Code | 1
Cross-Lingual Adaptation using Structural Correspondence Learning | Code | 1
AdaptiSent: Context-Aware Adaptive Attention for Multimodal Aspect-Based Sentiment Analysis | | 0
DCR: Quantifying Data Contamination in LLMs Evaluation | Code | 0
AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles | Code | 0
SentiDrop: A Multi Modal Machine Learning model for Predicting Dropout in Distance Learning | | 0
GNN-CNN: An Efficient Hybrid Model of Convolutional and Graph Neural Networks for Text Representation | Code | 0
FINN-GL: Generalized Mixed-Precision Extensions for FPGA-Accelerated LSTMs | | 0
Unpacking Generative AI in Education: Computational Modeling of Teacher and Student Perspectives in Social Media Discourse | | 0
Characterizing Linguistic Shifts in Croatian News via Diachronic Word Embeddings | Code | 0
A Multi-Agent Probabilistic Inference Framework Inspired by Kairanban-Style CoT System with IdoBata Conversation for Debiasing | | 0
Analyzing Emotions in Bangla Social Media Comments Using Machine Learning and LIME | | 0
Advancing Exchange Rate Forecasting: Leveraging Machine Learning and AI for Enhanced Accuracy in Global Financial Markets | | 0
AraReasoner: Evaluating Reasoning-Based LLMs for Arabic NLP | | 0
Quantum Graph Transformer for NLP Sentiment Classification | | 0
How do datasets, developers, and models affect biases in a low-resourced language? | | 0
Artificial Intelligence and Civil Discourse: How LLMs Moderate Climate Change Conversations | | 0
CAtCh: Cognitive Assessment through Cookie Thief | Code | 0
Reasoning or Overthinking: Evaluating Large Language Models on Financial Sentiment Analysis | | 0
Sentiment Analysis in Learning Management Systems Understanding Student Feedback at Scale | | 0
Invariance Makes LLM Unlearning Resilient Even to Unanticipated Downstream Fine-Tuning | Code | 0
FinBERT2: A Specialized Bidirectional Encoder for Bridging the Gap in Finance-Specific Deployment of Large Language Models | | 0
Multi-Domain ABSA Conversation Dataset Generation via LLMs for Real-World Evaluation and Model Comparison | | 0
MMAFFBen: A Multilingual and Multimodal Affective Analysis Benchmark for Evaluating LLMs and VLMs | Code | 0
Predicting Human Depression with Hybrid Data Acquisition utilizing Physical Activity Sensing and Social Media Feeds | | 0
Sentiment Simulation using Generative AI Agents | | 0
The Role of Diversity in In-Context Learning for Large Language Models | | 0
Analyzing Political Bias in LLMs via Target-Oriented Sentiment Classification | | 0
Hermes@DravidianLangTech 2025: Sentiment Analysis of Dravidian Languages using XLM-RoBERTa | Code | 0
CrosGrpsABS: Cross-Attention over Syntactic and Semantic Graphs for Aspect-Based Sentiment Analysis in a Low-Resource Language | | 0
Improving Bangla Linguistics: Advanced LSTM, Bi-LSTM, and Seq2Seq Models for Translating Sylheti to Modern Bangla | | 0
Task Specific Pruning with LLM-Sieve: How Many Parameters Does Your Task Really Need? | | 0
Can AI Read Between The Lines? Benchmarking LLMs On Financial Nuance | | 0
An Empirical Study on Configuring In-Context Learning Demonstrations for Unleashing MLLMs' Sentimental Perception Capability | | 0
On Multilingual Encoder Language Model Compression for Low-Resource Languages | | 0
Omni TM-AE: A Scalable and Interpretable Embedding Model Using the Full Tsetlin Machine State Space | | 0
LLaMAs Have Feelings Too: Unveiling Sentiment and Emotion Representations in LLaMA Models Through Probing | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Word+ES (Scratch) | Attack Success Rate | 100 | | Unverified
2 | MT-DNN-SMART | Accuracy | 97.5 | | Unverified
3 | T5-11B | Accuracy | 97.5 | | Unverified
4 | MUPPET Roberta Large | Accuracy | 97.4 | | Unverified
5 | T5-3B | Accuracy | 97.4 | | Unverified
6 | ALBERT | Accuracy | 97.1 | | Unverified
7 | StructBERTRoBERTa ensemble | Accuracy | 97.1 | | Unverified
8 | XLNet (single model) | Accuracy | 97 | | Unverified
9 | SMARTRoBERTa | Dev Accuracy | 96.9 | | Unverified
10 | ELECTRA | Accuracy | 96.9 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-large with LlamBERT | Accuracy | 96.68 | | Unverified
2 | RoBERTa-large | Accuracy | 96.54 | | Unverified
3 | XLNet | Accuracy | 96.21 | | Unverified
4 | Heinsen Routing + RoBERTa Large | Accuracy | 96.2 | | Unverified
5 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 96.1 | | Unverified
6 | GraphStar | Accuracy | 96 | | Unverified
7 | DV-ngrams-cosine with NB sub-sampling + RoBERTa.base | Accuracy | 95.94 | | Unverified
8 | DV-ngrams-cosine + RoBERTa.base | Accuracy | 95.92 | | Unverified
9 | Roberta_Large ST + Cosine Similarity Loss | Accuracy | 95.9 | | Unverified
10 | BERT large finetune UDA | Accuracy | 95.8 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Llama-3.3-70B + CAPO | Accuracy | 62.27 | | Unverified
2 | Mistral-Small-24B + CAPO | Accuracy | 60.2 | | Unverified
3 | Heinsen Routing + RoBERTa Large | Accuracy | 59.8 | | Unverified
4 | RoBERTa-large+Self-Explaining | Accuracy | 59.1 | | Unverified
5 | Qwen2.5-32B + CAPO | Accuracy | 59.07 | | Unverified
6 | Heinsen Routing + GPT-2 | Accuracy | 58.5 | | Unverified
7 | BCN+Suffix BiLSTM-Tied+CoVe | Accuracy | 56.2 | | Unverified
8 | BERT Large | Accuracy | 55.5 | | Unverified
9 | LM-CPPF RoBERTa-base | Accuracy | 54.9 | | Unverified
10 | BCN+ELMo | Accuracy | 54.7 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Char-level CNN | Error | 4.88 | | Unverified
2 | SVDCNN | Error | 4.74 | | Unverified
3 | LEAM | Error | 4.69 | | Unverified
4 | fastText, h=10, bigram | Error | 4.3 | | Unverified
5 | SWEM-hier | Error | 4.19 | | Unverified
6 | SRNN | Error | 3.96 | | Unverified
7 | M-ACNN | Error | 3.89 | | Unverified
8 | DNC+CUW | Error | 3.6 | | Unverified
9 | CCCapsNet | Error | 3.52 | | Unverified
10 | Block-sparse LSTM | Error | 3.27 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Millions of Emoji | Training Time | 1,500 | | Unverified
2 | VLAWE | Accuracy | 93.3 | | Unverified
3 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 92.5 | | Unverified
4 | AnglE-LLaMA-7B | Accuracy | 91.09 | | Unverified
5 | byte mLSTM7 | Accuracy | 86.8 | | Unverified
6 | MEAN | Accuracy | 84.5 | | Unverified
7 | RNN-Capsule | Accuracy | 83.8 | | Unverified
8 | Capsule-B | Accuracy | 82.3 | | Unverified
9 | SuBiLSTM-Tied | Accuracy | 81.6 | | Unverified
10 | USE_T+CNN | Accuracy | 81.59 | | Unverified