SOTAVerified

Sentiment Analysis

Sentiment Analysis is the task of classifying the polarity of a given text. For instance, a text-based tweet can be categorized as "positive", "negative", or "neutral". Given texts and accompanying labels, a model can be trained to predict the correct sentiment.
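
The supervised setup above can be sketched with a minimal multinomial Naive Bayes classifier in pure Python. The training tweets and labels here are hypothetical toy data, not from any benchmark:

```python
from collections import Counter, defaultdict
import math

# Toy labelled tweets (hypothetical data, just to illustrate the setup).
train = [
    ("i love this phone", "positive"),
    ("what a great day", "positive"),
    ("i hate waiting in line", "negative"),
    ("this movie was terrible", "negative"),
]

# Count word frequencies per class.
word_counts = defaultdict(Counter)
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    """Pick the class maximizing log P(class) + sum of log P(word | class)."""
    best, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / len(train))
        total = sum(word_counts[label].values())
        for w in text.split():
            # Laplace smoothing so unseen words don't zero out a class.
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

print(predict("i love this day"))  # → positive
```

Real systems replace the word counts with learned representations, but the structure — labelled texts in, a polarity predictor out — is the same.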

Sentiment Analysis techniques can be categorized into machine learning approaches, lexicon-based approaches, and hybrid methods. Subcategories of research in sentiment analysis include multimodal sentiment analysis, aspect-based sentiment analysis, fine-grained opinion analysis, and language-specific sentiment analysis.
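
A lexicon-based approach, in its simplest form, sums polarity scores of words from a sentiment lexicon. The tiny lexicon below is made up for illustration; real systems draw on resources such as VADER or SentiWordNet:

```python
# Toy polarity lexicon (hypothetical entries, for illustration only).
LEXICON = {"good": 1, "great": 2, "love": 2, "bad": -1, "awful": -2, "hate": -2}
NEGATORS = {"not", "never", "no"}

def lexicon_sentiment(text):
    """Sum lexicon scores, flipping polarity after a negator."""
    score, negate = 0, False
    for token in text.lower().split():
        if token in NEGATORS:
            negate = True  # flip the polarity of the next lexicon hit
            continue
        if token in LEXICON:
            score += -LEXICON[token] if negate else LEXICON[token]
            negate = False
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(lexicon_sentiment("not a bad film"))  # → positive
```

No training data is needed, which is the main appeal of lexicon-based methods; hybrid methods combine such scores with machine-learned features.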

More recently, deep learning models such as RoBERTa and T5 have been used to train high-performing sentiment classifiers, which are evaluated using metrics like F1, recall, and precision. To evaluate sentiment analysis systems, benchmark datasets such as SST, GLUE, and IMDb movie reviews are used.
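
The metrics mentioned above reduce to counts of true positives, false positives, and false negatives. A minimal sketch for one target class (libraries such as scikit-learn provide the same computation, with micro/macro averaging over classes):

```python
def precision_recall_f1(y_true, y_pred, positive="positive"):
    """Precision, recall, and F1 for a single target class."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = ["positive", "positive", "negative", "negative"]
y_pred = ["positive", "negative", "negative", "positive"]
print(precision_recall_f1(y_true, y_pred))  # → (0.5, 0.5, 0.5)
```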

Papers

Showing 276–300 of 5,630 papers

| Title | Status | Hype |
| --- | --- | --- |
| Uniform Discretized Integrated Gradients: An effective attribution based method for explaining large language models | | 0 |
| Fine-Grained Sentiment Analysis of Electric Vehicle User Reviews: A Bidirectional LSTM Approach to Capturing Emotional Intensity in Chinese Text | | 0 |
| Pragmatic Metacognitive Prompting Improves LLM Performance on Sarcasm Detection | | 0 |
| Multimodal Sentiment Analysis Based on BERT and ResNet | | 0 |
| Were You Helpful -- Predicting Helpful Votes from Amazon Reviews | | 0 |
| Multi-Granularity Tibetan Textual Adversarial Attack Method Based on Masked Language Model | Code | 0 |
| A Comprehensive Evaluation of Large Language Models on Aspect-Based Sentiment Analysis | | 0 |
| Data Uncertainty-Aware Learning for Multimodal Aspect-based Sentiment Analysis | | 0 |
| Explaining the Unexplained: Revealing Hidden Correlations for Better Interpretability | | 0 |
| PGSO: Prompt-based Generative Sequence Optimization Network for Aspect-based Sentiment Analysis | | 0 |
| The Impact of Generative AI on Student Churn and the Future of Formal Education | | 0 |
| Enhancing Sentiment Analysis in Bengali Texts: A Hybrid Approach Using Lexicon-Based Algorithm and Pretrained Language Model Bangla-BERT | | 0 |
| Stock Price Prediction using Multi-Faceted Information based on Deep Recurrent Neural Networks | | 0 |
| Topic Modeling and Sentiment Analysis on Japanese Online Media's Coverage of Nuclear Energy | | 0 |
| SentiXRL: An advanced large language Model Framework for Multilingual Fine-Grained Emotion Classification in Complex Text Environment | | 0 |
| On Limitations of LLM as Annotator for Low Resource Languages | | 0 |
| Synthetic Data Generation with LLM for Improved Depression Prediction | | 0 |
| BERT or FastText? A Comparative Analysis of Contextual as well as Non-Contextual Embeddings | | 0 |
| "Stupid robot, I want to speak to a human!" User Frustration Detection in Task-Oriented Dialog Systems | | 0 |
| Exploring Large Language Models for Multimodal Sentiment Analysis: Challenges, Benchmarks, and Future Directions | | 0 |
| TANGNN: a Concise, Scalable and Effective Graph Neural Networks with Top-m Attention Mechanism for Graph Representation Learning | Code | 0 |
| Understanding the Impact of News Articles on the Movement of Market Index: A Case on Nifty 50 | | 0 |
| Comparative Analysis of Pooling Mechanisms in LLMs: A Sentiment Analysis Perspective | | 0 |
| Public sentiments on the fourth industrial revolution: An unsolicited public opinion poll from Twitter | | 0 |
| Sentiment Analysis of Economic Text: A Lexicon-Based Approach | | 0 |
Page 12 of 226

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Word+ES (Scratch) | Attack Success Rate | 100 | | Unverified |
| 2 | T5-11B | Accuracy | 97.5 | | Unverified |
| 3 | MT-DNN-SMART | Accuracy | 97.5 | | Unverified |
| 4 | T5-3B | Accuracy | 97.4 | | Unverified |
| 5 | MUPPET Roberta Large | Accuracy | 97.4 | | Unverified |
| 6 | StructBERT + RoBERTa ensemble | Accuracy | 97.1 | | Unverified |
| 7 | ALBERT | Accuracy | 97.1 | | Unverified |
| 8 | XLNet (single model) | Accuracy | 97 | | Unverified |
| 9 | SMART-RoBERTa | Dev Accuracy | 96.9 | | Unverified |
| 10 | ELECTRA | Accuracy | 96.9 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | RoBERTa-large with LlamBERT | Accuracy | 96.68 | | Unverified |
| 2 | RoBERTa-large | Accuracy | 96.54 | | Unverified |
| 3 | XLNet | Accuracy | 96.21 | | Unverified |
| 4 | Heinsen Routing + RoBERTa Large | Accuracy | 96.2 | | Unverified |
| 5 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 96.1 | | Unverified |
| 6 | GraphStar | Accuracy | 96 | | Unverified |
| 7 | DV-ngrams-cosine with NB sub-sampling + RoBERTa.base | Accuracy | 95.94 | | Unverified |
| 8 | DV-ngrams-cosine + RoBERTa.base | Accuracy | 95.92 | | Unverified |
| 9 | Roberta_Large ST + Cosine Similarity Loss | Accuracy | 95.9 | | Unverified |
| 10 | BERT large finetune UDA | Accuracy | 95.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Llama-3.3-70B + CAPO | Accuracy | 62.27 | | Unverified |
| 2 | Mistral-Small-24B + CAPO | Accuracy | 60.2 | | Unverified |
| 3 | Heinsen Routing + RoBERTa Large | Accuracy | 59.8 | | Unverified |
| 4 | RoBERTa-large+Self-Explaining | Accuracy | 59.1 | | Unverified |
| 5 | Qwen2.5-32B + CAPO | Accuracy | 59.07 | | Unverified |
| 6 | Heinsen Routing + GPT-2 | Accuracy | 58.5 | | Unverified |
| 7 | BCN+Suffix BiLSTM-Tied+CoVe | Accuracy | 56.2 | | Unverified |
| 8 | BERT Large | Accuracy | 55.5 | | Unverified |
| 9 | LM-CPPF RoBERTa-base | Accuracy | 54.9 | | Unverified |
| 10 | BCN+ELMo | Accuracy | 54.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Char-level CNN | Error | 4.88 | | Unverified |
| 2 | SVDCNN | Error | 4.74 | | Unverified |
| 3 | LEAM | Error | 4.69 | | Unverified |
| 4 | fastText, h=10, bigram | Error | 4.3 | | Unverified |
| 5 | SWEM-hier | Error | 4.19 | | Unverified |
| 6 | SRNN | Error | 3.96 | | Unverified |
| 7 | M-ACNN | Error | 3.89 | | Unverified |
| 8 | DNC+CUW | Error | 3.6 | | Unverified |
| 9 | CCCapsNet | Error | 3.52 | | Unverified |
| 10 | Block-sparse LSTM | Error | 3.27 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Millions of Emoji | Training Time | 1,500 | | Unverified |
| 2 | VLAWE | Accuracy | 93.3 | | Unverified |
| 3 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 92.5 | | Unverified |
| 4 | AnglE-LLaMA-7B | Accuracy | 91.09 | | Unverified |
| 5 | byte mLSTM7 | Accuracy | 86.8 | | Unverified |
| 6 | MEAN | Accuracy | 84.5 | | Unverified |
| 7 | RNN-Capsule | Accuracy | 83.8 | | Unverified |
| 8 | Capsule-B | Accuracy | 82.3 | | Unverified |
| 9 | SuBiLSTM-Tied | Accuracy | 81.6 | | Unverified |
| 10 | USE_T+CNN | Accuracy | 81.59 | | Unverified |