SOTAVerified

Sentiment Analysis

Sentiment analysis is the task of classifying the polarity of a given text. For instance, a text-based tweet can be categorized as "positive", "negative", or "neutral". Given texts and their accompanying labels, a model can be trained to predict the correct sentiment for new text.
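The supervised setup described above can be sketched end-to-end with a tiny multinomial Naive Bayes classifier in pure Python. This is a minimal illustration only: the training examples and function names are invented, and a real system would train a stronger model on a benchmark dataset.

```python
import math
from collections import Counter, defaultdict

def train_nb(examples):
    """Fit a multinomial Naive Bayes model on (text, label) pairs."""
    label_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for word in text.lower().split():
            word_counts[label][word] += 1
            vocab.add(word)
    return label_counts, word_counts, vocab

def predict_nb(model, text):
    """Return the label with the highest log-probability for `text`."""
    label_counts, word_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_logp = None, float("-inf")
    for label, count in label_counts.items():
        logp = math.log(count / total)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score
            logp += math.log((word_counts[label][word] + 1) / denom)
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

# Toy labelled data, invented for illustration
train = [
    ("i loved this movie", "positive"),
    ("great acting and a great story", "positive"),
    ("i hated this boring movie", "negative"),
    ("terrible plot and bad acting", "negative"),
]
model = train_nb(train)
print(predict_nb(model, "a great story"))  # prints: positive
```

The same two-function shape (fit on labelled pairs, score new text) carries over directly to the transformer-based classifiers discussed below; only the model inside changes.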

Sentiment analysis techniques can be categorized into machine learning approaches, lexicon-based approaches, and hybrid methods that combine the two. Subareas of research in sentiment analysis include multimodal sentiment analysis, aspect-based sentiment analysis, fine-grained opinion analysis, and language-specific sentiment analysis.
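A lexicon-based approach, in its simplest form, counts hits against positive and negative word lists and flips polarity after a negation word. The word lists and negation handling below are toy examples for illustration, not a published lexicon such as VADER or SentiWordNet:

```python
# Toy lexicons, invented for illustration
POSITIVE = {"good", "great", "excellent", "love", "enjoyable"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "boring"}
NEGATORS = {"not", "never", "no"}

def lexicon_sentiment(text: str) -> str:
    """Classify text as positive / negative / neutral by counting
    lexicon hits, flipping polarity after a negation word."""
    score = 0
    negate = False
    for token in text.lower().split():
        word = token.strip(".,!?")
        if word in NEGATORS:
            negate = True
            continue
        hit = 0
        if word in POSITIVE:
            hit = 1
        elif word in NEGATIVE:
            hit = -1
        score += -hit if negate else hit
        negate = False  # negation applies only to the immediately following word
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(lexicon_sentiment("not good at all"))  # prints: negative
```

Hybrid methods typically use scores like this one as extra features for a learned classifier rather than as the final decision.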

More recently, deep learning models such as RoBERTa and T5 have been used to train high-performing sentiment classifiers, which are evaluated with metrics such as precision, recall, and F1. Benchmark datasets such as SST, GLUE, and the IMDb movie reviews corpus are used to compare sentiment analysis systems.
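The evaluation metrics mentioned above can be computed directly from gold labels and model predictions. The labels below are a toy example invented for illustration; in practice they would come from a benchmark test set such as SST or IMDb:

```python
def precision_recall_f1(gold, pred, target):
    """Per-class precision, recall, and F1 for the class `target`."""
    tp = sum(1 for g, p in zip(gold, pred) if p == target and g == target)
    fp = sum(1 for g, p in zip(gold, pred) if p == target and g != target)
    fn = sum(1 for g, p in zip(gold, pred) if p != target and g == target)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy gold labels and predictions
gold = ["positive", "negative", "positive", "neutral", "positive"]
pred = ["positive", "positive", "positive", "neutral", "negative"]
p, r, f = precision_recall_f1(gold, pred, "positive")
print(round(p, 3), round(r, 3), round(f, 3))  # prints: 0.667 0.667 0.667
```

The accuracy and error numbers in the benchmark tables below are simpler aggregates (fraction correct, and its complement) over all classes.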

Papers

Showing 4701–4750 of 5630 papers

Every paper listed below has status "Code" and a hype score of 0.

LABR: A Large Scale Arabic Sentiment Analysis Benchmark
Empirical Study of Text Augmentation on Social Media Text in Vietnamese
New Adversarial Image Detection Based on Sentiment Analysis
Boosting Zero-Shot Crosslingual Performance using LLM-Based Augmentations with Effective Data Selection
Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance
A Study of fastText Word Embedding Effects in Document Classification in Bangla Language
A Robust Predictive Model for Stock Price Prediction Using Deep Learning and Natural Language Processing
Language Fusion for Parameter-Efficient Cross-lingual Transfer
Boosting Data Analytics With Synthetic Volume Expansion
Reordering Examples Helps during Priming-based Few-Shot Learning
General Cross-Architecture Distillation of Pretrained Language Models into Matrix Embeddings
Entity-Level Sentiment Analysis (ELSA): An exploratory task survey
Replicability Analysis for Natural Language Processing: Testing Significance with Multiple Datasets
Entity-Level Sentiment: More than the Sum of Its Parts
Statistically Evaluating Social Media Sentiment Trends towards COVID-19 Non-Pharmaceutical Interventions with Event Studies
Language Model Embeddings Improve Sentiment Analysis in Russian
Representation Learning for Text-level Discourse Parsing
NEZHA: Neural Contextualized Representation for Chinese Language Understanding
Representation Mapping: A Novel Approach to Generate High-Quality Multi-Lingual Emotion Lexicons
Controlling the Interaction Between Generation and Inference in Semi-Supervised Variational Autoencoders Using Importance Weighting
Language Representation Models for Fine-Grained Sentiment Classification
Towards Multi-Sense Cross-Lingual Alignment of Contextual Embeddings
EmotionGIF-Yankee: A Sentiment Classifier with Robust Model Based Ensemble Methods
ERNIE-Doc: A Retrospective Long-Document Modeling Transformer
ERNIE: Enhanced Language Representation with Informative Entities
emojiSpace: Spatial Representation of Emojis
When Does Aggregating Multiple Skills with Multi-Task Learning Work? A Case Study in Financial NLP
ASTE Transformer Modelling Dependencies in Aspect-Sentiment Triplet Extraction
Large language model for Bible sentiment analysis: Sermon on the Mount
NILC-USP at SemEval-2017 Task 4: A Multi-view Ensemble for Twitter Sentiment Analysis
Sentiment Analysis of Yelp Reviews: A Comparison of Techniques and Models
An Operator Theoretic Approach for Analyzing Sequence Neural Networks
Contrasting Human- and Machine-Generated Word-Level Adversarial Examples for Text Classification
Emoji Prediction in Tweets using BERT
ETMS@IITKGP at SemEval-2022 Task 10: Structured Sentiment Analysis Using A Generative Approach
Continual Few-Shot Learning for Text Classification
EvaLDA: Efficient Evasion Attacks Towards Latent Dirichlet Allocation
The Devil is in the Details: Evaluating Limitations of Transformer-based Methods for Granular Tasks
An Automated Text Categorization Framework based on Hyperparameter Optimization
Large language models for newspaper sentiment analysis during COVID-19: The Guardian
The Document Vectors Using Cosine Similarity Revisited
Contextual Inter-modal Attention for Multi-modal Sentiment Analysis
Emoji-Powered Representation Learning for Cross-Lingual Sentiment Classification
A Context-free Arabic Emoji Sentiment Lexicon (CF-Arab-ESL)
How Effectively Do LLMs Extract Feature-Sentiment Pairs from App Reviews?
Sentiment Analysis on Financial News Headlines using Training Dataset Augmentation
UDALM: Unsupervised Domain Adaptation through Language Modeling
Evaluating Methods for Extraction of Aspect Terms in Opinion Texts in Portuguese - the Challenges of Implicit Aspects
Contextual Explanation Networks
NL-FIIT at IEST-2018: Emotion Recognition utilizing Neural Networks and Multi-level Preprocessing
Page 95 of 113

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Word+ES (Scratch) | Attack Success Rate | 100 | | Unverified
2 | MT-DNN-SMART | Accuracy | 97.5 | | Unverified
3 | T5-11B | Accuracy | 97.5 | | Unverified
4 | MUPPET Roberta Large | Accuracy | 97.4 | | Unverified
5 | T5-3B | Accuracy | 97.4 | | Unverified
6 | ALBERT | Accuracy | 97.1 | | Unverified
7 | StructBERT RoBERTa ensemble | Accuracy | 97.1 | | Unverified
8 | XLNet (single model) | Accuracy | 97 | | Unverified
9 | SMART RoBERTa | Dev Accuracy | 96.9 | | Unverified
10 | ELECTRA | Accuracy | 96.9 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-large with LlamBERT | Accuracy | 96.68 | | Unverified
2 | RoBERTa-large | Accuracy | 96.54 | | Unverified
3 | XLNet | Accuracy | 96.21 | | Unverified
4 | Heinsen Routing + RoBERTa Large | Accuracy | 96.2 | | Unverified
5 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 96.1 | | Unverified
6 | GraphStar | Accuracy | 96 | | Unverified
7 | DV-ngrams-cosine with NB sub-sampling + RoBERTa.base | Accuracy | 95.94 | | Unverified
8 | DV-ngrams-cosine + RoBERTa.base | Accuracy | 95.92 | | Unverified
9 | Roberta_Large ST + Cosine Similarity Loss | Accuracy | 95.9 | | Unverified
10 | BERT large finetune UDA | Accuracy | 95.8 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Llama-3.3-70B + CAPO | Accuracy | 62.27 | | Unverified
2 | Mistral-Small-24B + CAPO | Accuracy | 60.2 | | Unverified
3 | Heinsen Routing + RoBERTa Large | Accuracy | 59.8 | | Unverified
4 | RoBERTa-large+Self-Explaining | Accuracy | 59.1 | | Unverified
5 | Qwen2.5-32B + CAPO | Accuracy | 59.07 | | Unverified
6 | Heinsen Routing + GPT-2 | Accuracy | 58.5 | | Unverified
7 | BCN+Suffix BiLSTM-Tied+CoVe | Accuracy | 56.2 | | Unverified
8 | BERT Large | Accuracy | 55.5 | | Unverified
9 | LM-CPPF RoBERTa-base | Accuracy | 54.9 | | Unverified
10 | BCN+ELMo | Accuracy | 54.7 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Char-level CNN | Error | 4.88 | | Unverified
2 | SVDCNN | Error | 4.74 | | Unverified
3 | LEAM | Error | 4.69 | | Unverified
4 | fastText, h=10, bigram | Error | 4.3 | | Unverified
5 | SWEM-hier | Error | 4.19 | | Unverified
6 | SRNN | Error | 3.96 | | Unverified
7 | M-ACNN | Error | 3.89 | | Unverified
8 | DNC+CUW | Error | 3.6 | | Unverified
9 | CCCapsNet | Error | 3.52 | | Unverified
10 | Block-sparse LSTM | Error | 3.27 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Millions of Emoji | Training Time | 1,500 | | Unverified
2 | VLAWE | Accuracy | 93.3 | | Unverified
3 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 92.5 | | Unverified
4 | AnglE-LLaMA-7B | Accuracy | 91.09 | | Unverified
5 | byte mLSTM7 | Accuracy | 86.8 | | Unverified
6 | MEAN | Accuracy | 84.5 | | Unverified
7 | RNN-Capsule | Accuracy | 83.8 | | Unverified
8 | Capsule-B | Accuracy | 82.3 | | Unverified
9 | SuBiLSTM-Tied | Accuracy | 81.6 | | Unverified
10 | USE_T+CNN | Accuracy | 81.59 | | Unverified