SOTAVerified

Sentiment Analysis

Sentiment Analysis is the task of classifying the polarity of a given text. For instance, a text-based tweet can be categorized as "positive", "negative", or "neutral". Given texts and accompanying labels, a model can be trained to predict the correct sentiment.
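As a minimal sketch of training such a classifier, the snippet below fits a multinomial Naive Bayes model over bag-of-words counts on a handful of hypothetical labeled tweets (the examples are invented for illustration, not drawn from a real dataset):

```python
from collections import Counter, defaultdict
import math

# Hypothetical labeled tweets, invented for illustration.
train = [
    ("i love this phone", "positive"),
    ("what a great movie", "positive"),
    ("this is terrible", "negative"),
    ("i hate waiting", "negative"),
    ("the package arrived", "neutral"),
    ("meeting at noon", "neutral"),
]

def fit(data):
    """Count word frequencies per class for a multinomial Naive Bayes model."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    for text, label in data:
        class_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, class_counts

def predict(text, word_counts, class_counts):
    """Pick the class with the highest log-probability under add-one smoothing."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for label in class_counts:
        n = sum(word_counts[label].values())
        score = math.log(class_counts[label] / total)  # class prior
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (n + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

wc, cc = fit(train)
print(predict("i love this movie", wc, cc))  # positive
```

Real systems replace the toy corpus with a labeled benchmark dataset and the Naive Bayes model with a neural classifier, but the supervised pipeline (featurize, fit, predict) is the same.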

Sentiment analysis techniques can be broadly categorized into machine learning approaches, lexicon-based approaches, and hybrid methods that combine the two. Active subareas of research include multimodal sentiment analysis, aspect-based sentiment analysis, fine-grained opinion analysis, and language-specific sentiment analysis.
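A lexicon-based approach needs no training data: it looks words up in a polarity lexicon and aggregates their scores. The sketch below uses a hypothetical miniature lexicon with simple negation handling; production systems use resources such as VADER or SentiWordNet with thousands of scored entries:

```python
# A hypothetical miniature polarity lexicon (illustrative only).
LEXICON = {"love": 2, "great": 2, "good": 1, "bad": -1, "hate": -2, "terrible": -2}
NEGATORS = {"not", "never", "no"}

def lexicon_sentiment(text):
    """Sum word polarities, flipping the sign of the next scored word after a negator."""
    score, flip = 0, 1
    for word in text.lower().split():
        if word in NEGATORS:
            flip = -1
        elif word in LEXICON:
            score += flip * LEXICON[word]
            flip = 1  # negation is consumed by the first sentiment-bearing word
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(lexicon_sentiment("not a good movie"))  # negative
```

Hybrid methods typically use such lexicon scores as additional features inside a machine-learned classifier.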

More recently, deep learning models such as RoBERTa and T5 have been used to train high-performing sentiment classifiers, which are evaluated with metrics such as precision, recall, and F1 score. Benchmark datasets such as SST, the IMDb movie review corpus, and the GLUE suite are commonly used to evaluate sentiment analysis systems.
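Precision, recall, and F1 for a given class follow directly from the true-positive, false-positive, and false-negative counts; a minimal per-class implementation:

```python
def prf1(gold, pred, positive="positive"):
    """Precision, recall, and F1 for one class, treated as the positive label."""
    tp = sum(g == positive and p == positive for g, p in zip(gold, pred))
    fp = sum(g != positive and p == positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted positives, how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of true positives, how many were found
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = ["positive", "negative", "positive", "neutral"]
pred = ["positive", "positive", "negative", "neutral"]
print(prf1(gold, pred))  # (0.5, 0.5, 0.5)
```

For multi-class sentiment, these per-class scores are typically averaged (macro or weighted) across the labels.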


Papers

Showing 1001–1050 of 5630 papers (page 21 of 113)

Title | Status | Hype
Aspect-based Sentiment Analysis of Scientific Reviews | Code | 0
Domain-Specific Language Model Post-Training for Indonesian Financial NLP | Code | 0
DragonVerseQA: Open-Domain Long-Form Context-Aware Question-Answering | Code | 0
Bringing replication and reproduction together with generalisability in NLP: Three reproduction studies for Target Dependent Sentiment Analysis | Code | 0
Bug Destiny Prediction in Large Open-Source Software Repositories through Sentiment Analysis and BERT Topic Modeling | Code | 0
Do LLMs Think Fast and Slow? A Causal Study on Sentiment Analysis | Code | 0
Aspect-based Sentiment Analysis in Question Answering Forums | Code | 0
Domain-Adversarial Neural Networks | Code | 0
Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language | Code | 0
On the Use of Modality-Specific Large-Scale Pre-Trained Encoders for Multimodal Sentiment Analysis | Code | 0
Building a Sentiment Corpus of Tweets in Brazilian Portuguese | Code | 0
A Hybrid Approach for Aspect-Based Sentiment Analysis Using Deep Contextual Word Embeddings and Hierarchical Attention | Code | 0
A Benchmark Study of Contrastive Learning for Arabic Social Meaning | Code | 0
Domain Adversarial Fine-Tuning as an Effective Regularizer | Code | 0
Building Large-Scale English and Korean Datasets for Aspect-Level Sentiment Analysis in Automotive Domain | Code | 0
openXBOW - Introducing the Passau Open-Source Crossmodal Bag-of-Words Toolkit | Code | 0
Building Odia Shallow Parser | Code | 0
A Challenge Dataset and Effective Models for Aspect-Based Sentiment Analysis | Code | 0
Opinion Mining Using Pre-Trained Large Language Models: Identifying the Type, Polarity, Intensity, Expression, and Source of Private States | Code | 0
Domain-Expanded ASTE: Rethinking Generalization in Aspect Sentiment Triplet Extraction | Code | 0
Efficient Low-rank Multimodal Fusion with Modality-Specific Factors | Code | 0
Does local pruning offer task-specific models to learn effectively? | Code | 0
Does Transliteration Help Multilingual Language Modeling? | Code | 0
DOER: Dual Cross-Shared RNN for Aspect Term-Polarity Co-Extraction | Code | 0
A Deeper Look into Sarcastic Tweets Using Deep Convolutional Neural Networks | Code | 0
Does It Make Sense to Explain a Black Box With Another Black Box? | Code | 0
Domain Adaptation for Arabic Cross-Domain and Cross-Dialect Sentiment Analysis from Contextualized Word Embedding | Code | 0
Aspect-Based Relational Sentiment Analysis Using a Stacked Neural Network Architecture | Code | 0
Document Embedding with Paragraph Vectors | Code | 0
Dockerface: an Easy to Install and Use Faster R-CNN Face Detector in a Docker Container | Code | 0
Document-level Multi-aspect Sentiment Classification by Jointly Modeling Users, Aspects, and Overall Ratings | Code | 0
Domain Adaptation from Scratch | Code | 0
Diverse Few-Shot Text Classification with Multiple Metrics | Code | 0
Distributionally Robust Classifiers in Sentiment Analysis | Code | 0
Divide (Text) and Conquer (Sentiment): Improved Sentiment Classification by Constituent Conflict Resolution | Code | 0
Distinguishing affixoid formations from compounds | Code | 0
A Deep CNN Architecture with Novel Pooling Layer Applied to Two Sudanese Arabic Sentiment Datasets | Code | 0
Distributed Representations of Sentences and Documents | Code | 0
Diving Deep into Sentiment: Understanding Fine-tuned CNNs for Visual Sentiment Prediction | Code | 0
Distilling neural networks into skipgram-level decision lists | Code | 0
A longitudinal sentiment analysis of Sinophobia during COVID-19 using large language models | Code | 0
Distilling Task-Specific Knowledge from BERT into Simple Neural Networks | Code | 0
Heuristic-enhanced Candidates Selection strategy for GPTs tackle Few-Shot Aspect-Based Sentiment Analysis | Code | 0
Distilling Fine-grained Sentiment Understanding from Large Language Models | Code | 0
Distilling the Knowledge of Romanian BERTs Using Multiple Teachers | Code | 0
Domain Adapted Word Embeddings for Improved Sentiment Classification | Code | 0
Disambiguation of Verbal Shifters | Code | 0
Discovering Highly Influential Shortcut Reasoning: An Automated Template-Free Approach | Code | 0
All-but-the-Top: Simple and Effective Postprocessing for Word Representations | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Word+ES (Scratch) | Attack Success Rate | 100 | | Unverified
2 | MT-DNN-SMART | Accuracy | 97.5 | | Unverified
3 | T5-11B | Accuracy | 97.5 | | Unverified
4 | MUPPET Roberta Large | Accuracy | 97.4 | | Unverified
5 | T5-3B | Accuracy | 97.4 | | Unverified
6 | ALBERT | Accuracy | 97.1 | | Unverified
7 | StructBERTRoBERTa ensemble | Accuracy | 97.1 | | Unverified
8 | XLNet (single model) | Accuracy | 97 | | Unverified
9 | SMARTRoBERTa | Dev Accuracy | 96.9 | | Unverified
10 | ELECTRA | Accuracy | 96.9 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-large with LlamBERT | Accuracy | 96.68 | | Unverified
2 | RoBERTa-large | Accuracy | 96.54 | | Unverified
3 | XLNet | Accuracy | 96.21 | | Unverified
4 | Heinsen Routing + RoBERTa Large | Accuracy | 96.2 | | Unverified
5 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 96.1 | | Unverified
6 | GraphStar | Accuracy | 96 | | Unverified
7 | DV-ngrams-cosine with NB sub-sampling + RoBERTa.base | Accuracy | 95.94 | | Unverified
8 | DV-ngrams-cosine + RoBERTa.base | Accuracy | 95.92 | | Unverified
9 | Roberta_Large ST + Cosine Similarity Loss | Accuracy | 95.9 | | Unverified
10 | BERT large finetune UDA | Accuracy | 95.8 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Llama-3.3-70B + CAPO | Accuracy | 62.27 | | Unverified
2 | Mistral-Small-24B + CAPO | Accuracy | 60.2 | | Unverified
3 | Heinsen Routing + RoBERTa Large | Accuracy | 59.8 | | Unverified
4 | RoBERTa-large+Self-Explaining | Accuracy | 59.1 | | Unverified
5 | Qwen2.5-32B + CAPO | Accuracy | 59.07 | | Unverified
6 | Heinsen Routing + GPT-2 | Accuracy | 58.5 | | Unverified
7 | BCN+Suffix BiLSTM-Tied+CoVe | Accuracy | 56.2 | | Unverified
8 | BERT Large | Accuracy | 55.5 | | Unverified
9 | LM-CPPF RoBERTa-base | Accuracy | 54.9 | | Unverified
10 | BCN+ELMo | Accuracy | 54.7 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Char-level CNN | Error | 4.88 | | Unverified
2 | SVDCNN | Error | 4.74 | | Unverified
3 | LEAM | Error | 4.69 | | Unverified
4 | fastText, h=10, bigram | Error | 4.3 | | Unverified
5 | SWEM-hier | Error | 4.19 | | Unverified
6 | SRNN | Error | 3.96 | | Unverified
7 | M-ACNN | Error | 3.89 | | Unverified
8 | DNC+CUW | Error | 3.6 | | Unverified
9 | CCCapsNet | Error | 3.52 | | Unverified
10 | Block-sparse LSTM | Error | 3.27 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Millions of Emoji | Training Time | 1,500 | | Unverified
2 | VLAWE | Accuracy | 93.3 | | Unverified
3 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 92.5 | | Unverified
4 | AnglE-LLaMA-7B | Accuracy | 91.09 | | Unverified
5 | byte mLSTM7 | Accuracy | 86.8 | | Unverified
6 | MEAN | Accuracy | 84.5 | | Unverified
7 | RNN-Capsule | Accuracy | 83.8 | | Unverified
8 | Capsule-B | Accuracy | 82.3 | | Unverified
9 | SuBiLSTM-Tied | Accuracy | 81.6 | | Unverified
10 | USE_T+CNN | Accuracy | 81.59 | | Unverified