SOTAVerified

Sentiment Analysis

Sentiment analysis is the task of classifying the polarity of a given text. For instance, a text-based tweet can be categorized as "positive", "negative", or "neutral". Given texts and accompanying labels, a model can be trained to predict the correct sentiment.

Sentiment analysis techniques can be categorized into machine learning approaches, lexicon-based approaches, and hybrid methods that combine the two. Subcategories of research in sentiment analysis include multimodal sentiment analysis, aspect-based sentiment analysis, fine-grained opinion analysis, and language-specific sentiment analysis.
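As a minimal illustration of the lexicon-based approach, the sketch below scores a text by counting matches against small hand-made positive and negative word lists. The word lists here are toy assumptions for demonstration, not a real lexicon such as VADER or SentiWordNet:

```python
# Toy lexicon-based sentiment classifier.
# These tiny word lists are illustrative assumptions, not a real
# sentiment lexicon.
POSITIVE = {"good", "great", "excellent", "love", "happy", "wonderful"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "sad", "boring"}

def classify(text: str) -> str:
    """Label text 'positive', 'negative', or 'neutral' by lexicon counts."""
    tokens = [t.strip(".,!?;:") for t in text.lower().split()]
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("what a great and wonderful movie"))   # -> positive
print(classify("this was a terrible, boring film"))   # -> negative
```

Real lexicon-based systems refine this idea with valence scores, negation handling, and intensifiers, but the core mechanism is the same dictionary lookup.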

More recently, deep learning models such as RoBERTa and T5 have been used to train high-performing sentiment classifiers, which are evaluated with metrics such as precision, recall, and F1. Benchmark datasets such as SST, IMDB movie reviews, and the GLUE suite are commonly used to evaluate sentiment analysis systems.
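The precision, recall, and F1 metrics mentioned above can be computed directly from parallel lists of gold and predicted labels. A minimal sketch for a single class of interest follows; the example label lists are made up:

```python
def precision_recall_f1(gold, pred, positive="positive"):
    """Compute precision, recall, and F1 for one class from label lists."""
    tp = sum(g == positive and p == positive for g, p in zip(gold, pred))
    fp = sum(g != positive and p == positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical gold labels and model predictions:
gold = ["positive", "negative", "positive", "positive"]
pred = ["positive", "positive", "negative", "positive"]
p, r, f = precision_recall_f1(gold, pred)
print(p, r, f)  # -> 0.666... 0.666... 0.666...
```

In practice one would use a tested implementation (e.g. scikit-learn's `precision_recall_fscore_support`), but the arithmetic is exactly this counting of true positives, false positives, and false negatives.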

Papers

Showing 4851-4900 of 5630 papers

| Title | Status | Hype |
|-------|--------|------|
| Learning Semantic Sentence Embeddings using Sequential Pair-wise Discriminator | Code | 0 |
| Exploring Conditional Text Generation for Aspect-Based Sentiment Analysis | Code | 0 |
| Exploring Contrast Consistency of Open-Domain Question Answering Systems on Minimally Edited Questions | Code | 0 |
| Learning Semantic Sentence Embeddings using Sequential Pair-wise Discriminator | Code | 0 |
| Aspect-Category-Opinion-Sentiment Extraction Using Generative Transformer Model | Code | 0 |
| RoBERTa-BiLSTM: A Context-Aware Hybrid Model for Sentiment Analysis | Code | 0 |
| Unsupervised Learning of Explainable Parse Trees for Improved Generalisation | Code | 0 |
| Towards Target-dependent Sentiment Classification in News Articles | Code | 0 |
| Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification | Code | 0 |
| Making the Best Use of Review Summary for Sentiment Analysis | Code | 0 |
| Vector of Locally-Aggregated Word Embeddings (VLAWE): A Novel Document-level Representation | Code | 0 |
| UIO at SemEval-2023 Task 12: Multilingual fine-tuning for sentiment classification in low-resource languages | Code | 0 |
| Robust Gram Embeddings | Code | 0 |
| Robustifying Sentiment Classification by Maximally Exploiting Few Counterfactuals | Code | 0 |
| Exploring Online Depression Forums via Text Mining: A Comparison of Reddit and a Curated Online Forum | Code | 0 |
| Word-Level Uncertainty Estimation for Black-Box Text Classifiers using RNNs | Code | 0 |
| Learning the Difference that Makes a Difference with Counterfactually-Augmented Data | Code | 0 |
| One-Teacher and Multiple-Student Knowledge Distillation on Sentiment Classification | Code | 0 |
| Towards Understanding In-Context Learning with Contrastive Demonstrations and Saliency Maps | Code | 0 |
| Adversarial Self-Attention for Language Understanding | Code | 0 |
| Learning from Explanations with Neural Execution Tree | Code | 0 |
| Economy Watchers Survey Provides Datasets and Tasks for Japanese Financial Domain | Code | 0 |
| bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark | Code | 0 |
| Learning to Compose Task-Specific Tree Structures | Code | 0 |
| E2TP: Element to Tuple Prompting Improves Aspect Sentiment Tuple Prediction | Code | 0 |
| Verifying Properties of Tsetlin Machines | Code | 0 |
| Exploring the Relationship Between Algorithm Performance, Vocabulary, and Run-Time in Text Classification | Code | 0 |
| Exploring Tokenization Strategies and Vocabulary Sizes for Enhanced Arabic Language Models | Code | 0 |
| On Guaranteed Optimal Robust Explanations for NLP Models | Code | 0 |
| Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification | Code | 0 |
| Exponential Machines | Code | 0 |
| Dynamic Compositionality in Recursive Neural Networks with Structure-aware Tag Representations | Code | 0 |
| Expressively vulgar: The socio-dynamics of vulgarity and its effects on sentiment analysis in social media | Code | 0 |
| On Identifying Disaster-Related Tweets: Matching-based or Learning-based? | Code | 0 |
| DragonVerseQA: Open-Domain Long-Form Context-Aware Question-Answering | Code | 0 |
| Learning to Distinguish Hypernyms and Co-Hyponyms | Code | 0 |
| Extensible Multi-Granularity Fusion Network for Aspect-based Sentiment Analysis | Code | 0 |
| Constituency Lattice Encoding for Aspect Term Extraction | Code | 0 |
| Aspect-based summarization of pros and cons in unstructured product reviews | Code | 0 |
| Learning to Few-Shot Learn Across Diverse Natural Language Classification Tasks | Code | 0 |
| Confident Learning: Estimating Uncertainty in Dataset Labels | Code | 0 |
| Toward Tag-free Aspect Based Sentiment Analysis: A Multiple Attention Network Approach | Code | 0 |
| Learning to Generate Reviews and Discovering Sentiment | Code | 0 |
| Subjective Logic Encodings | Code | 0 |
| Robust Training under Linguistic Adversity | Code | 0 |
| Learning to Play Chess from Textbooks (LEAP): a Corpus for Evaluating Chess Moves based on Sentiment Analysis | Code | 0 |
| Compressing Word Embeddings via Deep Compositional Code Learning | Code | 0 |
| Robust Unsupervised Domain Adaptation for Neural Networks via Moment Alignment | Code | 0 |
| Comprehensive dataset of user-submitted articles with ideological and extreme bias from Reddit | Code | 0 |
| Double Embeddings and CNN-based Sequence Labeling for Aspect Extraction | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Word+ES (Scratch) | Attack Success Rate | 100 | | Unverified |
| 2 | MT-DNN-SMART | Accuracy | 97.5 | | Unverified |
| 3 | T5-11B | Accuracy | 97.5 | | Unverified |
| 4 | MUPPET Roberta Large | Accuracy | 97.4 | | Unverified |
| 5 | T5-3B | Accuracy | 97.4 | | Unverified |
| 6 | ALBERT | Accuracy | 97.1 | | Unverified |
| 7 | StructBERT RoBERTa ensemble | Accuracy | 97.1 | | Unverified |
| 8 | XLNet (single model) | Accuracy | 97 | | Unverified |
| 9 | SMART-RoBERTa | Dev Accuracy | 96.9 | | Unverified |
| 10 | ELECTRA | Accuracy | 96.9 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | RoBERTa-large with LlamBERT | Accuracy | 96.68 | | Unverified |
| 2 | RoBERTa-large | Accuracy | 96.54 | | Unverified |
| 3 | XLNet | Accuracy | 96.21 | | Unverified |
| 4 | Heinsen Routing + RoBERTa Large | Accuracy | 96.2 | | Unverified |
| 5 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 96.1 | | Unverified |
| 6 | GraphStar | Accuracy | 96 | | Unverified |
| 7 | DV-ngrams-cosine with NB sub-sampling + RoBERTa.base | Accuracy | 95.94 | | Unverified |
| 8 | DV-ngrams-cosine + RoBERTa.base | Accuracy | 95.92 | | Unverified |
| 9 | Roberta_Large ST + Cosine Similarity Loss | Accuracy | 95.9 | | Unverified |
| 10 | BERT large finetune UDA | Accuracy | 95.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Llama-3.3-70B + CAPO | Accuracy | 62.27 | | Unverified |
| 2 | Mistral-Small-24B + CAPO | Accuracy | 60.2 | | Unverified |
| 3 | Heinsen Routing + RoBERTa Large | Accuracy | 59.8 | | Unverified |
| 4 | RoBERTa-large+Self-Explaining | Accuracy | 59.1 | | Unverified |
| 5 | Qwen2.5-32B + CAPO | Accuracy | 59.07 | | Unverified |
| 6 | Heinsen Routing + GPT-2 | Accuracy | 58.5 | | Unverified |
| 7 | BCN+Suffix BiLSTM-Tied+CoVe | Accuracy | 56.2 | | Unverified |
| 8 | BERT Large | Accuracy | 55.5 | | Unverified |
| 9 | LM-CPPF RoBERTa-base | Accuracy | 54.9 | | Unverified |
| 10 | BCN+ELMo | Accuracy | 54.7 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Char-level CNN | Error | 4.88 | | Unverified |
| 2 | SVDCNN | Error | 4.74 | | Unverified |
| 3 | LEAM | Error | 4.69 | | Unverified |
| 4 | fastText, h=10, bigram | Error | 4.3 | | Unverified |
| 5 | SWEM-hier | Error | 4.19 | | Unverified |
| 6 | SRNN | Error | 3.96 | | Unverified |
| 7 | M-ACNN | Error | 3.89 | | Unverified |
| 8 | DNC+CUW | Error | 3.6 | | Unverified |
| 9 | CCCapsNet | Error | 3.52 | | Unverified |
| 10 | Block-sparse LSTM | Error | 3.27 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Millions of Emoji | Training Time | 1,500 | | Unverified |
| 2 | VLAWE | Accuracy | 93.3 | | Unverified |
| 3 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 92.5 | | Unverified |
| 4 | AnglE-LLaMA-7B | Accuracy | 91.09 | | Unverified |
| 5 | byte mLSTM7 | Accuracy | 86.8 | | Unverified |
| 6 | MEAN | Accuracy | 84.5 | | Unverified |
| 7 | RNN-Capsule | Accuracy | 83.8 | | Unverified |
| 8 | Capsule-B | Accuracy | 82.3 | | Unverified |
| 9 | SuBiLSTM-Tied | Accuracy | 81.6 | | Unverified |
| 10 | USE_T+CNN | Accuracy | 81.59 | | Unverified |