SOTAVerified

Sentiment Analysis

Sentiment Analysis is the task of classifying the polarity of a given text. For instance, a text-based tweet can be categorized as "positive", "negative", or "neutral". Given the text and accompanying labels, a model can be trained to predict the correct sentiment.
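
A minimal sketch of this setup in code, assuming the Hugging Face transformers library and a commonly used SST-2 checkpoint (the model name and example tweets are illustrative, not part of this page; this particular checkpoint is binary, so it only outputs positive or negative):

    # Classify the polarity of short texts with a pretrained sentiment model.
    # Assumes the `transformers` library; the checkpoint name is illustrative.
    from transformers import pipeline

    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )

    tweets = [
        "I love the new update, everything feels faster!",
        "This is the worst release so far.",
    ]
    for tweet, result in zip(tweets, classifier(tweets)):
        print(f"{result['label']:>8}  ({result['score']:.2f})  {tweet}")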

Sentiment Analysis techniques can be broadly categorized into machine learning approaches, lexicon-based approaches, and hybrid methods. Subcategories of research in sentiment analysis include multimodal sentiment analysis, aspect-based sentiment analysis, fine-grained opinion analysis, and language-specific sentiment analysis.
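
For contrast with learned models, a toy lexicon-based scorer shows the idea behind that family of approaches; the word lists below are made up for illustration, and real systems use much larger curated lexicons:

    # Toy lexicon-based polarity scorer (illustrative word lists only).
    POSITIVE = {"good", "great", "love", "excellent", "happy"}
    NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

    def lexicon_sentiment(text: str) -> str:
        # Count positive and negative tokens and compare the tallies.
        tokens = [t.strip(".,!?") for t in text.lower().split()]
        score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
        if score > 0:
            return "positive"
        if score < 0:
            return "negative"
        return "neutral"

    print(lexicon_sentiment("I love this great phone"))      # positive
    print(lexicon_sentiment("what an awful, terrible day"))  # negative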

More recently, deep learning models such as RoBERTa and T5 have been used to train high-performing sentiment classifiers, which are evaluated with metrics such as precision, recall, and F1 on benchmark datasets like SST, GLUE, and the IMDB movie reviews corpus.
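
A short sketch of the evaluation step, assuming scikit-learn and toy gold/predicted label lists (both illustrative):

    # Score predicted sentiment labels against gold labels with precision, recall, and F1.
    from sklearn.metrics import precision_recall_fscore_support

    gold = ["positive", "negative", "neutral", "positive", "negative"]
    pred = ["positive", "negative", "positive", "positive", "negative"]

    precision, recall, f1, _ = precision_recall_fscore_support(
        gold, pred, average="macro", zero_division=0
    )
    print(f"precision={precision:.2f}  recall={recall:.2f}  F1={f1:.2f}")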

Papers

Showing 451–500 of 5630 papers

Title | Status | Hype
Learning to Encode Position for Transformer with Continuous Dynamical Model | Code | 1
AraBERT: Transformer-based Model for Arabic Language Understanding | Code | 1
Investigating Typed Syntactic Dependencies for Targeted Sentiment Classification Using Graph Attention Neural Network | Code | 1
KryptoOracle: A Real-Time Cryptocurrency Price Prediction Platform Using Twitter Sentiments | Code | 1
Multilogue-Net: A Context Aware RNN for Multi-modal Emotion Detection and Sentiment Analysis in Conversation | Code | 1
Robustness Verification for Transformers | Code | 1
Utilizing BERT Intermediate Layers for Aspect Based Sentiment Analysis and Natural Language Inference | Code | 1
Improving Domain-Adapted Sentiment Classification by Deep Adversarial Mutual Learning | Code | 1
Adversarial Training for Aspect-Based Sentiment Analysis with BERT | Code | 1
RobBERT: a Dutch RoBERTa-based Language Model | Code | 1
Predictive analysis of Bitcoin price considering social sentiments | Code | 1
Latent Opinions Transfer Network for Target-Oriented Opinion Words Extraction | Code | 1
T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack | Code | 1
BERTje: A Dutch BERT Model | Code | 1
SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | Code | 1
Knowing What, How and Why: A Near Complete Solution for Aspect-based Sentiment Analysis | Code | 1
ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram Representations | Code | 1
Q8BERT: Quantized 8Bit BERT | Code | 1
Soft-Label Dataset Distillation and Text Dataset Distillation | Code | 1
Exploiting BERT for End-to-End Aspect-based Sentiment Analysis | Code | 1
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter | Code | 1
Learning Latent Parameters without Human Response Patterns: Item Response Theory with Artificial Crowds | Code | 1
RoBERTa: A Robustly Optimized BERT Pretraining Approach | Code | 1
Pars-ABSA: an Aspect-based Sentiment Analysis dataset for Persian | Code | 1
XLNet: Generalized Autoregressive Pretraining for Language Understanding | Code | 1
An Interactive Multi-Task Learning Network for End-to-End Aspect-Based Sentiment Analysis | Code | 1
Progressive Self-Supervised Attention Learning for Aspect-Level Sentiment Analysis | Code | 1
ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets | Code | 1
How to Fine-Tune BERT for Text Classification? | Code | 1
Unsupervised Data Augmentation for Consistency Training | Code | 1
DocBERT: BERT for Document Classification | Code | 1
Induction Networks for Few-Shot Text Classification | Code | 1
Simplifying Graph Convolutional Networks | Code | 1
Language Models are Unsupervised Multitask Learners | Code | 1
Learning to Remember More with Less Memorization | Code | 1
LSICC: A Large Scale Informal Chinese Corpus | Code | 1
A Unified Model for Opinion Target Extraction and Target Sentiment Prediction | Code | 1
Stronger Data Poisoning Attacks Break Data Sanitization Defenses | Code | 1
Graph Convolutional Networks for Text Classification | Code | 1
Comparative Studies of Detecting Abusive Language on Twitter | Code | 1
The Natural Language Decathlon: Multitask Learning as Question Answering | Code | 1
Cold-Start Aware User and Product Attention for Sentiment Classification | Code | 1
Universal Sentence Encoder | Code | 1
Deep contextualized word representations | Code | 1
Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs | Code | 1
Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers | Code | 1
Review highlights: opinion mining on reviews: a hybrid model for rule selection in aspect extraction | Code | 1
Bayesian Sparsification of Recurrent Neural Networks | Code | 1
Dual Rectified Linear Units (DReLUs): A Replacement for Tanh Activation Functions in Quasi-Recurrent Neural Networks | Code | 1
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Word+ES (Scratch) | Attack Success Rate | 100 | | Unverified
2 | MT-DNN-SMART | Accuracy | 97.5 | | Unverified
3 | T5-11B | Accuracy | 97.5 | | Unverified
4 | MUPPET Roberta Large | Accuracy | 97.4 | | Unverified
5 | T5-3B | Accuracy | 97.4 | | Unverified
6 | ALBERT | Accuracy | 97.1 | | Unverified
7 | StructBERTRoBERTa ensemble | Accuracy | 97.1 | | Unverified
8 | XLNet (single model) | Accuracy | 97 | | Unverified
9 | SMARTRoBERTa | Dev Accuracy | 96.9 | | Unverified
10 | ELECTRA | Accuracy | 96.9 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RoBERTa-large with LlamBERT | Accuracy | 96.68 | | Unverified
2 | RoBERTa-large | Accuracy | 96.54 | | Unverified
3 | XLNet | Accuracy | 96.21 | | Unverified
4 | Heinsen Routing + RoBERTa Large | Accuracy | 96.2 | | Unverified
5 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 96.1 | | Unverified
6 | GraphStar | Accuracy | 96 | | Unverified
7 | DV-ngrams-cosine with NB sub-sampling + RoBERTa.base | Accuracy | 95.94 | | Unverified
8 | DV-ngrams-cosine + RoBERTa.base | Accuracy | 95.92 | | Unverified
9 | Roberta_Large ST + Cosine Similarity Loss | Accuracy | 95.9 | | Unverified
10 | BERT large finetune UDA | Accuracy | 95.8 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Llama-3.3-70B + CAPO | Accuracy | 62.27 | | Unverified
2 | Mistral-Small-24B + CAPO | Accuracy | 60.2 | | Unverified
3 | Heinsen Routing + RoBERTa Large | Accuracy | 59.8 | | Unverified
4 | RoBERTa-large+Self-Explaining | Accuracy | 59.1 | | Unverified
5 | Qwen2.5-32B + CAPO | Accuracy | 59.07 | | Unverified
6 | Heinsen Routing + GPT-2 | Accuracy | 58.5 | | Unverified
7 | BCN+Suffix BiLSTM-Tied+CoVe | Accuracy | 56.2 | | Unverified
8 | BERT Large | Accuracy | 55.5 | | Unverified
9 | LM-CPPF RoBERTa-base | Accuracy | 54.9 | | Unverified
10 | BCN+ELMo | Accuracy | 54.7 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Char-level CNN | Error | 4.88 | | Unverified
2 | SVDCNN | Error | 4.74 | | Unverified
3 | LEAM | Error | 4.69 | | Unverified
4 | fastText, h=10, bigram | Error | 4.3 | | Unverified
5 | SWEM-hier | Error | 4.19 | | Unverified
6 | SRNN | Error | 3.96 | | Unverified
7 | M-ACNN | Error | 3.89 | | Unverified
8 | DNC+CUW | Error | 3.6 | | Unverified
9 | CCCapsNet | Error | 3.52 | | Unverified
10 | Block-sparse LSTM | Error | 3.27 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Millions of Emoji | Training Time | 1,500 | | Unverified
2 | VLAWE | Accuracy | 93.3 | | Unverified
3 | RoBERTa-large 355M + Entailment as Few-shot Learner | Accuracy | 92.5 | | Unverified
4 | AnglE-LLaMA-7B | Accuracy | 91.09 | | Unverified
5 | byte mLSTM7 | Accuracy | 86.8 | | Unverified
6 | MEAN | Accuracy | 84.5 | | Unverified
7 | RNN-Capsule | Accuracy | 83.8 | | Unverified
8 | Capsule-B | Accuracy | 82.3 | | Unverified
9 | SuBiLSTM-Tied | Accuracy | 81.6 | | Unverified
10 | USE_T+CNN | Accuracy | 81.59 | | Unverified